Mirror of https://github.com/open-webui/open-webui.git (synced 2025-12-12 20:35:19 +00:00)

Merge branch 'dev' into dev

Commit de94a84af5

163 changed files with 9866 additions and 5988 deletions
CHANGELOG.md
@@ -5,6 +5,106 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.36] - 2025-11-07
### Added
- 🔐 OAuth group parsing now supports configurable separators via the "OAUTH_GROUPS_SEPARATOR" environment variable, enabling proper handling of semicolon-separated group claims from providers like CILogon. [#18987](https://github.com/open-webui/open-webui/pull/18987), [#18979](https://github.com/open-webui/open-webui/issues/18979)
### Fixed
- 🛠️ Tool calling functionality is restored by correcting asynchronous function handling in tool parameter updates. [#18981](https://github.com/open-webui/open-webui/issues/18981)
- 🖼️ The ComfyUI image edit workflow editor modal now opens correctly when clicking the Edit button. [#18978](https://github.com/open-webui/open-webui/issues/18978)
- 🔥 Firecrawl import errors are resolved by implementing lazy loading and using the correct class name. [#18973](https://github.com/open-webui/open-webui/issues/18973)
- 🔌 Socket.IO CORS warning is resolved by properly configuring CORS origins for Socket.IO connections. [Commit](https://github.com/open-webui/open-webui/commit/639d26252e528c9c37a5f553b11eb94376d8792d)
## [0.6.35] - 2025-11-06
### Added
- 🖼️ The image generation system received a comprehensive overhaul with major new capabilities: full image editing support that lets users modify existing images with text prompts using the OpenAI, Gemini, or ComfyUI engines; Gemini 2.5 Flash Image (Nano Banana) support; Qwen Image Edit integration; resolution of base64-encoded image display issues; streamlined AUTOMATIC1111 configuration that consolidates parameters into a flexible JSON parameters field; and an enhanced UI with a code editor modal for ComfyUI workflow management. [#17434](https://github.com/open-webui/open-webui/pull/17434), [#16976](https://github.com/open-webui/open-webui/issues/16976), [Commit](https://github.com/open-webui/open-webui/commit/8e5690aab4f632a57027e2acf880b8f89a8717c0), [Commit](https://github.com/open-webui/open-webui/commit/72f8539fd2e679fec0762945f22f4b8a6920afa0), [Commit](https://github.com/open-webui/open-webui/commit/8d34fcb586eeee1fac6da2f991518b8a68b00b72), [Commit](https://github.com/open-webui/open-webui/commit/72900cd686de1fa6be84b5a8a2fc857cff7b91b8)
- 🔒 CORS origin validation was added to WebSocket connections as a defense-in-depth security measure against cross-site WebSocket hijacking attacks. [#18411](https://github.com/open-webui/open-webui/pull/18411), [#18410](https://github.com/open-webui/open-webui/issues/18410)
- 🔄 Automatic page refresh now occurs when a version update is detected via WebSocket connection, ensuring users always run the latest version without cache issues. [Commit](https://github.com/open-webui/open-webui/commit/989f192c92d2fe55daa31336e7971e21798b96ae)
- 🐍 Experimental initial preparations for Python 3.13 compatibility by updating dependencies with security enhancements and cryptographic improvements. [#18430](https://github.com/open-webui/open-webui/pull/18430), [#18424](https://github.com/open-webui/open-webui/pull/18424)
- ⚡ Image compression now preserves the original image format instead of converting to PNG, significantly reducing file sizes and improving chat loading performance. [#18506](https://github.com/open-webui/open-webui/pull/18506)
- 🎤 Mistral Voxtral model support was added for speech-to-text, including voxtral-small and voxtral-mini models with both transcription and chat completion API support. [#18934](https://github.com/open-webui/open-webui/pull/18934)
- 🔊 Text-to-speech now uses a global audio queue system to prevent overlapping playback, ensuring only one TTS instance plays at a time with proper stop/start controls and automatic cleanup when switching between messages. [#16152](https://github.com/open-webui/open-webui/pull/16152), [#18744](https://github.com/open-webui/open-webui/pull/18744), [#16150](https://github.com/open-webui/open-webui/issues/16150)
- 🔊 ELEVENLABS_API_BASE_URL environment variable now allows configuration of custom ElevenLabs API endpoints, enabling support for EU residency API requirements. [#18402](https://github.com/open-webui/open-webui/issues/18402)
- 🔐 OAUTH_ROLES_SEPARATOR environment variable now allows custom role separators for OAuth roles that contain commas, useful for roles specified in LDAP syntax. [#18572](https://github.com/open-webui/open-webui/pull/18572)
- 📄 External document loaders can now optionally forward user information headers when ENABLE_FORWARD_USER_INFO_HEADERS is enabled, enabling cost tracking, audit logs, and usage analytics for external services. [#18731](https://github.com/open-webui/open-webui/pull/18731)
- 📄 MISTRAL_OCR_API_BASE_URL environment variable now allows configuration of custom Mistral OCR API endpoints for flexible deployment options. [Commit](https://github.com/open-webui/open-webui/commit/415b93c7c35c2e2db4425e6da1b88b3750f496b0)
- ⌨️ Keyboard shortcut hints are now displayed on sidebar buttons with a refactored shortcuts modal that accurately reflects all available hotkeys across different keyboard layouts. [#18473](https://github.com/open-webui/open-webui/pull/18473)
- 🛠️ Tooltips now display tool descriptions when hovering over tool names on the model edit page, improving usability and providing immediate context. [#18707](https://github.com/open-webui/open-webui/pull/18707)
- 📝 "Create a new note" from the search modal now immediately creates a new private note and opens it in the editor instead of navigating to the generic notes page. [#18255](https://github.com/open-webui/open-webui/pull/18255)
- 🖨️ Code block output now preserves whitespace formatting with monospace font to accurately reflect terminal behavior. [#18352](https://github.com/open-webui/open-webui/pull/18352)
- ✏️ Edit button is now available in the three-dot menu of models in the workspace section for quick access to model editing, with the menu reorganized for better user experience and Edit, Clone, Copy Link, and Share options logically grouped. [#18574](https://github.com/open-webui/open-webui/pull/18574)
- 📌 Sidebar models section is now collapsible, allowing users to expand and collapse the pinned models list for better sidebar organization. [Commit](https://github.com/open-webui/open-webui/commit/82c08a3b5d189f81c96b6548cc872198771015b0)
- 🌙 Dark mode styles for select elements were added using Tailwind CSS classes, improving consistency across the interface. [#18636](https://github.com/open-webui/open-webui/pull/18636)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Greek, German, Traditional Chinese, Simplified Chinese, Spanish, Georgian, Danish, and Estonian were enhanced and expanded.
### Fixed
- 🔒 Server-Sent Event (SSE) code injection vulnerability in Direct Connections is resolved by blocking event emission from untrusted external model servers; event emitters from direct connected model servers are no longer supported, preventing arbitrary JavaScript execution in user browsers. [Commit](https://github.com/open-webui/open-webui/commit/8af6a4cf21b756a66cd58378a01c60f74c39b7ca)
- 🛡️ DOM XSS vulnerability in "Insert Prompt as Rich Text" is resolved by sanitizing HTML content with DOMPurify before rendering. [Commit](https://github.com/open-webui/open-webui/commit/eb9c4c0e358c274aea35f21c2856c0a20051e5f1)
- ⚙️ MCP server cancellation scope corruption is prevented by reversing disconnection order to follow LIFO and properly handling exceptions, resolving 100% CPU usage when resuming chats with expired tokens or using multiple streamable MCP servers. [#18537](https://github.com/open-webui/open-webui/pull/18537)
- 🔧 UI freeze when querying models with knowledge bases containing inconsistent distance metrics is resolved by properly initializing the distances array in citations. [#18585](https://github.com/open-webui/open-webui/pull/18585)
- 🤖 Duplicate model IDs from multiple OpenAI endpoints are now automatically deduplicated server-side, preventing frontend crashes for users with unified gateway proxies that aggregate multiple providers. [Commit](https://github.com/open-webui/open-webui/commit/fdf7ca11d4f3cc8fe63e81c98dc0d1e48e52ba36)
- 🔐 Login failures with passwords longer than 72 bytes are resolved by safely truncating oversized passwords for bcrypt compatibility. [#18157](https://github.com/open-webui/open-webui/issues/18157)
- 🔐 OAuth 2.1 MCP tool connections now automatically re-register clients when stored client IDs become stale, preventing unauthorized_client errors after editing tool endpoints and providing detailed error messages for callback failures. [#18415](https://github.com/open-webui/open-webui/pull/18415), [#18309](https://github.com/open-webui/open-webui/issues/18309)
- 🔓 OAuth 2.1 discovery, metadata fetching, and dynamic client registration now correctly use HTTP proxy environment variables when trust_env is enabled. [Commit](https://github.com/open-webui/open-webui/commit/bafeb76c411483bd6b135f0edbcdce048120f264)
- 🔌 MCP server connection failures now display clear error messages in the chat interface instead of silently failing. [#18892](https://github.com/open-webui/open-webui/pull/18892), [#18889](https://github.com/open-webui/open-webui/issues/18889)
- 💬 Chat titles are now properly generated even when title auto-generation is disabled in interface settings, fixing an issue where chats would remain labeled as "New chat". [#18761](https://github.com/open-webui/open-webui/pull/18761), [#18717](https://github.com/open-webui/open-webui/issues/18717), [#6478](https://github.com/open-webui/open-webui/issues/6478)
- 🔍 Chat query errors are prevented by properly validating and handling the "order_by" parameter to ensure requested columns exist. [#18400](https://github.com/open-webui/open-webui/pull/18400), [#18452](https://github.com/open-webui/open-webui/pull/18452)
- 🔧 Root-level max_tokens parameter is no longer dropped when proxying to Ollama, properly converting to num_predict to limit output token length as intended. [#18618](https://github.com/open-webui/open-webui/issues/18618)
- 🔑 Self-hosted Marker instances can now be used without requiring an API key, while keeping it optional for datalab Marker service users. [#18617](https://github.com/open-webui/open-webui/issues/18617)
- 🔧 OpenAPI specification endpoint conflict between "/api/v1/models" and "/api/v1/models/" is resolved by changing the models router endpoint to "/list", preventing duplicate operationId errors when generating TypeScript API clients. [#18758](https://github.com/open-webui/open-webui/issues/18758)
- 🏷️ Model tags are now de-duplicated case-insensitively in both the model selector and workspace models page, preventing duplicate entries with different capitalization from appearing in filter dropdowns. [#18716](https://github.com/open-webui/open-webui/pull/18716), [#18711](https://github.com/open-webui/open-webui/issues/18711)
- 📄 Docling RAG parameter configuration is now correctly saved in the admin UI by fixing the typo in the "DOCLING_PARAMS" parameter name. [#18390](https://github.com/open-webui/open-webui/pull/18390)
- 📃 Tika document processing now automatically detects content types instead of relying on potentially incorrect browser-provided mime-types, improving file handling accuracy for formats like RTF. [#18765](https://github.com/open-webui/open-webui/pull/18765), [#18683](https://github.com/open-webui/open-webui/issues/18683)
- 🖼️ Image and video uploads to knowledge bases now display proper error messages instead of showing an infinite spinner when the content extraction engine does not support these file types. [#18514](https://github.com/open-webui/open-webui/issues/18514)
- 📝 Notes PDF export now properly detects and applies dark mode styling consistently across both the notes list and individual note pages, with a shared utility function to eliminate code duplication. [#18526](https://github.com/open-webui/open-webui/issues/18526)
- 💭 Details tags for reasoning content are now correctly identified and rendered even when the same tag is present in user messages. [#18840](https://github.com/open-webui/open-webui/pull/18840), [#18294](https://github.com/open-webui/open-webui/issues/18294)
- 📊 Mermaid and Vega rendering errors now display inline with the code instead of showing repetitive toast notifications, improving user experience when models generate invalid diagram syntax. [Commit](https://github.com/open-webui/open-webui/commit/fdc0f04a8b7dd0bc9f9dc0e7e30854f7a0eea3e9)
- 📈 Mermaid diagram rendering errors no longer cause UI unavailability or display error messages below the input box. [#18493](https://github.com/open-webui/open-webui/pull/18493), [#18340](https://github.com/open-webui/open-webui/issues/18340)
- 🔗 Web search SSL verification is now asynchronous, preventing the website from hanging during web search operations. [#18714](https://github.com/open-webui/open-webui/pull/18714), [#18699](https://github.com/open-webui/open-webui/issues/18699)
- 🌍 Web search results now correctly use HTTP proxy environment variables when WEB_SEARCH_TRUST_ENV is enabled. [#18667](https://github.com/open-webui/open-webui/pull/18667), [#7008](https://github.com/open-webui/open-webui/discussions/7008)
- 🔍 Google Programmable Search Engine now properly includes referer headers, enabling API keys with HTTP referrer restrictions configured in Google Cloud Console. [#18871](https://github.com/open-webui/open-webui/pull/18871), [#18870](https://github.com/open-webui/open-webui/issues/18870)
- ⚡ YouTube video transcript fetching now works correctly when using a proxy connection. [#18419](https://github.com/open-webui/open-webui/pull/18419)
- 🎙️ Speech-to-text transcription no longer deletes or replaces existing text in the prompt input field, properly preserving any previously entered content. [#18540](https://github.com/open-webui/open-webui/issues/18540)
- 🎙️ The "Instant Auto-Send After Voice Transcription" setting now functions correctly and automatically sends transcribed text when enabled. [#18466](https://github.com/open-webui/open-webui/issues/18466)
- ⚙️ Chat settings now load properly when reopening a tab or starting a new session by initializing defaults when sessionStorage is empty. [#18438](https://github.com/open-webui/open-webui/pull/18438)
- 🔎 Folder tag search in the sidebar now correctly handles folder names with multiple spaces by replacing all spaces with underscores. [Commit](https://github.com/open-webui/open-webui/commit/a8fe979af68e47e4e4bb3eb76e48d93d60cd2a45)
- 🛠️ Functions page now updates immediately after deleting a function, removing the need for a manual page reload. [#18912](https://github.com/open-webui/open-webui/pull/18912), [#18908](https://github.com/open-webui/open-webui/issues/18908)
- 🛠️ Native tool calling now properly supports sequential tool calls with shared context, allowing tools to access images and data from previous tool executions in the same conversation. [#18664](https://github.com/open-webui/open-webui/pull/18664)
- 🎯 Globally enabled actions in the model editor now correctly apply as global instead of being treated as disabled. [#18577](https://github.com/open-webui/open-webui/pull/18577)
- 📋 Clipboard images pasted via the "{{CLIPBOARD}}" prompt variable are now correctly converted to base64 format before being sent to the backend, resolving base64 encoding errors. [#18432](https://github.com/open-webui/open-webui/pull/18432), [#18425](https://github.com/open-webui/open-webui/issues/18425)
- 📋 File list is now cleared when switching to models that do not support file uploads, preventing files from being sent to incompatible models. [#18496](https://github.com/open-webui/open-webui/pull/18496)
- 📂 Move menu no longer displays when folders are empty. [#18484](https://github.com/open-webui/open-webui/pull/18484)
- 📁 Folder and channel creation now validates that names are not empty, preventing creation of folders or channels with no name and showing an error toast if attempted. [#18564](https://github.com/open-webui/open-webui/pull/18564)
- 🖊️ Rich text input no longer removes text between equals signs when pasting code with comparison operators. [#18551](https://github.com/open-webui/open-webui/issues/18551)
- ⌨️ Keyboard shortcuts now display the correct keys for international and non-QWERTY keyboard layouts by detecting the user's layout using the Keyboard API. [#18533](https://github.com/open-webui/open-webui/pull/18533)
- 🌐 "Attach Webpage" button now displays with correct disabled styling when a model does not support file uploads. [#18483](https://github.com/open-webui/open-webui/pull/18483)
- 🎚️ Divider no longer displays in the integrations menu when no integrations are enabled. [#18487](https://github.com/open-webui/open-webui/pull/18487)
- 📱 Chat controls button is now properly hidden on mobile for users without admin or explicit chat control permissions. [#18641](https://github.com/open-webui/open-webui/pull/18641)
- 📍 User menu, download submenu, and move submenu are now repositioned to prevent overlap with the Chat Controls sidebar when it is open. [Commit](https://github.com/open-webui/open-webui/commit/414ab51cb6df1ab0d6c85ac6c1f2c5c9a5f8e2aa)
- 🎯 Artifacts button no longer appears in the chat menu when there are no artifacts to display. [Commit](https://github.com/open-webui/open-webui/commit/ed6449d35f84f68dc75ee5c6b3f4748a3fda0096)
- 🎨 Artifacts view now automatically displays when opening an existing conversation containing artifacts, improving user experience. [#18215](https://github.com/open-webui/open-webui/pull/18215)
- 🖌️ Formatting toolbar is no longer hidden under images or code blocks in chat and now displays correctly above all message content.
- 🎨 Layout shift near system instructions is prevented by properly rendering the chat component when system prompts are empty. [#18594](https://github.com/open-webui/open-webui/pull/18594)
- 📐 Modal layout shift caused by scrollbar appearance is prevented by adding a stable scrollbar gutter. [#18591](https://github.com/open-webui/open-webui/pull/18591)
- ✨ Spacing between icon and label in the user menu dropdown items is now consistent. [#18595](https://github.com/open-webui/open-webui/pull/18595)
- 💬 Duplicate prompt suggestions no longer cause the webpage to freeze or throw JavaScript errors by implementing proper key management with composite keys. [#18841](https://github.com/open-webui/open-webui/pull/18841), [#18566](https://github.com/open-webui/open-webui/issues/18566)
- 🔍 Chat preview loading in the search modal now works correctly for all search results by fixing an index boundary check that previously caused out-of-bounds errors. [#18911](https://github.com/open-webui/open-webui/pull/18911)
- ♿ Screen reader support was enhanced by wrapping messages in semantic elements with descriptive aria-labels, adding "Assistant is typing" and "Response complete" announcements for improved accessibility. [#18735](https://github.com/open-webui/open-webui/pull/18735)
- 🔒 Incorrect await call in the OAuth 2.1 flow is removed, eliminating a logged exception during authentication. [#18236](https://github.com/open-webui/open-webui/pull/18236)
- 🛡️ Duplicate crossorigin attribute in the manifest file was removed. [#18413](https://github.com/open-webui/open-webui/pull/18413)
### Changed
- 🔄 Firecrawl integration was refactored to use the official Firecrawl SDK instead of direct HTTP requests and langchain_community FireCrawlLoader, improving reliability and performance with batch scraping support and enhanced error handling. [#18635](https://github.com/open-webui/open-webui/pull/18635)
- 📄 MinerU content extraction engine now only supports PDF files following the upstream removal of LibreOffice document conversion in version 2.0.0; users needing to process office documents should convert them to PDF format first. [#18448](https://github.com/open-webui/open-webui/issues/18448)
## [0.6.34] - 2025-10-16
### Added

@@ -570,6 +570,8 @@ OAUTH_BLOCKED_GROUPS = PersistentConfig(
    os.environ.get("OAUTH_BLOCKED_GROUPS", "[]"),
)

OAUTH_GROUPS_SEPARATOR = os.environ.get("OAUTH_GROUPS_SEPARATOR", ";")

OAUTH_ROLES_CLAIM = PersistentConfig(
    "OAUTH_ROLES_CLAIM",
    "oauth.roles_claim",
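
For context, a minimal sketch (not code from this commit) of how a configurable separator such as OAUTH_GROUPS_SEPARATOR can be applied to a provider's group claim; the claim value below is invented:

```python
import os

# Read the separator the same way the config above does; the ";" default handles
# providers such as CILogon that return semicolon-separated group claims.
OAUTH_GROUPS_SEPARATOR = os.environ.get("OAUTH_GROUPS_SEPARATOR", ";")

claim_value = "admins;editors;viewers"  # hypothetical group claim from the identity provider
groups = [g.strip() for g in claim_value.split(OAUTH_GROUPS_SEPARATOR) if g.strip()]
print(groups)  # ['admins', 'editors', 'viewers']
```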
@ -1122,6 +1124,10 @@ ENABLE_LOGIN_FORM = PersistentConfig(
|
|||
os.environ.get("ENABLE_LOGIN_FORM", "True").lower() == "true",
|
||||
)
|
||||
|
||||
ENABLE_PASSWORD_AUTH = (
|
||||
os.environ.get("ENABLE_PASSWORD_AUTH", "True").lower()
|
||||
== "true"
|
||||
)
|
||||
|
||||
DEFAULT_LOCALE = PersistentConfig(
|
||||
"DEFAULT_LOCALE",
|
||||
|
|
@ -2464,6 +2470,12 @@ DOCUMENT_INTELLIGENCE_KEY = PersistentConfig(
|
|||
os.getenv("DOCUMENT_INTELLIGENCE_KEY", ""),
|
||||
)
|
||||
|
||||
MISTRAL_OCR_API_BASE_URL = PersistentConfig(
|
||||
"MISTRAL_OCR_API_BASE_URL",
|
||||
"rag.MISTRAL_OCR_API_BASE_URL",
|
||||
os.getenv("MISTRAL_OCR_API_BASE_URL", "https://api.mistral.ai/v1"),
|
||||
)
|
||||
|
||||
MISTRAL_OCR_API_KEY = PersistentConfig(
|
||||
"MISTRAL_OCR_API_KEY",
|
||||
"rag.mistral_ocr_api_key",
|
||||
|
|
@ -2689,10 +2701,6 @@ Provide a clear and direct response to the user's query, including inline citati
|
|||
<context>
|
||||
{{CONTEXT}}
|
||||
</context>
|
||||
|
||||
<user_query>
|
||||
{{QUERY}}
|
||||
</user_query>
|
||||
"""
|
||||
|
||||
RAG_TEMPLATE = PersistentConfig(
|
||||
|
|
@ -3074,16 +3082,30 @@ EXTERNAL_WEB_LOADER_API_KEY = PersistentConfig(
|
|||
# Images
|
||||
####################################
|
||||
|
||||
ENABLE_IMAGE_GENERATION = PersistentConfig(
|
||||
"ENABLE_IMAGE_GENERATION",
|
||||
"image_generation.enable",
|
||||
os.environ.get("ENABLE_IMAGE_GENERATION", "").lower() == "true",
|
||||
)
|
||||
|
||||
IMAGE_GENERATION_ENGINE = PersistentConfig(
|
||||
"IMAGE_GENERATION_ENGINE",
|
||||
"image_generation.engine",
|
||||
os.getenv("IMAGE_GENERATION_ENGINE", "openai"),
|
||||
)
|
||||
|
||||
ENABLE_IMAGE_GENERATION = PersistentConfig(
|
||||
"ENABLE_IMAGE_GENERATION",
|
||||
"image_generation.enable",
|
||||
os.environ.get("ENABLE_IMAGE_GENERATION", "").lower() == "true",
|
||||
IMAGE_GENERATION_MODEL = PersistentConfig(
|
||||
"IMAGE_GENERATION_MODEL",
|
||||
"image_generation.model",
|
||||
os.getenv("IMAGE_GENERATION_MODEL", ""),
|
||||
)
|
||||
|
||||
IMAGE_SIZE = PersistentConfig(
|
||||
"IMAGE_SIZE", "image_generation.size", os.getenv("IMAGE_SIZE", "512x512")
|
||||
)
|
||||
|
||||
IMAGE_STEPS = PersistentConfig(
|
||||
"IMAGE_STEPS", "image_generation.steps", int(os.getenv("IMAGE_STEPS", 50))
|
||||
)
|
||||
|
||||
ENABLE_IMAGE_PROMPT_GENERATION = PersistentConfig(
|
||||
|
|
@ -3103,35 +3125,17 @@ AUTOMATIC1111_API_AUTH = PersistentConfig(
|
|||
os.getenv("AUTOMATIC1111_API_AUTH", ""),
|
||||
)
|
||||
|
||||
AUTOMATIC1111_CFG_SCALE = PersistentConfig(
|
||||
"AUTOMATIC1111_CFG_SCALE",
|
||||
"image_generation.automatic1111.cfg_scale",
|
||||
(
|
||||
float(os.environ.get("AUTOMATIC1111_CFG_SCALE"))
|
||||
if os.environ.get("AUTOMATIC1111_CFG_SCALE")
|
||||
else None
|
||||
),
|
||||
)
|
||||
automatic1111_params = os.getenv("AUTOMATIC1111_PARAMS", "")
|
||||
try:
|
||||
automatic1111_params = json.loads(automatic1111_params)
|
||||
except json.JSONDecodeError:
|
||||
automatic1111_params = {}
|
||||
|
||||
|
||||
AUTOMATIC1111_SAMPLER = PersistentConfig(
|
||||
"AUTOMATIC1111_SAMPLER",
|
||||
"image_generation.automatic1111.sampler",
|
||||
(
|
||||
os.environ.get("AUTOMATIC1111_SAMPLER")
|
||||
if os.environ.get("AUTOMATIC1111_SAMPLER")
|
||||
else None
|
||||
),
|
||||
)
|
||||
|
||||
AUTOMATIC1111_SCHEDULER = PersistentConfig(
|
||||
"AUTOMATIC1111_SCHEDULER",
|
||||
"image_generation.automatic1111.scheduler",
|
||||
(
|
||||
os.environ.get("AUTOMATIC1111_SCHEDULER")
|
||||
if os.environ.get("AUTOMATIC1111_SCHEDULER")
|
||||
else None
|
||||
),
|
||||
AUTOMATIC1111_PARAMS = PersistentConfig(
|
||||
"AUTOMATIC1111_PARAMS",
|
||||
"image_generation.automatic1111.api_auth",
|
||||
automatic1111_params,
|
||||
)
|
||||
|
||||
COMFYUI_BASE_URL = PersistentConfig(
|
||||
|
|
@ -3297,18 +3301,79 @@ IMAGES_GEMINI_API_KEY = PersistentConfig(
|
|||
os.getenv("IMAGES_GEMINI_API_KEY", GEMINI_API_KEY),
|
||||
)
|
||||
|
||||
IMAGE_SIZE = PersistentConfig(
|
||||
"IMAGE_SIZE", "image_generation.size", os.getenv("IMAGE_SIZE", "512x512")
|
||||
IMAGES_GEMINI_ENDPOINT_METHOD = PersistentConfig(
|
||||
"IMAGES_GEMINI_ENDPOINT_METHOD",
|
||||
"image_generation.gemini.endpoint_method",
|
||||
os.getenv("IMAGES_GEMINI_ENDPOINT_METHOD", ""),
|
||||
)
|
||||
|
||||
IMAGE_STEPS = PersistentConfig(
|
||||
"IMAGE_STEPS", "image_generation.steps", int(os.getenv("IMAGE_STEPS", 50))
|
||||
|
||||
IMAGE_EDIT_ENGINE = PersistentConfig(
|
||||
"IMAGE_EDIT_ENGINE",
|
||||
"images.edit.engine",
|
||||
os.getenv("IMAGE_EDIT_ENGINE", "openai"),
|
||||
)
|
||||
|
||||
IMAGE_GENERATION_MODEL = PersistentConfig(
|
||||
"IMAGE_GENERATION_MODEL",
|
||||
"image_generation.model",
|
||||
os.getenv("IMAGE_GENERATION_MODEL", ""),
|
||||
IMAGE_EDIT_MODEL = PersistentConfig(
|
||||
"IMAGE_EDIT_MODEL",
|
||||
"images.edit.model",
|
||||
os.getenv("IMAGE_EDIT_MODEL", ""),
|
||||
)
|
||||
|
||||
IMAGE_EDIT_SIZE = PersistentConfig(
|
||||
"IMAGE_EDIT_SIZE", "images.edit.size", os.getenv("IMAGE_EDIT_SIZE", "")
|
||||
)
|
||||
|
||||
IMAGES_EDIT_OPENAI_API_BASE_URL = PersistentConfig(
|
||||
"IMAGES_EDIT_OPENAI_API_BASE_URL",
|
||||
"images.edit.openai.api_base_url",
|
||||
os.getenv("IMAGES_EDIT_OPENAI_API_BASE_URL", OPENAI_API_BASE_URL),
|
||||
)
|
||||
IMAGES_EDIT_OPENAI_API_VERSION = PersistentConfig(
|
||||
"IMAGES_EDIT_OPENAI_API_VERSION",
|
||||
"images.edit.openai.api_version",
|
||||
os.getenv("IMAGES_EDIT_OPENAI_API_VERSION", ""),
|
||||
)
|
||||
|
||||
IMAGES_EDIT_OPENAI_API_KEY = PersistentConfig(
|
||||
"IMAGES_EDIT_OPENAI_API_KEY",
|
||||
"images.edit.openai.api_key",
|
||||
os.getenv("IMAGES_EDIT_OPENAI_API_KEY", OPENAI_API_KEY),
|
||||
)
|
||||
|
||||
IMAGES_EDIT_GEMINI_API_BASE_URL = PersistentConfig(
|
||||
"IMAGES_EDIT_GEMINI_API_BASE_URL",
|
||||
"images.edit.gemini.api_base_url",
|
||||
os.getenv("IMAGES_EDIT_GEMINI_API_BASE_URL", GEMINI_API_BASE_URL),
|
||||
)
|
||||
IMAGES_EDIT_GEMINI_API_KEY = PersistentConfig(
|
||||
"IMAGES_EDIT_GEMINI_API_KEY",
|
||||
"images.edit.gemini.api_key",
|
||||
os.getenv("IMAGES_EDIT_GEMINI_API_KEY", GEMINI_API_KEY),
|
||||
)
|
||||
|
||||
|
||||
IMAGES_EDIT_COMFYUI_BASE_URL = PersistentConfig(
|
||||
"IMAGES_EDIT_COMFYUI_BASE_URL",
|
||||
"images.edit.comfyui.base_url",
|
||||
os.getenv("IMAGES_EDIT_COMFYUI_BASE_URL", ""),
|
||||
)
|
||||
IMAGES_EDIT_COMFYUI_API_KEY = PersistentConfig(
|
||||
"IMAGES_EDIT_COMFYUI_API_KEY",
|
||||
"images.edit.comfyui.api_key",
|
||||
os.getenv("IMAGES_EDIT_COMFYUI_API_KEY", ""),
|
||||
)
|
||||
|
||||
IMAGES_EDIT_COMFYUI_WORKFLOW = PersistentConfig(
|
||||
"IMAGES_EDIT_COMFYUI_WORKFLOW",
|
||||
"images.edit.comfyui.workflow",
|
||||
os.getenv("IMAGES_EDIT_COMFYUI_WORKFLOW", ""),
|
||||
)
|
||||
|
||||
IMAGES_EDIT_COMFYUI_WORKFLOW_NODES = PersistentConfig(
|
||||
"IMAGES_EDIT_COMFYUI_WORKFLOW_NODES",
|
||||
"images.edit.comfyui.nodes",
|
||||
[],
|
||||
)
|
||||
|
||||
####################################
|
||||
|
|
@ -3343,6 +3408,10 @@ DEEPGRAM_API_KEY = PersistentConfig(
|
|||
os.getenv("DEEPGRAM_API_KEY", ""),
|
||||
)
|
||||
|
||||
# ElevenLabs configuration
|
||||
ELEVENLABS_API_BASE_URL = os.getenv(
|
||||
"ELEVENLABS_API_BASE_URL", "https://api.elevenlabs.io"
|
||||
)
|
||||
|
||||
AUDIO_STT_OPENAI_API_BASE_URL = PersistentConfig(
|
||||
"AUDIO_STT_OPENAI_API_BASE_URL",
|
||||
|
|
@ -3410,6 +3479,24 @@ AUDIO_STT_AZURE_MAX_SPEAKERS = PersistentConfig(
|
|||
os.getenv("AUDIO_STT_AZURE_MAX_SPEAKERS", ""),
|
||||
)
|
||||
|
||||
AUDIO_STT_MISTRAL_API_KEY = PersistentConfig(
|
||||
"AUDIO_STT_MISTRAL_API_KEY",
|
||||
"audio.stt.mistral.api_key",
|
||||
os.getenv("AUDIO_STT_MISTRAL_API_KEY", ""),
|
||||
)
|
||||
|
||||
AUDIO_STT_MISTRAL_API_BASE_URL = PersistentConfig(
|
||||
"AUDIO_STT_MISTRAL_API_BASE_URL",
|
||||
"audio.stt.mistral.api_base_url",
|
||||
os.getenv("AUDIO_STT_MISTRAL_API_BASE_URL", "https://api.mistral.ai/v1"),
|
||||
)
|
||||
|
||||
AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = PersistentConfig(
|
||||
"AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS",
|
||||
"audio.stt.mistral.use_chat_completions",
|
||||
os.getenv("AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS", "false").lower() == "true",
|
||||
)
|
||||
|
||||
AUDIO_TTS_OPENAI_API_BASE_URL = PersistentConfig(
|
||||
"AUDIO_TTS_OPENAI_API_BASE_URL",
|
||||
"audio.tts.openai.api_base_url",
|
||||
|
|

@@ -569,6 +569,21 @@ else:
    CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30


CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = os.environ.get(
    "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE", ""
)

if CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE == "":
    CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None
else:
    try:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = int(
            CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
        )
    except Exception:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None


####################################
# WEBSOCKET SUPPORT
####################################
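
To make the fallback behaviour above concrete, here is a small self-contained sketch (the helper function is illustrative, not part of the codebase): an unset or invalid value disables the limit, while a numeric string becomes an int.

```python
from typing import Optional

def parse_chunk_buffer_size(raw: str) -> Optional[int]:
    # Mirrors the logic above: "" -> None, valid integers pass through, junk -> None.
    if raw == "":
        return None
    try:
        return int(raw)
    except Exception:
        return None

print(parse_chunk_buffer_size(""))      # None (no buffering limit configured)
print(parse_chunk_buffer_size("8192"))  # 8192
print(parse_chunk_buffer_size("oops"))  # None (invalid input falls back to None)
```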
|
|
|||
|
|
@ -146,9 +146,7 @@ from open_webui.config import (
|
|||
# Image
|
||||
AUTOMATIC1111_API_AUTH,
|
||||
AUTOMATIC1111_BASE_URL,
|
||||
AUTOMATIC1111_CFG_SCALE,
|
||||
AUTOMATIC1111_SAMPLER,
|
||||
AUTOMATIC1111_SCHEDULER,
|
||||
AUTOMATIC1111_PARAMS,
|
||||
COMFYUI_BASE_URL,
|
||||
COMFYUI_API_KEY,
|
||||
COMFYUI_WORKFLOW,
|
||||
|
|
@ -164,6 +162,19 @@ from open_webui.config import (
|
|||
IMAGES_OPENAI_API_KEY,
|
||||
IMAGES_GEMINI_API_BASE_URL,
|
||||
IMAGES_GEMINI_API_KEY,
|
||||
IMAGES_GEMINI_ENDPOINT_METHOD,
|
||||
IMAGE_EDIT_ENGINE,
|
||||
IMAGE_EDIT_MODEL,
|
||||
IMAGE_EDIT_SIZE,
|
||||
IMAGES_EDIT_OPENAI_API_BASE_URL,
|
||||
IMAGES_EDIT_OPENAI_API_KEY,
|
||||
IMAGES_EDIT_OPENAI_API_VERSION,
|
||||
IMAGES_EDIT_GEMINI_API_BASE_URL,
|
||||
IMAGES_EDIT_GEMINI_API_KEY,
|
||||
IMAGES_EDIT_COMFYUI_BASE_URL,
|
||||
IMAGES_EDIT_COMFYUI_API_KEY,
|
||||
IMAGES_EDIT_COMFYUI_WORKFLOW,
|
||||
IMAGES_EDIT_COMFYUI_WORKFLOW_NODES,
|
||||
# Audio
|
||||
AUDIO_STT_ENGINE,
|
||||
AUDIO_STT_MODEL,
|
||||
|
|
@ -175,6 +186,9 @@ from open_webui.config import (
|
|||
AUDIO_STT_AZURE_LOCALES,
|
||||
AUDIO_STT_AZURE_BASE_URL,
|
||||
AUDIO_STT_AZURE_MAX_SPEAKERS,
|
||||
AUDIO_STT_MISTRAL_API_KEY,
|
||||
AUDIO_STT_MISTRAL_API_BASE_URL,
|
||||
AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
|
||||
AUDIO_TTS_ENGINE,
|
||||
AUDIO_TTS_MODEL,
|
||||
AUDIO_TTS_VOICE,
|
||||
|
|
@ -266,6 +280,7 @@ from open_webui.config import (
|
|||
DOCLING_PICTURE_DESCRIPTION_API,
|
||||
DOCUMENT_INTELLIGENCE_ENDPOINT,
|
||||
DOCUMENT_INTELLIGENCE_KEY,
|
||||
MISTRAL_OCR_API_BASE_URL,
|
||||
MISTRAL_OCR_API_KEY,
|
||||
RAG_TEXT_SPLITTER,
|
||||
TIKTOKEN_ENCODING_NAME,
|
||||
|
|
@ -482,9 +497,11 @@ from open_webui.utils.auth import (
|
|||
)
|
||||
from open_webui.utils.plugin import install_tool_and_function_dependencies
|
||||
from open_webui.utils.oauth import (
|
||||
get_oauth_client_info_with_dynamic_client_registration,
|
||||
encrypt_data,
|
||||
decrypt_data,
|
||||
OAuthManager,
|
||||
OAuthClientManager,
|
||||
decrypt_data,
|
||||
OAuthClientInformationFull,
|
||||
)
|
||||
from open_webui.utils.security_headers import SecurityHeadersMiddleware
|
||||
|
|
@ -856,6 +873,7 @@ app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = DOCLING_PICTURE_DESCRIPTION
|
|||
app.state.config.DOCLING_PICTURE_DESCRIPTION_API = DOCLING_PICTURE_DESCRIPTION_API
|
||||
app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = DOCUMENT_INTELLIGENCE_ENDPOINT
|
||||
app.state.config.DOCUMENT_INTELLIGENCE_KEY = DOCUMENT_INTELLIGENCE_KEY
|
||||
app.state.config.MISTRAL_OCR_API_BASE_URL = MISTRAL_OCR_API_BASE_URL
|
||||
app.state.config.MISTRAL_OCR_API_KEY = MISTRAL_OCR_API_KEY
|
||||
app.state.config.MINERU_API_MODE = MINERU_API_MODE
|
||||
app.state.config.MINERU_API_URL = MINERU_API_URL
|
||||
|
|
@ -1062,27 +1080,40 @@ app.state.config.IMAGE_GENERATION_ENGINE = IMAGE_GENERATION_ENGINE
|
|||
app.state.config.ENABLE_IMAGE_GENERATION = ENABLE_IMAGE_GENERATION
|
||||
app.state.config.ENABLE_IMAGE_PROMPT_GENERATION = ENABLE_IMAGE_PROMPT_GENERATION
|
||||
|
||||
app.state.config.IMAGE_GENERATION_MODEL = IMAGE_GENERATION_MODEL
|
||||
app.state.config.IMAGE_SIZE = IMAGE_SIZE
|
||||
app.state.config.IMAGE_STEPS = IMAGE_STEPS
|
||||
|
||||
app.state.config.IMAGES_OPENAI_API_BASE_URL = IMAGES_OPENAI_API_BASE_URL
|
||||
app.state.config.IMAGES_OPENAI_API_VERSION = IMAGES_OPENAI_API_VERSION
|
||||
app.state.config.IMAGES_OPENAI_API_KEY = IMAGES_OPENAI_API_KEY
|
||||
|
||||
app.state.config.IMAGES_GEMINI_API_BASE_URL = IMAGES_GEMINI_API_BASE_URL
|
||||
app.state.config.IMAGES_GEMINI_API_KEY = IMAGES_GEMINI_API_KEY
|
||||
|
||||
app.state.config.IMAGE_GENERATION_MODEL = IMAGE_GENERATION_MODEL
|
||||
app.state.config.IMAGES_GEMINI_ENDPOINT_METHOD = IMAGES_GEMINI_ENDPOINT_METHOD
|
||||
|
||||
app.state.config.AUTOMATIC1111_BASE_URL = AUTOMATIC1111_BASE_URL
|
||||
app.state.config.AUTOMATIC1111_API_AUTH = AUTOMATIC1111_API_AUTH
|
||||
app.state.config.AUTOMATIC1111_CFG_SCALE = AUTOMATIC1111_CFG_SCALE
|
||||
app.state.config.AUTOMATIC1111_SAMPLER = AUTOMATIC1111_SAMPLER
|
||||
app.state.config.AUTOMATIC1111_SCHEDULER = AUTOMATIC1111_SCHEDULER
|
||||
app.state.config.AUTOMATIC1111_PARAMS = AUTOMATIC1111_PARAMS
|
||||
|
||||
app.state.config.COMFYUI_BASE_URL = COMFYUI_BASE_URL
|
||||
app.state.config.COMFYUI_API_KEY = COMFYUI_API_KEY
|
||||
app.state.config.COMFYUI_WORKFLOW = COMFYUI_WORKFLOW
|
||||
app.state.config.COMFYUI_WORKFLOW_NODES = COMFYUI_WORKFLOW_NODES
|
||||
|
||||
app.state.config.IMAGE_SIZE = IMAGE_SIZE
|
||||
app.state.config.IMAGE_STEPS = IMAGE_STEPS
|
||||
|
||||
app.state.config.IMAGE_EDIT_ENGINE = IMAGE_EDIT_ENGINE
|
||||
app.state.config.IMAGE_EDIT_MODEL = IMAGE_EDIT_MODEL
|
||||
app.state.config.IMAGE_EDIT_SIZE = IMAGE_EDIT_SIZE
|
||||
app.state.config.IMAGES_EDIT_OPENAI_API_BASE_URL = IMAGES_EDIT_OPENAI_API_BASE_URL
|
||||
app.state.config.IMAGES_EDIT_OPENAI_API_KEY = IMAGES_EDIT_OPENAI_API_KEY
|
||||
app.state.config.IMAGES_EDIT_OPENAI_API_VERSION = IMAGES_EDIT_OPENAI_API_VERSION
|
||||
app.state.config.IMAGES_EDIT_GEMINI_API_BASE_URL = IMAGES_EDIT_GEMINI_API_BASE_URL
|
||||
app.state.config.IMAGES_EDIT_GEMINI_API_KEY = IMAGES_EDIT_GEMINI_API_KEY
|
||||
app.state.config.IMAGES_EDIT_COMFYUI_BASE_URL = IMAGES_EDIT_COMFYUI_BASE_URL
|
||||
app.state.config.IMAGES_EDIT_COMFYUI_API_KEY = IMAGES_EDIT_COMFYUI_API_KEY
|
||||
app.state.config.IMAGES_EDIT_COMFYUI_WORKFLOW = IMAGES_EDIT_COMFYUI_WORKFLOW
|
||||
app.state.config.IMAGES_EDIT_COMFYUI_WORKFLOW_NODES = IMAGES_EDIT_COMFYUI_WORKFLOW_NODES
|
||||
|
||||
|
||||
########################################
|
||||
|
|
@ -1108,6 +1139,12 @@ app.state.config.AUDIO_STT_AZURE_LOCALES = AUDIO_STT_AZURE_LOCALES
|
|||
app.state.config.AUDIO_STT_AZURE_BASE_URL = AUDIO_STT_AZURE_BASE_URL
|
||||
app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = AUDIO_STT_AZURE_MAX_SPEAKERS
|
||||
|
||||
app.state.config.AUDIO_STT_MISTRAL_API_KEY = AUDIO_STT_MISTRAL_API_KEY
|
||||
app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL = AUDIO_STT_MISTRAL_API_BASE_URL
|
||||
app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = (
|
||||
AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS
|
||||
)
|
||||
|
||||
app.state.config.TTS_ENGINE = AUDIO_TTS_ENGINE
|
||||
|
||||
app.state.config.TTS_MODEL = AUDIO_TTS_MODEL
|
||||
|
|
@ -1941,6 +1978,7 @@ if len(app.state.config.TOOL_SERVER_CONNECTIONS) > 0:
|
|||
if tool_server_connection.get("type", "openapi") == "mcp":
|
||||
server_id = tool_server_connection.get("info", {}).get("id")
|
||||
auth_type = tool_server_connection.get("auth_type", "none")
|
||||
|
||||
if server_id and auth_type == "oauth_2.1":
|
||||
oauth_client_info = tool_server_connection.get("info", {}).get(
|
||||
"oauth_client_info", ""
|
||||
|
|
@ -1986,6 +2024,64 @@ except Exception as e:
|
|||
)
|
||||
|
||||
|
||||
async def register_client(self, request, client_id: str) -> bool:
|
||||
server_type, server_id = client_id.split(":", 1)
|
||||
|
||||
connection = None
|
||||
connection_idx = None
|
||||
|
||||
for idx, conn in enumerate(request.app.state.config.TOOL_SERVER_CONNECTIONS or []):
|
||||
if conn.get("type", "openapi") == server_type:
|
||||
info = conn.get("info", {})
|
||||
if info.get("id") == server_id:
|
||||
connection = conn
|
||||
connection_idx = idx
|
||||
break
|
||||
|
||||
if connection is None or connection_idx is None:
|
||||
log.warning(
|
||||
f"Unable to locate MCP tool server configuration for client {client_id} during re-registration"
|
||||
)
|
||||
return False
|
||||
|
||||
server_url = connection.get("url")
|
||||
oauth_server_key = (connection.get("config") or {}).get("oauth_server_key")
|
||||
|
||||
try:
|
||||
oauth_client_info = (
|
||||
await get_oauth_client_info_with_dynamic_client_registration(
|
||||
request,
|
||||
client_id,
|
||||
server_url,
|
||||
oauth_server_key,
|
||||
)
|
||||
)
|
||||
except Exception as e:
|
||||
log.error(f"Dynamic client re-registration failed for {client_id}: {e}")
|
||||
return False
|
||||
|
||||
try:
|
||||
request.app.state.config.TOOL_SERVER_CONNECTIONS[connection_idx] = {
|
||||
**connection,
|
||||
"info": {
|
||||
**connection.get("info", {}),
|
||||
"oauth_client_info": encrypt_data(
|
||||
oauth_client_info.model_dump(mode="json")
|
||||
),
|
||||
},
|
||||
}
|
||||
except Exception as e:
|
||||
log.error(
|
||||
f"Failed to persist updated OAuth client info for tool server {client_id}: {e}"
|
||||
)
|
||||
return False
|
||||
|
||||
oauth_client_manager.remove_client(client_id)
|
||||
oauth_client_manager.add_client(client_id, oauth_client_info)
|
||||
log.info(f"Re-registered OAuth client {client_id} for tool server")
|
||||
return True
|
||||
|
||||
|
||||
@app.get("/oauth/clients/{client_id}/authorize")
|
||||
async def oauth_client_authorize(
|
||||
client_id: str,
|
||||
|
|
@ -1993,6 +2089,41 @@ async def oauth_client_authorize(
|
|||
response: Response,
|
||||
user=Depends(get_verified_user),
|
||||
):
|
||||
# ensure_valid_client_registration
|
||||
client = oauth_client_manager.get_client(client_id)
|
||||
client_info = oauth_client_manager.get_client_info(client_id)
|
||||
if client is None or client_info is None:
|
||||
raise HTTPException(status.HTTP_404_NOT_FOUND)
|
||||
|
||||
if not await oauth_client_manager._preflight_authorization_url(client, client_info):
|
||||
log.info(
|
||||
"Detected invalid OAuth client %s; attempting re-registration",
|
||||
client_id,
|
||||
)
|
||||
|
||||
registered = await register_client(request, client_id)
|
||||
if not registered:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||
detail="Failed to re-register OAuth client",
|
||||
)
|
||||
|
||||
client = oauth_client_manager.get_client(client_id)
|
||||
client_info = oauth_client_manager.get_client_info(client_id)
|
||||
if client is None or client_info is None:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||
detail="OAuth client unavailable after re-registration",
|
||||
)
|
||||
|
||||
if not await oauth_client_manager._preflight_authorization_url(
|
||||
client, client_info
|
||||
):
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||
detail="OAuth client registration is still invalid after re-registration",
|
||||
)
|
||||
|
||||
return await oauth_client_manager.handle_authorize(request, client_id=client_id)
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -765,15 +765,20 @@ class ChatTable:
|
|||
)
|
||||
|
||||
elif dialect_name == "postgresql":
|
||||
# PostgreSQL relies on proper JSON query for search
|
||||
# PostgreSQL doesn't allow null bytes in text. We filter those out by checking
|
||||
# the JSON representation for \u0000 before attempting text extraction
|
||||
postgres_content_sql = (
|
||||
"EXISTS ("
|
||||
" SELECT 1 "
|
||||
" FROM json_array_elements(Chat.chat->'messages') AS message "
|
||||
" WHERE LOWER(message->>'content') LIKE '%' || :content_key || '%'"
|
||||
" WHERE message->'content' IS NOT NULL "
|
||||
" AND (message->'content')::text NOT LIKE '%\\u0000%' "
|
||||
" AND LOWER(message->>'content') LIKE '%' || :content_key || '%'"
|
||||
")"
|
||||
)
|
||||
postgres_content_clause = text(postgres_content_sql)
|
||||
# Also filter out chats with null bytes in title
|
||||
query = query.filter(text("Chat.title::text NOT LIKE '%\\x00%'"))
|
||||
query = query.filter(
|
||||
or_(
|
||||
Chat.title.ilike(bindparam("title_key")),
|
||||
|
|

@@ -98,6 +98,12 @@ class FileForm(BaseModel):
    access_control: Optional[dict] = None


class FileUpdateForm(BaseModel):
    hash: Optional[str] = None
    data: Optional[dict] = None
    meta: Optional[dict] = None


class FilesTable:
    def insert_new_file(self, user_id: str, form_data: FileForm) -> Optional[FileModel]:
        with get_db() as db:

@@ -204,6 +210,29 @@ class FilesTable:
                for file in db.query(File).filter_by(user_id=user_id).all()
            ]

    def update_file_by_id(
        self, id: str, form_data: FileUpdateForm
    ) -> Optional[FileModel]:
        with get_db() as db:
            try:
                file = db.query(File).filter_by(id=id).first()

                if form_data.hash is not None:
                    file.hash = form_data.hash

                if form_data.data is not None:
                    file.data = {**(file.data if file.data else {}), **form_data.data}

                if form_data.meta is not None:
                    file.meta = {**(file.meta if file.meta else {}), **form_data.meta}

                file.updated_at = int(time.time())
                db.commit()
                return FileModel.model_validate(file)
            except Exception as e:
                log.exception(f"Error updating file completely by id: {e}")
                return None

    def update_file_hash_by_id(self, id: str, hash: str) -> Optional[FileModel]:
        with get_db() as db:
            try:
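
As an illustration only, assuming the module keeps its usual `Files = FilesTable()` singleton (not shown in this hunk), the new partial-update method could be exercised like this; the file id and metadata are made up:

```python
from open_webui.models.files import Files, FileUpdateForm  # `Files` singleton assumed

# Only the fields set on FileUpdateForm are touched; `data` and `meta` are merged
# into the existing dictionaries rather than replaced wholesale.
updated = Files.update_file_by_id(
    "file-123",  # hypothetical file id
    FileUpdateForm(meta={"name": "renamed.pdf"}),
)
if updated is not None:
    print(updated.meta.get("name"))  # "renamed.pdf"
```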

@@ -262,5 +262,16 @@ class OAuthSessionTable:
                log.error(f"Error deleting OAuth sessions by user ID: {e}")
                return False

    def delete_sessions_by_provider(self, provider: str) -> bool:
        """Delete all OAuth sessions for a provider"""
        try:
            with get_db() as db:
                db.query(OAuthSession).filter_by(provider=provider).delete()
                db.commit()
                return True
        except Exception as e:
            log.error(f"Error deleting OAuth sessions by provider {provider}: {e}")
            return False


OAuthSessions = OAuthSessionTable()
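
A one-line usage sketch of the new helper, using the `OAuthSessions` singleton defined at the end of the hunk above; the provider name is illustrative:

```python
# e.g. clear every stored session for one provider after rotating its client secret
OAuthSessions.delete_sessions_by_provider("google")
```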
|
|
|||
|
|
@ -5,6 +5,7 @@ from urllib.parse import quote
|
|||
|
||||
from langchain_core.document_loaders import BaseLoader
|
||||
from langchain_core.documents import Document
|
||||
from open_webui.utils.headers import include_user_info_headers
|
||||
from open_webui.env import SRC_LOG_LEVELS
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
|
@ -18,6 +19,7 @@ class ExternalDocumentLoader(BaseLoader):
|
|||
url: str,
|
||||
api_key: str,
|
||||
mime_type=None,
|
||||
user=None,
|
||||
**kwargs,
|
||||
) -> None:
|
||||
self.url = url
|
||||
|
|
@ -26,6 +28,8 @@ class ExternalDocumentLoader(BaseLoader):
|
|||
self.file_path = file_path
|
||||
self.mime_type = mime_type
|
||||
|
||||
self.user = user
|
||||
|
||||
def load(self) -> List[Document]:
|
||||
with open(self.file_path, "rb") as f:
|
||||
data = f.read()
|
||||
|
|
@ -42,6 +46,9 @@ class ExternalDocumentLoader(BaseLoader):
|
|||
except:
|
||||
pass
|
||||
|
||||
if self.user is not None:
|
||||
headers = include_user_info_headers(headers, self.user)
|
||||
|
||||
url = self.url
|
||||
if url.endswith("/"):
|
||||
url = url[:-1]
|
||||
|
|
|
|||
|
|
@ -228,6 +228,7 @@ class DoclingLoader:
|
|||
class Loader:
|
||||
def __init__(self, engine: str = "", **kwargs):
|
||||
self.engine = engine
|
||||
self.user = kwargs.get("user", None)
|
||||
self.kwargs = kwargs
|
||||
|
||||
def load(
|
||||
|
|
@ -264,6 +265,7 @@ class Loader:
|
|||
url=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_URL"),
|
||||
api_key=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_API_KEY"),
|
||||
mime_type=file_content_type,
|
||||
user=self.user,
|
||||
)
|
||||
elif self.engine == "tika" and self.kwargs.get("TIKA_SERVER_URL"):
|
||||
if self._is_text_file(file_ext, file_content_type):
|
||||
|
|
@ -272,7 +274,6 @@ class Loader:
|
|||
loader = TikaLoader(
|
||||
url=self.kwargs.get("TIKA_SERVER_URL"),
|
||||
file_path=file_path,
|
||||
mime_type=file_content_type,
|
||||
extract_images=self.kwargs.get("PDF_EXTRACT_IMAGES"),
|
||||
)
|
||||
elif (
|
||||
|
|
@ -369,14 +370,8 @@ class Loader:
|
|||
azure_credential=DefaultAzureCredential(),
|
||||
)
|
||||
elif self.engine == "mineru" and file_ext in [
|
||||
"pdf",
|
||||
"doc",
|
||||
"docx",
|
||||
"ppt",
|
||||
"pptx",
|
||||
"xls",
|
||||
"xlsx",
|
||||
]:
|
||||
"pdf"
|
||||
]: # MinerU currently only supports PDF
|
||||
loader = MinerULoader(
|
||||
file_path=file_path,
|
||||
api_mode=self.kwargs.get("MINERU_API_MODE", "local"),
|
||||
|
|
@ -391,16 +386,9 @@ class Loader:
|
|||
in ["pdf"] # Mistral OCR currently only supports PDF and images
|
||||
):
|
||||
loader = MistralLoader(
|
||||
api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
|
||||
)
|
||||
elif (
|
||||
self.engine == "external"
|
||||
and self.kwargs.get("MISTRAL_OCR_API_KEY") != ""
|
||||
and file_ext
|
||||
in ["pdf"] # Mistral OCR currently only supports PDF and images
|
||||
):
|
||||
loader = MistralLoader(
|
||||
api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
|
||||
base_url=self.kwargs.get("MISTRAL_OCR_API_BASE_URL"),
|
||||
api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"),
|
||||
file_path=file_path,
|
||||
)
|
||||
else:
|
||||
if file_ext == "pdf":
|
||||
|
|
|
|||
|
|
@ -33,13 +33,14 @@ class MinerULoader:
|
|||
self.api_key = api_key
|
||||
|
||||
# Parse params dict with defaults
|
||||
params = params or {}
|
||||
self.params = params or {}
|
||||
self.enable_ocr = params.get("enable_ocr", False)
|
||||
self.enable_formula = params.get("enable_formula", True)
|
||||
self.enable_table = params.get("enable_table", True)
|
||||
self.language = params.get("language", "en")
|
||||
self.model_version = params.get("model_version", "pipeline")
|
||||
self.page_ranges = params.get("page_ranges", "")
|
||||
|
||||
self.page_ranges = self.params.pop("page_ranges", "")
|
||||
|
||||
# Validate API mode
|
||||
if self.api_mode not in ["local", "cloud"]:
|
||||
|
|
@ -76,27 +77,10 @@ class MinerULoader:
|
|||
|
||||
# Build form data for Local API
|
||||
form_data = {
|
||||
**self.params,
|
||||
"return_md": "true",
|
||||
"formula_enable": str(self.enable_formula).lower(),
|
||||
"table_enable": str(self.enable_table).lower(),
|
||||
}
|
||||
|
||||
# Parse method based on OCR setting
|
||||
if self.enable_ocr:
|
||||
form_data["parse_method"] = "ocr"
|
||||
else:
|
||||
form_data["parse_method"] = "auto"
|
||||
|
||||
# Language configuration (Local API uses lang_list array)
|
||||
if self.language:
|
||||
form_data["lang_list"] = self.language
|
||||
|
||||
# Backend/model version (Local API uses "backend" parameter)
|
||||
if self.model_version == "vlm":
|
||||
form_data["backend"] = "vlm-vllm-engine"
|
||||
else:
|
||||
form_data["backend"] = "pipeline"
|
||||
|
||||
# Page ranges (Local API uses start_page_id and end_page_id)
|
||||
if self.page_ranges:
|
||||
# For simplicity, if page_ranges is specified, log a warning
|
||||
|
|
@ -236,10 +220,7 @@ class MinerULoader:
|
|||
|
||||
# Build request body
|
||||
request_body = {
|
||||
"enable_formula": self.enable_formula,
|
||||
"enable_table": self.enable_table,
|
||||
"language": self.language,
|
||||
"model_version": self.model_version,
|
||||
**self.params,
|
||||
"files": [
|
||||
{
|
||||
"name": filename,
|
||||
|
|
|
|||
|
|
@ -30,10 +30,9 @@ class MistralLoader:
|
|||
- Enhanced error handling with retryable error classification
|
||||
"""
|
||||
|
||||
BASE_API_URL = "https://api.mistral.ai/v1"
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
base_url: str,
|
||||
api_key: str,
|
||||
file_path: str,
|
||||
timeout: int = 300, # 5 minutes default
|
||||
|
|
@ -55,6 +54,9 @@ class MistralLoader:
|
|||
if not os.path.exists(file_path):
|
||||
raise FileNotFoundError(f"File not found at {file_path}")
|
||||
|
||||
self.base_url = (
|
||||
base_url.rstrip("/") if base_url else "https://api.mistral.ai/v1"
|
||||
)
|
||||
self.api_key = api_key
|
||||
self.file_path = file_path
|
||||
self.timeout = timeout
|
||||
|
|
@ -240,7 +242,7 @@ class MistralLoader:
|
|||
in a context manager to minimize memory usage duration.
|
||||
"""
|
||||
log.info("Uploading file to Mistral API")
|
||||
url = f"{self.BASE_API_URL}/files"
|
||||
url = f"{self.base_url}/files"
|
||||
|
||||
def upload_request():
|
||||
# MEMORY OPTIMIZATION: Use context manager to minimize file handle lifetime
|
||||
|
|
@ -275,7 +277,7 @@ class MistralLoader:
|
|||
|
||||
async def _upload_file_async(self, session: aiohttp.ClientSession) -> str:
|
||||
"""Async file upload with streaming for better memory efficiency."""
|
||||
url = f"{self.BASE_API_URL}/files"
|
||||
url = f"{self.base_url}/files"
|
||||
|
||||
async def upload_request():
|
||||
# Create multipart writer for streaming upload
|
||||
|
|
@ -321,7 +323,7 @@ class MistralLoader:
|
|||
def _get_signed_url(self, file_id: str) -> str:
|
||||
"""Retrieves a temporary signed URL for the uploaded file (sync version)."""
|
||||
log.info(f"Getting signed URL for file ID: {file_id}")
|
||||
url = f"{self.BASE_API_URL}/files/{file_id}/url"
|
||||
url = f"{self.base_url}/files/{file_id}/url"
|
||||
params = {"expiry": 1}
|
||||
signed_url_headers = {**self.headers, "Accept": "application/json"}
|
||||
|
||||
|
|
@ -346,7 +348,7 @@ class MistralLoader:
|
|||
self, session: aiohttp.ClientSession, file_id: str
|
||||
) -> str:
|
||||
"""Async signed URL retrieval."""
|
||||
url = f"{self.BASE_API_URL}/files/{file_id}/url"
|
||||
url = f"{self.base_url}/files/{file_id}/url"
|
||||
params = {"expiry": 1}
|
||||
|
||||
headers = {**self.headers, "Accept": "application/json"}
|
||||
|
|
@ -373,7 +375,7 @@ class MistralLoader:
|
|||
def _process_ocr(self, signed_url: str) -> Dict[str, Any]:
|
||||
"""Sends the signed URL to the OCR endpoint for processing (sync version)."""
|
||||
log.info("Processing OCR via Mistral API")
|
||||
url = f"{self.BASE_API_URL}/ocr"
|
||||
url = f"{self.base_url}/ocr"
|
||||
ocr_headers = {
|
||||
**self.headers,
|
||||
"Content-Type": "application/json",
|
||||
|
|
@ -407,7 +409,7 @@ class MistralLoader:
|
|||
self, session: aiohttp.ClientSession, signed_url: str
|
||||
) -> Dict[str, Any]:
|
||||
"""Async OCR processing with timing metrics."""
|
||||
url = f"{self.BASE_API_URL}/ocr"
|
||||
url = f"{self.base_url}/ocr"
|
||||
|
||||
headers = {
|
||||
**self.headers,
|
||||
|
|
@ -446,7 +448,7 @@ class MistralLoader:
|
|||
def _delete_file(self, file_id: str) -> None:
|
||||
"""Deletes the file from Mistral storage (sync version)."""
|
||||
log.info(f"Deleting uploaded file ID: {file_id}")
|
||||
url = f"{self.BASE_API_URL}/files/{file_id}"
|
||||
url = f"{self.base_url}/files/{file_id}"
|
||||
|
||||
try:
|
||||
response = requests.delete(
|
||||
|
|
@ -467,7 +469,7 @@ class MistralLoader:
|
|||
async def delete_request():
|
||||
self._debug_log(f"Deleting file ID: {file_id}")
|
||||
async with session.delete(
|
||||
url=f"{self.BASE_API_URL}/files/{file_id}",
|
||||
url=f"{self.base_url}/files/{file_id}",
|
||||
headers=self.headers,
|
||||
timeout=aiohttp.ClientTimeout(
|
||||
total=self.cleanup_timeout
|
||||
|
|
|
|||
|
|
@ -71,6 +71,7 @@ def get_loader(request, url: str):
|
|||
url,
|
||||
verify_ssl=request.app.state.config.ENABLE_WEB_LOADER_SSL_VERIFICATION,
|
||||
requests_per_second=request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS,
|
||||
trust_env=request.app.state.config.WEB_SEARCH_TRUST_ENV,
|
||||
)
|
||||
|
||||
|
||||
|
|
@ -159,10 +160,18 @@ def query_doc_with_hybrid_search(
|
|||
hybrid_bm25_weight: float,
|
||||
) -> dict:
|
||||
try:
|
||||
# First check if collection_result has the required attributes
|
||||
if (
|
||||
not collection_result
|
||||
or not hasattr(collection_result, "documents")
|
||||
or not collection_result.documents
|
||||
or not hasattr(collection_result, "metadatas")
|
||||
):
|
||||
log.warning(f"query_doc_with_hybrid_search:no_docs {collection_name}")
|
||||
return {"documents": [], "metadatas": [], "distances": []}
|
||||
|
||||
# Now safely check the documents content after confirming attributes exist
|
||||
if (
|
||||
not collection_result.documents
|
||||
or len(collection_result.documents) == 0
|
||||
or not collection_result.documents[0]
|
||||
):
|
||||
|
|
@ -507,11 +516,13 @@ def get_reranking_function(reranking_engine, reranking_model, reranking_function
|
|||
if reranking_function is None:
|
||||
return None
|
||||
if reranking_engine == "external":
|
||||
return lambda sentences, user=None: reranking_function.predict(
|
||||
sentences, user=user
|
||||
return lambda query, documents, user=None: reranking_function.predict(
|
||||
[(query, doc.page_content) for doc in documents], user=user
|
||||
)
|
||||
else:
|
||||
return lambda sentences, user=None: reranking_function.predict(sentences)
|
||||
return lambda query, documents, user=None: reranking_function.predict(
|
||||
[(query, doc.page_content) for doc in documents]
|
||||
)
|
||||
|
||||
|
||||
def get_sources_from_items(
|
||||
|
|
@ -1055,9 +1066,7 @@ class RerankCompressor(BaseDocumentCompressor):
|
|||
|
||||
scores = None
|
||||
if reranking:
|
||||
scores = self.reranking_function(
|
||||
[(query, doc.page_content) for doc in documents]
|
||||
)
|
||||
scores = self.reranking_function(query, documents)
|
||||
else:
|
||||
from sentence_transformers import util
|
||||
|
||||
|
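
The two hunks above change the reranking callable's contract: it now receives the query and the Document objects and builds the (query, passage) pairs itself. A minimal sketch with a dummy predict() (illustrative, not the project's code):

```python
from langchain_core.documents import Document

def predict(pairs, user=None):
    # Stand-in for a reranker's predict(): one relevance score per (query, passage) pair.
    return [float(len(passage)) for _query, passage in pairs]

def rerank(query, documents, user=None):
    # Same shape as the callables returned by get_reranking_function after this change.
    return predict([(query, doc.page_content) for doc in documents])

docs = [Document(page_content="short"), Document(page_content="a longer passage")]
print(rerank("example query", docs))  # [5.0, 16.0]
```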
|
|
|||
|
|
@@ -2,27 +2,42 @@ import logging
from typing import Optional, List

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results

from fastapi import Request

from open_webui.env import SRC_LOG_LEVELS

from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.utils.headers import include_user_info_headers


log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_external(
    request: Request,
    external_url: str,
    external_api_key: str,
    query: str,
    count: int,
    filter_list: Optional[List[str]] = None,
    user=None,
) -> List[SearchResult]:
    try:
        headers = {
            "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
            "Authorization": f"Bearer {external_api_key}",
        }
        headers = include_user_info_headers(headers, user)

        chat_id = getattr(request.state, "chat_id", None)
        if chat_id:
            headers["X-OpenWebUI-Chat-Id"] = str(chat_id)

        response = requests.post(
            external_url,
            headers={
                "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
                "Authorization": f"Bearer {external_api_key}",
            },
            headers=headers,
            json={
                "query": query,
                "count": count,
@@ -4,7 +4,6 @@ from typing import Optional, List
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

from firecrawl import Firecrawl

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])

@@ -18,7 +17,9 @@ def search_firecrawl(
    filter_list: Optional[List[str]] = None,
) -> List[SearchResult]:
    try:
        firecrawl = Firecrawl(api_key=firecrawl_api_key, api_url=firecrawl_url)
        from firecrawl import FirecrawlApp

        firecrawl = FirecrawlApp(api_key=firecrawl_api_key, api_url=firecrawl_url)
        response = firecrawl.search(
            query=query, limit=count, ignore_invalid_urls=True, timeout=count * 3
        )
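The Firecrawl client is now imported lazily at the call site and instantiated as `FirecrawlApp`, so the package is only needed when the Firecrawl engine is actually selected. A minimal sketch of that lazy-import pattern (the helper name and error message are illustrative, not part of the diff):

```python
def get_firecrawl_client(api_key: str, api_url: str):
    # Import inside the function so the optional dependency is only
    # required when the Firecrawl engine is actually used.
    try:
        from firecrawl import FirecrawlApp
    except ImportError as e:
        raise RuntimeError("firecrawl-py is not installed") from e
    return FirecrawlApp(api_key=api_key, api_url=api_url)
```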
@@ -15,6 +15,7 @@ def search_google_pse(
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
    referer: Optional[str] = None,
) -> list[SearchResult]:
    """Search using Google's Programmable Search Engine API and return the results as a list of SearchResult objects.

    Handles pagination for counts greater than 10.

@@ -30,7 +31,11 @@ def search_google_pse(
        list[SearchResult]: A list of SearchResult objects.
    """
    url = "https://www.googleapis.com/customsearch/v1"

    headers = {"Content-Type": "application/json"}
    if referer:
        headers["Referer"] = referer

    all_results = []
    start_index = 1  # Google PSE start parameter is 1-based
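The PSE request now carries an optional `Referer` header (useful when the API key is referrer-restricted) and pages through results in blocks of at most 10 using the 1-based `start` parameter. A rough sketch of such a pagination loop, assuming the standard PSE query parameters; this is not the exact implementation:

```python
import requests


def paged_pse_search(api_key, engine_id, query, count, referer=None):
    url = "https://www.googleapis.com/customsearch/v1"
    headers = {"Content-Type": "application/json"}
    if referer:
        headers["Referer"] = referer

    results, start_index = [], 1  # Google PSE 'start' is 1-based
    while len(results) < count:
        num = min(10, count - len(results))  # the API returns at most 10 items per page
        resp = requests.get(
            url,
            headers=headers,
            params={"key": api_key, "cx": engine_id, "q": query, "num": num, "start": start_index},
        )
        resp.raise_for_status()
        items = resp.json().get("items", [])
        if not items:
            break
        results.extend(items)
        start_index += len(items)
    return results[:count]
```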
@@ -16,6 +16,8 @@ from typing import (
    Union,
    Literal,
)

from fastapi.concurrency import run_in_threadpool
import aiohttp
import certifi
import validators

@@ -39,7 +41,6 @@ from open_webui.config import (
)
from open_webui.env import SRC_LOG_LEVELS

from firecrawl import Firecrawl

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])

@@ -142,13 +143,13 @@ class RateLimitMixin:


class URLProcessingMixin:
    def _verify_ssl_cert(self, url: str) -> bool:
    async def _verify_ssl_cert(self, url: str) -> bool:
        """Verify SSL certificate for a URL."""
        return verify_ssl_cert(url)
        return await run_in_threadpool(verify_ssl_cert, url)

    async def _safe_process_url(self, url: str) -> bool:
        """Perform safety checks before processing a URL."""
        if self.verify_ssl and not self._verify_ssl_cert(url):
        if self.verify_ssl and not await self._verify_ssl_cert(url):
            raise ValueError(f"SSL certificate verification failed for {url}")
        await self._wait_for_rate_limit()
        return True
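Moving the blocking certificate check into `run_in_threadpool` keeps the event loop responsive while the TLS handshake runs. A small self-contained sketch of the same pattern; the `verify_ssl_cert` body below is a stand-in, not the project's implementation:

```python
import socket
import ssl
from urllib.parse import urlparse

from fastapi.concurrency import run_in_threadpool


def verify_ssl_cert(url: str) -> bool:
    # Blocking: opens a TLS connection and lets the default context
    # validate the certificate chain and hostname.
    host = urlparse(url).hostname or url
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ssl.create_default_context().wrap_socket(sock, server_hostname=host):
                return True
    except Exception:
        return False


async def safe_check(url: str) -> bool:
    # Off-load the blocking call so other coroutines keep running.
    return await run_in_threadpool(verify_ssl_cert, url)

# asyncio.run(safe_check("https://example.com"))
```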
@@ -225,7 +226,9 @@ class SafeFireCrawlLoader(BaseLoader, RateLimitMixin, URLProcessingMixin):
                self.params,
            )
        try:
            firecrawl = Firecrawl(api_key=self.api_key, api_url=self.api_url)
            from firecrawl import FirecrawlApp

            firecrawl = FirecrawlApp(api_key=self.api_key, api_url=self.api_url)
            result = firecrawl.batch_scrape(
                self.web_paths,
                formats=["markdown"],

@@ -264,7 +267,9 @@ class SafeFireCrawlLoader(BaseLoader, RateLimitMixin, URLProcessingMixin):
                self.params,
            )
        try:
            firecrawl = Firecrawl(api_key=self.api_key, api_url=self.api_url)
            from firecrawl import FirecrawlApp

            firecrawl = FirecrawlApp(api_key=self.api_key, api_url=self.api_url)
            result = firecrawl.batch_scrape(
                self.web_paths,
                formats=["markdown"],
@@ -4,6 +4,7 @@ import logging
import os
import uuid
import html
import base64
from functools import lru_cache
from pydub import AudioSegment
from pydub.silence import split_on_silence

@@ -39,13 +40,14 @@ from open_webui.config import (
    WHISPER_MODEL_DIR,
    CACHE_DIR,
    WHISPER_LANGUAGE,
    ELEVENLABS_API_BASE_URL,
)

from open_webui.constants import ERROR_MESSAGES
from open_webui.env import (
    ENV,
    AIOHTTP_CLIENT_SESSION_SSL,
    AIOHTTP_CLIENT_TIMEOUT,
    ENV,
    SRC_LOG_LEVELS,
    DEVICE_TYPE,
    ENABLE_FORWARD_USER_INFO_HEADERS,

@@ -178,6 +180,9 @@ class STTConfigForm(BaseModel):
    AZURE_LOCALES: str
    AZURE_BASE_URL: str
    AZURE_MAX_SPEAKERS: str
    MISTRAL_API_KEY: str
    MISTRAL_API_BASE_URL: str
    MISTRAL_USE_CHAT_COMPLETIONS: bool


class AudioConfigUpdateForm(BaseModel):

@@ -214,6 +219,9 @@ async def get_audio_config(request: Request, user=Depends(get_admin_user)):
            "AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES,
            "AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL,
            "AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS,
            "MISTRAL_API_KEY": request.app.state.config.AUDIO_STT_MISTRAL_API_KEY,
            "MISTRAL_API_BASE_URL": request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL,
            "MISTRAL_USE_CHAT_COMPLETIONS": request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
        },
    }

@@ -255,6 +263,13 @@ async def update_audio_config(
    request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = (
        form_data.stt.AZURE_MAX_SPEAKERS
    )
    request.app.state.config.AUDIO_STT_MISTRAL_API_KEY = form_data.stt.MISTRAL_API_KEY
    request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL = (
        form_data.stt.MISTRAL_API_BASE_URL
    )
    request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = (
        form_data.stt.MISTRAL_USE_CHAT_COMPLETIONS
    )

    if request.app.state.config.STT_ENGINE == "":
        request.app.state.faster_whisper_model = set_faster_whisper_model(

@@ -290,6 +305,9 @@ async def update_audio_config(
            "AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES,
            "AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL,
            "AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS,
            "MISTRAL_API_KEY": request.app.state.config.AUDIO_STT_MISTRAL_API_KEY,
            "MISTRAL_API_BASE_URL": request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL,
            "MISTRAL_USE_CHAT_COMPLETIONS": request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
        },
    }

@@ -413,7 +431,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
            timeout=timeout, trust_env=True
        ) as session:
            async with session.post(
                f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
                f"{ELEVENLABS_API_BASE_URL}/v1/text-to-speech/{voice_id}",
                json={
                    "text": payload["input"],
                    "model_id": request.app.state.config.TTS_MODEL,
@ -828,6 +846,186 @@ def transcription_handler(request, file_path, metadata):
|
|||
detail=detail if detail else "Open WebUI: Server Connection Error",
|
||||
)
|
||||
|
||||
elif request.app.state.config.STT_ENGINE == "mistral":
|
||||
# Check file exists
|
||||
if not os.path.exists(file_path):
|
||||
raise HTTPException(status_code=400, detail="Audio file not found")
|
||||
|
||||
# Check file size
|
||||
file_size = os.path.getsize(file_path)
|
||||
if file_size > MAX_FILE_SIZE:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail=f"File size exceeds limit of {MAX_FILE_SIZE_MB}MB",
|
||||
)
|
||||
|
||||
api_key = request.app.state.config.AUDIO_STT_MISTRAL_API_KEY
|
||||
api_base_url = (
|
||||
request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL
|
||||
or "https://api.mistral.ai/v1"
|
||||
)
|
||||
use_chat_completions = (
|
||||
request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS
|
||||
)
|
||||
|
||||
if not api_key:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail="Mistral API key is required for Mistral STT",
|
||||
)
|
||||
|
||||
r = None
|
||||
try:
|
||||
# Use voxtral-mini-latest as the default model for transcription
|
||||
model = request.app.state.config.STT_MODEL or "voxtral-mini-latest"
|
||||
|
||||
log.info(
|
||||
f"Mistral STT - model: {model}, "
|
||||
f"method: {'chat_completions' if use_chat_completions else 'transcriptions'}"
|
||||
)
|
||||
|
||||
if use_chat_completions:
|
||||
# Use chat completions API with audio input
|
||||
# This method requires mp3 or wav format
|
||||
audio_file_to_use = file_path
|
||||
|
||||
if is_audio_conversion_required(file_path):
|
||||
log.debug("Converting audio to mp3 for chat completions API")
|
||||
converted_path = convert_audio_to_mp3(file_path)
|
||||
if converted_path:
|
||||
audio_file_to_use = converted_path
|
||||
else:
|
||||
log.error("Audio conversion failed")
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail="Audio conversion failed. Chat completions API requires mp3 or wav format.",
|
||||
)
|
||||
|
||||
# Read and encode audio file as base64
|
||||
with open(audio_file_to_use, "rb") as audio_file:
|
||||
audio_base64 = base64.b64encode(audio_file.read()).decode("utf-8")
|
||||
|
||||
# Prepare chat completions request
|
||||
url = f"{api_base_url}/chat/completions"
|
||||
|
||||
# Add language instruction if specified
|
||||
language = metadata.get("language", None) if metadata else None
|
||||
if language:
|
||||
text_instruction = f"Transcribe this audio exactly as spoken in {language}. Do not translate it."
|
||||
else:
|
||||
text_instruction = "Transcribe this audio exactly as spoken in its original language. Do not translate it to another language."
|
||||
|
||||
payload = {
|
||||
"model": model,
|
||||
"messages": [
|
||||
{
|
||||
"role": "user",
|
||||
"content": [
|
||||
{
|
||||
"type": "input_audio",
|
||||
"input_audio": audio_base64,
|
||||
},
|
||||
{"type": "text", "text": text_instruction},
|
||||
],
|
||||
}
|
||||
],
|
||||
}
|
||||
|
||||
r = requests.post(
|
||||
url=url,
|
||||
json=payload,
|
||||
headers={
|
||||
"Authorization": f"Bearer {api_key}",
|
||||
"Content-Type": "application/json",
|
||||
},
|
||||
)
|
||||
|
||||
r.raise_for_status()
|
||||
response = r.json()
|
||||
|
||||
# Extract transcript from chat completion response
|
||||
transcript = (
|
||||
response.get("choices", [{}])[0]
|
||||
.get("message", {})
|
||||
.get("content", "")
|
||||
.strip()
|
||||
)
|
||||
if not transcript:
|
||||
raise ValueError("Empty transcript in response")
|
||||
|
||||
data = {"text": transcript}
|
||||
|
||||
else:
|
||||
# Use dedicated transcriptions API
|
||||
url = f"{api_base_url}/audio/transcriptions"
|
||||
|
||||
# Determine the MIME type
|
||||
mime_type, _ = mimetypes.guess_type(file_path)
|
||||
if not mime_type:
|
||||
mime_type = "audio/webm"
|
||||
|
||||
# Use context manager to ensure file is properly closed
|
||||
with open(file_path, "rb") as audio_file:
|
||||
files = {"file": (filename, audio_file, mime_type)}
|
||||
data_form = {"model": model}
|
||||
|
||||
# Add language if specified in metadata
|
||||
language = metadata.get("language", None) if metadata else None
|
||||
if language:
|
||||
data_form["language"] = language
|
||||
|
||||
r = requests.post(
|
||||
url=url,
|
||||
files=files,
|
||||
data=data_form,
|
||||
headers={
|
||||
"Authorization": f"Bearer {api_key}",
|
||||
},
|
||||
)
|
||||
|
||||
r.raise_for_status()
|
||||
response = r.json()
|
||||
|
||||
# Extract transcript from response
|
||||
transcript = response.get("text", "").strip()
|
||||
if not transcript:
|
||||
raise ValueError("Empty transcript in response")
|
||||
|
||||
data = {"text": transcript}
|
||||
|
||||
# Save transcript to json file (consistent with other providers)
|
||||
transcript_file = f"{file_dir}/{id}.json"
|
||||
with open(transcript_file, "w") as f:
|
||||
json.dump(data, f)
|
||||
|
||||
log.debug(data)
|
||||
return data
|
||||
|
||||
except ValueError as e:
|
||||
log.exception("Error parsing Mistral response")
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Failed to parse Mistral response: {str(e)}",
|
||||
)
|
||||
except requests.exceptions.RequestException as e:
|
||||
log.exception(e)
|
||||
detail = None
|
||||
|
||||
try:
|
||||
if r is not None and r.status_code != 200:
|
||||
res = r.json()
|
||||
if "error" in res:
|
||||
detail = f"External: {res['error'].get('message', '')}"
|
||||
else:
|
||||
detail = f"External: {r.text}"
|
||||
except Exception:
|
||||
detail = f"External: {e}"
|
||||
|
||||
raise HTTPException(
|
||||
status_code=getattr(r, "status_code", 500) if r else 500,
|
||||
detail=detail if detail else "Open WebUI: Server Connection Error",
|
||||
)
|
||||
|
||||
|
||||
def transcribe(request: Request, file_path: str, metadata: Optional[dict] = None):
|
||||
log.info(f"transcribe: {file_path} {metadata}")
|
||||
|
|
@@ -1037,7 +1235,7 @@ def get_available_models(request: Request) -> list[dict]:
    elif request.app.state.config.TTS_ENGINE == "elevenlabs":
        try:
            response = requests.get(
                "https://api.elevenlabs.io/v1/models",
                f"{ELEVENLABS_API_BASE_URL}/v1/models",
                headers={
                    "xi-api-key": request.app.state.config.TTS_API_KEY,
                    "Content-Type": "application/json",

@@ -1141,7 +1339,7 @@ def get_elevenlabs_voices(api_key: str) -> dict:
    try:
        # TODO: Add retries
        response = requests.get(
            "https://api.elevenlabs.io/v1/voices",
            f"{ELEVENLABS_API_BASE_URL}/v1/voices",
            headers={
                "xi-api-key": api_key,
                "Content-Type": "application/json",
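The hard-coded ElevenLabs host is replaced by the configurable `ELEVENLABS_API_BASE_URL`, which makes proxies or regional gateways possible. A hedged sketch of the idea, assuming the config value is backed by an environment variable of the same name (the usual pattern in `open_webui.config`) and defaults to the public API host:

```python
import os

import requests

# Assumption: configurable base URL with the public host as default.
ELEVENLABS_API_BASE_URL = os.environ.get("ELEVENLABS_API_BASE_URL", "https://api.elevenlabs.io")


def get_elevenlabs_voices(api_key: str) -> dict:
    response = requests.get(
        f"{ELEVENLABS_API_BASE_URL}/v1/voices",
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
    )
    response.raise_for_status()
    return response.json()
```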
@ -35,7 +35,7 @@ from open_webui.env import (
|
|||
)
|
||||
from fastapi import APIRouter, Depends, HTTPException, Request, status
|
||||
from fastapi.responses import RedirectResponse, Response, JSONResponse
|
||||
from open_webui.config import OPENID_PROVIDER_URL, ENABLE_OAUTH_SIGNUP, ENABLE_LDAP
|
||||
from open_webui.config import OPENID_PROVIDER_URL, ENABLE_OAUTH_SIGNUP, ENABLE_LDAP, ENABLE_PASSWORD_AUTH
|
||||
from pydantic import BaseModel
|
||||
|
||||
from open_webui.utils.misc import parse_duration, validate_email_format
|
||||
|
|
@ -185,7 +185,17 @@ async def update_password(
|
|||
############################
|
||||
@router.post("/ldap", response_model=SessionUserResponse)
|
||||
async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
|
||||
ENABLE_LDAP = request.app.state.config.ENABLE_LDAP
|
||||
# Security checks FIRST - before loading any config
|
||||
if not request.app.state.config.ENABLE_LDAP:
|
||||
raise HTTPException(400, detail="LDAP authentication is not enabled")
|
||||
|
||||
if (not ENABLE_PASSWORD_AUTH):
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
detail=ERROR_MESSAGES.ACTION_PROHIBITED,
|
||||
)
|
||||
|
||||
# NOW load LDAP config variables
|
||||
LDAP_SERVER_LABEL = request.app.state.config.LDAP_SERVER_LABEL
|
||||
LDAP_SERVER_HOST = request.app.state.config.LDAP_SERVER_HOST
|
||||
LDAP_SERVER_PORT = request.app.state.config.LDAP_SERVER_PORT
|
||||
|
|
@ -206,9 +216,6 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
|
|||
else "ALL"
|
||||
)
|
||||
|
||||
if not ENABLE_LDAP:
|
||||
raise HTTPException(400, detail="LDAP authentication is not enabled")
|
||||
|
||||
try:
|
||||
tls = Tls(
|
||||
validate=LDAP_VALIDATE_CERT,
|
||||
|
|
@ -463,6 +470,12 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
|
|||
|
||||
@router.post("/signin", response_model=SessionUserResponse)
|
||||
async def signin(request: Request, response: Response, form_data: SigninForm):
|
||||
if (not ENABLE_PASSWORD_AUTH):
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
detail=ERROR_MESSAGES.ACTION_PROHIBITED,
|
||||
)
|
||||
|
||||
if WEBUI_AUTH_TRUSTED_EMAIL_HEADER:
|
||||
if WEBUI_AUTH_TRUSTED_EMAIL_HEADER not in request.headers:
|
||||
raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_TRUSTED_HEADER)
|
||||
|
|
|
|||
|
|
@ -1,4 +1,5 @@
|
|||
import logging
|
||||
import copy
|
||||
from fastapi import APIRouter, Depends, Request, HTTPException
|
||||
from pydantic import BaseModel, ConfigDict
|
||||
import aiohttp
|
||||
|
|
@ -15,6 +16,7 @@ from open_webui.utils.tools import (
|
|||
set_tool_servers,
|
||||
)
|
||||
from open_webui.utils.mcp.client import MCPClient
|
||||
from open_webui.models.oauth_sessions import OAuthSessions
|
||||
|
||||
from open_webui.env import SRC_LOG_LEVELS
|
||||
|
||||
|
|
@ -165,6 +167,21 @@ async def set_tool_servers_config(
|
|||
form_data: ToolServersConfigForm,
|
||||
user=Depends(get_admin_user),
|
||||
):
|
||||
for connection in request.app.state.config.TOOL_SERVER_CONNECTIONS:
|
||||
server_type = connection.get("type", "openapi")
|
||||
auth_type = connection.get("auth_type", "none")
|
||||
|
||||
if auth_type == "oauth_2.1":
|
||||
# Remove existing OAuth clients for tool servers
|
||||
server_id = connection.get("info", {}).get("id")
|
||||
client_key = f"{server_type}:{server_id}"
|
||||
|
||||
try:
|
||||
request.app.state.oauth_client_manager.remove_client(client_key)
|
||||
except:
|
||||
pass
|
||||
|
||||
# Set new tool server connections
|
||||
request.app.state.config.TOOL_SERVER_CONNECTIONS = [
|
||||
connection.model_dump() for connection in form_data.TOOL_SERVER_CONNECTIONS
|
||||
]
|
||||
|
|
@ -176,6 +193,7 @@ async def set_tool_servers_config(
|
|||
if server_type == "mcp":
|
||||
server_id = connection.get("info", {}).get("id")
|
||||
auth_type = connection.get("auth_type", "none")
|
||||
|
||||
if auth_type == "oauth_2.1" and server_id:
|
||||
try:
|
||||
oauth_client_info = connection.get("info", {}).get(
|
||||
|
|
@ -211,7 +229,7 @@ async def verify_tool_servers_config(
|
|||
log.debug(
|
||||
f"Trying to fetch OAuth 2.1 discovery document from {discovery_url}"
|
||||
)
|
||||
async with aiohttp.ClientSession() as session:
|
||||
async with aiohttp.ClientSession(trust_env=True) as session:
|
||||
async with session.get(
|
||||
discovery_url
|
||||
) as oauth_server_metadata_response:
|
||||
|
|
|
|||
|
|
@ -115,6 +115,10 @@ def process_uploaded_file(request, file, file_path, file_item, file_metadata, us
|
|||
request.app.state.config.CONTENT_EXTRACTION_ENGINE == "external"
|
||||
):
|
||||
process_file(request, ProcessFileForm(file_id=file_item.id), user=user)
|
||||
else:
|
||||
raise Exception(
|
||||
f"File type {file.content_type} is not supported for processing"
|
||||
)
|
||||
else:
|
||||
log.info(
|
||||
f"File type {file.content_type} is not provided, but trying to process anyway"
|
||||
|
|
File diff suppressed because it is too large
@ -44,7 +44,9 @@ def validate_model_id(model_id: str) -> bool:
|
|||
###########################
|
||||
|
||||
|
||||
@router.get("/", response_model=list[ModelUserResponse])
|
||||
@router.get(
|
||||
"/list", response_model=list[ModelUserResponse]
|
||||
) # do NOT use "/" as path, conflicts with main.py
|
||||
async def get_models(id: Optional[str] = None, user=Depends(get_verified_user)):
|
||||
if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
|
||||
return Models.get_models()
|
||||
|
|
|
|||
|
|
@ -45,6 +45,7 @@ from open_webui.utils.payload import (
|
|||
)
|
||||
from open_webui.utils.misc import (
|
||||
convert_logit_bias_input_to_json,
|
||||
stream_chunks_handler,
|
||||
)
|
||||
|
||||
from open_webui.utils.auth import get_admin_user, get_verified_user
|
||||
|
|
@ -501,50 +502,55 @@ async def get_all_models(request: Request, user: UserModel) -> dict[str, list]:
|
|||
return response
|
||||
return None
|
||||
|
||||
def merge_models_lists(model_lists):
|
||||
def is_supported_openai_models(model_id):
|
||||
if any(
|
||||
name in model_id
|
||||
for name in [
|
||||
"babbage",
|
||||
"dall-e",
|
||||
"davinci",
|
||||
"embedding",
|
||||
"tts",
|
||||
"whisper",
|
||||
]
|
||||
):
|
||||
return False
|
||||
return True
|
||||
|
||||
def get_merged_models(model_lists):
|
||||
log.debug(f"merge_models_lists {model_lists}")
|
||||
merged_list = []
|
||||
models = {}
|
||||
|
||||
for idx, models in enumerate(model_lists):
|
||||
if models is not None and "error" not in models:
|
||||
for idx, model_list in enumerate(model_lists):
|
||||
if model_list is not None and "error" not in model_list:
|
||||
for model in model_list:
|
||||
model_id = model.get("id") or model.get("name")
|
||||
|
||||
merged_list.extend(
|
||||
[
|
||||
{
|
||||
if (
|
||||
"api.openai.com"
|
||||
in request.app.state.config.OPENAI_API_BASE_URLS[idx]
|
||||
and not is_supported_openai_models(model_id)
|
||||
):
|
||||
# Skip unwanted OpenAI models
|
||||
continue
|
||||
|
||||
if model_id and model_id not in models:
|
||||
models[model_id] = {
|
||||
**model,
|
||||
"name": model.get("name", model["id"]),
|
||||
"name": model.get("name", model_id),
|
||||
"owned_by": "openai",
|
||||
"openai": model,
|
||||
"connection_type": model.get("connection_type", "external"),
|
||||
"urlIdx": idx,
|
||||
}
|
||||
for model in models
|
||||
if (model.get("id") or model.get("name"))
|
||||
and (
|
||||
"api.openai.com"
|
||||
not in request.app.state.config.OPENAI_API_BASE_URLS[idx]
|
||||
or not any(
|
||||
name in model["id"]
|
||||
for name in [
|
||||
"babbage",
|
||||
"dall-e",
|
||||
"davinci",
|
||||
"embedding",
|
||||
"tts",
|
||||
"whisper",
|
||||
]
|
||||
)
|
||||
)
|
||||
]
|
||||
)
|
||||
|
||||
return merged_list
|
||||
return models
|
||||
|
||||
models = {"data": merge_models_lists(map(extract_data, responses))}
|
||||
models = get_merged_models(map(extract_data, responses))
|
||||
log.debug(f"models: {models}")
|
||||
|
||||
request.app.state.OPENAI_MODELS = {model["id"]: model for model in models["data"]}
|
||||
return models
|
||||
request.app.state.OPENAI_MODELS = models
|
||||
return {"data": list(models.values())}
|
||||
|
||||
|
||||
@router.get("/models")
|
||||
|
|
@@ -947,7 +953,7 @@ async def generate_chat_completion(
        if "text/event-stream" in r.headers.get("Content-Type", ""):
            streaming = True
            return StreamingResponse(
                r.content,
                stream_chunks_handler(r.content),
                status_code=r.status,
                headers=dict(r.headers),
                background=BackgroundTask(
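`stream_chunks_handler` (added to `open_webui/utils/misc.py` further down in this diff) caps the size of a single streamed SSE line and substitutes an empty `data: {}` event for oversized ones. A rough sketch of wiring such a guard into a proxied streaming endpoint; the upstream URL is illustrative and response cleanup is omitted for brevity:

```python
import aiohttp
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

from open_webui.utils.misc import stream_chunks_handler

app = FastAPI()


@app.get("/proxy")
async def proxy_stream():
    session = aiohttp.ClientSession()
    r = await session.get("https://upstream.example/v1/chat/completions")  # illustrative URL
    # Oversized SSE lines from the upstream are replaced with "data: {}" instead of
    # overflowing the default line buffer.
    # NOTE: session/response cleanup via BackgroundTask is omitted here for brevity.
    return StreamingResponse(
        stream_chunks_handler(r.content),
        status_code=r.status,
        headers=dict(r.headers),
    )
```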
@ -32,7 +32,7 @@ from langchain.text_splitter import RecursiveCharacterTextSplitter, TokenTextSpl
|
|||
from langchain_text_splitters import MarkdownHeaderTextSplitter
|
||||
from langchain_core.documents import Document
|
||||
|
||||
from open_webui.models.files import FileModel, Files
|
||||
from open_webui.models.files import FileModel, FileUpdateForm, Files
|
||||
from open_webui.models.knowledge import Knowledges
|
||||
from open_webui.storage.provider import Storage
|
||||
|
||||
|
|
@ -465,6 +465,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
|
|||
"DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
|
||||
"DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
|
||||
"DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
|
||||
"MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
|
||||
"MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
|
||||
# MinerU settings
|
||||
"MINERU_API_MODE": request.app.state.config.MINERU_API_MODE,
|
||||
|
|
@ -650,6 +651,7 @@ class ConfigForm(BaseModel):
|
|||
DOCLING_PICTURE_DESCRIPTION_API: Optional[dict] = None
|
||||
DOCUMENT_INTELLIGENCE_ENDPOINT: Optional[str] = None
|
||||
DOCUMENT_INTELLIGENCE_KEY: Optional[str] = None
|
||||
MISTRAL_OCR_API_BASE_URL: Optional[str] = None
|
||||
MISTRAL_OCR_API_KEY: Optional[str] = None
|
||||
|
||||
# MinerU settings
|
||||
|
|
@ -891,6 +893,12 @@ async def update_rag_config(
|
|||
if form_data.DOCUMENT_INTELLIGENCE_KEY is not None
|
||||
else request.app.state.config.DOCUMENT_INTELLIGENCE_KEY
|
||||
)
|
||||
|
||||
request.app.state.config.MISTRAL_OCR_API_BASE_URL = (
|
||||
form_data.MISTRAL_OCR_API_BASE_URL
|
||||
if form_data.MISTRAL_OCR_API_BASE_URL is not None
|
||||
else request.app.state.config.MISTRAL_OCR_API_BASE_URL
|
||||
)
|
||||
request.app.state.config.MISTRAL_OCR_API_KEY = (
|
||||
form_data.MISTRAL_OCR_API_KEY
|
||||
if form_data.MISTRAL_OCR_API_KEY is not None
|
||||
|
|
@ -1182,6 +1190,7 @@ async def update_rag_config(
|
|||
"DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
|
||||
"DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
|
||||
"DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
|
||||
"MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
|
||||
"MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
|
||||
# MinerU settings
|
||||
"MINERU_API_MODE": request.app.state.config.MINERU_API_MODE,
|
||||
|
|
@ -1565,6 +1574,7 @@ def process_file(
|
|||
file_path = Storage.get_file(file_path)
|
||||
loader = Loader(
|
||||
engine=request.app.state.config.CONTENT_EXTRACTION_ENGINE,
|
||||
user=user,
|
||||
DATALAB_MARKER_API_KEY=request.app.state.config.DATALAB_MARKER_API_KEY,
|
||||
DATALAB_MARKER_API_BASE_URL=request.app.state.config.DATALAB_MARKER_API_BASE_URL,
|
||||
DATALAB_MARKER_ADDITIONAL_CONFIG=request.app.state.config.DATALAB_MARKER_ADDITIONAL_CONFIG,
|
||||
|
|
@ -1597,6 +1607,7 @@ def process_file(
|
|||
PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES,
|
||||
DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
|
||||
DOCUMENT_INTELLIGENCE_KEY=request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
|
||||
MISTRAL_OCR_API_BASE_URL=request.app.state.config.MISTRAL_OCR_API_BASE_URL,
|
||||
MISTRAL_OCR_API_KEY=request.app.state.config.MISTRAL_OCR_API_KEY,
|
||||
MINERU_API_MODE=request.app.state.config.MINERU_API_MODE,
|
||||
MINERU_API_URL=request.app.state.config.MINERU_API_URL,
|
||||
|
|
@ -1800,7 +1811,9 @@ def process_web(
|
|||
)
|
||||
|
||||
|
||||
def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
|
||||
def search_web(
|
||||
request: Request, engine: str, query: str, user=None
|
||||
) -> list[SearchResult]:
|
||||
"""Search the web using a search engine and return the results as a list of SearchResult objects.
|
||||
Will look for a search engine API key in environment variables in the following order:
|
||||
- SEARXNG_QUERY_URL
|
||||
|
|
@ -1875,6 +1888,7 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
|
|||
query,
|
||||
request.app.state.config.WEB_SEARCH_RESULT_COUNT,
|
||||
request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
|
||||
referer=request.app.state.config.WEBUI_URL,
|
||||
)
|
||||
else:
|
||||
raise Exception(
|
||||
|
|
@ -2057,11 +2071,13 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
|
|||
)
|
||||
elif engine == "external":
|
||||
return search_external(
|
||||
request,
|
||||
request.app.state.config.EXTERNAL_WEB_SEARCH_URL,
|
||||
request.app.state.config.EXTERNAL_WEB_SEARCH_API_KEY,
|
||||
query,
|
||||
request.app.state.config.WEB_SEARCH_RESULT_COUNT,
|
||||
request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
|
||||
user=user,
|
||||
)
|
||||
else:
|
||||
raise Exception("No search engine API key found in environment variables")
|
||||
|
|
@ -2086,6 +2102,7 @@ async def process_web_search(
|
|||
request,
|
||||
request.app.state.config.WEB_SEARCH_ENGINE,
|
||||
query,
|
||||
user,
|
||||
)
|
||||
for query in form_data.queries
|
||||
]
|
||||
|
|
@ -2435,16 +2452,19 @@ def process_files_batch(
|
|||
"""
|
||||
Process a batch of files and save them to the vector database.
|
||||
"""
|
||||
results: List[BatchProcessFilesResult] = []
|
||||
errors: List[BatchProcessFilesResult] = []
|
||||
|
||||
collection_name = form_data.collection_name
|
||||
|
||||
file_results: List[BatchProcessFilesResult] = []
|
||||
file_errors: List[BatchProcessFilesResult] = []
|
||||
file_updates: List[FileUpdateForm] = []
|
||||
|
||||
# Prepare all documents first
|
||||
all_docs: List[Document] = []
|
||||
|
||||
for file in form_data.files:
|
||||
try:
|
||||
text_content = file.data.get("content", "")
|
||||
|
||||
docs: List[Document] = [
|
||||
Document(
|
||||
page_content=text_content.replace("<br/>", "\n"),
|
||||
|
|
@ -2458,16 +2478,21 @@ def process_files_batch(
|
|||
)
|
||||
]
|
||||
|
||||
hash = calculate_sha256_string(text_content)
|
||||
Files.update_file_hash_by_id(file.id, hash)
|
||||
Files.update_file_data_by_id(file.id, {"content": text_content})
|
||||
|
||||
all_docs.extend(docs)
|
||||
results.append(BatchProcessFilesResult(file_id=file.id, status="prepared"))
|
||||
|
||||
file_updates.append(
|
||||
FileUpdateForm(
|
||||
hash=calculate_sha256_string(text_content),
|
||||
data={"content": text_content},
|
||||
)
|
||||
)
|
||||
file_results.append(
|
||||
BatchProcessFilesResult(file_id=file.id, status="prepared")
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
log.error(f"process_files_batch: Error processing file {file.id}: {str(e)}")
|
||||
errors.append(
|
||||
file_errors.append(
|
||||
BatchProcessFilesResult(file_id=file.id, status="failed", error=str(e))
|
||||
)
|
||||
|
||||
|
|
@ -2483,20 +2508,18 @@ def process_files_batch(
|
|||
)
|
||||
|
||||
# Update all files with collection name
|
||||
for result in results:
|
||||
Files.update_file_metadata_by_id(
|
||||
result.file_id, {"collection_name": collection_name}
|
||||
)
|
||||
result.status = "completed"
|
||||
for file_update, file_result in zip(file_updates, file_results):
|
||||
Files.update_file_by_id(id=file_result.file_id, form_data=file_update)
|
||||
file_result.status = "completed"
|
||||
|
||||
except Exception as e:
|
||||
log.error(
|
||||
f"process_files_batch: Error saving documents to vector DB: {str(e)}"
|
||||
)
|
||||
for result in results:
|
||||
result.status = "failed"
|
||||
errors.append(
|
||||
BatchProcessFilesResult(file_id=result.file_id, error=str(e))
|
||||
for file_result in file_results:
|
||||
file_result.status = "failed"
|
||||
file_errors.append(
|
||||
BatchProcessFilesResult(file_id=file_result.file_id, error=str(e))
|
||||
)
|
||||
|
||||
return BatchProcessFilesResponse(results=results, errors=errors)
|
||||
return BatchProcessFilesResponse(results=file_results, errors=file_errors)
|
||||
|
|
|
|||
|
|
@ -361,7 +361,7 @@ async def get_user_by_id(user_id: str, user=Depends(get_verified_user)):
|
|||
)
|
||||
|
||||
|
||||
@router.get("/{user_id}/oauth/sessions", response_model=Optional[dict])
|
||||
@router.get("/{user_id}/oauth/sessions")
|
||||
async def get_user_oauth_sessions_by_id(user_id: str, user=Depends(get_admin_user)):
|
||||
sessions = OAuthSessions.get_sessions_by_user_id(user_id)
|
||||
if sessions and len(sessions) > 0:
|
||||
|
|
|
|||
|
|
@ -126,10 +126,3 @@ async def download_db(user=Depends(get_admin_user)):
|
|||
)
|
||||
|
||||
|
||||
@router.get("/litellm/config")
|
||||
async def download_litellm_config_yaml(user=Depends(get_admin_user)):
|
||||
return FileResponse(
|
||||
f"{DATA_DIR}/litellm/config.yaml",
|
||||
media_type="application/octet-stream",
|
||||
filename="config.yaml",
|
||||
)
|
||||
|
|
|
|||
|
|
@@ -23,6 +23,7 @@ from open_webui.config import (
)

from open_webui.env import (
    VERSION,
    ENABLE_WEBSOCKET_SUPPORT,
    WEBSOCKET_MANAGER,
    WEBSOCKET_REDIS_URL,

@@ -52,6 +53,9 @@ log.setLevel(SRC_LOG_LEVELS["SOCKET"])

REDIS = None

# Configure CORS for Socket.IO
SOCKETIO_CORS_ORIGINS = "*" if CORS_ALLOW_ORIGIN == ["*"] else CORS_ALLOW_ORIGIN

if WEBSOCKET_MANAGER == "redis":
    if WEBSOCKET_SENTINEL_HOSTS:
        mgr = socketio.AsyncRedisManager(

@@ -62,7 +66,7 @@ if WEBSOCKET_MANAGER == "redis":
    else:
        mgr = socketio.AsyncRedisManager(WEBSOCKET_REDIS_URL)
    sio = socketio.AsyncServer(
        cors_allowed_origins=CORS_ALLOW_ORIGIN,
        cors_allowed_origins=SOCKETIO_CORS_ORIGINS,
        async_mode="asgi",
        transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]),
        allow_upgrades=ENABLE_WEBSOCKET_SUPPORT,

@@ -71,7 +75,7 @@ if WEBSOCKET_MANAGER == "redis":
    )
else:
    sio = socketio.AsyncServer(
        cors_allowed_origins=CORS_ALLOW_ORIGIN,
        cors_allowed_origins=SOCKETIO_CORS_ORIGINS,
        async_mode="asgi",
        transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]),
        allow_upgrades=ENABLE_WEBSOCKET_SUPPORT,

@@ -278,6 +282,8 @@ async def connect(sid, environ, auth):
    else:
        USER_POOL[user.id] = [sid]

    await sio.enter_room(sid, f"user:{user.id}")


@sio.on("user-join")
async def user_join(sid, data):

@@ -300,6 +306,7 @@ async def user_join(sid, data):
    else:
        USER_POOL[user.id] = [sid]

    await sio.enter_room(sid, f"user:{user.id}")
    # Join all the channels
    channels = Channels.get_channels_by_user_id(user.id)
    log.debug(f"{channels=}")
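Each connected session now joins a per-user room, so a single emit to `user:{id}` reaches every tab that user has open instead of fanning out over individual socket ids. A small illustration of emitting through the shared room; the event payload here is made up:

```python
import socketio

sio = socketio.AsyncServer(async_mode="asgi", cors_allowed_origins="*")


async def notify_user(user_id: str, chat_id: str, message_id: str, data: dict):
    # One emit to the shared room reaches every connected session of the user.
    await sio.emit(
        "events",
        {"chat_id": chat_id, "message_id": message_id, "data": data},
        room=f"user:{user_id}",
    )
```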
@ -645,40 +652,24 @@ async def disconnect(sid):
|
|||
def get_event_emitter(request_info, update_db=True):
|
||||
async def __event_emitter__(event_data):
|
||||
user_id = request_info["user_id"]
|
||||
chat_id = request_info["chat_id"]
|
||||
message_id = request_info["message_id"]
|
||||
|
||||
session_ids = list(
|
||||
set(
|
||||
USER_POOL.get(user_id, [])
|
||||
+ (
|
||||
[request_info.get("session_id")]
|
||||
if request_info.get("session_id")
|
||||
else []
|
||||
)
|
||||
)
|
||||
await sio.emit(
|
||||
"events",
|
||||
{
|
||||
"chat_id": chat_id,
|
||||
"message_id": message_id,
|
||||
"data": event_data,
|
||||
},
|
||||
room=f"user:{user_id}",
|
||||
)
|
||||
|
||||
chat_id = request_info.get("chat_id", None)
|
||||
message_id = request_info.get("message_id", None)
|
||||
|
||||
emit_tasks = [
|
||||
sio.emit(
|
||||
"events",
|
||||
{
|
||||
"chat_id": chat_id,
|
||||
"message_id": message_id,
|
||||
"data": event_data,
|
||||
},
|
||||
to=session_id,
|
||||
)
|
||||
for session_id in session_ids
|
||||
]
|
||||
|
||||
await asyncio.gather(*emit_tasks)
|
||||
if (
|
||||
update_db
|
||||
and message_id
|
||||
and not request_info.get("chat_id", "").startswith("local:")
|
||||
):
|
||||
|
||||
if "type" in event_data and event_data["type"] == "status":
|
||||
Chats.add_message_status_to_chat_by_id_and_message_id(
|
||||
request_info["chat_id"],
|
||||
|
|
@ -768,7 +759,14 @@ def get_event_emitter(request_info, update_db=True):
|
|||
},
|
||||
)
|
||||
|
||||
return __event_emitter__
|
||||
if (
|
||||
"user_id" in request_info
|
||||
and "chat_id" in request_info
|
||||
and "message_id" in request_info
|
||||
):
|
||||
return __event_emitter__
|
||||
else:
|
||||
return None
|
||||
|
||||
|
||||
def get_event_call(request_info):
|
||||
|
|
@ -784,7 +782,14 @@ def get_event_call(request_info):
|
|||
)
|
||||
return response
|
||||
|
||||
return __event_caller__
|
||||
if (
|
||||
"session_id" in request_info
|
||||
and "chat_id" in request_info
|
||||
and "message_id" in request_info
|
||||
):
|
||||
return __event_caller__
|
||||
else:
|
||||
return None
|
||||
|
||||
|
||||
get_event_caller = get_event_call
|
||||
|
|
|
|||
|
|
@@ -1,5 +1,5 @@
from open_webui.routers.images import (
    load_b64_image_data,
    get_image_data,
    upload_image,
)

@@ -22,7 +22,7 @@ def get_image_url_from_base64(request, base64_image_string, metadata, user):
    if "data:image/png;base64" in base64_image_string:
        image_url = ""
        # Extract base64 image data from the line
        image_data, content_type = load_b64_image_data(base64_image_string)
        image_data, content_type = get_image_data(base64_image_string)
        if image_data is not None:
            image_url = upload_image(
                request,
backend/open_webui/utils/headers.py (new file)

@@ -0,0 +1,11 @@
from urllib.parse import quote


def include_user_info_headers(headers, user):
    return {
        **headers,
        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
        "X-OpenWebUI-User-Id": user.id,
        "X-OpenWebUI-User-Email": user.email,
        "X-OpenWebUI-User-Role": user.role,
    }
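A quick usage sketch of the new helper: existing headers are preserved and the user fields are appended, with the display name URL-quoted but spaces left intact (`safe=" "`). The `user` object below is a stand-in with the attributes the helper expects:

```python
from types import SimpleNamespace

from open_webui.utils.headers import include_user_info_headers

user = SimpleNamespace(name="Ada Lovelace", id="u-123", email="ada@example.com", role="user")

headers = {"Authorization": "Bearer secret-key"}
headers = include_user_info_headers(headers, user)
# -> the original Authorization entry plus X-OpenWebUI-User-Name/-Id/-Email/-Role
```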
@ -2,6 +2,8 @@ import asyncio
|
|||
import json
|
||||
import logging
|
||||
import random
|
||||
import requests
|
||||
import aiohttp
|
||||
import urllib.parse
|
||||
import urllib.request
|
||||
from typing import Optional
|
||||
|
|
@ -91,6 +93,25 @@ def get_images(ws, prompt, client_id, base_url, api_key):
|
|||
return {"data": output_images}
|
||||
|
||||
|
||||
async def comfyui_upload_image(image_file_item, base_url, api_key):
|
||||
url = f"{base_url}/api/upload/image"
|
||||
headers = {}
|
||||
|
||||
if api_key:
|
||||
headers["Authorization"] = f"Bearer {api_key}"
|
||||
|
||||
_, (filename, file_bytes, mime_type) = image_file_item
|
||||
|
||||
form = aiohttp.FormData()
|
||||
form.add_field("image", file_bytes, filename=filename, content_type=mime_type)
|
||||
form.add_field("type", "input") # required by ComfyUI
|
||||
|
||||
async with aiohttp.ClientSession() as session:
|
||||
async with session.post(url, data=form, headers=headers) as resp:
|
||||
resp.raise_for_status()
|
||||
return await resp.json()
|
||||
|
||||
|
||||
class ComfyUINodeInput(BaseModel):
|
||||
type: Optional[str] = None
|
||||
node_ids: list[str] = []
|
||||
|
|
@ -103,7 +124,7 @@ class ComfyUIWorkflow(BaseModel):
|
|||
nodes: list[ComfyUINodeInput]
|
||||
|
||||
|
||||
class ComfyUIGenerateImageForm(BaseModel):
|
||||
class ComfyUICreateImageForm(BaseModel):
|
||||
workflow: ComfyUIWorkflow
|
||||
|
||||
prompt: str
|
||||
|
|
@ -116,8 +137,8 @@ class ComfyUIGenerateImageForm(BaseModel):
|
|||
seed: Optional[int] = None
|
||||
|
||||
|
||||
async def comfyui_generate_image(
|
||||
model: str, payload: ComfyUIGenerateImageForm, client_id, base_url, api_key
|
||||
async def comfyui_create_image(
|
||||
model: str, payload: ComfyUICreateImageForm, client_id, base_url, api_key
|
||||
):
|
||||
ws_url = base_url.replace("http://", "ws://").replace("https://", "wss://")
|
||||
workflow = json.loads(payload.workflow.workflow)
|
||||
|
|
@ -191,3 +212,102 @@ async def comfyui_generate_image(
|
|||
ws.close()
|
||||
|
||||
return images
|
||||
|
||||
|
||||
class ComfyUIEditImageForm(BaseModel):
|
||||
workflow: ComfyUIWorkflow
|
||||
|
||||
image: str | list[str]
|
||||
prompt: str
|
||||
width: Optional[int] = None
|
||||
height: Optional[int] = None
|
||||
n: Optional[int] = None
|
||||
|
||||
steps: Optional[int] = None
|
||||
seed: Optional[int] = None
|
||||
|
||||
|
||||
async def comfyui_edit_image(
|
||||
model: str, payload: ComfyUIEditImageForm, client_id, base_url, api_key
|
||||
):
|
||||
ws_url = base_url.replace("http://", "ws://").replace("https://", "wss://")
|
||||
workflow = json.loads(payload.workflow.workflow)
|
||||
|
||||
for node in payload.workflow.nodes:
|
||||
if node.type:
|
||||
if node.type == "model":
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][node.key] = model
|
||||
elif node.type == "image":
|
||||
if isinstance(payload.image, list):
|
||||
# check if multiple images are provided
|
||||
for idx, node_id in enumerate(node.node_ids):
|
||||
if idx < len(payload.image):
|
||||
workflow[node_id]["inputs"][node.key] = payload.image[idx]
|
||||
else:
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][node.key] = payload.image
|
||||
elif node.type == "prompt":
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][
|
||||
node.key if node.key else "text"
|
||||
] = payload.prompt
|
||||
elif node.type == "negative_prompt":
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][
|
||||
node.key if node.key else "text"
|
||||
] = payload.negative_prompt
|
||||
elif node.type == "width":
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][
|
||||
node.key if node.key else "width"
|
||||
] = payload.width
|
||||
elif node.type == "height":
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][
|
||||
node.key if node.key else "height"
|
||||
] = payload.height
|
||||
elif node.type == "n":
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][
|
||||
node.key if node.key else "batch_size"
|
||||
] = payload.n
|
||||
elif node.type == "steps":
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][
|
||||
node.key if node.key else "steps"
|
||||
] = payload.steps
|
||||
elif node.type == "seed":
|
||||
seed = (
|
||||
payload.seed
|
||||
if payload.seed
|
||||
else random.randint(0, 1125899906842624)
|
||||
)
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][node.key] = seed
|
||||
else:
|
||||
for node_id in node.node_ids:
|
||||
workflow[node_id]["inputs"][node.key] = node.value
|
||||
|
||||
try:
|
||||
ws = websocket.WebSocket()
|
||||
headers = {"Authorization": f"Bearer {api_key}"}
|
||||
ws.connect(f"{ws_url}/ws?clientId={client_id}", header=headers)
|
||||
log.info("WebSocket connection established.")
|
||||
except Exception as e:
|
||||
log.exception(f"Failed to connect to WebSocket server: {e}")
|
||||
return None
|
||||
|
||||
try:
|
||||
log.info("Sending workflow to WebSocket server.")
|
||||
log.info(f"Workflow: {workflow}")
|
||||
images = await asyncio.to_thread(
|
||||
get_images, ws, workflow, client_id, base_url, api_key
|
||||
)
|
||||
except Exception as e:
|
||||
log.exception(f"Error while receiving images: {e}")
|
||||
images = None
|
||||
|
||||
ws.close()
|
||||
|
||||
return images
|
||||
|
|
|
|||
|
|
@ -45,10 +45,10 @@ from open_webui.routers.retrieval import (
|
|||
SearchForm,
|
||||
)
|
||||
from open_webui.routers.images import (
|
||||
load_b64_image_data,
|
||||
image_generations,
|
||||
GenerateImageForm,
|
||||
upload_image,
|
||||
CreateImageForm,
|
||||
image_edits,
|
||||
EditImageForm,
|
||||
)
|
||||
from open_webui.routers.pipelines import (
|
||||
process_pipeline_inlet_filter,
|
||||
|
|
@ -91,7 +91,7 @@ from open_webui.utils.misc import (
|
|||
convert_logit_bias_input_to_json,
|
||||
get_content_from_message,
|
||||
)
|
||||
from open_webui.utils.tools import get_tools
|
||||
from open_webui.utils.tools import get_tools, get_updated_tool_function
|
||||
from open_webui.utils.plugin import load_function_module_by_id
|
||||
from open_webui.utils.filter import (
|
||||
get_sorted_filter_ids,
|
||||
|
|
@ -302,19 +302,23 @@ async def chat_completion_tools_handler(
|
|||
def get_tools_function_calling_payload(messages, task_model_id, content):
|
||||
user_message = get_last_user_message(messages)
|
||||
|
||||
if user_message and messages and messages[-1]["role"] == "user":
|
||||
# Remove the last user message to avoid duplication
|
||||
messages = messages[:-1]
|
||||
|
||||
recent_messages = messages[-4:] if len(messages) > 4 else messages
|
||||
chat_history = "\n".join(
|
||||
f"{message['role'].upper()}: \"\"\"{get_content_from_message(message)}\"\"\""
|
||||
for message in recent_messages
|
||||
)
|
||||
|
||||
prompt = f"History:\n{chat_history}\nQuery: {user_message}"
|
||||
prompt = f"History:\n{chat_history}\nQuery: {user_message}" if chat_history else f"Query: {user_message}"
|
||||
|
||||
return {
|
||||
"model": task_model_id,
|
||||
"messages": [
|
||||
{"role": "system", "content": content},
|
||||
{"role": "user", "content": f"Query: {prompt}"},
|
||||
{"role": "user", "content": prompt},
|
||||
],
|
||||
"stream": False,
|
||||
"metadata": {"task": str(TASKS.FUNCTION_CALLING)},
|
||||
|
|
@ -718,9 +722,31 @@ async def chat_web_search_handler(
|
|||
return form_data
|
||||
|
||||
|
||||
def get_last_images(message_list):
|
||||
images = []
|
||||
for message in reversed(message_list):
|
||||
images_flag = False
|
||||
for file in message.get("files", []):
|
||||
if file.get("type") == "image":
|
||||
images.append(file.get("url"))
|
||||
images_flag = True
|
||||
|
||||
if images_flag:
|
||||
break
|
||||
|
||||
return images
|
||||
|
||||
|
||||
async def chat_image_generation_handler(
|
||||
request: Request, form_data: dict, extra_params: dict, user
|
||||
):
|
||||
metadata = extra_params.get("__metadata__", {})
|
||||
chat_id = metadata.get("chat_id", None)
|
||||
if not chat_id:
|
||||
return form_data
|
||||
|
||||
chat = Chats.get_chat_by_id_and_user_id(chat_id, user.id)
|
||||
|
||||
__event_emitter__ = extra_params["__event_emitter__"]
|
||||
await __event_emitter__(
|
||||
{
|
||||
|
|
@ -729,87 +755,151 @@ async def chat_image_generation_handler(
|
|||
}
|
||||
)
|
||||
|
||||
messages = form_data["messages"]
|
||||
user_message = get_last_user_message(messages)
|
||||
messages_map = chat.chat.get("history", {}).get("messages", {})
|
||||
message_id = chat.chat.get("history", {}).get("currentId")
|
||||
message_list = get_message_list(messages_map, message_id)
|
||||
user_message = get_last_user_message(message_list)
|
||||
|
||||
prompt = user_message
|
||||
negative_prompt = ""
|
||||
|
||||
if request.app.state.config.ENABLE_IMAGE_PROMPT_GENERATION:
|
||||
try:
|
||||
res = await generate_image_prompt(
|
||||
request,
|
||||
{
|
||||
"model": form_data["model"],
|
||||
"messages": messages,
|
||||
},
|
||||
user,
|
||||
)
|
||||
|
||||
response = res["choices"][0]["message"]["content"]
|
||||
|
||||
try:
|
||||
bracket_start = response.find("{")
|
||||
bracket_end = response.rfind("}") + 1
|
||||
|
||||
if bracket_start == -1 or bracket_end == -1:
|
||||
raise Exception("No JSON object found in the response")
|
||||
|
||||
response = response[bracket_start:bracket_end]
|
||||
response = json.loads(response)
|
||||
prompt = response.get("prompt", [])
|
||||
except Exception as e:
|
||||
prompt = user_message
|
||||
|
||||
except Exception as e:
|
||||
log.exception(e)
|
||||
prompt = user_message
|
||||
input_images = get_last_images(message_list)
|
||||
|
||||
system_message_content = ""
|
||||
if len(input_images) == 0:
|
||||
# Create image(s)
|
||||
if request.app.state.config.ENABLE_IMAGE_PROMPT_GENERATION:
|
||||
try:
|
||||
res = await generate_image_prompt(
|
||||
request,
|
||||
{
|
||||
"model": form_data["model"],
|
||||
"messages": form_data["messages"],
|
||||
},
|
||||
user,
|
||||
)
|
||||
|
||||
try:
|
||||
images = await image_generations(
|
||||
request=request,
|
||||
form_data=GenerateImageForm(**{"prompt": prompt}),
|
||||
user=user,
|
||||
)
|
||||
response = res["choices"][0]["message"]["content"]
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
"data": {"description": "Image created", "done": True},
|
||||
}
|
||||
)
|
||||
try:
|
||||
bracket_start = response.find("{")
|
||||
bracket_end = response.rfind("}") + 1
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "files",
|
||||
"data": {
|
||||
"files": [
|
||||
{
|
||||
"type": "image",
|
||||
"url": image["url"],
|
||||
}
|
||||
for image in images
|
||||
]
|
||||
},
|
||||
}
|
||||
)
|
||||
if bracket_start == -1 or bracket_end == -1:
|
||||
raise Exception("No JSON object found in the response")
|
||||
|
||||
system_message_content = "<context>User is shown the generated image, tell the user that the image has been generated</context>"
|
||||
except Exception as e:
|
||||
log.exception(e)
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
"data": {
|
||||
"description": f"An error occurred while generating an image",
|
||||
"done": True,
|
||||
},
|
||||
}
|
||||
)
|
||||
response = response[bracket_start:bracket_end]
|
||||
response = json.loads(response)
|
||||
prompt = response.get("prompt", [])
|
||||
except Exception as e:
|
||||
prompt = user_message
|
||||
|
||||
system_message_content = "<context>Unable to generate an image, tell the user that an error occurred</context>"
|
||||
except Exception as e:
|
||||
log.exception(e)
|
||||
prompt = user_message
|
||||
|
||||
try:
|
||||
images = await image_generations(
|
||||
request=request,
|
||||
form_data=CreateImageForm(**{"prompt": prompt}),
|
||||
user=user,
|
||||
)
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
"data": {"description": "Image created", "done": True},
|
||||
}
|
||||
)
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "files",
|
||||
"data": {
|
||||
"files": [
|
||||
{
|
||||
"type": "image",
|
||||
"url": image["url"],
|
||||
}
|
||||
for image in images
|
||||
]
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
system_message_content = "<context>The requested image has been created and is now being shown to the user. Let them know that it has been generated.</context>"
|
||||
except Exception as e:
|
||||
log.debug(e)
|
||||
|
||||
error_message = ""
|
||||
if isinstance(e, HTTPException):
|
||||
if e.detail and isinstance(e.detail, dict):
|
||||
error_message = e.detail.get("message", str(e.detail))
|
||||
else:
|
||||
error_message = str(e.detail)
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
"data": {
|
||||
"description": f"An error occurred while generating an image",
|
||||
"done": True,
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that an error occurred: {error_message}</context>"
|
||||
else:
|
||||
# Edit image(s)
|
||||
try:
|
||||
images = await image_edits(
|
||||
request=request,
|
||||
form_data=EditImageForm(**{"prompt": prompt, "image": input_images}),
|
||||
user=user,
|
||||
)
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
"data": {"description": "Image created", "done": True},
|
||||
}
|
||||
)
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "files",
|
||||
"data": {
|
||||
"files": [
|
||||
{
|
||||
"type": "image",
|
||||
"url": image["url"],
|
||||
}
|
||||
for image in images
|
||||
]
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
system_message_content = "<context>The requested image has been created and is now being shown to the user. Let them know that it has been generated.</context>"
|
||||
except Exception as e:
|
||||
log.debug(e)
|
||||
|
||||
error_message = ""
|
||||
if isinstance(e, HTTPException):
|
||||
if e.detail and isinstance(e.detail, dict):
|
||||
error_message = e.detail.get("message", str(e.detail))
|
||||
else:
|
||||
error_message = str(e.detail)
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
"data": {
|
||||
"description": f"An error occurred while generating an image",
|
||||
"done": True,
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that an error occurred: {error_message}</context>"
|
||||
|
||||
if system_message_content:
|
||||
form_data["messages"] = add_or_update_system_message(
|
||||
|
|
@ -1307,6 +1397,17 @@ async def process_chat_payload(request, form_data, user, metadata, model):
|
|||
}
|
||||
except Exception as e:
|
||||
log.debug(e)
|
||||
if event_emitter:
|
||||
await event_emitter(
|
||||
{
|
||||
"type": "chat:message:error",
|
||||
"data": {
|
||||
"error": {
|
||||
"content": f"Failed to connect to MCP server '{server_id}'"
|
||||
}
|
||||
},
|
||||
}
|
||||
)
|
||||
continue
|
||||
|
||||
tools_dict = await get_tools(
|
||||
|
|
@ -1543,16 +1644,13 @@ async def process_chat_response(
|
|||
if not metadata.get("chat_id", "").startswith(
|
||||
"local:"
|
||||
): # Only update titles and tags for non-temp chats
|
||||
if (
|
||||
TASKS.TITLE_GENERATION in tasks
|
||||
and tasks[TASKS.TITLE_GENERATION]
|
||||
):
|
||||
if TASKS.TITLE_GENERATION in tasks:
|
||||
user_message = get_last_user_message(messages)
|
||||
if user_message and len(user_message) > 100:
|
||||
user_message = user_message[:100] + "..."
|
||||
|
||||
title = None
|
||||
if tasks[TASKS.TITLE_GENERATION]:
|
||||
|
||||
res = await generate_title(
|
||||
request,
|
||||
{
|
||||
|
|
@ -1603,7 +1701,8 @@ async def process_chat_response(
|
|||
"data": title,
|
||||
}
|
||||
)
|
||||
elif len(messages) == 2:
|
||||
|
||||
if title == None and len(messages) == 2:
|
||||
title = messages[0].get("content", user_message)
|
||||
|
||||
Chats.update_chat_title_by_id(metadata["chat_id"], title)
|
||||
|
|
@ -1949,9 +2048,11 @@ async def process_chat_response(
|
|||
content = f"{content}{tool_calls_display_content}"
|
||||
|
||||
elif block["type"] == "reasoning":
|
||||
reasoning_display_content = "\n".join(
|
||||
(f"> {line}" if not line.startswith(">") else line)
|
||||
for line in block["content"].splitlines()
|
||||
reasoning_display_content = html.escape(
|
||||
"\n".join(
|
||||
(f"> {line}" if not line.startswith(">") else line)
|
||||
for line in block["content"].splitlines()
|
||||
)
|
||||
)
|
||||
|
||||
reasoning_duration = block.get("duration", None)
|
||||
|
|
@ -2852,7 +2953,16 @@ async def process_chat_response(
|
|||
)
|
||||
|
||||
else:
|
||||
tool_function = tool["callable"]
|
||||
tool_function = get_updated_tool_function(
|
||||
function=tool["callable"],
|
||||
extra_params={
|
||||
"__messages__": form_data.get(
|
||||
"messages", []
|
||||
),
|
||||
"__files__": metadata.get("files", []),
|
||||
},
|
||||
)
|
||||
|
||||
tool_result = await tool_function(
|
||||
**tool_function_params
|
||||
)
|
||||
|
|
|
|||
|
|
@ -8,10 +8,11 @@ from datetime import timedelta
|
|||
from pathlib import Path
|
||||
from typing import Callable, Optional
|
||||
import json
|
||||
import aiohttp
|
||||
|
||||
|
||||
import collections.abc
|
||||
from open_webui.env import SRC_LOG_LEVELS
|
||||
from open_webui.env import SRC_LOG_LEVELS, CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
log.setLevel(SRC_LOG_LEVELS["MAIN"])
|
||||
|
|
@ -539,3 +540,68 @@ def extract_urls(text: str) -> list[str]:
|
|||
r"(https?://[^\s]+)", re.IGNORECASE
|
||||
) # Matches http and https URLs
|
||||
return url_pattern.findall(text)
|
||||
|
||||
|
||||
def stream_chunks_handler(stream: aiohttp.StreamReader):
|
||||
"""
|
||||
Handle stream response chunks, supporting large data chunks that exceed the original 16kb limit.
|
||||
When a single line exceeds max_buffer_size, returns an empty JSON string {} and skips subsequent data
|
||||
until encountering normally sized data.
|
||||
|
||||
:param stream: The stream reader to handle.
|
||||
:return: An async generator that yields the stream data.
|
||||
"""
|
||||
|
||||
max_buffer_size = CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
|
||||
if max_buffer_size is None or max_buffer_size <= 0:
|
||||
return stream
|
||||
|
||||
async def yield_safe_stream_chunks():
|
||||
buffer = b""
|
||||
skip_mode = False
|
||||
|
||||
async for data, _ in stream.iter_chunks():
|
||||
if not data:
|
||||
continue
|
||||
|
||||
# In skip_mode, if buffer already exceeds the limit, clear it (it's part of an oversized line)
|
||||
if skip_mode and len(buffer) > max_buffer_size:
|
||||
buffer = b""
|
||||
|
||||
lines = (buffer + data).split(b"\n")
|
||||
|
||||
# Process complete lines (except the last possibly incomplete fragment)
|
||||
for i in range(len(lines) - 1):
|
||||
line = lines[i]
|
||||
|
||||
if skip_mode:
|
||||
# Skip mode: check if current line is small enough to exit skip mode
|
||||
if len(line) <= max_buffer_size:
|
||||
skip_mode = False
|
||||
yield line
|
||||
else:
|
||||
yield b"data: {}"
|
||||
else:
|
||||
# Normal mode: check if line exceeds limit
|
||||
if len(line) > max_buffer_size:
|
||||
skip_mode = True
|
||||
yield b"data: {}"
|
||||
log.info(f"Skip mode triggered, line size: {len(line)}")
|
||||
else:
|
||||
yield line
|
||||
|
||||
# Save the last incomplete fragment
|
||||
buffer = lines[-1]
|
||||
|
||||
# Check if buffer exceeds limit
|
||||
if not skip_mode and len(buffer) > max_buffer_size:
|
||||
skip_mode = True
|
||||
log.info(f"Skip mode triggered, buffer size: {len(buffer)}")
|
||||
# Clear oversized buffer to prevent unlimited growth
|
||||
buffer = b""
|
||||
|
||||
# Process remaining buffer data
|
||||
if buffer and not skip_mode:
|
||||
yield buffer
|
||||
|
||||
return yield_safe_stream_chunks()
|
||||
|
|
|
|||
|
|
@@ -12,6 +12,7 @@ from open_webui.functions import get_function_models

from open_webui.models.functions import Functions
from open_webui.models.models import Models
from open_webui.models.groups import Groups


from open_webui.utils.plugin import (

@@ -356,6 +357,7 @@ def get_filtered_models(models, user):
        or (user.role == "admin" and not BYPASS_ADMIN_ACCESS_CONTROL)
    ) and not BYPASS_MODEL_ACCESS_CONTROL:
        filtered_models = []
        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user.id)}
        for model in models:
            if model.get("arena"):
                if has_access(

@@ -364,6 +366,7 @@ def get_filtered_models(models, user):
                    access_control=model.get("info", {})
                    .get("meta", {})
                    .get("access_control", {}),
                    user_group_ids=user_group_ids,
                ):
                    filtered_models.append(model)
                continue

@@ -377,6 +380,7 @@ def get_filtered_models(models, user):
                    user.id,
                    type="read",
                    access_control=model_info.access_control,
                    user_group_ids=user_group_ids,
                )
            ):
                filtered_models.append(model)

@@ -1,4 +1,5 @@
import base64
import copy
import hashlib
import logging
import mimetypes

@@ -41,6 +42,7 @@ from open_webui.config import (
    ENABLE_OAUTH_GROUP_MANAGEMENT,
    ENABLE_OAUTH_GROUP_CREATION,
    OAUTH_BLOCKED_GROUPS,
    OAUTH_GROUPS_SEPARATOR,
    OAUTH_ROLES_CLAIM,
    OAUTH_SUB_CLAIM,
    OAUTH_GROUPS_CLAIM,

@@ -74,6 +76,8 @@ from mcp.shared.auth import (
    OAuthMetadata,
)

from authlib.oauth2.rfc6749.errors import OAuth2Error


class OAuthClientInformationFull(OAuthClientMetadata):
    issuer: Optional[str] = None  # URL of the OAuth server that issued this client

@@ -150,6 +154,37 @@ def decrypt_data(data: str):
        raise


def _build_oauth_callback_error_message(e: Exception) -> str:
    """
    Produce a user-facing callback error string with actionable context.
    Keeps the message short and strips newlines for safe redirect usage.
    """
    if isinstance(e, OAuth2Error):
        parts = [p for p in [e.error, e.description] if p]
        detail = " - ".join(parts)
    elif isinstance(e, HTTPException):
        detail = e.detail if isinstance(e.detail, str) else str(e.detail)
    elif isinstance(e, aiohttp.ClientResponseError):
        detail = f"Upstream provider returned {e.status}: {e.message}"
    elif isinstance(e, aiohttp.ClientError):
        detail = str(e)
    elif isinstance(e, KeyError):
        missing = str(e).strip("'")
        if missing.lower() == "state":
            detail = "Missing state parameter in callback (session may have expired)"
        else:
            detail = f"Missing expected key '{missing}' in OAuth response"
    else:
        detail = str(e)

    detail = detail.replace("\n", " ").strip()
    if not detail:
        detail = e.__class__.__name__

    message = f"OAuth callback failed: {detail}"
    return message[:197] + "..." if len(message) > 200 else message


def is_in_blocked_groups(group_name: str, groups: list) -> bool:
    """
    Check if a group name matches any blocked pattern.

@@ -251,7 +286,7 @@ async def get_oauth_client_info_with_dynamic_client_registration(
    # Attempt to fetch OAuth server metadata to get registration endpoint & scopes
    discovery_urls = get_discovery_urls(oauth_server_url)
    for url in discovery_urls:
-        async with aiohttp.ClientSession() as session:
+        async with aiohttp.ClientSession(trust_env=True) as session:
            async with session.get(
                url, ssl=AIOHTTP_CLIENT_SESSION_SSL
            ) as oauth_server_metadata_response:

@@ -287,7 +322,7 @@ async def get_oauth_client_info_with_dynamic_client_registration(
    )

    # Perform dynamic client registration and return client info
-    async with aiohttp.ClientSession() as session:
+    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.post(
            registration_url, json=registration_data, ssl=AIOHTTP_CLIENT_SESSION_SSL
        ) as oauth_client_registration_response:

@@ -371,6 +406,82 @@ class OAuthClientManager:
        if client_id in self.clients:
            del self.clients[client_id]
            log.info(f"Removed OAuth client {client_id}")

            if hasattr(self.oauth, "_clients"):
                if client_id in self.oauth._clients:
                    self.oauth._clients.pop(client_id, None)

            if hasattr(self.oauth, "_registry"):
                if client_id in self.oauth._registry:
                    self.oauth._registry.pop(client_id, None)

        return True

    async def _preflight_authorization_url(
        self, client, client_info: OAuthClientInformationFull
    ) -> bool:
        # TODO: Replace this logic with a more robust OAuth client registration validation
        # Only perform preflight checks for Starlette OAuth clients
        if not hasattr(client, "create_authorization_url"):
            return True

        redirect_uri = None
        if client_info.redirect_uris:
            redirect_uri = str(client_info.redirect_uris[0])

        try:
            auth_data = await client.create_authorization_url(redirect_uri=redirect_uri)
            authorization_url = auth_data.get("url")

            if not authorization_url:
                return True
        except Exception as e:
            log.debug(
                f"Skipping OAuth preflight for client {client_info.client_id}: {e}",
            )
            return True

        try:
            async with aiohttp.ClientSession(trust_env=True) as session:
                async with session.get(
                    authorization_url,
                    allow_redirects=False,
                    ssl=AIOHTTP_CLIENT_SESSION_SSL,
                ) as resp:
                    if resp.status < 400:
                        return True
                    response_text = await resp.text()

                    error = None
                    error_description = ""

                    content_type = resp.headers.get("content-type", "")
                    if "application/json" in content_type:
                        try:
                            payload = json.loads(response_text)
                            error = payload.get("error")
                            error_description = payload.get("error_description", "")
                        except:
                            pass
                    else:
                        error_description = response_text

                    error_message = f"{error or ''} {error_description or ''}".lower()

                    if any(
                        keyword in error_message
                        for keyword in ("invalid_client", "invalid client", "client id")
                    ):
                        log.warning(
                            f"OAuth client preflight detected invalid registration for {client_info.client_id}: {error} {error_description}"
                        )

                        return False
        except Exception as e:
            log.debug(
                f"Skipping OAuth preflight network check for client {client_info.client_id}: {e}"
            )

        return True

    def get_client(self, client_id):

@@ -561,7 +672,6 @@ class OAuthClientManager:
        client = self.get_client(client_id)
        if client is None:
            raise HTTPException(404)

        client_info = self.get_client_info(client_id)
        if client_info is None:
            raise HTTPException(404)

@@ -569,7 +679,8 @@ class OAuthClientManager:
        redirect_uri = (
            client_info.redirect_uris[0] if client_info.redirect_uris else None
        )
-        return await client.authorize_redirect(request, str(redirect_uri))
+        redirect_uri_str = str(redirect_uri) if redirect_uri else None
+        return await client.authorize_redirect(request, redirect_uri_str)

    async def handle_callback(self, request, client_id: str, user_id: str, response):
        client = self.get_client(client_id)

@@ -621,8 +732,14 @@ class OAuthClientManager:
                error_message = "Failed to obtain OAuth token"
                log.warning(error_message)
        except Exception as e:
-            error_message = "OAuth callback error"
-            log.warning(f"OAuth callback error: {e}")
+            error_message = _build_oauth_callback_error_message(e)
+            log.warning(
+                "OAuth callback error for user_id=%s client_id=%s: %s",
+                user_id,
+                client_id,
+                error_message,
+                exc_info=True,
+            )

        redirect_url = (
            str(request.app.state.config.WEBUI_URL or request.base_url)

@@ -630,7 +747,9 @@ class OAuthClientManager:

        if error_message:
            log.debug(error_message)
-            redirect_url = f"{redirect_url}/?error={error_message}"
+            redirect_url = (
+                f"{redirect_url}/?error={urllib.parse.quote_plus(error_message)}"
+            )
            return RedirectResponse(url=redirect_url, headers=response.headers)

        response = RedirectResponse(url=redirect_url, headers=response.headers)

@@ -917,7 +1036,11 @@ class OAuthManager:
                if isinstance(claim_data, list):
                    user_oauth_groups = claim_data
                elif isinstance(claim_data, str):
-                    user_oauth_groups = [claim_data]
+                    # Split by the configured separator if present
+                    if OAUTH_GROUPS_SEPARATOR in claim_data:
+                        user_oauth_groups = claim_data.split(OAUTH_GROUPS_SEPARATOR)
+                    else:
+                        user_oauth_groups = [claim_data]
                else:
                    user_oauth_groups = []

@@ -1104,7 +1227,13 @@ class OAuthManager:
        try:
            token = await client.authorize_access_token(request)
        except Exception as e:
-            log.warning(f"OAuth callback error: {e}")
+            detailed_error = _build_oauth_callback_error_message(e)
+            log.warning(
+                "OAuth callback error during authorize_access_token for provider %s: %s",
+                provider,
+                detailed_error,
+                exc_info=True,
+            )
            raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)

        # Try to get userinfo from the token first, some providers include it there

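For the group-claim change above, a small self-contained sketch of the parsing rule, assuming OAUTH_GROUPS_SEPARATOR is configured as ";" (the claim values below are made up):

    OAUTH_GROUPS_SEPARATOR = ";"  # e.g. semicolon-separated claims as sent by CILogon

    def parse_group_claim(claim_data):
        # Mirrors the branch in the diff: lists pass through unchanged, strings are
        # split on the configured separator only when the separator is present.
        if isinstance(claim_data, list):
            return claim_data
        if isinstance(claim_data, str):
            if OAUTH_GROUPS_SEPARATOR in claim_data:
                return claim_data.split(OAUTH_GROUPS_SEPARATOR)
            return [claim_data]
        return []

    print(parse_group_claim("staff;admins"))       # ['staff', 'admins']
    print(parse_group_claim(["staff", "admins"]))  # ['staff', 'admins']
    print(parse_group_claim(None))                 # []
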
@@ -208,20 +208,21 @@ def rag_template(template: str, context: str, query: str):
    if "[query]" in context:
        query_placeholder = "{{QUERY" + str(uuid.uuid4()) + "}}"
        template = template.replace("[query]", query_placeholder)
-        query_placeholders.append(query_placeholder)
+        query_placeholders.append((query_placeholder, "[query]"))

    if "{{QUERY}}" in context:
        query_placeholder = "{{QUERY" + str(uuid.uuid4()) + "}}"
        template = template.replace("{{QUERY}}", query_placeholder)
-        query_placeholders.append(query_placeholder)
+        query_placeholders.append((query_placeholder, "{{QUERY}}"))

    template = template.replace("[context]", context)
    template = template.replace("{{CONTEXT}}", context)

    template = template.replace("[query]", query)
    template = template.replace("{{QUERY}}", query)

-    for query_placeholder in query_placeholders:
-        template = template.replace(query_placeholder, query)
+    for query_placeholder, original_placeholder in query_placeholders:
+        template = template.replace(query_placeholder, original_placeholder)

    return template

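A short illustrative walk-through of what the tuple-based placeholder round-trip changes (the strings are invented; rag_template is the function shown above):

    template = "Context: {{CONTEXT}}\nQuestion: {{QUERY}}"
    context = "Retrieved text that literally contains {{QUERY}} and [query] markers."
    query = "What changed in 0.6.36?"

    # Previously the temporary placeholders protecting the retrieved context were
    # replaced with the user's query at the end, so a literal "{{QUERY}}" inside the
    # retrieved context leaked the query into the document text. Now they are swapped
    # back to the original literal markers, and only the template's own {{QUERY}}
    # receives the real query.
    print(rag_template(template, context, query))
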
@@ -85,9 +85,26 @@ def get_async_tool_function_and_apply_extra_params(
    update_wrapper(new_function, function)
    new_function.__signature__ = new_sig

    new_function.__function__ = function  # type: ignore
    new_function.__extra_params__ = extra_params  # type: ignore

    return new_function


def get_updated_tool_function(function: Callable, extra_params: dict):
    # Get the original function and merge updated params
    __function__ = getattr(function, "__function__", None)
    __extra_params__ = getattr(function, "__extra_params__", None)

    if __function__ is not None and __extra_params__ is not None:
        return get_async_tool_function_and_apply_extra_params(
            __function__,
            {**__extra_params__, **extra_params},
        )

    return function


async def get_tools(
    request: Request, tool_ids: list[str], user: UserModel, extra_params: dict
) -> dict[str, dict]:

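The two helpers above let a tool callable that was wrapped once at registration time be rebuilt later with fresh per-request extras, which is how the middleware hunk at the top of this diff injects __messages__ and __files__. A minimal sketch with stand-in names, assuming the wrapper binds extras to matching parameter names:

    import asyncio

    async def my_tool(city: str, __messages__: list | None = None):
        return f"{city}: {len(__messages__ or [])} message(s) in context"

    # Wrap once with registration-time extras ...
    wrapped = get_async_tool_function_and_apply_extra_params(my_tool, {"__messages__": []})

    # ... then rebuild from the stashed original, merging request-time extras on top.
    updated = get_updated_tool_function(
        wrapped, {"__messages__": [{"role": "user", "content": "hi"}]}
    )

    print(asyncio.run(updated(city="Berlin")))  # expected: "Berlin: 1 message(s) in context"
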
@@ -51,7 +51,7 @@ async def post_webhook(name: str, url: str, message: str, event_data: dict) -> b
        payload = {**event_data}

        log.debug(f"payload: {payload}")
-        async with aiohttp.ClientSession() as session:
+        async with aiohttp.ClientSession(trust_env=True) as session:
            async with session.post(url, json=payload) as r:
                r_text = await r.text()
                r.raise_for_status()

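Several hunks in this commit switch aiohttp sessions to trust_env=True. Per aiohttp's documented behaviour, this makes outbound requests honour proxy-related environment variables (and ~/.netrc) instead of ignoring them; a small illustrative sketch (the proxy host is hypothetical):

    import os
    import aiohttp

    os.environ.setdefault("HTTPS_PROXY", "http://proxy.internal:3128")  # hypothetical proxy

    async def fetch_status(url: str) -> int:
        # With trust_env=True the session routes through HTTP_PROXY/HTTPS_PROXY/NO_PROXY.
        async with aiohttp.ClientSession(trust_env=True) as session:
            async with session.get(url) as resp:
                return resp.status
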
@@ -57,22 +57,28 @@ We appreciate the community's interest in identifying potential vulnerabilities.
> [!NOTE]
> **Note**: If you believe you have found a security issue that
>
> 1. affects default configurations **or**
> 2. represents a genuine bypass of intended security controls **or**
> 3. works only with non-default configurations **but the configuration in question is likely to be used by production deployments** > **then we absolutely want to hear about it.** This policy is intended to filter configuration issues and deployment problems, not to discourage legitimate security research.
> 1. affects default configurations, **or**
> 2. represents a genuine bypass of intended security controls, **or**
> 3. works only with non-default configurations, **but the configuration in question is likely to be used by production deployments**, **then we absolutely want to hear about it.** This policy is intended to filter configuration issues and deployment problems, not to discourage legitimate security research.

8. **Threat Model Understanding Required**: Reports must demonstrate understanding of Open WebUI's self-hosted, authenticated, role-based access control architecture. Comparing Open WebUI to services with fundamentally different security models without acknowledging the architectural differences may result in report rejection.

9. **CVSS Scoring Accuracy:** If you include a CVSS score with your report, it must accurately reflect the vulnerability according to CVSS methodology. Common errors include 1) rating PR:N (None) when authentication is required, 2) scoring hypothetical attack chains instead of the actual vulnerability, or 3) inflating severity without evidence. **We will adjust inaccurate CVSS scores.** Intentionally inflated scores may result in report rejection.

> [!WARNING] > **Using CVE Precedents:** If you cite other CVEs to support your report, ensure they are **genuinely comparable** in vulnerability type, threat model, and attack vector. Citing CVEs from different product categories, different vulnerability classes or different deployment models will lead us to suspect the use of AI in your report.
> [!WARNING]
>
> **Using CVE Precedents:** If you cite other CVEs to support your report, ensure they are **genuinely comparable** in vulnerability type, threat model, and attack vector. Citing CVEs from different product categories, different vulnerability classes or different deployment models will lead us to suspect the use of AI in your report.

11. **Admin Actions Are Out of Scope:** Vulnerabilities that require an administrator to actively perform unsafe actions are **not considered valid vulnerabilities**. Admins have full system control and are expected to understand the security implications of their actions and configurations. This includes but is not limited to: adding malicious external servers (models, tools, webhooks), pasting untrusted code into Functions/Tools, or intentionally weakening security settings. **Reports requiring admin negligence or social engineering of admins may be rejected.**

12. **AI report transparency:** Due to an extreme spike in AI-aided vulnerability reports **YOU MUST DISCLOSE if AI was used in any capacity** - whether for writing the report, generating the PoC, or identifying the vulnerability. If AI helped you in any way shape or form in the creation of the report, PoC or finding the vulnerability, you MUST disclose it.
10. **Admin Actions Are Out of Scope:** Vulnerabilities that require an administrator to actively perform unsafe actions are **not considered valid vulnerabilities**. Admins have full system control and are expected to understand the security implications of their actions and configurations. This includes but is not limited to: adding malicious external servers (models, tools, webhooks), pasting untrusted code into Functions/Tools, or intentionally weakening security settings. **Reports requiring admin negligence or social engineering of admins may be rejected.**

> [!NOTE]
> AI-aided vulnerability reports **will not be rejected by us by default.** But:
> Similar to rule "Default Configuration Testing": If you believe you have found a vulnerability that affects admins and is NOT caused by admin negligence or intentionally malicious actions,
> **then we absolutely want to hear about it.** This policy is intended to filter social engineering attacks on admins, malicious plugins being deployed by admins and similar malicious actions, not to discourage legitimate security research.

11. **AI report transparency:** Due to an extreme spike in AI-aided vulnerability reports **YOU MUST DISCLOSE if AI was used in any capacity** - whether for writing the report, generating the PoC, or identifying the vulnerability. If AI helped you in any way shape or form in the creation of the report, PoC or finding the vulnerability, you MUST disclose it.

> [!NOTE]
> AI-aided vulnerability reports **will not be rejected by us by default**. But:
>
> - If we suspect you used AI (but you did not disclose it to us), we will be asking tough follow-up questions to validate your understanding of the reported vulnerability and Open WebUI itself.
> - If we suspect you used AI (but you did not disclose it to us) **and** your report ends up being invalid/not a vulnerability/not reproducible, then you **may be banned** from reporting future vulnerabilities.

@@ -94,7 +100,7 @@ We appreciate the community's interest in identifying potential vulnerabilities.
If you want to report a vulnerability and can meet the outlined requirements, [open a vulnerability report here](https://github.com/open-webui/open-webui/security/advisories/new).
If you feel like you are not able to follow ALL outlined requirements for vulnerability-specific reasons, still do report it, we will check every report either way.

## Product Security And For Non-Vulnerability Security Concerns:
## Product Security And For Non-Vulnerability Related Security Concerns:

If your concern does not meet the vulnerability requirements outlined above, is not a vulnerability, **but is still related to security concerns**, then use the following channels instead:

@@ -121,4 +127,4 @@ For any other immediate concerns, please create an issue in our [issue tracker](

---

_Last updated on **2025-10-17**._
_Last updated on **2025-11-06**._

package-lock.json (generated, 4 changed lines)

@@ -1,12 +1,12 @@
{
  "name": "open-webui",
-  "version": "0.6.34",
+  "version": "0.6.36",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "open-webui",
-      "version": "0.6.34",
+      "version": "0.6.36",
      "dependencies": {
        "@azure/msal-browser": "^4.5.0",
        "@codemirror/lang-javascript": "^6.2.2",

@@ -1,6 +1,6 @@
{
  "name": "open-webui",
-  "version": "0.6.34",
+  "version": "0.6.36",
  "private": true,
  "scripts": {
    "dev": "npm run pyodide:fetch && vite dev --host",

@@ -129,8 +129,8 @@ li p {
}

::-webkit-scrollbar {
-  height: 0.4rem;
-  width: 0.4rem;
+  height: 0.45rem;
+  width: 0.45rem;
}

::-webkit-scrollbar-track {

@@ -63,6 +63,10 @@ export const uploadFile = async (token: string, file: File, metadata?: object |
        console.error(data.error);
        res.error = data.error;
      }

      if (res?.data) {
        res.data = data;
      }
    }
  }
}

@@ -1401,6 +1401,33 @@ export const getChangelog = async () => {
  return res;
};

export const getVersion = async (token: string) => {
  let error = null;

  const res = await fetch(`${WEBUI_BASE_URL}/api/version`, {
    method: 'GET',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`
    }
  })
    .then(async (res) => {
      if (!res.ok) throw await res.json();
      return res.json();
    })
    .catch((err) => {
      console.error(err);
      error = err;
      return null;
    });

  if (error) {
    throw error;
  }

  return res?.version ?? null;
};

export const getVersionUpdates = async (token: string) => {
  let error = null;

@@ -1,9 +1,9 @@
import { WEBUI_API_BASE_URL } from '$lib/constants';

-export const getModels = async (token: string = '') => {
+export const getModelItems = async (token: string = '') => {
  let error = null;

-  const res = await fetch(`${WEBUI_API_BASE_URL}/models/`, {
+  const res = await fetch(`${WEBUI_API_BASE_URL}/models/list`, {
    method: 'GET',
    headers: {
      Accept: 'application/json',

@@ -180,38 +180,3 @@ export const downloadDatabase = async (token: string) => {
  }
};

export const downloadLiteLLMConfig = async (token: string) => {
  let error = null;

  const res = await fetch(`${WEBUI_API_BASE_URL}/utils/litellm/config`, {
    method: 'GET',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`
    }
  })
    .then(async (response) => {
      if (!response.ok) {
        throw await response.json();
      }
      return response.blob();
    })
    .then((blob) => {
      const url = window.URL.createObjectURL(blob);
      const a = document.createElement('a');
      a.href = url;
      a.download = 'config.yaml';
      document.body.appendChild(a);
      a.click();
      window.URL.revokeObjectURL(url);
    })
    .catch((err) => {
      console.error(err);
      error = err.detail;
      return null;
    });

  if (error) {
    throw error;
  }
};

@@ -158,6 +158,7 @@

    if (res) {
      toast.success($i18n.t('Function deleted successfully'));
      functions = functions.filter((f) => f.id !== func.id);

      _functions.set(await getFunctions(localStorage.token));
      models.set(

@ -50,6 +50,9 @@
|
|||
let STT_AZURE_BASE_URL = '';
|
||||
let STT_AZURE_MAX_SPEAKERS = '';
|
||||
let STT_DEEPGRAM_API_KEY = '';
|
||||
let STT_MISTRAL_API_KEY = '';
|
||||
let STT_MISTRAL_API_BASE_URL = '';
|
||||
let STT_MISTRAL_USE_CHAT_COMPLETIONS = false;
|
||||
|
||||
let STT_WHISPER_MODEL_LOADING = false;
|
||||
|
||||
|
|
@ -135,7 +138,10 @@
|
|||
AZURE_REGION: STT_AZURE_REGION,
|
||||
AZURE_LOCALES: STT_AZURE_LOCALES,
|
||||
AZURE_BASE_URL: STT_AZURE_BASE_URL,
|
||||
AZURE_MAX_SPEAKERS: STT_AZURE_MAX_SPEAKERS
|
||||
AZURE_MAX_SPEAKERS: STT_AZURE_MAX_SPEAKERS,
|
||||
MISTRAL_API_KEY: STT_MISTRAL_API_KEY,
|
||||
MISTRAL_API_BASE_URL: STT_MISTRAL_API_BASE_URL,
|
||||
MISTRAL_USE_CHAT_COMPLETIONS: STT_MISTRAL_USE_CHAT_COMPLETIONS
|
||||
}
|
||||
});
|
||||
|
||||
|
|
@ -184,6 +190,9 @@
|
|||
STT_AZURE_BASE_URL = res.stt.AZURE_BASE_URL;
|
||||
STT_AZURE_MAX_SPEAKERS = res.stt.AZURE_MAX_SPEAKERS;
|
||||
STT_DEEPGRAM_API_KEY = res.stt.DEEPGRAM_API_KEY;
|
||||
STT_MISTRAL_API_KEY = res.stt.MISTRAL_API_KEY;
|
||||
STT_MISTRAL_API_BASE_URL = res.stt.MISTRAL_API_BASE_URL;
|
||||
STT_MISTRAL_USE_CHAT_COMPLETIONS = res.stt.MISTRAL_USE_CHAT_COMPLETIONS;
|
||||
}
|
||||
|
||||
await getVoices();
|
||||
|
|
@ -201,7 +210,7 @@
|
|||
<div class=" space-y-3 overflow-y-scroll scrollbar-hidden h-full">
|
||||
<div class="flex flex-col gap-3">
|
||||
<div>
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Speech-to-Text')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Speech-to-Text')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -235,6 +244,7 @@
|
|||
<option value="web">{$i18n.t('Web API')}</option>
|
||||
<option value="deepgram">{$i18n.t('Deepgram')}</option>
|
||||
<option value="azure">{$i18n.t('Azure AI Speech')}</option>
|
||||
<option value="mistral">{$i18n.t('MistralAI')}</option>
|
||||
</select>
|
||||
</div>
|
||||
</div>
|
||||
|
|
@ -367,6 +377,67 @@
|
|||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{:else if STT_ENGINE === 'mistral'}
|
||||
<div>
|
||||
<div class="mt-1 flex gap-2 mb-1">
|
||||
<input
|
||||
class="flex-1 w-full bg-transparent outline-hidden"
|
||||
placeholder={$i18n.t('API Base URL')}
|
||||
bind:value={STT_MISTRAL_API_BASE_URL}
|
||||
required
|
||||
/>
|
||||
|
||||
<SensitiveInput placeholder={$i18n.t('API Key')} bind:value={STT_MISTRAL_API_KEY} />
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr class="border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
<div>
|
||||
<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>
|
||||
<div class="flex w-full">
|
||||
<div class="flex-1">
|
||||
<input
|
||||
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
|
||||
bind:value={STT_MODEL}
|
||||
placeholder="voxtral-mini-latest"
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
<div class="mt-2 mb-1 text-xs text-gray-400 dark:text-gray-500">
|
||||
{$i18n.t('Leave empty to use the default model (voxtral-mini-latest).')}
|
||||
<a
|
||||
class=" hover:underline dark:text-gray-200 text-gray-800"
|
||||
href="https://docs.mistral.ai/capabilities/audio_transcription"
|
||||
target="_blank"
|
||||
>
|
||||
{$i18n.t('Learn more about Voxtral transcription.')}
|
||||
</a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr class="border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
<div>
|
||||
<div class="flex items-center justify-between mb-2">
|
||||
<div class="text-xs font-medium">{$i18n.t('Use Chat Completions API')}</div>
|
||||
<label class="relative inline-flex items-center cursor-pointer">
|
||||
<input
|
||||
type="checkbox"
|
||||
bind:checked={STT_MISTRAL_USE_CHAT_COMPLETIONS}
|
||||
class="sr-only peer"
|
||||
/>
|
||||
<div
|
||||
class="w-9 h-5 bg-gray-200 peer-focus:outline-none peer-focus:ring-2 peer-focus:ring-blue-300 dark:peer-focus:ring-blue-800 rounded-full peer dark:bg-gray-700 peer-checked:after:translate-x-full peer-checked:after:border-white after:content-[''] after:absolute after:top-[2px] after:left-[2px] after:bg-white after:border-gray-300 after:border after:rounded-full after:h-4 after:w-4 after:transition-all dark:border-gray-600 peer-checked:bg-blue-600"
|
||||
></div>
|
||||
</label>
|
||||
</div>
|
||||
<div class="text-xs text-gray-400 dark:text-gray-500">
|
||||
{$i18n.t(
|
||||
'Use /v1/chat/completions endpoint instead of /v1/audio/transcriptions for potentially better accuracy.'
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
{:else if STT_ENGINE === ''}
|
||||
<div>
|
||||
<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>
|
||||
|
|
@ -427,7 +498,7 @@
|
|||
</div>
|
||||
|
||||
<div>
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Text-to-Speech')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Text-to-Speech')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
|
|||
|
|
@ -41,7 +41,7 @@
|
|||
{#if config}
|
||||
<div>
|
||||
<div class="mb-3.5">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -164,7 +164,7 @@
|
|||
</div>
|
||||
|
||||
<div class="mb-3.5">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Code Interpreter')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Code Interpreter')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
|
|||
|
|
@ -219,7 +219,7 @@
|
|||
<div class=" overflow-y-scroll scrollbar-hidden h-full">
|
||||
{#if ENABLE_OPENAI_API !== null && ENABLE_OLLAMA_API !== null && connectionsConfig !== null}
|
||||
<div class="mb-3.5">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@
|
|||
import fileSaver from 'file-saver';
|
||||
const { saveAs } = fileSaver;
|
||||
|
||||
import { downloadDatabase, downloadLiteLLMConfig } from '$lib/apis/utils';
|
||||
import { downloadDatabase } from '$lib/apis/utils';
|
||||
import { onMount, getContext } from 'svelte';
|
||||
import { config, user } from '$lib/stores';
|
||||
import { toast } from 'svelte-sonner';
|
||||
|
|
|
|||
|
|
@ -212,6 +212,15 @@
|
|||
await embeddingModelUpdateHandler();
|
||||
}
|
||||
|
||||
if (RAGConfig.MINERU_PARAMS) {
|
||||
try {
|
||||
JSON.parse(RAGConfig.MINERU_PARAMS);
|
||||
} catch (e) {
|
||||
toast.error($i18n.t('Invalid JSON format in MinerU Parameters'));
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
const res = await updateRAGConfig(localStorage.token, {
|
||||
...RAGConfig,
|
||||
ALLOWED_FILE_EXTENSIONS: RAGConfig.ALLOWED_FILE_EXTENSIONS.split(',')
|
||||
|
|
@ -220,7 +229,13 @@
|
|||
DOCLING_PICTURE_DESCRIPTION_LOCAL: JSON.parse(
|
||||
RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL || '{}'
|
||||
),
|
||||
DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}')
|
||||
DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(
|
||||
RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}'
|
||||
),
|
||||
MINERU_PARAMS:
|
||||
typeof RAGConfig.MINERU_PARAMS === 'string' && RAGConfig.MINERU_PARAMS.trim() !== ''
|
||||
? JSON.parse(RAGConfig.MINERU_PARAMS)
|
||||
: {}
|
||||
});
|
||||
dispatch('save');
|
||||
};
|
||||
|
|
@ -261,6 +276,11 @@
|
|||
2
|
||||
);
|
||||
|
||||
config.MINERU_PARAMS =
|
||||
typeof config.MINERU_PARAMS === 'object'
|
||||
? JSON.stringify(config.MINERU_PARAMS ?? {}, null, 2)
|
||||
: config.MINERU_PARAMS;
|
||||
|
||||
RAGConfig = config;
|
||||
});
|
||||
</script>
|
||||
|
|
@ -317,7 +337,7 @@
|
|||
<div class=" space-y-2.5 overflow-y-scroll scrollbar-hidden h-full pr-1.5">
|
||||
<div class="">
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -746,6 +766,11 @@
|
|||
</div>
|
||||
{:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'mistral_ocr'}
|
||||
<div class="my-0.5 flex gap-2 pr-2">
|
||||
<input
|
||||
class="flex-1 w-full text-sm bg-transparent outline-hidden"
|
||||
placeholder={$i18n.t('Enter Mistral API Base URL')}
|
||||
bind:value={RAGConfig.MISTRAL_OCR_API_BASE_URL}
|
||||
/>
|
||||
<SensitiveInput
|
||||
placeholder={$i18n.t('Enter Mistral API Key')}
|
||||
bind:value={RAGConfig.MISTRAL_OCR_API_KEY}
|
||||
|
|
@ -802,8 +827,8 @@
|
|||
</div>
|
||||
|
||||
<!-- Parameters -->
|
||||
<div class="flex justify-between w-full mt-2">
|
||||
<div class="self-center text-xs font-medium">
|
||||
<div class="flex flex-col justify-between w-full mt-2">
|
||||
<div class="text-xs font-medium">
|
||||
<Tooltip
|
||||
content={$i18n.t(
|
||||
'Advanced parameters for MinerU parsing (enable_ocr, enable_formula, enable_table, language, model_version, page_ranges)'
|
||||
|
|
@ -813,22 +838,9 @@
|
|||
{$i18n.t('Parameters')}
|
||||
</Tooltip>
|
||||
</div>
|
||||
<div class="">
|
||||
<div class="mt-1.5">
|
||||
<Textarea
|
||||
value={typeof RAGConfig.MINERU_PARAMS === 'object' &&
|
||||
RAGConfig.MINERU_PARAMS !== null &&
|
||||
Object.keys(RAGConfig.MINERU_PARAMS).length > 0
|
||||
? JSON.stringify(RAGConfig.MINERU_PARAMS, null, 2)
|
||||
: ''}
|
||||
on:input={(e) => {
|
||||
try {
|
||||
const value = e.target.value.trim();
|
||||
RAGConfig.MINERU_PARAMS = value ? JSON.parse(value) : {};
|
||||
} catch (err) {
|
||||
// Keep the string value if JSON is invalid (user is still typing)
|
||||
RAGConfig.MINERU_PARAMS = e.target.value;
|
||||
}
|
||||
}}
|
||||
bind:value={RAGConfig.MINERU_PARAMS}
|
||||
placeholder={`{\n "enable_ocr": false,\n "enable_formula": true,\n "enable_table": true,\n "language": "en",\n "model_version": "pipeline",\n "page_ranges": ""\n}`}
|
||||
minSize={100}
|
||||
/>
|
||||
|
|
@ -914,7 +926,7 @@
|
|||
|
||||
{#if !RAGConfig.BYPASS_EMBEDDING_AND_RETRIEVAL}
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Embedding')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Embedding')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -1089,7 +1101,7 @@
|
|||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Retrieval')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Retrieval')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -1332,7 +1344,7 @@
|
|||
{/if}
|
||||
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Files')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Files')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -1444,7 +1456,7 @@
|
|||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Integration')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Integration')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -1464,7 +1476,7 @@
|
|||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Danger Zone')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Danger Zone')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
|
|||
|
|
@ -104,7 +104,7 @@
|
|||
{#if evaluationConfig !== null}
|
||||
<div class="">
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -119,7 +119,7 @@
|
|||
|
||||
{#if evaluationConfig.ENABLE_EVALUATION_ARENA_MODELS}
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium flex justify-between items-center">
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium flex justify-between items-center">
|
||||
<div>
|
||||
{$i18n.t('Manage')}
|
||||
</div>
|
||||
|
|
|
|||
|
|
@ -118,11 +118,11 @@
|
|||
updateHandler();
|
||||
}}
|
||||
>
|
||||
<div class="mt-0.5 space-y-3 overflow-y-scroll scrollbar-hidden h-full">
|
||||
<div class="space-y-3 overflow-y-scroll scrollbar-hidden h-full">
|
||||
{#if adminConfig !== null}
|
||||
<div class="">
|
||||
<div class="mb-3.5">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -280,7 +280,7 @@
|
|||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Authentication')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Authentication')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -637,7 +637,7 @@
|
|||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Features')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Features')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
|
|||
(file diff suppressed because it is too large)
|
|
@ -111,7 +111,7 @@
|
|||
>
|
||||
<div class=" overflow-y-scroll scrollbar-hidden h-full pr-1.5">
|
||||
<div class="mb-3.5">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Tasks')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Tasks')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -384,7 +384,7 @@
|
|||
</div>
|
||||
|
||||
<div class="mb-3.5">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('UI')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('UI')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
|
|||
|
|
@ -313,7 +313,7 @@
|
|||
|
||||
<div class=" my-2 mb-5" id="model-list">
|
||||
{#if models.length > 0}
|
||||
{#each filteredModels as model, modelIdx (model.id)}
|
||||
{#each filteredModels as model, modelIdx (`${model.id}-${modelIdx}`)}
|
||||
<div
|
||||
class=" flex space-x-4 cursor-pointer w-full px-3 py-2 dark:hover:bg-white/5 hover:bg-black/5 rounded-lg transition {model
|
||||
?.meta?.hidden
|
||||
|
|
|
|||
|
|
@ -59,7 +59,7 @@
|
|||
{#if servers !== null}
|
||||
<div class="">
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
|
|||
|
|
@ -95,7 +95,7 @@
|
|||
{#if webConfig}
|
||||
<div class="">
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
@ -724,7 +724,7 @@
|
|||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Loader')}</div>
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Loader')}</div>
|
||||
|
||||
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
|
||||
|
||||
|
|
|
|||
|
|
@ -112,17 +112,6 @@
|
|||
}}
|
||||
/>
|
||||
|
||||
{#key selectedUser}
|
||||
<EditUserModal
|
||||
bind:show={showEditUserModal}
|
||||
{selectedUser}
|
||||
sessionUser={$user}
|
||||
on:save={async () => {
|
||||
getUserList();
|
||||
}}
|
||||
/>
|
||||
{/key}
|
||||
|
||||
<AddUserModal
|
||||
bind:show={showAddUserModal}
|
||||
on:save={async () => {
|
||||
|
|
@ -130,6 +119,15 @@
|
|||
}}
|
||||
/>
|
||||
|
||||
<EditUserModal
|
||||
bind:show={showEditUserModal}
|
||||
{selectedUser}
|
||||
sessionUser={$user}
|
||||
on:save={async () => {
|
||||
getUserList();
|
||||
}}
|
||||
/>
|
||||
|
||||
{#if selectedUser}
|
||||
<UserChatsModal bind:show={showUserChatsModal} user={selectedUser} />
|
||||
{/if}
|
||||
|
|
|
|||
|
|
@ -22,6 +22,18 @@
|
|||
export let selectedUser;
|
||||
export let sessionUser;
|
||||
|
||||
$: if (show) {
|
||||
init();
|
||||
}
|
||||
|
||||
const init = () => {
|
||||
if (selectedUser) {
|
||||
_user = selectedUser;
|
||||
_user.password = '';
|
||||
loadUserGroups();
|
||||
}
|
||||
};
|
||||
|
||||
let _user = {
|
||||
profile_image_url: '',
|
||||
role: 'pending',
|
||||
|
|
@ -52,14 +64,6 @@
|
|||
return null;
|
||||
});
|
||||
};
|
||||
|
||||
onMount(() => {
|
||||
if (selectedUser) {
|
||||
_user = selectedUser;
|
||||
_user.password = '';
|
||||
loadUserGroups();
|
||||
}
|
||||
});
|
||||
</script>
|
||||
|
||||
<Modal size="sm" bind:show>
|
||||
|
|
|
|||
|
|
@ -19,6 +19,7 @@
|
|||
|
||||
<nav class="sticky top-0 z-30 w-full px-1.5 py-1.5 -mb-8 flex items-center drag-region">
|
||||
<div
|
||||
id="navbar-bg-gradient-to-b"
|
||||
class=" bg-linear-to-b via-50% from-white via-white to-transparent dark:from-gray-900 dark:via-gray-900 dark:to-transparent pointer-events-none absolute inset-0 -bottom-7 z-[-1]"
|
||||
></div>
|
||||
|
||||
|
|
|
|||
|
|
@ -27,6 +27,7 @@
|
|||
banners,
|
||||
user,
|
||||
socket,
|
||||
audioQueue,
|
||||
showControls,
|
||||
showCallOverlay,
|
||||
currentChatPage,
|
||||
|
|
@ -43,6 +44,7 @@
|
|||
pinnedChats,
|
||||
showEmbeds
|
||||
} from '$lib/stores';
|
||||
|
||||
import {
|
||||
convertMessagesToHistory,
|
||||
copyToClipboard,
|
||||
|
|
@ -53,6 +55,8 @@
|
|||
removeAllDetails,
|
||||
getCodeBlockContents
|
||||
} from '$lib/utils';
|
||||
import { AudioQueue } from '$lib/utils/audio';
|
||||
|
||||
import {
|
||||
createNewChat,
|
||||
getAllTags,
|
||||
|
|
@ -529,17 +533,28 @@
|
|||
let showControlsSubscribe = null;
|
||||
let selectedFolderSubscribe = null;
|
||||
|
||||
const stopAudio = () => {
|
||||
try {
|
||||
speechSynthesis.cancel();
|
||||
$audioQueue.stop();
|
||||
} catch {}
|
||||
};
|
||||
|
||||
onMount(async () => {
|
||||
loading = true;
|
||||
console.log('mounted');
|
||||
window.addEventListener('message', onMessageHandler);
|
||||
$socket?.on('events', chatEventHandler);
|
||||
|
||||
audioQueue.set(new AudioQueue(document.getElementById('audioElement')));
|
||||
|
||||
pageSubscribe = page.subscribe(async (p) => {
|
||||
if (p.url.pathname === '/') {
|
||||
await tick();
|
||||
initNewChat();
|
||||
}
|
||||
|
||||
stopAudio();
|
||||
});
|
||||
|
||||
const storageChatInput = sessionStorage.getItem(
|
||||
|
|
@ -621,6 +636,7 @@
|
|||
chatIdUnsubscriber?.();
|
||||
window.removeEventListener('message', onMessageHandler);
|
||||
$socket?.off('events', chatEventHandler);
|
||||
$audioQueue?.destroy();
|
||||
} catch (e) {
|
||||
console.error(e);
|
||||
}
|
||||
|
|
@ -2308,7 +2324,7 @@
|
|||
</title>
|
||||
</svelte:head>
|
||||
|
||||
<audio id="audioElement" src="" style="display: none;" />
|
||||
<audio id="audioElement" src="" style="display: none;"></audio>
|
||||
|
||||
<EventConfirmDialog
|
||||
bind:show={showEventConfirmation}
|
||||
|
|
@ -2593,7 +2609,7 @@
|
|||
|
||||
<style>
|
||||
::-webkit-scrollbar {
|
||||
height: 0.6rem;
|
||||
width: 0.6rem;
|
||||
height: 0.5rem;
|
||||
width: 0.5rem;
|
||||
}
|
||||
</style>
|
||||
|
|
|
|||
(file diff suppressed because it is too large)
|
|
@ -233,7 +233,7 @@
|
|||
{/if}
|
||||
</Tooltip>
|
||||
|
||||
<Tooltip content={item.description || decodeString(item?.name)} placement="top-start">
|
||||
<Tooltip content={`${decodeString(item?.name)}`} placement="top-start">
|
||||
<div class="line-clamp-1 flex-1">
|
||||
{decodeString(item?.name)}
|
||||
</div>
|
||||
|
|
|
|||
|
|
@ -94,7 +94,6 @@
|
|||
|
||||
<Dropdown
|
||||
bind:show
|
||||
{closeOnOutsideClick}
|
||||
on:change={(e) => {
|
||||
if (e.detail === false) {
|
||||
onClose();
|
||||
|
|
|
|||
|
|
@ -335,6 +335,7 @@
|
|||
|
||||
stopDurationCounter();
|
||||
audioChunks = [];
|
||||
visualizerData = Array(VISUALIZER_BUFFER_LENGTH).fill(0);
|
||||
|
||||
if (stream) {
|
||||
const tracks = stream.getTracks();
|
||||
|
|
|
|||
|
|
@ -59,8 +59,8 @@
|
|||
|
||||
let _token = null;
|
||||
|
||||
let mermaidHtml = null;
|
||||
let vegaHtml = null;
|
||||
let renderHTML = null;
|
||||
let renderError = null;
|
||||
|
||||
let highlightedCode = null;
|
||||
let executing = false;
|
||||
|
|
@ -340,24 +340,24 @@
|
|||
onUpdate(token);
|
||||
if (lang === 'mermaid' && (token?.raw ?? '').slice(-4).includes('```')) {
|
||||
try {
|
||||
mermaidHtml = await renderMermaid(code);
|
||||
renderHTML = await renderMermaid(code);
|
||||
} catch (error) {
|
||||
console.error('Failed to render mermaid diagram:', error);
|
||||
const errorMsg = error instanceof Error ? error.message : String(error);
|
||||
toast.error($i18n.t('Failed to render diagram') + `: ${errorMsg}`);
|
||||
mermaidHtml = null;
|
||||
renderError = $i18n.t('Failed to render diagram') + `: ${errorMsg}`;
|
||||
renderHTML = null;
|
||||
}
|
||||
} else if (
|
||||
(lang === 'vega' || lang === 'vega-lite') &&
|
||||
(token?.raw ?? '').slice(-4).includes('```')
|
||||
) {
|
||||
try {
|
||||
vegaHtml = await renderVegaVisualization(code);
|
||||
renderHTML = await renderVegaVisualization(code);
|
||||
} catch (error) {
|
||||
console.error('Failed to render Vega visualization:', error);
|
||||
const errorMsg = error instanceof Error ? error.message : String(error);
|
||||
toast.error($i18n.t('Failed to render diagram') + `: ${errorMsg}`);
|
||||
vegaHtml = null;
|
||||
renderError = $i18n.t('Failed to render visualization') + `: ${errorMsg}`;
|
||||
renderHTML = null;
|
||||
}
|
||||
}
|
||||
};
|
||||
|
|
@ -420,25 +420,24 @@
|
|||
class="relative {className} flex flex-col rounded-3xl border border-gray-100 dark:border-gray-850 my-0.5"
|
||||
dir="ltr"
|
||||
>
|
||||
{#if lang === 'mermaid'}
|
||||
{#if mermaidHtml}
|
||||
{#if ['mermaid', 'vega', 'vega-lite'].includes(lang)}
|
||||
{#if renderHTML}
|
||||
<SvgPanZoom
|
||||
className=" rounded-3xl max-h-fit overflow-hidden"
|
||||
svg={mermaidHtml}
|
||||
svg={renderHTML}
|
||||
content={_token.text}
|
||||
/>
|
||||
{:else}
|
||||
<pre class="mermaid">{code}</pre>
|
||||
{/if}
|
||||
{:else if lang === 'vega' || lang === 'vega-lite'}
|
||||
{#if vegaHtml}
|
||||
<SvgPanZoom
|
||||
className="rounded-3xl max-h-fit overflow-hidden"
|
||||
svg={vegaHtml}
|
||||
content={_token.text}
|
||||
/>
|
||||
{:else}
|
||||
<pre class="vega">{code}</pre>
|
||||
<div class="p-3">
|
||||
{#if renderError}
|
||||
<div
|
||||
class="flex gap-2.5 border px-4 py-3 border-red-600/10 bg-red-600/10 rounded-2xl mb-2"
|
||||
>
|
||||
{renderError}
|
||||
</div>
|
||||
{/if}
|
||||
<pre>{code}</pre>
|
||||
</div>
|
||||
{/if}
|
||||
{:else}
|
||||
<div
|
||||
|
|
|
|||
|
|
@ -176,7 +176,7 @@
|
|||
{onSourceClick}
|
||||
{onTaskClick}
|
||||
{onSave}
|
||||
onUpdate={(token) => {
|
||||
onUpdate={async (token) => {
|
||||
const { lang, text: code } = token;
|
||||
|
||||
if (
|
||||
|
|
@ -185,6 +185,7 @@
|
|||
!$mobile &&
|
||||
$chatId
|
||||
) {
|
||||
await tick();
|
||||
showArtifacts.set(true);
|
||||
showControls.set(true);
|
||||
}
|
||||
|
|
|
|||
|
|
@ -131,12 +131,9 @@
|
|||
{/if}
|
||||
{:else if token.text.includes(`<source_id`)}
|
||||
<Source {id} {token} onClick={onSourceClick} />
|
||||
{:else if token.text.trim().match(/^<br\s*\/?>$/i)}
|
||||
<br />
|
||||
{:else}
|
||||
{@const br = token.text.match(/<br\s*\/?>/)}
|
||||
{#if br}
|
||||
<br />
|
||||
{:else}
|
||||
{token.text}
|
||||
{/if}
|
||||
{token.text}
|
||||
{/if}
|
||||
{/if}
|
||||
|
|
|
|||
|
|
@ -24,7 +24,7 @@
|
|||
export let onSourceClick: Function = () => {};
|
||||
</script>
|
||||
|
||||
{#each tokens as token}
|
||||
{#each tokens as token, tokenIdx (tokenIdx)}
|
||||
{#if token.type === 'escape'}
|
||||
{unescapeHtml(token.text)}
|
||||
{:else if token.type === 'html'}
|
||||
|
|
|
|||
|
|
@ -13,7 +13,7 @@
|
|||
{:else}
|
||||
{#each texts as text}
|
||||
<span class="" transition:fade={{ duration: 100 }}>
|
||||
{text}
|
||||
{text}{' '}
|
||||
</span>
|
||||
{/each}
|
||||
{/if}
|
||||
|
|
|
|||
|
|
@ -1,4 +1,5 @@
|
|||
<script lang="ts">
|
||||
import { decode } from 'html-entities';
|
||||
import DOMPurify from 'dompurify';
|
||||
import { onMount, getContext } from 'svelte';
|
||||
const i18n = getContext('i18n');
|
||||
|
|
@ -10,6 +11,7 @@
|
|||
import { unescapeHtml } from '$lib/utils';
|
||||
|
||||
import { WEBUI_BASE_URL } from '$lib/constants';
|
||||
import { settings } from '$lib/stores';
|
||||
|
||||
import CodeBlock from '$lib/components/chat/Messages/CodeBlock.svelte';
|
||||
import MarkdownInlineTokens from '$lib/components/chat/Messages/Markdown/MarkdownInlineTokens.svelte';
|
||||
|
|
@ -20,7 +22,6 @@
|
|||
import Download from '$lib/components/icons/Download.svelte';
|
||||
|
||||
import Source from './Source.svelte';
|
||||
import { settings } from '$lib/stores';
|
||||
import HtmlToken from './HTMLToken.svelte';
|
||||
|
||||
export let id: string;
|
||||
|
|
@ -304,7 +305,7 @@
|
|||
<div class=" mb-1.5" slot="content">
|
||||
<svelte:self
|
||||
id={`${id}-${tokenIdx}-d`}
|
||||
tokens={marked.lexer(token.text)}
|
||||
tokens={marked.lexer(decode(token.text))}
|
||||
attributes={token?.attributes}
|
||||
{done}
|
||||
{editCodeBlock}
|
||||
|
|
|
|||
|
|
@ -46,6 +46,7 @@
|
|||
</script>
|
||||
|
||||
<div
|
||||
role="listitem"
|
||||
class="flex flex-col justify-between px-5 mb-3 w-full {($settings?.widescreenMode ?? null)
|
||||
? 'max-w-full'
|
||||
: 'max-w-5xl'} mx-auto rounded-lg group"
|
||||
|
|
|
|||
|
|
@ -15,7 +15,15 @@
|
|||
import { getChatById } from '$lib/apis/chats';
|
||||
import { generateTags } from '$lib/apis';
|
||||
|
||||
import { config, models, settings, temporaryChatEnabled, TTSWorker, user } from '$lib/stores';
|
||||
import {
|
||||
audioQueue,
|
||||
config,
|
||||
models,
|
||||
settings,
|
||||
temporaryChatEnabled,
|
||||
TTSWorker,
|
||||
user
|
||||
} from '$lib/stores';
|
||||
import { synthesizeOpenAISpeech } from '$lib/apis/audio';
|
||||
import { imageGenerations } from '$lib/apis/images';
|
||||
import {
|
||||
|
|
@ -156,7 +164,6 @@
|
|||
|
||||
let messageIndexEdit = false;
|
||||
|
||||
let audioParts: Record<number, HTMLAudioElement | null> = {};
|
||||
let speaking = false;
|
||||
let speakingIdx: number | undefined;
|
||||
|
||||
|
|
@ -178,51 +185,25 @@
|
|||
}
|
||||
};
|
||||
|
||||
const playAudio = (idx: number) => {
|
||||
return new Promise<void>((res) => {
|
||||
speakingIdx = idx;
|
||||
const audio = audioParts[idx];
|
||||
const stopAudio = () => {
|
||||
try {
|
||||
speechSynthesis.cancel();
|
||||
$audioQueue.stop();
|
||||
} catch {}
|
||||
|
||||
if (!audio) {
|
||||
return res();
|
||||
}
|
||||
|
||||
audio.play();
|
||||
audio.onended = async () => {
|
||||
await new Promise((r) => setTimeout(r, 300));
|
||||
|
||||
if (Object.keys(audioParts).length - 1 === idx) {
|
||||
speaking = false;
|
||||
}
|
||||
|
||||
res();
|
||||
};
|
||||
});
|
||||
};
|
||||
|
||||
const toggleSpeakMessage = async () => {
|
||||
if (speaking) {
|
||||
try {
|
||||
speechSynthesis.cancel();
|
||||
|
||||
if (speakingIdx !== undefined && audioParts[speakingIdx]) {
|
||||
audioParts[speakingIdx]!.pause();
|
||||
audioParts[speakingIdx]!.currentTime = 0;
|
||||
}
|
||||
} catch {}
|
||||
|
||||
speaking = false;
|
||||
speakingIdx = undefined;
|
||||
return;
|
||||
}
|
||||
};
|
||||
|
||||
const speak = async () => {
|
||||
if (!(message?.content ?? '').trim().length) {
|
||||
toast.info($i18n.t('No content to speak'));
|
||||
return;
|
||||
}
|
||||
|
||||
speaking = true;
|
||||
|
||||
const content = removeAllDetails(message.content);
|
||||
|
||||
if ($config.audio.tts.engine === '') {
|
||||
|
|
@ -241,12 +222,12 @@
|
|||
|
||||
console.log(voice);
|
||||
|
||||
const speak = new SpeechSynthesisUtterance(content);
|
||||
speak.rate = $settings.audio?.tts?.playbackRate ?? 1;
|
||||
const speech = new SpeechSynthesisUtterance(content);
|
||||
speech.rate = $settings.audio?.tts?.playbackRate ?? 1;
|
||||
|
||||
console.log(speak);
|
||||
console.log(speech);
|
||||
|
||||
speak.onend = () => {
|
||||
speech.onend = () => {
|
||||
speaking = false;
|
||||
if ($settings.conversationMode) {
|
||||
document.getElementById('voice-input-button')?.click();
|
||||
|
|
@ -254,15 +235,21 @@
|
|||
};
|
||||
|
||||
if (voice) {
|
||||
speak.voice = voice;
|
||||
speech.voice = voice;
|
||||
}
|
||||
|
||||
speechSynthesis.speak(speak);
|
||||
speechSynthesis.speak(speech);
|
||||
}
|
||||
}, 100);
|
||||
} else {
|
||||
loadingSpeech = true;
|
||||
$audioQueue.setId(`${message.id}`);
|
||||
$audioQueue.setPlaybackRate($settings.audio?.tts?.playbackRate ?? 1);
|
||||
$audioQueue.onStopped = () => {
|
||||
speaking = false;
|
||||
speakingIdx = undefined;
|
||||
};
|
||||
|
||||
loadingSpeech = true;
|
||||
const messageContentParts: string[] = getMessageContentParts(
|
||||
content,
|
||||
$config?.audio?.tts?.split_on ?? 'punctuation'
|
||||
|
|
@ -278,17 +265,6 @@
|
|||
}
|
||||
|
||||
console.debug('Prepared message content for TTS', messageContentParts);
|
||||
|
||||
audioParts = messageContentParts.reduce(
|
||||
(acc, _sentence, idx) => {
|
||||
acc[idx] = null;
|
||||
return acc;
|
||||
},
|
||||
{} as typeof audioParts
|
||||
);
|
||||
|
||||
let lastPlayedAudioPromise = Promise.resolve(); // Initialize a promise that resolves immediately
|
||||
|
||||
if ($settings.audio?.tts?.engine === 'browser-kokoro') {
|
||||
if (!$TTSWorker) {
|
||||
await TTSWorker.set(
|
||||
|
|
@ -315,12 +291,9 @@
|
|||
});
|
||||
|
||||
if (blob) {
|
||||
const audio = new Audio(blob);
|
||||
audio.playbackRate = $settings.audio?.tts?.playbackRate ?? 1;
|
||||
|
||||
audioParts[idx] = audio;
|
||||
const url = URL.createObjectURL(blob);
|
||||
$audioQueue.enqueue(url);
|
||||
loadingSpeech = false;
|
||||
lastPlayedAudioPromise = lastPlayedAudioPromise.then(() => playAudio(idx));
|
||||
}
|
||||
}
|
||||
} else {
|
||||
|
|
@ -341,13 +314,10 @@
|
|||
|
||||
if (res) {
|
||||
const blob = await res.blob();
|
||||
const blobUrl = URL.createObjectURL(blob);
|
||||
const audio = new Audio(blobUrl);
|
||||
audio.playbackRate = $settings.audio?.tts?.playbackRate ?? 1;
|
||||
const url = URL.createObjectURL(blob);
|
||||
|
||||
audioParts[idx] = audio;
|
||||
$audioQueue.enqueue(url);
|
||||
loadingSpeech = false;
|
||||
lastPlayedAudioPromise = lastPlayedAudioPromise.then(() => playAudio(idx));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -620,7 +590,7 @@
|
|||
<div class="flex-auto w-0 pl-1 relative">
|
||||
<Name>
|
||||
<Tooltip content={model?.name ?? message.model} placement="top-start">
|
||||
<span class="line-clamp-1 text-black dark:text-white">
|
||||
<span id="response-message-model-name" class="line-clamp-1 text-black dark:text-white">
|
||||
{model?.name ?? message.model}
|
||||
</span>
|
||||
</Tooltip>
|
||||
|
|
@ -648,10 +618,7 @@
|
|||
<div class="chat-{message.role} w-full min-w-full markdown-prose">
|
||||
<div>
|
||||
{#if model?.info?.meta?.capabilities?.status_updates ?? true}
|
||||
<StatusHistory
|
||||
statusHistory={message?.statusHistory}
|
||||
expand={message?.content === ''}
|
||||
/>
|
||||
<StatusHistory statusHistory={message?.statusHistory} />
|
||||
{/if}
|
||||
|
||||
{#if message?.files && message.files?.filter((f) => f.type === 'image').length > 0}
|
||||
|
|
@ -995,7 +962,11 @@
|
|||
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition"
|
||||
on:click={() => {
|
||||
if (!loadingSpeech) {
|
||||
toggleSpeakMessage();
|
||||
if (speaking) {
|
||||
stopAudio();
|
||||
} else {
|
||||
speak();
|
||||
}
|
||||
}
|
||||
}}
|
||||
>
|
||||
|
|
|
|||
|
|
@@ -29,38 +29,6 @@
{#if history && history.length > 0}
{#if status?.hidden !== true}
<div class="text-sm flex flex-col w-full">
{#if showHistory}
<div class="flex flex-row">
{#if history.length > 1}
<div class="w-full">
{#each history as status, idx}
{#if idx !== history.length - 1}
<div class="flex items-stretch gap-2 mb-1">
<div class=" ">
<div class="pt-3 px-1 mb-1.5">
<span
class="relative flex size-1.5 rounded-full justify-center items-center"
>
<span
class="relative inline-flex size-1.5 rounded-full bg-gray-500 dark:bg-gray-300"
></span>
</span>
</div>

<div
class="w-[0.5px] ml-[6.5px] h-[calc(100%-14px)] bg-gray-300 dark:bg-gray-700"
/>
</div>

<StatusItem {status} done={true} />
</div>
{/if}
{/each}
</div>
{/if}
</div>
{/if}

<button
class="w-full"
on:click={() => {

@@ -68,23 +36,38 @@
}}
>
<div class="flex items-start gap-2">
{#if history.length > 1}
<div class="pt-3 px-1">
<span class="relative flex size-1.5 rounded-full justify-center items-center">
{#if status?.done === false}
<span
class="absolute inline-flex h-full w-full animate-ping rounded-full bg-gray-500 dark:bg-gray-300 opacity-75"
></span>
{/if}
<span
class="relative inline-flex size-1.5 rounded-full bg-gray-500 dark:bg-gray-300"
></span>
</span>
</div>
{/if}
<StatusItem {status} />
</div>
</button>

{#if showHistory}
<div class="flex flex-row">
{#if history.length > 1}
<div class="w-full">
{#each history as status, idx}
<div class="flex items-stretch gap-2 mb-1">
<div class=" ">
<div class="pt-3 px-1 mb-1.5">
<span class="relative flex size-1.5 rounded-full justify-center items-center">
<span
class="relative inline-flex size-1.5 rounded-full bg-gray-500 dark:bg-gray-400"
></span>
</span>
</div>
{#if idx !== history.length - 1}
<div
class="w-[0.5px] ml-[6.5px] h-[calc(100%-14px)] bg-gray-300 dark:bg-gray-700"
/>
{/if}
</div>

<StatusItem {status} done={true} />
</div>
{/each}
</div>
{/if}
</div>
{/if}
</div>
{/if}
{/if}

@@ -121,7 +121,10 @@
if (selectedTag === '') {
return true;
}
return (item.model?.tags ?? []).map((tag) => tag.name).includes(selectedTag);

return (item.model?.tags ?? [])
.map((tag) => tag.name.toLowerCase())
.includes(selectedTag.toLowerCase());
})
.filter((item) => {
if (selectedConnectionType === '') {

@@ -139,7 +142,9 @@
if (selectedTag === '') {
return true;
}
return (item.model?.tags ?? []).map((tag) => tag.name).includes(selectedTag);
return (item.model?.tags ?? [])
.map((tag) => tag.name.toLowerCase())
.includes(selectedTag.toLowerCase());
})
.filter((item) => {
if (selectedConnectionType === '') {

@@ -315,8 +320,7 @@
tags = items
.filter((item) => !(item.model?.info?.meta?.hidden ?? false))
.flatMap((item) => item.model?.tags ?? [])
.map((tag) => tag.name);

.map((tag) => tag.name.toLowerCase());
// Remove duplicates and sort
tags = Array.from(new Set(tags)).sort((a, b) => a.localeCompare(b));
}
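The three hunks above make model tag filtering case-insensitive by lowercasing both the tag names on each model and the currently selected tag. A small standalone TypeScript sketch of the same comparison; the `item.model.tags` shape is taken from the hunks, everything else is illustrative.

```ts
type TaggedItem = { model?: { tags?: { name: string }[] } };

const matchesTag = (item: TaggedItem, selectedTag: string): boolean => {
	if (selectedTag === '') {
		return true; // no tag selected: keep everything
	}
	return (item.model?.tags ?? [])
		.map((tag) => tag.name.toLowerCase())
		.includes(selectedTag.toLowerCase());
};

// e.g. matchesTag({ model: { tags: [{ name: 'Coding' }] } }, 'coding') === true
```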
@@ -73,6 +73,7 @@
<nav class="sticky top-0 z-30 w-full py-1 -mb-8 flex flex-col items-center drag-region">
<div class="flex items-center w-full pl-1.5 pr-1">
<div
id="navbar-bg-gradient-to-b"
class=" bg-linear-to-b via-40% to-97% from-white via-white to-transparent dark:from-gray-900 dark:via-gray-900 dark:to-transparent pointer-events-none absolute inset-0 -bottom-7 z-[-1]"
></div>

@@ -65,6 +65,36 @@
<div class="flex flex-col md:flex-row w-full md:space-x-2 dark:text-gray-200">
<div class="flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<div class=" grid grid-cols-1 sm:grid-cols-2 gap-2 gap-x-4 w-full">
<!-- {$i18n.t('Chat')} -->
<!-- {$i18n.t('Global')} -->
<!-- {$i18n.t('Input')} -->
<!-- {$i18n.t('Message')} -->

<!-- {$i18n.t('New Chat')} -->
<!-- {$i18n.t('New Temporary Chat')} -->
<!-- {$i18n.t('Delete Chat')} -->
<!-- {$i18n.t('Search')} -->
<!-- {$i18n.t('Open Settings')} -->
<!-- {$i18n.t('Show Shortcuts')} -->
<!-- {$i18n.t('Toggle Sidebar')} -->
<!-- {$i18n.t('Close Modal')} -->
<!-- {$i18n.t('Focus Chat Input')} -->
<!-- {$i18n.t('Accept Autocomplete Generation\nJump to Prompt Variable')} -->
<!-- {$i18n.t('Prevent File Creation')} -->
<!-- {$i18n.t('Attach File From Knowledge')} -->
<!-- {$i18n.t('Add Custom Prompt')} -->
<!-- {$i18n.t('Talk to Model')} -->
<!-- {$i18n.t('Generate Message Pair')} -->
<!-- {$i18n.t('Regenerate Response')} -->
<!-- {$i18n.t('Stop Generating')} -->
<!-- {$i18n.t('Edit Last Message')} -->
<!-- {$i18n.t('Copy Last Response')} -->
<!-- {$i18n.t('Copy Last Code Block')} -->

<!-- {$i18n.t('Only active when "Paste Large Text as File" setting is toggled on.')} -->
<!-- {$i18n.t('Only active when the chat input is in focus.')} -->
<!-- {$i18n.t('Only active when the chat input is in focus and an LLM is generating a response.')} -->
<!-- {$i18n.t('Only can be triggered when the chat input is in focus.')} -->
{#each items as shortcut}
<div class="col-span-1 flex items-start">
<ShortcutItem {shortcut} {isMac} />

@@ -84,7 +84,7 @@
<div class="h-40 w-full">
{#if filteredPrompts.length > 0}
<div role="list" class="max-h-40 overflow-auto scrollbar-none items-start {className}">
{#each filteredPrompts as prompt, idx (prompt.id || prompt.content)}
{#each filteredPrompts as prompt, idx (prompt.id || `${prompt.content}-${idx}`)}
<!-- svelte-ignore a11y-no-interactive-element-to-noninteractive-role -->
<button
role="listitem"
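The one-line change just above keeps the `{#each}` key unique even when two prompts share the same `content` and have no `id`, by appending the index as a fallback. A tiny TypeScript sketch of that key derivation; the `Prompt` type is an assumption mirroring the hunk.

```ts
type Prompt = { id?: string; content: string };

// Fall back to content plus index so duplicate contents still yield distinct keys.
const keyFor = (prompt: Prompt, idx: number): string =>
	prompt.id || `${prompt.content}-${idx}`;

// keyFor({ content: 'Summarize' }, 0) === 'Summarize-0'
// keyFor({ content: 'Summarize' }, 3) === 'Summarize-3'
```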
src/lib/components/common/CodeEditorModal.svelte (new file, 65 lines)

@@ -0,0 +1,65 @@
<script lang="ts">
import { onMount, getContext } from 'svelte';

import CodeEditor from './CodeEditor.svelte';
import Drawer from './Drawer.svelte';

const i18n = getContext('i18n');

let {
show = $bindable(),
value = $bindable(),
lang = 'python',
onChange = () => {},
onSave = () => {}
} = $props();

let boilerplate = ``;

let codeEditor = $state(null);
let _content = $state(value);

$effect(() => {
if (_content) {
value = _content;
}
});
</script>

<Drawer bind:show>
<div class="flex h-full flex-col">
<div
class=" sticky top-0 z-30 flex justify-between bg-white px-4.5 pt-3 pb-3 dark:bg-gray-900 dark:text-gray-100"
>
<div class=" font-primary self-center text-lg font-medium">
{$i18n.t('Code Editor')}
</div>
<button
class="self-center"
aria-label="Close"
onclick={() => {
show = false;
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="h-5 w-5"
>
<path
d="M6.28 5.22a.75.75 0 00-1.06 1.06L8.94 10l-3.72 3.72a.75.75 0 101.06 1.06L10 11.06l3.72 3.72a.75.75 0 101.06-1.06L11.06 10l3.72-3.72a.75.75 0 00-1.06-1.06L10 8.94 6.28 5.22z"
/>
</svg>
</button>
</div>

<div
class="flex h-full w-full flex-1 flex-col md:flex-row md:space-x-4 dark:text-gray-200 overflow-y-auto"
>
<div class=" flex h-full w-full flex-col sm:flex-row sm:justify-center sm:space-x-6">
<CodeEditor bind:this={codeEditor} {value} {boilerplate} {lang} {onChange} {onSave} />
</div>
</div>
</div>
</Drawer>

@@ -2,7 +2,6 @@
import { onDestroy, onMount } from 'svelte';
import { flyAndScale } from '$lib/utils/transitions';
import { fade, fly, slide } from 'svelte/transition';
import { isApp } from '$lib/stores';

export let show = false;
export let className = '';

@@ -54,26 +53,25 @@

<!-- svelte-ignore a11y-click-events-have-key-events -->
<!-- svelte-ignore a11y-no-static-element-interactions -->

<div
bind:this={modalElement}
class="modal fixed right-0 {$isApp
? ' ml-[4.5rem] max-w-[calc(100%-4.5rem)]'
: ''} left-0 bottom-0 bg-black/60 w-full h-screen max-h-[100dvh] flex justify-center z-999 overflow-hidden overscroll-contain"
in:fly={{ y: 100, duration: 100 }}
on:mousedown={() => {
show = false;
}}
>
{#if show}
<div
class=" mt-auto w-full bg-gray-50 dark:bg-gray-900 dark:text-gray-100 {className} max-h-[100dvh] overflow-y-auto scrollbar-hidden"
on:mousedown={(e) => {
e.stopPropagation();
bind:this={modalElement}
class="modal fixed right-0 bottom-0 left-0 z-999 flex h-screen max-h-[100dvh] w-full justify-center overflow-hidden overscroll-contain bg-black/60"
in:fly={{ y: 100, duration: 100 }}
on:mousedown={() => {
show = false;
}}
>
<slot />
<div
class=" mt-auto w-full bg-gray-50 dark:bg-gray-900 dark:text-gray-100 {className} scrollbar-hidden max-h-[100dvh] overflow-y-auto"
on:mousedown={(e) => {
e.stopPropagation();
}}
>
<slot />
</div>
</div>
</div>
{/if}

<style>
.modal-content {

@@ -161,7 +161,7 @@
</div>

{#if edit}
<div>
<div class=" self-end">
<Tooltip
content={enableFullContent
? $i18n.t(

@@ -205,7 +205,7 @@
</div>
{:else if isPDF}
<div
class="flex mb-2.5 scrollbar-none overflow-x-auto w-full border-b border-gray-100 dark:border-gray-800 text-center text-sm font-medium bg-transparent dark:text-gray-200"
class="flex mb-2.5 scrollbar-none overflow-x-auto w-full border-b border-gray-50 dark:border-gray-850 text-center text-sm font-medium bg-transparent dark:text-gray-200"
>
<button
class="min-w-fit py-1.5 px-4 border-b {selectedTab === ''

@@ -1,3 +1,6 @@
import { mount, unmount } from 'svelte';
import { createClassComponent } from 'svelte/legacy';

import tippy from 'tippy.js';

export function getSuggestionRenderer(Component: any, ComponentProps = {}) {

@@ -15,7 +18,8 @@ export function getSuggestionRenderer(Component: any, ComponentProps = {}) {
document.body.appendChild(container);

// mount Svelte component
component = new Component({
component = createClassComponent({
component: Component,
target: container,
props: {
char: props?.text,

@@ -104,7 +108,12 @@ export function getSuggestionRenderer(Component: any, ComponentProps = {}) {
popup?.destroy();
popup = null;

component?.$destroy();
try {
component.$destroy();
} catch (e) {
console.error('Error unmounting component:', e);
}

component = null;

if (container?.parentNode) container.parentNode.removeChild(container);
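The suggestion-renderer hunks above swap the Svelte 4 `new Component(...)` constructor for `createClassComponent` from `svelte/legacy` and guard teardown with a try/catch. A condensed TypeScript sketch of that mount-and-cleanup pattern, assuming Svelte 5; `mountSuggestion` and `SuggestionList` are illustrative names, not the renderer's actual API.

```ts
import { createClassComponent } from 'svelte/legacy';

// Sketch: mount a component with the Svelte 4-style class API on top of Svelte 5,
// and tear it down defensively when the popup closes.
export function mountSuggestion(
	SuggestionList: any,
	container: HTMLElement,
	props: Record<string, any>
) {
	const component = createClassComponent({
		component: SuggestionList,
		target: container,
		props
	});

	return () => {
		try {
			component.$destroy(); // legacy-style teardown can throw if already unmounted
		} catch (e) {
			console.error('Error unmounting component:', e);
		}
		container.parentNode?.removeChild(container);
	};
}
```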
@@ -11,7 +11,9 @@
export let className =
'w-full rounded-lg px-3.5 py-2 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden h-full';

export let onInput = () => {};
export let onBlur = () => {};

let textareaElement;

// Adjust height on mount and after setting the element.

@@ -58,6 +60,8 @@
{readonly}
on:input={(e) => {
resize();

onInput(e);
}}
on:focus={() => {
resize();

@@ -5,7 +5,6 @@

import Switch from './Switch.svelte';
import MapSelector from './Valves/MapSelector.svelte';
import { split } from 'postcss/lib/list';

export let valvesSpec = null;
export let valves = {};

@@ -168,7 +167,7 @@
on:change={() => {
dispatch('change');
}}
/>
></textarea>
{/if}
</div>
</div>

@@ -56,12 +56,7 @@
}

const loadChatPreview = async (selectedIdx) => {
if (
!chatList ||
chatList.length === 0 ||
selectedIdx === null ||
chatList[selectedIdx] === undefined
) {
if (!chatList || chatList.length === 0 || selectedIdx === null) {
selectedChat = null;
messages = null;
history = null;

@@ -70,8 +65,11 @@
}

const selectedChatIdx = selectedIdx - actions.length;
if (selectedChatIdx < 0) {
if (selectedChatIdx < 0 || selectedChatIdx >= chatList.length) {
selectedChat = null;
messages = null;
history = null;
selectedModels = [''];
return;
}
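The last two hunks simplify the preview guard: instead of probing `chatList[selectedIdx]` directly, the code derives the chat index after the action rows and bounds-checks it. A compact TypeScript sketch of that guard; the variable names mirror the hunk, while the function wrapper and types are assumptions.

```ts
// selectedIdx indexes a combined list: `actions` rows first, then `chatList` rows.
const resolveChatIndex = (
	selectedIdx: number | null,
	actions: unknown[],
	chatList: unknown[] | null
): number | null => {
	if (!chatList || chatList.length === 0 || selectedIdx === null) {
		return null;
	}
	const selectedChatIdx = selectedIdx - actions.length;
	// Reject indexes that point at an action row or run past the end of the chat list.
	if (selectedChatIdx < 0 || selectedChatIdx >= chatList.length) {
		return null;
	}
	return selectedChatIdx;
};
```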
@@ -183,6 +183,7 @@
console.log('initChatList');
currentChatPage.set(1);
allChatsLoaded = false;
scrollPaginationEnabled.set(false);

initFolders();
await Promise.all([

@@ -368,10 +369,6 @@
navElement.style['-webkit-app-region'] = 'drag';
}
}

if (!$showSidebar && !value) {
showSidebar.set(true);
}
}),
showSidebar.subscribe(async (value) => {
localStorage.sidebar = value;

@@ -751,7 +748,10 @@
</a>

<a href="/" class="flex flex-1 px-1.5" on:click={newChatHandler}>
<div class=" self-center font-medium text-gray-850 dark:text-white font-primary">
<div
id="sidebar-webui-name"
class=" self-center font-medium text-gray-850 dark:text-white font-primary"
>
{$WEBUI_NAME}
</div>
</a>

@@ -76,7 +76,7 @@
.filter((tag) => {
const tagName = lastWord.slice(4);
if (tagName) {
const tagId = tagName.replace(' ', '_').toLowerCase();
const tagId = tagName.replaceAll(' ', '_').toLowerCase();

if (tag.id !== tagId) {
return tag.id.startsWith(tagId);

@@ -99,8 +99,8 @@
.filter((folder) => {
const folderName = lastWord.slice(7);
if (folderName) {
const id = folder.name.replace(' ', '_').toLowerCase();
const folderId = folderName.replace(' ', '_').toLowerCase();
const id = folder.name.replaceAll(' ', '_').toLowerCase();
const folderId = folderName.replaceAll(' ', '_').toLowerCase();

if (id !== folderId) {
return id.startsWith(folderId);

@@ -113,7 +113,7 @@
})
.map((folder) => {
return {
id: folder.name.replace(' ', '_').toLowerCase(),
id: folder.name.replaceAll(' ', '_').toLowerCase(),
name: folder.name,
type: 'folder'
};
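The `replace` to `replaceAll` hunks above matter because `String.prototype.replace` with a string pattern only replaces the first occurrence, so multi-word tag and folder names were slugified incompletely. A quick TypeScript illustration of the difference:

```ts
const name = 'my project notes';

// replace() with a string pattern only touches the first space:
const once = name.replace(' ', '_').toLowerCase(); // 'my_project notes'

// replaceAll() handles every occurrence, which is what the search IDs need:
const every = name.replaceAll(' ', '_').toLowerCase(); // 'my_project_notes'
```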
@@ -66,8 +66,8 @@
<DropdownMenu.Content
class="w-full {className} rounded-2xl px-1 py-1 border border-gray-100 dark:border-gray-800 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-lg text-sm"
sideOffset={4}
side="bottom"
align="start"
side="top"
align="end"
transition={(e) => fade(e, { duration: 100 })}
>
<DropdownMenu.Item

@@ -38,11 +38,12 @@
WEBUI_NAME
} from '$lib/stores';

import NotePanel from '$lib/components/notes/NotePanel.svelte';
import { downloadPdf } from './utils';

import Controls from './NoteEditor/Controls.svelte';
import Chat from './NoteEditor/Chat.svelte';

import NotePanel from '$lib/components/notes/NotePanel.svelte';
import AccessControlModal from '$lib/components/workspace/common/AccessControlModal.svelte';

async function loadLocale(locales) {

@@ -566,117 +567,11 @@ ${content}
const blob = new Blob([note.data.content.md], { type: 'text/markdown' });
saveAs(blob, `${note.title}.md`);
} else if (type === 'pdf') {
await downloadPdf(note);
}
};

const downloadPdf = async (note) => {
try {
const [{ default: jsPDF }, { default: html2canvas }] = await Promise.all([
import('jspdf'),
import('html2canvas-pro')
]);

// Define a fixed virtual screen size
const virtualWidth = 1024; // Fixed width (adjust as needed)
const virtualHeight = 1400; // Fixed height (adjust as needed)

// STEP 1. Get a DOM node to render
const html = note.data?.content?.html ?? '';
const isDarkMode = document.documentElement.classList.contains('dark');

let node;
if (html instanceof HTMLElement) {
node = html;
} else {
const virtualWidth = 800; // px, fixed width for cloned element

// Clone and style
node = document.createElement('div');

// title node
const titleNode = document.createElement('div');
titleNode.textContent = note.title;
titleNode.style.fontSize = '24px';
titleNode.style.fontWeight = 'medium';
titleNode.style.paddingBottom = '20px';
titleNode.style.color = isDarkMode ? 'white' : 'black';
node.appendChild(titleNode);

const contentNode = document.createElement('div');

contentNode.innerHTML = html;

node.appendChild(contentNode);

node.classList.add('text-black');
node.classList.add('dark:text-white');
node.style.width = `${virtualWidth}px`;
node.style.position = 'absolute';
node.style.left = '-9999px';
node.style.height = 'auto';
node.style.padding = '40px 40px';

console.log(node);
document.body.appendChild(node);
try {
await downloadPdf(note);
} catch (error) {
toast.error(`${error}`);
}

// Render to canvas with predefined width
const canvas = await html2canvas(node, {
useCORS: true,
backgroundColor: isDarkMode ? '#000' : '#fff',
scale: 2, // Keep at 1x to avoid unexpected enlargements
width: virtualWidth, // Set fixed virtual screen width
windowWidth: virtualWidth, // Ensure consistent rendering
windowHeight: virtualHeight
});

// Remove hidden node if needed
if (!(html instanceof HTMLElement)) {
document.body.removeChild(node);
}

const imgData = canvas.toDataURL('image/jpeg', 0.7);

// A4 page settings
const pdf = new jsPDF('p', 'mm', 'a4');
const imgWidth = 210; // A4 width in mm
const pageWidthMM = 210; // A4 width in mm
const pageHeight = 297; // A4 height in mm
const pageHeightMM = 297; // A4 height in mm

if (isDarkMode) {
pdf.setFillColor(0, 0, 0);
pdf.rect(0, 0, pageWidthMM, pageHeightMM, 'F'); // black bg
}

// Maintain aspect ratio
const imgHeight = (canvas.height * imgWidth) / canvas.width;
let heightLeft = imgHeight;
let position = 0;

pdf.addImage(imgData, 'JPEG', 0, position, imgWidth, imgHeight);
heightLeft -= pageHeight;

// Handle additional pages
while (heightLeft > 0) {
position -= pageHeight;
pdf.addPage();

if (isDarkMode) {
pdf.setFillColor(0, 0, 0);
pdf.rect(0, 0, pageWidthMM, pageHeightMM, 'F'); // black bg
}

pdf.addImage(imgData, 'JPEG', 0, position, imgWidth, imgHeight);
heightLeft -= pageHeight;
}

pdf.save(`${note.title}.pdf`);
} catch (error) {
console.error('Error generating PDF', error);

toast.error(`${error}`);
}
};

@@ -35,12 +35,7 @@
import { goto } from '$app/navigation';
import { onMount, tick, getContext } from 'svelte';

import {
OLLAMA_API_BASE_URL,
OPENAI_API_BASE_URL,
WEBUI_API_BASE_URL,
WEBUI_BASE_URL
} from '$lib/constants';
import { WEBUI_BASE_URL } from '$lib/constants';
import { WEBUI_NAME, config, user, models, settings } from '$lib/stores';

import { chatCompletion } from '$lib/apis/openai';

@@ -189,7 +184,10 @@ Based on the user's instruction, update and enhance the existing notes or select
{
model: model.id,
stream: true,
messages: chatMessages
messages: chatMessages.map((m) => ({
role: m.role,
content: m.content
}))
// ...(files && files.length > 0 ? { files } : {}) // TODO: Decide whether to use native file handling or not
},
`${WEBUI_BASE_URL}/api`

@@ -327,7 +325,7 @@ Based on the user's instruction, update and enhance the existing notes or select
});
</script>

<div class="flex items-center mb-1.5 pt-1.5 px-2.5">
<div class="flex items-center mb-1.5 pt-1.5">
<div class="flex items-center mr-1">
<button
class="p-0.5 bg-transparent transition rounded-lg"

@@ -358,7 +356,7 @@ Based on the user's instruction, update and enhance the existing notes or select
</div>
</div>

<div class="flex flex-col items-center flex-1 @container px-2.5">
<div class="flex flex-col items-center flex-1 @container">
<div class=" flex flex-col justify-between w-full overflow-y-auto h-full">
<div class="mx-auto w-full md:px-0 h-full relative">
<div class=" flex flex-col h-full">

@@ -377,7 +375,7 @@ Based on the user's instruction, update and enhance the existing notes or select

<div class=" pb-[1rem]">
{#if selectedContent}
<div class="text-xs rounded-xl px-3.5 py-3 w-full markdown-prose-xs">
<div class="text-xs rounded-xl px-2.5 py-3 w-full markdown-prose-xs">
<blockquote>
<div class=" line-clamp-3">
{selectedContent?.text}
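One of the hunks above strips each chat message down to `role` and `content` before handing it to the completions request, so UI-only fields do not leak into the payload. A hedged TypeScript sketch of that mapping; the payload shape follows the hunk, while the message type and `buildPayload` helper are assumptions.

```ts
type UiMessage = {
	role: 'user' | 'assistant' | 'system';
	content: string;
	[extra: string]: unknown; // ids, timestamps, attachments, etc.
};

const buildPayload = (modelId: string, chatMessages: UiMessage[]) => ({
	model: modelId,
	stream: true,
	// Drop UI-only fields before the request body is serialized.
	messages: chatMessages.map((m) => ({
		role: m.role,
		content: m.content
	}))
});
```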
@@ -17,7 +17,7 @@
};
</script>

<div class="flex items-center mb-1.5 pt-1.5 px-2.5">
<div class="flex items-center mb-1.5 pt-1.5">
<div class=" mr-1 flex items-center">
<button
class="p-0.5 bg-transparent transition rounded-lg"

@@ -36,10 +36,10 @@
</div>
</div>

<div class="mt-1 px-2.5">
<div class="mt-1 px-1.5">
<div class="pb-10">
{#if files.length > 0}
<div class=" text-xs font-medium pb-1">{$i18n.t('Files')}</div>
<div class=" text-xs font-medium mb-2">{$i18n.t('Files')}</div>

<div class="flex flex-col gap-1">
{#each files.filter((file) => file.type !== 'image') as file, fileIdx}

@@ -99,7 +99,7 @@
{#if show}
<div class="flex max-h-full min-h-full">
<div
class="w-full pt-2 bg-white dark:shadow-lg dark:bg-gray-850 z-40 pointer-events-auto overflow-y-auto scrollbar-hidden flex flex-col"
class="w-full pt-2 bg-white dark:shadow-lg dark:bg-gray-850 z-40 pointer-events-auto overflow-y-auto scrollbar-hidden flex flex-col px-2"
>
<slot />
</div>

@@ -36,6 +36,8 @@
import { createNewNote, deleteNoteById, getNotes } from '$lib/apis/notes';
import { capitalizeFirstLetter, copyToClipboard, getTimeRange } from '$lib/utils';

import { downloadPdf } from './utils';

import EllipsisHorizontal from '../icons/EllipsisHorizontal.svelte';
import DeleteConfirmDialog from '$lib/components/common/ConfirmDialog.svelte';
import Search from '../icons/Search.svelte';

@@ -124,82 +126,18 @@
};

const downloadHandler = async (type) => {
console.log('downloadHandler', type);
console.log('selectedNote', selectedNote);
if (type === 'md') {
if (type === 'txt') {
const blob = new Blob([selectedNote.data.content.md], { type: 'text/plain' });
saveAs(blob, `${selectedNote.title}.txt`);
} else if (type === 'md') {
const blob = new Blob([selectedNote.data.content.md], { type: 'text/markdown' });
saveAs(blob, `${selectedNote.title}.md`);
} else if (type === 'pdf') {
await downloadPdf(selectedNote);
}
};

const downloadPdf = async (note) => {
try {
const [{ default: jsPDF }, { default: html2canvas }] = await Promise.all([
import('jspdf'),
import('html2canvas-pro')
]);

// Define a fixed virtual screen size
const virtualWidth = 1024; // Fixed width (adjust as needed)
const virtualHeight = 1400; // Fixed height (adjust as needed)

// STEP 1. Get a DOM node to render
const html = note.data?.content?.html ?? '';
let node;
if (html instanceof HTMLElement) {
node = html;
} else {
// If it's HTML string, render to a temporary hidden element
node = document.createElement('div');
node.innerHTML = html;
document.body.appendChild(node);
try {
await downloadPdf(selectedNote);
} catch (error) {
toast.error(`${error}`);
}

// Render to canvas with predefined width
const canvas = await html2canvas(node, {
useCORS: true,
scale: 2, // Keep at 1x to avoid unexpected enlargements
width: virtualWidth, // Set fixed virtual screen width
windowWidth: virtualWidth, // Ensure consistent rendering
windowHeight: virtualHeight
});

// Remove hidden node if needed
if (!(html instanceof HTMLElement)) {
document.body.removeChild(node);
}

const imgData = canvas.toDataURL('image/jpeg', 0.7);

// A4 page settings
const pdf = new jsPDF('p', 'mm', 'a4');
const imgWidth = 210; // A4 width in mm
const pageHeight = 297; // A4 height in mm

// Maintain aspect ratio
const imgHeight = (canvas.height * imgWidth) / canvas.width;
let heightLeft = imgHeight;
let position = 0;

pdf.addImage(imgData, 'JPEG', 0, position, imgWidth, imgHeight);
heightLeft -= pageHeight;

// Handle additional pages
while (heightLeft > 0) {
position -= pageHeight;
pdf.addPage();

pdf.addImage(imgData, 'JPEG', 0, position, imgWidth, imgHeight);
heightLeft -= pageHeight;
}

pdf.save(`${note.title}.pdf`);
} catch (error) {
console.error('Error generating PDF', error);

toast.error(`${error}`);
}
};

src/lib/components/notes/utils.ts (new file, 103 lines)

@@ -0,0 +1,103 @@
export const downloadPdf = async (note) => {
const [{ default: jsPDF }, { default: html2canvas }] = await Promise.all([
import('jspdf'),
import('html2canvas-pro')
]);

// Define a fixed virtual screen size
const virtualWidth = 1024; // Fixed width (adjust as needed)
const virtualHeight = 1400; // Fixed height (adjust as needed)

// STEP 1. Get a DOM node to render
const html = note.data?.content?.html ?? '';
const isDarkMode = document.documentElement.classList.contains('dark');

let node;
if (html instanceof HTMLElement) {
node = html;
} else {
const virtualWidth = 800; // px, fixed width for cloned element

// Clone and style
node = document.createElement('div');

// title node
const titleNode = document.createElement('div');
titleNode.textContent = note.title;
titleNode.style.fontSize = '24px';
titleNode.style.fontWeight = 'medium';
titleNode.style.paddingBottom = '20px';
titleNode.style.color = isDarkMode ? 'white' : 'black';
node.appendChild(titleNode);

const contentNode = document.createElement('div');

contentNode.innerHTML = html;

node.appendChild(contentNode);

node.classList.add('text-black');
node.classList.add('dark:text-white');
node.style.width = `${virtualWidth}px`;
node.style.position = 'absolute';
node.style.left = '-9999px';
node.style.height = 'auto';
node.style.padding = '40px 40px';

console.log(node);
document.body.appendChild(node);
}

// Render to canvas with predefined width
const canvas = await html2canvas(node, {
useCORS: true,
backgroundColor: isDarkMode ? '#000' : '#fff',
scale: 2, // Keep at 1x to avoid unexpected enlargements
width: virtualWidth, // Set fixed virtual screen width
windowWidth: virtualWidth, // Ensure consistent rendering
windowHeight: virtualHeight
});

// Remove hidden node if needed
if (!(html instanceof HTMLElement)) {
document.body.removeChild(node);
}

const imgData = canvas.toDataURL('image/jpeg', 0.7);

// A4 page settings
const pdf = new jsPDF('p', 'mm', 'a4');
const imgWidth = 210; // A4 width in mm
const pageWidthMM = 210; // A4 width in mm
const pageHeight = 297; // A4 height in mm
const pageHeightMM = 297; // A4 height in mm

if (isDarkMode) {
pdf.setFillColor(0, 0, 0);
pdf.rect(0, 0, pageWidthMM, pageHeightMM, 'F'); // black bg
}

// Maintain aspect ratio
const imgHeight = (canvas.height * imgWidth) / canvas.width;
let heightLeft = imgHeight;
let position = 0;

pdf.addImage(imgData, 'JPEG', 0, position, imgWidth, imgHeight);
heightLeft -= pageHeight;

// Handle additional pages
while (heightLeft > 0) {
position -= pageHeight;
pdf.addPage();

if (isDarkMode) {
pdf.setFillColor(0, 0, 0);
pdf.rect(0, 0, pageWidthMM, pageHeightMM, 'F'); // black bg
}

pdf.addImage(imgData, 'JPEG', 0, position, imgWidth, imgHeight);
heightLeft -= pageHeight;
}

pdf.save(`${note.title}.pdf`);
};
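With the renderer extracted into the new `src/lib/components/notes/utils.ts`, both the note editor and the notes list can share a single `downloadPdf` instead of carrying duplicate copies. A hedged TypeScript usage sketch; the import path comes from the new file above, and the `note` object shown here is an illustrative shape matching what the function reads.

```ts
import { downloadPdf } from '$lib/components/notes/utils';

// Assumed shape: a title plus rendered HTML content, as read by downloadPdf above.
const note = {
	title: 'Meeting notes',
	data: { content: { html: '<p>Agenda, decisions, follow-ups.</p>' } }
};

const exportNote = async () => {
	try {
		await downloadPdf(note); // renders the HTML to canvas and saves `Meeting notes.pdf`
	} catch (error) {
		console.error('Error generating PDF', error);
	}
};
```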
@@ -184,12 +184,6 @@

if (uploadedFile) {
console.log(uploadedFile);

if (uploadedFile.error) {
console.warn('File upload warning:', uploadedFile.error);
toast.warning(uploadedFile.error);
}

knowledge.files = knowledge.files.map((item) => {
if (item.itemId === tempItemId) {
item.id = uploadedFile.id;

@@ -199,7 +193,14 @@
delete item.itemId;
return item;
});
await addFileHandler(uploadedFile.id);

if (uploadedFile.error) {
console.warn('File upload warning:', uploadedFile.error);
toast.warning(uploadedFile.error);
knowledge.files = knowledge.files.filter((file) => file.id !== uploadedFile.id);
} else {
await addFileHandler(uploadedFile.id);
}
} else {
toast.error($i18n.t('Failed to upload file.'));
}
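The hunks above move the error check after the file list update, so a file that uploads with an error is dropped from `knowledge.files` instead of being passed to `addFileHandler`. A condensed TypeScript sketch of that branch; the types, the `handleUploaded` wrapper, and the `warn` callback are assumptions mirroring the hunk.

```ts
type UploadedFile = { id: string; error?: string };
type KnowledgeFile = { id?: string; itemId?: string };

const handleUploaded = async (
	uploadedFile: UploadedFile,
	knowledge: { files: KnowledgeFile[] },
	addFileHandler: (id: string) => Promise<void>,
	warn: (msg: string) => void
) => {
	if (uploadedFile.error) {
		// Surface the warning and drop the failed file instead of indexing it.
		warn(uploadedFile.error);
		knowledge.files = knowledge.files.filter((file) => file.id !== uploadedFile.id);
	} else {
		await addFileHandler(uploadedFile.id);
	}
};
```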
@@ -16,7 +16,7 @@
import {
createNewModel,
deleteModelById,
getModels as getWorkspaceModels,
getModelItems as getWorkspaceModels,
toggleModelById,
updateModelById
} from '$lib/apis/models';

@@ -1,5 +1,6 @@
<script lang="ts">
import Checkbox from '$lib/components/common/Checkbox.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import { getContext, onMount } from 'svelte';

export let tools = [];

@@ -46,9 +47,11 @@
/>
</div>

<div class=" py-0.5 text-sm w-full capitalize font-medium">
{_tools[tool].name}
</div>
<Tooltip content={_tools[tool]?.meta?.description ?? _tools[tool].id}>
<div class=" py-0.5 text-sm w-full capitalize font-medium">
{_tools[tool].name}
</div>
</Tooltip>
</div>
{/each}
</div>

@@ -33,7 +33,7 @@
let promptsImportInputElement: HTMLInputElement;
let loaded = false;

let importFiles = '';
let importFiles = null;
let query = '';

let prompts = [];

@@ -106,7 +106,7 @@
<div class="flex flex-col w-full">
<div class="flex items-center">
<input
class="text-2xl font-semibold w-full bg-transparent outline-hidden"
class="text-2xl font-medium w-full bg-transparent outline-hidden"
placeholder={$i18n.t('Title')}
bind:value={title}
required

@@ -153,7 +153,7 @@
<div>
<Textarea
className="text-sm w-full bg-transparent outline-hidden overflow-y-hidden resize-none"
placeholder={$i18n.t('Write a summary in 50 words that summarizes [topic or keyword].')}
placeholder={$i18n.t('Write a summary in 50 words that summarizes {{topic}}.')}
bind:value={content}
rows={6}
required

@@ -181,7 +181,7 @@

<div class="my-4 flex justify-end pb-20">
<button
class=" text-sm w-full lg:w-fit px-4 py-2 transition rounded-lg {loading
class=" text-sm w-full lg:w-fit px-4 py-2 transition rounded-xl {loading
? ' cursor-not-allowed bg-black hover:bg-gray-900 text-white dark:bg-white dark:hover:bg-gray-100 dark:text-black'
: 'bg-black hover:bg-gray-900 text-white dark:bg-white dark:hover:bg-gray-100 dark:text-black'} flex w-full justify-center"
type="submit"
Some files were not shown because too many files have changed in this diff.