diff --git a/.github/workflows/codespell.yml b/.github/workflows/codespell.yml
new file mode 100644
index 0000000000..b231667430
--- /dev/null
+++ b/.github/workflows/codespell.yml
@@ -0,0 +1,25 @@
+# Codespell configuration is within pyproject.toml
+---
+name: Codespell
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+    branches: [main]
+
+permissions:
+  contents: read
+
+jobs:
+  codespell:
+    name: Check for spelling errors
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+      - name: Annotate locations with typos
+        uses: codespell-project/codespell-problem-matcher@v1
+      - name: Codespell
+        uses: codespell-project/actions-codespell@v2
diff --git a/CHANGELOG.md b/CHANGELOG.md
index eaf0f6213a..80f5f481e5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,33 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.5.4] - 2025-01-05
+
+### Added
+
+- **🔄 Clone Shared Chats**: Effortlessly clone shared chats to save time and streamline collaboration, perfect for reusing insightful discussions or custom setups.
+- **📣 Native Notifications for Channel Messages**: Stay informed with integrated desktop notifications for channel messages, ensuring you never miss important updates while multitasking.
+- **🔥 Torch MPS Support**: MPS support for Mac users when Open WebUI is installed directly, offering better performance and compatibility for AI workloads.
+- **🌍 Enhanced Translations**: Small improvements to various translations, ensuring a smoother global user experience.
+
+### Fixed
+
+- **🖼️ Image-Only Messages in Channels**: You can now send images without accompanying text or content in channels.
+- **❌ Proper Exception Handling**: Enhanced error feedback by ensuring exceptions are raised clearly, reducing confusion and promoting smoother debugging.
+- **🔍 RAG Query Generation Restored**: Fixed query generation issues for Retrieval-Augmented Generation, improving retrieval accuracy and ensuring seamless functionality.
+- **📩 MOA Response Functionality Fixed**: Addressed an error with the MOA response generation feature.
+- **💬 Channel Thread Loading with 50+ Messages**: Resolved an issue where channel threads stalled when exceeding 50 messages, ensuring smooth navigation in active discussions.
+- **🔑 API Endpoint Restrictions Resolution**: Fixed a critical bug where the 'API_KEY_ALLOWED_ENDPOINTS' setting was not functioning as intended, ensuring API access is limited to specified endpoints for enhanced security.
+- **🛠️ Action Functions Restored**: Corrected an issue preventing action functions from working, restoring their utility for customized automations and workflows.
+- **📂 Temporary Chat JSON Export Fix**: Resolved a bug blocking temporary chats from being exported in JSON format, ensuring seamless data portability.
+
+### Changed
+
+- **🎛️ Sidebar UI Tweaks**: Chat folders, including pinned folders, now display below the Chats section for better organization; the "New Folder" button has been relocated to the Chats section for a more intuitive workflow.
+- **🏗️ Real-Time Save Disabled by Default**: The 'ENABLE_REALTIME_CHAT_SAVE' setting is now off by default, boosting response speed for users who prioritize performance in high-paced workflows or less critical scenarios.
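For deployments that rely on per-message persistence, the old behaviour can be restored by setting the variable explicitly. A minimal sketch of that opt-in, mirroring the boolean-parsing pattern used in `backend/open_webui/env.py` later in this diff — the variable name is real, the standalone script around it is only illustrative:

```python
import os

# Illustrative opt-in: the backend now defaults this flag to "False",
# so real-time chat persistence must be enabled explicitly.
os.environ.setdefault("ENABLE_REALTIME_CHAT_SAVE", "true")

# Same parsing pattern as backend/open_webui/env.py in this diff:
ENABLE_REALTIME_CHAT_SAVE = (
    os.environ.get("ENABLE_REALTIME_CHAT_SAVE", "False").lower() == "true"
)

print(ENABLE_REALTIME_CHAT_SAVE)  # True once the variable is set to any casing of "true"
```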
+- **🎤 Audio Input Echo Cancellation**: Audio input now features echo cancellation enabled by default, reducing audio feedback for improved clarity during conversations or voice-based interactions. +- **🔧 General Reliability Improvements**: Numerous under-the-hood enhancements have been made to improve platform stability, boost overall performance, and ensure a more seamless, dependable experience across workflows. + ## [0.5.3] - 2024-12-31 ### Added diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index b1c7b56a33..eb54b48947 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -2,7 +2,7 @@ ## Our Pledge -As members, contributors, and leaders of this community, we pledge to make participation in our open-source project a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. +As members, contributors, and leaders of this community, we pledge to make participation in our open-source project a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socioeconomic status, nationality, personal appearance, race, religion, or sexual identity and orientation. We are committed to creating and maintaining an open, respectful, and professional environment where positive contributions and meaningful discussions can flourish. By participating in this project, you agree to uphold these values and align your behavior to the standards outlined in this Code of Conduct. diff --git a/README.md b/README.md index 8c7e4cdf7c..5f3299a3e3 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,9 @@ [](https://discord.gg/5rJgQTnV4s) [](https://github.com/sponsors/tjbck) -Open WebUI is an [extensible](https://github.com/open-webui/pipelines), feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our [Open WebUI Documentation](https://docs.openwebui.com/). +**Open WebUI is an [extensible](https://docs.openwebui.com/features/plugin/), feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline.** It supports various LLM runners like **Ollama** and **OpenAI-compatible APIs**, with **built-in inference engine** for RAG, making it a **powerful AI deployment solution**. + +For more information, be sure to check out our [Open WebUI Documentation](https://docs.openwebui.com/).  diff --git a/backend/open_webui/config.py b/backend/open_webui/config.py index d8ea985fe6..a48b2db055 100644 --- a/backend/open_webui/config.py +++ b/backend/open_webui/config.py @@ -1211,6 +1211,9 @@ if VECTOR_DB == "pgvector" and not PGVECTOR_DB_URL.startswith("postgres"): raise ValueError( "Pgvector requires setting PGVECTOR_DB_URL or using Postgres with vector extension as the primary database." 
     )
+PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH = int(
+    os.environ.get("PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH", "1536")
+)
 
 ####################################
 # Information Retrieval (RAG)
diff --git a/backend/open_webui/env.py b/backend/open_webui/env.py
index 8aafa15b67..f16f2ea6ef 100644
--- a/backend/open_webui/env.py
+++ b/backend/open_webui/env.py
@@ -53,6 +53,11 @@ if USE_CUDA.lower() == "true":
 else:
     DEVICE_TYPE = "cpu"
 
+try:
+    if torch.backends.mps.is_available() and torch.backends.mps.is_built():
+        DEVICE_TYPE = "mps"
+except Exception:
+    pass
 
 ####################################
 # LOGGING
@@ -313,7 +318,7 @@ RESET_CONFIG_ON_START = (
 
 ENABLE_REALTIME_CHAT_SAVE = (
-    os.environ.get("ENABLE_REALTIME_CHAT_SAVE", "True").lower() == "true"
+    os.environ.get("ENABLE_REALTIME_CHAT_SAVE", "False").lower() == "true"
 )
 
 ####################################
diff --git a/backend/open_webui/models/chats.py b/backend/open_webui/models/chats.py
index 18f802afe9..8e721da787 100644
--- a/backend/open_webui/models/chats.py
+++ b/backend/open_webui/models/chats.py
@@ -469,6 +469,8 @@ class ChatTable:
     def get_chat_by_share_id(self, id: str) -> Optional[ChatModel]:
         try:
             with get_db() as db:
+                # it is possible that the shared link was deleted. hence,
+                # we check if the chat is still shared by checking if a chat with the share_id exists
                 chat = db.query(Chat).filter_by(share_id=id).first()
 
                 if chat:
diff --git a/backend/open_webui/models/messages.py b/backend/open_webui/models/messages.py
index 87da2f3fbb..a27ae52519 100644
--- a/backend/open_webui/models/messages.py
+++ b/backend/open_webui/models/messages.py
@@ -189,9 +189,11 @@ class MessageTable:
                 .all()
             )
 
-            return [
-                MessageModel.model_validate(message) for message in all_messages
-            ] + [MessageModel.model_validate(message)]
+            # If length of all_messages is less than limit, then add the parent message
+            if len(all_messages) < limit:
+                all_messages.append(message)
+
+            return [MessageModel.model_validate(message) for message in all_messages]
 
     def update_message_by_id(
         self, id: str, form_data: MessageForm
diff --git a/backend/open_webui/retrieval/vector/dbs/pgvector.py b/backend/open_webui/retrieval/vector/dbs/pgvector.py
index cb8c545e92..64b6fd6c7f 100644
--- a/backend/open_webui/retrieval/vector/dbs/pgvector.py
+++ b/backend/open_webui/retrieval/vector/dbs/pgvector.py
@@ -5,6 +5,7 @@ from sqlalchemy import (
     create_engine,
     Column,
     Integer,
+    MetaData,
     select,
     text,
     Text,
@@ -19,9 +20,9 @@ from pgvector.sqlalchemy import Vector
 from sqlalchemy.ext.mutable import MutableDict
 
 from open_webui.retrieval.vector.main import VectorItem, SearchResult, GetResult
-from open_webui.config import PGVECTOR_DB_URL
+from open_webui.config import PGVECTOR_DB_URL, PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH
 
-VECTOR_LENGTH = 1536
+VECTOR_LENGTH = PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH
 Base = declarative_base()
 
@@ -56,6 +57,9 @@ class PgvectorClient:
         # Ensure the pgvector extension is available
         self.session.execute(text("CREATE EXTENSION IF NOT EXISTS vector;"))
 
+        # Check vector length consistency
+        self.check_vector_length()
+
         # Create the tables if they do not exist
         # Base.metadata.create_all requires a bind (engine or connection)
         # Get the connection from the session
@@ -82,6 +86,38 @@ class PgvectorClient:
             print(f"Error during initialization: {e}")
             raise
 
+    def check_vector_length(self) -> None:
+        """
+        Check if the VECTOR_LENGTH matches the existing vector column dimension in the database.
+        Raises an exception if there is a mismatch.
+ """ + metadata = MetaData() + metadata.reflect(bind=self.session.bind, only=["document_chunk"]) + + if "document_chunk" in metadata.tables: + document_chunk_table = metadata.tables["document_chunk"] + if "vector" in document_chunk_table.columns: + vector_column = document_chunk_table.columns["vector"] + vector_type = vector_column.type + if isinstance(vector_type, Vector): + db_vector_length = vector_type.dim + if db_vector_length != VECTOR_LENGTH: + raise Exception( + f"VECTOR_LENGTH {VECTOR_LENGTH} does not match existing vector column dimension {db_vector_length}. " + "Cannot change vector size after initialization without migrating the data." + ) + else: + raise Exception( + "The 'vector' column exists but is not of type 'Vector'." + ) + else: + raise Exception( + "The 'vector' column does not exist in the 'document_chunk' table." + ) + else: + # Table does not exist yet; no action needed + pass + def adjust_vector_length(self, vector: List[float]) -> List[float]: # Adjust vector to have length VECTOR_LENGTH current_length = len(vector) diff --git a/backend/open_webui/retrieval/web/testdata/brave.json b/backend/open_webui/retrieval/web/testdata/brave.json index 38487390d9..0cc72109ef 100644 --- a/backend/open_webui/retrieval/web/testdata/brave.json +++ b/backend/open_webui/retrieval/web/testdata/brave.json @@ -683,7 +683,7 @@ "age": "October 29, 2022", "extra_snippets": [ "You can pass many options to the configure script; run ./configure --help to find out more. On macOS case-insensitive file systems and on Cygwin, the executable is called python.exe; elsewhere it's just python.", - "Building a complete Python installation requires the use of various additional third-party libraries, depending on your build platform and configure options. Not all standard library modules are buildable or useable on all platforms. Refer to the Install dependencies section of the Developer Guide for current detailed information on dependencies for various Linux distributions and macOS.", + "Building a complete Python installation requires the use of various additional third-party libraries, depending on your build platform and configure options. Not all standard library modules are buildable or usable on all platforms. Refer to the Install dependencies section of the Developer Guide for current detailed information on dependencies for various Linux distributions and macOS.", "To get an optimized build of Python, configure --enable-optimizations before you run make. This sets the default make targets up to enable Profile Guided Optimization (PGO) and may be used to auto-enable Link Time Optimization (LTO) on some platforms. For more details, see the sections below.", "Copyright © 2001-2024 Python Software Foundation. All rights reserved." 
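Because pgvector columns are created with a fixed dimensionality, the new `PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH` setting only takes effect on a fresh database; once `document_chunk` exists, the check above refuses to start with a different value. A rough sketch of the accompanying padding rule — the environment variable and the pad-with-zeros behaviour come from this diff, while the standalone function and the 384-dimensional example embedding are illustrative assumptions:

```python
import os

# Read at startup, as in backend/open_webui/config.py from this diff.
VECTOR_LENGTH = int(os.environ.get("PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH", "1536"))


def adjust_vector_length(vector: list[float]) -> list[float]:
    # Same idea as PgvectorClient.adjust_vector_length: pad short vectors with
    # zeros; anything longer than the configured dimension is rejected.
    if len(vector) > VECTOR_LENGTH:
        raise ValueError(
            f"Vector length {len(vector)} exceeds configured VECTOR_LENGTH {VECTOR_LENGTH}"
        )
    return vector + [0.0] * (VECTOR_LENGTH - len(vector))


# A hypothetical 384-dimensional embedding is padded up to the configured length.
print(len(adjust_vector_length([0.1] * 384)))  # -> 1536
```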
] diff --git a/backend/open_webui/routers/chats.py b/backend/open_webui/routers/chats.py index 5e0e75e24b..a001dd01f2 100644 --- a/backend/open_webui/routers/chats.py +++ b/backend/open_webui/routers/chats.py @@ -463,6 +463,30 @@ async def clone_chat_by_id(id: str, user=Depends(get_verified_user)): ) +############################ +# CloneSharedChatById +############################ + + +@router.post("/{id}/clone/shared", response_model=Optional[ChatResponse]) +async def clone_shared_chat_by_id(id: str, user=Depends(get_verified_user)): + chat = Chats.get_chat_by_share_id(id) + if chat: + updated_chat = { + **chat.chat, + "originalChatId": chat.id, + "branchPointMessageId": chat.chat["history"]["currentId"], + "title": f"Clone of {chat.title}", + } + + chat = Chats.insert_new_chat(user.id, ChatForm(**{"chat": updated_chat})) + return ChatResponse(**chat.model_dump()) + else: + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, detail=ERROR_MESSAGES.DEFAULT() + ) + + ############################ # ArchiveChat ############################ diff --git a/backend/open_webui/routers/tasks.py b/backend/open_webui/routers/tasks.py index 107b99b09f..7d14a9d18d 100644 --- a/backend/open_webui/routers/tasks.py +++ b/backend/open_webui/routers/tasks.py @@ -499,8 +499,8 @@ async def generate_moa_response( "model": task_model_id, "messages": [{"role": "user", "content": content}], "stream": form_data.get("stream", False), - "chat_id": form_data.get("chat_id", None), "metadata": { + "chat_id": form_data.get("chat_id", None), "task": str(TASKS.MOA_RESPONSE_GENERATION), "task_body": form_data, }, diff --git a/backend/open_webui/utils/auth.py b/backend/open_webui/utils/auth.py index 2c16afd76c..3a04909606 100644 --- a/backend/open_webui/utils/auth.py +++ b/backend/open_webui/utils/auth.py @@ -99,9 +99,9 @@ def get_current_user( if request.app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS: allowed_paths = [ path.strip() - for path in str(request.app.state.config.API_KEY_ALLOWED_PATHS).split( - "," - ) + for path in str( + request.app.state.config.API_KEY_ALLOWED_ENDPOINTS + ).split(",") ] if request.url.path not in allowed_paths: diff --git a/backend/open_webui/utils/chat.py b/backend/open_webui/utils/chat.py index 7fce764194..0719f6af5b 100644 --- a/backend/open_webui/utils/chat.py +++ b/backend/open_webui/utils/chat.py @@ -315,6 +315,7 @@ async def chat_action(request: Request, action_id: str, form_data: dict, user: A "chat_id": data["chat_id"], "message_id": data["id"], "session_id": data["session_id"], + "user_id": user.id, } ) __event_call__ = get_event_call( @@ -322,6 +323,7 @@ async def chat_action(request: Request, action_id: str, form_data: dict, user: A "chat_id": data["chat_id"], "message_id": data["id"], "session_id": data["session_id"], + "user_id": user.id, } ) diff --git a/backend/open_webui/utils/middleware.py b/backend/open_webui/utils/middleware.py index 0e32bf626a..3c53435a4e 100644 --- a/backend/open_webui/utils/middleware.py +++ b/backend/open_webui/utils/middleware.py @@ -494,6 +494,7 @@ async def chat_completion_files_handler( if files := body.get("metadata", {}).get("files", None): try: queries_response = await generate_queries( + request, { "model": body["model"], "messages": body["messages"], @@ -644,7 +645,7 @@ async def process_chat_payload(request, form_data, metadata, user, model): request, form_data, model, extra_params ) except Exception as e: - return Exception(f"Error: {e}") + raise Exception(f"Error: {e}") tool_ids = form_data.pop("tool_ids", None) files = 
form_data.pop("files", None) diff --git a/backend/requirements.txt b/backend/requirements.txt index e386248798..f951d78dbb 100644 --- a/backend/requirements.txt +++ b/backend/requirements.txt @@ -3,7 +3,7 @@ uvicorn[standard]==0.30.6 pydantic==2.9.2 python-multipart==0.0.18 -Flask==3.0.3 +Flask==3.1.0 Flask-Cors==5.0.0 python-socketio==5.11.3 @@ -18,7 +18,7 @@ aiofiles sqlalchemy==2.0.32 alembic==1.14.0 -peewee==3.17.6 +peewee==3.17.8 peewee-migrate==1.12.2 psycopg2-binary==2.9.9 pgvector==0.3.5 @@ -55,7 +55,7 @@ einops==0.8.0 ftfy==6.2.3 pypdf==4.3.1 -fpdf2==2.7.9 +fpdf2==2.8.2 pymdown-extensions==10.11.2 docx2txt==0.8 python-pptx==1.0.0 @@ -67,7 +67,7 @@ pandas==2.2.3 openpyxl==3.1.5 pyxlsb==1.0.10 xlrd==2.0.1 -validators==0.33.0 +validators==0.34.0 psutil sentencepiece soundfile==0.12.1 @@ -78,7 +78,7 @@ rank-bm25==0.2.2 faster-whisper==1.0.3 -PyJWT[crypto]==2.9.0 +PyJWT[crypto]==2.10.1 authlib==1.3.2 black==24.8.0 diff --git a/package-lock.json b/package-lock.json index 1afeaecbec..0d78397efa 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,12 +1,12 @@ { "name": "open-webui", - "version": "0.5.3", + "version": "0.5.4", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "open-webui", - "version": "0.5.3", + "version": "0.5.4", "dependencies": { "@codemirror/lang-javascript": "^6.2.2", "@codemirror/lang-python": "^6.1.6", diff --git a/package.json b/package.json index 71fd418a7e..6a0f451fc9 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "open-webui", - "version": "0.5.3", + "version": "0.5.4", "private": true, "scripts": { "dev": "npm run pyodide:fetch && vite dev --host", diff --git a/pyproject.toml b/pyproject.toml index de14a9fa16..63a97e69aa 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -11,7 +11,7 @@ dependencies = [ "pydantic==2.9.2", "python-multipart==0.0.18", - "Flask==3.0.3", + "Flask==3.1.0", "Flask-Cors==5.0.0", "python-socketio==5.11.3", @@ -26,7 +26,7 @@ dependencies = [ "sqlalchemy==2.0.32", "alembic==1.14.0", - "peewee==3.17.6", + "peewee==3.17.8", "peewee-migrate==1.12.2", "psycopg2-binary==2.9.9", "pgvector==0.3.5", @@ -61,7 +61,7 @@ dependencies = [ "ftfy==6.2.3", "pypdf==4.3.1", - "fpdf2==2.7.9", + "fpdf2==2.8.2", "pymdown-extensions==10.11.2", "docx2txt==0.8", "python-pptx==1.0.0", @@ -73,7 +73,7 @@ dependencies = [ "openpyxl==3.1.5", "pyxlsb==1.0.10", "xlrd==2.0.1", - "validators==0.33.0", + "validators==0.34.0", "psutil", "sentencepiece", "soundfile==0.12.1", @@ -84,7 +84,7 @@ dependencies = [ "faster-whisper==1.0.3", - "PyJWT[crypto]==2.9.0", + "PyJWT[crypto]==2.10.1", "authlib==1.3.2", "black==24.8.0", @@ -151,3 +151,10 @@ exclude = [ "chroma.sqlite3", ] force-include = { "CHANGELOG.md" = "open_webui/CHANGELOG.md", build = "open_webui/frontend" } + +[tool.codespell] +# Ref: https://github.com/codespell-project/codespell#using-a-config-file +skip = '.git*,*.svg,package-lock.json,i18n,*.lock,*.css,*-bundle.js,locales,example-doc.txt,emoji-shortcodes.json' +check-hidden = true +# ignore-regex = '' +ignore-words-list = 'ans' diff --git a/src/lib/apis/chats/index.ts b/src/lib/apis/chats/index.ts index d93d21c73a..1772529d39 100644 --- a/src/lib/apis/chats/index.ts +++ b/src/lib/apis/chats/index.ts @@ -618,6 +618,44 @@ export const cloneChatById = async (token: string, id: string) => { return res; }; +export const cloneSharedChatById = async (token: string, id: string) => { + let error = null; + + const res = await fetch(`${WEBUI_API_BASE_URL}/chats/${id}/clone/shared`, { + method: 'POST', + headers: { + 
Accept: 'application/json', + 'Content-Type': 'application/json', + ...(token && { authorization: `Bearer ${token}` }) + } + }) + .then(async (res) => { + if (!res.ok) throw await res.json(); + return res.json(); + }) + .then((json) => { + return json; + }) + .catch((err) => { + error = err; + + if ('detail' in err) { + error = err.detail; + } else { + error = err; + } + + console.log(err); + return null; + }); + + if (error) { + throw error; + } + + return res; +}; + export const shareChatById = async (token: string, id: string) => { let error = null; diff --git a/src/lib/components/NotificationToast.svelte b/src/lib/components/NotificationToast.svelte index 0cd416d7e3..59129d3140 100644 --- a/src/lib/components/NotificationToast.svelte +++ b/src/lib/components/NotificationToast.svelte @@ -1,5 +1,5 @@ diff --git a/src/lib/components/admin/Settings/Models.svelte b/src/lib/components/admin/Settings/Models.svelte index f084de65aa..580f3dcfd4 100644 --- a/src/lib/components/admin/Settings/Models.svelte +++ b/src/lib/components/admin/Settings/Models.svelte @@ -128,7 +128,7 @@ await toggleModelById(localStorage.token, model.id); } - await init(); + // await init(); _models.set(await getModels(localStorage.token)); }; diff --git a/src/lib/components/channel/Channel.svelte b/src/lib/components/channel/Channel.svelte index b205afcb3d..68d35a4c68 100644 --- a/src/lib/components/channel/Channel.svelte +++ b/src/lib/components/channel/Channel.svelte @@ -136,7 +136,7 @@ }; const submitHandler = async ({ content, data }) => { - if (!content) { + if (!content && (data?.files ?? []).length === 0) { return; } diff --git a/src/lib/components/channel/MessageInput.svelte b/src/lib/components/channel/MessageInput.svelte index c0605da8c0..aabcb2be1c 100644 --- a/src/lib/components/channel/MessageInput.svelte +++ b/src/lib/components/channel/MessageInput.svelte @@ -243,7 +243,7 @@ }; const submitHandler = async () => { - if (content === '') { + if (content === '' && files.length === 0) { return; } @@ -581,11 +581,11 @@ {#if showUserProfile} - + {message?.user?.name} @@ -189,7 +189,7 @@ - {formatDate(message.created_at / 1000000)} + {formatDate(message.created_at / 1000000)} {/if} diff --git a/src/lib/components/channel/Messages/Message/ReactionPicker.svelte b/src/lib/components/channel/Messages/Message/ReactionPicker.svelte index 2453b2dbf8..575b2a83db 100644 --- a/src/lib/components/channel/Messages/Message/ReactionPicker.svelte +++ b/src/lib/components/channel/Messages/Message/ReactionPicker.svelte @@ -61,13 +61,13 @@ ); } }); - // Group emojis into rows of 6 + // Group emojis into rows of 8 emojiRows = []; let currentRow = []; flattenedEmojis.forEach((item) => { if (item.type === 'emoji') { currentRow.push(item); - if (currentRow.length === 7) { + if (currentRow.length === 8) { emojiRows.push(currentRow); currentRow = []; } @@ -126,7 +126,7 @@ {#if emojiRows.length === 0} No results {:else} - + {#if item.length === 1 && item[0].type === 'group'} @@ -136,7 +136,7 @@ {:else} - + {#each item as emojiItem} `:${code}:`).join(', ')} diff --git a/src/lib/components/chat/Chat.svelte b/src/lib/components/chat/Chat.svelte index d5982af90b..609a4ae6d3 100644 --- a/src/lib/components/chat/Chat.svelte +++ b/src/lib/components/chat/Chat.svelte @@ -1151,13 +1151,6 @@ if (done) { message.done = true; - if ($settings.notificationEnabled && !document.hasFocus()) { - new Notification(`${message.model}`, { - body: message.content, - icon: `${WEBUI_BASE_URL}/static/favicon.png` - }); - } - if ($settings.responseAutoCopy) { 
copyToClipboard(message.content); } diff --git a/src/lib/components/chat/MessageInput/CallOverlay.svelte b/src/lib/components/chat/MessageInput/CallOverlay.svelte index 6f3b465a6c..1ac1ce6e6d 100644 --- a/src/lib/components/chat/MessageInput/CallOverlay.svelte +++ b/src/lib/components/chat/MessageInput/CallOverlay.svelte @@ -217,7 +217,13 @@ const startRecording = async () => { if ($showCallOverlay) { if (!audioStream) { - audioStream = await navigator.mediaDevices.getUserMedia({ audio: true }); + audioStream = await navigator.mediaDevices.getUserMedia({ + audio: { + echoCancellation: true, + noiseSuppression: true, + autoGainControl: true + } + }); } mediaRecorder = new MediaRecorder(audioStream); diff --git a/src/lib/components/chat/MessageInput/VoiceRecording.svelte b/src/lib/components/chat/MessageInput/VoiceRecording.svelte index 8ba0e5bdb6..7044d88d5e 100644 --- a/src/lib/components/chat/MessageInput/VoiceRecording.svelte +++ b/src/lib/components/chat/MessageInput/VoiceRecording.svelte @@ -161,7 +161,13 @@ const startRecording = async () => { startDurationCounter(); - stream = await navigator.mediaDevices.getUserMedia({ audio: true }); + stream = await navigator.mediaDevices.getUserMedia({ + audio: { + echoCancellation: true, + noiseSuppression: true, + autoGainControl: true + } + }); mediaRecorder = new MediaRecorder(stream); mediaRecorder.onstart = () => { console.log('Recording started'); diff --git a/src/lib/components/chat/Messages.svelte b/src/lib/components/chat/Messages.svelte index 2b0748dc7e..7f18f3a350 100644 --- a/src/lib/components/chat/Messages.svelte +++ b/src/lib/components/chat/Messages.svelte @@ -16,6 +16,8 @@ const i18n = getContext('i18n'); + export let className = 'h-full flex pt-8'; + export let chatId = ''; export let user = $_user; @@ -333,7 +335,7 @@ }; - + {#if Object.keys(history?.messages ?? {}).length == 0} - {model?.name ?? message.model} + + + {model?.name ?? message.model} + + {#if message.timestamp} {dayjs(message.timestamp * 1000).format($i18n.t('h:mm a'))} diff --git a/src/lib/components/chat/ShareChatModal.svelte b/src/lib/components/chat/ShareChatModal.svelte index f7cc6d6be5..b30c168d66 100644 --- a/src/lib/components/chat/ShareChatModal.svelte +++ b/src/lib/components/chat/ShareChatModal.svelte @@ -132,11 +132,11 @@ - + {#if $config?.features.enable_community_sharing} { shareChat(); @@ -148,7 +148,7 @@ {/if} { diff --git a/src/lib/components/common/Textarea.svelte b/src/lib/components/common/Textarea.svelte index ab5ebe2aca..bcd5b4d75b 100644 --- a/src/lib/components/common/Textarea.svelte +++ b/src/lib/components/common/Textarea.svelte @@ -56,7 +56,7 @@
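Finally, the new clone-shared-chat route added in `backend/open_webui/routers/chats.py` can be exercised outside the UI as well. A rough sketch using Python's `requests`, assuming the chats router is mounted under `/api/v1/chats` (as the frontend's `WEBUI_API_BASE_URL` usage suggests) and that the base URL, token, and share id below are placeholders you supply:

```python
import requests

BASE_URL = "http://localhost:8080/api/v1"   # assumed default local deployment
TOKEN = "<api-key-or-jwt>"                  # placeholder credential
SHARE_ID = "<id-from-a-shared-chat-link>"   # placeholder share id

# POST /chats/{share_id}/clone/shared copies the shared chat into the caller's own chat list.
response = requests.post(
    f"{BASE_URL}/chats/{SHARE_ID}/clone/shared",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
)
response.raise_for_status()
print(response.json()["title"])  # e.g. "Clone of <original title>"
```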