Merge branch 'dev' into leaderboard-evaluation-additions

Commit 4bb41d3ead by Ayana H, 2025-06-12 22:41:20 -07:00 (committed by GitHub)
No known key found for this signature in database; GPG key ID: B5690EEEBB952194
166 changed files with 4524 additions and 1711 deletions

View file

@@ -7,6 +7,15 @@ OPENAI_API_KEY=''
 # AUTOMATIC1111_BASE_URL="http://localhost:7860"
+# For production, you should only need one host as
+# fastapi serves the svelte-kit built frontend and backend from the same host and port.
+# To test with CORS locally, you can set something like
+# CORS_ALLOW_ORIGIN='http://localhost:5173;http://localhost:8080'
+CORS_ALLOW_ORIGIN='*'
+# For production you should set this to match the proxy configuration (127.0.0.1)
+FORWARDED_ALLOW_IPS='*'
 # DO NOT TRACK
 SCARF_NO_ANALYTICS=true
 DO_NOT_TRACK=true

.gitattributes (vendored, 48 lines changed)
View file

@@ -1 +1,49 @@
+# TypeScript
+*.ts text eol=lf
+*.tsx text eol=lf
+
+# JavaScript
+*.js text eol=lf
+*.jsx text eol=lf
+*.mjs text eol=lf
+*.cjs text eol=lf
+
+# Svelte
+*.svelte text eol=lf
+
+# HTML/CSS
+*.html text eol=lf
+*.css text eol=lf
+*.scss text eol=lf
+*.less text eol=lf
+
+# Config files and JSON
+*.json text eol=lf
+*.jsonc text eol=lf
+*.yml text eol=lf
+*.yaml text eol=lf
+*.toml text eol=lf
+
+# Shell scripts
 *.sh text eol=lf
+
+# Markdown & docs
+*.md text eol=lf
+*.mdx text eol=lf
+*.txt text eol=lf
+
+# Git-related
+.gitattributes text eol=lf
+.gitignore text eol=lf
+
+# Prettier and other dotfiles
+.prettierrc text eol=lf
+.prettierignore text eol=lf
+.eslintrc text eol=lf
+.eslintignore text eol=lf
+.stylelintrc text eol=lf
+.editorconfig text eol=lf
+
+# Misc
+*.env text eol=lf
+*.lock text eol=lf

View file

@@ -14,16 +14,18 @@ env:
 jobs:
   build-main-image:
-    runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
+    runs-on: ${{ matrix.runner }}
     permissions:
       contents: read
       packages: write
     strategy:
       fail-fast: false
       matrix:
-        platform:
-          - linux/amd64
-          - linux/arm64
+        include:
+          - platform: linux/amd64
+            runner: ubuntu-latest
+          - platform: linux/arm64
+            runner: ubuntu-24.04-arm
     steps:
       # GitHub Packages requires the entire repository name to be in lowercase
@@ -111,16 +113,18 @@ jobs:
           retention-days: 1
   build-cuda-image:
-    runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
+    runs-on: ${{ matrix.runner }}
     permissions:
       contents: read
       packages: write
     strategy:
      fail-fast: false
       matrix:
-        platform:
-          - linux/amd64
-          - linux/arm64
+        include:
+          - platform: linux/amd64
+            runner: ubuntu-latest
+          - platform: linux/arm64
+            runner: ubuntu-24.04-arm
     steps:
       # GitHub Packages requires the entire repository name to be in lowercase
@@ -211,16 +215,18 @@ jobs:
           retention-days: 1
   build-cuda126-image:
-    runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
+    runs-on: ${{ matrix.runner }}
     permissions:
       contents: read
       packages: write
     strategy:
       fail-fast: false
       matrix:
-        platform:
-          - linux/amd64
-          - linux/arm64
+        include:
+          - platform: linux/amd64
+            runner: ubuntu-latest
+          - platform: linux/arm64
+            runner: ubuntu-24.04-arm
     steps:
       # GitHub Packages requires the entire repository name to be in lowercase
@@ -312,16 +318,18 @@ jobs:
           retention-days: 1
   build-ollama-image:
-    runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
+    runs-on: ${{ matrix.runner }}
     permissions:
       contents: read
       packages: write
     strategy:
       fail-fast: false
       matrix:
-        platform:
-          - linux/amd64
-          - linux/arm64
+        include:
+          - platform: linux/amd64
+            runner: ubuntu-latest
+          - platform: linux/arm64
+            runner: ubuntu-24.04-arm
     steps:
       # GitHub Packages requires the entire repository name to be in lowercase

View file

@@ -5,5 +5,6 @@
   "printWidth": 100,
   "plugins": ["prettier-plugin-svelte"],
   "pluginSearchDirs": ["."],
-  "overrides": [{ "files": "*.svelte", "options": { "parser": "svelte" } }]
+  "overrides": [{ "files": "*.svelte", "options": { "parser": "svelte" } }],
+  "endOfLine": "lf"
 }

View file

@@ -5,6 +5,45 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.6.14] - 2025-06-10
+
+### Added
+
+- 🤖 **Automatic "Follow Up" Suggestions**: Open WebUI now intelligently generates actionable "Follow Up" suggestions automatically with each message you send, helping you stay productive and inspired without interrupting your flow; you can always disable this in Settings if you prefer a distraction-free experience.
+- 🧩 **OpenAI-Compatible Embeddings Endpoint**: Introducing a fully OpenAI-style '/api/embeddings' endpoint—now you can plug in OpenAI-style embeddings workflows with zero hassle, making integrations with external tools and platforms seamless and familiar.
+- ↗️ **Model Pinning for Quick Access**: Pin your favorite or most-used models to the sidebar for instant selection—no more scrolling through long model lists; your go-to models are always visible and ready for fast access.
+- 📌 **Selector Model Item Menu**: Each model in the selector now features a menu where you can easily pin/unpin to the sidebar and copy a direct link—simplifying collaboration and staying organized in even the busiest environments.
+- 🛑 **Reliable Stop for Ongoing Chats in Multi-Replica Setups**: Stopping or cancelling an in-progress chat now works reliably even in clustered deployments—ensuring every user can interrupt AI output at any time, no matter your scale.
+- 🧠 **'Think' Parameter for Ollama Models**: Leverage new 'think' parameter support for Ollama—giving you advanced control over the AI reasoning process and further tuning of model behavior for your unique use cases.
+- 💬 **Picture Description Modes for Docling**: Customize how images are described/extracted by Docling Loader for smarter, more detailed, and workflow-tailored image understanding in your document pipelines.
+- 🛠 **Settings Modal Deep Linking**: Every tab in Settings now has its own route—making direct navigation and sharing of precise settings faster and more intuitive.
+- 🎤 **Audio HTML Component Token**: Easily embed and play audio directly in your chats, improving voice-based workflows and making audio content instantly accessible and manageable from any conversation.
+- 🔑 **Support for Secret Key File**: Now you can specify 'WEBUI_SECRET_KEY_FILE' for more secure and flexible key management—ideal for advanced deployments and tighter security standards.
+- 💡 **Clarity When Cloning Prompts**: Cloned workspace prompts are clearly labelled with "(Clone)" and their IDs get a "-clone" suffix, keeping your prompt library organized and preventing accidental overwrites.
+- 📝 **Dedicated User Role Edit Modal**: Updating user roles now reliably opens a dedicated edit user modal instead of cycling through roles—making it safer and clearer to manage team permissions.
+- 🏞️ **Better Handling & Storage of Interpreter-Generated Images**: Code interpreter-generated images are now centrally stored and reliably loaded from the database or cloud storage, ensuring your artifacts are always available.
+- 🚀 **Pinecone & Vector Search Optimizations**: Applied latest best practices from Pinecone for smarter timeouts, intelligent retry control, improved connection pooling, faster DNS, and concurrent batch handling—giving you more reliable, faster document search and RAG performance without manual tweaks.
+- ⚙️ **Ollama Advanced Parameters Unified**: 'keep_alive' and 'format' options are now integrated into the advanced params section—edit everything from the model editor for flexible model control.
+- 🛠️ **CUDA 12.6 Docker Image Support**: Deploy to NVIDIA GPUs with capability 7.0 and below (e.g., V100, GTX1080) via the new cuda126 image—broadening your hardware options for scalable AI workloads.
+- 🔒 **Experimental Table-Level PGVector Data Encryption**: Activate pgcrypto encryption support for pgvector to secure your vector search table contents, giving organizations enhanced compliance and data protection—perfect for enterprise or regulated environments.
+- 👁 **Accessibility Upgrades Across Interface**: Chat buttons and close controls are now labelled and structured for optimal accessibility support, ensuring smoother operation with assistive technologies.
+- 🎨 **High-Contrast Mode Expansions**: High-contrast accessibility mode now also applies to menu items, tabs, and search input fields, offering a more readable experience for all users.
+- 🛠️ **Tooltip & Translation Clarity**: Improved translation and tooltip clarity, especially over radio buttons, making the UI more understandable for all users.
+- 🔠 **Global Localization & Translation Improvements**: Hefty upgrades to Traditional Chinese, Simplified Chinese, Hebrew, Russian, Irish, German, and Danish translation packs—making the platform feel native and intuitive for even more users worldwide.
+- ⚡ **General Backend Stability & Security Enhancements**: Refined numerous backend routines to minimize memory use, improve performance, and streamline integration with external APIs—making the entire platform more robust and secure for daily work.
+
+### Fixed
+
+- 🏷 **Feedback Score Display Improved**: Addressed overflow and visibility issues with feedback scores for more readable and accessible evaluations.
+- 🗂 **Admin Settings Model Edits Apply Immediately**: Changes made in the Model Editor within Admin Settings now take effect instantly, eliminating confusion during model management.
+- 🔄 **Assigned Tools Update Instantly on New Chats**: Models assigned with specific tools now consistently update and are available in every new chat—making tool workflows more predictable and robust.
+- 🛠 **Document Settings Saved Only on User Action**: Document settings now save only when you press the Save button, reducing accidental changes and ensuring greater control.
+- 🔊 **Voice Recording on Older iOS Devices Restored**: Voice input is now fully functional on older iOS devices, keeping voice workflows accessible to all users.
+- 🔒 **Trusted Email Header Session Security**: User sessions now strictly verify the trusted email header matches the logged-in user's email, ensuring secure authentication and preventing accidental session switching.
+- 🔒 **Consistent User Signout on Email Mismatch**: When the trusted email in the header changes, you will now be properly signed out and redirected, safeguarding your session's integrity.
+- 🛠 **General Error & Content Validation Improvements**: Smarter error handling means clearer messages and fewer unnecessary retries—making batch uploads, document handling, and knowledge indexing more resilient.
+- 🕵️ **Better Feedback on Chat Title Edits**: Error messages now show clearly if problems occur while editing chat titles.
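
One of the entries above introduces 'WEBUI_SECRET_KEY_FILE'. That follows the common `*_FILE` convention for container secrets; a minimal sketch of the pattern, assuming the file holds only the key (the helper below is illustrative, not the project's actual implementation):

```python
import os

def read_secret_key() -> str:
    # Hypothetical helper: prefer a key file (e.g. a Docker/K8s secret mount),
    # fall back to the plain environment variable.
    key_file = os.environ.get("WEBUI_SECRET_KEY_FILE")
    if key_file:
        with open(key_file) as f:
            return f.read().strip()
    return os.environ.get("WEBUI_SECRET_KEY", "")
```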
 ## [0.6.13] - 2025-05-30
 
 ### Added

View file

@@ -73,7 +73,7 @@ Want to learn more about Open WebUI's features? Check out our [Open WebUI docume
 				</a>
 			</td>
 			<td>
-				N8N • Does your interface have a backend yet?<br>Try <a href="https://n8n.io/">n8n</a>
+				<a href="https://n8n.io/">n8n</a> • Does your interface have a backend yet?<br>Try <a href="https://n8n.io/">n8n</a>
 			</td>
 		</tr>
 		<tr>

@@ -86,6 +86,16 @@ Want to learn more about Open WebUI's features? Check out our [Open WebUI docume
 				<a href="https://warp.dev/open-webui">Warp</a> • The intelligent terminal for developers
 			</td>
 		</tr>
+		<tr>
+			<td>
+				<a href="https://tailscale.com/blog/self-host-a-local-ai-stack/?utm_source=OpenWebUI&utm_medium=paid-ad-placement&utm_campaign=OpenWebUI-Docs" target="_blank">
+					<img src="https://docs.openwebui.com/sponsors/logos/tailscale.png" alt="Tailscale" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
+				</a>
+			</td>
+			<td>
+				<a href="https://tailscale.com/blog/self-host-a-local-ai-stack/?utm_source=OpenWebUI&utm_medium=paid-ad-placement&utm_campaign=OpenWebUI-Docs">Tailscale</a> • Connect self-hosted AI to any device with Tailscale
+			</td>
+		</tr>
 </table>
 
 ---

@@ -181,6 +191,8 @@ After installation, you can access Open WebUI at [http://localhost:3000](http://
 We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
 
+Look at the [Local Development Guide](https://docs.openwebui.com/getting-started/advanced-topics/development) for instructions on setting up a local development environment.
+
 ### Troubleshooting
 
 Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).

View file

@@ -347,6 +347,24 @@ MICROSOFT_CLIENT_TENANT_ID = PersistentConfig(
     os.environ.get("MICROSOFT_CLIENT_TENANT_ID", ""),
 )
 
+MICROSOFT_CLIENT_LOGIN_BASE_URL = PersistentConfig(
+    "MICROSOFT_CLIENT_LOGIN_BASE_URL",
+    "oauth.microsoft.login_base_url",
+    os.environ.get(
+        "MICROSOFT_CLIENT_LOGIN_BASE_URL", "https://login.microsoftonline.com"
+    ),
+)
+
+MICROSOFT_CLIENT_PICTURE_URL = PersistentConfig(
+    "MICROSOFT_CLIENT_PICTURE_URL",
+    "oauth.microsoft.picture_url",
+    os.environ.get(
+        "MICROSOFT_CLIENT_PICTURE_URL",
+        "https://graph.microsoft.com/v1.0/me/photo/$value",
+    ),
+)
+
 MICROSOFT_OAUTH_SCOPE = PersistentConfig(
     "MICROSOFT_OAUTH_SCOPE",
     "oauth.microsoft.scope",
@@ -542,7 +560,7 @@ def load_oauth_providers():
             name="microsoft",
             client_id=MICROSOFT_CLIENT_ID.value,
             client_secret=MICROSOFT_CLIENT_SECRET.value,
-            server_metadata_url=f"https://login.microsoftonline.com/{MICROSOFT_CLIENT_TENANT_ID.value}/v2.0/.well-known/openid-configuration?appid={MICROSOFT_CLIENT_ID.value}",
+            server_metadata_url=f"{MICROSOFT_CLIENT_LOGIN_BASE_URL.value}/{MICROSOFT_CLIENT_TENANT_ID.value}/v2.0/.well-known/openid-configuration?appid={MICROSOFT_CLIENT_ID.value}",
             client_kwargs={
                 "scope": MICROSOFT_OAUTH_SCOPE.value,
             },
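
Making the login base URL configurable matters for sovereign Microsoft clouds, which use different endpoints than the global one. A hedged sketch of how the settings above combine into the metadata URL; the `.us` host and IDs are placeholder assumptions, not values this diff configures:

```python
# Assumed values for illustration only.
login_base_url = "https://login.microsoftonline.us"  # e.g. a US Government cloud
tenant_id = "your-tenant-id"
client_id = "your-client-id"

server_metadata_url = (
    f"{login_base_url}/{tenant_id}/v2.0/"
    f".well-known/openid-configuration?appid={client_id}"
)
```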
@@ -551,7 +569,7 @@ def load_oauth_providers():
         OAUTH_PROVIDERS["microsoft"] = {
             "redirect_uri": MICROSOFT_REDIRECT_URI.value,
-            "picture_url": "https://graph.microsoft.com/v1.0/me/photo/$value",
+            "picture_url": MICROSOFT_CLIENT_PICTURE_URL.value,
             "register": microsoft_oauth_register,
         }
@@ -1245,12 +1263,6 @@ if THREAD_POOL_SIZE is not None and isinstance(THREAD_POOL_SIZE, str):
         THREAD_POOL_SIZE = None
 
-def validate_cors_origins(origins):
-    for origin in origins:
-        if origin != "*":
-            validate_cors_origin(origin)
-
 
 def validate_cors_origin(origin):
     parsed_url = urlparse(origin)
@@ -1270,16 +1282,17 @@ def validate_cors_origin(origin):
 # To test CORS_ALLOW_ORIGIN locally, you can set something like
 # CORS_ALLOW_ORIGIN=http://localhost:5173;http://localhost:8080
 # in your .env file depending on your frontend port, 5173 in this case.
-CORS_ALLOW_ORIGIN = os.environ.get(
-    "CORS_ALLOW_ORIGIN", "*;http://localhost:5173;http://localhost:8080"
-).split(";")
+CORS_ALLOW_ORIGIN = os.environ.get("CORS_ALLOW_ORIGIN", "*").split(";")
 
-if "*" in CORS_ALLOW_ORIGIN:
+if CORS_ALLOW_ORIGIN == ["*"]:
     log.warning(
         "\n\nWARNING: CORS_ALLOW_ORIGIN IS SET TO '*' - NOT RECOMMENDED FOR PRODUCTION DEPLOYMENTS.\n"
     )
-else:
-    validate_cors_origins(CORS_ALLOW_ORIGIN)
+else:
+    # You have to pick between a single wildcard or a list of origins.
+    # Doing both will result in CORS errors in the browser.
+    for origin in CORS_ALLOW_ORIGIN:
+        validate_cors_origin(origin)
 
 
 class BannerModel(BaseModel):
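
For reference, a small sketch of how the semicolon-separated value above parses in the two supported configurations (wildcard or an explicit origin list):

```python
import os

# Wildcard: the default, allowed but warned against for production.
os.environ["CORS_ALLOW_ORIGIN"] = "*"
assert os.environ["CORS_ALLOW_ORIGIN"].split(";") == ["*"]

# Explicit list: each origin is validated individually.
os.environ["CORS_ALLOW_ORIGIN"] = "http://localhost:5173;http://localhost:8080"
assert os.environ["CORS_ALLOW_ORIGIN"].split(";") == [
    "http://localhost:5173",
    "http://localhost:8080",
]
```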
@@ -1419,9 +1432,9 @@ FOLLOW_UP_GENERATION_PROMPT_TEMPLATE = PersistentConfig(
 )
 
 DEFAULT_FOLLOW_UP_GENERATION_PROMPT_TEMPLATE = """### Task:
-SSuggest 3-5 relevant follow-up questions or prompts that the **user** might naturally ask next in this conversation, based on the chat history, to help continue or deepen the discussion.
+Suggest 3-5 relevant follow-up questions or prompts that the user might naturally ask next in this conversation as a **user**, based on the chat history, to help continue or deepen the discussion.
 ### Guidelines:
-- Phrase all follow-up questions from the users perspective, addressed to the assistant or expert.
+- Write all follow-up questions from the users point of view, directed to the assistant.
 - Make questions concise, clear, and directly related to the discussed topic(s).
 - Only suggest follow-ups that make sense given the chat content and do not repeat what was already covered.
 - If the conversation is very short or not specific, suggest more general (but relevant) follow-ups the user might ask.
@@ -1812,6 +1825,13 @@ PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH = int(
     os.environ.get("PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH", "1536")
 )
 
+PGVECTOR_PGCRYPTO = os.getenv("PGVECTOR_PGCRYPTO", "false").lower() == "true"
+PGVECTOR_PGCRYPTO_KEY = os.getenv("PGVECTOR_PGCRYPTO_KEY", None)
+if PGVECTOR_PGCRYPTO and not PGVECTOR_PGCRYPTO_KEY:
+    raise ValueError(
+        "PGVECTOR_PGCRYPTO is enabled but PGVECTOR_PGCRYPTO_KEY is not set. Please provide a valid key."
+    )
+
 # Pinecone
 PINECONE_API_KEY = os.environ.get("PINECONE_API_KEY", None)
 PINECONE_ENVIRONMENT = os.environ.get("PINECONE_ENVIRONMENT", None)
@@ -1972,6 +1992,40 @@ DOCLING_DO_PICTURE_DESCRIPTION = PersistentConfig(
     os.getenv("DOCLING_DO_PICTURE_DESCRIPTION", "False").lower() == "true",
 )
 
+DOCLING_PICTURE_DESCRIPTION_MODE = PersistentConfig(
+    "DOCLING_PICTURE_DESCRIPTION_MODE",
+    "rag.docling_picture_description_mode",
+    os.getenv("DOCLING_PICTURE_DESCRIPTION_MODE", ""),
+)
+
+docling_picture_description_local = os.getenv("DOCLING_PICTURE_DESCRIPTION_LOCAL", "")
+try:
+    docling_picture_description_local = json.loads(docling_picture_description_local)
+except json.JSONDecodeError:
+    docling_picture_description_local = {}
+
+DOCLING_PICTURE_DESCRIPTION_LOCAL = PersistentConfig(
+    "DOCLING_PICTURE_DESCRIPTION_LOCAL",
+    "rag.docling_picture_description_local",
+    docling_picture_description_local,
+)
+
+docling_picture_description_api = os.getenv("DOCLING_PICTURE_DESCRIPTION_API", "")
+try:
+    docling_picture_description_api = json.loads(docling_picture_description_api)
+except json.JSONDecodeError:
+    docling_picture_description_api = {}
+
+DOCLING_PICTURE_DESCRIPTION_API = PersistentConfig(
+    "DOCLING_PICTURE_DESCRIPTION_API",
+    "rag.docling_picture_description_api",
+    docling_picture_description_api,
+)
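
Both `DOCLING_PICTURE_DESCRIPTION_LOCAL` and `DOCLING_PICTURE_DESCRIPTION_API` are read as JSON strings and silently fall back to `{}` on a parse error. A sketch of setting them before startup; the option keys shown are assumptions for illustration, not a documented schema (consult the Docling docs for the actual keys):

```python
import json
import os

# Hypothetical option payloads; keys are illustrative only.
os.environ["DOCLING_PICTURE_DESCRIPTION_MODE"] = "api"
os.environ["DOCLING_PICTURE_DESCRIPTION_API"] = json.dumps(
    {"url": "http://localhost:11434/v1/chat/completions", "params": {"model": "llava"}}
)
```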
 DOCUMENT_INTELLIGENCE_ENDPOINT = PersistentConfig(
     "DOCUMENT_INTELLIGENCE_ENDPOINT",
     "rag.document_intelligence_endpoint",
@@ -2471,6 +2525,18 @@ PERPLEXITY_API_KEY = PersistentConfig(
     os.getenv("PERPLEXITY_API_KEY", ""),
 )
 
+PERPLEXITY_MODEL = PersistentConfig(
+    "PERPLEXITY_MODEL",
+    "rag.web.search.perplexity_model",
+    os.getenv("PERPLEXITY_MODEL", "sonar"),
+)
+
+PERPLEXITY_SEARCH_CONTEXT_USAGE = PersistentConfig(
+    "PERPLEXITY_SEARCH_CONTEXT_USAGE",
+    "rag.web.search.perplexity_search_context_usage",
+    os.getenv("PERPLEXITY_SEARCH_CONTEXT_USAGE", "medium"),
+)
+
 SOUGOU_API_SID = PersistentConfig(
     "SOUGOU_API_SID",
     "rag.web.search.sougou_api_sid",
@@ -3009,3 +3075,23 @@ LDAP_VALIDATE_CERT = PersistentConfig(
 LDAP_CIPHERS = PersistentConfig(
     "LDAP_CIPHERS", "ldap.server.ciphers", os.environ.get("LDAP_CIPHERS", "ALL")
 )
+
+# For LDAP Group Management
+ENABLE_LDAP_GROUP_MANAGEMENT = PersistentConfig(
+    "ENABLE_LDAP_GROUP_MANAGEMENT",
+    "ldap.group.enable_management",
+    os.environ.get("ENABLE_LDAP_GROUP_MANAGEMENT", "False").lower() == "true",
+)
+
+ENABLE_LDAP_GROUP_CREATION = PersistentConfig(
+    "ENABLE_LDAP_GROUP_CREATION",
+    "ldap.group.enable_creation",
+    os.environ.get("ENABLE_LDAP_GROUP_CREATION", "False").lower() == "true",
+)
+
+LDAP_ATTRIBUTE_FOR_GROUPS = PersistentConfig(
+    "LDAP_ATTRIBUTE_FOR_GROUPS",
+    "ldap.server.attribute_for_groups",
+    os.environ.get("LDAP_ATTRIBUTE_FOR_GROUPS", "memberOf"),
+)

View file

@@ -5,6 +5,7 @@ import os
 import pkgutil
 import sys
 import shutil
+from uuid import uuid4
 from pathlib import Path
 
 import markdown

@@ -130,6 +131,7 @@ else:
     PACKAGE_DATA = {"version": "0.0.0"}
 
 VERSION = PACKAGE_DATA["version"]
+INSTANCE_ID = os.environ.get("INSTANCE_ID", str(uuid4()))
 
 # Function to parse each section

View file

@@ -25,6 +25,7 @@ from open_webui.socket.main import (
 )
 
+from open_webui.models.users import UserModel
 from open_webui.models.functions import Functions
 from open_webui.models.models import Models

@@ -227,12 +228,7 @@ async def generate_function_chat_completion(
             "__task__": __task__,
             "__task_body__": __task_body__,
             "__files__": files,
-            "__user__": {
-                "id": user.id,
-                "email": user.email,
-                "name": user.name,
-                "role": user.role,
-            },
+            "__user__": user.model_dump() if isinstance(user, UserModel) else {},
             "__metadata__": metadata,
             "__request__": request,
         }

View file

@@ -8,6 +8,8 @@ import shutil
 import sys
 import time
 import random
+
+from uuid import uuid4
 from contextlib import asynccontextmanager
 from urllib.parse import urlencode, parse_qs, urlparse

@@ -19,6 +21,7 @@ from aiocache import cached
 import aiohttp
 import anyio.to_thread
 import requests
+from redis import Redis
 
 from fastapi import (

@@ -37,7 +40,7 @@ from fastapi import (
 from fastapi.openapi.docs import get_swagger_ui_html
 from fastapi.middleware.cors import CORSMiddleware
-from fastapi.responses import JSONResponse, RedirectResponse
+from fastapi.responses import FileResponse, JSONResponse, RedirectResponse
 from fastapi.staticfiles import StaticFiles
 from starlette_compress import CompressMiddleware

@@ -231,6 +234,9 @@ from open_webui.config import (
     DOCLING_OCR_ENGINE,
     DOCLING_OCR_LANG,
     DOCLING_DO_PICTURE_DESCRIPTION,
+    DOCLING_PICTURE_DESCRIPTION_MODE,
+    DOCLING_PICTURE_DESCRIPTION_LOCAL,
+    DOCLING_PICTURE_DESCRIPTION_API,
     DOCUMENT_INTELLIGENCE_ENDPOINT,
     DOCUMENT_INTELLIGENCE_KEY,
     MISTRAL_OCR_API_KEY,

@@ -268,6 +274,8 @@ from open_webui.config import (
     BRAVE_SEARCH_API_KEY,
     EXA_API_KEY,
     PERPLEXITY_API_KEY,
+    PERPLEXITY_MODEL,
+    PERPLEXITY_SEARCH_CONTEXT_USAGE,
     SOUGOU_API_SID,
     SOUGOU_API_SK,
     KAGI_SEARCH_API_KEY,

@@ -341,6 +349,10 @@ from open_webui.config import (
     LDAP_CA_CERT_FILE,
     LDAP_VALIDATE_CERT,
     LDAP_CIPHERS,
+    # LDAP Group Management
+    ENABLE_LDAP_GROUP_MANAGEMENT,
+    ENABLE_LDAP_GROUP_CREATION,
+    LDAP_ATTRIBUTE_FOR_GROUPS,
     # Misc
     ENV,
     CACHE_DIR,

@@ -386,6 +398,7 @@ from open_webui.env import (
     SAFE_MODE,
     SRC_LOG_LEVELS,
     VERSION,
+    INSTANCE_ID,
     WEBUI_BUILD_HASH,
     WEBUI_SECRET_KEY,
     WEBUI_SESSION_COOKIE_SAME_SITE,

@@ -413,6 +426,7 @@ from open_webui.utils.chat import (
     chat_completed as chat_completed_handler,
     chat_action as chat_action_handler,
 )
+from open_webui.utils.embeddings import generate_embeddings
 from open_webui.utils.middleware import process_chat_payload, process_chat_response
 from open_webui.utils.access_control import has_access

@@ -426,8 +440,10 @@ from open_webui.utils.auth import (
 from open_webui.utils.plugin import install_tool_and_function_dependencies
 from open_webui.utils.oauth import OAuthManager
 from open_webui.utils.security_headers import SecurityHeadersMiddleware
+from open_webui.utils.redis import get_redis_connection
 
 from open_webui.tasks import (
+    redis_task_command_listener,
     list_task_ids_by_chat_id,
     stop_task,
     list_tasks,
@@ -479,7 +495,9 @@ https://github.com/open-webui/open-webui
 
 @asynccontextmanager
 async def lifespan(app: FastAPI):
+    app.state.instance_id = INSTANCE_ID
     start_logger()
+
     if RESET_CONFIG_ON_START:
         reset_config()

@@ -491,6 +509,19 @@ async def lifespan(app: FastAPI):
         log.info("Installing external dependencies of functions and tools...")
         install_tool_and_function_dependencies()
 
+    app.state.redis = get_redis_connection(
+        redis_url=REDIS_URL,
+        redis_sentinels=get_sentinels_from_env(
+            REDIS_SENTINEL_HOSTS, REDIS_SENTINEL_PORT
+        ),
+        async_mode=True,
+    )
+
+    if app.state.redis is not None:
+        app.state.redis_task_command_listener = asyncio.create_task(
+            redis_task_command_listener(app)
+        )
+
     if THREAD_POOL_SIZE and THREAD_POOL_SIZE > 0:
         limiter = anyio.to_thread.current_default_thread_limiter()
         limiter.total_tokens = THREAD_POOL_SIZE

@@ -499,6 +530,9 @@ async def lifespan(app: FastAPI):
     yield
 
+    if hasattr(app.state, "redis_task_command_listener"):
+        app.state.redis_task_command_listener.cancel()
+
 
 app = FastAPI(
     title="Open WebUI",
@@ -510,10 +544,12 @@ app = FastAPI(
 
 oauth_manager = OAuthManager(app)
 
+app.state.instance_id = None
 app.state.config = AppConfig(
     redis_url=REDIS_URL,
     redis_sentinels=get_sentinels_from_env(REDIS_SENTINEL_HOSTS, REDIS_SENTINEL_PORT),
 )
+app.state.redis = None
 
 app.state.WEBUI_NAME = WEBUI_NAME
 app.state.LICENSE_METADATA = None
@@ -644,6 +680,11 @@ app.state.config.LDAP_CA_CERT_FILE = LDAP_CA_CERT_FILE
 app.state.config.LDAP_VALIDATE_CERT = LDAP_VALIDATE_CERT
 app.state.config.LDAP_CIPHERS = LDAP_CIPHERS
 
+# For LDAP Group Management
+app.state.config.ENABLE_LDAP_GROUP_MANAGEMENT = ENABLE_LDAP_GROUP_MANAGEMENT
+app.state.config.ENABLE_LDAP_GROUP_CREATION = ENABLE_LDAP_GROUP_CREATION
+app.state.config.LDAP_ATTRIBUTE_FOR_GROUPS = LDAP_ATTRIBUTE_FOR_GROUPS
+
 app.state.AUTH_TRUSTED_EMAIL_HEADER = WEBUI_AUTH_TRUSTED_EMAIL_HEADER
 app.state.AUTH_TRUSTED_NAME_HEADER = WEBUI_AUTH_TRUSTED_NAME_HEADER
@@ -698,6 +739,9 @@ app.state.config.DOCLING_SERVER_URL = DOCLING_SERVER_URL
 app.state.config.DOCLING_OCR_ENGINE = DOCLING_OCR_ENGINE
 app.state.config.DOCLING_OCR_LANG = DOCLING_OCR_LANG
 app.state.config.DOCLING_DO_PICTURE_DESCRIPTION = DOCLING_DO_PICTURE_DESCRIPTION
+app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE = DOCLING_PICTURE_DESCRIPTION_MODE
+app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = DOCLING_PICTURE_DESCRIPTION_LOCAL
+app.state.config.DOCLING_PICTURE_DESCRIPTION_API = DOCLING_PICTURE_DESCRIPTION_API
 app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = DOCUMENT_INTELLIGENCE_ENDPOINT
 app.state.config.DOCUMENT_INTELLIGENCE_KEY = DOCUMENT_INTELLIGENCE_KEY
 app.state.config.MISTRAL_OCR_API_KEY = MISTRAL_OCR_API_KEY
@@ -773,6 +817,8 @@ app.state.config.BING_SEARCH_V7_ENDPOINT = BING_SEARCH_V7_ENDPOINT
 app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY = BING_SEARCH_V7_SUBSCRIPTION_KEY
 app.state.config.EXA_API_KEY = EXA_API_KEY
 app.state.config.PERPLEXITY_API_KEY = PERPLEXITY_API_KEY
+app.state.config.PERPLEXITY_MODEL = PERPLEXITY_MODEL
+app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE = PERPLEXITY_SEARCH_CONTEXT_USAGE
 app.state.config.SOUGOU_API_SID = SOUGOU_API_SID
 app.state.config.SOUGOU_API_SK = SOUGOU_API_SK
 app.state.config.EXTERNAL_WEB_SEARCH_URL = EXTERNAL_WEB_SEARCH_URL
@@ -1203,6 +1249,37 @@ async def get_base_models(request: Request, user=Depends(get_admin_user)):
     return {"data": models}
 
 
+##################################
+# Embeddings
+##################################
+
+
+@app.post("/api/embeddings")
+async def embeddings(
+    request: Request, form_data: dict, user=Depends(get_verified_user)
+):
+    """
+    OpenAI-compatible embeddings endpoint.
+
+    This handler:
+      - Performs user/model checks and dispatches to the correct backend.
+      - Supports OpenAI, Ollama, arena models, pipelines, and any compatible provider.
+
+    Args:
+        request (Request): Request context.
+        form_data (dict): OpenAI-like payload (e.g., {"model": "...", "input": [...]})
+        user (UserModel): Authenticated user.
+
+    Returns:
+        dict: OpenAI-compatible embeddings response.
+    """
+    # Make sure models are loaded in app state
+    if not request.app.state.MODELS:
+        await get_all_models(request, user=user)
+    # Use generic dispatcher in utils.embeddings
+    return await generate_embeddings(request, form_data, user)
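
A quick way to exercise the new endpoint once the server is running; the host, API key, and model name below are placeholders:

```python
import requests

response = requests.post(
    "http://localhost:3000/api/embeddings",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json={"model": "text-embedding-3-small", "input": ["Hello, world!"]},
)
print(response.json())  # OpenAI-compatible embeddings response
```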
 @app.post("/api/chat/completions")
 async def chat_completion(
     request: Request,
@@ -1344,26 +1421,30 @@ async def chat_action(
 
 @app.post("/api/tasks/stop/{task_id}")
-async def stop_task_endpoint(task_id: str, user=Depends(get_verified_user)):
+async def stop_task_endpoint(
+    request: Request, task_id: str, user=Depends(get_verified_user)
+):
     try:
-        result = await stop_task(task_id)
+        result = await stop_task(request, task_id)
         return result
     except ValueError as e:
         raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=str(e))
 
 
 @app.get("/api/tasks")
-async def list_tasks_endpoint(user=Depends(get_verified_user)):
-    return {"tasks": list_tasks()}
+async def list_tasks_endpoint(request: Request, user=Depends(get_verified_user)):
+    return {"tasks": await list_tasks(request)}
 
 
 @app.get("/api/tasks/chat/{chat_id}")
-async def list_tasks_by_chat_id_endpoint(chat_id: str, user=Depends(get_verified_user)):
+async def list_tasks_by_chat_id_endpoint(
+    request: Request, chat_id: str, user=Depends(get_verified_user)
+):
     chat = Chats.get_chat_by_id(chat_id)
     if chat is None or chat.user_id != user.id:
         return {"task_ids": []}
 
-    task_ids = list_task_ids_by_chat_id(chat_id)
+    task_ids = await list_task_ids_by_chat_id(request, chat_id)
     print(f"Task IDs for chat {chat_id}: {task_ids}")
     return {"task_ids": task_ids}
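
The HTTP surface of these task routes is unchanged even though the helpers now take the request (so they can reach Redis in clustered deployments). A client-side sketch, with placeholder host, key, and chat id:

```python
import requests

base = "http://localhost:3000"                       # placeholder host
headers = {"Authorization": "Bearer YOUR_API_KEY"}   # placeholder key

# List task ids for a chat, then stop the first one if any exist.
task_ids = requests.get(
    f"{base}/api/tasks/chat/CHAT_ID", headers=headers
).json()["task_ids"]
if task_ids:
    requests.post(f"{base}/api/tasks/stop/{task_ids[0]}", headers=headers)
```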
@@ -1634,7 +1715,20 @@ async def healthcheck_with_db():
 app.mount("/static", StaticFiles(directory=STATIC_DIR), name="static")
-app.mount("/cache", StaticFiles(directory=CACHE_DIR), name="cache")
+
+
+@app.get("/cache/{path:path}")
+async def serve_cache_file(
+    path: str,
+    user=Depends(get_verified_user),
+):
+    file_path = os.path.abspath(os.path.join(CACHE_DIR, path))
+    # prevent path traversal
+    if not file_path.startswith(os.path.abspath(CACHE_DIR)):
+        raise HTTPException(status_code=404, detail="File not found")
+    if not os.path.isfile(file_path):
+        raise HTTPException(status_code=404, detail="File not found")
+    return FileResponse(file_path)
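
The `abspath` prefix check is the standard guard against `..` traversal: normalize first, then verify the result is still under `CACHE_DIR`. A small demonstration of why the normalization step matters (the directory path is an example):

```python
import os

CACHE_DIR = "/app/backend/data/cache"  # example location

malicious = "../../etc/passwd"
resolved = os.path.abspath(os.path.join(CACHE_DIR, malicious))
print(resolved)  # /app/backend/etc/passwd (outside CACHE_DIR)
print(resolved.startswith(os.path.abspath(CACHE_DIR)))  # False -> request rejected
```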
 def swagger_ui_html(*args, **kwargs):

View file

@@ -207,9 +207,39 @@ class GroupTable:
         except Exception:
             return False
 
-    def sync_user_groups_by_group_names(
+    def create_groups_by_group_names(
         self, user_id: str, group_names: list[str]
-    ) -> bool:
+    ) -> list[GroupModel]:
+        # check for existing groups
+        existing_groups = self.get_groups()
+        existing_group_names = {group.name for group in existing_groups}
+
+        new_groups = []
+        with get_db() as db:
+            for group_name in group_names:
+                if group_name not in existing_group_names:
+                    new_group = GroupModel(
+                        id=str(uuid.uuid4()),
+                        user_id=user_id,
+                        name=group_name,
+                        description="",
+                        created_at=int(time.time()),
+                        updated_at=int(time.time()),
+                    )
+                    try:
+                        result = Group(**new_group.model_dump())
+                        db.add(result)
+                        db.commit()
+                        db.refresh(result)
+                        new_groups.append(GroupModel.model_validate(result))
+                    except Exception as e:
+                        log.exception(e)
+                        continue
+        return new_groups
+
+    def sync_groups_by_group_names(self, user_id: str, group_names: list[str]) -> bool:
         with get_db() as db:
             try:
                 groups = db.query(Group).filter(Group.name.in_(group_names)).all()

View file

@@ -370,7 +370,7 @@ class UsersTable:
         except Exception:
             return False
 
-    def update_user_api_key_by_id(self, id: str, api_key: str) -> str:
+    def update_user_api_key_by_id(self, id: str, api_key: str) -> bool:
         try:
             with get_db() as db:
                 result = db.query(User).filter_by(id=id).update({"api_key": api_key})

View file

@@ -2,6 +2,7 @@ import requests
 import logging
 import ftfy
 import sys
+import json
 
 from langchain_community.document_loaders import (
     AzureAIDocumentIntelligenceLoader,
@@ -146,17 +147,32 @@ class DoclingLoader:
                 )
             }
 
-            params = {
-                "image_export_mode": "placeholder",
-                "table_mode": "accurate",
-            }
+            params = {"image_export_mode": "placeholder", "table_mode": "accurate"}
 
             if self.params:
-                if self.params.get("do_picture_classification"):
-                    params["do_picture_classification"] = self.params.get(
-                        "do_picture_classification"
+                if self.params.get("do_picture_description"):
+                    params["do_picture_description"] = self.params.get(
+                        "do_picture_description"
                     )
 
+                    picture_description_mode = self.params.get(
+                        "picture_description_mode", ""
+                    ).lower()
+
+                    if picture_description_mode == "local" and self.params.get(
+                        "picture_description_local", {}
+                    ):
+                        params["picture_description_local"] = self.params.get(
+                            "picture_description_local", {}
+                        )
+
+                    elif picture_description_mode == "api" and self.params.get(
+                        "picture_description_api", {}
+                    ):
+                        params["picture_description_api"] = self.params.get(
+                            "picture_description_api", {}
+                        )
+
                 if self.params.get("ocr_engine") and self.params.get("ocr_lang"):
                     params["ocr_engine"] = self.params.get("ocr_engine")
                     params["ocr_lang"] = [
@@ -284,17 +300,20 @@ class Loader:
             if self._is_text_file(file_ext, file_content_type):
                 loader = TextLoader(file_path, autodetect_encoding=True)
             else:
+                # Build params for DoclingLoader
+                params = self.kwargs.get("DOCLING_PARAMS", {})
+                if not isinstance(params, dict):
+                    try:
+                        params = json.loads(params)
+                    except json.JSONDecodeError:
+                        log.error("Invalid DOCLING_PARAMS format, expected JSON object")
+                        params = {}
+
                 loader = DoclingLoader(
                     url=self.kwargs.get("DOCLING_SERVER_URL"),
                     file_path=file_path,
                     mime_type=file_content_type,
-                    params={
-                        "ocr_engine": self.kwargs.get("DOCLING_OCR_ENGINE"),
-                        "ocr_lang": self.kwargs.get("DOCLING_OCR_LANG"),
-                        "do_picture_classification": self.kwargs.get(
-                            "DOCLING_DO_PICTURE_DESCRIPTION"
-                        ),
-                    },
+                    params=params,
                 )
         elif (
             self.engine == "document_intelligence"

View file

@@ -1,4 +1,5 @@
 import logging
+from xml.etree.ElementTree import ParseError
 from typing import Any, Dict, Generator, List, Optional, Sequence, Union
 from urllib.parse import parse_qs, urlparse

@@ -93,7 +94,6 @@ class YoutubeLoader:
                 "http": self.proxy_url,
                 "https": self.proxy_url,
             }
-            # Don't log complete URL because it might contain secrets
             log.debug(f"Using proxy URL: {self.proxy_url[:14]}...")
         else:
             youtube_proxies = None
@@ -110,11 +110,37 @@ class YoutubeLoader:
         for lang in self.language:
             try:
                 transcript = transcript_list.find_transcript([lang])
+
+                if transcript.is_generated:
+                    log.debug(f"Found generated transcript for language '{lang}'")
+                    try:
+                        transcript = transcript_list.find_manually_created_transcript(
+                            [lang]
+                        )
+                        log.debug(f"Found manual transcript for language '{lang}'")
+                    except NoTranscriptFound:
+                        log.debug(
+                            f"No manual transcript found for language '{lang}', using generated"
+                        )
+                        pass
+
                 log.debug(f"Found transcript for language '{lang}'")
-                transcript_pieces: List[Dict[str, Any]] = transcript.fetch()
+
+                try:
+                    transcript_pieces: List[Dict[str, Any]] = transcript.fetch()
+                except ParseError:
+                    log.debug(f"Empty or invalid transcript for language '{lang}'")
+                    continue
+
+                if not transcript_pieces:
+                    log.debug(f"Empty transcript for language '{lang}'")
+                    continue
+
                 transcript_text = " ".join(
                     map(
-                        lambda transcript_piece: transcript_piece.text.strip(" "),
+                        lambda transcript_piece: (
+                            transcript_piece.text.strip(" ")
+                            if hasattr(transcript_piece, "text")
+                            else ""
+                        ),
                         transcript_pieces,
                     )
                 )
@@ -131,6 +157,4 @@ class YoutubeLoader:
         log.warning(
             f"No transcript found for any of the specified languages: {languages_tried}. Verify if the video has transcripts, add more languages if needed."
         )
-        raise NoTranscriptFound(
-            f"No transcript found for any supported language. Verify if the video has transcripts, add more languages if needed."
-        )
+        raise NoTranscriptFound(self.video_id, self.language, list(transcript_list))

View file

@ -1,12 +1,16 @@
from typing import Optional, List, Dict, Any from typing import Optional, List, Dict, Any
import logging import logging
import json
from sqlalchemy import ( from sqlalchemy import (
func,
literal,
cast, cast,
column, column,
create_engine, create_engine,
Column, Column,
Integer, Integer,
MetaData, MetaData,
LargeBinary,
select, select,
text, text,
Text, Text,
@@ -28,7 +32,12 @@ from open_webui.retrieval.vector.main import (
     SearchResult,
     GetResult,
 )
-from open_webui.config import PGVECTOR_DB_URL, PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH
+from open_webui.config import (
+    PGVECTOR_DB_URL,
+    PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH,
+    PGVECTOR_PGCRYPTO,
+    PGVECTOR_PGCRYPTO_KEY,
+)
 
 from open_webui.env import SRC_LOG_LEVELS
@@ -39,14 +48,27 @@ log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["RAG"])
 
 
+def pgcrypto_encrypt(val, key):
+    return func.pgp_sym_encrypt(val, literal(key))
+
+
+def pgcrypto_decrypt(col, key, outtype="text"):
+    return func.cast(func.pgp_sym_decrypt(col, literal(key)), outtype)
+
+
 class DocumentChunk(Base):
     __tablename__ = "document_chunk"
 
     id = Column(Text, primary_key=True)
     vector = Column(Vector(dim=VECTOR_LENGTH), nullable=True)
     collection_name = Column(Text, nullable=False)
-    text = Column(Text, nullable=True)
-    vmetadata = Column(MutableDict.as_mutable(JSONB), nullable=True)
+
+    if PGVECTOR_PGCRYPTO:
+        text = Column(LargeBinary, nullable=True)
+        vmetadata = Column(LargeBinary, nullable=True)
+    else:
+        text = Column(Text, nullable=True)
+        vmetadata = Column(MutableDict.as_mutable(JSONB), nullable=True)
@@ -70,6 +92,15 @@ class PgvectorClient(VectorDBBase):
 
             # Ensure the pgvector extension is available
             self.session.execute(text("CREATE EXTENSION IF NOT EXISTS vector;"))
 
+            if PGVECTOR_PGCRYPTO:
+                # Ensure the pgcrypto extension is available for encryption
+                self.session.execute(text("CREATE EXTENSION IF NOT EXISTS pgcrypto;"))
+
+                if not PGVECTOR_PGCRYPTO_KEY:
+                    raise ValueError(
+                        "PGVECTOR_PGCRYPTO_KEY must be set when PGVECTOR_PGCRYPTO is enabled."
+                    )
+
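
A minimal sanity check of the pgcrypto primitives relied on here, assuming an existing SQLAlchemy `session` against a database where the extension is installed; `pgp_sym_encrypt`/`pgp_sym_decrypt` are standard pgcrypto functions:

```python
from sqlalchemy import text

# `session` is assumed to be an existing SQLAlchemy session.
key = "my-secret-key"  # stands in for PGVECTOR_PGCRYPTO_KEY
row = session.execute(
    text("SELECT pgp_sym_decrypt(pgp_sym_encrypt(:val, :key), :key) AS val"),
    {"val": "hello", "key": key},
).one()
assert row.val == "hello"  # plaintext round-trips through BYTEA ciphertext
```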
             # Check vector length consistency
             self.check_vector_length()
@@ -147,44 +178,39 @@ class PgvectorClient(VectorDBBase):
     def insert(self, collection_name: str, items: List[VectorItem]) -> None:
         try:
-            new_items = []
-            for item in items:
-                vector = self.adjust_vector_length(item["vector"])
-                new_chunk = DocumentChunk(
-                    id=item["id"],
-                    vector=vector,
-                    collection_name=collection_name,
-                    text=item["text"],
-                    vmetadata=item["metadata"],
-                )
-                new_items.append(new_chunk)
-            self.session.bulk_save_objects(new_items)
-            self.session.commit()
-            log.info(
-                f"Inserted {len(new_items)} items into collection '{collection_name}'."
-            )
-        except Exception as e:
-            self.session.rollback()
-            log.exception(f"Error during insert: {e}")
-            raise
-
-    def upsert(self, collection_name: str, items: List[VectorItem]) -> None:
-        try:
-            for item in items:
-                vector = self.adjust_vector_length(item["vector"])
-                existing = (
-                    self.session.query(DocumentChunk)
-                    .filter(DocumentChunk.id == item["id"])
-                    .first()
-                )
-                if existing:
-                    existing.vector = vector
-                    existing.text = item["text"]
-                    existing.vmetadata = item["metadata"]
-                    existing.collection_name = (
-                        collection_name  # Update collection_name if necessary
-                    )
-                else:
+            if PGVECTOR_PGCRYPTO:
+                for item in items:
+                    vector = self.adjust_vector_length(item["vector"])
+                    # Use raw SQL for BYTEA/pgcrypto
+                    self.session.execute(
+                        text(
+                            """
+                            INSERT INTO document_chunk
+                            (id, vector, collection_name, text, vmetadata)
+                            VALUES (
+                                :id, :vector, :collection_name,
+                                pgp_sym_encrypt(:text, :key),
+                                pgp_sym_encrypt(:metadata::text, :key)
+                            )
+                            ON CONFLICT (id) DO NOTHING
+                            """
+                        ),
+                        {
+                            "id": item["id"],
+                            "vector": vector,
+                            "collection_name": collection_name,
+                            "text": item["text"],
+                            "metadata": json.dumps(item["metadata"]),
+                            "key": PGVECTOR_PGCRYPTO_KEY,
+                        },
+                    )
+                self.session.commit()
+                log.info(f"Encrypted & inserted {len(items)} into '{collection_name}'")
+            else:
+                new_items = []
+                for item in items:
+                    vector = self.adjust_vector_length(item["vector"])
                     new_chunk = DocumentChunk(
                         id=item["id"],
                         vector=vector,
@@ -192,11 +218,78 @@ class PgvectorClient(VectorDBBase):
                         text=item["text"],
                         vmetadata=item["metadata"],
                     )
-                    self.session.add(new_chunk)
-            self.session.commit()
-            log.info(
-                f"Upserted {len(items)} items into collection '{collection_name}'."
-            )
+                    new_items.append(new_chunk)
+                self.session.bulk_save_objects(new_items)
+                self.session.commit()
+                log.info(
+                    f"Inserted {len(new_items)} items into collection '{collection_name}'."
+                )
+        except Exception as e:
+            self.session.rollback()
+            log.exception(f"Error during insert: {e}")
+            raise
+
+    def upsert(self, collection_name: str, items: List[VectorItem]) -> None:
+        try:
+            if PGVECTOR_PGCRYPTO:
+                for item in items:
+                    vector = self.adjust_vector_length(item["vector"])
+                    self.session.execute(
+                        text(
+                            """
+                            INSERT INTO document_chunk
+                            (id, vector, collection_name, text, vmetadata)
+                            VALUES (
+                                :id, :vector, :collection_name,
+                                pgp_sym_encrypt(:text, :key),
+                                pgp_sym_encrypt(:metadata::text, :key)
+                            )
+                            ON CONFLICT (id) DO UPDATE SET
+                                vector = EXCLUDED.vector,
+                                collection_name = EXCLUDED.collection_name,
+                                text = EXCLUDED.text,
+                                vmetadata = EXCLUDED.vmetadata
+                            """
+                        ),
+                        {
+                            "id": item["id"],
+                            "vector": vector,
+                            "collection_name": collection_name,
+                            "text": item["text"],
+                            "metadata": json.dumps(item["metadata"]),
+                            "key": PGVECTOR_PGCRYPTO_KEY,
+                        },
+                    )
+                self.session.commit()
+                log.info(f"Encrypted & upserted {len(items)} into '{collection_name}'")
+            else:
+                for item in items:
+                    vector = self.adjust_vector_length(item["vector"])
+                    existing = (
+                        self.session.query(DocumentChunk)
+                        .filter(DocumentChunk.id == item["id"])
+                        .first()
+                    )
+                    if existing:
+                        existing.vector = vector
+                        existing.text = item["text"]
+                        existing.vmetadata = item["metadata"]
+                        existing.collection_name = (
+                            collection_name  # Update collection_name if necessary
+                        )
+                    else:
+                        new_chunk = DocumentChunk(
+                            id=item["id"],
+                            vector=vector,
+                            collection_name=collection_name,
+                            text=item["text"],
+                            vmetadata=item["metadata"],
+                        )
+                        self.session.add(new_chunk)
+                self.session.commit()
+                log.info(
+                    f"Upserted {len(items)} items into collection '{collection_name}'."
+                )
         except Exception as e:
             self.session.rollback()
             log.exception(f"Error during upsert: {e}")
@@ -230,16 +323,32 @@ class PgvectorClient(VectorDBBase):
                 .alias("query_vectors")
             )
 
+            result_fields = [
+                DocumentChunk.id,
+            ]
+            if PGVECTOR_PGCRYPTO:
+                result_fields.append(
+                    pgcrypto_decrypt(
+                        DocumentChunk.text, PGVECTOR_PGCRYPTO_KEY, Text
+                    ).label("text")
+                )
+                result_fields.append(
+                    pgcrypto_decrypt(
+                        DocumentChunk.vmetadata, PGVECTOR_PGCRYPTO_KEY, JSONB
+                    ).label("vmetadata")
+                )
+            else:
+                result_fields.append(DocumentChunk.text)
+                result_fields.append(DocumentChunk.vmetadata)
+            result_fields.append(
+                (DocumentChunk.vector.cosine_distance(query_vectors.c.q_vector)).label(
+                    "distance"
+                )
+            )
+
             # Build the lateral subquery for each query vector
             subq = (
-                select(
-                    DocumentChunk.id,
-                    DocumentChunk.text,
-                    DocumentChunk.vmetadata,
-                    (
-                        DocumentChunk.vector.cosine_distance(query_vectors.c.q_vector)
-                    ).label("distance"),
-                )
+                select(*result_fields)
                 .where(DocumentChunk.collection_name == collection_name)
                 .order_by(
                     (DocumentChunk.vector.cosine_distance(query_vectors.c.q_vector))
@@ -299,17 +408,43 @@ class PgvectorClient(VectorDBBase):
         self, collection_name: str, filter: Dict[str, Any], limit: Optional[int] = None
     ) -> Optional[GetResult]:
         try:
-            query = self.session.query(DocumentChunk).filter(
-                DocumentChunk.collection_name == collection_name
-            )
-            for key, value in filter.items():
-                query = query.filter(DocumentChunk.vmetadata[key].astext == str(value))
-            if limit is not None:
-                query = query.limit(limit)
-            results = query.all()
+            if PGVECTOR_PGCRYPTO:
+                # Build where clause for vmetadata filter
+                where_clauses = [DocumentChunk.collection_name == collection_name]
+                for key, value in filter.items():
+                    # decrypt then check key: JSON filter after decryption
+                    where_clauses.append(
+                        pgcrypto_decrypt(
+                            DocumentChunk.vmetadata, PGVECTOR_PGCRYPTO_KEY, JSONB
+                        )[key].astext
+                        == str(value)
+                    )
+                stmt = select(
+                    DocumentChunk.id,
+                    pgcrypto_decrypt(
+                        DocumentChunk.text, PGVECTOR_PGCRYPTO_KEY, Text
+                    ).label("text"),
+                    pgcrypto_decrypt(
+                        DocumentChunk.vmetadata, PGVECTOR_PGCRYPTO_KEY, JSONB
+                    ).label("vmetadata"),
+                ).where(*where_clauses)
+                if limit is not None:
+                    stmt = stmt.limit(limit)
+                results = self.session.execute(stmt).all()
+            else:
+                query = self.session.query(DocumentChunk).filter(
+                    DocumentChunk.collection_name == collection_name
+                )
+                for key, value in filter.items():
+                    query = query.filter(
+                        DocumentChunk.vmetadata[key].astext == str(value)
+                    )
+                if limit is not None:
+                    query = query.limit(limit)
+                results = query.all()
 
             if not results:
                 return None
@@ -331,20 +466,38 @@ class PgvectorClient(VectorDBBase):
         self, collection_name: str, limit: Optional[int] = None
     ) -> Optional[GetResult]:
         try:
-            query = self.session.query(DocumentChunk).filter(
-                DocumentChunk.collection_name == collection_name
-            )
-            if limit is not None:
-                query = query.limit(limit)
-
-            results = query.all()
-
-            if not results:
-                return None
-
-            ids = [[result.id for result in results]]
-            documents = [[result.text for result in results]]
-            metadatas = [[result.vmetadata for result in results]]
+            if PGVECTOR_PGCRYPTO:
+                stmt = select(
+                    DocumentChunk.id,
+                    pgcrypto_decrypt(
+                        DocumentChunk.text, PGVECTOR_PGCRYPTO_KEY, Text
+                    ).label("text"),
+                    pgcrypto_decrypt(
+                        DocumentChunk.vmetadata, PGVECTOR_PGCRYPTO_KEY, JSONB
+                    ).label("vmetadata"),
+                ).where(DocumentChunk.collection_name == collection_name)
+                if limit is not None:
+                    stmt = stmt.limit(limit)
+                results = self.session.execute(stmt).all()
+
+                ids = [[row.id for row in results]]
+                documents = [[row.text for row in results]]
+                metadatas = [[row.vmetadata for row in results]]
+            else:
+                query = self.session.query(DocumentChunk).filter(
+                    DocumentChunk.collection_name == collection_name
+                )
+                if limit is not None:
+                    query = query.limit(limit)
+
+                results = query.all()
+
+                if not results:
+                    return None
+
+                ids = [[result.id for result in results]]
+                documents = [[result.text for result in results]]
+                metadatas = [[result.vmetadata for result in results]]
 
             return GetResult(ids=ids, documents=documents, metadatas=metadatas)
         except Exception as e:
@@ -358,17 +511,33 @@ class PgvectorClient(VectorDBBase):
         filter: Optional[Dict[str, Any]] = None,
     ) -> None:
         try:
-            query = self.session.query(DocumentChunk).filter(
-                DocumentChunk.collection_name == collection_name
-            )
-            if ids:
-                query = query.filter(DocumentChunk.id.in_(ids))
-            if filter:
-                for key, value in filter.items():
-                    query = query.filter(
-                        DocumentChunk.vmetadata[key].astext == str(value)
-                    )
-            deleted = query.delete(synchronize_session=False)
+            if PGVECTOR_PGCRYPTO:
+                wheres = [DocumentChunk.collection_name == collection_name]
+                if ids:
+                    wheres.append(DocumentChunk.id.in_(ids))
+                if filter:
+                    for key, value in filter.items():
+                        wheres.append(
+                            pgcrypto_decrypt(
+                                DocumentChunk.vmetadata, PGVECTOR_PGCRYPTO_KEY, JSONB
+                            )[key].astext
+                            == str(value)
+                        )
+                stmt = DocumentChunk.__table__.delete().where(*wheres)
+                result = self.session.execute(stmt)
+                deleted = result.rowcount
+            else:
+                query = self.session.query(DocumentChunk).filter(
+                    DocumentChunk.collection_name == collection_name
+                )
+                if ids:
+                    query = query.filter(DocumentChunk.id.in_(ids))
+                if filter:
+                    for key, value in filter.items():
+                        query = query.filter(
+                            DocumentChunk.vmetadata[key].astext == str(value)
+                        )
+                deleted = query.delete(synchronize_session=False)
             self.session.commit()
             log.info(f"Deleted {deleted} items from collection '{collection_name}'.")
         except Exception as e:

View file

@ -1,10 +1,20 @@
import logging import logging
from typing import Optional, List from typing import Optional, Literal
import requests import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
MODELS = Literal[
"sonar",
"sonar-pro",
"sonar-reasoning",
"sonar-reasoning-pro",
"sonar-deep-research",
]
SEARCH_CONTEXT_USAGE_LEVELS = Literal["low", "medium", "high"]
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"]) log.setLevel(SRC_LOG_LEVELS["RAG"])
@@ -14,6 +24,8 @@ def search_perplexity(
     query: str,
     count: int,
     filter_list: Optional[list[str]] = None,
+    model: MODELS = "sonar",
+    search_context_usage: SEARCH_CONTEXT_USAGE_LEVELS = "medium",
 ) -> list[SearchResult]:
     """Search using Perplexity API and return the results as a list of SearchResult objects.
@ -21,6 +33,9 @@ def search_perplexity(
api_key (str): A Perplexity API key api_key (str): A Perplexity API key
query (str): The query to search for query (str): The query to search for
count (int): Maximum number of results to return count (int): Maximum number of results to return
filter_list (Optional[list[str]]): List of domains to filter results
model (str): The Perplexity model to use (sonar, sonar-pro)
search_context_usage (str): Search context usage level (low, medium, high)
""" """
@ -33,7 +48,7 @@ def search_perplexity(
# Create payload for the API call # Create payload for the API call
payload = { payload = {
"model": "sonar", "model": model,
"messages": [ "messages": [
{ {
"role": "system", "role": "system",
@ -43,6 +58,9 @@ def search_perplexity(
], ],
"temperature": 0.2, # Lower temperature for more factual responses "temperature": 0.2, # Lower temperature for more factual responses
"stream": False, "stream": False,
"web_search_options": {
"search_context_usage": search_context_usage,
},
} }
headers = { headers = {
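
A hypothetical call using the widened signature (argument values are illustrative only):

    results = search_perplexity(
        api_key="pplx-...",
        query="retrieval-augmented generation",
        count=5,
        model="sonar-pro",
        search_context_usage="high",
    )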

View file

@@ -55,9 +55,8 @@ from typing import Optional, List

 from ssl import CERT_NONE, CERT_REQUIRED, PROTOCOL_TLS

-if ENABLE_LDAP.value:
-    from ldap3 import Server, Connection, NONE, Tls
-    from ldap3.utils.conv import escape_filter_chars
+from ldap3 import Server, Connection, NONE, Tls
+from ldap3.utils.conv import escape_filter_chars

 router = APIRouter()
@@ -229,14 +228,30 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
         if not connection_app.bind():
             raise HTTPException(400, detail="Application account bind failed")

+        ENABLE_LDAP_GROUP_MANAGEMENT = (
+            request.app.state.config.ENABLE_LDAP_GROUP_MANAGEMENT
+        )
+        ENABLE_LDAP_GROUP_CREATION = request.app.state.config.ENABLE_LDAP_GROUP_CREATION
+        LDAP_ATTRIBUTE_FOR_GROUPS = request.app.state.config.LDAP_ATTRIBUTE_FOR_GROUPS
+
+        search_attributes = [
+            f"{LDAP_ATTRIBUTE_FOR_USERNAME}",
+            f"{LDAP_ATTRIBUTE_FOR_MAIL}",
+            "cn",
+        ]
+
+        if ENABLE_LDAP_GROUP_MANAGEMENT:
+            search_attributes.append(f"{LDAP_ATTRIBUTE_FOR_GROUPS}")
+            log.info(
+                f"LDAP Group Management enabled. Adding {LDAP_ATTRIBUTE_FOR_GROUPS} to search attributes"
+            )
+
+        log.info(f"LDAP search attributes: {search_attributes}")
+
         search_success = connection_app.search(
             search_base=LDAP_SEARCH_BASE,
             search_filter=f"(&({LDAP_ATTRIBUTE_FOR_USERNAME}={escape_filter_chars(form_data.user.lower())}){LDAP_SEARCH_FILTERS})",
-            attributes=[
-                f"{LDAP_ATTRIBUTE_FOR_USERNAME}",
-                f"{LDAP_ATTRIBUTE_FOR_MAIL}",
-                "cn",
-            ],
+            attributes=search_attributes,
         )

         if not search_success or not connection_app.entries:
@@ -259,6 +274,69 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
         cn = str(entry["cn"])
         user_dn = entry.entry_dn

+        user_groups = []
+        if ENABLE_LDAP_GROUP_MANAGEMENT and LDAP_ATTRIBUTE_FOR_GROUPS in entry:
+            group_dns = entry[LDAP_ATTRIBUTE_FOR_GROUPS]
+            log.info(f"LDAP raw group DNs for user {username}: {group_dns}")
+
+            if group_dns:
+                log.info(f"LDAP group_dns original: {group_dns}")
+                log.info(f"LDAP group_dns type: {type(group_dns)}")
+                log.info(f"LDAP group_dns length: {len(group_dns)}")
+
+                if hasattr(group_dns, "value"):
+                    group_dns = group_dns.value
+                    log.info(f"Extracted .value property: {group_dns}")
+                elif hasattr(group_dns, "__iter__") and not isinstance(
+                    group_dns, (str, bytes)
+                ):
+                    group_dns = list(group_dns)
+                    log.info(f"Converted to list: {group_dns}")
+
+                if isinstance(group_dns, list):
+                    group_dns = [str(item) for item in group_dns]
+                else:
+                    group_dns = [str(group_dns)]
+
+                log.info(
+                    f"LDAP group_dns after processing - type: {type(group_dns)}, length: {len(group_dns)}"
+                )
+
+                for group_idx, group_dn in enumerate(group_dns):
+                    group_dn = str(group_dn)
+                    log.info(f"Processing group DN #{group_idx + 1}: {group_dn}")
+                    try:
+                        group_cn = None
+                        for item in group_dn.split(","):
+                            item = item.strip()
+                            if item.upper().startswith("CN="):
+                                group_cn = item[3:]
+                                break
+
+                        if group_cn:
+                            user_groups.append(group_cn)
+                        else:
+                            log.warning(
+                                f"Could not extract CN from group DN: {group_dn}"
+                            )
+                    except Exception as e:
+                        log.warning(
+                            f"Failed to extract group name from DN {group_dn}: {e}"
+                        )
+
+                log.info(
+                    f"LDAP groups for user {username}: {user_groups} (total: {len(user_groups)})"
+                )
+            else:
+                log.info(f"No groups found for user {username}")
+        elif ENABLE_LDAP_GROUP_MANAGEMENT:
+            log.warning(
+                f"LDAP Group Management enabled but {LDAP_ATTRIBUTE_FOR_GROUPS} attribute not found in user entry"
+            )
+
         if username == form_data.user.lower():
             connection_user = Connection(
                 server,
@@ -334,6 +412,22 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
                     user.id, request.app.state.config.USER_PERMISSIONS
                 )

+            if (
+                user.role != "admin"
+                and ENABLE_LDAP_GROUP_MANAGEMENT
+                and user_groups
+            ):
+                if ENABLE_LDAP_GROUP_CREATION:
+                    Groups.create_groups_by_group_names(user.id, user_groups)
+
+                try:
+                    Groups.sync_groups_by_group_names(user.id, user_groups)
+                    log.info(
+                        f"Successfully synced groups for user {user.id}: {user_groups}"
+                    )
+                except Exception as e:
+                    log.error(f"Failed to sync groups for user {user.id}: {e}")
+
             return {
                 "token": token,
                 "token_type": "Bearer",
@@ -386,7 +480,7 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
             group_names = [name.strip() for name in group_names if name.strip()]

             if group_names:
-                Groups.sync_user_groups_by_group_names(user.id, group_names)
+                Groups.sync_groups_by_group_names(user.id, group_names)

     elif WEBUI_AUTH == False:
         admin_email = "admin@localhost"
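
The CN extraction inside the group loop above is self-contained enough to test in isolation; a standalone sketch:

    def extract_cn(group_dn: str):
        # Take the first RDN of the form CN=<name> from a distinguished name.
        for item in group_dn.split(","):
            item = item.strip()
            if item.upper().startswith("CN="):
                return item[3:]
        return None

    assert extract_cn("CN=admins,OU=groups,DC=example,DC=com") == "admins"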

View file

@@ -420,7 +420,7 @@ def load_b64_image_data(b64_str):
     try:
         if "," in b64_str:
             header, encoded = b64_str.split(",", 1)
-            mime_type = header.split(";")[0]
+            mime_type = header.split(";")[0].lstrip("data:")

            img_data = base64.b64decode(encoded)
         else:
             mime_type = "image/png"
@@ -428,7 +428,7 @@ def load_b64_image_data(b64_str):
         return img_data, mime_type
     except Exception as e:
         log.exception(f"Error loading image data: {e}")
-        return None
+        return None, None


 def load_url_image_data(url, headers=None):
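
One caveat with the mime-type line above: str.lstrip("data:") strips any run of the characters d, a, t, : from the left, not the literal prefix, so a header like "data:text/plain" comes out as "ext/plain". A stricter prefix strip would look like this (hypothetical helper, not part of the diff):

    def strip_data_prefix(header: str) -> str:
        # Remove only the literal "data:" prefix, if present.
        return header[len("data:"):] if header.startswith("data:") else header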

View file

@@ -124,8 +124,9 @@ async def get_note_by_id(request: Request, id: str, user=Depends(get_verified_us
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )

-    if (user.role != "admin" and user.id != note.user_id) or (
-        not has_access(user.id, type="read", access_control=note.access_control)
+    if user.role != "admin" or (
+        user.id != note.user_id
+        and not has_access(user.id, type="read", access_control=note.access_control)
     ):
         raise HTTPException(
             status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
@@ -157,8 +158,9 @@ async def update_note_by_id(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )

-    if (user.role != "admin" and user.id != note.user_id) or (
-        not has_access(user.id, type="write", access_control=note.access_control)
+    if user.role != "admin" or (
+        user.id != note.user_id
+        and not has_access(user.id, type="write", access_control=note.access_control)
     ):
         raise HTTPException(
             status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
@@ -195,8 +197,9 @@ async def delete_note_by_id(request: Request, id: str, user=Depends(get_verified
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )

-    if (user.role != "admin" and user.id != note.user_id) or (
-        not has_access(user.id, type="write", access_control=note.access_control)
+    if user.role != "admin" or (
+        user.id != note.user_id
+        and not has_access(user.id, type="write", access_control=note.access_control)
     ):
         raise HTTPException(
             status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()

View file

@@ -1232,6 +1232,9 @@ class GenerateChatCompletionForm(BaseModel):
     stream: Optional[bool] = True
     keep_alive: Optional[Union[int, str]] = None
     tools: Optional[list[dict]] = None

+    model_config = ConfigDict(
+        extra="allow",
+    )


 async def get_ollama_url(request: Request, model: str, url_idx: Optional[int] = None):
@@ -1269,7 +1272,9 @@ async def generate_chat_completion(
             detail=str(e),
         )

-    payload = {**form_data.model_dump(exclude_none=True)}
+    if isinstance(form_data, BaseModel):
+        payload = {**form_data.model_dump(exclude_none=True)}

     if "metadata" in payload:
         del payload["metadata"]
@@ -1285,11 +1290,7 @@ async def generate_chat_completion(
     if params:
         system = params.pop("system", None)

-        # Unlike OpenAI, Ollama does not support params directly in the body
-        payload["options"] = apply_model_params_to_body_ollama(
-            params, (payload.get("options", {}) or {})
-        )
+        payload = apply_model_params_to_body_ollama(params, payload)
         payload = apply_model_system_prompt_to_body(system, payload, metadata, user)

     # Check if user has access to the model
@@ -1323,7 +1324,7 @@ async def generate_chat_completion(
         prefix_id = api_config.get("prefix_id", None)
         if prefix_id:
             payload["model"] = payload["model"].replace(f"{prefix_id}.", "")
-    # payload["keep_alive"] = -1 # keep alive forever

     return await send_post_request(
         url=f"{url}/api/chat",
         payload=json.dumps(payload),
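
The ConfigDict(extra="allow") addition means fields the form does not declare now pass validation and survive model_dump; a sketch, assuming "think" is such a pass-through field:

    form = GenerateChatCompletionForm(
        model="llama3", messages=[], think=True  # "think" is not a declared field
    )
    payload = {**form.model_dump(exclude_none=True)}  # "think" is retained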

View file

@@ -887,6 +887,88 @@ async def generate_chat_completion(
             await session.close()


+async def embeddings(request: Request, form_data: dict, user):
+    """
+    Calls the embeddings endpoint for OpenAI-compatible providers.
+    Args:
+        request (Request): The FastAPI request context.
+        form_data (dict): OpenAI-compatible embeddings payload.
+        user (UserModel): The authenticated user.
+    Returns:
+        dict: OpenAI-compatible embeddings response.
+    """
+    idx = 0
+
+    # Prepare payload/body
+    body = json.dumps(form_data)
+
+    # Find correct backend url/key based on model
+    await get_all_models(request, user=user)
+    model_id = form_data.get("model")
+    models = request.app.state.OPENAI_MODELS
+    if model_id in models:
+        idx = models[model_id]["urlIdx"]
+
+    url = request.app.state.config.OPENAI_API_BASE_URLS[idx]
+    key = request.app.state.config.OPENAI_API_KEYS[idx]
+
+    r = None
+    session = None
+    streaming = False
+
+    try:
+        session = aiohttp.ClientSession(trust_env=True)
+        r = await session.request(
+            method="POST",
+            url=f"{url}/embeddings",
+            data=body,
+            headers={
+                "Authorization": f"Bearer {key}",
+                "Content-Type": "application/json",
+                **(
+                    {
+                        "X-OpenWebUI-User-Name": user.name,
+                        "X-OpenWebUI-User-Id": user.id,
+                        "X-OpenWebUI-User-Email": user.email,
+                        "X-OpenWebUI-User-Role": user.role,
+                    }
+                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
+                    else {}
+                ),
+            },
+        )
+        r.raise_for_status()
+        if "text/event-stream" in r.headers.get("Content-Type", ""):
+            streaming = True
+            return StreamingResponse(
+                r.content,
+                status_code=r.status,
+                headers=dict(r.headers),
+                background=BackgroundTask(
+                    cleanup_response, response=r, session=session
+                ),
+            )
+        else:
+            response_data = await r.json()
+            return response_data
+    except Exception as e:
+        log.exception(e)
+        detail = None
+        if r is not None:
+            try:
+                res = await r.json()
+                if "error" in res:
+                    detail = f"External: {res['error']['message'] if 'message' in res['error'] else res['error']}"
+            except Exception:
+                detail = f"External: {e}"
+        raise HTTPException(
+            status_code=r.status if r else 500,
+            detail=detail if detail else "Open WebUI: Server Connection Error",
+        )
+    finally:
+        if not streaming and session:
+            if r:
+                r.close()
+            await session.close()
+
+
 @router.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
 async def proxy(path: str, request: Request, user=Depends(get_verified_user)):
     """

View file

@@ -414,6 +414,9 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
         "DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE,
         "DOCLING_OCR_LANG": request.app.state.config.DOCLING_OCR_LANG,
         "DOCLING_DO_PICTURE_DESCRIPTION": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
+        "DOCLING_PICTURE_DESCRIPTION_MODE": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
+        "DOCLING_PICTURE_DESCRIPTION_LOCAL": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
+        "DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
         "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
         "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
         "MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
@@ -467,6 +470,8 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
         "BING_SEARCH_V7_SUBSCRIPTION_KEY": request.app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
         "EXA_API_KEY": request.app.state.config.EXA_API_KEY,
         "PERPLEXITY_API_KEY": request.app.state.config.PERPLEXITY_API_KEY,
+        "PERPLEXITY_MODEL": request.app.state.config.PERPLEXITY_MODEL,
+        "PERPLEXITY_SEARCH_CONTEXT_USAGE": request.app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE,
         "SOUGOU_API_SID": request.app.state.config.SOUGOU_API_SID,
         "SOUGOU_API_SK": request.app.state.config.SOUGOU_API_SK,
         "WEB_LOADER_ENGINE": request.app.state.config.WEB_LOADER_ENGINE,
@@ -520,6 +525,8 @@ class WebConfig(BaseModel):
     BING_SEARCH_V7_SUBSCRIPTION_KEY: Optional[str] = None
     EXA_API_KEY: Optional[str] = None
     PERPLEXITY_API_KEY: Optional[str] = None
+    PERPLEXITY_MODEL: Optional[str] = None
+    PERPLEXITY_SEARCH_CONTEXT_USAGE: Optional[str] = None
     SOUGOU_API_SID: Optional[str] = None
     SOUGOU_API_SK: Optional[str] = None
     WEB_LOADER_ENGINE: Optional[str] = None
@@ -571,6 +578,9 @@ class ConfigForm(BaseModel):
     DOCLING_OCR_ENGINE: Optional[str] = None
     DOCLING_OCR_LANG: Optional[str] = None
     DOCLING_DO_PICTURE_DESCRIPTION: Optional[bool] = None
+    DOCLING_PICTURE_DESCRIPTION_MODE: Optional[str] = None
+    DOCLING_PICTURE_DESCRIPTION_LOCAL: Optional[dict] = None
+    DOCLING_PICTURE_DESCRIPTION_API: Optional[dict] = None
     DOCUMENT_INTELLIGENCE_ENDPOINT: Optional[str] = None
     DOCUMENT_INTELLIGENCE_KEY: Optional[str] = None
     MISTRAL_OCR_API_KEY: Optional[str] = None
@@ -744,6 +754,22 @@ async def update_rag_config(
         else request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION
     )

+    request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE = (
+        form_data.DOCLING_PICTURE_DESCRIPTION_MODE
+        if form_data.DOCLING_PICTURE_DESCRIPTION_MODE is not None
+        else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE
+    )
+    request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = (
+        form_data.DOCLING_PICTURE_DESCRIPTION_LOCAL
+        if form_data.DOCLING_PICTURE_DESCRIPTION_LOCAL is not None
+        else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL
+    )
+    request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API = (
+        form_data.DOCLING_PICTURE_DESCRIPTION_API
+        if form_data.DOCLING_PICTURE_DESCRIPTION_API is not None
+        else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API
+    )
+
     request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = (
         form_data.DOCUMENT_INTELLIGENCE_ENDPOINT
         if form_data.DOCUMENT_INTELLIGENCE_ENDPOINT is not None
@@ -907,6 +933,10 @@ async def update_rag_config(
         )
         request.app.state.config.EXA_API_KEY = form_data.web.EXA_API_KEY
         request.app.state.config.PERPLEXITY_API_KEY = form_data.web.PERPLEXITY_API_KEY
+        request.app.state.config.PERPLEXITY_MODEL = form_data.web.PERPLEXITY_MODEL
+        request.app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE = (
+            form_data.web.PERPLEXITY_SEARCH_CONTEXT_USAGE
+        )
         request.app.state.config.SOUGOU_API_SID = form_data.web.SOUGOU_API_SID
         request.app.state.config.SOUGOU_API_SK = form_data.web.SOUGOU_API_SK
@@ -977,6 +1007,9 @@ async def update_rag_config(
         "DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE,
         "DOCLING_OCR_LANG": request.app.state.config.DOCLING_OCR_LANG,
         "DOCLING_DO_PICTURE_DESCRIPTION": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
+        "DOCLING_PICTURE_DESCRIPTION_MODE": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
+        "DOCLING_PICTURE_DESCRIPTION_LOCAL": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
+        "DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
         "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
         "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
         "MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
@@ -1030,6 +1063,8 @@ async def update_rag_config(
         "BING_SEARCH_V7_SUBSCRIPTION_KEY": request.app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
         "EXA_API_KEY": request.app.state.config.EXA_API_KEY,
         "PERPLEXITY_API_KEY": request.app.state.config.PERPLEXITY_API_KEY,
+        "PERPLEXITY_MODEL": request.app.state.config.PERPLEXITY_MODEL,
+        "PERPLEXITY_SEARCH_CONTEXT_USAGE": request.app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE,
         "SOUGOU_API_SID": request.app.state.config.SOUGOU_API_SID,
         "SOUGOU_API_SK": request.app.state.config.SOUGOU_API_SK,
         "WEB_LOADER_ENGINE": request.app.state.config.WEB_LOADER_ENGINE,
@@ -1321,9 +1356,14 @@ def process_file(
             EXTERNAL_DOCUMENT_LOADER_API_KEY=request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
             TIKA_SERVER_URL=request.app.state.config.TIKA_SERVER_URL,
             DOCLING_SERVER_URL=request.app.state.config.DOCLING_SERVER_URL,
-            DOCLING_OCR_ENGINE=request.app.state.config.DOCLING_OCR_ENGINE,
-            DOCLING_OCR_LANG=request.app.state.config.DOCLING_OCR_LANG,
-            DOCLING_DO_PICTURE_DESCRIPTION=request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
+            DOCLING_PARAMS={
+                "ocr_engine": request.app.state.config.DOCLING_OCR_ENGINE,
+                "ocr_lang": request.app.state.config.DOCLING_OCR_LANG,
+                "do_picture_description": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
+                "picture_description_mode": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
+                "picture_description_local": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
+                "picture_description_api": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
+            },
             PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES,
             DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
             DOCUMENT_INTELLIGENCE_KEY=request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
@@ -1740,19 +1780,14 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
             request.app.state.config.WEB_SEARCH_RESULT_COUNT,
             request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
         )
-    elif engine == "exa":
-        return search_exa(
-            request.app.state.config.EXA_API_KEY,
-            query,
-            request.app.state.config.WEB_SEARCH_RESULT_COUNT,
-            request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
-        )
     elif engine == "perplexity":
         return search_perplexity(
             request.app.state.config.PERPLEXITY_API_KEY,
             query,
             request.app.state.config.WEB_SEARCH_RESULT_COUNT,
             request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
+            model=request.app.state.config.PERPLEXITY_MODEL,
+            search_context_usage=request.app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE,
         )
     elif engine == "sougou":
         if (
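
The update endpoint above repeats a keep-previous-value-on-None idiom for each new Docling field; the pattern in isolation:

    def keep_or_update(current, incoming):
        # None means "not provided by the form", so the stored value is kept.
        return incoming if incoming is not None else current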

View file

@@ -33,7 +33,7 @@ class CodeForm(BaseModel):


 @router.post("/code/format")
-async def format_code(form_data: CodeForm, user=Depends(get_verified_user)):
+async def format_code(form_data: CodeForm, user=Depends(get_admin_user)):
     try:
         formatted_code = black.format_str(form_data.code, mode=black.Mode())
         return {"code": formatted_code}

View file

@@ -2,16 +2,87 @@
 import asyncio
 from typing import Dict
 from uuid import uuid4
+import json
+
+from redis.asyncio import Redis
+
+from fastapi import Request
+
+from typing import Dict, List, Optional

 # A dictionary to keep track of active tasks
 tasks: Dict[str, asyncio.Task] = {}
 chat_tasks = {}

-def cleanup_task(task_id: str, id=None):
+REDIS_TASKS_KEY = "open-webui:tasks"
+REDIS_CHAT_TASKS_KEY = "open-webui:tasks:chat"
+REDIS_PUBSUB_CHANNEL = "open-webui:tasks:commands"
+
+
+def is_redis(request: Request) -> bool:
+    # Called everywhere a request is available to check Redis
+    return hasattr(request.app.state, "redis") and (request.app.state.redis is not None)
+
+
+async def redis_task_command_listener(app):
+    redis: Redis = app.state.redis
+    pubsub = redis.pubsub()
+    await pubsub.subscribe(REDIS_PUBSUB_CHANNEL)
+
+    async for message in pubsub.listen():
+        if message["type"] != "message":
+            continue
+        try:
+            command = json.loads(message["data"])
+            if command.get("action") == "stop":
+                task_id = command.get("task_id")
+                local_task = tasks.get(task_id)
+                if local_task:
+                    local_task.cancel()
+        except Exception as e:
+            print(f"Error handling distributed task command: {e}")
+
+
+### ------------------------------
+### REDIS-ENABLED HANDLERS
+### ------------------------------
+
+
+async def redis_save_task(redis: Redis, task_id: str, chat_id: Optional[str]):
+    pipe = redis.pipeline()
+    pipe.hset(REDIS_TASKS_KEY, task_id, chat_id or "")
+    if chat_id:
+        pipe.sadd(f"{REDIS_CHAT_TASKS_KEY}:{chat_id}", task_id)
+    await pipe.execute()
+
+
+async def redis_cleanup_task(redis: Redis, task_id: str, chat_id: Optional[str]):
+    pipe = redis.pipeline()
+    pipe.hdel(REDIS_TASKS_KEY, task_id)
+    if chat_id:
+        pipe.srem(f"{REDIS_CHAT_TASKS_KEY}:{chat_id}", task_id)
+        if (await pipe.scard(f"{REDIS_CHAT_TASKS_KEY}:{chat_id}").execute())[-1] == 0:
+            pipe.delete(f"{REDIS_CHAT_TASKS_KEY}:{chat_id}")  # Remove if empty set
+    await pipe.execute()
+
+
+async def redis_list_tasks(redis: Redis) -> List[str]:
+    return list(await redis.hkeys(REDIS_TASKS_KEY))
+
+
+async def redis_list_chat_tasks(redis: Redis, chat_id: str) -> List[str]:
+    return list(await redis.smembers(f"{REDIS_CHAT_TASKS_KEY}:{chat_id}"))
+
+
+async def redis_send_command(redis: Redis, command: dict):
+    await redis.publish(REDIS_PUBSUB_CHANNEL, json.dumps(command))
+
+
+async def cleanup_task(request, task_id: str, id=None):
     """
     Remove a completed or canceled task from the global `tasks` dictionary.
     """
+    if is_redis(request):
+        await redis_cleanup_task(request.app.state.redis, task_id, id)
+
     tasks.pop(task_id, None)  # Remove the task if it exists

     # If an ID is provided, remove the task from the chat_tasks dictionary
@@ -21,7 +92,7 @@ def cleanup_task(task_id: str, id=None):
         chat_tasks.pop(id, None)


-def create_task(coroutine, id=None):
+async def create_task(request, coroutine, id=None):
     """
     Create a new asyncio task and add it to the global task dictionary.
     """
@@ -29,7 +100,9 @@ def create_task(coroutine, id=None):
     task = asyncio.create_task(coroutine)  # Create the task

     # Add a done callback for cleanup
-    task.add_done_callback(lambda t: cleanup_task(task_id, id))
+    task.add_done_callback(
+        lambda t: asyncio.create_task(cleanup_task(request, task_id, id))
+    )
     tasks[task_id] = task

     # If an ID is provided, associate the task with that ID
@@ -38,34 +111,46 @@ def create_task(coroutine, id=None):
     else:
         chat_tasks[id] = [task_id]

+    if is_redis(request):
+        await redis_save_task(request.app.state.redis, task_id, id)
+
     return task_id, task


-def get_task(task_id: str):
-    """
-    Retrieve a task by its task ID.
-    """
-    return tasks.get(task_id)
-
-
-def list_tasks():
+async def list_tasks(request):
     """
     List all currently active task IDs.
     """
+    if is_redis(request):
+        return await redis_list_tasks(request.app.state.redis)
     return list(tasks.keys())


-def list_task_ids_by_chat_id(id):
+async def list_task_ids_by_chat_id(request, id):
     """
     List all tasks associated with a specific ID.
     """
+    if is_redis(request):
+        return await redis_list_chat_tasks(request.app.state.redis, id)
     return chat_tasks.get(id, [])


-async def stop_task(task_id: str):
+async def stop_task(request, task_id: str):
     """
     Cancel a running task and remove it from the global task list.
     """
+    if is_redis(request):
+        # PUBSUB: All instances check if they have this task, and stop if so.
+        await redis_send_command(
+            request.app.state.redis,
+            {
+                "action": "stop",
+                "task_id": task_id,
+            },
+        )
+        # Optionally check if task_id still in Redis a few moments later for feedback?
+        return {"status": True, "message": f"Stop signal sent for {task_id}"}
+
     task = tasks.get(task_id)
     if not task:
         raise ValueError(f"Task with ID {task_id} not found.")
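
With the Redis layer above, stopping a task on a multi-worker deployment becomes a publish rather than a direct cancel; a sketch of the flow (names mirror the diff):

    # Any instance may publish; only the instance holding task_id in its local
    # `tasks` dict reacts inside redis_task_command_listener and cancels it.
    await redis_send_command(redis, {"action": "stop", "task_id": task_id})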

View file

@@ -60,7 +60,7 @@ def get_permissions(

     # Combine permissions from all user groups
     for group in user_groups:
-        group_permissions = group.permissions
+        group_permissions = group.permissions or {}
         permissions = combine_permissions(permissions, group_permissions)

     # Ensure all fields from default_permissions are present and filled in

View file

@@ -23,6 +23,7 @@ from open_webui.env import (
     TRUSTED_SIGNATURE_KEY,
     STATIC_DIR,
     SRC_LOG_LEVELS,
+    WEBUI_AUTH_TRUSTED_EMAIL_HEADER,
 )

 from fastapi import BackgroundTasks, Depends, HTTPException, Request, Response, status
@@ -157,6 +158,7 @@ def get_http_authorization_cred(auth_header: Optional[str]):
 def get_current_user(
     request: Request,
+    response: Response,
     background_tasks: BackgroundTasks,
     auth_token: HTTPAuthorizationCredentials = Depends(bearer_security),
 ):
@@ -225,6 +227,21 @@ def get_current_user(
                 detail=ERROR_MESSAGES.INVALID_TOKEN,
             )
         else:
+            if WEBUI_AUTH_TRUSTED_EMAIL_HEADER:
+                trusted_email = request.headers.get(
+                    WEBUI_AUTH_TRUSTED_EMAIL_HEADER, ""
+                ).lower()
+                if trusted_email and user.email != trusted_email:
+                    # Delete the token cookie
+                    response.delete_cookie("token")
+                    # Delete OAuth token if present
+                    if request.cookies.get("oauth_id_token"):
+                        response.delete_cookie("oauth_id_token")
+                    raise HTTPException(
+                        status_code=status.HTTP_401_UNAUTHORIZED,
+                        detail="User mismatch. Please sign in again.",
+                    )
+
             # Add user info to current span
             current_span = trace.get_current_span()
             if current_span:
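
A condensed sketch of the mismatch check above (the header name is illustrative; the real one comes from WEBUI_AUTH_TRUSTED_EMAIL_HEADER):

    trusted_email = request.headers.get("X-Auth-Email", "").lower()  # hypothetical header
    if trusted_email and user.email != trusted_email:
        response.delete_cookie("token")  # force a fresh sign-in as the proxied user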

View file

@@ -320,12 +320,7 @@ async def chat_completed(request: Request, form_data: dict, user: Any):
     extra_params = {
         "__event_emitter__": get_event_emitter(metadata),
         "__event_call__": get_event_call(metadata),
-        "__user__": {
-            "id": user.id,
-            "email": user.email,
-            "name": user.name,
-            "role": user.role,
-        },
+        "__user__": user.model_dump() if isinstance(user, UserModel) else {},
         "__metadata__": metadata,
         "__request__": request,
         "__model__": model,
@@ -424,12 +419,7 @@ async def chat_action(request: Request, action_id: str, form_data: dict, user: A
                 params[key] = value

         if "__user__" in sig.parameters:
-            __user__ = {
-                "id": user.id,
-                "email": user.email,
-                "name": user.name,
-                "role": user.role,
-            }
+            __user__ = (user.model_dump() if isinstance(user, UserModel) else {},)

             try:
                 if hasattr(function_module, "UserValves"):

View file

@@ -0,0 +1,90 @@
import random
import logging
import sys

from fastapi import Request

from open_webui.models.users import UserModel
from open_webui.models.models import Models
from open_webui.utils.models import check_model_access
from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL, BYPASS_MODEL_ACCESS_CONTROL

from open_webui.routers.openai import embeddings as openai_embeddings
from open_webui.routers.ollama import (
    embeddings as ollama_embeddings,
    GenerateEmbeddingsForm,
)

from open_webui.utils.payload import convert_embedding_payload_openai_to_ollama
from open_webui.utils.response import convert_embedding_response_ollama_to_openai

logging.basicConfig(stream=sys.stdout, level=GLOBAL_LOG_LEVEL)
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MAIN"])


async def generate_embeddings(
    request: Request,
    form_data: dict,
    user: UserModel,
    bypass_filter: bool = False,
):
    """
    Dispatch and handle embeddings generation based on the model type (OpenAI, Ollama).
    Args:
        request (Request): The FastAPI request context.
        form_data (dict): The input data sent to the endpoint.
        user (UserModel): The authenticated user.
        bypass_filter (bool): If True, disables access filtering (default False).
    Returns:
        dict: The embeddings response, following OpenAI API compatibility.
    """
    if BYPASS_MODEL_ACCESS_CONTROL:
        bypass_filter = True

    # Attach extra metadata from request.state if present
    if hasattr(request.state, "metadata"):
        if "metadata" not in form_data:
            form_data["metadata"] = request.state.metadata
        else:
            form_data["metadata"] = {
                **form_data["metadata"],
                **request.state.metadata,
            }

    # If "direct" flag present, use only that model
    if getattr(request.state, "direct", False) and hasattr(request.state, "model"):
        models = {
            request.state.model["id"]: request.state.model,
        }
    else:
        models = request.app.state.MODELS

    model_id = form_data.get("model")
    if model_id not in models:
        raise Exception("Model not found")

    model = models[model_id]

    # Access filtering
    if not getattr(request.state, "direct", False):
        if not bypass_filter and user.role == "user":
            check_model_access(user, model)

    # Ollama backend
    if model.get("owned_by") == "ollama":
        ollama_payload = convert_embedding_payload_openai_to_ollama(form_data)
        response = await ollama_embeddings(
            request=request,
            form_data=GenerateEmbeddingsForm(**ollama_payload),
            user=user,
        )
        return convert_embedding_response_ollama_to_openai(response)

    # Default: OpenAI or compatible backend
    return await openai_embeddings(
        request=request,
        form_data=form_data,
        user=user,
    )

View file

@@ -37,7 +37,12 @@ from open_webui.routers.tasks import (
     generate_chat_tags,
 )
 from open_webui.routers.retrieval import process_web_search, SearchForm
-from open_webui.routers.images import image_generations, GenerateImageForm
+from open_webui.routers.images import (
+    load_b64_image_data,
+    image_generations,
+    GenerateImageForm,
+    upload_image,
+)
 from open_webui.routers.pipelines import (
     process_pipeline_inlet_filter,
     process_pipeline_outlet_filter,
@@ -693,13 +698,8 @@ def apply_params_to_form_data(form_data, model):
         params = deep_update(params, custom_params)

     if model.get("ollama"):
-        # Ollama specific parameters
         form_data["options"] = params
-
-        if "format" in params:
-            form_data["format"] = params["format"]
-
-        if "keep_alive" in params:
-            form_data["keep_alive"] = params["keep_alive"]
     else:
         if isinstance(params, dict):
             for key, value in params.items():
@@ -727,12 +727,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
     extra_params = {
         "__event_emitter__": event_emitter,
         "__event_call__": event_call,
-        "__user__": {
-            "id": user.id,
-            "email": user.email,
-            "name": user.name,
-            "role": user.role,
-        },
+        "__user__": user.model_dump() if isinstance(user, UserModel) else {},
         "__metadata__": metadata,
         "__request__": request,
         "__model__": model,
@@ -1327,12 +1322,7 @@ async def process_chat_response(
     extra_params = {
         "__event_emitter__": event_emitter,
         "__event_call__": event_caller,
-        "__user__": {
-            "id": user.id,
-            "email": user.email,
-            "name": user.name,
-            "role": user.role,
-        },
+        "__user__": user.model_dump() if isinstance(user, UserModel) else {},
         "__metadata__": metadata,
         "__request__": request,
         "__model__": model,
@@ -1876,9 +1866,11 @@ async def process_chat_response(
                             value = delta.get("content")

-                            reasoning_content = delta.get(
-                                "reasoning_content"
-                            ) or delta.get("reasoning")
+                            reasoning_content = (
+                                delta.get("reasoning_content")
+                                or delta.get("reasoning")
+                                or delta.get("thinking")
+                            )
                             if reasoning_content:
                                 if (
                                     not content_blocks
@@ -2269,28 +2261,21 @@ async def process_chat_response(
                                     stdoutLines = stdout.split("\n")
                                     for idx, line in enumerate(stdoutLines):
                                         if "data:image/png;base64" in line:
-                                            id = str(uuid4())
-
-                                            # ensure the path exists
-                                            os.makedirs(
-                                                os.path.join(CACHE_DIR, "images"),
-                                                exist_ok=True,
-                                            )
-
-                                            image_path = os.path.join(
-                                                CACHE_DIR,
-                                                f"images/{id}.png",
-                                            )
-
-                                            with open(image_path, "wb") as f:
-                                                f.write(
-                                                    base64.b64decode(
-                                                        line.split(",")[1]
-                                                    )
-                                                )
+                                            image_url = ""
+                                            # Extract base64 image data from the line
+                                            image_data, content_type = (
+                                                load_b64_image_data(line)
+                                            )
+                                            if image_data is not None:
+                                                image_url = upload_image(
+                                                    request,
+                                                    image_data,
+                                                    content_type,
+                                                    metadata,
+                                                    user,
+                                                )

                                             stdoutLines[idx] = (
-                                                f"![Output Image {idx}](/cache/images/{id}.png)"
+                                                f"![Output Image]({image_url})"
                                             )

                                     output["stdout"] = "\n".join(stdoutLines)
@@ -2301,30 +2286,22 @@ async def process_chat_response(
                                     resultLines = result.split("\n")
                                     for idx, line in enumerate(resultLines):
                                         if "data:image/png;base64" in line:
-                                            id = str(uuid4())
-
-                                            # ensure the path exists
-                                            os.makedirs(
-                                                os.path.join(CACHE_DIR, "images"),
-                                                exist_ok=True,
-                                            )
-
-                                            image_path = os.path.join(
-                                                CACHE_DIR,
-                                                f"images/{id}.png",
-                                            )
-
-                                            with open(image_path, "wb") as f:
-                                                f.write(
-                                                    base64.b64decode(
-                                                        line.split(",")[1]
-                                                    )
-                                                )
+                                            image_url = ""
+                                            # Extract base64 image data from the line
+                                            image_data, content_type = (
+                                                load_b64_image_data(line)
+                                            )
+                                            if image_data is not None:
+                                                image_url = upload_image(
+                                                    request,
+                                                    image_data,
+                                                    content_type,
+                                                    metadata,
+                                                    user,
+                                                )

                                             resultLines[idx] = (
-                                                f"![Output Image {idx}](/cache/images/{id}.png)"
+                                                f"![Output Image]({image_url})"
                                             )

                                     output["result"] = "\n".join(resultLines)
                                 except Exception as e:
                                     output = str(e)
@@ -2433,8 +2410,8 @@ async def process_chat_response(
         await response.background()

     # background_tasks.add_task(post_response_handler, response, events)
-    task_id, _ = create_task(
-        post_response_handler(response, events), id=metadata["chat_id"]
+    task_id, _ = await create_task(
+        request, post_response_handler(response, events), id=metadata["chat_id"]
     )
     return {"status": True, "task_id": task_id}

View file

@@ -208,6 +208,7 @@ def openai_chat_message_template(model: str):
 def openai_chat_chunk_message_template(
     model: str,
     content: Optional[str] = None,
+    reasoning_content: Optional[str] = None,
     tool_calls: Optional[list[dict]] = None,
     usage: Optional[dict] = None,
 ) -> dict:
@@ -220,6 +221,9 @@ def openai_chat_chunk_message_template(
     if content:
         template["choices"][0]["delta"]["content"] = content

+    if reasoning_content:
+        template["choices"][0]["delta"]["reasoning_content"] = reasoning_content
+
     if tool_calls:
         template["choices"][0]["delta"]["tool_calls"] = tool_calls

@@ -234,6 +238,7 @@ def openai_chat_chunk_message_template(
 def openai_chat_completion_message_template(
     model: str,
     message: Optional[str] = None,
+    reasoning_content: Optional[str] = None,
     tool_calls: Optional[list[dict]] = None,
     usage: Optional[dict] = None,
 ) -> dict:
@@ -241,8 +246,9 @@ def openai_chat_completion_message_template(
     template["object"] = "chat.completion"
     if message is not None:
         template["choices"][0]["message"] = {
-            "content": message,
             "role": "assistant",
+            "content": message,
+            **({"reasoning_content": reasoning_content} if reasoning_content else {}),
             **({"tool_calls": tool_calls} if tool_calls else {}),
         }
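
A hypothetical chunk built with the new parameter (other arguments left at their defaults):

    chunk = openai_chat_chunk_message_template(
        "llama3", content=None, reasoning_content="Considering options..."
    )
    # chunk["choices"][0]["delta"]["reasoning_content"] == "Considering options..."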

View file

@@ -175,16 +175,32 @@ def apply_model_params_to_body_ollama(params: dict, form_data: dict) -> dict:
         "num_thread": int,
     }

-    # Extract keep_alive from options if it exists
-    if "options" in form_data and "keep_alive" in form_data["options"]:
-        form_data["keep_alive"] = form_data["options"]["keep_alive"]
-        del form_data["options"]["keep_alive"]
-
-    if "options" in form_data and "format" in form_data["options"]:
-        form_data["format"] = form_data["options"]["format"]
-        del form_data["options"]["format"]
-
-    return apply_model_params_to_body(params, form_data, mappings)
+    def parse_json(value: str) -> dict:
+        """
+        Parses a JSON string into a dictionary, handling potential JSONDecodeError.
+        """
+        try:
+            return json.loads(value)
+        except Exception as e:
+            return value
+
+    ollama_root_params = {
+        "format": lambda x: parse_json(x),
+        "keep_alive": lambda x: parse_json(x),
+        "think": bool,
+    }
+
+    for key, value in ollama_root_params.items():
+        if (param := params.get(key, None)) is not None:
+            # Copy the parameter to new name then delete it, to prevent Ollama warning of invalid option provided
+            form_data[key] = value(param)
+            del params[key]
+
+    # Unlike OpenAI, Ollama does not support params directly in the body
+    form_data["options"] = apply_model_params_to_body(
+        params, (form_data.get("options", {}) or {}), mappings
+    )
+
+    return form_data


 def convert_messages_openai_to_ollama(messages: list[dict]) -> list[dict]:
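
parse_json above deliberately falls back to the raw value, so both a JSON schema string and a plain keyword survive; illustrative inputs:

    parse_json('{"type": "object"}')  # -> {'type': 'object'}
    parse_json("json")                # -> 'json' (not valid JSON, returned as-is)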
@@ -279,36 +295,48 @@ def convert_payload_openai_to_ollama(openai_payload: dict) -> dict:
         openai_payload.get("messages")
     )
     ollama_payload["stream"] = openai_payload.get("stream", False)

     if "tools" in openai_payload:
         ollama_payload["tools"] = openai_payload["tools"]

-    if "format" in openai_payload:
-        ollama_payload["format"] = openai_payload["format"]
-
     # If there are advanced parameters in the payload, format them in Ollama's options field
     if openai_payload.get("options"):
-        ollama_payload["options"] = openai_payload["options"]
+        ollama_options = openai_payload["options"]

+        def parse_json(value: str) -> dict:
+            """
+            Parses a JSON string into a dictionary, handling potential JSONDecodeError.
+            """
+            try:
+                return json.loads(value)
+            except Exception as e:
+                return value
+
+        ollama_root_params = {
+            "format": lambda x: parse_json(x),
+            "keep_alive": lambda x: parse_json(x),
+            "think": bool,
+        }
+
+        # Ollama's options field can contain parameters that should be at the root level.
+        for key, value in ollama_root_params.items():
+            if (param := ollama_options.get(key, None)) is not None:
+                # Copy the parameter to new name then delete it, to prevent Ollama warning of invalid option provided
+                ollama_payload[key] = value(param)
+                del ollama_options[key]
+
         # Re-Mapping OpenAI's `max_tokens` -> Ollama's `num_predict`
         if "max_tokens" in ollama_options:
             ollama_options["num_predict"] = ollama_options["max_tokens"]
-            del ollama_options[
-                "max_tokens"
-            ]  # To prevent Ollama warning of invalid option provided
+            del ollama_options["max_tokens"]

         # Ollama lacks a "system" prompt option. It has to be provided as a direct parameter, so we copy it down.
-        # Comment: Not sure why this is needed, but we'll keep it for compatibility.
         if "system" in ollama_options:
             ollama_payload["system"] = ollama_options["system"]
-            del ollama_options[
-                "system"
-            ]  # To prevent Ollama warning of invalid option provided
+            del ollama_options["system"]

-        # Extract keep_alive from options if it exists
-        if "keep_alive" in ollama_options:
-            ollama_payload["keep_alive"] = ollama_options["keep_alive"]
-            del ollama_options["keep_alive"]
+        ollama_payload["options"] = ollama_options

     # If there is the "stop" parameter in the openai_payload, remap it to the ollama_payload.options
     if "stop" in openai_payload:
@@ -329,3 +357,32 @@ def convert_payload_openai_to_ollama(openai_payload: dict) -> dict:
             ollama_payload["format"] = format

     return ollama_payload
+
+
+def convert_embedding_payload_openai_to_ollama(openai_payload: dict) -> dict:
+    """
+    Convert an embeddings request payload from OpenAI format to Ollama format.
+
+    Args:
+        openai_payload (dict): The original payload designed for OpenAI API usage.
+
+    Returns:
+        dict: A payload compatible with the Ollama API embeddings endpoint.
+    """
+    ollama_payload = {"model": openai_payload.get("model")}
+    input_value = openai_payload.get("input")
+
+    # Ollama expects 'input' as a list, and 'prompt' as a single string.
+    if isinstance(input_value, list):
+        ollama_payload["input"] = input_value
+        ollama_payload["prompt"] = "\n".join(str(x) for x in input_value)
+    else:
+        ollama_payload["input"] = [input_value]
+        ollama_payload["prompt"] = str(input_value)
+
+    # Optionally forward other fields if present
+    for optional_key in ("options", "truncate", "keep_alive"):
+        if optional_key in openai_payload:
+            ollama_payload[optional_key] = openai_payload[optional_key]
+
+    return ollama_payload
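
A hypothetical round trip through the new converter:

    convert_embedding_payload_openai_to_ollama(
        {"model": "nomic-embed-text", "input": ["a", "b"], "keep_alive": "5m"}
    )
    # -> {"model": "nomic-embed-text", "input": ["a", "b"],
    #     "prompt": "a\nb", "keep_alive": "5m"}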

View file

@@ -1,7 +1,6 @@
 import socketio
-import redis
-from redis import asyncio as aioredis
 from urllib.parse import urlparse
+from typing import Optional


 def parse_redis_service_url(redis_url):
@@ -18,23 +17,46 @@ def parse_redis_service_url(redis_url):
     }


-def get_redis_connection(redis_url, redis_sentinels, decode_responses=True):
-    if redis_sentinels:
-        redis_config = parse_redis_service_url(redis_url)
-        sentinel = redis.sentinel.Sentinel(
-            redis_sentinels,
-            port=redis_config["port"],
-            db=redis_config["db"],
-            username=redis_config["username"],
-            password=redis_config["password"],
-            decode_responses=decode_responses,
-        )
-
-        # Get a master connection from Sentinel
-        return sentinel.master_for(redis_config["service"])
+def get_redis_connection(
+    redis_url, redis_sentinels, async_mode=False, decode_responses=True
+):
+    if async_mode:
+        import redis.asyncio as redis
+
+        # If using sentinel in async mode
+        if redis_sentinels:
+            redis_config = parse_redis_service_url(redis_url)
+            sentinel = redis.sentinel.Sentinel(
+                redis_sentinels,
+                port=redis_config["port"],
+                db=redis_config["db"],
+                username=redis_config["username"],
+                password=redis_config["password"],
+                decode_responses=decode_responses,
+            )
+            return sentinel.master_for(redis_config["service"])
+        elif redis_url:
+            return redis.from_url(redis_url, decode_responses=decode_responses)
+        else:
+            return None
     else:
-        # Standard Redis connection
-        return redis.Redis.from_url(redis_url, decode_responses=decode_responses)
+        import redis
+
+        if redis_sentinels:
+            redis_config = parse_redis_service_url(redis_url)
+            sentinel = redis.sentinel.Sentinel(
+                redis_sentinels,
+                port=redis_config["port"],
+                db=redis_config["db"],
+                username=redis_config["username"],
+                password=redis_config["password"],
+                decode_responses=decode_responses,
+            )
+            return sentinel.master_for(redis_config["service"])
+        elif redis_url:
+            return redis.Redis.from_url(redis_url, decode_responses=decode_responses)
+        else:
+            return None


 def get_sentinels_from_env(sentinel_hosts_env, sentinel_port_env):
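
Hypothetical usage of the unified factory (REDIS_URL stands in for the configured URL; an async client for the task listener, a sync one elsewhere):

    redis_async = get_redis_connection(REDIS_URL, redis_sentinels=[], async_mode=True)
    redis_sync = get_redis_connection(REDIS_URL, redis_sentinels=[])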

View file

@@ -83,6 +83,7 @@ def convert_ollama_usage_to_openai(data: dict) -> dict:
 def convert_response_ollama_to_openai(ollama_response: dict) -> dict:
     model = ollama_response.get("model", "ollama")
     message_content = ollama_response.get("message", {}).get("content", "")
+    reasoning_content = ollama_response.get("message", {}).get("thinking", None)
     tool_calls = ollama_response.get("message", {}).get("tool_calls", None)
     openai_tool_calls = None

@@ -94,7 +95,7 @@ def convert_response_ollama_to_openai(ollama_response: dict) -> dict:
     usage = convert_ollama_usage_to_openai(data)

     response = openai_chat_completion_message_template(
-        model, message_content, openai_tool_calls, usage
+        model, message_content, reasoning_content, openai_tool_calls, usage
     )
     return response

@@ -105,6 +106,7 @@ async def convert_streaming_response_ollama_to_openai(ollama_streaming_response)
         model = data.get("model", "ollama")

         message_content = data.get("message", {}).get("content", None)
+        reasoning_content = data.get("message", {}).get("thinking", None)
         tool_calls = data.get("message", {}).get("tool_calls", None)
         openai_tool_calls = None

@@ -118,10 +120,71 @@ async def convert_streaming_response_ollama_to_openai(ollama_streaming_response)
             usage = convert_ollama_usage_to_openai(data)

         data = openai_chat_chunk_message_template(
-            model, message_content, openai_tool_calls, usage
+            model, message_content, reasoning_content, openai_tool_calls, usage
         )

         line = f"data: {json.dumps(data)}\n\n"
         yield line

     yield "data: [DONE]\n\n"
+
+
+def convert_embedding_response_ollama_to_openai(response) -> dict:
+    """
+    Convert the response from Ollama embeddings endpoint to the OpenAI-compatible format.
+
+    Args:
+        response (dict): The response from the Ollama API,
+            e.g. {"embedding": [...], "model": "..."}
+            or {"embeddings": [{"embedding": [...], "index": 0}, ...], "model": "..."}
+
+    Returns:
+        dict: Response adapted to OpenAI's embeddings API format.
+            e.g. {
+                "object": "list",
+                "data": [
+                    {"object": "embedding", "embedding": [...], "index": 0},
+                    ...
+                ],
+                "model": "...",
+            }
+    """
+    # Ollama batch-style output
+    if isinstance(response, dict) and "embeddings" in response:
+        openai_data = []
+        for i, emb in enumerate(response["embeddings"]):
+            openai_data.append(
+                {
+                    "object": "embedding",
+                    "embedding": emb.get("embedding"),
+                    "index": emb.get("index", i),
+                }
+            )
+        return {
+            "object": "list",
+            "data": openai_data,
+            "model": response.get("model"),
+        }
+    # Ollama single output
+    elif isinstance(response, dict) and "embedding" in response:
+        return {
+            "object": "list",
+            "data": [
+                {
+                    "object": "embedding",
+                    "embedding": response["embedding"],
+                    "index": 0,
+                }
+            ],
+            "model": response.get("model"),
+        }
+    # Already OpenAI-compatible?
+    elif (
+        isinstance(response, dict)
+        and "data" in response
+        and isinstance(response["data"], list)
+    ):
+        return response
+    # Fallback: return as is if unrecognized
+    return response
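
Illustrative input/output for the single-embedding branch:

    convert_embedding_response_ollama_to_openai(
        {"model": "nomic-embed-text", "embedding": [0.1, 0.2]}
    )
    # -> {"object": "list",
    #     "data": [{"object": "embedding", "embedding": [0.1, 0.2], "index": 0}],
    #     "model": "nomic-embed-text"}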

View file

@@ -479,7 +479,7 @@ async def get_tool_server_data(token: str, url: str) -> Dict[str, Any]:
         "specs": convert_openapi_to_tool_payload(res),
     }

-    log.info("Fetched data:", data)
+    log.info(f"Fetched data: {data}")
     return data

@@ -644,5 +644,5 @@ async def execute_tool_server(
     except Exception as err:
         error = str(err)
-        log.exception("API Request Error:", error)
+        log.exception(f"API Request Error: {error}")
         return {"error": error}

View file

@@ -7,14 +7,13 @@ python-socketio==5.13.0
 python-jose==3.4.0
 passlib[bcrypt]==1.7.4

-requests==2.32.3
+requests==2.32.4
 aiohttp==3.11.11
 async-timeout
 aiocache
 aiofiles
 starlette-compress==1.6.0

 sqlalchemy==2.0.38
 alembic==1.14.0
 peewee==3.18.1
@@ -96,7 +95,7 @@ authlib==1.4.1
 black==25.1.0
 langfuse==2.44.0
-youtube-transcript-api==1.0.3
+youtube-transcript-api==1.1.0
 pytube==15.0.0
 extract_msg

View file

@@ -14,7 +14,11 @@ if [[ "${WEB_LOADER_ENGINE,,}" == "playwright" ]]; then
     python -c "import nltk; nltk.download('punkt_tab')"
 fi

-KEY_FILE=.webui_secret_key
+if [ -n "${WEBUI_SECRET_KEY_FILE}" ]; then
+    KEY_FILE="${WEBUI_SECRET_KEY_FILE}"
+else
+    KEY_FILE=".webui_secret_key"
+fi

 PORT="${PORT:-8080}"
 HOST="${HOST:-0.0.0.0}"

View file

@@ -18,6 +18,10 @@ IF /I "%WEB_LOADER_ENGINE%" == "playwright" (
 )

 SET "KEY_FILE=.webui_secret_key"
+IF NOT "%WEBUI_SECRET_KEY_FILE%" == "" (
+    SET "KEY_FILE=%WEBUI_SECRET_KEY_FILE%"
+)

 IF "%PORT%"=="" SET PORT=8080
 IF "%HOST%"=="" SET HOST=0.0.0.0
 SET "WEBUI_SECRET_KEY=%WEBUI_SECRET_KEY%"

package-lock.json (generated, 77 changed lines)
View file

@@ -1,12 +1,12 @@
 {
   "name": "open-webui",
-  "version": "0.6.13",
+  "version": "0.6.14",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "open-webui",
-      "version": "0.6.13",
+      "version": "0.6.14",
       "dependencies": {
         "@azure/msal-browser": "^4.5.0",
         "@codemirror/lang-javascript": "^6.2.2",
@@ -31,7 +31,7 @@
         "@tiptap/starter-kit": "^2.10.0",
         "@xyflow/svelte": "^0.1.19",
         "async": "^3.2.5",
-        "bits-ui": "^0.19.7",
+        "bits-ui": "^0.21.15",
         "codemirror": "^6.0.1",
         "codemirror-lang-elixir": "^4.0.0",
         "codemirror-lang-hcl": "^0.1.0",
@@ -1201,26 +1201,29 @@
       }
     },
     "node_modules/@floating-ui/core": {
-      "version": "1.6.0",
-      "resolved": "https://registry.npmjs.org/@floating-ui/core/-/core-1.6.0.tgz",
-      "integrity": "sha512-PcF++MykgmTj3CIyOQbKA/hDzOAiqI3mhuoN44WRCopIs1sgoDoU4oty4Jtqaj/y3oDU6fnVSm4QG0a3t5i0+g==",
+      "version": "1.7.1",
+      "resolved": "https://registry.npmjs.org/@floating-ui/core/-/core-1.7.1.tgz",
+      "integrity": "sha512-azI0DrjMMfIug/ExbBaeDVJXcY0a7EPvPjb2xAJPa4HeimBX+Z18HK8QQR3jb6356SnDDdxx+hinMLcJEDdOjw==",
+      "license": "MIT",
       "dependencies": {
-        "@floating-ui/utils": "^0.2.1"
+        "@floating-ui/utils": "^0.2.9"
       }
     },
     "node_modules/@floating-ui/dom": {
-      "version": "1.6.3",
-      "resolved": "https://registry.npmjs.org/@floating-ui/dom/-/dom-1.6.3.tgz",
-      "integrity": "sha512-RnDthu3mzPlQ31Ss/BTwQ1zjzIhr3lk1gZB1OC56h/1vEtaXkESrOqL5fQVMfXpwGtRwX+YsZBdyHtJMQnkArw==",
+      "version": "1.7.1",
+      "resolved": "https://registry.npmjs.org/@floating-ui/dom/-/dom-1.7.1.tgz",
+      "integrity": "sha512-cwsmW/zyw5ltYTUeeYJ60CnQuPqmGwuGVhG9w0PRaRKkAyi38BT5CKrpIbb+jtahSwUl04cWzSx9ZOIxeS6RsQ==",
+      "license": "MIT",
       "dependencies": {
-        "@floating-ui/core": "^1.0.0",
-        "@floating-ui/utils": "^0.2.0"
+        "@floating-ui/core": "^1.7.1",
+        "@floating-ui/utils": "^0.2.9"
       }
     },
     "node_modules/@floating-ui/utils": {
-      "version": "0.2.1",
-      "resolved": "https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.1.tgz",
-      "integrity": "sha512-9TANp6GPoMtYzQdt54kfAyMmz1+osLlXdg2ENroU7zzrtflTLrrC/lgrIfaSe+Wu0b89GKccT7vxXA0MoAIO+Q=="
+      "version": "0.2.9",
+      "resolved": "https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.9.tgz",
+      "integrity": "sha512-MDWhGtE+eHw5JW7lq4qhc5yRLS11ERl1c7Z6Xd0a58DozHES6EnNNwUWbMiG4J9Cgj053Bhk8zvlhFYKVhULwg==",
+      "license": "MIT"
     },
     "node_modules/@gulpjs/to-absolute-glob": {
       "version": "4.0.0",
@@ -1750,9 +1753,10 @@
       }
     },
     "node_modules/@internationalized/date": {
-      "version": "3.5.2",
-      "resolved": "https://registry.npmjs.org/@internationalized/date/-/date-3.5.2.tgz",
-      "integrity": "sha512-vo1yOMUt2hzp63IutEaTUxROdvQg1qlMRsbCvbay2AK2Gai7wIgCyK5weEX3nHkiLgo4qCXHijFNC/ILhlRpOQ==",
+      "version": "3.8.2",
+      "resolved": "https://registry.npmjs.org/@internationalized/date/-/date-3.8.2.tgz",
+      "integrity": "sha512-/wENk7CbvLbkUvX1tu0mwq49CVkkWpkXubGel6birjRPyo6uQ4nQpnq5xZu823zRCwwn82zgHrvgF1vZyvmVgA==",
+      "license": "Apache-2.0",
       "dependencies": {
         "@swc/helpers": "^0.5.0"
       }
@@ -2032,9 +2036,10 @@
       "integrity": "sha512-CZWV/q6TTe8ta61cZXjfnnHsfWIdFhms03M9T7Cnd5y2mdpylJM0rF1qRq+wsQVRMLz1OYPVEBU9ph2Bx8cxrg=="
     },
     "node_modules/@melt-ui/svelte": {
"version": "0.76.0", "version": "0.76.2",
"resolved": "https://registry.npmjs.org/@melt-ui/svelte/-/svelte-0.76.0.tgz", "resolved": "https://registry.npmjs.org/@melt-ui/svelte/-/svelte-0.76.2.tgz",
"integrity": "sha512-X1ktxKujjLjOBt8LBvfckHGDMrkHWceRt1jdsUTf0EH76ikNPP1ofSoiV0IhlduDoCBV+2YchJ8kXCDfDXfC9Q==", "integrity": "sha512-7SbOa11tXUS95T3fReL+dwDs5FyJtCEqrqG3inRziDws346SYLsxOQ6HmX+4BkIsQh1R8U3XNa+EMmdMt38lMA==",
"license": "MIT",
"dependencies": { "dependencies": {
"@floating-ui/core": "^1.3.1", "@floating-ui/core": "^1.3.1",
"@floating-ui/dom": "^1.4.5", "@floating-ui/dom": "^1.4.5",
@ -2610,11 +2615,12 @@
} }
}, },
"node_modules/@swc/helpers": { "node_modules/@swc/helpers": {
"version": "0.5.7", "version": "0.5.17",
"resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.7.tgz", "resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.17.tgz",
"integrity": "sha512-BVvNZhx362+l2tSwSuyEUV4h7+jk9raNdoTSdLfwTshXJSaGmYKluGRJznziCI3KX02Z19DdsQrdfrpXAU3Hfg==", "integrity": "sha512-5IKx/Y13RsYd+sauPb2x+U/xZikHjolzfuDgTAl/Tdf3Q8rslRvC19NKDLgAJQ6wsqADk10ntlv08nPFw/gO/A==",
"license": "Apache-2.0",
"dependencies": { "dependencies": {
"tslib": "^2.4.0" "tslib": "^2.8.0"
} }
}, },
"node_modules/@tailwindcss/container-queries": { "node_modules/@tailwindcss/container-queries": {
@ -4381,16 +4387,20 @@
} }
}, },
"node_modules/bits-ui": { "node_modules/bits-ui": {
"version": "0.19.7", "version": "0.21.15",
"resolved": "https://registry.npmjs.org/bits-ui/-/bits-ui-0.19.7.tgz", "resolved": "https://registry.npmjs.org/bits-ui/-/bits-ui-0.21.15.tgz",
"integrity": "sha512-GHUpKvN7QyazhnZNkUy0lxg6W1M6KJHWSZ4a/UGCjPE6nQgk6vKbGysY67PkDtQMknZTZAzVoMj1Eic4IKeCRQ==", "integrity": "sha512-+m5WSpJnFdCcNdXSTIVC1WYBozipO03qRh03GFWgrdxoHiolCfwW71EYG4LPCWYPG6KcTZV0Cj6iHSiZ7cdKdg==",
"license": "MIT",
"dependencies": { "dependencies": {
"@internationalized/date": "^3.5.1", "@internationalized/date": "^3.5.1",
"@melt-ui/svelte": "0.76.0", "@melt-ui/svelte": "0.76.2",
"nanoid": "^5.0.5" "nanoid": "^5.0.5"
}, },
"funding": {
"url": "https://github.com/sponsors/huntabyte"
},
"peerDependencies": { "peerDependencies": {
"svelte": "^4.0.0" "svelte": "^4.0.0 || ^5.0.0-next.118"
} }
}, },
"node_modules/bl": { "node_modules/bl": {
@ -11842,9 +11852,10 @@
} }
}, },
"node_modules/tslib": { "node_modules/tslib": {
"version": "2.6.2", "version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.6.2.tgz", "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-AEYxH93jGFPn/a2iVAwW87VuUIkR1FVUKB77NwMF7nBTDkDrrT/Hpt/IrCJ0QXhW27jTBDcf5ZY7w6RiqTMw2Q==" "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
"license": "0BSD"
}, },
"node_modules/tunnel-agent": { "node_modules/tunnel-agent": {
"version": "0.6.0", "version": "0.6.0",

View file

@ -1,6 +1,6 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.13", "version": "0.6.14",
"private": true, "private": true,
"scripts": { "scripts": {
"dev": "npm run pyodide:fetch && vite dev --host", "dev": "npm run pyodide:fetch && vite dev --host",
@ -75,7 +75,7 @@
"@tiptap/starter-kit": "^2.10.0", "@tiptap/starter-kit": "^2.10.0",
"@xyflow/svelte": "^0.1.19", "@xyflow/svelte": "^0.1.19",
"async": "^3.2.5", "async": "^3.2.5",
"bits-ui": "^0.19.7", "bits-ui": "^0.21.15",
"codemirror": "^6.0.1", "codemirror": "^6.0.1",
"codemirror-lang-elixir": "^4.0.0", "codemirror-lang-elixir": "^4.0.0",
"codemirror-lang-hcl": "^0.1.0", "codemirror-lang-hcl": "^0.1.0",

View file

@ -7,7 +7,7 @@ authors = [
license = { file = "LICENSE" } license = { file = "LICENSE" }
dependencies = [ dependencies = [
"fastapi==0.115.7", "fastapi==0.115.7",
"uvicorn[standard]==0.34.0", "uvicorn[standard]==0.34.2",
"pydantic==2.10.6", "pydantic==2.10.6",
"python-multipart==0.0.20", "python-multipart==0.0.20",
@ -15,12 +15,11 @@ dependencies = [
"python-jose==3.4.0", "python-jose==3.4.0",
"passlib[bcrypt]==1.7.4", "passlib[bcrypt]==1.7.4",
"requests==2.32.3", "requests==2.32.4",
"aiohttp==3.11.11", "aiohttp==3.11.11",
"async-timeout", "async-timeout",
"aiocache", "aiocache",
"aiofiles", "aiofiles",
"starlette-compress==1.6.0", "starlette-compress==1.6.0",
"sqlalchemy==2.0.38", "sqlalchemy==2.0.38",
@ -83,13 +82,13 @@ dependencies = [
"openpyxl==3.1.5", "openpyxl==3.1.5",
"pyxlsb==1.0.10", "pyxlsb==1.0.10",
"xlrd==2.0.1", "xlrd==2.0.1",
"validators==0.34.0", "validators==0.35.0",
"psutil", "psutil",
"sentencepiece", "sentencepiece",
"soundfile==0.13.1", "soundfile==0.13.1",
"azure-ai-documentintelligence==1.0.0", "azure-ai-documentintelligence==1.0.2",
"pillow==11.1.0", "pillow==11.2.1",
"opencv-python-headless==4.11.0.86", "opencv-python-headless==4.11.0.86",
"rapidocr-onnxruntime==1.4.4", "rapidocr-onnxruntime==1.4.4",
"rank-bm25==0.2.2", "rank-bm25==0.2.2",
@ -103,7 +102,7 @@ dependencies = [
"black==25.1.0", "black==25.1.0",
"langfuse==2.44.0", "langfuse==2.44.0",
"youtube-transcript-api==1.0.3", "youtube-transcript-api==1.1.0",
"pytube==15.0.0", "pytube==15.0.0",
"extract_msg", "extract_msg",

View file

@ -12,7 +12,8 @@ const packages = [
'sympy', 'sympy',
'tiktoken', 'tiktoken',
'seaborn', 'seaborn',
'pytz' 'pytz',
'black'
]; ];
import { loadPyodide } from 'pyodide'; import { loadPyodide } from 'pyodide';

View file

@ -336,7 +336,7 @@ export const userSignOut = async () => {
}) })
.then(async (res) => { .then(async (res) => {
if (!res.ok) throw await res.json(); if (!res.ok) throw await res.json();
return res; return res.json();
}) })
.catch((err) => { .catch((err) => {
console.error(err); console.error(err);

View file

@ -194,15 +194,16 @@
<Modal size="sm" bind:show> <Modal size="sm" bind:show>
<div> <div>
<div class=" flex justify-between dark:text-gray-100 px-5 pt-4 pb-1.5"> <div class=" flex justify-between dark:text-gray-100 px-5 pt-4 pb-1.5">
<div class=" text-lg font-medium self-center font-primary"> <h1 class="text-lg font-medium self-center font-primary">
{#if edit} {#if edit}
{$i18n.t('Edit Connection')} {$i18n.t('Edit Connection')}
{:else} {:else}
{$i18n.t('Add Connection')} {$i18n.t('Add Connection')}
{/if} {/if}
</div> </h1>
<button <button
class="self-center" class="self-center"
aria-label={$i18n.t('Close modal')}
on:click={() => { on:click={() => {
show = false; show = false;
}} }}
@ -211,6 +212,7 @@
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20" viewBox="0 0 20 20"
fill="currentColor" fill="currentColor"
aria-hidden="true"
class="w-5 h-5" class="w-5 h-5"
> >
<path <path

View file

@ -26,7 +26,7 @@
<div class="px-5 pt-4 dark:text-gray-300 text-gray-700"> <div class="px-5 pt-4 dark:text-gray-300 text-gray-700">
<div class="flex justify-between items-start"> <div class="flex justify-between items-start">
<div class="text-xl font-semibold"> <div class="text-xl font-semibold">
{$i18n.t('Whats New in')} {$i18n.t("What's New in")}
{$WEBUI_NAME} {$WEBUI_NAME}
<Confetti x={[-1, -0.25]} y={[0, 0.5]} /> <Confetti x={[-1, -0.25]} y={[0, 0.5]} />
</div> </div>

View file

@ -1,6 +1,8 @@
<script> <script>
import { getContext, tick, onMount } from 'svelte'; import { getContext, tick, onMount } from 'svelte';
import { toast } from 'svelte-sonner'; import { goto } from '$app/navigation';
import { page } from '$app/stores';
import Leaderboard from './Evaluations/Leaderboard.svelte'; import Leaderboard from './Evaluations/Leaderboard.svelte';
import Feedbacks from './Evaluations/Feedbacks.svelte'; import Feedbacks from './Evaluations/Feedbacks.svelte';
@ -8,7 +10,24 @@
const i18n = getContext('i18n'); const i18n = getContext('i18n');
let selectedTab = 'leaderboard'; let selectedTab;
$: {
const pathParts = $page.url.pathname.split('/');
const tabFromPath = pathParts[pathParts.length - 1];
selectedTab = ['leaderboard', 'feedbacks'].includes(tabFromPath) ? tabFromPath : 'leaderboard';
}
$: if (selectedTab) {
// scroll to selectedTab
scrollToTab(selectedTab);
}
const scrollToTab = (tabId) => {
const tabElement = document.getElementById(tabId);
if (tabElement) {
tabElement.scrollIntoView({ behavior: 'smooth', block: 'nearest', inline: 'start' });
}
};
let loaded = false; let loaded = false;
let feedbacks = []; let feedbacks = [];
@ -27,6 +46,9 @@
} }
}); });
} }
// Scroll to the selected tab on mount
scrollToTab(selectedTab);
}); });
</script> </script>
@ -37,12 +59,13 @@
class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none" class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
> >
<button <button
id="leaderboard"
class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab ===
'leaderboard' 'leaderboard'
? '' ? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}" : ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => { on:click={() => {
selectedTab = 'leaderboard'; goto('/admin/evaluations/leaderboard');
}} }}
> >
<div class=" self-center mr-2"> <div class=" self-center mr-2">
@ -63,12 +86,13 @@
</button> </button>
<button <button
id="feedbacks"
class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab ===
'feedbacks' 'feedbacks'
? '' ? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}" : ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => { on:click={() => {
selectedTab = 'feedbacks'; goto('/admin/evaluations/feedbacks');
}} }}
> >
<div class=" self-center mr-2"> <div class=" self-center mr-2">

View file

@ -13,6 +13,9 @@
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import MagnifyingGlass from '$lib/components/icons/MagnifyingGlass.svelte'; import MagnifyingGlass from '$lib/components/icons/MagnifyingGlass.svelte';
import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
const EMBEDDING_MODEL = 'TaylorAI/bge-micro-v2'; const EMBEDDING_MODEL = 'TaylorAI/bge-micro-v2';
@ -30,6 +33,9 @@
let loadingLeaderboard = true; let loadingLeaderboard = true;
let debounceTimer; let debounceTimer;
let orderBy: string = 'rating'; // default sort column
let direction: 'asc' | 'desc' = 'desc'; // default sort order
type Feedback = { type Feedback = {
id: string; id: string;
data: { data: {
@ -53,6 +59,15 @@
lost: number; lost: number;
}; };
function setSortKey(key) {
if (orderBy === key) {
direction = direction === 'asc' ? 'desc' : 'asc';
} else {
orderBy = key;
direction = key === 'name' ? 'asc' : 'desc';
}
}
////////////////////// //////////////////////
// //
// Aggregate Level Modal // Aggregate Level Modal
@ -287,6 +302,28 @@
onMount(async () => { onMount(async () => {
rankHandler(); rankHandler();
}); });
$: sortedModels = [...rankedModels].sort((a, b) => {
let aVal, bVal;
if (orderBy === 'name') {
aVal = a.name;
bVal = b.name;
return direction === 'asc' ? aVal.localeCompare(bVal) : bVal.localeCompare(aVal);
} else if (orderBy === 'rating') {
aVal = a.rating === '-' ? -Infinity : a.rating;
bVal = b.rating === '-' ? -Infinity : b.rating;
return direction === 'asc' ? aVal - bVal : bVal - aVal;
} else if (orderBy === 'won') {
aVal = a.stats.won === '-' ? -Infinity : Number(a.stats.won);
bVal = b.stats.won === '-' ? -Infinity : Number(b.stats.won);
return direction === 'asc' ? aVal - bVal : bVal - aVal;
} else if (orderBy === 'lost') {
aVal = a.stats.lost === '-' ? -Infinity : Number(a.stats.lost);
bVal = b.stats.lost === '-' ? -Infinity : Number(b.stats.lost);
return direction === 'asc' ? aVal - bVal : bVal - aVal;
}
return 0;
});
</script> </script>
<ModelModal <ModelModal
@ -352,25 +389,120 @@
class="text-xs text-gray-700 uppercase bg-gray-50 dark:bg-gray-850 dark:text-gray-400 -translate-y-0.5" class="text-xs text-gray-700 uppercase bg-gray-50 dark:bg-gray-850 dark:text-gray-400 -translate-y-0.5"
> >
<tr class=""> <tr class="">
<th scope="col" class="px-3 py-1.5 cursor-pointer select-none w-3"> <th
{$i18n.t('RK')} scope="col"
class="px-3 py-1.5 cursor-pointer select-none w-3"
on:click={() => setSortKey('rating')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('RK')}
{#if orderBy === 'rating'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th> </th>
<th scope="col" class="px-3 py-1.5 cursor-pointer select-none"> <th
{$i18n.t('Model')} scope="col"
class="px-3 py-1.5 cursor-pointer select-none"
on:click={() => setSortKey('name')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('Model')}
{#if orderBy === 'name'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th> </th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-fit"> <th
{$i18n.t('Rating')} scope="col"
class="px-3 py-1.5 text-right cursor-pointer select-none w-fit"
on:click={() => setSortKey('rating')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('Rating')}
{#if orderBy === 'rating'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th> </th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-5"> <th
{$i18n.t('Won')} scope="col"
class="px-3 py-1.5 text-right cursor-pointer select-none w-5"
on:click={() => setSortKey('won')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('Won')}
{#if orderBy === 'won'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th> </th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-5"> <th
{$i18n.t('Lost')} scope="col"
class="px-3 py-1.5 text-right cursor-pointer select-none w-5"
on:click={() => setSortKey('lost')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('Lost')}
{#if orderBy === 'lost'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th> </th>
</tr> </tr>
</thead> </thead>
<tbody class=""> <tbody class="">
{#each rankedModels as model, modelIdx (model.id)} {#each sortedModels as model, modelIdx (model.id)}
<tr <tr
class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs group cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 transition" class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs group cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 transition"
on:click={() => openFeedbackModal(model)} on:click={() => openFeedbackModal(model)}

View file

@ -51,6 +51,18 @@
: 'general'; : 'general';
} }
$: if (selectedTab) {
// scroll to selectedTab
scrollToTab(selectedTab);
}
const scrollToTab = (tabId) => {
const tabElement = document.getElementById(tabId);
if (tabElement) {
tabElement.scrollIntoView({ behavior: 'smooth', block: 'nearest', inline: 'start' });
}
};
onMount(() => { onMount(() => {
const containerElement = document.getElementById('admin-settings-tabs-container'); const containerElement = document.getElementById('admin-settings-tabs-container');
@ -62,6 +74,9 @@
} }
}); });
} }
// Scroll to the selected tab on mount
scrollToTab(selectedTab);
}); });
</script> </script>
@ -71,6 +86,7 @@
class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none" class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
> >
<button <button
id="general"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 lg:flex-none flex text-right transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 lg:flex-none flex text-right transition {selectedTab ===
'general' 'general'
? '' ? ''
@ -97,6 +113,7 @@
</button> </button>
<button <button
id="connections"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'connections' 'connections'
? '' ? ''
@ -121,6 +138,7 @@
</button> </button>
<button <button
id="models"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'models' 'models'
? '' ? ''
@ -147,6 +165,7 @@
</button> </button>
<button <button
id="evaluations"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'evaluations' 'evaluations'
? '' ? ''
@ -162,6 +181,7 @@
</button> </button>
<button <button
id="tools"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'tools' 'tools'
? '' ? ''
@ -188,6 +208,7 @@
</button> </button>
<button <button
id="documents"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'documents' 'documents'
? '' ? ''
@ -218,6 +239,7 @@
</button> </button>
<button <button
id="web"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'web' 'web'
? '' ? ''
@ -242,6 +264,7 @@
</button> </button>
<button <button
id="code-execution"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'code-execution' 'code-execution'
? '' ? ''
@ -268,6 +291,7 @@
</button> </button>
<button <button
id="interface"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'interface' 'interface'
? '' ? ''
@ -294,6 +318,7 @@
</button> </button>
<button <button
id="audio"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'audio' 'audio'
? '' ? ''
@ -321,6 +346,7 @@
</button> </button>
<button <button
id="images"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'images' 'images'
? '' ? ''
@ -347,6 +373,7 @@
</button> </button>
<button <button
id="pipelines"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'pipelines' 'pipelines'
? '' ? ''
@ -377,6 +404,7 @@
</button> </button>
<button <button
id="db"
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-left transition {selectedTab ===
'db' 'db'
? '' ? ''

View file

@ -194,17 +194,20 @@
await embeddingModelUpdateHandler(); await embeddingModelUpdateHandler();
} }
RAGConfig.ALLOWED_FILE_EXTENSIONS = (RAGConfig?.ALLOWED_FILE_EXTENSIONS ?? '') const res = await updateRAGConfig(localStorage.token, {
.split(',') ...RAGConfig,
.map((ext) => ext.trim()) ALLOWED_FILE_EXTENSIONS: RAGConfig.ALLOWED_FILE_EXTENSIONS.split(',')
.filter((ext) => ext !== ''); .map((ext) => ext.trim())
.filter((ext) => ext !== ''),
RAGConfig.DATALAB_MARKER_LANGS = RAGConfig.DATALAB_MARKER_LANGS.split(',') DATALAB_MARKER_LANGS: RAGConfig.DATALAB_MARKER_LANGS.split(',')
.map((code) => code.trim()) .map((code) => code.trim())
.filter((code) => code !== '') .filter((code) => code !== '')
.join(', '); .join(', '),
DOCLING_PICTURE_DESCRIPTION_LOCAL: JSON.parse(
const res = await updateRAGConfig(localStorage.token, RAGConfig); RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL || '{}'
),
DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}')
});
dispatch('save'); dispatch('save');
}; };
@ -232,6 +235,18 @@
const config = await getRAGConfig(localStorage.token); const config = await getRAGConfig(localStorage.token);
config.ALLOWED_FILE_EXTENSIONS = (config?.ALLOWED_FILE_EXTENSIONS ?? []).join(', '); config.ALLOWED_FILE_EXTENSIONS = (config?.ALLOWED_FILE_EXTENSIONS ?? []).join(', ');
config.DOCLING_PICTURE_DESCRIPTION_LOCAL = JSON.stringify(
config.DOCLING_PICTURE_DESCRIPTION_LOCAL ?? {},
null,
2
);
config.DOCLING_PICTURE_DESCRIPTION_API = JSON.stringify(
config.DOCLING_PICTURE_DESCRIPTION_API ?? {},
null,
2
);
RAGConfig = config; RAGConfig = config;
}); });
</script> </script>
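The Docling picture-description options round-trip through their textareas as JSON text: stringified with an empty-object default on load, parsed back on save. Note that JSON.parse(value || '{}') only guards the empty-string case; malformed JSON typed by the user would still throw inside the save handler. The same round-trip in Python, with a hypothetical key purely for illustration:

import json

raw = json.dumps({"repo_id": "example/vlm"}, indent=2)  # load: dict -> textarea text
opts = json.loads(raw or "{}")                          # save: text -> dict; raises on invalid JSON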
@ -510,6 +525,71 @@
</div> </div>
</div> </div>
</div> </div>
{#if RAGConfig.DOCLING_DO_PICTURE_DESCRIPTION}
<div class="flex justify-between w-full mt-2">
<div class="self-center text-xs font-medium">
<Tooltip content={''} placement="top-start">
{$i18n.t('Picture Description Mode')}
</Tooltip>
</div>
<div class="">
<select
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE}
>
<option value="">{$i18n.t('Default')}</option>
<option value="local">{$i18n.t('Local')}</option>
<option value="api">{$i18n.t('API')}</option>
</select>
</div>
</div>
{#if RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE === 'local'}
<div class="flex flex-col gap-2 mt-2">
<div class=" flex flex-col w-full justify-between">
<div class=" mb-1 text-xs font-medium">
{$i18n.t('Picture Description Local Config')}
</div>
<div class="flex w-full items-center relative">
<Tooltip
content={$i18n.t(
'Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.'
)}
placement="top-start"
className="w-full"
>
<Textarea
bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL}
placeholder={$i18n.t('Enter Config in JSON format')}
/>
</Tooltip>
</div>
</div>
</div>
{:else if RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE === 'api'}
<div class="flex flex-col gap-2 mt-2">
<div class=" flex flex-col w-full justify-between">
<div class=" mb-1 text-xs font-medium">
{$i18n.t('Picture Description API Config')}
</div>
<div class="flex w-full items-center relative">
<Tooltip
content={$i18n.t(
'API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.'
)}
placement="top-start"
className="w-full"
>
<Textarea
bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_API}
placeholder={$i18n.t('Enter Config in JSON format')}
/>
</Tooltip>
</div>
</div>
</div>
{/if}
{/if}
{:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'document_intelligence'} {:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'document_intelligence'}
<div class="my-0.5 flex gap-2 pr-2"> <div class="my-0.5 flex gap-2 pr-2">
<input <input
@ -830,12 +910,7 @@
<div class=" mb-2.5 flex w-full justify-between"> <div class=" mb-2.5 flex w-full justify-between">
<div class=" self-center text-xs font-medium">{$i18n.t('Hybrid Search')}</div> <div class=" self-center text-xs font-medium">{$i18n.t('Hybrid Search')}</div>
<div class="flex items-center relative"> <div class="flex items-center relative">
<Switch <Switch bind:state={RAGConfig.ENABLE_RAG_HYBRID_SEARCH} />
bind:state={RAGConfig.ENABLE_RAG_HYBRID_SEARCH}
on:change={() => {
submitHandler();
}}
/>
</div> </div>
</div> </div>

View file

@ -79,6 +79,7 @@
const updateHandler = async () => { const updateHandler = async () => {
webhookUrl = await updateWebhookUrl(localStorage.token, webhookUrl); webhookUrl = await updateWebhookUrl(localStorage.token, webhookUrl);
const res = await updateAdminConfig(localStorage.token, adminConfig); const res = await updateAdminConfig(localStorage.token, adminConfig);
await updateLdapConfig(localStorage.token, ENABLE_LDAP);
await updateLdapServerHandler(); await updateLdapServerHandler();
if (res) { if (res) {
@ -311,7 +312,6 @@
{$i18n.t('Pending User Overlay Title')} {$i18n.t('Pending User Overlay Title')}
</div> </div>
<Textarea <Textarea
rows={2}
placeholder={$i18n.t( placeholder={$i18n.t(
'Enter a title for the pending user info overlay. Leave empty for default.' 'Enter a title for the pending user info overlay. Leave empty for default.'
)} )}
@ -401,12 +401,7 @@
<div class=" font-medium">{$i18n.t('LDAP')}</div> <div class=" font-medium">{$i18n.t('LDAP')}</div>
<div class="mt-1"> <div class="mt-1">
<Switch <Switch bind:state={ENABLE_LDAP} />
bind:state={ENABLE_LDAP}
on:change={async () => {
updateLdapConfig(localStorage.token, ENABLE_LDAP);
}}
/>
</div> </div>
</div> </div>

View file

@ -75,6 +75,8 @@
}; };
const init = async () => { const init = async () => {
models = null;
workspaceModels = await getBaseModels(localStorage.token); workspaceModels = await getBaseModels(localStorage.token);
baseModels = await getModels(localStorage.token, null, true); baseModels = await getModels(localStorage.token, null, true);
@ -126,6 +128,7 @@
toast.success($i18n.t('Model updated successfully')); toast.success($i18n.t('Model updated successfully'));
} }
} }
await init();
_models.set( _models.set(
await getModels( await getModels(
@ -133,7 +136,6 @@
$config?.features?.enable_direct_connections && ($settings?.directConnections ?? null) $config?.features?.enable_direct_connections && ($settings?.directConnections ?? null)
) )
); );
await init();
}; };
const toggleModelHandler = async (model) => { const toggleModelHandler = async (model) => {

View file

@ -45,7 +45,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[160px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm" class="w-full max-w-[170px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="start" align="start"

View file

@ -446,15 +446,54 @@
</div> </div>
</div> </div>
{:else if webConfig.WEB_SEARCH_ENGINE === 'perplexity'} {:else if webConfig.WEB_SEARCH_ENGINE === 'perplexity'}
<div> <div class="mb-2.5 flex w-full flex-col">
<div class=" self-center text-xs font-medium mb-1"> <div>
{$i18n.t('Perplexity API Key')} <div class=" self-center text-xs font-medium mb-1">
</div> {$i18n.t('Perplexity API Key')}
</div>
<SensitiveInput <SensitiveInput
placeholder={$i18n.t('Enter Perplexity API Key')} placeholder={$i18n.t('Enter Perplexity API Key')}
bind:value={webConfig.PERPLEXITY_API_KEY} bind:value={webConfig.PERPLEXITY_API_KEY}
/> />
</div>
</div>
<div class="mb-2.5 flex w-full flex-col">
<div>
<div class="self-center text-xs font-medium mb-1">
{$i18n.t('Perplexity Model')}
</div>
<input
list="perplexity-model-list"
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
bind:value={webConfig.PERPLEXITY_MODEL}
/>
<datalist id="perplexity-model-list">
<option value="sonar">Sonar</option>
<option value="sonar-pro">Sonar Pro</option>
<option value="sonar-reasoning">Sonar Reasoning</option>
<option value="sonar-reasoning-pro">Sonar Reasoning Pro</option>
<option value="sonar-deep-research">Sonar Deep Research</option>
</datalist>
</div>
</div>
<div class="mb-2.5 flex w-full flex-col">
<div>
<div class=" self-center text-xs font-medium mb-1">
{$i18n.t('Perplexity Search Context Usage')}
</div>
<select
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
bind:value={webConfig.PERPLEXITY_SEARCH_CONTEXT_USAGE}
>
<option value="low">Low</option>
<option value="medium">Medium</option>
<option value="high">High</option>
</select>
</div>
</div> </div>
{:else if webConfig.WEB_SEARCH_ENGINE === 'sougou'} {:else if webConfig.WEB_SEARCH_ENGINE === 'sougou'}
<div class="mb-2.5 flex w-full flex-col"> <div class="mb-2.5 flex w-full flex-col">

View file

@ -4,13 +4,32 @@
import { goto } from '$app/navigation'; import { goto } from '$app/navigation';
import { user } from '$lib/stores'; import { user } from '$lib/stores';
import { page } from '$app/stores';
import UserList from './Users/UserList.svelte'; import UserList from './Users/UserList.svelte';
import Groups from './Users/Groups.svelte'; import Groups from './Users/Groups.svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
let selectedTab = 'overview'; let selectedTab;
$: {
const pathParts = $page.url.pathname.split('/');
const tabFromPath = pathParts[pathParts.length - 1];
selectedTab = ['overview', 'groups'].includes(tabFromPath) ? tabFromPath : 'overview';
}
$: if (selectedTab) {
// scroll to selectedTab
scrollToTab(selectedTab);
}
const scrollToTab = (tabId) => {
const tabElement = document.getElementById(tabId);
if (tabElement) {
tabElement.scrollIntoView({ behavior: 'smooth', block: 'nearest', inline: 'start' });
}
};
let loaded = false; let loaded = false;
onMount(async () => { onMount(async () => {
@ -30,6 +49,9 @@
} }
}); });
} }
// Scroll to the selected tab on mount
scrollToTab(selectedTab);
}); });
</script> </script>
@ -39,12 +61,13 @@
class=" flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none" class=" flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
> >
<button <button
id="overview"
class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab ===
'overview' 'overview'
? '' ? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}" : ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => { on:click={() => {
selectedTab = 'overview'; goto('/admin/users/overview');
}} }}
> >
<div class=" self-center mr-2"> <div class=" self-center mr-2">
@ -63,12 +86,13 @@
</button> </button>
<button <button
id="groups"
class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab === class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab ===
'groups' 'groups'
? '' ? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}" : ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => { on:click={() => {
selectedTab = 'groups'; goto('/admin/users/groups');
}} }}
> >
<div class=" self-center mr-2"> <div class=" self-center mr-2">

View file

@ -101,7 +101,7 @@
<div class="flex-1"> <div class="flex-1">
<select <select
class="w-full rounded-sm text-sm bg-transparent disabled:text-gray-500 dark:disabled:text-gray-500 outline-hidden" class="w-full dark:bg-gray-900 rounded-sm text-sm bg-transparent disabled:text-gray-500 dark:disabled:text-gray-500 outline-hidden"
bind:value={_user.role} bind:value={_user.role}
disabled={_user.id == sessionUser.id} disabled={_user.id == sessionUser.id}
required required

View file

@ -432,24 +432,19 @@
} }
}; };
let pageSubscribe = null;
onMount(async () => { onMount(async () => {
loading = true; loading = true;
console.log('mounted'); console.log('mounted');
window.addEventListener('message', onMessageHandler); window.addEventListener('message', onMessageHandler);
$socket?.on('chat-events', chatEventHandler); $socket?.on('chat-events', chatEventHandler);
if (!$chatId) { pageSubscribe = page.subscribe(async (p) => {
chatIdUnsubscriber = chatId.subscribe(async (value) => { if (p.url.pathname === '/') {
if (!value) { await tick();
await tick(); // Wait for DOM updates initNewChat();
await initNewChat();
}
});
} else {
if ($temporaryChatEnabled) {
await goto('/');
} }
} });
if (localStorage.getItem(`chat-input${chatIdProp ? `-${chatIdProp}` : ''}`)) { if (localStorage.getItem(`chat-input${chatIdProp ? `-${chatIdProp}` : ''}`)) {
prompt = ''; prompt = '';
@ -509,6 +504,7 @@
}); });
onDestroy(() => { onDestroy(() => {
pageSubscribe();
chatIdUnsubscriber?.(); chatIdUnsubscriber?.();
window.removeEventListener('message', onMessageHandler); window.removeEventListener('message', onMessageHandler);
$socket?.off('chat-events', chatEventHandler); $socket?.off('chat-events', chatEventHandler);
@ -1636,9 +1632,6 @@
params: { params: {
...$settings?.params, ...$settings?.params,
...params, ...params,
format: $settings.requestFormat ?? undefined,
keep_alive: $settings.keepAlive ?? undefined,
stop: stop:
(params?.stop ?? $settings?.params?.stop ?? undefined) (params?.stop ?? $settings?.params?.stop ?? undefined)
? (params?.stop.split(',').map((token) => token.trim()) ?? $settings.params.stop).map( ? (params?.stop.split(',').map((token) => token.trim()) ?? $settings.params.stop).map(

View file

@ -69,10 +69,10 @@
{#if $temporaryChatEnabled} {#if $temporaryChatEnabled}
<Tooltip <Tooltip
content={$i18n.t('This chat won’t appear in history and your messages will not be saved.')} content={$i18n.t('This chat won’t appear in history and your messages will not be saved.')}
className="w-full flex justify-center mb-0.5" className="w-full flex justify-start mb-0.5"
placement="top" placement="top"
> >
<div class="flex items-center gap-2 text-gray-500 font-medium text-lg my-2 w-fit"> <div class="flex items-center gap-2 text-gray-500 font-medium text-lg mt-2 w-fit">
<EyeSlash strokeWidth="2.5" className="size-5" />{$i18n.t('Temporary Chat')} <EyeSlash strokeWidth="2.5" className="size-5" />{$i18n.t('Temporary Chat')}
</div> </div>
</Tooltip> </Tooltip>

View file

@ -190,7 +190,7 @@
</div> </div>
{#if selectedId} {#if selectedId}
<hr class="dark:border-gray-800 my-1 w-full" /> <hr class="border-gray-50 dark:border-gray-800 my-1 w-full" />
<div class="my-2 text-xs"> <div class="my-2 text-xs">
{#if !loading} {#if !loading}

View file

@ -205,8 +205,10 @@
return; return;
} }
const mimeTypes = ['audio/webm; codecs=opus', 'audio/mp4'];
mediaRecorder = new MediaRecorder(stream, { mediaRecorder = new MediaRecorder(stream, {
mimeType: 'audio/webm; codecs=opus' mimeType: mimeTypes.find((type) => MediaRecorder.isTypeSupported(type))
}); });
mediaRecorder.onstart = () => { mediaRecorder.onstart = () => {

View file

@ -188,9 +188,8 @@
</div> </div>
<div slot="content"> <div slot="content">
<div class="flex text-xs font-medium flex-wrap"> <div class="flex text-xs font-medium flex-wrap">
{#each citations as citation, idx} {#each citations.slice(2) as citation, idx}
<button <button
id={`source-${id}-${idx + 1}`}
class="no-toggle outline-hidden flex dark:text-gray-300 p-1 bg-gray-50 hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 transition rounded-xl max-w-96" class="no-toggle outline-hidden flex dark:text-gray-300 p-1 bg-gray-50 hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 transition rounded-xl max-w-96"
on:click={() => { on:click={() => {
showCitationModal = true; showCitationModal = true;
@ -199,7 +198,7 @@
> >
{#if citations.every((c) => c.distances !== undefined)} {#if citations.every((c) => c.distances !== undefined)}
<div class="bg-gray-50 dark:bg-gray-800 rounded-full size-4"> <div class="bg-gray-50 dark:bg-gray-800 rounded-full size-4">
{idx + 1} {idx + 3}
</div> </div>
{/if} {/if}
<div class="flex-1 mx-1 truncate"> <div class="flex-1 mx-1 truncate">

View file

@ -28,7 +28,7 @@
<!-- svelte-ignore a11y-media-has-caption --> <!-- svelte-ignore a11y-media-has-caption -->
<video <video
class="w-full my-2" class="w-full my-2"
src={videoSrc} src={videoSrc.replaceAll('&amp;', '&')}
title="Video player" title="Video player"
frameborder="0" frameborder="0"
referrerpolicy="strict-origin-when-cross-origin" referrerpolicy="strict-origin-when-cross-origin"
@ -38,6 +38,20 @@
{:else} {:else}
{token.text} {token.text}
{/if} {/if}
{:else if html && html.includes('<audio')}
{@const audio = html.match(/<audio[^>]*>([\s\S]*?)<\/audio>/)}
{@const audioSrc = audio && audio[1]}
{#if audioSrc}
<!-- svelte-ignore a11y-media-has-caption -->
<audio
class="w-full my-2"
src={audioSrc.replaceAll('&amp;', '&')}
title="Audio player"
controls
></audio>
{:else}
{token.text}
{/if}
{:else if token.text && token.text.match(/<iframe\s+[^>]*src="https:\/\/www\.youtube\.com\/embed\/([a-zA-Z0-9_-]{11})(?:\?[^"]*)?"[^>]*><\/iframe>/)} {:else if token.text && token.text.match(/<iframe\s+[^>]*src="https:\/\/www\.youtube\.com\/embed\/([a-zA-Z0-9_-]{11})(?:\?[^"]*)?"[^>]*><\/iframe>/)}
{@const match = token.text.match( {@const match = token.text.match(
/<iframe\s+[^>]*src="https:\/\/www\.youtube\.com\/embed\/([a-zA-Z0-9_-]{11})(?:\?[^"]*)?"[^>]*><\/iframe>/ /<iframe\s+[^>]*src="https:\/\/www\.youtube\.com\/embed\/([a-zA-Z0-9_-]{11})(?:\?[^"]*)?"[^>]*><\/iframe>/

View file

@ -137,8 +137,8 @@
</div> </div>
<div class="w-full flex justify-center"> <div class="w-full flex justify-center">
<div class=" relative w-fit"> <div class=" relative w-fit overflow-x-auto scrollbar-none">
<div class="mt-1.5 w-fit flex gap-1 pb-5"> <div class="mt-1.5 w-fit flex gap-1 pb-2">
<!-- 1-10 scale --> <!-- 1-10 scale -->
{#each Array.from({ length: 10 }).map((_, i) => i + 1) as rating} {#each Array.from({ length: 10 }).map((_, i) => i + 1) as rating}
<button <button
@ -156,7 +156,7 @@
{/each} {/each}
</div> </div>
<div class="absolute bottom-0 left-0 right-0 flex justify-between text-xs"> <div class="sticky top-0 bottom-0 left-0 right-0 flex justify-between text-xs">
<div> <div>
1 - {$i18n.t('Awful')} 1 - {$i18n.t('Awful')}
</div> </div>

View file

@ -601,7 +601,7 @@
id="message-{message.id}" id="message-{message.id}"
dir={$settings.chatDirection} dir={$settings.chatDirection}
> >
<div class={`shrink-0 ltr:mr-3 rtl:ml-3`}> <div class={`shrink-0 ltr:mr-3 rtl:ml-3 hidden @lg:flex `}>
<ProfileImage <ProfileImage
src={model?.info?.meta?.profile_image_url ?? src={model?.info?.meta?.profile_image_url ??
($i18n.language === 'dg-DG' ? `/doge.png` : `${WEBUI_BASE_URL}/static/favicon.png`)} ($i18n.language === 'dg-DG' ? `/doge.png` : `${WEBUI_BASE_URL}/static/favicon.png`)}
@ -869,12 +869,14 @@
{#if siblings.length > 1} {#if siblings.length > 1}
<div class="flex self-center min-w-fit" dir="ltr"> <div class="flex self-center min-w-fit" dir="ltr">
<button <button
aria-label={$i18n.t('Previous message')}
class="self-center p-1 hover:bg-black/5 dark:hover:bg-white/5 dark:hover:text-white hover:text-black rounded-md transition" class="self-center p-1 hover:bg-black/5 dark:hover:bg-white/5 dark:hover:text-white hover:text-black rounded-md transition"
on:click={() => { on:click={() => {
showPreviousMessage(message); showPreviousMessage(message);
}} }}
> >
<svg <svg
aria-hidden="true"
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
viewBox="0 0 24 24" viewBox="0 0 24 24"
@ -940,10 +942,12 @@
on:click={() => { on:click={() => {
showNextMessage(message); showNextMessage(message);
}} }}
aria-label={$i18n.t('Next message')}
> >
<svg <svg
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
aria-hidden="true"
viewBox="0 0 24 24" viewBox="0 0 24 24"
stroke="currentColor" stroke="currentColor"
stroke-width="2.5" stroke-width="2.5"
@ -964,6 +968,7 @@
{#if $user?.role === 'user' ? ($user?.permissions?.chat?.edit ?? true) : true} {#if $user?.role === 'user' ? ($user?.permissions?.chat?.edit ?? true) : true}
<Tooltip content={$i18n.t('Edit')} placement="bottom"> <Tooltip content={$i18n.t('Edit')} placement="bottom">
<button <button
aria-label={$i18n.t('Edit')}
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition" : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition"
@ -976,6 +981,7 @@
fill="none" fill="none"
viewBox="0 0 24 24" viewBox="0 0 24 24"
stroke-width="2.3" stroke-width="2.3"
aria-hidden="true"
stroke="currentColor" stroke="currentColor"
class="w-4 h-4" class="w-4 h-4"
> >
@ -992,6 +998,7 @@
<Tooltip content={$i18n.t('Copy')} placement="bottom"> <Tooltip content={$i18n.t('Copy')} placement="bottom">
<button <button
aria-label={$i18n.t('Copy')}
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition copy-response-button" : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition copy-response-button"
@ -1002,6 +1009,7 @@
<svg <svg
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
aria-hidden="true"
viewBox="0 0 24 24" viewBox="0 0 24 24"
stroke-width="2.3" stroke-width="2.3"
stroke="currentColor" stroke="currentColor"
@ -1019,6 +1027,7 @@
{#if $user?.role === 'admin' || ($user?.permissions?.chat?.tts ?? true)} {#if $user?.role === 'admin' || ($user?.permissions?.chat?.tts ?? true)}
<Tooltip content={$i18n.t('Read Aloud')} placement="bottom"> <Tooltip content={$i18n.t('Read Aloud')} placement="bottom">
<button <button
aria-label={$i18n.t('Read Aloud')}
id="speak-button-{message.id}" id="speak-button-{message.id}"
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
@ -1034,6 +1043,7 @@
class=" w-4 h-4" class=" w-4 h-4"
fill="currentColor" fill="currentColor"
viewBox="0 0 24 24" viewBox="0 0 24 24"
aria-hidden="true"
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
> >
<style> <style>
@ -1066,6 +1076,7 @@
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
viewBox="0 0 24 24" viewBox="0 0 24 24"
aria-hidden="true"
stroke-width="2.3" stroke-width="2.3"
stroke="currentColor" stroke="currentColor"
class="w-4 h-4" class="w-4 h-4"
@ -1081,6 +1092,7 @@
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
viewBox="0 0 24 24" viewBox="0 0 24 24"
aria-hidden="true"
stroke-width="2.3" stroke-width="2.3"
stroke="currentColor" stroke="currentColor"
class="w-4 h-4" class="w-4 h-4"
@ -1099,6 +1111,7 @@
{#if $config?.features.enable_image_generation && ($user?.role === 'admin' || $user?.permissions?.features?.image_generation) && !readOnly} {#if $config?.features.enable_image_generation && ($user?.role === 'admin' || $user?.permissions?.features?.image_generation) && !readOnly}
<Tooltip content={$i18n.t('Generate Image')} placement="bottom"> <Tooltip content={$i18n.t('Generate Image')} placement="bottom">
<button <button
aria-label={$i18n.t('Generate Image')}
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition" : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition"
@ -1110,6 +1123,7 @@
> >
{#if generatingImage} {#if generatingImage}
<svg <svg
aria-hidden="true"
class=" w-4 h-4" class=" w-4 h-4"
fill="currentColor" fill="currentColor"
viewBox="0 0 24 24" viewBox="0 0 24 24"
@ -1144,6 +1158,7 @@
<svg <svg
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
aria-hidden="true"
viewBox="0 0 24 24" viewBox="0 0 24 24"
stroke-width="2.3" stroke-width="2.3"
stroke="currentColor" stroke="currentColor"
@ -1176,6 +1191,7 @@
placement="bottom" placement="bottom"
> >
<button <button
aria-hidden="true"
class=" {isLastMessage class=" {isLastMessage
? 'visible' ? 'visible'
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition whitespace-pre-wrap" : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition whitespace-pre-wrap"
@ -1185,6 +1201,7 @@
id="info-{message.id}" id="info-{message.id}"
> >
<svg <svg
aria-hidden="true"
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
viewBox="0 0 24 24" viewBox="0 0 24 24"
@ -1206,6 +1223,7 @@
{#if !$temporaryChatEnabled && ($config?.features.enable_message_rating ?? true)} {#if !$temporaryChatEnabled && ($config?.features.enable_message_rating ?? true)}
<Tooltip content={$i18n.t('Good Response')} placement="bottom"> <Tooltip content={$i18n.t('Good Response')} placement="bottom">
<button <button
aria-label={$i18n.t('Good Response')}
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg {( : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg {(
@ -1224,6 +1242,7 @@
}} }}
> >
<svg <svg
aria-hidden="true"
stroke="currentColor" stroke="currentColor"
fill="none" fill="none"
stroke-width="2.3" stroke-width="2.3"
@ -1242,6 +1261,7 @@
<Tooltip content={$i18n.t('Bad Response')} placement="bottom"> <Tooltip content={$i18n.t('Bad Response')} placement="bottom">
<button <button
aria-label={$i18n.t('Bad Response')}
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg {( : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg {(
@ -1260,6 +1280,7 @@
}} }}
> >
<svg <svg
aria-hidden="true"
stroke="currentColor" stroke="currentColor"
fill="none" fill="none"
stroke-width="2.3" stroke-width="2.3"
@ -1280,6 +1301,7 @@
{#if isLastMessage} {#if isLastMessage}
<Tooltip content={$i18n.t('Continue Response')} placement="bottom"> <Tooltip content={$i18n.t('Continue Response')} placement="bottom">
<button <button
aria-label={$i18n.t('Continue Response')}
type="button" type="button"
id="continue-response-button" id="continue-response-button"
class="{isLastMessage class="{isLastMessage
@ -1290,6 +1312,7 @@
}} }}
> >
<svg <svg
aria-hidden="true"
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
viewBox="0 0 24 24" viewBox="0 0 24 24"
@ -1315,6 +1338,7 @@
<Tooltip content={$i18n.t('Regenerate')} placement="bottom"> <Tooltip content={$i18n.t('Regenerate')} placement="bottom">
<button <button
type="button" type="button"
aria-label={$i18n.t('Regenerate')}
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition regenerate-response-button" : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition regenerate-response-button"
@ -1340,6 +1364,7 @@
fill="none" fill="none"
viewBox="0 0 24 24" viewBox="0 0 24 24"
stroke-width="2.3" stroke-width="2.3"
aria-hidden="true"
stroke="currentColor" stroke="currentColor"
class="w-4 h-4" class="w-4 h-4"
> >
@ -1356,6 +1381,7 @@
<Tooltip content={$i18n.t('Delete')} placement="bottom"> <Tooltip content={$i18n.t('Delete')} placement="bottom">
<button <button
type="button" type="button"
aria-label={$i18n.t('Delete')}
id="delete-response-button" id="delete-response-button"
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
@ -1370,6 +1396,7 @@
viewBox="0 0 24 24" viewBox="0 0 24 24"
stroke-width="2" stroke-width="2"
stroke="currentColor" stroke="currentColor"
aria-hidden="true"
class="w-4 h-4" class="w-4 h-4"
> >
<path <path
@ -1387,6 +1414,7 @@
<Tooltip content={action.name} placement="bottom"> <Tooltip content={action.name} placement="bottom">
<button <button
type="button" type="button"
aria-label={action.name}
class="{isLastMessage class="{isLastMessage
? 'visible' ? 'visible'
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition" : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition"

View file

@ -25,6 +25,19 @@
toast.success($i18n.t('Default model updated')); toast.success($i18n.t('Default model updated'));
}; };
const pinModelHandler = async (modelId) => {
let pinnedModels = $settings?.pinnedModels ?? [];
if (pinnedModels.includes(modelId)) {
pinnedModels = pinnedModels.filter((id) => id !== modelId);
} else {
pinnedModels = [...new Set([...pinnedModels, modelId])];
}
settings.set({ ...$settings, pinnedModels: pinnedModels });
await updateUserSettings(localStorage.token, { ui: $settings });
};
$: if (selectedModels.length > 0 && $models.length > 0) { $: if (selectedModels.length > 0 && $models.length > 0) {
selectedModels = selectedModels.map((model) => selectedModels = selectedModels.map((model) =>
$models.map((m) => m.id).includes(model) ? model : '' $models.map((m) => m.id).includes(model) ? model : ''
@ -49,6 +62,7 @@
? ($user?.permissions?.chat?.temporary ?? true) && ? ($user?.permissions?.chat?.temporary ?? true) &&
!($user?.permissions?.chat?.temporary_enforced ?? false) !($user?.permissions?.chat?.temporary_enforced ?? false)
: true} : true}
{pinModelHandler}
bind:value={selectedModel} bind:value={selectedModel}
/> />
</div> </div>

View file

@ -0,0 +1,250 @@
<script lang="ts">
import { marked } from 'marked';
import { getContext, tick } from 'svelte';
import dayjs from '$lib/dayjs';
import { mobile, settings, user } from '$lib/stores';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import { copyToClipboard, sanitizeResponseContent } from '$lib/utils';
import ArrowUpTray from '$lib/components/icons/ArrowUpTray.svelte';
import Check from '$lib/components/icons/Check.svelte';
import ModelItemMenu from './ModelItemMenu.svelte';
import EllipsisHorizontal from '$lib/components/icons/EllipsisHorizontal.svelte';
import { toast } from 'svelte-sonner';
const i18n = getContext('i18n');
export let selectedModelIdx: number = -1;
export let item: any = {};
export let index: number = -1;
export let value: string = '';
export let unloadModelHandler: (modelValue: string) => void = () => {};
export let pinModelHandler: (modelId: string) => void = () => {};
export let onClick: () => void = () => {};
const copyLinkHandler = async (model) => {
const baseUrl = window.location.origin;
const res = await copyToClipboard(`${baseUrl}/?model=${encodeURIComponent(model.id)}`);
if (res) {
toast.success($i18n.t('Copied link to clipboard'));
} else {
toast.error($i18n.t('Failed to copy link'));
}
};
let showMenu = false;
</script>
<button
aria-label="model-item"
class="flex group/item w-full text-left font-medium line-clamp-1 select-none items-center rounded-button py-2 pl-3 pr-1.5 text-sm text-gray-700 dark:text-gray-100 outline-hidden transition-all duration-75 hover:bg-gray-100 dark:hover:bg-gray-800 rounded-lg cursor-pointer data-highlighted:bg-muted {index ===
selectedModelIdx
? 'bg-gray-100 dark:bg-gray-800 group-hover:bg-transparent'
: ''}"
data-arrow-selected={index === selectedModelIdx}
data-value={item.value}
on:click={() => {
onClick();
}}
>
<div class="flex flex-col flex-1 gap-1.5">
{#if (item?.model?.tags ?? []).length > 0}
<div
class="flex gap-0.5 self-center items-start h-full w-full translate-y-[0.5px] overflow-x-auto scrollbar-none"
>
{#each item.model?.tags.sort((a, b) => a.name.localeCompare(b.name)) as tag}
<Tooltip content={tag.name} className="flex-shrink-0">
<div
class=" text-xs font-bold px-1 rounded-sm uppercase bg-gray-500/20 text-gray-700 dark:text-gray-200"
>
{tag.name}
</div>
</Tooltip>
{/each}
</div>
{/if}
<div class="flex items-center gap-2">
<div class="flex items-center min-w-fit">
<Tooltip content={$user?.role === 'admin' ? (item?.value ?? '') : ''} placement="top-start">
<img
src={item.model?.info?.meta?.profile_image_url ?? '/static/favicon.png'}
alt="Model"
class="rounded-full size-5 flex items-center"
/>
</Tooltip>
</div>
<div class="flex items-center">
<Tooltip content={`${item.label} (${item.value})`} placement="top-start">
<div class="line-clamp-1">
{item.label}
</div>
</Tooltip>
</div>
<div class=" shrink-0 flex items-center gap-2">
{#if item.model.owned_by === 'ollama'}
{#if (item.model.ollama?.details?.parameter_size ?? '') !== ''}
<div class="flex items-center translate-y-[0.5px]">
<Tooltip
content={`${
item.model.ollama?.details?.quantization_level
? item.model.ollama?.details?.quantization_level + ' '
: ''
}${
item.model.ollama?.size
? `(${(item.model.ollama?.size / 1024 ** 3).toFixed(1)}GB)`
: ''
}`}
className="self-end"
>
<span class=" text-xs font-medium text-gray-600 dark:text-gray-400 line-clamp-1"
>{item.model.ollama?.details?.parameter_size ?? ''}</span
>
</Tooltip>
</div>
{/if}
{#if item.model.ollama?.expires_at && new Date(item.model.ollama?.expires_at * 1000) > new Date()}
<div class="flex items-center translate-y-[0.5px] px-0.5">
<Tooltip
content={`${$i18n.t('Unloads {{FROM_NOW}}', {
FROM_NOW: dayjs(item.model.ollama?.expires_at * 1000).fromNow()
})}`}
className="self-end"
>
<div class=" flex items-center">
<span class="relative flex size-2">
<span
class="animate-ping absolute inline-flex h-full w-full rounded-full bg-green-400 opacity-75"
/>
<span class="relative inline-flex rounded-full size-2 bg-green-500" />
</span>
</div>
</Tooltip>
</div>
{/if}
{/if}
<!-- {JSON.stringify(item.info)} -->
{#if item.model?.direct}
<Tooltip content={`${$i18n.t('Direct')}`}>
<div class="translate-y-[1px]">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="size-3"
>
<path
fill-rule="evenodd"
d="M2 2.75A.75.75 0 0 1 2.75 2C8.963 2 14 7.037 14 13.25a.75.75 0 0 1-1.5 0c0-5.385-4.365-9.75-9.75-9.75A.75.75 0 0 1 2 2.75Zm0 4.5a.75.75 0 0 1 .75-.75 6.75 6.75 0 0 1 6.75 6.75.75.75 0 0 1-1.5 0C8 10.35 5.65 8 2.75 8A.75.75 0 0 1 2 7.25ZM3.5 11a1.5 1.5 0 1 0 0 3 1.5 1.5 0 0 0 0-3Z"
clip-rule="evenodd"
/>
</svg>
</div>
</Tooltip>
{:else if item.model.connection_type === 'external'}
<Tooltip content={`${$i18n.t('External')}`}>
<div class="translate-y-[1px]">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="size-3"
>
<path
fill-rule="evenodd"
d="M8.914 6.025a.75.75 0 0 1 1.06 0 3.5 3.5 0 0 1 0 4.95l-2 2a3.5 3.5 0 0 1-5.396-4.402.75.75 0 0 1 1.251.827 2 2 0 0 0 3.085 2.514l2-2a2 2 0 0 0 0-2.828.75.75 0 0 1 0-1.06Z"
clip-rule="evenodd"
/>
<path
fill-rule="evenodd"
d="M7.086 9.975a.75.75 0 0 1-1.06 0 3.5 3.5 0 0 1 0-4.95l2-2a3.5 3.5 0 0 1 5.396 4.402.75.75 0 0 1-1.251-.827 2 2 0 0 0-3.085-2.514l-2 2a2 2 0 0 0 0 2.828.75.75 0 0 1 0 1.06Z"
clip-rule="evenodd"
/>
</svg>
</div>
</Tooltip>
{/if}
{#if item.model?.info?.meta?.description}
<Tooltip
content={`${marked.parse(
sanitizeResponseContent(item.model?.info?.meta?.description).replaceAll('\n', '<br>')
)}`}
>
<div class=" translate-y-[1px]">
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
stroke-width="1.5"
stroke="currentColor"
class="w-4 h-4"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
d="m11.25 11.25.041-.02a.75.75 0 0 1 1.063.852l-.708 2.836a.75.75 0 0 0 1.063.853l.041-.021M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0Zm-9-3.75h.008v.008H12V8.25Z"
/>
</svg>
</div>
</Tooltip>
{/if}
</div>
</div>
</div>
<div class="ml-auto pl-2 pr-1 flex items-center gap-1.5 shrink-0">
{#if $user?.role === 'admin' && item.model.owned_by === 'ollama' && item.model.ollama?.expires_at && new Date(item.model.ollama?.expires_at * 1000) > new Date()}
<Tooltip
content={`${$i18n.t('Eject')}`}
className="flex-shrink-0 group-hover/item:opacity-100 opacity-0 "
>
<button
class="flex"
on:click={(e) => {
e.preventDefault();
e.stopPropagation();
unloadModelHandler(item.value);
}}
>
<ArrowUpTray className="size-3" />
</button>
</Tooltip>
{/if}
<ModelItemMenu
bind:show={showMenu}
model={item.model}
{pinModelHandler}
copyLinkHandler={() => {
copyLinkHandler(item.model);
}}
>
<button
class="flex"
on:click={(e) => {
e.preventDefault();
e.stopPropagation();
showMenu = !showMenu;
}}
>
<EllipsisHorizontal />
</button>
</ModelItemMenu>
{#if value === item.value}
<div>
<Check className="size-3" />
</div>
{/if}
</div>
</button>

View file

@@ -0,0 +1,90 @@
<script lang="ts">
import { DropdownMenu } from 'bits-ui';
import { flyAndScale } from '$lib/utils/transitions';
import { getContext } from 'svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Link from '$lib/components/icons/Link.svelte';
import Eye from '$lib/components/icons/Eye.svelte';
import EyeSlash from '$lib/components/icons/EyeSlash.svelte';
import { settings } from '$lib/stores';
const i18n = getContext('i18n');
export let show = false;
export let model;
export let pinModelHandler: (modelId: string) => void = () => {};
export let copyLinkHandler: Function = () => {};
export let onClose: Function = () => {};
</script>
<DropdownMenu.Root
bind:open={show}
closeFocus={false}
onOpenChange={(state) => {
if (state === false) {
onClose();
}
}}
typeahead={false}
>
<DropdownMenu.Trigger>
<Tooltip content={$i18n.t('More')} className=" group-hover/item:opacity-100 opacity-0">
<slot />
</Tooltip>
</DropdownMenu.Trigger>
<DropdownMenu.Content
strategy="fixed"
class="w-full max-w-[180px] text-sm rounded-xl px-1 py-1.5 z-[9999999] bg-white dark:bg-gray-850 dark:text-white shadow-lg"
sideOffset={-2}
side="bottom"
align="end"
transition={flyAndScale}
>
<button
type="button"
class="flex rounded-md py-1.5 px-3 w-full hover:bg-gray-50 dark:hover:bg-gray-800 transition items-center gap-2"
on:click={(e) => {
e.stopPropagation();
e.preventDefault();
pinModelHandler(model?.id);
show = false;
}}
>
{#if ($settings?.pinnedModels ?? []).includes(model?.id)}
<EyeSlash />
{:else}
<Eye />
{/if}
<div class="flex items-center">
{#if ($settings?.pinnedModels ?? []).includes(model?.id)}
{$i18n.t('Hide from Sidebar')}
{:else}
{$i18n.t('Keep in Sidebar')}
{/if}
</div>
</button>
<button
type="button"
class="flex rounded-md py-1.5 px-3 w-full hover:bg-gray-50 dark:hover:bg-gray-800 transition items-center gap-2"
on:click={(e) => {
e.stopPropagation();
e.preventDefault();
copyLinkHandler();
show = false;
}}
>
<Link />
<div class="flex items-center">{$i18n.t('Copy Link')}</div>
</button>
</DropdownMenu.Content>
</DropdownMenu.Root>
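The new ModelItemMenu exposes its trigger as a slot inside DropdownMenu.Trigger, so the parent supplies the button and binds show to open and close the menu. Condensed from the ModelItem.svelte hunk earlier in this commit, the wiring looks like:

<!-- Condensed usage sketch; markup trimmed from ModelItem.svelte above -->
<ModelItemMenu
	bind:show={showMenu}
	model={item.model}
	{pinModelHandler}
	copyLinkHandler={() => copyLinkHandler(item.model)}
>
	<EllipsisHorizontal />
</ModelItemMenu>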

View file

@@ -3,12 +3,13 @@
import { marked } from 'marked'; import { marked } from 'marked';
import Fuse from 'fuse.js'; import Fuse from 'fuse.js';
import dayjs from '$lib/dayjs';
import relativeTime from 'dayjs/plugin/relativeTime';
dayjs.extend(relativeTime);
import { flyAndScale } from '$lib/utils/transitions'; import { flyAndScale } from '$lib/utils/transitions';
import { createEventDispatcher, onMount, getContext, tick } from 'svelte'; import { createEventDispatcher, onMount, getContext, tick } from 'svelte';
import { goto } from '$app/navigation';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
import Check from '$lib/components/icons/Check.svelte';
import Search from '$lib/components/icons/Search.svelte';
import { deleteModel, getOllamaVersion, pullModel, unloadModel } from '$lib/apis/ollama'; import { deleteModel, getOllamaVersion, pullModel, unloadModel } from '$lib/apis/ollama';
@@ -25,14 +26,14 @@
import { capitalizeFirstLetter, sanitizeResponseContent, splitStream } from '$lib/utils'; import { capitalizeFirstLetter, sanitizeResponseContent, splitStream } from '$lib/utils';
import { getModels } from '$lib/apis'; import { getModels } from '$lib/apis';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
import Check from '$lib/components/icons/Check.svelte';
import Search from '$lib/components/icons/Search.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import Switch from '$lib/components/common/Switch.svelte'; import Switch from '$lib/components/common/Switch.svelte';
import ChatBubbleOval from '$lib/components/icons/ChatBubbleOval.svelte'; import ChatBubbleOval from '$lib/components/icons/ChatBubbleOval.svelte';
import { goto } from '$app/navigation';
import dayjs from '$lib/dayjs'; import ModelItem from './ModelItem.svelte';
import relativeTime from 'dayjs/plugin/relativeTime';
import ArrowUpTray from '$lib/components/icons/ArrowUpTray.svelte';
dayjs.extend(relativeTime);
const i18n = getContext('i18n'); const i18n = getContext('i18n');
const dispatch = createEventDispatcher(); const dispatch = createEventDispatcher();
@@ -56,6 +57,8 @@
export let className = 'w-[32rem]'; export let className = 'w-[32rem]';
export let triggerClassName = 'text-lg'; export let triggerClassName = 'text-lg';
export let pinModelHandler: (modelId: string) => void = () => {};
let tagsContainerElement; let tagsContainerElement;
let show = false; let show = false;
@@ -407,10 +410,10 @@
</div> </div>
{/if} {/if}
<div class="px-3 max-h-64 overflow-y-auto scrollbar-hidden group relative"> <div class="px-3">
{#if tags && items.filter((item) => !(item.model?.info?.meta?.hidden ?? false)).length > 0} {#if tags && items.filter((item) => !(item.model?.info?.meta?.hidden ?? false)).length > 0}
<div <div
class=" flex w-full sticky top-0 z-10 bg-white dark:bg-gray-850 overflow-x-auto scrollbar-none" class=" flex w-full bg-white dark:bg-gray-850 overflow-x-auto scrollbar-none"
on:wheel={(e) => { on:wheel={(e) => {
if (e.deltaY !== 0) { if (e.deltaY !== 0) {
e.preventDefault(); e.preventDefault();
@@ -492,212 +495,24 @@
</div> </div>
</div> </div>
{/if} {/if}
</div>
<div class="px-3 max-h-64 overflow-y-auto group relative">
{#each filteredItems as item, index} {#each filteredItems as item, index}
<button <ModelItem
aria-label="model-item" {selectedModelIdx}
class="flex w-full text-left font-medium line-clamp-1 select-none items-center rounded-button py-2 pl-3 pr-1.5 text-sm text-gray-700 dark:text-gray-100 outline-hidden transition-all duration-75 hover:bg-gray-100 dark:hover:bg-gray-800 rounded-lg cursor-pointer data-highlighted:bg-muted {index === {item}
selectedModelIdx {index}
? 'bg-gray-100 dark:bg-gray-800 group-hover:bg-transparent' {value}
: ''}" {pinModelHandler}
data-arrow-selected={index === selectedModelIdx} {unloadModelHandler}
data-value={item.value} onClick={() => {
on:click={() => {
value = item.value; value = item.value;
selectedModelIdx = index; selectedModelIdx = index;
show = false; show = false;
}} }}
> />
<div class="flex flex-col">
{#if $mobile && (item?.model?.tags ?? []).length > 0}
<div class="flex gap-0.5 self-start h-full mb-1.5 -translate-x-1">
{#each item.model?.tags.sort((a, b) => a.name.localeCompare(b.name)) as tag}
<div
class=" text-xs font-bold px-1 rounded-sm uppercase line-clamp-1 bg-gray-500/20 text-gray-700 dark:text-gray-200"
>
{tag.name}
</div>
{/each}
</div>
{/if}
<div class="flex items-center gap-2">
<div class="flex items-center min-w-fit">
<div class="line-clamp-1">
<div class="flex items-center min-w-fit">
<Tooltip
content={$user?.role === 'admin' ? (item?.value ?? '') : ''}
placement="top-start"
>
<img
src={item.model?.info?.meta?.profile_image_url ?? '/static/favicon.png'}
alt="Model"
class="rounded-full size-5 flex items-center mr-2"
/>
<div class="flex items-center line-clamp-1">
<div class="line-clamp-1">
{item.label}
</div>
</div>
</Tooltip>
</div>
</div>
</div>
{#if item.model.owned_by === 'ollama'}
{#if (item.model.ollama?.details?.parameter_size ?? '') !== ''}
<div class="flex items-center translate-y-[0.5px]">
<Tooltip
content={`${
item.model.ollama?.details?.quantization_level
? item.model.ollama?.details?.quantization_level + ' '
: ''
}${
item.model.ollama?.size
? `(${(item.model.ollama?.size / 1024 ** 3).toFixed(1)}GB)`
: ''
}`}
className="self-end"
>
<span
class=" text-xs font-medium text-gray-600 dark:text-gray-400 line-clamp-1"
>{item.model.ollama?.details?.parameter_size ?? ''}</span
>
</Tooltip>
</div>
{/if}
{#if item.model.ollama?.expires_at && new Date(item.model.ollama?.expires_at * 1000) > new Date()}
<div class="flex items-center translate-y-[0.5px] px-0.5">
<Tooltip
content={`${$i18n.t('Unloads {{FROM_NOW}}', {
FROM_NOW: dayjs(item.model.ollama?.expires_at * 1000).fromNow()
})}`}
className="self-end"
>
<div class=" flex items-center">
<span class="relative flex size-2">
<span
class="animate-ping absolute inline-flex h-full w-full rounded-full bg-green-400 opacity-75"
/>
<span class="relative inline-flex rounded-full size-2 bg-green-500" />
</span>
</div>
</Tooltip>
</div>
{/if}
{/if}
<!-- {JSON.stringify(item.info)} -->
{#if item.model?.direct}
<Tooltip content={`${$i18n.t('Direct')}`}>
<div class="translate-y-[1px]">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="size-3"
>
<path
fill-rule="evenodd"
d="M2 2.75A.75.75 0 0 1 2.75 2C8.963 2 14 7.037 14 13.25a.75.75 0 0 1-1.5 0c0-5.385-4.365-9.75-9.75-9.75A.75.75 0 0 1 2 2.75Zm0 4.5a.75.75 0 0 1 .75-.75 6.75 6.75 0 0 1 6.75 6.75.75.75 0 0 1-1.5 0C8 10.35 5.65 8 2.75 8A.75.75 0 0 1 2 7.25ZM3.5 11a1.5 1.5 0 1 0 0 3 1.5 1.5 0 0 0 0-3Z"
clip-rule="evenodd"
/>
</svg>
</div>
</Tooltip>
{:else if item.model.connection_type === 'external'}
<Tooltip content={`${$i18n.t('External')}`}>
<div class="translate-y-[1px]">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="size-3"
>
<path
fill-rule="evenodd"
d="M8.914 6.025a.75.75 0 0 1 1.06 0 3.5 3.5 0 0 1 0 4.95l-2 2a3.5 3.5 0 0 1-5.396-4.402.75.75 0 0 1 1.251.827 2 2 0 0 0 3.085 2.514l2-2a2 2 0 0 0 0-2.828.75.75 0 0 1 0-1.06Z"
clip-rule="evenodd"
/>
<path
fill-rule="evenodd"
d="M7.086 9.975a.75.75 0 0 1-1.06 0 3.5 3.5 0 0 1 0-4.95l2-2a3.5 3.5 0 0 1 5.396 4.402.75.75 0 0 1-1.251-.827 2 2 0 0 0-3.085-2.514l-2 2a2 2 0 0 0 0 2.828.75.75 0 0 1 0 1.06Z"
clip-rule="evenodd"
/>
</svg>
</div>
</Tooltip>
{/if}
{#if item.model?.info?.meta?.description}
<Tooltip
content={`${marked.parse(
sanitizeResponseContent(item.model?.info?.meta?.description).replaceAll(
'\n',
'<br>'
)
)}`}
>
<div class=" translate-y-[1px]">
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
stroke-width="1.5"
stroke="currentColor"
class="w-4 h-4"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
d="m11.25 11.25.041-.02a.75.75 0 0 1 1.063.852l-.708 2.836a.75.75 0 0 0 1.063.853l.041-.021M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0Zm-9-3.75h.008v.008H12V8.25Z"
/>
</svg>
</div>
</Tooltip>
{/if}
{#if !$mobile && (item?.model?.tags ?? []).length > 0}
<div
class="flex gap-0.5 self-center items-center h-full translate-y-[0.5px] overflow-x-auto scrollbar-none"
>
{#each item.model?.tags.sort((a, b) => a.name.localeCompare(b.name)) as tag}
<Tooltip content={tag.name} className="flex-shrink-0">
<div
class=" text-xs font-bold px-1 rounded-sm uppercase bg-gray-500/20 text-gray-700 dark:text-gray-200"
>
{tag.name}
</div>
</Tooltip>
{/each}
</div>
{/if}
</div>
</div>
<div class="ml-auto pl-2 pr-1 flex gap-1.5 items-center">
{#if $user?.role === 'admin' && item.model.owned_by === 'ollama' && item.model.ollama?.expires_at && new Date(item.model.ollama?.expires_at * 1000) > new Date()}
<Tooltip content={`${$i18n.t('Eject')}`} className="flex-shrink-0">
<button
class="flex"
on:click={() => {
unloadModelHandler(item.value);
}}
>
<ArrowUpTray className="size-3" />
</button>
</Tooltip>
{/if}
{#if value === item.value}
<div>
<Check className="size-3" />
</div>
{/if}
</div>
</button>
{:else} {:else}
<div class=""> <div class="">
<div class="block px-3 py-2 text-sm text-gray-700 dark:text-gray-100"> <div class="block px-3 py-2 text-sm text-gray-700 dark:text-gray-100">

View file

@@ -93,7 +93,7 @@
<div class="m-auto w-full max-w-6xl px-2 @2xl:px-20 translate-y-6 py-24 text-center"> <div class="m-auto w-full max-w-6xl px-2 @2xl:px-20 translate-y-6 py-24 text-center">
{#if $temporaryChatEnabled} {#if $temporaryChatEnabled}
<Tooltip <Tooltip
content={$i18n.t('This chat wont appear in history and your messages will not be saved.')} content={$i18n.t("This chat won't appear in history and your messages will not be saved.")}
className="w-full flex justify-center mb-0.5" className="w-full flex justify-center mb-0.5"
placement="top" placement="top"
> >

View file

@@ -42,7 +42,7 @@
}); });
</script> </script>
<div class="flex flex-col h-full justify-between space-y-3 text-sm mb-6"> <div id="tab-about" class="flex flex-col h-full justify-between space-y-3 text-sm mb-6">
<div class=" space-y-3 overflow-y-scroll max-h-[28rem] lg:max-h-full"> <div class=" space-y-3 overflow-y-scroll max-h-[28rem] lg:max-h-full">
<div> <div>
<div class=" mb-2.5 text-sm font-medium flex space-x-2 items-center"> <div class=" mb-2.5 text-sm font-medium flex space-x-2 items-center">

View file

@@ -86,7 +86,7 @@
}); });
</script> </script>
<div class="flex flex-col h-full justify-between text-sm"> <div id="tab-account" class="flex flex-col h-full justify-between text-sm">
<div class=" overflow-y-scroll max-h-[28rem] lg:max-h-full"> <div class=" overflow-y-scroll max-h-[28rem] lg:max-h-full">
<input <input
id="profile-image-input" id="profile-image-input"

View file

@@ -1,5 +1,6 @@
<script lang="ts"> <script lang="ts">
import Switch from '$lib/components/common/Switch.svelte'; import Switch from '$lib/components/common/Switch.svelte';
import Textarea from '$lib/components/common/Textarea.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import Plus from '$lib/components/icons/Plus.svelte'; import Plus from '$lib/components/icons/Plus.svelte';
import { getContext } from 'svelte'; import { getContext } from 'svelte';
@@ -34,6 +35,9 @@
repeat_penalty: null, repeat_penalty: null,
use_mmap: null, use_mmap: null,
use_mlock: null, use_mlock: null,
think: null,
format: null,
keep_alive: null,
num_keep: null, num_keep: null,
num_ctx: null, num_ctx: null,
num_batch: null, num_batch: null,
@@ -87,7 +91,7 @@
<div> <div>
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the models built-in tool-calling capabilities, but requires the model to inherently support this feature.' "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature."
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@@ -1092,6 +1096,74 @@
</div> </div>
{/if} {/if}
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.'
)}
placement="top-start"
className="inline-tooltip"
>
<div class=" py-0.5 flex w-full justify-between">
<div class=" self-center text-xs font-medium">
{'think'} ({$i18n.t('Ollama')})
</div>
<button
class="p-1 px-3 text-xs flex rounded-sm transition"
on:click={() => {
params.think = (params?.think ?? null) === null ? true : params.think ? false : null;
}}
type="button"
>
{#if params.think === true}
<span class="ml-2 self-center">{$i18n.t('On')}</span>
{:else if params.think === false}
<span class="ml-2 self-center">{$i18n.t('Off')}</span>
{:else}
<span class="ml-2 self-center">{$i18n.t('Default')}</span>
{/if}
</button>
</div>
</Tooltip>
</div>
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t('The format to return a response in. Format can be json or a JSON schema.')}
placement="top-start"
className="inline-tooltip"
>
<div class=" py-0.5 flex w-full justify-between">
<div class=" self-center text-xs font-medium">
{'format'} ({$i18n.t('Ollama')})
</div>
<button
class="p-1 px-3 text-xs flex rounded-sm transition"
on:click={() => {
params.format = (params?.format ?? null) === null ? 'json' : null;
}}
type="button"
>
{#if (params?.format ?? null) === null}
<span class="ml-2 self-center">{$i18n.t('Default')}</span>
{:else}
<span class="ml-2 self-center">{$i18n.t('JSON')}</span>
{/if}
</button>
</div>
</Tooltip>
{#if (params?.format ?? null) !== null}
<div class="flex mt-0.5 space-x-2">
<Textarea
className="w-full text-sm bg-transparent outline-hidden"
placeholder={$i18n.t('e.g. "json" or a JSON schema')}
bind:value={params.format}
/>
</div>
{/if}
</div>
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
@@ -1368,6 +1440,46 @@
{/if} {/if}
</div> </div>
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'This option controls how long the model will stay loaded into memory following the request (default: 5m)'
)}
placement="top-start"
className="inline-tooltip"
>
<div class=" py-0.5 flex w-full justify-between">
<div class=" self-center text-xs font-medium">
{'keep_alive'} ({$i18n.t('Ollama')})
</div>
<button
class="p-1 px-3 text-xs flex rounded-sm transition"
on:click={() => {
params.keep_alive = (params?.keep_alive ?? null) === null ? '5m' : null;
}}
type="button"
>
{#if (params?.keep_alive ?? null) === null}
<span class="ml-2 self-center">{$i18n.t('Default')}</span>
{:else}
<span class="ml-2 self-center">{$i18n.t('Custom')}</span>
{/if}
</button>
</div>
</Tooltip>
{#if (params?.keep_alive ?? null) !== null}
<div class="flex mt-0.5 space-x-2">
<input
class="w-full text-sm bg-transparent outline-hidden"
type="text"
placeholder={$i18n.t("e.g. '30s','10m'. Valid time units are 's', 'm', 'h'.")}
bind:value={params.keep_alive}
/>
</div>
{/if}
</div>
{#if custom && admin} {#if custom && admin}
<div class="flex flex-col justify-center"> <div class="flex flex-col justify-center">
{#each Object.keys(params?.custom_params ?? {}) as key} {#each Object.keys(params?.custom_params ?? {}) as key}
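The three new controls above surface Ollama-native request fields: think toggles the reasoning pass on supporting models, format constrains the response ('json' or a JSON schema), and keep_alive controls how long the model stays loaded after a request. A sketch of how these fields could appear in an Ollama generate call, assuming the backend forwards them unchanged; the endpoint and model name are placeholders:

// Sketch only: field names match Ollama's generate API; forwarding is assumed.
const res = await fetch('http://localhost:11434/api/generate', {
	method: 'POST',
	headers: { 'Content-Type': 'application/json' },
	body: JSON.stringify({
		model: 'llama3.1',
		prompt: 'Why is the sky blue?',
		think: true, // reason before answering; null falls back to the default
		format: 'json', // or a JSON schema object
		keep_alive: '10m' // default is 5m; 0 unloads immediately
	})
});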

View file

@@ -154,6 +154,7 @@
</script> </script>
<form <form
id="tab-audio"
class="flex flex-col h-full justify-between space-y-3 text-sm" class="flex flex-col h-full justify-between space-y-3 text-sm"
on:submit|preventDefault={async () => { on:submit|preventDefault={async () => {
saveSettings({ saveSettings({

View file

@@ -107,7 +107,7 @@
<ArchivedChatsModal bind:show={showArchivedChatsModal} onUpdate={handleArchivedChatsChange} /> <ArchivedChatsModal bind:show={showArchivedChatsModal} onUpdate={handleArchivedChatsChange} />
<div class="flex flex-col h-full justify-between space-y-3 text-sm"> <div id="tab-chats" class="flex flex-col h-full justify-between space-y-3 text-sm">
<div class=" space-y-2 overflow-y-scroll max-h-[28rem] lg:max-h-full"> <div class=" space-y-2 overflow-y-scroll max-h-[28rem] lg:max-h-full">
<div class="flex flex-col"> <div class="flex flex-col">
<input <input

View file

@@ -70,6 +70,7 @@
<AddConnectionModal direct bind:show={showConnectionModal} onSubmit={addConnectionHandler} /> <AddConnectionModal direct bind:show={showConnectionModal} onSubmit={addConnectionHandler} />
<form <form
id="tab-connections"
class="flex flex-col h-full justify-between text-sm" class="flex flex-col h-full justify-between text-sm"
on:submit|preventDefault={() => { on:submit|preventDefault={() => {
updateHandler(); updateHandler();
@@ -126,7 +127,11 @@
</div> </div>
<div class="my-1.5"> <div class="my-1.5">
<div class="text-xs text-gray-500"> <div
class="text-xs {($settings?.highContrastMode ?? false)
? 'text-gray-800 dark:text-gray-100'
: 'text-gray-500'}"
>
{$i18n.t('Connect to your own OpenAI compatible API endpoints.')} {$i18n.t('Connect to your own OpenAI compatible API endpoints.')}
<br /> <br />
{$i18n.t( {$i18n.t(

View file

@@ -10,12 +10,11 @@
import AdvancedParams from './Advanced/AdvancedParams.svelte'; import AdvancedParams from './Advanced/AdvancedParams.svelte';
import Textarea from '$lib/components/common/Textarea.svelte'; import Textarea from '$lib/components/common/Textarea.svelte';
export let saveSettings: Function; export let saveSettings: Function;
export let getModels: Function; export let getModels: Function;
// General // General
let themes = ['dark', 'light', 'rose-pine dark', 'rose-pine-dawn light', 'oled-dark']; let themes = ['dark', 'light', 'oled-dark'];
let selectedTheme = 'system'; let selectedTheme = 'system';
let languages: Awaited<ReturnType<typeof getLanguages>> = []; let languages: Awaited<ReturnType<typeof getLanguages>> = [];
@@ -40,10 +39,6 @@
} }
}; };
// Advanced
let requestFormat = null;
let keepAlive: string | null = null;
let params = { let params = {
// Advanced // Advanced
stream_response: null, stream_response: null,
@@ -71,37 +66,7 @@
num_gpu: null num_gpu: null
}; };
const validateJSON = (json) => {
try {
const obj = JSON.parse(json);
if (obj && typeof obj === 'object') {
return true;
}
} catch (e) {}
return false;
};
const toggleRequestFormat = async () => {
if (requestFormat === null) {
requestFormat = 'json';
} else {
requestFormat = null;
}
saveSettings({ requestFormat: requestFormat !== null ? requestFormat : undefined });
};
const saveHandler = async () => { const saveHandler = async () => {
if (requestFormat !== null && requestFormat !== 'json') {
if (validateJSON(requestFormat) === false) {
toast.error($i18n.t('Invalid JSON schema'));
return;
} else {
requestFormat = JSON.parse(requestFormat);
}
}
saveSettings({ saveSettings({
system: system !== '' ? system : undefined, system: system !== '' ? system : undefined,
params: { params: {
@@ -130,15 +95,13 @@
use_mmap: params.use_mmap !== null ? params.use_mmap : undefined, use_mmap: params.use_mmap !== null ? params.use_mmap : undefined,
use_mlock: params.use_mlock !== null ? params.use_mlock : undefined, use_mlock: params.use_mlock !== null ? params.use_mlock : undefined,
num_thread: params.num_thread !== null ? params.num_thread : undefined, num_thread: params.num_thread !== null ? params.num_thread : undefined,
num_gpu: params.num_gpu !== null ? params.num_gpu : undefined num_gpu: params.num_gpu !== null ? params.num_gpu : undefined,
}, think: params.think !== null ? params.think : undefined,
keepAlive: keepAlive ? (isNaN(keepAlive) ? keepAlive : parseInt(keepAlive)) : undefined, keep_alive: params.keep_alive !== null ? params.keep_alive : undefined,
requestFormat: requestFormat !== null ? requestFormat : undefined format: params.format !== null ? params.format : undefined
}
}); });
dispatch('save'); dispatch('save');
requestFormat =
typeof requestFormat === 'object' ? JSON.stringify(requestFormat, null, 2) : requestFormat;
}; };
onMount(async () => { onMount(async () => {
@@ -149,14 +112,6 @@
notificationEnabled = $settings.notificationEnabled ?? false; notificationEnabled = $settings.notificationEnabled ?? false;
system = $settings.system ?? ''; system = $settings.system ?? '';
requestFormat = $settings.requestFormat ?? null;
if (requestFormat !== null && requestFormat !== 'json') {
requestFormat =
typeof requestFormat === 'object' ? JSON.stringify(requestFormat, null, 2) : requestFormat;
}
keepAlive = $settings.keepAlive ?? null;
params = { ...params, ...$settings.params }; params = { ...params, ...$settings.params };
params.stop = $settings?.params?.stop ? ($settings?.params?.stop ?? []).join(',') : null; params.stop = $settings?.params?.stop ? ($settings?.params?.stop ?? []).join(',') : null;
}); });
@@ -232,7 +187,7 @@
}; };
</script> </script>
<div class="flex flex-col h-full justify-between text-sm"> <div class="flex flex-col h-full justify-between text-sm" id="tab-general">
<div class=" overflow-y-scroll max-h-[28rem] lg:max-h-full"> <div class=" overflow-y-scroll max-h-[28rem] lg:max-h-full">
<div class=""> <div class="">
<div class=" mb-1 text-sm font-medium">{$i18n.t('WebUI Settings')}</div> <div class=" mb-1 text-sm font-medium">{$i18n.t('WebUI Settings')}</div>
@@ -335,77 +290,6 @@
{#if showAdvanced} {#if showAdvanced}
<AdvancedParams admin={$user?.role === 'admin'} bind:params /> <AdvancedParams admin={$user?.role === 'admin'} bind:params />
<hr class=" border-gray-100 dark:border-gray-850" />
<div class=" w-full justify-between">
<div class="flex w-full justify-between">
<div class=" self-center text-xs font-medium">{$i18n.t('Keep Alive')}</div>
<button
class="p-1 px-3 text-xs flex rounded-sm transition"
type="button"
on:click={() => {
keepAlive = keepAlive === null ? '5m' : null;
}}
>
{#if keepAlive === null}
<span class="ml-2 self-center"> {$i18n.t('Default')} </span>
{:else}
<span class="ml-2 self-center"> {$i18n.t('Custom')} </span>
{/if}
</button>
</div>
{#if keepAlive !== null}
<div class="flex mt-1 space-x-2">
<input
class="w-full text-sm dark:text-gray-300 dark:bg-gray-850 outline-hidden"
type="text"
placeholder={$i18n.t("e.g. '30s','10m'. Valid time units are 's', 'm', 'h'.")}
bind:value={keepAlive}
/>
</div>
{/if}
</div>
<div>
<div class=" flex w-full justify-between">
<div class=" self-center text-xs font-medium">{$i18n.t('Request Mode')}</div>
<button
class="p-1 px-3 text-xs flex rounded-sm transition"
on:click={() => {
toggleRequestFormat();
}}
>
{#if requestFormat === null}
<span class="ml-2 self-center"> {$i18n.t('Default')} </span>
{:else}
<!-- <svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4 self-center"
>
<path
d="M10 2a.75.75 0 01.75.75v1.5a.75.75 0 01-1.5 0v-1.5A.75.75 0 0110 2zM10 15a.75.75 0 01.75.75v1.5a.75.75 0 01-1.5 0v-1.5A.75.75 0 0110 15zM10 7a3 3 0 100 6 3 3 0 000-6zM15.657 5.404a.75.75 0 10-1.06-1.06l-1.061 1.06a.75.75 0 001.06 1.06l1.06-1.06zM6.464 14.596a.75.75 0 10-1.06-1.06l-1.06 1.06a.75.75 0 001.06 1.06l1.06-1.06zM18 10a.75.75 0 01-.75.75h-1.5a.75.75 0 010-1.5h1.5A.75.75 0 0118 10zM5 10a.75.75 0 01-.75.75h-1.5a.75.75 0 010-1.5h1.5A.75.75 0 015 10zM14.596 15.657a.75.75 0 001.06-1.06l-1.06-1.061a.75.75 0 10-1.06 1.06l1.06 1.06zM5.404 6.464a.75.75 0 001.06-1.06l-1.06-1.06a.75.75 0 10-1.061 1.06l1.06 1.06z"
/>
</svg> -->
<span class="ml-2 self-center"> {$i18n.t('JSON')} </span>
{/if}
</button>
</div>
{#if requestFormat !== null}
<div class="flex mt-1 space-x-2">
<Textarea
className="w-full text-sm dark:text-gray-300 dark:bg-gray-900 outline-hidden"
placeholder={$i18n.t('e.g. "json" or a JSON schema')}
bind:value={requestFormat}
/>
</div>
{/if}
</div>
{/if} {/if}
</div> </div>
{/if} {/if}
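Net effect of this file's changes: the standalone Keep Alive and Request Mode controls, along with the requestFormat JSON validation round-trip, leave General settings, and the same knobs now travel inside params as keep_alive and format via AdvancedParams. A sketch of the saved payload after the change, shape taken from the saveHandler hunk above with unrelated keys omitted:

// Only the affected keys are shown; the rest of params is unchanged.
saveSettings({
	system: system !== '' ? system : undefined,
	params: {
		think: params.think !== null ? params.think : undefined,
		keep_alive: params.keep_alive !== null ? params.keep_alive : undefined, // e.g. '30s', '10m'
		format: params.format !== null ? params.format : undefined // 'json' or a JSON schema
	}
});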

View file

@@ -292,36 +292,36 @@
onMount(async () => { onMount(async () => {
titleAutoGenerate = $settings?.title?.auto ?? true; titleAutoGenerate = $settings?.title?.auto ?? true;
autoTags = $settings.autoTags ?? true; autoTags = $settings?.autoTags ?? true;
autoFollowUps = $settings.autoFollowUps ?? true; autoFollowUps = $settings?.autoFollowUps ?? true;
highContrastMode = $settings.highContrastMode ?? false; highContrastMode = $settings?.highContrastMode ?? false;
detectArtifacts = $settings.detectArtifacts ?? true; detectArtifacts = $settings?.detectArtifacts ?? true;
responseAutoCopy = $settings.responseAutoCopy ?? false; responseAutoCopy = $settings?.responseAutoCopy ?? false;
showUsername = $settings.showUsername ?? false; showUsername = $settings?.showUsername ?? false;
showUpdateToast = $settings.showUpdateToast ?? true; showUpdateToast = $settings?.showUpdateToast ?? true;
showChangelog = $settings.showChangelog ?? true; showChangelog = $settings?.showChangelog ?? true;
showEmojiInCall = $settings.showEmojiInCall ?? false; showEmojiInCall = $settings?.showEmojiInCall ?? false;
voiceInterruption = $settings.voiceInterruption ?? false; voiceInterruption = $settings?.voiceInterruption ?? false;
richTextInput = $settings.richTextInput ?? true; richTextInput = $settings?.richTextInput ?? true;
promptAutocomplete = $settings.promptAutocomplete ?? false; promptAutocomplete = $settings?.promptAutocomplete ?? false;
largeTextAsFile = $settings.largeTextAsFile ?? false; largeTextAsFile = $settings?.largeTextAsFile ?? false;
copyFormatted = $settings.copyFormatted ?? false; copyFormatted = $settings?.copyFormatted ?? false;
collapseCodeBlocks = $settings.collapseCodeBlocks ?? false; collapseCodeBlocks = $settings?.collapseCodeBlocks ?? false;
expandDetails = $settings.expandDetails ?? false; expandDetails = $settings?.expandDetails ?? false;
landingPageMode = $settings.landingPageMode ?? ''; landingPageMode = $settings?.landingPageMode ?? '';
chatBubble = $settings.chatBubble ?? true; chatBubble = $settings?.chatBubble ?? true;
widescreenMode = $settings.widescreenMode ?? false; widescreenMode = $settings?.widescreenMode ?? false;
splitLargeChunks = $settings.splitLargeChunks ?? false; splitLargeChunks = $settings?.splitLargeChunks ?? false;
scrollOnBranchChange = $settings.scrollOnBranchChange ?? true; scrollOnBranchChange = $settings?.scrollOnBranchChange ?? true;
chatDirection = $settings.chatDirection ?? 'auto'; chatDirection = $settings?.chatDirection ?? 'auto';
userLocation = $settings.userLocation ?? false; userLocation = $settings?.userLocation ?? false;
notificationSound = $settings?.notificationSound ?? true; notificationSound = $settings?.notificationSound ?? true;
notificationSoundAlways = $settings?.notificationSoundAlways ?? false; notificationSoundAlways = $settings?.notificationSoundAlways ?? false;
@@ -331,23 +331,24 @@
stylizedPdfExport = $settings?.stylizedPdfExport ?? true; stylizedPdfExport = $settings?.stylizedPdfExport ?? true;
hapticFeedback = $settings.hapticFeedback ?? false; hapticFeedback = $settings?.hapticFeedback ?? false;
ctrlEnterToSend = $settings.ctrlEnterToSend ?? false; ctrlEnterToSend = $settings?.ctrlEnterToSend ?? false;
imageCompression = $settings.imageCompression ?? false; imageCompression = $settings?.imageCompression ?? false;
imageCompressionSize = $settings.imageCompressionSize ?? { width: '', height: '' }; imageCompressionSize = $settings?.imageCompressionSize ?? { width: '', height: '' };
defaultModelId = $settings?.models?.at(0) ?? ''; defaultModelId = $settings?.models?.at(0) ?? '';
if ($config?.default_models) { if ($config?.default_models) {
defaultModelId = $config.default_models.split(',')[0]; defaultModelId = $config.default_models.split(',')[0];
} }
backgroundImageUrl = $settings.backgroundImageUrl ?? null; backgroundImageUrl = $settings?.backgroundImageUrl ?? null;
webSearch = $settings.webSearch ?? null; webSearch = $settings?.webSearch ?? null;
}); });
</script> </script>
<form <form
id="tab-interface"
class="flex flex-col h-full justify-between space-y-3 text-sm" class="flex flex-col h-full justify-between space-y-3 text-sm"
on:submit|preventDefault={() => { on:submit|preventDefault={() => {
updateInterfaceHandler(); updateInterfaceHandler();

View file

@@ -24,6 +24,7 @@
<ManageModal bind:show={showManageModal} /> <ManageModal bind:show={showManageModal} />
<form <form
id="tab-personalization"
class="flex flex-col h-full justify-between space-y-3 text-sm" class="flex flex-col h-full justify-between space-y-3 text-sm"
on:submit|preventDefault={() => { on:submit|preventDefault={() => {
dispatch('save'); dispatch('save');

View file

@@ -42,6 +42,7 @@
<AddServerModal bind:show={showConnectionModal} onSubmit={addConnectionHandler} direct /> <AddServerModal bind:show={showConnectionModal} onSubmit={addConnectionHandler} direct />
<form <form
id="tab-tools"
class="flex flex-col h-full justify-between text-sm" class="flex flex-col h-full justify-between text-sm"
on:submit|preventDefault={() => { on:submit|preventDefault={() => {
updateHandler(); updateHandler();

File diff suppressed because it is too large

View file

@@ -11,10 +11,13 @@
import { oneDark } from '@codemirror/theme-one-dark'; import { oneDark } from '@codemirror/theme-one-dark';
import { onMount, createEventDispatcher, getContext, tick } from 'svelte'; import { onMount, createEventDispatcher, getContext, tick, onDestroy } from 'svelte';
import PyodideWorker from '$lib/workers/pyodide.worker?worker';
import { formatPythonCode } from '$lib/apis/utils'; import { formatPythonCode } from '$lib/apis/utils';
import { toast } from 'svelte-sonner'; import { toast } from 'svelte-sonner';
import { user } from '$lib/stores';
const dispatch = createEventDispatcher(); const dispatch = createEventDispatcher();
const i18n = getContext('i18n'); const i18n = getContext('i18n');
@@ -113,13 +116,82 @@
return await language?.load(); return await language?.load();
}; };
let pyodideWorkerInstance = null;
const getPyodideWorker = () => {
if (!pyodideWorkerInstance) {
pyodideWorkerInstance = new PyodideWorker(); // Your worker constructor
}
return pyodideWorkerInstance;
};
// Generate unique IDs for requests
let _formatReqId = 0;
const formatPythonCodePyodide = (code) => {
return new Promise((resolve, reject) => {
const id = `format-${++_formatReqId}`;
let timeout;
const worker = getPyodideWorker();
const script = `
import black
print(black.format_str("""${code.replace(/\\/g, '\\\\').replace(/`/g, '\\`').replace(/"/g, '\\"')}""", mode=black.Mode()))
`;
const packages = ['black'];
function handleMessage(event) {
const { id: eventId, stdout, stderr } = event.data;
if (eventId !== id) return; // Only handle our message
clearTimeout(timeout);
worker.removeEventListener('message', handleMessage);
worker.removeEventListener('error', handleError);
if (stderr) {
reject(stderr);
} else {
const formatted = stdout && typeof stdout === 'string' ? stdout.trim() : '';
resolve({ code: formatted });
}
}
function handleError(event) {
clearTimeout(timeout);
worker.removeEventListener('message', handleMessage);
worker.removeEventListener('error', handleError);
reject(event.message || 'Pyodide worker error');
}
worker.addEventListener('message', handleMessage);
worker.addEventListener('error', handleError);
// Send to worker
worker.postMessage({ id, code: script, packages });
// Timeout
timeout = setTimeout(() => {
worker.removeEventListener('message', handleMessage);
worker.removeEventListener('error', handleError);
try {
worker.terminate();
} catch {}
pyodideWorkerInstance = null;
reject('Execution Time Limit Exceeded');
}, 60000);
});
};
export const formatPythonCodeHandler = async () => { export const formatPythonCodeHandler = async () => {
if (codeEditor) { if (codeEditor) {
const res = await formatPythonCode(localStorage.token, _value).catch((error) => { const res = await (
$user?.role === 'admin'
? formatPythonCode(localStorage.token, _value)
: formatPythonCodePyodide(_value)
).catch((error) => {
toast.error(`${error}`); toast.error(`${error}`);
return null; return null;
}); });
if (res && res.code) { if (res && res.code) {
const formattedCode = res.code; const formattedCode = res.code;
codeEditor.dispatch({ codeEditor.dispatch({
@@ -240,6 +312,12 @@
document.removeEventListener('keydown', keydownHandler); document.removeEventListener('keydown', keydownHandler);
}; };
}); });
onDestroy(() => {
if (pyodideWorkerInstance) {
pyodideWorkerInstance.terminate();
}
});
</script> </script>
<div id="code-textarea-{id}" class="h-full w-full text-sm" /> <div id="code-textarea-{id}" class="h-full w-full text-sm" />

View file

@@ -13,6 +13,9 @@
dayjs.extend(relativeTime); dayjs.extend(relativeTime);
async function loadLocale(locales) { async function loadLocale(locales) {
if (!locales || !Array.isArray(locales)) {
return;
}
for (const locale of locales) { for (const locale of locales) {
try { try {
dayjs.locale(locale); dayjs.locale(locale);

View file

@@ -2,10 +2,8 @@
import DOMPurify from 'dompurify'; import DOMPurify from 'dompurify';
import { onDestroy } from 'svelte'; import { onDestroy } from 'svelte';
import { marked } from 'marked';
import tippy from 'tippy.js'; import tippy from 'tippy.js';
import { roundArrow } from 'tippy.js';
export let placement = 'top'; export let placement = 'top';
export let content = `I'm a tooltip!`; export let content = `I'm a tooltip!`;
@@ -47,6 +45,6 @@
}); });
</script> </script>
<div bind:this={tooltipElement} aria-label={DOMPurify.sanitize(content)} class={className}> <div bind:this={tooltipElement} class={className}>
<slot /> <slot />
</div> </div>

View file

@@ -7,6 +7,7 @@
xmlns="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
fill="none" fill="none"
viewBox="0 0 24 24" viewBox="0 0 24 24"
aria-hidden="true"
stroke-width={strokeWidth} stroke-width={strokeWidth}
stroke="currentColor" stroke="currentColor"
class={className} class={className}

View file

@@ -21,7 +21,8 @@
channels, channels,
socket, socket,
config, config,
isApp isApp,
models
} from '$lib/stores'; } from '$lib/stores';
import { onMount, getContext, tick, onDestroy } from 'svelte'; import { onMount, getContext, tick, onDestroy } from 'svelte';
@@ -489,6 +490,8 @@
draggable="false" draggable="false"
on:click={async () => { on:click={async () => {
selectedChatId = null; selectedChatId = null;
await temporaryChatEnabled.set(false);
await goto('/'); await goto('/');
const newChatButton = document.getElementById('new-chat-button'); const newChatButton = document.getElementById('new-chat-button');
setTimeout(() => { setTimeout(() => {
@@ -649,6 +652,46 @@
? 'opacity-20' ? 'opacity-20'
: ''}" : ''}"
> >
{#if ($models ?? []).length > 0 && ($settings?.pinnedModels ?? []).length > 0}
<div class="mt-0.5">
{#each $settings.pinnedModels as modelId (modelId)}
{@const model = $models.find((model) => model.id === modelId)}
{#if model}
<div class="px-1.5 flex justify-center text-gray-800 dark:text-gray-200">
<a
class="grow flex items-center space-x-2.5 rounded-lg px-2 py-[7px] hover:bg-gray-100 dark:hover:bg-gray-900 transition"
href="/?model={modelId}"
on:click={() => {
selectedChatId = null;
chatId.set('');
if ($mobile) {
showSidebar.set(false);
}
}}
draggable="false"
>
<div class="self-center shrink-0">
<img
crossorigin="anonymous"
src={model?.info?.meta?.profile_image_url ?? '/static/favicon.png'}
class=" size-5 rounded-full -translate-x-[0.5px]"
alt="logo"
/>
</div>
<div class="flex self-center translate-y-[0.5px]">
<div class=" self-center font-medium text-sm font-primary line-clamp-1">
{model?.name ?? modelId}
</div>
</div>
</a>
</div>
{/if}
{/each}
</div>
{/if}
{#if $config?.features?.enable_channels && ($user?.role === 'admin' || $channels.length > 0)} {#if $config?.features?.enable_channels && ($user?.role === 'admin' || $channels.length > 0)}
<Folder <Folder
className="px-2 mt-0.5" className="px-2 mt-0.5"
@@ -785,7 +828,7 @@
<div <div
class="ml-3 pl-1 mt-[1px] flex flex-col overflow-y-auto scrollbar-hidden border-s border-gray-100 dark:border-gray-900" class="ml-3 pl-1 mt-[1px] flex flex-col overflow-y-auto scrollbar-hidden border-s border-gray-100 dark:border-gray-900"
> >
{#each $pinnedChats as chat, idx} {#each $pinnedChats as chat, idx (`pinned-chat-${chat?.id ?? idx}`)}
<ChatItem <ChatItem
className="" className=""
id={chat.id} id={chat.id}
@@ -831,7 +874,7 @@
<div class=" flex-1 flex flex-col overflow-y-auto scrollbar-hidden"> <div class=" flex-1 flex flex-col overflow-y-auto scrollbar-hidden">
<div class="pt-1.5"> <div class="pt-1.5">
{#if $chats} {#if $chats}
{#each $chats as chat, idx} {#each $chats as chat, idx (`chat-${chat?.id ?? idx}`)}
{#if idx === 0 || (idx > 0 && chat.time_range !== $chats[idx - 1].time_range)} {#if idx === 0 || (idx > 0 && chat.time_range !== $chats[idx - 1].time_range)}
<div <div
class="w-full pl-2.5 text-xs text-gray-500 dark:text-gray-500 font-medium {idx === class="w-full pl-2.5 text-xs text-gray-500 dark:text-gray-500 font-medium {idx ===

View file

@@ -204,9 +204,10 @@
const chatTitleInputKeydownHandler = (e) => { const chatTitleInputKeydownHandler = (e) => {
if (e.key === 'Enter') { if (e.key === 'Enter') {
e.preventDefault(); e.preventDefault();
editChatTitle(id, chatTitle); setTimeout(() => {
confirmEdit = false; const input = document.getElementById(`chat-title-input-${id}`);
chatTitle = ''; if (input) input.blur();
}, 0);
} else if (e.key === 'Escape') { } else if (e.key === 'Escape') {
e.preventDefault(); e.preventDefault();
confirmEdit = false; confirmEdit = false;
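Enter now defers to the input's blur handler instead of committing the title itself: the setTimeout blurs the field on the next tick, so the save happens in exactly one place and an Enter followed by a blur can no longer commit twice. A sketch of the pattern, assuming the input commits on blur the same way the removed Enter branch did (the blur handler is outside this hunk):

<!-- Sketch; the on:blur commit is assumed, mirroring the removed Enter branch -->
<input
	id={`chat-title-input-${id}`}
	bind:value={chatTitle}
	on:keydown={chatTitleInputKeydownHandler}
	on:blur={() => {
		editChatTitle(id, chatTitle);
		confirmEdit = false;
		chatTitle = '';
	}}
/>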

View file

@@ -29,7 +29,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[160px] rounded-lg px-1 py-1.5 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-lg" class="w-full max-w-[170px] rounded-lg px-1 py-1.5 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-lg"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="start" align="start"

View file

@@ -13,7 +13,7 @@
const i18n = getContext('i18n'); const i18n = getContext('i18n');
export let show = false; export let show = false;
export let className = 'max-w-[160px]'; export let className = 'max-w-[170px]';
export let onRecord = () => {}; export let onRecord = () => {};
export let onCaptureAudio = () => {}; export let onCaptureAudio = () => {};

View file

@@ -52,7 +52,7 @@
<div class=" pt-1"> <div class=" pt-1">
<button <button
class=" group-hover:text-gray-500 dark:text-gray-900 dark:hover:text-gray-300 transition" class=" group-hover:text-gray-500 dark:text-gray-500 dark:hover:text-gray-300 transition"
on:click={() => { on:click={() => {
onDelete(); onDelete();
}} }}

View file

@@ -49,7 +49,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[160px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm" class="w-full max-w-[170px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="end" align="end"

View file

@@ -48,7 +48,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[160px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm" class="w-full max-w-[170px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="start" align="start"

View file

@@ -22,7 +22,7 @@
import ChevronRight from '../icons/ChevronRight.svelte'; import ChevronRight from '../icons/ChevronRight.svelte';
import Spinner from '../common/Spinner.svelte'; import Spinner from '../common/Spinner.svelte';
import Tooltip from '../common/Tooltip.svelte'; import Tooltip from '../common/Tooltip.svelte';
import { capitalizeFirstLetter } from '$lib/utils'; import { capitalizeFirstLetter, slugify } from '$lib/utils';
import XMark from '../icons/XMark.svelte'; import XMark from '../icons/XMark.svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
@@ -68,7 +68,15 @@
}; };
const cloneHandler = async (prompt) => { const cloneHandler = async (prompt) => {
sessionStorage.prompt = JSON.stringify(prompt); const clonedPrompt = { ...prompt };
clonedPrompt.title = `${clonedPrompt.title} (Clone)`;
const baseCommand = clonedPrompt.command.startsWith('/')
? clonedPrompt.command.substring(1)
: clonedPrompt.command;
clonedPrompt.command = slugify(`${baseCommand} clone`);
sessionStorage.prompt = JSON.stringify(clonedPrompt);
goto('/workspace/prompts/create'); goto('/workspace/prompts/create');
}; };
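Cloning a prompt now derives a fresh command instead of reusing the original: the leading slash is stripped, ' clone' is appended, and slugify normalizes the result while the title gets a ' (Clone)' suffix. Assuming the slugify in $lib/utils lowercases and hyphen-joins (its implementation is outside this diff), the mapping looks like:

// Illustrative mapping; exact output depends on slugify in $lib/utils.
slugify('summarize clone'); // -> 'summarize-clone'
// so the prompt '/summarize' clones to command 'summarize-clone'
// and its title 'Summarize' becomes 'Summarize (Clone)'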

View file

@@ -13,6 +13,7 @@
export let onSubmit: Function; export let onSubmit: Function;
export let edit = false; export let edit = false;
export let prompt = null; export let prompt = null;
export let clone = false;
const i18n = getContext('i18n'); const i18n = getContext('i18n');

View file

@@ -39,7 +39,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[160px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm" class="w-full max-w-[170px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="start" align="start"

View file

@@ -40,7 +40,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[160px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm" class="w-full max-w-[170px] rounded-xl px-1 py-1.5 border border-gray-300/30 dark:border-gray-700/50 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="start" align="start"

View file

@@ -90,7 +90,9 @@
"and {{COUNT}} more": "", "and {{COUNT}} more": "",
"and create a new shared link.": "و أنشئ رابط مشترك جديد.", "and create a new shared link.": "و أنشئ رابط مشترك جديد.",
"Android": "", "Android": "",
"API": "",
"API Base URL": "API الرابط الرئيسي", "API Base URL": "API الرابط الرئيسي",
"API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.": "",
"API Key": "API مفتاح", "API Key": "API مفتاح",
"API Key created.": "API تم أنشاء المفتاح", "API Key created.": "API تم أنشاء المفتاح",
"API Key Endpoint Restrictions": "", "API Key Endpoint Restrictions": "",
@@ -207,6 +209,8 @@
"Clone Chat": "", "Clone Chat": "",
"Clone of {{TITLE}}": "", "Clone of {{TITLE}}": "",
"Close": "أغلق", "Close": "أغلق",
"Close modal": "",
"Close settings modal": "",
"Code execution": "", "Code execution": "",
"Code Execution": "", "Code Execution": "",
"Code Execution Engine": "", "Code Execution Engine": "",
@@ -294,7 +298,7 @@
"Default": "الإفتراضي", "Default": "الإفتراضي",
"Default (Open AI)": "", "Default (Open AI)": "",
"Default (SentenceTransformers)": "(SentenceTransformers) الإفتراضي", "Default (SentenceTransformers)": "(SentenceTransformers) الإفتراضي",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the models built-in tool-calling capabilities, but requires the model to inherently support this feature.": "", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "",
"Default Model": "النموذج الافتراضي", "Default Model": "النموذج الافتراضي",
"Default model updated": "الإفتراضي تحديث الموديل", "Default model updated": "الإفتراضي تحديث الموديل",
"Default Models": "", "Default Models": "",
@@ -442,6 +446,7 @@
"Enter Chunk Overlap": "أدخل الChunk Overlap", "Enter Chunk Overlap": "أدخل الChunk Overlap",
"Enter Chunk Size": "أدخل Chunk الحجم", "Enter Chunk Size": "أدخل Chunk الحجم",
"Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "", "Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter Config in JSON format": "",
"Enter content for the pending user info overlay. Leave empty for default.": "", "Enter content for the pending user info overlay. Leave empty for default.": "",
"Enter Datalab Marker API Key": "", "Enter Datalab Marker API Key": "",
"Enter description": "", "Enter description": "",
@@ -666,6 +671,7 @@
"Hex Color": "", "Hex Color": "",
"Hex Color - Leave empty for default color": "", "Hex Color - Leave empty for default color": "",
"Hide": "أخفاء", "Hide": "أخفاء",
"Hide from Sidebar": "",
"Hide Model": "", "Hide Model": "",
"High Contrast Mode": "", "High Contrast Mode": "",
"Home": "", "Home": "",
@@ -714,7 +720,6 @@
"Invalid file content": "", "Invalid file content": "",
"Invalid file format.": "", "Invalid file format.": "",
"Invalid JSON file": "", "Invalid JSON file": "",
"Invalid JSON schema": "",
"Invalid Tag": "تاق غير صالحة", "Invalid Tag": "تاق غير صالحة",
"is typing...": "", "is typing...": "",
"January": "يناير", "January": "يناير",
@@ -729,7 +734,7 @@
"JWT Expiration": "JWT تجريبي", "JWT Expiration": "JWT تجريبي",
"JWT Token": "JWT Token", "JWT Token": "JWT Token",
"Kagi Search API Key": "", "Kagi Search API Key": "",
"Keep Alive": "Keep Alive", "Keep in Sidebar": "",
"Key": "", "Key": "",
"Keyboard shortcuts": "اختصارات لوحة المفاتيح", "Keyboard shortcuts": "اختصارات لوحة المفاتيح",
"Knowledge": "", "Knowledge": "",
@@ -849,6 +854,7 @@
"New Password": "كلمة المرور الجديدة", "New Password": "كلمة المرور الجديدة",
"New Tool": "", "New Tool": "",
"new-channel": "", "new-channel": "",
"Next message": "",
"No chats found for this user.": "", "No chats found for this user.": "",
"No chats found.": "", "No chats found.": "",
"No content": "", "No content": "",
@@ -916,6 +922,7 @@
"OpenAI API settings updated": "", "OpenAI API settings updated": "",
"OpenAI URL/Key required.": "URL/مفتاح OpenAI.مطلوب عنوان ", "OpenAI URL/Key required.": "URL/مفتاح OpenAI.مطلوب عنوان ",
"openapi.json URL or Path": "", "openapi.json URL or Path": "",
"Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.": "",
"or": "أو", "or": "أو",
"Organize your users": "", "Organize your users": "",
"Other": "آخر", "Other": "آخر",
@@ -939,7 +946,12 @@
"Permission denied when accessing microphone: {{error}}": "{{error}} تم رفض الإذن عند الوصول إلى الميكروفون ", "Permission denied when accessing microphone: {{error}}": "{{error}} تم رفض الإذن عند الوصول إلى الميكروفون ",
"Permissions": "", "Permissions": "",
"Perplexity API Key": "", "Perplexity API Key": "",
"Perplexity Model": "",
"Perplexity Search Context Usage": "",
"Personalization": "التخصيص", "Personalization": "التخصيص",
"Picture Description API Config": "",
"Picture Description Local Config": "",
"Picture Description Mode": "",
"Pin": "", "Pin": "",
"Pinned": "", "Pinned": "",
"Pioneer insights": "", "Pioneer insights": "",
@@ -969,6 +981,7 @@
"Preview": "", "Preview": "",
"Previous 30 days": "أخر 30 يوم", "Previous 30 days": "أخر 30 يوم",
"Previous 7 days": "أخر 7 أيام", "Previous 7 days": "أخر 7 أيام",
"Previous message": "",
"Private": "", "Private": "",
"Profile Image": "صورة الملف الشخصي", "Profile Image": "صورة الملف الشخصي",
"Prompt": "", "Prompt": "",
@@ -1010,7 +1023,6 @@
"Rename": "إعادة تسمية", "Rename": "إعادة تسمية",
"Reorder Models": "", "Reorder Models": "",
"Reply in Thread": "", "Reply in Thread": "",
"Request Mode": "وضع الطلب",
"Reranking Engine": "", "Reranking Engine": "",
"Reranking Model": "إعادة تقييم النموذج", "Reranking Model": "إعادة تقييم النموذج",
"Reset": "", "Reset": "",
@@ -1182,6 +1194,7 @@
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The format to return a response in. Format can be json or a JSON schema.": "",
"The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "", "The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
"The LDAP attribute that maps to the username that users use to sign in.": "", "The LDAP attribute that maps to the username that users use to sign in.": "",
@@ -1195,11 +1208,14 @@
"Thinking...": "", "Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "", "This action cannot be undone. Do you wish to continue?": "",
"This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "", "This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "",
"This chat won't appear in history and your messages will not be saved.": "",
"This chat wont appear in history and your messages will not be saved.": "", "This chat wont appear in history and your messages will not be saved.": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "وهذا يضمن حفظ محادثاتك القيمة بشكل آمن في قاعدة بياناتك الخلفية. شكرًا لك!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "وهذا يضمن حفظ محادثاتك القيمة بشكل آمن في قاعدة بياناتك الخلفية. شكرًا لك!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This model is not publicly available. Please select another model.": "", "This model is not publicly available. Please select another model.": "",
"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
@@ -1345,7 +1361,7 @@
"Weight of BM25 Retrieval": "", "Weight of BM25 Retrieval": "",
"What are you trying to achieve?": "", "What are you trying to achieve?": "",
"What are you working on?": "", "What are you working on?": "",
"Whats New in": "ما هو الجديد", "What's New in": "ما هو الجديد",
"When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "", "When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "",
"wherever you are": "", "wherever you are": "",
"Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "", "Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "",

View file

@@ -90,7 +90,9 @@
"and {{COUNT}} more": "و{{COUNT}} المزيد", "and {{COUNT}} more": "و{{COUNT}} المزيد",
"and create a new shared link.": "وإنشاء رابط مشترك جديد.", "and create a new shared link.": "وإنشاء رابط مشترك جديد.",
"Android": "", "Android": "",
"API": "",
"API Base URL": "الرابط الأساسي لواجهة API", "API Base URL": "الرابط الأساسي لواجهة API",
"API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.": "",
"API Key": "مفتاح واجهة برمجة التطبيقات (API)", "API Key": "مفتاح واجهة برمجة التطبيقات (API)",
"API Key created.": "تم إنشاء مفتاح واجهة API.", "API Key created.": "تم إنشاء مفتاح واجهة API.",
"API Key Endpoint Restrictions": "قيود نقاط نهاية مفتاح API", "API Key Endpoint Restrictions": "قيود نقاط نهاية مفتاح API",
@@ -207,6 +209,8 @@
"Clone Chat": "استنساخ المحادثة", "Clone Chat": "استنساخ المحادثة",
"Clone of {{TITLE}}": "استنساخ لـ {{TITLE}}", "Clone of {{TITLE}}": "استنساخ لـ {{TITLE}}",
"Close": "إغلاق", "Close": "إغلاق",
"Close modal": "",
"Close settings modal": "",
"Code execution": "تنفيذ الشيفرة", "Code execution": "تنفيذ الشيفرة",
"Code Execution": "تنفيذ الشيفرة", "Code Execution": "تنفيذ الشيفرة",
"Code Execution Engine": "محرك تنفيذ الشيفرة", "Code Execution Engine": "محرك تنفيذ الشيفرة",
@@ -294,7 +298,7 @@
"Default": "افتراضي", "Default": "افتراضي",
"Default (Open AI)": "افتراضي (Open AI)", "Default (Open AI)": "افتراضي (Open AI)",
"Default (SentenceTransformers)": "افتراضي (SentenceTransformers)", "Default (SentenceTransformers)": "افتراضي (SentenceTransformers)",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the models built-in tool-calling capabilities, but requires the model to inherently support this feature.": "الوضع الافتراضي يعمل مع مجموعة أوسع من النماذج من خلال استدعاء الأدوات مرة واحدة قبل التنفيذ. أما الوضع الأصلي فيستخدم قدرات استدعاء الأدوات المدمجة في النموذج، لكنه يتطلب دعمًا داخليًا لهذه الميزة.", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "الوضع الافتراضي يعمل مع مجموعة أوسع من النماذج من خلال استدعاء الأدوات مرة واحدة قبل التنفيذ. أما الوضع الأصلي فيستخدم قدرات استدعاء الأدوات المدمجة في النموذج، لكنه يتطلب دعمًا داخليًا لهذه الميزة.",
"Default Model": "النموذج الافتراضي", "Default Model": "النموذج الافتراضي",
"Default model updated": "الإفتراضي تحديث الموديل", "Default model updated": "الإفتراضي تحديث الموديل",
"Default Models": "النماذج الافتراضية", "Default Models": "النماذج الافتراضية",
@@ -442,6 +446,7 @@
"Enter Chunk Overlap": "أدخل الChunk Overlap", "Enter Chunk Overlap": "أدخل الChunk Overlap",
"Enter Chunk Size": "أدخل Chunk الحجم", "Enter Chunk Size": "أدخل Chunk الحجم",
"Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "أدخل أزواج \"الرمز:قيمة التحيز\" مفصولة بفواصل (مثال: 5432:100، 413:-100)", "Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "أدخل أزواج \"الرمز:قيمة التحيز\" مفصولة بفواصل (مثال: 5432:100، 413:-100)",
"Enter Config in JSON format": "",
"Enter content for the pending user info overlay. Leave empty for default.": "", "Enter content for the pending user info overlay. Leave empty for default.": "",
"Enter Datalab Marker API Key": "", "Enter Datalab Marker API Key": "",
"Enter description": "أدخل الوصف", "Enter description": "أدخل الوصف",
@@ -666,6 +671,7 @@
"Hex Color": "لون سداسي", "Hex Color": "لون سداسي",
"Hex Color - Leave empty for default color": "اللون السداسي - اتركه فارغًا لاستخدام اللون الافتراضي", "Hex Color - Leave empty for default color": "اللون السداسي - اتركه فارغًا لاستخدام اللون الافتراضي",
"Hide": "أخفاء", "Hide": "أخفاء",
"Hide from Sidebar": "",
"Hide Model": "", "Hide Model": "",
"High Contrast Mode": "", "High Contrast Mode": "",
"Home": "الصفحة الرئيسية", "Home": "الصفحة الرئيسية",
@@ -714,7 +720,6 @@
"Invalid file content": "", "Invalid file content": "",
"Invalid file format.": "تنسيق ملف غير صالح.", "Invalid file format.": "تنسيق ملف غير صالح.",
"Invalid JSON file": "", "Invalid JSON file": "",
"Invalid JSON schema": "",
"Invalid Tag": "تاق غير صالحة", "Invalid Tag": "تاق غير صالحة",
"is typing...": "يكتب...", "is typing...": "يكتب...",
"January": "يناير", "January": "يناير",
@@ -729,7 +734,7 @@
"JWT Expiration": "JWT تجريبي", "JWT Expiration": "JWT تجريبي",
"JWT Token": "JWT Token", "JWT Token": "JWT Token",
"Kagi Search API Key": "مفتاح API لـ Kagi Search", "Kagi Search API Key": "مفتاح API لـ Kagi Search",
"Keep Alive": "Keep Alive", "Keep in Sidebar": "",
"Key": "المفتاح", "Key": "المفتاح",
"Keyboard shortcuts": "اختصارات لوحة المفاتيح", "Keyboard shortcuts": "اختصارات لوحة المفاتيح",
"Knowledge": "المعرفة", "Knowledge": "المعرفة",
@@ -849,6 +854,7 @@
"New Password": "كلمة المرور الجديدة", "New Password": "كلمة المرور الجديدة",
"New Tool": "", "New Tool": "",
"new-channel": "قناة جديدة", "new-channel": "قناة جديدة",
"Next message": "",
"No chats found for this user.": "", "No chats found for this user.": "",
"No chats found.": "", "No chats found.": "",
"No content": "", "No content": "",
@@ -916,6 +922,7 @@
"OpenAI API settings updated": "تم تحديث إعدادات OpenAI API", "OpenAI API settings updated": "تم تحديث إعدادات OpenAI API",
"OpenAI URL/Key required.": "URL/مفتاح OpenAI.مطلوب عنوان ", "OpenAI URL/Key required.": "URL/مفتاح OpenAI.مطلوب عنوان ",
"openapi.json URL or Path": "", "openapi.json URL or Path": "",
"Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.": "",
"or": "أو", "or": "أو",
"Organize your users": "تنظيم المستخدمين الخاصين بك", "Organize your users": "تنظيم المستخدمين الخاصين بك",
"Other": "آخر", "Other": "آخر",
@@ -939,7 +946,12 @@
"Permission denied when accessing microphone: {{error}}": "{{error}} تم رفض الإذن عند الوصول إلى الميكروفون ", "Permission denied when accessing microphone: {{error}}": "{{error}} تم رفض الإذن عند الوصول إلى الميكروفون ",
"Permissions": "الأذونات", "Permissions": "الأذونات",
"Perplexity API Key": "مفتاح API لـ Perplexity", "Perplexity API Key": "مفتاح API لـ Perplexity",
"Perplexity Model": "",
"Perplexity Search Context Usage": "",
"Personalization": "التخصيص", "Personalization": "التخصيص",
"Picture Description API Config": "",
"Picture Description Local Config": "",
"Picture Description Mode": "",
"Pin": "تثبيت", "Pin": "تثبيت",
"Pinned": "مثبت", "Pinned": "مثبت",
"Pioneer insights": "رؤى رائدة", "Pioneer insights": "رؤى رائدة",
@@ -969,6 +981,7 @@
"Preview": "", "Preview": "",
"Previous 30 days": "أخر 30 يوم", "Previous 30 days": "أخر 30 يوم",
"Previous 7 days": "أخر 7 أيام", "Previous 7 days": "أخر 7 أيام",
"Previous message": "",
"Private": "", "Private": "",
"Profile Image": "صورة الملف الشخصي", "Profile Image": "صورة الملف الشخصي",
"Prompt": "التوجيه", "Prompt": "التوجيه",
@@ -1010,7 +1023,6 @@
"Rename": "إعادة تسمية", "Rename": "إعادة تسمية",
"Reorder Models": "إعادة ترتيب النماذج", "Reorder Models": "إعادة ترتيب النماذج",
"Reply in Thread": "الرد داخل سلسلة الرسائل", "Reply in Thread": "الرد داخل سلسلة الرسائل",
"Request Mode": "وضع الطلب",
"Reranking Engine": "", "Reranking Engine": "",
"Reranking Model": "إعادة تقييم النموذج", "Reranking Model": "إعادة تقييم النموذج",
"Reset": "إعادة تعيين", "Reset": "إعادة تعيين",
@@ -1182,6 +1194,7 @@
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "يحدد حجم الدفعة عدد طلبات النصوص التي تتم معالجتها معًا. الحجم الأكبر يمكن أن يزيد الأداء والسرعة، ولكنه يحتاج أيضًا إلى ذاكرة أكبر.", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "يحدد حجم الدفعة عدد طلبات النصوص التي تتم معالجتها معًا. الحجم الأكبر يمكن أن يزيد الأداء والسرعة، ولكنه يحتاج أيضًا إلى ذاكرة أكبر.",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "المطورون خلف هذا المكون الإضافي هم متطوعون شغوفون من المجتمع. إذا وجدت هذا المكون مفيدًا، فكر في المساهمة في تطويره.", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "المطورون خلف هذا المكون الإضافي هم متطوعون شغوفون من المجتمع. إذا وجدت هذا المكون مفيدًا، فكر في المساهمة في تطويره.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "قائمة التقييم تعتمد على نظام Elo ويتم تحديثها في الوقت الفعلي.", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "قائمة التقييم تعتمد على نظام Elo ويتم تحديثها في الوقت الفعلي.",
"The format to return a response in. Format can be json or a JSON schema.": "",
"The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "", "The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "السمة LDAP التي تتوافق مع البريد الإلكتروني الذي يستخدمه المستخدمون لتسجيل الدخول.", "The LDAP attribute that maps to the mail that users use to sign in.": "السمة LDAP التي تتوافق مع البريد الإلكتروني الذي يستخدمه المستخدمون لتسجيل الدخول.",
"The LDAP attribute that maps to the username that users use to sign in.": "السمة LDAP التي تتوافق مع اسم المستخدم الذي يستخدمه المستخدمون لتسجيل الدخول.", "The LDAP attribute that maps to the username that users use to sign in.": "السمة LDAP التي تتوافق مع اسم المستخدم الذي يستخدمه المستخدمون لتسجيل الدخول.",
@@ -1195,11 +1208,14 @@
"Thinking...": "جارٍ التفكير...", "Thinking...": "جارٍ التفكير...",
"This action cannot be undone. Do you wish to continue?": "لا يمكن التراجع عن هذا الإجراء. هل ترغب في المتابعة؟", "This action cannot be undone. Do you wish to continue?": "لا يمكن التراجع عن هذا الإجراء. هل ترغب في المتابعة؟",
"This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "", "This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "",
"This chat won't appear in history and your messages will not be saved.": "",
"This chat wont appear in history and your messages will not be saved.": "", "This chat wont appear in history and your messages will not be saved.": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "وهذا يضمن حفظ محادثاتك القيمة بشكل آمن في قاعدة بياناتك الخلفية. شكرًا لك!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "وهذا يضمن حفظ محادثاتك القيمة بشكل آمن في قاعدة بياناتك الخلفية. شكرًا لك!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "هذه ميزة تجريبية، وقد لا تعمل كما هو متوقع وقد تتغير في أي وقت.", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "هذه ميزة تجريبية، وقد لا تعمل كما هو متوقع وقد تتغير في أي وقت.",
"This model is not publicly available. Please select another model.": "", "This model is not publicly available. Please select another model.": "",
"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "هذا الخيار يحدد عدد الرموز التي يتم الاحتفاظ بها عند تحديث السياق. مثلاً، إذا تم ضبطه على 2، سيتم الاحتفاظ بآخر رمزين من السياق. الحفاظ على السياق يساعد في استمرارية المحادثة، لكنه قد يحد من التفاعل مع مواضيع جديدة.", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "هذا الخيار يحدد عدد الرموز التي يتم الاحتفاظ بها عند تحديث السياق. مثلاً، إذا تم ضبطه على 2، سيتم الاحتفاظ بآخر رمزين من السياق. الحفاظ على السياق يساعد في استمرارية المحادثة، لكنه قد يحد من التفاعل مع مواضيع جديدة.",
"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "يحدد هذا الخيار الحد الأقصى لعدد الرموز التي يمكن للنموذج توليدها في الرد. زيادته تتيح للنموذج تقديم إجابات أطول، لكنها قد تزيد من احتمالية توليد محتوى غير مفيد أو غير ذي صلة.", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "يحدد هذا الخيار الحد الأقصى لعدد الرموز التي يمكن للنموذج توليدها في الرد. زيادته تتيح للنموذج تقديم إجابات أطول، لكنها قد تزيد من احتمالية توليد محتوى غير مفيد أو غير ذي صلة.",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "سيؤدي هذا الخيار إلى حذف جميع الملفات الحالية في المجموعة واستبدالها بالملفات التي تم تحميلها حديثًا.", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "سيؤدي هذا الخيار إلى حذف جميع الملفات الحالية في المجموعة واستبدالها بالملفات التي تم تحميلها حديثًا.",
"This response was generated by \"{{model}}\"": "تم توليد هذا الرد بواسطة \"{{model}}\"", "This response was generated by \"{{model}}\"": "تم توليد هذا الرد بواسطة \"{{model}}\"",
@@ -1345,7 +1361,7 @@
"Weight of BM25 Retrieval": "", "Weight of BM25 Retrieval": "",
"What are you trying to achieve?": "ما الذي تحاول تحقيقه؟", "What are you trying to achieve?": "ما الذي تحاول تحقيقه؟",
"What are you working on?": "على ماذا تعمل؟", "What are you working on?": "على ماذا تعمل؟",
"Whats New in": "ما هو الجديد", "What's New in": "ما هو الجديد",
"When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "عند التفعيل، سيستجيب النموذج لكل رسالة في المحادثة بشكل فوري، مولدًا الرد بمجرد إرسال المستخدم لرسالته. هذا الوضع مفيد لتطبيقات الدردشة الحية، لكنه قد يؤثر على الأداء في الأجهزة الأبطأ.", "When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "عند التفعيل، سيستجيب النموذج لكل رسالة في المحادثة بشكل فوري، مولدًا الرد بمجرد إرسال المستخدم لرسالته. هذا الوضع مفيد لتطبيقات الدردشة الحية، لكنه قد يؤثر على الأداء في الأجهزة الأبطأ.",
"wherever you are": "أينما كنت", "wherever you are": "أينما كنت",
"Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "", "Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "",

View file

@@ -90,7 +90,9 @@
"and {{COUNT}} more": "и още {{COUNT}}", "and {{COUNT}} more": "и още {{COUNT}}",
"and create a new shared link.": "и създай нов общ линк.", "and create a new shared link.": "и създай нов общ линк.",
"Android": "", "Android": "",
"API": "",
"API Base URL": "API Базов URL", "API Base URL": "API Базов URL",
"API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.": "",
"API Key": "API Ключ", "API Key": "API Ключ",
"API Key created.": "API Ключ създаден.", "API Key created.": "API Ключ създаден.",
"API Key Endpoint Restrictions": "Ограничения на крайните точки за API Ключ", "API Key Endpoint Restrictions": "Ограничения на крайните точки за API Ключ",
@@ -207,6 +209,8 @@
"Clone Chat": "Клониране на чат", "Clone Chat": "Клониране на чат",
"Clone of {{TITLE}}": "Клонинг на {{TITLE}}", "Clone of {{TITLE}}": "Клонинг на {{TITLE}}",
"Close": "Затвори", "Close": "Затвори",
"Close modal": "",
"Close settings modal": "",
"Code execution": "Изпълнение на код", "Code execution": "Изпълнение на код",
"Code Execution": "Изпълнение на код", "Code Execution": "Изпълнение на код",
"Code Execution Engine": "Двигател за изпълнение на кода", "Code Execution Engine": "Двигател за изпълнение на кода",
@@ -294,7 +298,7 @@
"Default": "По подразбиране", "Default": "По подразбиране",
"Default (Open AI)": "По подразбиране (Open AI)", "Default (Open AI)": "По подразбиране (Open AI)",
"Default (SentenceTransformers)": "По подразбиране (SentenceTransformers)", "Default (SentenceTransformers)": "По подразбиране (SentenceTransformers)",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the models built-in tool-calling capabilities, but requires the model to inherently support this feature.": "Режимът по подразбиране работи с по-широк набор от модели, като извиква инструменти веднъж преди изпълнение. Нативният режим използва вградените възможности за извикване на инструменти на модела, но изисква моделът да поддържа тази функция по същество.", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "Режимът по подразбиране работи с по-широк набор от модели, като извиква инструменти веднъж преди изпълнение. Нативният режим използва вградените възможности за извикване на инструменти на модела, но изисква моделът да поддържа тази функция по същество.",
"Default Model": "Модел по подразбиране", "Default Model": "Модел по подразбиране",
"Default model updated": "Моделът по подразбиране е обновен", "Default model updated": "Моделът по подразбиране е обновен",
"Default Models": "Модели по подразбиране", "Default Models": "Модели по подразбиране",
@@ -442,6 +446,7 @@
"Enter Chunk Overlap": "Въведете припокриване на чънкове", "Enter Chunk Overlap": "Въведете припокриване на чънкове",
"Enter Chunk Size": "Въведете размер на чънк", "Enter Chunk Size": "Въведете размер на чънк",
"Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "", "Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter Config in JSON format": "",
"Enter content for the pending user info overlay. Leave empty for default.": "", "Enter content for the pending user info overlay. Leave empty for default.": "",
"Enter Datalab Marker API Key": "", "Enter Datalab Marker API Key": "",
"Enter description": "Въведете описание", "Enter description": "Въведете описание",
@@ -666,6 +671,7 @@
"Hex Color": "Hex цвят", "Hex Color": "Hex цвят",
"Hex Color - Leave empty for default color": "Hex цвят - Оставете празно за цвят по подразбиране", "Hex Color - Leave empty for default color": "Hex цвят - Оставете празно за цвят по подразбиране",
"Hide": "Скрий", "Hide": "Скрий",
"Hide from Sidebar": "",
"Hide Model": "", "Hide Model": "",
"High Contrast Mode": "", "High Contrast Mode": "",
"Home": "Начало", "Home": "Начало",
@@ -714,7 +720,6 @@
"Invalid file content": "", "Invalid file content": "",
"Invalid file format.": "Невалиден формат на файла.", "Invalid file format.": "Невалиден формат на файла.",
"Invalid JSON file": "", "Invalid JSON file": "",
"Invalid JSON schema": "",
"Invalid Tag": "Невалиден таг", "Invalid Tag": "Невалиден таг",
"is typing...": "пише...", "is typing...": "пише...",
"January": "Януари", "January": "Януари",
@@ -729,7 +734,7 @@
"JWT Expiration": "JWT изтичане", "JWT Expiration": "JWT изтичане",
"JWT Token": "JWT токен", "JWT Token": "JWT токен",
"Kagi Search API Key": "API ключ за Kagi Search", "Kagi Search API Key": "API ключ за Kagi Search",
"Keep Alive": "Поддържай активен", "Keep in Sidebar": "",
"Key": "Ключ", "Key": "Ключ",
"Keyboard shortcuts": "Клавиши за бърз достъп", "Keyboard shortcuts": "Клавиши за бърз достъп",
"Knowledge": "Знания", "Knowledge": "Знания",
@@ -849,6 +854,7 @@
"New Password": "Нова парола", "New Password": "Нова парола",
"New Tool": "", "New Tool": "",
"new-channel": "нов-канал", "new-channel": "нов-канал",
"Next message": "",
"No chats found for this user.": "", "No chats found for this user.": "",
"No chats found.": "", "No chats found.": "",
"No content": "Без съдържание", "No content": "Без съдържание",
@@ -916,6 +922,7 @@
"OpenAI API settings updated": "Настройките на OpenAI API са актуализирани", "OpenAI API settings updated": "Настройките на OpenAI API са актуализирани",
"OpenAI URL/Key required.": "OpenAI URL/Key е задължителен.", "OpenAI URL/Key required.": "OpenAI URL/Key е задължителен.",
"openapi.json URL or Path": "", "openapi.json URL or Path": "",
"Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.": "",
"or": "или", "or": "или",
"Organize your users": "Организирайте вашите потребители", "Organize your users": "Организирайте вашите потребители",
"Other": "Друго", "Other": "Друго",
@@ -939,7 +946,12 @@
"Permission denied when accessing microphone: {{error}}": "Отказан достъп при опит за достъп до микрофона: {{error}}", "Permission denied when accessing microphone: {{error}}": "Отказан достъп при опит за достъп до микрофона: {{error}}",
"Permissions": "Разрешения", "Permissions": "Разрешения",
"Perplexity API Key": "", "Perplexity API Key": "",
"Perplexity Model": "",
"Perplexity Search Context Usage": "",
"Personalization": "Персонализация", "Personalization": "Персонализация",
"Picture Description API Config": "",
"Picture Description Local Config": "",
"Picture Description Mode": "",
"Pin": "Закачи", "Pin": "Закачи",
"Pinned": "Закачено", "Pinned": "Закачено",
"Pioneer insights": "Пионерски прозрения", "Pioneer insights": "Пионерски прозрения",
@@ -969,6 +981,7 @@
"Preview": "", "Preview": "",
"Previous 30 days": "Предишните 30 дни", "Previous 30 days": "Предишните 30 дни",
"Previous 7 days": "Предишните 7 дни", "Previous 7 days": "Предишните 7 дни",
"Previous message": "",
"Private": "", "Private": "",
"Profile Image": "Профилна снимка", "Profile Image": "Профилна снимка",
"Prompt": "Промпт", "Prompt": "Промпт",
@@ -1010,7 +1023,6 @@
"Rename": "Преименуване", "Rename": "Преименуване",
"Reorder Models": "Преорганизиране на моделите", "Reorder Models": "Преорганизиране на моделите",
"Reply in Thread": "Отговори в тред", "Reply in Thread": "Отговори в тред",
"Request Mode": "Режим на заявка",
"Reranking Engine": "Двигател за пренареждане", "Reranking Engine": "Двигател за пренареждане",
"Reranking Model": "Модел за преподреждане", "Reranking Model": "Модел за преподреждане",
"Reset": "Нулиране", "Reset": "Нулиране",
@@ -1182,6 +1194,7 @@
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Разработчиците зад този плъгин са страстни доброволци от общността. Ако намирате този плъгин полезен, моля, обмислете да допринесете за неговото развитие.", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Разработчиците зад този плъгин са страстни доброволци от общността. Ако намирате този плъгин полезен, моля, обмислете да допринесете за неговото развитие.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Класацията за оценка се базира на рейтинговата система Elo и се обновява в реално време.", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Класацията за оценка се базира на рейтинговата система Elo и се обновява в реално време.",
"The format to return a response in. Format can be json or a JSON schema.": "",
"The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "", "The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "LDAP атрибутът, който съответства на имейла, който потребителите използват за вписване.", "The LDAP attribute that maps to the mail that users use to sign in.": "LDAP атрибутът, който съответства на имейла, който потребителите използват за вписване.",
"The LDAP attribute that maps to the username that users use to sign in.": "LDAP атрибутът, който съответства на потребителското име, което потребителите използват за вписване.", "The LDAP attribute that maps to the username that users use to sign in.": "LDAP атрибутът, който съответства на потребителското име, което потребителите използват за вписване.",
@@ -1195,11 +1208,14 @@
"Thinking...": "Мисля...", "Thinking...": "Мисля...",
"This action cannot be undone. Do you wish to continue?": "Това действие не може да бъде отменено. Желаете ли да продължите?", "This action cannot be undone. Do you wish to continue?": "Това действие не може да бъде отменено. Желаете ли да продължите?",
"This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "", "This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "",
"This chat won't appear in history and your messages will not be saved.": "",
"This chat wont appear in history and your messages will not be saved.": "", "This chat wont appear in history and your messages will not be saved.": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Това гарантира, че ценните ви разговори се запазват сигурно във вашата бекенд база данни. Благодарим ви!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Това гарантира, че ценните ви разговори се запазват сигурно във вашата бекенд база данни. Благодарим ви!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Това е експериментална функция, може да не работи според очакванията и подлежи на промяна по всяко време.", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Това е експериментална функция, може да не работи според очакванията и подлежи на промяна по всяко време.",
"This model is not publicly available. Please select another model.": "", "This model is not publicly available. Please select another model.": "",
"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Тази опция ще изтрие всички съществуващи файлове в колекцията и ще ги замени с новокачени файлове.", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "Тази опция ще изтрие всички съществуващи файлове в колекцията и ще ги замени с новокачени файлове.",
"This response was generated by \"{{model}}\"": "Този отговор беше генериран от \"{{model}}\"", "This response was generated by \"{{model}}\"": "Този отговор беше генериран от \"{{model}}\"",
@@ -1345,7 +1361,7 @@
"Weight of BM25 Retrieval": "", "Weight of BM25 Retrieval": "",
"What are you trying to achieve?": "Какво се опитвате да постигнете?", "What are you trying to achieve?": "Какво се опитвате да постигнете?",
"What are you working on?": "Върху какво работите?", "What are you working on?": "Върху какво работите?",
"Whats New in": "Какво е ново в", "What's New in": "Какво е ново в",
"When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "Когато е активирано, моделът ще отговаря на всяко съобщение в чата в реално време, генерирайки отговор веднага щом потребителят изпрати съобщение. Този режим е полезен за приложения за чат на живо, но може да повлияе на производителността на по-бавен хардуер.", "When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "Когато е активирано, моделът ще отговаря на всяко съобщение в чата в реално време, генерирайки отговор веднага щом потребителят изпрати съобщение. Този режим е полезен за приложения за чат на живо, но може да повлияе на производителността на по-бавен хардуер.",
"wherever you are": "където и да сте", "wherever you are": "където и да сте",
"Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "", "Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "",

View file

@@ -90,7 +90,9 @@
"and {{COUNT}} more": "", "and {{COUNT}} more": "",
"and create a new shared link.": "এবং একটি নতুন শেয়ারে লিংক তৈরি করুন.", "and create a new shared link.": "এবং একটি নতুন শেয়ারে লিংক তৈরি করুন.",
"Android": "", "Android": "",
"API": "",
"API Base URL": "এপিআই বেজ ইউআরএল", "API Base URL": "এপিআই বেজ ইউআরএল",
"API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.": "",
"API Key": "এপিআই কোড", "API Key": "এপিআই কোড",
"API Key created.": "একটি এপিআই কোড তৈরি করা হয়েছে.", "API Key created.": "একটি এপিআই কোড তৈরি করা হয়েছে.",
"API Key Endpoint Restrictions": "", "API Key Endpoint Restrictions": "",
@@ -207,6 +209,8 @@
"Clone Chat": "", "Clone Chat": "",
"Clone of {{TITLE}}": "", "Clone of {{TITLE}}": "",
"Close": "বন্ধ", "Close": "বন্ধ",
"Close modal": "",
"Close settings modal": "",
"Code execution": "", "Code execution": "",
"Code Execution": "", "Code Execution": "",
"Code Execution Engine": "", "Code Execution Engine": "",
@@ -294,7 +298,7 @@
"Default": "ডিফল্ট", "Default": "ডিফল্ট",
"Default (Open AI)": "", "Default (Open AI)": "",
"Default (SentenceTransformers)": "ডিফল্ট (SentenceTransformers)", "Default (SentenceTransformers)": "ডিফল্ট (SentenceTransformers)",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the models built-in tool-calling capabilities, but requires the model to inherently support this feature.": "", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "",
"Default Model": "ডিফল্ট মডেল", "Default Model": "ডিফল্ট মডেল",
"Default model updated": "ডিফল্ট মডেল আপডেট হয়েছে", "Default model updated": "ডিফল্ট মডেল আপডেট হয়েছে",
"Default Models": "", "Default Models": "",
@@ -442,6 +446,7 @@
"Enter Chunk Overlap": "চাঙ্ক ওভারল্যাপ লিখুন", "Enter Chunk Overlap": "চাঙ্ক ওভারল্যাপ লিখুন",
"Enter Chunk Size": "চাংক সাইজ লিখুন", "Enter Chunk Size": "চাংক সাইজ লিখুন",
"Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "", "Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter Config in JSON format": "",
"Enter content for the pending user info overlay. Leave empty for default.": "", "Enter content for the pending user info overlay. Leave empty for default.": "",
"Enter Datalab Marker API Key": "", "Enter Datalab Marker API Key": "",
"Enter description": "", "Enter description": "",
@@ -666,6 +671,7 @@
"Hex Color": "", "Hex Color": "",
"Hex Color - Leave empty for default color": "", "Hex Color - Leave empty for default color": "",
"Hide": "লুকান", "Hide": "লুকান",
"Hide from Sidebar": "",
"Hide Model": "", "Hide Model": "",
"High Contrast Mode": "", "High Contrast Mode": "",
"Home": "", "Home": "",
@@ -714,7 +720,6 @@
"Invalid file content": "", "Invalid file content": "",
"Invalid file format.": "", "Invalid file format.": "",
"Invalid JSON file": "", "Invalid JSON file": "",
"Invalid JSON schema": "",
"Invalid Tag": "অবৈধ ট্যাগ", "Invalid Tag": "অবৈধ ট্যাগ",
"is typing...": "", "is typing...": "",
"January": "জানুয়ারী", "January": "জানুয়ারী",
@@ -729,7 +734,7 @@
"JWT Expiration": "JWT-র মেয়াদ", "JWT Expiration": "JWT-র মেয়াদ",
"JWT Token": "JWT টোকেন", "JWT Token": "JWT টোকেন",
"Kagi Search API Key": "", "Kagi Search API Key": "",
"Keep Alive": "সচল রাখুন", "Keep in Sidebar": "",
"Key": "", "Key": "",
"Keyboard shortcuts": "কিবোর্ড শর্টকাটসমূহ", "Keyboard shortcuts": "কিবোর্ড শর্টকাটসমূহ",
"Knowledge": "", "Knowledge": "",
@@ -849,6 +854,7 @@
"New Password": "নতুন পাসওয়ার্ড", "New Password": "নতুন পাসওয়ার্ড",
"New Tool": "", "New Tool": "",
"new-channel": "", "new-channel": "",
"Next message": "",
"No chats found for this user.": "", "No chats found for this user.": "",
"No chats found.": "", "No chats found.": "",
"No content": "", "No content": "",
@@ -916,6 +922,7 @@
"OpenAI API settings updated": "", "OpenAI API settings updated": "",
"OpenAI URL/Key required.": "OpenAI URL/Key আবশ্যক", "OpenAI URL/Key required.": "OpenAI URL/Key আবশ্যক",
"openapi.json URL or Path": "", "openapi.json URL or Path": "",
"Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.": "",
"or": "অথবা", "or": "অথবা",
"Organize your users": "", "Organize your users": "",
"Other": "অন্যান্য", "Other": "অন্যান্য",
@@ -939,7 +946,12 @@
"Permission denied when accessing microphone: {{error}}": "মাইক্রোফোন ব্যবহারের অনুমতি পাওয়া যায়নি: {{error}}", "Permission denied when accessing microphone: {{error}}": "মাইক্রোফোন ব্যবহারের অনুমতি পাওয়া যায়নি: {{error}}",
"Permissions": "", "Permissions": "",
"Perplexity API Key": "", "Perplexity API Key": "",
"Perplexity Model": "",
"Perplexity Search Context Usage": "",
"Personalization": "ডিজিটাল বাংলা", "Personalization": "ডিজিটাল বাংলা",
"Picture Description API Config": "",
"Picture Description Local Config": "",
"Picture Description Mode": "",
"Pin": "", "Pin": "",
"Pinned": "", "Pinned": "",
"Pioneer insights": "", "Pioneer insights": "",
@@ -969,6 +981,7 @@
"Preview": "", "Preview": "",
"Previous 30 days": "পূর্ব ৩০ দিন", "Previous 30 days": "পূর্ব ৩০ দিন",
"Previous 7 days": "পূর্ব দিন", "Previous 7 days": "পূর্ব দিন",
"Previous message": "",
"Private": "", "Private": "",
"Profile Image": "প্রোফাইল ইমেজ", "Profile Image": "প্রোফাইল ইমেজ",
"Prompt": "", "Prompt": "",
@@ -1010,7 +1023,6 @@
"Rename": "রেনেম", "Rename": "রেনেম",
"Reorder Models": "", "Reorder Models": "",
"Reply in Thread": "", "Reply in Thread": "",
"Request Mode": "রিকোয়েস্ট মোড",
"Reranking Engine": "", "Reranking Engine": "",
"Reranking Model": "রির্যাক্টিং মডেল", "Reranking Model": "রির্যাক্টিং মডেল",
"Reset": "", "Reset": "",
@@ -1182,6 +1194,7 @@
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The format to return a response in. Format can be json or a JSON schema.": "",
"The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "", "The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
"The LDAP attribute that maps to the username that users use to sign in.": "", "The LDAP attribute that maps to the username that users use to sign in.": "",
@@ -1195,11 +1208,14 @@
"Thinking...": "", "Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "", "This action cannot be undone. Do you wish to continue?": "",
"This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "", "This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "",
"This chat won't appear in history and your messages will not be saved.": "",
"This chat wont appear in history and your messages will not be saved.": "", "This chat wont appear in history and your messages will not be saved.": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "এটা নিশ্চিত করে যে, আপনার গুরুত্বপূর্ণ আলোচনা নিরাপদে আপনার ব্যাকএন্ড ডেটাবেজে সংরক্ষিত আছে। ধন্যবাদ!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "এটা নিশ্চিত করে যে, আপনার গুরুত্বপূর্ণ আলোচনা নিরাপদে আপনার ব্যাকএন্ড ডেটাবেজে সংরক্ষিত আছে। ধন্যবাদ!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This model is not publicly available. Please select another model.": "", "This model is not publicly available. Please select another model.": "",
"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
@@ -1345,7 +1361,7 @@
"Weight of BM25 Retrieval": "", "Weight of BM25 Retrieval": "",
"What are you trying to achieve?": "", "What are you trying to achieve?": "",
"What are you working on?": "", "What are you working on?": "",
"Whats New in": "এতে নতুন কী", "What's New in": "এতে নতুন কী",
"When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "", "When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "",
"wherever you are": "", "wherever you are": "",
"Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "", "Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "",

View file

@@ -90,7 +90,9 @@
"and {{COUNT}} more": "ད་དུང་ {{COUNT}}", "and {{COUNT}} more": "ད་དུང་ {{COUNT}}",
"and create a new shared link.": "དང་མཉམ་སྤྱོད་སྦྲེལ་ཐག་གསར་པ་ཞིག་བཟོ་བ།", "and create a new shared link.": "དང་མཉམ་སྤྱོད་སྦྲེལ་ཐག་གསར་པ་ཞིག་བཟོ་བ།",
"Android": "", "Android": "",
"API": "",
"API Base URL": "API གཞི་རྩའི་ URL", "API Base URL": "API གཞི་རྩའི་ URL",
"API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.": "",
"API Key": "API ལྡེ་མིག", "API Key": "API ལྡེ་མིག",
"API Key created.": "API ལྡེ་མིག་བཟོས་ཟིན།", "API Key created.": "API ལྡེ་མིག་བཟོས་ཟིན།",
"API Key Endpoint Restrictions": "API ལྡེ་མིག་མཇུག་མཐུད་ཚད་བཀག", "API Key Endpoint Restrictions": "API ལྡེ་མིག་མཇུག་མཐུད་ཚད་བཀག",
@@ -207,6 +209,8 @@
"Clone Chat": "ཁ་བརྡ་འདྲ་བཟོ།", "Clone Chat": "ཁ་བརྡ་འདྲ་བཟོ།",
"Clone of {{TITLE}}": "{{TITLE}} ཡི་འདྲ་བཟོ།", "Clone of {{TITLE}}": "{{TITLE}} ཡི་འདྲ་བཟོ།",
"Close": "ཁ་རྒྱག་པ།", "Close": "ཁ་རྒྱག་པ།",
"Close modal": "",
"Close settings modal": "",
"Code execution": "ཀོཌ་ལག་བསྟར།", "Code execution": "ཀོཌ་ལག་བསྟར།",
"Code Execution": "ཀོཌ་ལག་བསྟར།", "Code Execution": "ཀོཌ་ལག་བསྟར།",
"Code Execution Engine": "ཀོཌ་ལག་བསྟར་འཕྲུལ་འཁོར།", "Code Execution Engine": "ཀོཌ་ལག་བསྟར་འཕྲུལ་འཁོར།",
@@ -294,7 +298,7 @@
"Default": "སྔོན་སྒྲིག", "Default": "སྔོན་སྒྲིག",
"Default (Open AI)": "སྔོན་སྒྲིག (Open AI)", "Default (Open AI)": "སྔོན་སྒྲིག (Open AI)",
"Default (SentenceTransformers)": "སྔོན་སྒྲིག (SentenceTransformers)", "Default (SentenceTransformers)": "སྔོན་སྒྲིག (SentenceTransformers)",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the models built-in tool-calling capabilities, but requires the model to inherently support this feature.": "སྔོན་སྒྲིག་མ་དཔེ་ནི་ལག་བསྟར་མ་བྱས་སྔོན་དུ་ལག་ཆ་ཐེངས་གཅིག་འབོད་ནས་དཔེ་དབྱིབས་རྒྱ་ཆེ་བའི་ཁྱབ་ཁོངས་དང་མཉམ་ལས་བྱེད་ཐུབ། ས་སྐྱེས་མ་དཔེ་ཡིས་དཔེ་དབྱིབས་ཀྱི་ནང་འདྲེས་ལག་ཆ་འབོད་པའི་ནུས་པ་སྤྱོད་ཀྱི་ཡོད་མོད། འོན་ཀྱང་དཔེ་དབྱིབས་དེས་ཁྱད་ཆོས་འདི་ལ་ངོ་བོའི་ཐོག་ནས་རྒྱབ་སྐྱོར་བྱེད་དགོས།", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "སྔོན་སྒྲིག་མ་དཔེ་ནི་ལག་བསྟར་མ་བྱས་སྔོན་དུ་ལག་ཆ་ཐེངས་གཅིག་འབོད་ནས་དཔེ་དབྱིབས་རྒྱ་ཆེ་བའི་ཁྱབ་ཁོངས་དང་མཉམ་ལས་བྱེད་ཐུབ། ས་སྐྱེས་མ་དཔེ་ཡིས་དཔེ་དབྱིབས་ཀྱི་ནང་འདྲེས་ལག་ཆ་འབོད་པའི་ནུས་པ་སྤྱོད་ཀྱི་ཡོད་མོད། འོན་ཀྱང་དཔེ་དབྱིབས་དེས་ཁྱད་ཆོས་འདི་ལ་ངོ་བོའི་ཐོག་ནས་རྒྱབ་སྐྱོར་བྱེད་དགོས།",
"Default Model": "སྔོན་སྒྲིག་དཔེ་དབྱིབས།", "Default Model": "སྔོན་སྒྲིག་དཔེ་དབྱིབས།",
"Default model updated": "སྔོན་སྒྲིག་དཔེ་དབྱིབས་གསར་སྒྱུར་བྱས།", "Default model updated": "སྔོན་སྒྲིག་དཔེ་དབྱིབས་གསར་སྒྱུར་བྱས།",
"Default Models": "སྔོན་སྒྲིག་དཔེ་དབྱིབས།", "Default Models": "སྔོན་སྒྲིག་དཔེ་དབྱིབས།",
@@ -442,6 +446,7 @@
"Enter Chunk Overlap": "དུམ་བུ་བསྣོལ་བ་འཇུག་པ།", "Enter Chunk Overlap": "དུམ་བུ་བསྣོལ་བ་འཇུག་པ།",
"Enter Chunk Size": "དུམ་བུའི་ཆེ་ཆུང་འཇུག་པ།", "Enter Chunk Size": "དུམ་བུའི་ཆེ་ཆུང་འཇུག་པ།",
"Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "ཚེག་བསྐུངས་ཀྱིས་ལོགས་སུ་བཀར་བའི་ \"ཊོཀ་ཀེན།:ཕྱོགས་ཞེན་རིན་ཐང་།\" ཆ་འཇུག་པ། (དཔེར། 5432:100, 413:-100)", "Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "ཚེག་བསྐུངས་ཀྱིས་ལོགས་སུ་བཀར་བའི་ \"ཊོཀ་ཀེན།:ཕྱོགས་ཞེན་རིན་ཐང་།\" ཆ་འཇུག་པ། (དཔེར། 5432:100, 413:-100)",
"Enter Config in JSON format": "",
"Enter content for the pending user info overlay. Leave empty for default.": "", "Enter content for the pending user info overlay. Leave empty for default.": "",
"Enter Datalab Marker API Key": "", "Enter Datalab Marker API Key": "",
"Enter description": "འགྲེལ་བཤད་འཇུག་པ།", "Enter description": "འགྲེལ་བཤད་འཇུག་པ།",
@@ -666,6 +671,7 @@
"Hex Color": "Hex ཚོན་མདོག", "Hex Color": "Hex ཚོན་མདོག",
"Hex Color - Leave empty for default color": "Hex ཚོན་མདོག - སྔོན་སྒྲིག་ཚོན་མདོག་གི་ཆེད་དུ་སྟོང་པ་བཞག་པ།", "Hex Color - Leave empty for default color": "Hex ཚོན་མདོག - སྔོན་སྒྲིག་ཚོན་མདོག་གི་ཆེད་དུ་སྟོང་པ་བཞག་པ།",
"Hide": "སྦ་བ།", "Hide": "སྦ་བ།",
"Hide from Sidebar": "",
"Hide Model": "དཔེ་དབྱིབས་སྦ་བ།", "Hide Model": "དཔེ་དབྱིབས་སྦ་བ།",
"High Contrast Mode": "", "High Contrast Mode": "",
"Home": "གཙོ་ངོས།", "Home": "གཙོ་ངོས།",
@@ -714,7 +720,6 @@
"Invalid file content": "", "Invalid file content": "",
"Invalid file format.": "ཡིག་ཆའི་བཀོད་པ་ནུས་མེད།", "Invalid file format.": "ཡིག་ཆའི་བཀོད་པ་ནུས་མེད།",
"Invalid JSON file": "", "Invalid JSON file": "",
"Invalid JSON schema": "JSON schema ནུས་མེད།",
"Invalid Tag": "རྟགས་ནུས་མེད།", "Invalid Tag": "རྟགས་ནུས་མེད།",
"is typing...": "ཡིག་འབྲུ་རྒྱག་བཞིན་པ།...", "is typing...": "ཡིག་འབྲུ་རྒྱག་བཞིན་པ།...",
"January": "ཟླ་བ་དང་པོ།", "January": "ཟླ་བ་དང་པོ།",
@@ -729,7 +734,7 @@
"JWT Expiration": "JWT དུས་ཚོད་རྫོགས་པ།", "JWT Expiration": "JWT དུས་ཚོད་རྫོགས་པ།",
"JWT Token": "JWT Token", "JWT Token": "JWT Token",
"Kagi Search API Key": "Kagi Search API ལྡེ་མིག", "Kagi Search API Key": "Kagi Search API ལྡེ་མིག",
"Keep Alive": "གསོན་པོར་གནས་པ།", "Keep in Sidebar": "",
"Key": "ལྡེ་མིག", "Key": "ལྡེ་མིག",
"Keyboard shortcuts": "མཐེབ་གནོན་མྱུར་ལམ།", "Keyboard shortcuts": "མཐེབ་གནོན་མྱུར་ལམ།",
"Knowledge": "ཤེས་བྱ།", "Knowledge": "ཤེས་བྱ།",
@@ -849,6 +854,7 @@
"New Password": "གསང་གྲངས་གསར་པ།", "New Password": "གསང་གྲངས་གསར་པ།",
"New Tool": "", "New Tool": "",
"new-channel": "བགྲོ་གླེང་གསར་པ།", "new-channel": "བགྲོ་གླེང་གསར་པ།",
"Next message": "",
"No chats found for this user.": "", "No chats found for this user.": "",
"No chats found.": "", "No chats found.": "",
"No content": "", "No content": "",
@@ -916,6 +922,7 @@
"OpenAI API settings updated": "OpenAI API སྒྲིག་འགོད་གསར་སྒྱུར་བྱས།", "OpenAI API settings updated": "OpenAI API སྒྲིག་འགོད་གསར་སྒྱུར་བྱས།",
"OpenAI URL/Key required.": "OpenAI URL/ལྡེ་མིག་དགོས་ངེས།", "OpenAI URL/Key required.": "OpenAI URL/ལྡེ་མིག་དགོས་ངེས།",
"openapi.json URL or Path": "", "openapi.json URL or Path": "",
"Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.": "",
"or": "ཡང་ན།", "or": "ཡང་ན།",
"Organize your users": "ཁྱེད་ཀྱི་བེད་སྤྱོད་མཁན་སྒྲིག་འཛུགས།", "Organize your users": "ཁྱེད་ཀྱི་བེད་སྤྱོད་མཁན་སྒྲིག་འཛུགས།",
"Other": "གཞན།", "Other": "གཞན།",
@@ -939,7 +946,12 @@
"Permission denied when accessing microphone: {{error}}": "སྐད་སྒྲ་འཛིན་ཆས་འཛུལ་སྤྱོད་སྐབས་དབང་ཚད་ཁས་མ་བླངས།: {{error}}", "Permission denied when accessing microphone: {{error}}": "སྐད་སྒྲ་འཛིན་ཆས་འཛུལ་སྤྱོད་སྐབས་དབང་ཚད་ཁས་མ་བླངས།: {{error}}",
"Permissions": "དབང་ཚད།", "Permissions": "དབང་ཚད།",
"Perplexity API Key": "Perplexity API ལྡེ་མིག", "Perplexity API Key": "Perplexity API ལྡེ་མིག",
"Perplexity Model": "",
"Perplexity Search Context Usage": "",
"Personalization": "སྒེར་སྤྱོད་ཅན།", "Personalization": "སྒེར་སྤྱོད་ཅན།",
"Picture Description API Config": "",
"Picture Description Local Config": "",
"Picture Description Mode": "",
"Pin": "གདབ་པ།", "Pin": "གདབ་པ།",
"Pinned": "གདབ་ཟིན།", "Pinned": "གདབ་ཟིན།",
"Pioneer insights": "སྔོན་དཔག་རིག་ནུས།", "Pioneer insights": "སྔོན་དཔག་རིག་ནུས།",
@@ -969,6 +981,7 @@
"Preview": "", "Preview": "",
"Previous 30 days": "ཉིན་ ༣༠ སྔོན་མ།", "Previous 30 days": "ཉིན་ ༣༠ སྔོན་མ།",
"Previous 7 days": "ཉིན་ ༧ སྔོན་མ།", "Previous 7 days": "ཉིན་ ༧ སྔོན་མ།",
"Previous message": "",
"Private": "སྒེར།", "Private": "སྒེར།",
"Profile Image": "སྤྱི་ཐག་པར།", "Profile Image": "སྤྱི་ཐག་པར།",
"Prompt": "འགུལ་སློང་།", "Prompt": "འགུལ་སློང་།",
@@ -1010,7 +1023,6 @@
"Rename": "མིང་བསྐྱར་འདོགས།", "Rename": "མིང་བསྐྱར་འདོགས།",
"Reorder Models": "དཔེ་དབྱིབས་བསྐྱར་སྒྲིག", "Reorder Models": "དཔེ་དབྱིབས་བསྐྱར་སྒྲིག",
"Reply in Thread": "བརྗོད་གཞིའི་ནང་ལན་འདེབས།", "Reply in Thread": "བརྗོད་གཞིའི་ནང་ལན་འདེབས།",
"Request Mode": "རེ་ཞུའི་མ་དཔེ།",
"Reranking Engine": "", "Reranking Engine": "",
"Reranking Model": "བསྐྱར་སྒྲིག་དཔེ་དབྱིབས།", "Reranking Model": "བསྐྱར་སྒྲིག་དཔེ་དབྱིབས།",
"Reset": "སླར་སྒྲིག", "Reset": "སླར་སྒྲིག",
@@ -1182,6 +1194,7 @@
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "ཚན་ཆུང་གི་ཆེ་ཆུང་གིས་ཡིག་རྐྱང་རེ་ཞུ་ག་ཚོད་མཉམ་དུ་ཐེངས་གཅིག་ལ་སྒྲུབ་དགོས་གཏན་འཁེལ་བྱེད། ཚན་ཆུང་ཆེ་བ་ཡིས་དཔེ་དབྱིབས་ཀྱི་ལས་ཆོད་དང་མྱུར་ཚད་མང་དུ་གཏོང་ཐུབ། འོན་ཀྱང་དེས་དྲན་ཤེས་མང་བ་དགོས།", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "ཚན་ཆུང་གི་ཆེ་ཆུང་གིས་ཡིག་རྐྱང་རེ་ཞུ་ག་ཚོད་མཉམ་དུ་ཐེངས་གཅིག་ལ་སྒྲུབ་དགོས་གཏན་འཁེལ་བྱེད། ཚན་ཆུང་ཆེ་བ་ཡིས་དཔེ་དབྱིབས་ཀྱི་ལས་ཆོད་དང་མྱུར་ཚད་མང་དུ་གཏོང་ཐུབ། འོན་ཀྱང་དེས་དྲན་ཤེས་མང་བ་དགོས།",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "plugin འདིའི་རྒྱབ་ཀྱི་གསར་སྤེལ་བ་དག་ནི་སྤྱི་ཚོགས་ནས་ཡིན་པའི་སེམས་ཤུགས་ཅན་གྱི་དང་བླངས་པ་ཡིན། གལ་ཏེ་ཁྱེད་ཀྱིས་ plugin འདི་ཕན་ཐོགས་ཡོད་པ་མཐོང་ན། དེའི་གསར་སྤེལ་ལ་ཞལ་འདེབས་གནང་བར་བསམ་ཞིབ་གནང་རོགས།", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "plugin འདིའི་རྒྱབ་ཀྱི་གསར་སྤེལ་བ་དག་ནི་སྤྱི་ཚོགས་ནས་ཡིན་པའི་སེམས་ཤུགས་ཅན་གྱི་དང་བླངས་པ་ཡིན། གལ་ཏེ་ཁྱེད་ཀྱིས་ plugin འདི་ཕན་ཐོགས་ཡོད་པ་མཐོང་ན། དེའི་གསར་སྤེལ་ལ་ཞལ་འདེབས་གནང་བར་བསམ་ཞིབ་གནང་རོགས།",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "གདེང་འཇོག་འགྲན་རེས་རེའུ་མིག་དེ་ Elo སྐར་མ་སྤྲོད་པའི་མ་ལག་ལ་གཞི་བཅོལ་ཡོད། དེ་མིན་དུས་ཐོག་ཏུ་གསར་སྒྱུར་བྱེད་ཀྱི་ཡོད།", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "གདེང་འཇོག་འགྲན་རེས་རེའུ་མིག་དེ་ Elo སྐར་མ་སྤྲོད་པའི་མ་ལག་ལ་གཞི་བཅོལ་ཡོད། དེ་མིན་དུས་ཐོག་ཏུ་གསར་སྒྱུར་བྱེད་ཀྱི་ཡོད།",
"The format to return a response in. Format can be json or a JSON schema.": "",
"The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "", "The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "བེད་སྤྱོད་མཁན་ཚོས་ནང་འཛུལ་བྱེད་སྐབས་བེད་སྤྱོད་གཏོང་བའི་ཡིག་ཟམ་ལ་སྦྲེལ་བའི་ LDAP ཁྱད་ཆོས།", "The LDAP attribute that maps to the mail that users use to sign in.": "བེད་སྤྱོད་མཁན་ཚོས་ནང་འཛུལ་བྱེད་སྐབས་བེད་སྤྱོད་གཏོང་བའི་ཡིག་ཟམ་ལ་སྦྲེལ་བའི་ LDAP ཁྱད་ཆོས།",
"The LDAP attribute that maps to the username that users use to sign in.": "བེད་སྤྱོད་མཁན་ཚོས་ནང་འཛུལ་བྱེད་སྐབས་བེད་སྤྱོད་གཏོང་བའི་བེད་སྤྱོད་མིང་ལ་སྦྲེལ་བའི་ LDAP ཁྱད་ཆོས།", "The LDAP attribute that maps to the username that users use to sign in.": "བེད་སྤྱོད་མཁན་ཚོས་ནང་འཛུལ་བྱེད་སྐབས་བེད་སྤྱོད་གཏོང་བའི་བེད་སྤྱོད་མིང་ལ་སྦྲེལ་བའི་ LDAP ཁྱད་ཆོས།",
@ -1195,11 +1208,14 @@
"Thinking...": "བསམ་བཞིན་པ།...", "Thinking...": "བསམ་བཞིན་པ།...",
"This action cannot be undone. Do you wish to continue?": "བྱ་སྤྱོད་འདི་ཕྱིར་ལྡོག་བྱེད་མི་ཐུབ། ཁྱེད་མུ་མཐུད་འདོད་ཡོད་དམ།", "This action cannot be undone. Do you wish to continue?": "བྱ་སྤྱོད་འདི་ཕྱིར་ལྡོག་བྱེད་མི་ཐུབ། ཁྱེད་མུ་མཐུད་འདོད་ཡོད་དམ།",
"This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "བགྲོ་གླེང་འདི་ {{createdAt}} ལ་བཟོས་པ། འདི་ནི་ {{channelName}} བགྲོ་གླེང་གི་ཐོག་མ་རང་ཡིན།", "This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "བགྲོ་གླེང་འདི་ {{createdAt}} ལ་བཟོས་པ། འདི་ནི་ {{channelName}} བགྲོ་གླེང་གི་ཐོག་མ་རང་ཡིན།",
"This chat won't appear in history and your messages will not be saved.": "",
"This chat wont appear in history and your messages will not be saved.": "", "This chat wont appear in history and your messages will not be saved.": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "འདིས་ཁྱེད་ཀྱི་རྩ་ཆེའི་ཁ་བརྡ་དག་བདེ་འཇགས་ངང་ཁྱེད་ཀྱི་རྒྱབ་སྣེ་གནས་ཚུལ་མཛོད་དུ་ཉར་ཚགས་བྱེད་པ་ཁག་ཐེག་བྱེད། ཐུགས་རྗེ་ཆེ།", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "འདིས་ཁྱེད་ཀྱི་རྩ་ཆེའི་ཁ་བརྡ་དག་བདེ་འཇགས་ངང་ཁྱེད་ཀྱི་རྒྱབ་སྣེ་གནས་ཚུལ་མཛོད་དུ་ཉར་ཚགས་བྱེད་པ་ཁག་ཐེག་བྱེད། ཐུགས་རྗེ་ཆེ།",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "འདི་ནི་ཚོད་ལྟའི་རང་བཞིན་གྱི་ཁྱད་ཆོས་ཤིག་ཡིན། དེ་རེ་སྒུག་ལྟར་ལས་ཀ་བྱེད་མི་སྲིད། དེ་མིན་དུས་ཚོད་གང་རུང་ལ་འགྱུར་བ་འགྲོ་སྲིད།", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "འདི་ནི་ཚོད་ལྟའི་རང་བཞིན་གྱི་ཁྱད་ཆོས་ཤིག་ཡིན། དེ་རེ་སྒུག་ལྟར་ལས་ཀ་བྱེད་མི་སྲིད། དེ་མིན་དུས་ཚོད་གང་རུང་ལ་འགྱུར་བ་འགྲོ་སྲིད།",
"This model is not publicly available. Please select another model.": "", "This model is not publicly available. Please select another model.": "",
"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "འདེམས་ཀ་འདིས་ནང་དོན་གསར་སྒྱུར་བྱེད་སྐབས་ཊོཀ་ཀེན་ག་ཚོད་ཉར་ཚགས་བྱེད་དགོས་ཚོད་འཛིན་བྱེད། དཔེར་ན། གལ་ཏེ་ ༢ ལ་བཀོད་སྒྲིག་བྱས་ན། ཁ་བརྡའི་ནང་དོན་གྱི་ཊོཀ་ཀེན་མཐའ་མ་ ༢ ཉར་ཚགས་བྱེད་ངེས། ནང་དོན་ཉར་ཚགས་བྱས་ན་ཁ་བརྡའི་རྒྱུན་མཐུད་རང་བཞིན་རྒྱུན་སྲུང་བྱེད་པར་རོགས་པ་བྱེད་ཐུབ། འོན་ཀྱང་དེས་བརྗོད་གཞི་གསར་པར་ལན་འདེབས་བྱེད་པའི་ནུས་པ་ཉུང་དུ་གཏོང་སྲིད།", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "འདེམས་ཀ་འདིས་ནང་དོན་གསར་སྒྱུར་བྱེད་སྐབས་ཊོཀ་ཀེན་ག་ཚོད་ཉར་ཚགས་བྱེད་དགོས་ཚོད་འཛིན་བྱེད། དཔེར་ན། གལ་ཏེ་ ༢ ལ་བཀོད་སྒྲིག་བྱས་ན། ཁ་བརྡའི་ནང་དོན་གྱི་ཊོཀ་ཀེན་མཐའ་མ་ ༢ ཉར་ཚགས་བྱེད་ངེས། ནང་དོན་ཉར་ཚགས་བྱས་ན་ཁ་བརྡའི་རྒྱུན་མཐུད་རང་བཞིན་རྒྱུན་སྲུང་བྱེད་པར་རོགས་པ་བྱེད་ཐུབ། འོན་ཀྱང་དེས་བརྗོད་གཞི་གསར་པར་ལན་འདེབས་བྱེད་པའི་ནུས་པ་ཉུང་དུ་གཏོང་སྲིད།",
"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "འདེམས་ཀ་འདིས་དཔེ་དབྱིབས་ཀྱིས་དེའི་ལན་ནང་བཟོ་ཐུབ་པའི་ཊོཀ་ཀེན་གྱི་གྲངས་མང་ཤོས་འཇོག་པ། ཚད་བཀག་འདི་མང་དུ་བཏང་ན་དཔེ་དབྱིབས་ཀྱིས་ལན་རིང་བ་སྤྲོད་པར་གནང་བ་སྤྲོད། འོན་ཀྱང་དེས་ཕན་ཐོགས་མེད་པའམ་འབྲེལ་མེད་ཀྱི་ནང་དོན་བཟོ་བའི་ཆགས་ཚུལ་མང་དུ་གཏོང་སྲིད།", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "འདེམས་ཀ་འདིས་དཔེ་དབྱིབས་ཀྱིས་དེའི་ལན་ནང་བཟོ་ཐུབ་པའི་ཊོཀ་ཀེན་གྱི་གྲངས་མང་ཤོས་འཇོག་པ། ཚད་བཀག་འདི་མང་དུ་བཏང་ན་དཔེ་དབྱིབས་ཀྱིས་ལན་རིང་བ་སྤྲོད་པར་གནང་བ་སྤྲོད། འོན་ཀྱང་དེས་ཕན་ཐོགས་མེད་པའམ་འབྲེལ་མེད་ཀྱི་ནང་དོན་བཟོ་བའི་ཆགས་ཚུལ་མང་དུ་གཏོང་སྲིད།",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "འདེམས་ཀ་འདིས་བསྡུ་གསོག་ནང་གི་ཡོད་པའི་ཡིག་ཆ་ཡོངས་རྫོགས་བསུབ་ནས་དེ་དག་གསར་དུ་སྤར་བའི་ཡིག་ཆས་ཚབ་བྱེད་ངེས།", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "འདེམས་ཀ་འདིས་བསྡུ་གསོག་ནང་གི་ཡོད་པའི་ཡིག་ཆ་ཡོངས་རྫོགས་བསུབ་ནས་དེ་དག་གསར་དུ་སྤར་བའི་ཡིག་ཆས་ཚབ་བྱེད་ངེས།",
"This response was generated by \"{{model}}\"": "ལན་འདི་ \"{{model}}\" ཡིས་བཟོས་པ།", "This response was generated by \"{{model}}\"": "ལན་འདི་ \"{{model}}\" ཡིས་བཟོས་པ།",
@ -1345,7 +1361,7 @@
"Weight of BM25 Retrieval": "", "Weight of BM25 Retrieval": "",
"What are you trying to achieve?": "ཁྱེད་ཀྱིས་ཅི་ཞིག་འགྲུབ་ཐབས་བྱེད་བཞིན་ཡོད།", "What are you trying to achieve?": "ཁྱེད་ཀྱིས་ཅི་ཞིག་འགྲུབ་ཐབས་བྱེད་བཞིན་ཡོད།",
"What are you working on?": "ཁྱེད་ཀྱིས་ཅི་ཞིག་ལས་ཀ་བྱེད་བཞིན་ཡོད།", "What are you working on?": "ཁྱེད་ཀྱིས་ཅི་ཞིག་ལས་ཀ་བྱེད་བཞིན་ཡོད།",
"Whats New in": "གསར་པ་ཅི་ཡོད།", "What's New in": "གསར་པ་ཅི་ཡོད།",
"When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "སྒུལ་བསྐྱོད་བྱས་ཚེ། དཔེ་དབྱིབས་ཀྱིས་ཁ་བརྡའི་འཕྲིན་རེ་རེར་དུས་ཐོག་ཏུ་ལན་འདེབས་བྱེད་ངེས། བེད་སྤྱོད་མཁན་གྱིས་འཕྲིན་བཏང་མ་ཐག་ལན་ཞིག་བཟོ་ངེས། མ་དཔེ་འདི་ཐད་གཏོང་ཁ་བརྡའི་བཀོལ་ཆས་ལ་ཕན་ཐོགས་ཡོད། འོན་ཀྱང་དེས་མཁྲེགས་ཆས་དལ་བའི་སྟེང་ལས་ཆོད་ལ་ཤུགས་རྐྱེན་ཐེབས་སྲིད།", "When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "སྒུལ་བསྐྱོད་བྱས་ཚེ། དཔེ་དབྱིབས་ཀྱིས་ཁ་བརྡའི་འཕྲིན་རེ་རེར་དུས་ཐོག་ཏུ་ལན་འདེབས་བྱེད་ངེས། བེད་སྤྱོད་མཁན་གྱིས་འཕྲིན་བཏང་མ་ཐག་ལན་ཞིག་བཟོ་ངེས། མ་དཔེ་འདི་ཐད་གཏོང་ཁ་བརྡའི་བཀོལ་ཆས་ལ་ཕན་ཐོགས་ཡོད། འོན་ཀྱང་དེས་མཁྲེགས་ཆས་དལ་བའི་སྟེང་ལས་ཆོད་ལ་ཤུགས་རྐྱེན་ཐེབས་སྲིད།",
"wherever you are": "ཁྱེད་གང་དུ་ཡོད་ཀྱང་།", "wherever you are": "ཁྱེད་གང་དུ་ཡོད་ཀྱང་།",
"Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "", "Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "",
View file
@ -90,7 +90,9 @@
"and {{COUNT}} more": "i {{COUNT}} més", "and {{COUNT}} more": "i {{COUNT}} més",
"and create a new shared link.": "i crear un nou enllaç compartit.", "and create a new shared link.": "i crear un nou enllaç compartit.",
"Android": "Android", "Android": "Android",
"API": "",
"API Base URL": "URL Base de l'API", "API Base URL": "URL Base de l'API",
"API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.": "",
"API Key": "clau API", "API Key": "clau API",
"API Key created.": "clau API creada.", "API Key created.": "clau API creada.",
"API Key Endpoint Restrictions": "Restriccions del punt d'accés de la Clau API", "API Key Endpoint Restrictions": "Restriccions del punt d'accés de la Clau API",
@ -207,6 +209,8 @@
"Clone Chat": "Clonar el xat", "Clone Chat": "Clonar el xat",
"Clone of {{TITLE}}": "Clon de {{TITLE}}", "Clone of {{TITLE}}": "Clon de {{TITLE}}",
"Close": "Tancar", "Close": "Tancar",
"Close modal": "",
"Close settings modal": "",
"Code execution": "Execució de codi", "Code execution": "Execució de codi",
"Code Execution": "Excució de Codi", "Code Execution": "Excució de Codi",
"Code Execution Engine": "Motor d'execució de codi", "Code Execution Engine": "Motor d'execució de codi",
@ -294,7 +298,7 @@
"Default": "Per defecte", "Default": "Per defecte",
"Default (Open AI)": "Per defecte (Open AI)", "Default (Open AI)": "Per defecte (Open AI)",
"Default (SentenceTransformers)": "Per defecte (SentenceTransformers)", "Default (SentenceTransformers)": "Per defecte (SentenceTransformers)",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the models built-in tool-calling capabilities, but requires the model to inherently support this feature.": "El mode predeterminat funciona amb una gamma més àmplia de models cridant a les eines una vegada abans de l'execució. El mode natiu aprofita les capacitats de crida d'eines integrades del model, però requereix que el model admeti aquesta funció de manera inherent.", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "El mode predeterminat funciona amb una gamma més àmplia de models cridant a les eines una vegada abans de l'execució. El mode natiu aprofita les capacitats de crida d'eines integrades del model, però requereix que el model admeti aquesta funció de manera inherent.",
"Default Model": "Model per defecte", "Default Model": "Model per defecte",
"Default model updated": "Model per defecte actualitzat", "Default model updated": "Model per defecte actualitzat",
"Default Models": "Models per defecte", "Default Models": "Models per defecte",
@ -442,6 +446,7 @@
"Enter Chunk Overlap": "Introdueix la mida de solapament de blocs", "Enter Chunk Overlap": "Introdueix la mida de solapament de blocs",
"Enter Chunk Size": "Introdueix la mida del bloc", "Enter Chunk Size": "Introdueix la mida del bloc",
"Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "Introdueix parelles de \"token:valor de biaix\" separats per comes (exemple: 5432:100, 413:-100)", "Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "Introdueix parelles de \"token:valor de biaix\" separats per comes (exemple: 5432:100, 413:-100)",
"Enter Config in JSON format": "",
"Enter content for the pending user info overlay. Leave empty for default.": "", "Enter content for the pending user info overlay. Leave empty for default.": "",
"Enter Datalab Marker API Key": "", "Enter Datalab Marker API Key": "",
"Enter description": "Introdueix la descripció", "Enter description": "Introdueix la descripció",
@ -666,6 +671,7 @@
"Hex Color": "Color hexadecimal", "Hex Color": "Color hexadecimal",
"Hex Color - Leave empty for default color": "Color hexadecimal - Deixar buit per a color per defecte", "Hex Color - Leave empty for default color": "Color hexadecimal - Deixar buit per a color per defecte",
"Hide": "Amaga", "Hide": "Amaga",
"Hide from Sidebar": "",
"Hide Model": "Amagar el model", "Hide Model": "Amagar el model",
"High Contrast Mode": "", "High Contrast Mode": "",
"Home": "Inici", "Home": "Inici",
@ -714,7 +720,6 @@
"Invalid file content": "Continguts del fitxer no vàlids", "Invalid file content": "Continguts del fitxer no vàlids",
"Invalid file format.": "Format d'arxiu no vàlid.", "Invalid file format.": "Format d'arxiu no vàlid.",
"Invalid JSON file": "", "Invalid JSON file": "",
"Invalid JSON schema": "Esquema JSON no vàlid",
"Invalid Tag": "Etiqueta no vàlida", "Invalid Tag": "Etiqueta no vàlida",
"is typing...": "està escrivint...", "is typing...": "està escrivint...",
"January": "Gener", "January": "Gener",
@ -729,7 +734,7 @@
"JWT Expiration": "Caducitat del JWT", "JWT Expiration": "Caducitat del JWT",
"JWT Token": "Token JWT", "JWT Token": "Token JWT",
"Kagi Search API Key": "Clau API de Kagi Search", "Kagi Search API Key": "Clau API de Kagi Search",
"Keep Alive": "Manté actiu", "Keep in Sidebar": "",
"Key": "Clau", "Key": "Clau",
"Keyboard shortcuts": "Dreceres de teclat", "Keyboard shortcuts": "Dreceres de teclat",
"Knowledge": "Coneixement", "Knowledge": "Coneixement",
@ -849,6 +854,7 @@
"New Password": "Nova contrasenya", "New Password": "Nova contrasenya",
"New Tool": "", "New Tool": "",
"new-channel": "nou-canal", "new-channel": "nou-canal",
"Next message": "",
"No chats found for this user.": "", "No chats found for this user.": "",
"No chats found.": "", "No chats found.": "",
"No content": "No hi ha contingut", "No content": "No hi ha contingut",
@ -916,6 +922,7 @@
"OpenAI API settings updated": "Configuració de l'API d'OpenAI actualitzada", "OpenAI API settings updated": "Configuració de l'API d'OpenAI actualitzada",
"OpenAI URL/Key required.": "URL/Clau d'OpenAI requerides.", "OpenAI URL/Key required.": "URL/Clau d'OpenAI requerides.",
"openapi.json URL or Path": "", "openapi.json URL or Path": "",
"Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.": "",
"or": "o", "or": "o",
"Organize your users": "Organitza els teus usuaris", "Organize your users": "Organitza els teus usuaris",
"Other": "Altres", "Other": "Altres",
@ -939,7 +946,12 @@
"Permission denied when accessing microphone: {{error}}": "Permís denegat en accedir al micròfon: {{error}}", "Permission denied when accessing microphone: {{error}}": "Permís denegat en accedir al micròfon: {{error}}",
"Permissions": "Permisos", "Permissions": "Permisos",
"Perplexity API Key": "Clau API de Perplexity", "Perplexity API Key": "Clau API de Perplexity",
"Perplexity Model": "",
"Perplexity Search Context Usage": "",
"Personalization": "Personalització", "Personalization": "Personalització",
"Picture Description API Config": "",
"Picture Description Local Config": "",
"Picture Description Mode": "",
"Pin": "Fixar", "Pin": "Fixar",
"Pinned": "Fixat", "Pinned": "Fixat",
"Pioneer insights": "Perspectives pioneres", "Pioneer insights": "Perspectives pioneres",
@ -969,6 +981,7 @@
"Preview": "", "Preview": "",
"Previous 30 days": "30 dies anteriors", "Previous 30 days": "30 dies anteriors",
"Previous 7 days": "7 dies anteriors", "Previous 7 days": "7 dies anteriors",
"Previous message": "",
"Private": "Privat", "Private": "Privat",
"Profile Image": "Imatge de perfil", "Profile Image": "Imatge de perfil",
"Prompt": "Indicació", "Prompt": "Indicació",
@ -1010,7 +1023,6 @@
"Rename": "Canviar el nom", "Rename": "Canviar el nom",
"Reorder Models": "Reordenar els models", "Reorder Models": "Reordenar els models",
"Reply in Thread": "Respondre al fil", "Reply in Thread": "Respondre al fil",
"Request Mode": "Mode de sol·licitud",
"Reranking Engine": "Motor de valoració", "Reranking Engine": "Motor de valoració",
"Reranking Model": "Model de reavaluació", "Reranking Model": "Model de reavaluació",
"Reset": "Restableix", "Reset": "Restableix",
@ -1182,6 +1194,7 @@
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "La mida del lot determina quantes sol·licituds de text es processen alhora. Una mida de lot més gran pot augmentar el rendiment i la velocitat del model, però també requereix més memòria.", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "La mida del lot determina quantes sol·licituds de text es processen alhora. Una mida de lot més gran pot augmentar el rendiment i la velocitat del model, però també requereix més memòria.",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Els desenvolupadors d'aquest complement són voluntaris apassionats de la comunitat. Si trobeu útil aquest complement, considereu contribuir al seu desenvolupament.", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Els desenvolupadors d'aquest complement són voluntaris apassionats de la comunitat. Si trobeu útil aquest complement, considereu contribuir al seu desenvolupament.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "La classificació d'avaluació es basa en el sistema de qualificació Elo i s'actualitza en temps real.", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "La classificació d'avaluació es basa en el sistema de qualificació Elo i s'actualitza en temps real.",
"The format to return a response in. Format can be json or a JSON schema.": "",
"The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "", "The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "L'atribut LDAP que s'associa al correu que els usuaris utilitzen per iniciar la sessió.", "The LDAP attribute that maps to the mail that users use to sign in.": "L'atribut LDAP que s'associa al correu que els usuaris utilitzen per iniciar la sessió.",
"The LDAP attribute that maps to the username that users use to sign in.": "L'atribut LDAP que mapeja el nom d'usuari amb l'usuari que vol iniciar sessió", "The LDAP attribute that maps to the username that users use to sign in.": "L'atribut LDAP que mapeja el nom d'usuari amb l'usuari que vol iniciar sessió",
@ -1195,11 +1208,14 @@
"Thinking...": "Pensant...", "Thinking...": "Pensant...",
"This action cannot be undone. Do you wish to continue?": "Aquesta acció no es pot desfer. Vols continuar?", "This action cannot be undone. Do you wish to continue?": "Aquesta acció no es pot desfer. Vols continuar?",
"This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "Aquest canal es va crear el dia {{createdAt}}. Aquest és el començament del canal {{channelName}}.", "This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "Aquest canal es va crear el dia {{createdAt}}. Aquest és el començament del canal {{channelName}}.",
"This chat wont appear in history and your messages will not be saved.": "Aquest xat no apareixerà a l'historial i els teus missatges no es desaran.", "This chat won't appear in history and your messages will not be saved.": "Aquest xat no apareixerà a l'historial i els teus missatges no es desaran.",
"This chat wont appear in history and your messages will not be saved.": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Això assegura que les teves converses valuoses queden desades de manera segura a la teva base de dades. Gràcies!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Això assegura que les teves converses valuoses queden desades de manera segura a la teva base de dades. Gràcies!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Aquesta és una funció experimental, és possible que no funcioni com s'espera i està subjecta a canvis en qualsevol moment.", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Aquesta és una funció experimental, és possible que no funcioni com s'espera i està subjecta a canvis en qualsevol moment.",
"This model is not publicly available. Please select another model.": "Aquest model no està disponible públicament. Seleccioneu-ne un altre.", "This model is not publicly available. Please select another model.": "Aquest model no està disponible públicament. Seleccioneu-ne un altre.",
"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "Aquesta opció controla quants tokens es conserven en actualitzar el context. Per exemple, si s'estableix en 2, es conservaran els darrers 2 tokens del context de conversa. Preservar el context pot ajudar a mantenir la continuïtat d'una conversa, però pot reduir la capacitat de respondre a nous temes.", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "Aquesta opció controla quants tokens es conserven en actualitzar el context. Per exemple, si s'estableix en 2, es conservaran els darrers 2 tokens del context de conversa. Preservar el context pot ajudar a mantenir la continuïtat d'una conversa, però pot reduir la capacitat de respondre a nous temes.",
"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "Aquesta opció estableix el nombre màxim de tokens que el model pot generar en la seva resposta. Augmentar aquest límit permet que el model proporcioni respostes més llargues, però també pot augmentar la probabilitat que es generi contingut poc útil o irrellevant.", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "Aquesta opció estableix el nombre màxim de tokens que el model pot generar en la seva resposta. Augmentar aquest límit permet que el model proporcioni respostes més llargues, però també pot augmentar la probabilitat que es generi contingut poc útil o irrellevant.",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Aquesta opció eliminarà tots els fitxers existents de la col·lecció i els substituirà per fitxers recentment penjats.", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "Aquesta opció eliminarà tots els fitxers existents de la col·lecció i els substituirà per fitxers recentment penjats.",
"This response was generated by \"{{model}}\"": "Aquesta resposta l'ha generat el model \"{{model}}\"", "This response was generated by \"{{model}}\"": "Aquesta resposta l'ha generat el model \"{{model}}\"",
@ -1345,7 +1361,7 @@
"Weight of BM25 Retrieval": "", "Weight of BM25 Retrieval": "",
"What are you trying to achieve?": "Què intentes aconseguir?", "What are you trying to achieve?": "Què intentes aconseguir?",
"What are you working on?": "En què estàs treballant?", "What are you working on?": "En què estàs treballant?",
"Whats New in": "Què hi ha de nou a", "What's New in": "Què hi ha de nou a",
"When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "Quan està activat, el model respondrà a cada missatge de xat en temps real, generant una resposta tan bon punt l'usuari envia un missatge. Aquest mode és útil per a aplicacions de xat en directe, però pot afectar el rendiment en maquinari més lent.", "When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "Quan està activat, el model respondrà a cada missatge de xat en temps real, generant una resposta tan bon punt l'usuari envia un missatge. Aquest mode és útil per a aplicacions de xat en directe, però pot afectar el rendiment en maquinari més lent.",
"wherever you are": "allà on estiguis", "wherever you are": "allà on estiguis",
"Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "", "Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "",
View file
@ -90,7 +90,9 @@
"and {{COUNT}} more": "", "and {{COUNT}} more": "",
"and create a new shared link.": "", "and create a new shared link.": "",
"Android": "", "Android": "",
"API": "",
"API Base URL": "API Base URL", "API Base URL": "API Base URL",
"API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.": "",
"API Key": "yawe sa API", "API Key": "yawe sa API",
"API Key created.": "", "API Key created.": "",
"API Key Endpoint Restrictions": "", "API Key Endpoint Restrictions": "",
@ -207,6 +209,8 @@
"Clone Chat": "", "Clone Chat": "",
"Clone of {{TITLE}}": "", "Clone of {{TITLE}}": "",
"Close": "Suod nga", "Close": "Suod nga",
"Close modal": "",
"Close settings modal": "",
"Code execution": "", "Code execution": "",
"Code Execution": "", "Code Execution": "",
"Code Execution Engine": "", "Code Execution Engine": "",
@ -294,7 +298,7 @@
"Default": "Pinaagi sa default", "Default": "Pinaagi sa default",
"Default (Open AI)": "", "Default (Open AI)": "",
"Default (SentenceTransformers)": "", "Default (SentenceTransformers)": "",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the models built-in tool-calling capabilities, but requires the model to inherently support this feature.": "", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "",
"Default Model": "", "Default Model": "",
"Default model updated": "Gi-update nga default template", "Default model updated": "Gi-update nga default template",
"Default Models": "", "Default Models": "",
@ -442,6 +446,7 @@
"Enter Chunk Overlap": "Pagsulod sa block overlap", "Enter Chunk Overlap": "Pagsulod sa block overlap",
"Enter Chunk Size": "Isulod ang block size", "Enter Chunk Size": "Isulod ang block size",
"Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "", "Enter comma-separated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter Config in JSON format": "",
"Enter content for the pending user info overlay. Leave empty for default.": "", "Enter content for the pending user info overlay. Leave empty for default.": "",
"Enter Datalab Marker API Key": "", "Enter Datalab Marker API Key": "",
"Enter description": "", "Enter description": "",
@ -666,6 +671,7 @@
"Hex Color": "", "Hex Color": "",
"Hex Color - Leave empty for default color": "", "Hex Color - Leave empty for default color": "",
"Hide": "Tagoa", "Hide": "Tagoa",
"Hide from Sidebar": "",
"Hide Model": "", "Hide Model": "",
"High Contrast Mode": "", "High Contrast Mode": "",
"Home": "", "Home": "",
@ -714,7 +720,6 @@
"Invalid file content": "", "Invalid file content": "",
"Invalid file format.": "", "Invalid file format.": "",
"Invalid JSON file": "", "Invalid JSON file": "",
"Invalid JSON schema": "",
"Invalid Tag": "", "Invalid Tag": "",
"is typing...": "", "is typing...": "",
"January": "", "January": "",
@ -729,7 +734,7 @@
"JWT Expiration": "Pag-expire sa JWT", "JWT Expiration": "Pag-expire sa JWT",
"JWT Token": "JWT token", "JWT Token": "JWT token",
"Kagi Search API Key": "", "Kagi Search API Key": "",
"Keep Alive": "Padayon nga aktibo", "Keep in Sidebar": "",
"Key": "", "Key": "",
"Keyboard shortcuts": "Mga shortcut sa keyboard", "Keyboard shortcuts": "Mga shortcut sa keyboard",
"Knowledge": "", "Knowledge": "",
@ -849,6 +854,7 @@
"New Password": "Bag-ong Password", "New Password": "Bag-ong Password",
"New Tool": "", "New Tool": "",
"new-channel": "", "new-channel": "",
"Next message": "",
"No chats found for this user.": "", "No chats found for this user.": "",
"No chats found.": "", "No chats found.": "",
"No content": "", "No content": "",
@ -916,6 +922,7 @@
"OpenAI API settings updated": "", "OpenAI API settings updated": "",
"OpenAI URL/Key required.": "", "OpenAI URL/Key required.": "",
"openapi.json URL or Path": "", "openapi.json URL or Path": "",
"Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.": "",
"or": "O", "or": "O",
"Organize your users": "", "Organize your users": "",
"Other": "", "Other": "",
@ -939,7 +946,12 @@
"Permission denied when accessing microphone: {{error}}": "Gidili ang pagtugot sa dihang nag-access sa mikropono: {{error}}", "Permission denied when accessing microphone: {{error}}": "Gidili ang pagtugot sa dihang nag-access sa mikropono: {{error}}",
"Permissions": "", "Permissions": "",
"Perplexity API Key": "", "Perplexity API Key": "",
"Perplexity Model": "",
"Perplexity Search Context Usage": "",
"Personalization": "", "Personalization": "",
"Picture Description API Config": "",
"Picture Description Local Config": "",
"Picture Description Mode": "",
"Pin": "", "Pin": "",
"Pinned": "", "Pinned": "",
"Pioneer insights": "", "Pioneer insights": "",
@ -969,6 +981,7 @@
"Preview": "", "Preview": "",
"Previous 30 days": "", "Previous 30 days": "",
"Previous 7 days": "", "Previous 7 days": "",
"Previous message": "",
"Private": "", "Private": "",
"Profile Image": "", "Profile Image": "",
"Prompt": "", "Prompt": "",
@ -1010,7 +1023,6 @@
"Rename": "", "Rename": "",
"Reorder Models": "", "Reorder Models": "",
"Reply in Thread": "", "Reply in Thread": "",
"Request Mode": "Query mode",
"Reranking Engine": "", "Reranking Engine": "",
"Reranking Model": "", "Reranking Model": "",
"Reset": "", "Reset": "",
@ -1182,6 +1194,7 @@
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The format to return a response in. Format can be json or a JSON schema.": "",
"The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "", "The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
"The LDAP attribute that maps to the username that users use to sign in.": "", "The LDAP attribute that maps to the username that users use to sign in.": "",
@ -1195,11 +1208,14 @@
"Thinking...": "", "Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "", "This action cannot be undone. Do you wish to continue?": "",
"This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "", "This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "",
"This chat won't appear in history and your messages will not be saved.": "",
"This chat wont appear in history and your messages will not be saved.": "", "This chat wont appear in history and your messages will not be saved.": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Kini nagsiguro nga ang imong bililhon nga mga panag-istoryahanay luwas nga natipig sa imong backend database. ", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Kini nagsiguro nga ang imong bililhon nga mga panag-istoryahanay luwas nga natipig sa imong backend database. ",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This model is not publicly available. Please select another model.": "", "This model is not publicly available. Please select another model.": "",
"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
@ -1345,7 +1361,7 @@
"Weight of BM25 Retrieval": "", "Weight of BM25 Retrieval": "",
"What are you trying to achieve?": "", "What are you trying to achieve?": "",
"What are you working on?": "", "What are you working on?": "",
"Whats New in": "Unsay bag-o sa", "What's New in": "Unsay bag-o sa",
"When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "", "When enabled, the model will respond to each chat message in real-time, generating a response as soon as the user sends a message. This mode is useful for live chat applications, but may impact performance on slower hardware.": "",
"wherever you are": "", "wherever you are": "",
"Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "", "Whether to paginate the output. Each page will be separated by a horizontal rule and page number. Defaults to False.": "",
Some files were not shown because too many files have changed in this diff