Merge branch 'open-webui:main' into main

This commit is contained in:
Igor Brai 2024-11-21 15:25:41 +02:00 committed by GitHub
commit d16b09bee5
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
264 changed files with 29675 additions and 15258 deletions

View file

@ -17,3 +17,4 @@ uploads
.ipynb_checkpoints
**/*.db
_test
backend/data/*

View file

@ -28,6 +28,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
lfs: true
- name: Remove git history
run: rm -rf .git
@ -52,7 +54,9 @@ jobs:
- name: Set up Git and push to Space
run: |
git init --initial-branch=main
git lfs install
git lfs track "*.ttf"
git lfs track "*.jpg"
rm demo.gif
git add .
git commit -m "GitHub deploy: ${{ github.sha }}"

View file

@ -5,10 +5,81 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.4.2] - 2024-11-20
### Fixed
- **📁 Knowledge Files Visibility Issue**: Resolved the bug preventing individual files in knowledge collections from displaying when referenced with '#'.
- **🔗 OpenAI Endpoint Prefix**: Fixed the issue where certain OpenAI connections that deviate from the official API spec weren't working correctly with prefixes.
- **⚔️ Arena Model Access Control**: Corrected an issue where arena model access control settings were not being saved.
- **🔧 Usage Capability Selector**: Fixed the broken usage capabilities selector in the model editor.
## [0.4.1] - 2024-11-19
### Added
- **📊 Enhanced Feedback System**: Introduced a detailed 1-10 rating scale for feedback alongside thumbs up/down, preparing for more precise model fine-tuning and improving feedback quality.
- **Tool Descriptions on Hover**: Easily access tool descriptions by hovering over the message input, providing a smoother workflow with more context when utilizing tools.
### Fixed
- **🗑️ Graceful Handling of Deleted Users**: Resolved an issue where deleted users caused workspace items (models, knowledge, prompts, tools) to fail, ensuring reliable workspace loading.
- **🔑 API Key Creation**: Fixed an issue preventing users from creating new API keys, restoring secure and seamless API management.
- **🔗 HTTPS Proxy Fix**: Corrected HTTPS proxy issues affecting the '/api/v1/models/' endpoint, ensuring smoother, uninterrupted model management.
## [0.4.0] - 2024-11-19
### Added
- **👥 User Groups**: You can now create and manage user groups, making user organization seamless.
- **🔐 Group-Based Access Control**: Set granular access to models, knowledge, prompts, and tools based on user groups, allowing for more controlled and secure environments.
- **🛠️ Group-Based User Permissions**: Easily manage workspace permissions. Grant users the ability to upload files, delete or edit chats, and create temporary chats, as well as define their ability to create models, knowledge, prompts, and tools.
- **🔑 LDAP Support**: Newly introduced LDAP authentication adds robust security and scalability to user management.
- **🌐 Enhanced OpenAI-Compatible Connections**: Added prefix ID support to avoid model ID clashes, with explicit model ID support for APIs lacking '/models' endpoint support, ensuring smooth operation with custom setups (see the sketch after this list).
- **🔐 Ollama API Key Support**: Now manage credentials for Ollama when set behind proxies, including the option to utilize prefix ID for proper distinction across multiple Ollama instances.
- **🔄 Connection Enable/Disable Toggle**: Easily enable or disable individual OpenAI and Ollama connections as needed.
- **🎨 Redesigned Model Workspace**: Freshly redesigned to improve usability for managing models across users and groups.
- **🎨 Redesigned Prompt Workspace**: A fresh UI to conveniently organize and manage prompts.
- **🧩 Sorted Functions Workspace**: Functions are now automatically categorized by type (Action, Filter, Pipe), streamlining management.
- **💻 Redesigned Collaborative Workspace**: Enhanced support for multiple users contributing to models, knowledge, prompts, or tools, improving collaboration.
- **🔧 Auto-Selected Tools in Model Editor**: Tools enabled through the model editor are now selected automatically instead of merely being offered as an option, reducing manual steps and enhancing efficiency.
- **🔔 Web Search & Tools Indicator**: A clear indication now shows when web search or tools are active, reducing confusion.
- **🔑 Toggle API Key Auth**: Tighten security by easily enabling or disabling API key authentication for Open WebUI.
- **🗂️ Agentic Retrieval**: Improve RAG accuracy via smart pre-processing of chat history to determine the best queries before retrieval.
- **📁 Large Text as File Option**: Optionally convert large pasted text into a file upload, keeping the chat interface cleaner.
- **🗂️ Toggle Citations for Models**: Ability to disable citations has been introduced in the model editor.
- **🔍 User Settings Search**: Quickly search for settings fields, improving ease of use and navigation.
- **🗣️ Experimental SpeechT5 TTS**: Local SpeechT5 support added for improved text-to-speech capabilities.
- **🔄 Unified Reset for Models**: A one-click option has been introduced to reset and remove all models from the Admin Settings.
- **🛠️ Initial Setup Wizard**: The setup process now explicitly informs users that they are creating an admin account during the first-time setup, ensuring clarity. Previously, users encountered the login page right away without this distinction.
- **🌐 Enhanced Translations**: Several language translations, including Ukrainian, Norwegian, and Brazilian Portuguese, were refined for better localization.
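The prefix ID behaviour added for OpenAI-compatible (and Ollama) connections is easiest to picture with a small sketch. The helper names below are illustrative rather than part of the Open WebUI API; only the namespacing scheme (prefix applied when models are listed, stripped again before a request is forwarded upstream) mirrors what the backend does:

```python
from typing import Optional


def namespace_models(models: list[dict], prefix_id: Optional[str]) -> list[dict]:
    """Prefix model IDs so identical names on different connections don't clash."""
    if not prefix_id:
        return models
    return [{**m, "id": f"{prefix_id}.{m['id']}"} for m in models]


def upstream_model_id(model_id: str, prefix_id: Optional[str]) -> str:
    """Strip the local prefix again before calling the upstream API."""
    if prefix_id and model_id.startswith(f"{prefix_id}."):
        return model_id[len(prefix_id) + 1 :]
    return model_id


# Two connections can both expose "llama3:8b" without colliding:
print(namespace_models([{"id": "llama3:8b"}], "workgpu"))  # [{'id': 'workgpu.llama3:8b'}]
print(upstream_model_id("workgpu.llama3:8b", "workgpu"))   # llama3:8b
```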
### Fixed
- **🎥 YouTube Video Attachments**: Fixed issues preventing proper loading and attachment of YouTube videos as files.
- **🔄 Shared Chat Update**: Corrected issues where shared chats were not updating, improving collaboration consistency.
- **🔍 DuckDuckGo Rate Limit Fix**: Addressed issues with DuckDuckGo search integration, enhancing search stability and performance when operating within rate limits.
- **🧾 Citations Relevance Fix**: Adjusted the relevance percentage calculation for citations so that Open WebUI properly reflects the accuracy of a retrieved document in RAG, ensuring users get clearer insights into sources.
- **🔑 Jina Search API Key Requirement**: Added the option to input an API key for Jina Search, ensuring smooth functionality as keys are now mandatory.
### Changed
- **🛠️ Functions Moved to Admin Panel**: As Functions operate as advanced plugins, they are now accessible from the Admin Panel instead of the workspace.
- **🛠️ Manage Ollama Connections**: The "Models" section in Admin Settings has been relocated to Admin Settings > "Connections" > Ollama Connections. You can now manage Ollama instances via a dedicated "Manage Ollama" modal from "Connections", streamlining the setup and configuration of Ollama models.
- **📊 Base Models in Admin Settings**: Admins can now find all base models, whether from connections or functions, in the "Models" admin settings. Global model accessibility can be enabled or disabled here. Models are private by default, requiring explicit permission assignment for user access.
- **📌 Sticky Model Selection for New Chats**: The model chosen from a previous chat now persists when creating a new chat. If you click "New Chat" again from the new chat page, it will revert to your default model.
- **🎨 Design Refactoring**: Overall design refinements across the platform have been made, providing a more cohesive and polished user experience.
### Removed
- **📂 Model List Reordering**: Temporarily removed and will be reintroduced in upcoming user group settings improvements.
- **⚙️ Default Model Setting**: Removed the ability to set a default model for users; this will be reintroduced with user group settings in the future.
## [0.3.35] - 2024-10-26
### Added
- **🌐 Translation Update**: Added translation labels to the SearchInput and CreateCollection components and updated the Brazilian Portuguese translation (pt-BR).
- **📁 Robust File Handling**: Enhanced file input handling for chat. If content extraction fails or is empty, users will now receive a clear warning, preventing silent failures and ensuring they always know what's happening with their uploads.
- **🌍 New Language Support**: Introduced Hungarian translations and updated French translations, expanding the platform's language accessibility for a more global user base.

View file

@ -21,7 +21,7 @@ Open WebUI is an [extensible](https://github.com/open-webui/pipelines), feature-
- 🤝 **Ollama/OpenAI API Integration**: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the OpenAI API URL to link with **LMStudio, GroqCloud, Mistral, OpenRouter, and more**.
- 🧩 **Pipelines, Open WebUI Plugin Support**: Seamlessly integrate custom logic and Python libraries into Open WebUI using [Pipelines Plugin Framework](https://github.com/open-webui/pipelines). Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. [Examples](https://github.com/open-webui/pipelines/tree/main/examples) include **Function Calling**, User **Rate Limiting** to control access, **Usage Monitoring** with tools like Langfuse, **Live Translation with LibreTranslate** for multilingual support, **Toxic Message Filtering** and much more.
- 🛡️ **Granular Permissions and User Groups**: By allowing administrators to create detailed user roles and permissions, we ensure a secure user environment. This granularity not only enhances security but also allows for customized user experiences, fostering a sense of ownership and responsibility amongst users.
- 📱 **Responsive Design**: Enjoy a seamless experience across Desktop PC, Laptop, and Mobile devices.
@ -37,7 +37,7 @@ Open WebUI is an [extensible](https://github.com/open-webui/pipelines), feature-
- 📚 **Local RAG Integration**: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using the `#` command before a query.
- 🔍 **Web Search for RAG**: Perform web searches using providers like `SearXNG`, `Google PSE`, `Brave Search`, `serpstack`, `serper`, `Serply`, `DuckDuckGo`, `TavilySearch` and `SearchApi` and inject the results directly into your chat experience.
- 🔍 **Web Search for RAG**: Perform web searches using providers like `SearXNG`, `Google PSE`, `Brave Search`, `serpstack`, `serper`, `Serply`, `DuckDuckGo`, `TavilySearch`, `SearchApi` and `Bing` and inject the results directly into your chat experience.
- 🌐 **Web Browsing Capability**: Seamlessly integrate websites into your chat experience using the `#` command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.
@ -49,6 +49,8 @@ Open WebUI is an [extensible](https://github.com/open-webui/pipelines), feature-
- 🌐🌍 **Multilingual Support**: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!
- 🧩 **Pipelines, Open WebUI Plugin Support**: Seamlessly integrate custom logic and Python libraries into Open WebUI using [Pipelines Plugin Framework](https://github.com/open-webui/pipelines). Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. [Examples](https://github.com/open-webui/pipelines/tree/main/examples) include **Function Calling**, User **Rate Limiting** to control access, **Usage Monitoring** with tools like Langfuse, **Live Translation with LibreTranslate** for multilingual support, **Toxic Message Filtering** and much more.
- 🌟 **Continuous Updates**: We are committed to improving Open WebUI with regular updates, fixes, and new features.
Want to learn more about Open WebUI's features? Check out our [Open WebUI documentation](https://docs.openwebui.com/features) for a comprehensive overview!
@ -187,18 +189,6 @@ docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --a
Discover upcoming features on our roadmap in the [Open WebUI Documentation](https://docs.openwebui.com/roadmap/).
## Supporters ✨
A big shoutout to our amazing supporters who are helping to make this project possible! 🙏
### Platinum Sponsors 🤍
- We're looking for Sponsors!
### Acknowledgments
Special thanks to [Prof. Lawrence Kim](https://www.lhkim.com/) and [Prof. Nick Vincent](https://www.nickmvincent.com/) for their invaluable support and guidance in shaping this project into a research endeavor. Grateful for your mentorship throughout the journey! 🙌
## License 📜
This project is licensed under the [MIT License](LICENSE) - see the [LICENSE](LICENSE) file for details. 📄

View file

@ -32,7 +32,13 @@ from open_webui.config import (
)
from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS, DEVICE_TYPE
from open_webui.env import (
ENV,
SRC_LOG_LEVELS,
DEVICE_TYPE,
ENABLE_FORWARD_USER_INFO_HEADERS,
)
from fastapi import Depends, FastAPI, File, HTTPException, Request, UploadFile, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import FileResponse
@ -47,7 +53,12 @@ MAX_FILE_SIZE = MAX_FILE_SIZE_MB * 1024 * 1024 # Convert MB to bytes
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["AUDIO"])
app = FastAPI()
app = FastAPI(
docs_url="/docs" if ENV == "dev" else None,
openapi_url="/openapi.json" if ENV == "dev" else None,
redoc_url=None,
)
app.add_middleware(
CORSMiddleware,
allow_origins=CORS_ALLOW_ORIGIN,
@ -74,6 +85,10 @@ app.state.config.TTS_VOICE = AUDIO_TTS_VOICE
app.state.config.TTS_API_KEY = AUDIO_TTS_API_KEY
app.state.config.TTS_SPLIT_ON = AUDIO_TTS_SPLIT_ON
app.state.speech_synthesiser = None
app.state.speech_speaker_embeddings_dataset = None
app.state.config.TTS_AZURE_SPEECH_REGION = AUDIO_TTS_AZURE_SPEECH_REGION
app.state.config.TTS_AZURE_SPEECH_OUTPUT_FORMAT = AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT
@ -231,6 +246,21 @@ async def update_audio_config(
}
def load_speech_pipeline():
from transformers import pipeline
from datasets import load_dataset
if app.state.speech_synthesiser is None:
app.state.speech_synthesiser = pipeline(
"text-to-speech", "microsoft/speecht5_tts"
)
if app.state.speech_speaker_embeddings_dataset is None:
app.state.speech_speaker_embeddings_dataset = load_dataset(
"Matthijs/cmu-arctic-xvectors", split="validation"
)
@app.post("/speech")
async def speech(request: Request, user=Depends(get_verified_user)):
body = await request.body()
@ -248,6 +278,12 @@ async def speech(request: Request, user=Depends(get_verified_user)):
headers["Authorization"] = f"Bearer {app.state.config.TTS_OPENAI_API_KEY}"
headers["Content-Type"] = "application/json"
if ENABLE_FORWARD_USER_INFO_HEADERS:
headers["X-OpenWebUI-User-Name"] = user.name
headers["X-OpenWebUI-User-Id"] = user.id
headers["X-OpenWebUI-User-Email"] = user.email
headers["X-OpenWebUI-User-Role"] = user.role
try:
body = body.decode("utf-8")
body = json.loads(body)
@ -391,6 +427,43 @@ async def speech(request: Request, user=Depends(get_verified_user)):
raise HTTPException(
status_code=500, detail=f"Error synthesizing speech - {response.reason}"
)
elif app.state.config.TTS_ENGINE == "transformers":
payload = None
try:
payload = json.loads(body.decode("utf-8"))
except Exception as e:
log.exception(e)
raise HTTPException(status_code=400, detail="Invalid JSON payload")
import torch
import soundfile as sf
load_speech_pipeline()
embeddings_dataset = app.state.speech_speaker_embeddings_dataset
speaker_index = 6799
try:
speaker_index = embeddings_dataset["filename"].index(
app.state.config.TTS_MODEL
)
except Exception:
pass
speaker_embedding = torch.tensor(
embeddings_dataset[speaker_index]["xvector"]
).unsqueeze(0)
speech = app.state.speech_synthesiser(
payload["input"],
forward_params={"speaker_embeddings": speaker_embedding},
)
sf.write(file_path, speech["audio"], samplerate=speech["sampling_rate"])
with open(file_body_path, "w") as f:
json.dump(json.loads(body.decode("utf-8")), f)
return FileResponse(file_path)
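The transformers branch above is essentially the stock Hugging Face text-to-speech pipeline ("microsoft/speecht5_tts") driven by a CMU Arctic x-vector speaker embedding. A minimal standalone sketch, assuming `transformers`, `datasets`, `torch`, and `soundfile` are installed (speaker index 6799 is simply the default used above):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import pipeline

# Build the SpeechT5 TTS pipeline and pick a speaker embedding from the x-vector dataset.
synthesiser = pipeline("text-to-speech", "microsoft/speecht5_tts")
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[6799]["xvector"]).unsqueeze(0)

# Synthesize speech and write it out as a WAV file.
speech = synthesiser(
    "Hello from Open WebUI.",
    forward_params={"speaker_embeddings": speaker_embedding},
)
sf.write("speech.wav", speech["audio"], samplerate=speech["sampling_rate"])
```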
def transcribe(file_path):

View file

@ -35,7 +35,8 @@ from open_webui.config import (
AppConfig,
)
from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS
from open_webui.env import ENV, SRC_LOG_LEVELS, ENABLE_FORWARD_USER_INFO_HEADERS
from fastapi import Depends, FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
@ -47,7 +48,12 @@ log.setLevel(SRC_LOG_LEVELS["IMAGES"])
IMAGE_CACHE_DIR = Path(CACHE_DIR).joinpath("./image/generations/")
IMAGE_CACHE_DIR.mkdir(parents=True, exist_ok=True)
app = FastAPI()
app = FastAPI(
docs_url="/docs" if ENV == "dev" else None,
openapi_url="/openapi.json" if ENV == "dev" else None,
redoc_url=None,
)
app.add_middleware(
CORSMiddleware,
allow_origins=CORS_ALLOW_ORIGIN,
@ -456,6 +462,12 @@ async def image_generations(
headers["Authorization"] = f"Bearer {app.state.config.OPENAI_API_KEY}"
headers["Content-Type"] = "application/json"
if ENABLE_FORWARD_USER_INFO_HEADERS:
headers["X-OpenWebUI-User-Name"] = user.name
headers["X-OpenWebUI-User-Id"] = user.id
headers["X-OpenWebUI-User-Email"] = user.email
headers["X-OpenWebUI-User-Role"] = user.role
data = {
"model": (
app.state.config.MODEL

View file

@ -13,18 +13,20 @@ import requests
from open_webui.apps.webui.models.models import Models
from open_webui.config import (
CORS_ALLOW_ORIGIN,
ENABLE_MODEL_FILTER,
ENABLE_OLLAMA_API,
MODEL_FILTER_LIST,
OLLAMA_BASE_URLS,
OLLAMA_API_CONFIGS,
UPLOAD_DIR,
AppConfig,
)
from open_webui.env import AIOHTTP_CLIENT_TIMEOUT
from open_webui.env import (
AIOHTTP_CLIENT_TIMEOUT,
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST,
)
from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS
from open_webui.env import ENV, SRC_LOG_LEVELS
from fastapi import Depends, FastAPI, File, HTTPException, Request, UploadFile
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
@ -41,11 +43,18 @@ from open_webui.utils.payload import (
apply_model_system_prompt_to_body,
)
from open_webui.utils.utils import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["OLLAMA"])
app = FastAPI()
app = FastAPI(
docs_url="/docs" if ENV == "dev" else None,
openapi_url="/openapi.json" if ENV == "dev" else None,
redoc_url=None,
)
app.add_middleware(
CORSMiddleware,
allow_origins=CORS_ALLOW_ORIGIN,
@ -56,12 +65,9 @@ app.add_middleware(
app.state.config = AppConfig()
app.state.config.ENABLE_MODEL_FILTER = ENABLE_MODEL_FILTER
app.state.config.MODEL_FILTER_LIST = MODEL_FILTER_LIST
app.state.config.ENABLE_OLLAMA_API = ENABLE_OLLAMA_API
app.state.config.OLLAMA_BASE_URLS = OLLAMA_BASE_URLS
app.state.MODELS = {}
app.state.config.OLLAMA_API_CONFIGS = OLLAMA_API_CONFIGS
# TODO: Implement a more intelligent load balancing mechanism for distributing requests among multiple backend instances.
@ -69,60 +75,98 @@ app.state.MODELS = {}
# least connections, or least response time for better resource utilization and performance optimization.
@app.middleware("http")
async def check_url(request: Request, call_next):
if len(app.state.MODELS) == 0:
await get_all_models()
else:
pass
response = await call_next(request)
return response
@app.head("/")
@app.get("/")
async def get_status():
return {"status": True}
class ConnectionVerificationForm(BaseModel):
url: str
key: Optional[str] = None
@app.post("/verify")
async def verify_connection(
form_data: ConnectionVerificationForm, user=Depends(get_admin_user)
):
url = form_data.url
key = form_data.key
headers = {}
if key:
headers["Authorization"] = f"Bearer {key}"
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
async with aiohttp.ClientSession(timeout=timeout) as session:
try:
async with session.get(f"{url}/api/version", headers=headers) as r:
if r.status != 200:
# Extract response error details if available
error_detail = f"HTTP Error: {r.status}"
res = await r.json()
if "error" in res:
error_detail = f"External Error: {res['error']}"
raise Exception(error_detail)
response_data = await r.json()
return response_data
except aiohttp.ClientError as e:
# ClientError covers all aiohttp requests issues
log.exception(f"Client error: {str(e)}")
# Handle aiohttp-specific connection issues, timeout etc.
raise HTTPException(
status_code=500, detail="Open WebUI: Server Connection Error"
)
except Exception as e:
log.exception(f"Unexpected error: {e}")
# Generic error handler in case parsing JSON or other steps fail
error_detail = f"Unexpected error: {str(e)}"
raise HTTPException(status_code=500, detail=error_detail)
@app.get("/config")
async def get_config(user=Depends(get_admin_user)):
return {"ENABLE_OLLAMA_API": app.state.config.ENABLE_OLLAMA_API}
return {
"ENABLE_OLLAMA_API": app.state.config.ENABLE_OLLAMA_API,
"OLLAMA_BASE_URLS": app.state.config.OLLAMA_BASE_URLS,
"OLLAMA_API_CONFIGS": app.state.config.OLLAMA_API_CONFIGS,
}
class OllamaConfigForm(BaseModel):
enable_ollama_api: Optional[bool] = None
ENABLE_OLLAMA_API: Optional[bool] = None
OLLAMA_BASE_URLS: list[str]
OLLAMA_API_CONFIGS: dict
@app.post("/config/update")
async def update_config(form_data: OllamaConfigForm, user=Depends(get_admin_user)):
app.state.config.ENABLE_OLLAMA_API = form_data.enable_ollama_api
return {"ENABLE_OLLAMA_API": app.state.config.ENABLE_OLLAMA_API}
app.state.config.ENABLE_OLLAMA_API = form_data.ENABLE_OLLAMA_API
app.state.config.OLLAMA_BASE_URLS = form_data.OLLAMA_BASE_URLS
app.state.config.OLLAMA_API_CONFIGS = form_data.OLLAMA_API_CONFIGS
# Remove any extra configs
config_urls = app.state.config.OLLAMA_API_CONFIGS.keys()
for url in list(app.state.config.OLLAMA_BASE_URLS):
if url not in config_urls:
app.state.config.OLLAMA_API_CONFIGS.pop(url, None)
return {
"ENABLE_OLLAMA_API": app.state.config.ENABLE_OLLAMA_API,
"OLLAMA_BASE_URLS": app.state.config.OLLAMA_BASE_URLS,
"OLLAMA_API_CONFIGS": app.state.config.OLLAMA_API_CONFIGS,
}
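For reference, the per-connection objects stored in `OLLAMA_API_CONFIGS` are plain dicts keyed by base URL; the only keys the handlers in this file read are `enable`, `key`, `prefix_id`, and `model_ids`. A hypothetical value (URLs and token are made up) might look like:

```python
EXAMPLE_OLLAMA_API_CONFIGS = {
    "http://ollama-a:11434": {
        "enable": True,
        "key": "secret-token",       # forwarded as "Authorization: Bearer <key>"
        "prefix_id": "a",            # model IDs are exposed as "a.<model>"
    },
    "http://ollama-b:11434": {
        "enable": True,
        "model_ids": ["llama3:8b"],  # restrict this connection to an explicit model list
    },
}
```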
@app.get("/urls")
async def get_ollama_api_urls(user=Depends(get_admin_user)):
return {"OLLAMA_BASE_URLS": app.state.config.OLLAMA_BASE_URLS}
class UrlUpdateForm(BaseModel):
urls: list[str]
@app.post("/urls/update")
async def update_ollama_api_url(form_data: UrlUpdateForm, user=Depends(get_admin_user)):
app.state.config.OLLAMA_BASE_URLS = form_data.urls
log.info(f"app.state.config.OLLAMA_BASE_URLS: {app.state.config.OLLAMA_BASE_URLS}")
return {"OLLAMA_BASE_URLS": app.state.config.OLLAMA_BASE_URLS}
async def fetch_url(url):
timeout = aiohttp.ClientTimeout(total=3)
async def aiohttp_get(url, key=None):
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
try:
headers = {"Authorization": f"Bearer {key}"} if key else {}
async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
async with session.get(url) as response:
async with session.get(url, headers=headers) as response:
return await response.json()
except Exception as e:
# Handle connection error here
@ -148,10 +192,18 @@ async def post_streaming_url(
session = aiohttp.ClientSession(
trust_env=True, timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT)
)
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
key = api_config.get("key", None)
headers = {"Content-Type": "application/json"}
if key:
headers["Authorization"] = f"Bearer {key}"
r = await session.post(
url,
data=payload,
headers={"Content-Type": "application/json"},
headers=headers,
)
r.raise_for_status()
@ -194,29 +246,62 @@ def merge_models_lists(model_lists):
for idx, model_list in enumerate(model_lists):
if model_list is not None:
for model in model_list:
digest = model["digest"]
if digest not in merged_models:
id = model["model"]
if id not in merged_models:
model["urls"] = [idx]
merged_models[digest] = model
merged_models[id] = model
else:
merged_models[digest]["urls"].append(idx)
merged_models[id]["urls"].append(idx)
return list(merged_models.values())
async def get_all_models():
log.info("get_all_models()")
if app.state.config.ENABLE_OLLAMA_API:
tasks = [
fetch_url(f"{url}/api/tags") for url in app.state.config.OLLAMA_BASE_URLS
]
tasks = []
for idx, url in enumerate(app.state.config.OLLAMA_BASE_URLS):
if url not in app.state.config.OLLAMA_API_CONFIGS:
tasks.append(aiohttp_get(f"{url}/api/tags"))
else:
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
enable = api_config.get("enable", True)
key = api_config.get("key", None)
if enable:
tasks.append(aiohttp_get(f"{url}/api/tags", key))
else:
tasks.append(asyncio.ensure_future(asyncio.sleep(0, None)))
responses = await asyncio.gather(*tasks)
for idx, response in enumerate(responses):
if response:
url = app.state.config.OLLAMA_BASE_URLS[idx]
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
prefix_id = api_config.get("prefix_id", None)
model_ids = api_config.get("model_ids", [])
if len(model_ids) != 0 and "models" in response:
response["models"] = list(
filter(
lambda model: model["model"] in model_ids,
response["models"],
)
)
if prefix_id:
for model in response.get("models", []):
model["model"] = f"{prefix_id}.{model['model']}"
print(responses)
models = {
"models": merge_models_lists(
map(
lambda response: response["models"] if response else None, responses
lambda response: response.get("models", []) if response else None,
responses,
)
)
}
@ -224,8 +309,6 @@ async def get_all_models():
else:
models = {"models": []}
app.state.MODELS = {model["model"]: model for model in models["models"]}
return models
@ -234,29 +317,25 @@ async def get_all_models():
async def get_ollama_tags(
url_idx: Optional[int] = None, user=Depends(get_verified_user)
):
models = []
if url_idx is None:
models = await get_all_models()
if app.state.config.ENABLE_MODEL_FILTER:
if user.role == "user":
models["models"] = list(
filter(
lambda model: model["name"]
in app.state.config.MODEL_FILTER_LIST,
models["models"],
)
)
return models
return models
else:
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
key = api_config.get("key", None)
headers = {}
if key:
headers["Authorization"] = f"Bearer {key}"
r = None
try:
r = requests.request(method="GET", url=f"{url}/api/tags")
r = requests.request(method="GET", url=f"{url}/api/tags", headers=headers)
r.raise_for_status()
return r.json()
models = r.json()
except Exception as e:
log.exception(e)
error_detail = "Open WebUI: Server Connection Error"
@ -273,6 +352,20 @@ async def get_ollama_tags(
detail=error_detail,
)
if user.role == "user":
# Filter models based on user access control
filtered_models = []
for model in models.get("models", []):
model_info = Models.get_model_by_id(model["model"])
if model_info:
if user.id == model_info.user_id or has_access(
user.id, type="read", access_control=model_info.access_control
):
filtered_models.append(model)
models["models"] = filtered_models
return models
@app.get("/api/version")
@app.get("/api/version/{url_idx}")
@ -281,7 +374,10 @@ async def get_ollama_versions(url_idx: Optional[int] = None):
if url_idx is None:
# returns lowest version
tasks = [
fetch_url(f"{url}/api/version")
aiohttp_get(
f"{url}/api/version",
app.state.config.OLLAMA_API_CONFIGS.get(url, {}).get("key", None),
)
for url in app.state.config.OLLAMA_BASE_URLS
]
responses = await asyncio.gather(*tasks)
@ -361,8 +457,11 @@ async def push_model(
user=Depends(get_admin_user),
):
if url_idx is None:
if form_data.name in app.state.MODELS:
url_idx = app.state.MODELS[form_data.name]["urls"][0]
model_list = await get_all_models()
models = {model["model"]: model for model in model_list["models"]}
if form_data.name in models:
url_idx = models[form_data.name]["urls"][0]
else:
raise HTTPException(
status_code=400,
@ -411,8 +510,11 @@ async def copy_model(
user=Depends(get_admin_user),
):
if url_idx is None:
if form_data.source in app.state.MODELS:
url_idx = app.state.MODELS[form_data.source]["urls"][0]
model_list = await get_all_models()
models = {model["model"]: model for model in model_list["models"]}
if form_data.source in models:
url_idx = models[form_data.source]["urls"][0]
else:
raise HTTPException(
status_code=400,
@ -421,10 +523,18 @@ async def copy_model(
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
log.info(f"url: {url}")
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
key = api_config.get("key", None)
headers = {"Content-Type": "application/json"}
if key:
headers["Authorization"] = f"Bearer {key}"
r = requests.request(
method="POST",
url=f"{url}/api/copy",
headers={"Content-Type": "application/json"},
headers=headers,
data=form_data.model_dump_json(exclude_none=True).encode(),
)
@ -459,8 +569,11 @@ async def delete_model(
user=Depends(get_admin_user),
):
if url_idx is None:
if form_data.name in app.state.MODELS:
url_idx = app.state.MODELS[form_data.name]["urls"][0]
model_list = await get_all_models()
models = {model["model"]: model for model in model_list["models"]}
if form_data.name in models:
url_idx = models[form_data.name]["urls"][0]
else:
raise HTTPException(
status_code=400,
@ -470,11 +583,18 @@ async def delete_model(
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
log.info(f"url: {url}")
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
key = api_config.get("key", None)
headers = {"Content-Type": "application/json"}
if key:
headers["Authorization"] = f"Bearer {key}"
r = requests.request(
method="DELETE",
url=f"{url}/api/delete",
headers={"Content-Type": "application/json"},
data=form_data.model_dump_json(exclude_none=True).encode(),
headers=headers,
)
try:
r.raise_for_status()
@ -501,20 +621,30 @@ async def delete_model(
@app.post("/api/show")
async def show_model_info(form_data: ModelNameForm, user=Depends(get_verified_user)):
if form_data.name not in app.state.MODELS:
model_list = await get_all_models()
models = {model["model"]: model for model in model_list["models"]}
if form_data.name not in models:
raise HTTPException(
status_code=400,
detail=ERROR_MESSAGES.MODEL_NOT_FOUND(form_data.name),
)
url_idx = random.choice(app.state.MODELS[form_data.name]["urls"])
url_idx = random.choice(models[form_data.name]["urls"])
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
log.info(f"url: {url}")
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
key = api_config.get("key", None)
headers = {"Content-Type": "application/json"}
if key:
headers["Authorization"] = f"Bearer {key}"
r = requests.request(
method="POST",
url=f"{url}/api/show",
headers={"Content-Type": "application/json"},
headers=headers,
data=form_data.model_dump_json(exclude_none=True).encode(),
)
try:
@ -570,23 +700,26 @@ async def generate_embeddings(
url_idx: Optional[int] = None,
user=Depends(get_verified_user),
):
return generate_ollama_embeddings(form_data=form_data, url_idx=url_idx)
return await generate_ollama_embeddings(form_data=form_data, url_idx=url_idx)
def generate_ollama_embeddings(
async def generate_ollama_embeddings(
form_data: GenerateEmbeddingsForm,
url_idx: Optional[int] = None,
):
log.info(f"generate_ollama_embeddings {form_data}")
if url_idx is None:
model_list = await get_all_models()
models = {model["model"]: model for model in model_list["models"]}
model = form_data.model
if ":" not in model:
model = f"{model}:latest"
if model in app.state.MODELS:
url_idx = random.choice(app.state.MODELS[model]["urls"])
if model in models:
url_idx = random.choice(models[model]["urls"])
else:
raise HTTPException(
status_code=400,
@ -596,10 +729,17 @@ def generate_ollama_embeddings(
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
log.info(f"url: {url}")
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
key = api_config.get("key", None)
headers = {"Content-Type": "application/json"}
if key:
headers["Authorization"] = f"Bearer {key}"
r = requests.request(
method="POST",
url=f"{url}/api/embeddings",
headers={"Content-Type": "application/json"},
headers=headers,
data=form_data.model_dump_json(exclude_none=True).encode(),
)
try:
@ -630,20 +770,23 @@ def generate_ollama_embeddings(
)
def generate_ollama_batch_embeddings(
async def generate_ollama_batch_embeddings(
form_data: GenerateEmbedForm,
url_idx: Optional[int] = None,
):
log.info(f"generate_ollama_batch_embeddings {form_data}")
if url_idx is None:
model_list = await get_all_models()
models = {model["model"]: model for model in model_list["models"]}
model = form_data.model
if ":" not in model:
model = f"{model}:latest"
if model in app.state.MODELS:
url_idx = random.choice(app.state.MODELS[model]["urls"])
if model in models:
url_idx = random.choice(models[model]["urls"])
else:
raise HTTPException(
status_code=400,
@ -653,10 +796,17 @@ def generate_ollama_batch_embeddings(
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
log.info(f"url: {url}")
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
key = api_config.get("key", None)
headers = {"Content-Type": "application/json"}
if key:
headers["Authorization"] = f"Bearer {key}"
r = requests.request(
method="POST",
url=f"{url}/api/embed",
headers={"Content-Type": "application/json"},
headers=headers,
data=form_data.model_dump_json(exclude_none=True).encode(),
)
try:
@ -692,7 +842,7 @@ class GenerateCompletionForm(BaseModel):
options: Optional[dict] = None
system: Optional[str] = None
template: Optional[str] = None
context: Optional[str] = None
context: Optional[list[int]] = None
stream: Optional[bool] = True
raw: Optional[bool] = None
keep_alive: Optional[Union[int, str]] = None
@ -706,13 +856,16 @@ async def generate_completion(
user=Depends(get_verified_user),
):
if url_idx is None:
model_list = await get_all_models()
models = {model["model"]: model for model in model_list["models"]}
model = form_data.model
if ":" not in model:
model = f"{model}:latest"
if model in app.state.MODELS:
url_idx = random.choice(app.state.MODELS[model]["urls"])
if model in models:
url_idx = random.choice(models[model]["urls"])
else:
raise HTTPException(
status_code=400,
@ -720,6 +873,10 @@ async def generate_completion(
)
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
prefix_id = api_config.get("prefix_id", None)
if prefix_id:
form_data.model = form_data.model.replace(f"{prefix_id}.", "")
log.info(f"url: {url}")
return await post_streaming_url(
@ -743,14 +900,17 @@ class GenerateChatCompletionForm(BaseModel):
keep_alive: Optional[Union[int, str]] = None
def get_ollama_url(url_idx: Optional[int], model: str):
async def get_ollama_url(url_idx: Optional[int], model: str):
if url_idx is None:
if model not in app.state.MODELS:
model_list = await get_all_models()
models = {model["model"]: model for model in model_list["models"]}
if model not in models:
raise HTTPException(
status_code=400,
detail=ERROR_MESSAGES.MODEL_NOT_FOUND(model),
)
url_idx = random.choice(app.state.MODELS[model]["urls"])
url_idx = random.choice(models[model]["urls"])
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
return url
@ -768,15 +928,7 @@ async def generate_chat_completion(
if "metadata" in payload:
del payload["metadata"]
model_id = form_data.model
if not bypass_filter and app.state.config.ENABLE_MODEL_FILTER:
if user.role == "user" and model_id not in app.state.config.MODEL_FILTER_LIST:
raise HTTPException(
status_code=403,
detail="Model not found",
)
model_id = payload["model"]
model_info = Models.get_model_by_id(model_id)
if model_info:
@ -794,13 +946,37 @@ async def generate_chat_completion(
)
payload = apply_model_system_prompt_to_body(params, payload, user)
# Check if user has access to the model
if not bypass_filter and user.role == "user":
if not (
user.id == model_info.user_id
or has_access(
user.id, type="read", access_control=model_info.access_control
)
):
raise HTTPException(
status_code=403,
detail="Model not found",
)
elif not bypass_filter:
if user.role != "admin":
raise HTTPException(
status_code=403,
detail="Model not found",
)
if ":" not in payload["model"]:
payload["model"] = f"{payload['model']}:latest"
url = get_ollama_url(url_idx, payload["model"])
url = await get_ollama_url(url_idx, payload["model"])
log.info(f"url: {url}")
log.debug(f"generate_chat_completion() - 2.payload = {payload}")
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
prefix_id = api_config.get("prefix_id", None)
if prefix_id:
payload["model"] = payload["model"].replace(f"{prefix_id}.", "")
return await post_streaming_url(
f"{url}/api/chat",
json.dumps(payload),
@ -817,7 +993,7 @@ class OpenAIChatMessageContent(BaseModel):
class OpenAIChatMessage(BaseModel):
role: str
content: Union[str, OpenAIChatMessageContent]
content: Union[str, list[OpenAIChatMessageContent]]
model_config = ConfigDict(extra="allow")
@ -836,22 +1012,24 @@ async def generate_openai_chat_completion(
url_idx: Optional[int] = None,
user=Depends(get_verified_user),
):
try:
completion_form = OpenAIChatCompletionForm(**form_data)
except Exception as e:
log.exception(e)
raise HTTPException(
status_code=400,
detail=str(e),
)
payload = {**completion_form.model_dump(exclude_none=True, exclude=["metadata"])}
if "metadata" in payload:
del payload["metadata"]
model_id = completion_form.model
if app.state.config.ENABLE_MODEL_FILTER:
if user.role == "user" and model_id not in app.state.config.MODEL_FILTER_LIST:
raise HTTPException(
status_code=403,
detail="Model not found",
)
if ":" not in model_id:
model_id = f"{model_id}:latest"
model_info = Models.get_model_by_id(model_id)
if model_info:
if model_info.base_model_id:
payload["model"] = model_info.base_model_id
@ -862,12 +1040,36 @@ async def generate_openai_chat_completion(
payload = apply_model_params_to_body_openai(params, payload)
payload = apply_model_system_prompt_to_body(params, payload, user)
# Check if user has access to the model
if user.role == "user":
if not (
user.id == model_info.user_id
or has_access(
user.id, type="read", access_control=model_info.access_control
)
):
raise HTTPException(
status_code=403,
detail="Model not found",
)
else:
if user.role != "admin":
raise HTTPException(
status_code=403,
detail="Model not found",
)
if ":" not in payload["model"]:
payload["model"] = f"{payload['model']}:latest"
url = get_ollama_url(url_idx, payload["model"])
url = await get_ollama_url(url_idx, payload["model"])
log.info(f"url: {url}")
api_config = app.state.config.OLLAMA_API_CONFIGS.get(url, {})
prefix_id = api_config.get("prefix_id", None)
if prefix_id:
payload["model"] = payload["model"].replace(f"{prefix_id}.", "")
return await post_streaming_url(
f"{url}/v1/chat/completions",
json.dumps(payload),
@ -881,31 +1083,19 @@ async def get_openai_models(
url_idx: Optional[int] = None,
user=Depends(get_verified_user),
):
models = []
if url_idx is None:
models = await get_all_models()
if app.state.config.ENABLE_MODEL_FILTER:
if user.role == "user":
models["models"] = list(
filter(
lambda model: model["name"]
in app.state.config.MODEL_FILTER_LIST,
models["models"],
)
)
return {
"data": [
model_list = await get_all_models()
models = [
{
"id": model["model"],
"object": "model",
"created": int(time.time()),
"owned_by": "openai",
}
for model in models["models"]
],
"object": "list",
}
for model in model_list["models"]
]
else:
url = app.state.config.OLLAMA_BASE_URLS[url_idx]
@ -913,10 +1103,9 @@ async def get_openai_models(
r = requests.request(method="GET", url=f"{url}/api/tags")
r.raise_for_status()
models = r.json()
model_list = r.json()
return {
"data": [
models = [
{
"id": model["model"],
"object": "model",
@ -924,10 +1113,7 @@ async def get_openai_models(
"owned_by": "openai",
}
for model in models["models"]
],
"object": "list",
}
]
except Exception as e:
log.exception(e)
error_detail = "Open WebUI: Server Connection Error"
@ -944,6 +1130,23 @@ async def get_openai_models(
detail=error_detail,
)
if user.role == "user":
# Filter models based on user access control
filtered_models = []
for model in models:
model_info = Models.get_model_by_id(model["id"])
if model_info:
if user.id == model_info.user_id or has_access(
user.id, type="read", access_control=model_info.access_control
):
filtered_models.append(model)
models = filtered_models
return {
"data": models,
"object": "list",
}
class UrlForm(BaseModel):
url: str

View file

@ -11,20 +11,20 @@ from open_webui.apps.webui.models.models import Models
from open_webui.config import (
CACHE_DIR,
CORS_ALLOW_ORIGIN,
ENABLE_MODEL_FILTER,
ENABLE_OPENAI_API,
MODEL_FILTER_LIST,
OPENAI_API_BASE_URLS,
OPENAI_API_KEYS,
OPENAI_API_CONFIGS,
AppConfig,
)
from open_webui.env import (
AIOHTTP_CLIENT_TIMEOUT,
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST,
ENABLE_FORWARD_USER_INFO_HEADERS,
)
from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS
from open_webui.env import ENV, SRC_LOG_LEVELS
from fastapi import Depends, FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import FileResponse, StreamingResponse
@ -37,11 +37,20 @@ from open_webui.utils.payload import (
)
from open_webui.utils.utils import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["OPENAI"])
app = FastAPI()
app = FastAPI(
docs_url="/docs" if ENV == "dev" else None,
openapi_url="/openapi.json" if ENV == "dev" else None,
redoc_url=None,
)
app.add_middleware(
CORSMiddleware,
allow_origins=CORS_ALLOW_ORIGIN,
@ -52,69 +61,66 @@ app.add_middleware(
app.state.config = AppConfig()
app.state.config.ENABLE_MODEL_FILTER = ENABLE_MODEL_FILTER
app.state.config.MODEL_FILTER_LIST = MODEL_FILTER_LIST
app.state.config.ENABLE_OPENAI_API = ENABLE_OPENAI_API
app.state.config.OPENAI_API_BASE_URLS = OPENAI_API_BASE_URLS
app.state.config.OPENAI_API_KEYS = OPENAI_API_KEYS
app.state.MODELS = {}
@app.middleware("http")
async def check_url(request: Request, call_next):
if len(app.state.MODELS) == 0:
await get_all_models()
response = await call_next(request)
return response
app.state.config.OPENAI_API_CONFIGS = OPENAI_API_CONFIGS
@app.get("/config")
async def get_config(user=Depends(get_admin_user)):
return {"ENABLE_OPENAI_API": app.state.config.ENABLE_OPENAI_API}
return {
"ENABLE_OPENAI_API": app.state.config.ENABLE_OPENAI_API,
"OPENAI_API_BASE_URLS": app.state.config.OPENAI_API_BASE_URLS,
"OPENAI_API_KEYS": app.state.config.OPENAI_API_KEYS,
"OPENAI_API_CONFIGS": app.state.config.OPENAI_API_CONFIGS,
}
class OpenAIConfigForm(BaseModel):
enable_openai_api: Optional[bool] = None
ENABLE_OPENAI_API: Optional[bool] = None
OPENAI_API_BASE_URLS: list[str]
OPENAI_API_KEYS: list[str]
OPENAI_API_CONFIGS: dict
@app.post("/config/update")
async def update_config(form_data: OpenAIConfigForm, user=Depends(get_admin_user)):
app.state.config.ENABLE_OPENAI_API = form_data.enable_openai_api
return {"ENABLE_OPENAI_API": app.state.config.ENABLE_OPENAI_API}
app.state.config.ENABLE_OPENAI_API = form_data.ENABLE_OPENAI_API
app.state.config.OPENAI_API_BASE_URLS = form_data.OPENAI_API_BASE_URLS
app.state.config.OPENAI_API_KEYS = form_data.OPENAI_API_KEYS
class UrlsUpdateForm(BaseModel):
urls: list[str]
# Check if the number of API keys matches the number of API URLs
if len(app.state.config.OPENAI_API_KEYS) != len(
app.state.config.OPENAI_API_BASE_URLS
):
if len(app.state.config.OPENAI_API_KEYS) > len(
app.state.config.OPENAI_API_BASE_URLS
):
app.state.config.OPENAI_API_KEYS = app.state.config.OPENAI_API_KEYS[
: len(app.state.config.OPENAI_API_BASE_URLS)
]
else:
app.state.config.OPENAI_API_KEYS += [""] * (
len(app.state.config.OPENAI_API_BASE_URLS)
- len(app.state.config.OPENAI_API_KEYS)
)
app.state.config.OPENAI_API_CONFIGS = form_data.OPENAI_API_CONFIGS
class KeysUpdateForm(BaseModel):
keys: list[str]
# Remove any extra configs
config_urls = app.state.config.OPENAI_API_CONFIGS.keys()
for idx, url in enumerate(app.state.config.OPENAI_API_BASE_URLS):
if url not in config_urls:
app.state.config.OPENAI_API_CONFIGS.pop(url, None)
@app.get("/urls")
async def get_openai_urls(user=Depends(get_admin_user)):
return {"OPENAI_API_BASE_URLS": app.state.config.OPENAI_API_BASE_URLS}
@app.post("/urls/update")
async def update_openai_urls(form_data: UrlsUpdateForm, user=Depends(get_admin_user)):
await get_all_models()
app.state.config.OPENAI_API_BASE_URLS = form_data.urls
return {"OPENAI_API_BASE_URLS": app.state.config.OPENAI_API_BASE_URLS}
@app.get("/keys")
async def get_openai_keys(user=Depends(get_admin_user)):
return {"OPENAI_API_KEYS": app.state.config.OPENAI_API_KEYS}
@app.post("/keys/update")
async def update_openai_key(form_data: KeysUpdateForm, user=Depends(get_admin_user)):
app.state.config.OPENAI_API_KEYS = form_data.keys
return {"OPENAI_API_KEYS": app.state.config.OPENAI_API_KEYS}
return {
"ENABLE_OPENAI_API": app.state.config.ENABLE_OPENAI_API,
"OPENAI_API_BASE_URLS": app.state.config.OPENAI_API_BASE_URLS,
"OPENAI_API_KEYS": app.state.config.OPENAI_API_KEYS,
"OPENAI_API_CONFIGS": app.state.config.OPENAI_API_CONFIGS,
}
@app.post("/audio/speech")
@ -140,6 +146,11 @@ async def speech(request: Request, user=Depends(get_verified_user)):
if "openrouter.ai" in app.state.config.OPENAI_API_BASE_URLS[idx]:
headers["HTTP-Referer"] = "https://openwebui.com/"
headers["X-Title"] = "Open WebUI"
if ENABLE_FORWARD_USER_INFO_HEADERS:
headers["X-OpenWebUI-User-Name"] = user.name
headers["X-OpenWebUI-User-Id"] = user.id
headers["X-OpenWebUI-User-Email"] = user.email
headers["X-OpenWebUI-User-Role"] = user.role
r = None
try:
r = requests.post(
@ -181,10 +192,10 @@ async def speech(request: Request, user=Depends(get_verified_user)):
raise HTTPException(status_code=401, detail=ERROR_MESSAGES.OPENAI_NOT_FOUND)
async def fetch_url(url, key):
async def aiohttp_get(url, key=None):
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
try:
headers = {"Authorization": f"Bearer {key}"}
headers = {"Authorization": f"Bearer {key}"} if key else {}
async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
async with session.get(url, headers=headers) as response:
return await response.json()
@ -239,12 +250,8 @@ def merge_models_lists(model_lists):
return merged_list
def is_openai_api_disabled():
return not app.state.config.ENABLE_OPENAI_API
async def get_all_models_raw() -> list:
if is_openai_api_disabled():
async def get_all_models_responses() -> list:
if not app.state.config.ENABLE_OPENAI_API:
return []
# Check if the number of API keys matches the number of API URLs
@ -260,33 +267,69 @@ async def get_all_models_raw() -> list:
else:
app.state.config.OPENAI_API_KEYS += [""] * (num_urls - num_keys)
tasks = [
fetch_url(f"{url}/models", app.state.config.OPENAI_API_KEYS[idx])
for idx, url in enumerate(app.state.config.OPENAI_API_BASE_URLS)
]
tasks = []
for idx, url in enumerate(app.state.config.OPENAI_API_BASE_URLS):
if url not in app.state.config.OPENAI_API_CONFIGS:
tasks.append(
aiohttp_get(f"{url}/models", app.state.config.OPENAI_API_KEYS[idx])
)
else:
api_config = app.state.config.OPENAI_API_CONFIGS.get(url, {})
enable = api_config.get("enable", True)
model_ids = api_config.get("model_ids", [])
if enable:
if len(model_ids) == 0:
tasks.append(
aiohttp_get(
f"{url}/models", app.state.config.OPENAI_API_KEYS[idx]
)
)
else:
model_list = {
"object": "list",
"data": [
{
"id": model_id,
"name": model_id,
"owned_by": "openai",
"openai": {"id": model_id},
"urlIdx": idx,
}
for model_id in model_ids
],
}
tasks.append(asyncio.ensure_future(asyncio.sleep(0, model_list)))
responses = await asyncio.gather(*tasks)
for idx, response in enumerate(responses):
if response:
url = app.state.config.OPENAI_API_BASE_URLS[idx]
api_config = app.state.config.OPENAI_API_CONFIGS.get(url, {})
prefix_id = api_config.get("prefix_id", None)
if prefix_id:
for model in (
response if isinstance(response, list) else response.get("data", [])
):
model["id"] = f"{prefix_id}.{model['id']}"
log.debug(f"get_all_models:responses() {responses}")
return responses
@overload
async def get_all_models(raw: Literal[True]) -> list: ...
@overload
async def get_all_models(raw: Literal[False] = False) -> dict[str, list]: ...
async def get_all_models(raw=False) -> dict[str, list] | list:
async def get_all_models() -> dict[str, list]:
log.info("get_all_models()")
if is_openai_api_disabled():
return [] if raw else {"data": []}
responses = await get_all_models_raw()
if raw:
return responses
if not app.state.config.ENABLE_OPENAI_API:
return {"data": []}
responses = await get_all_models_responses()
def extract_data(response):
if response and "data" in response:
@ -296,9 +339,7 @@ async def get_all_models(raw=False) -> dict[str, list] | list:
return None
models = {"data": merge_models_lists(map(extract_data, responses))}
log.debug(f"models: {models}")
app.state.MODELS = {model["id"]: model for model in models["data"]}
return models
@ -306,18 +347,12 @@ async def get_all_models(raw=False) -> dict[str, list] | list:
@app.get("/models")
@app.get("/models/{url_idx}")
async def get_models(url_idx: Optional[int] = None, user=Depends(get_verified_user)):
models = {
"data": [],
}
if url_idx is None:
models = await get_all_models()
if app.state.config.ENABLE_MODEL_FILTER:
if user.role == "user":
models["data"] = list(
filter(
lambda model: model["id"] in app.state.config.MODEL_FILTER_LIST,
models["data"],
)
)
return models
return models
else:
url = app.state.config.OPENAI_API_BASE_URLS[url_idx]
key = app.state.config.OPENAI_API_KEYS[url_idx]
@ -326,19 +361,34 @@ async def get_models(url_idx: Optional[int] = None, user=Depends(get_verified_us
headers["Authorization"] = f"Bearer {key}"
headers["Content-Type"] = "application/json"
if ENABLE_FORWARD_USER_INFO_HEADERS:
headers["X-OpenWebUI-User-Name"] = user.name
headers["X-OpenWebUI-User-Id"] = user.id
headers["X-OpenWebUI-User-Email"] = user.email
headers["X-OpenWebUI-User-Role"] = user.role
r = None
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
async with aiohttp.ClientSession(timeout=timeout) as session:
try:
r = requests.request(method="GET", url=f"{url}/models", headers=headers)
r.raise_for_status()
async with session.get(f"{url}/models", headers=headers) as r:
if r.status != 200:
# Extract response error details if available
error_detail = f"HTTP Error: {r.status}"
res = await r.json()
if "error" in res:
error_detail = f"External Error: {res['error']}"
raise Exception(error_detail)
response_data = r.json()
response_data = await r.json()
# Check if we're calling OpenAI API based on the URL
if "api.openai.com" in url:
# Filter the response data
# Filter models according to the specified conditions
response_data["data"] = [
model
for model in response_data["data"]
for model in response_data.get("data", [])
if not any(
name in model["id"]
for name in [
@ -352,30 +402,85 @@ async def get_models(url_idx: Optional[int] = None, user=Depends(get_verified_us
)
]
return response_data
except Exception as e:
log.exception(e)
error_detail = "Open WebUI: Server Connection Error"
if r is not None:
try:
res = r.json()
if "error" in res:
error_detail = f"External: {res['error']}"
except Exception:
error_detail = f"External: {e}"
models = response_data
except aiohttp.ClientError as e:
# ClientError covers all aiohttp requests issues
log.exception(f"Client error: {str(e)}")
# Handle aiohttp-specific connection issues, timeout etc.
raise HTTPException(
status_code=r.status_code if r else 500,
detail=error_detail,
status_code=500, detail="Open WebUI: Server Connection Error"
)
except Exception as e:
log.exception(f"Unexpected error: {e}")
# Generic error handler in case parsing JSON or other steps fail
error_detail = f"Unexpected error: {str(e)}"
raise HTTPException(status_code=500, detail=error_detail)
if user.role == "user":
# Filter models based on user access control
filtered_models = []
for model in models.get("data", []):
model_info = Models.get_model_by_id(model["id"])
if model_info:
if user.id == model_info.user_id or has_access(
user.id, type="read", access_control=model_info.access_control
):
filtered_models.append(model)
models["data"] = filtered_models
return models
class ConnectionVerificationForm(BaseModel):
url: str
key: str
@app.post("/verify")
async def verify_connection(
form_data: ConnectionVerificationForm, user=Depends(get_admin_user)
):
url = form_data.url
key = form_data.key
headers = {}
headers["Authorization"] = f"Bearer {key}"
headers["Content-Type"] = "application/json"
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
async with aiohttp.ClientSession(timeout=timeout) as session:
try:
async with session.get(f"{url}/models", headers=headers) as r:
if r.status != 200:
# Extract response error details if available
error_detail = f"HTTP Error: {r.status}"
res = await r.json()
if "error" in res:
error_detail = f"External Error: {res['error']}"
raise Exception(error_detail)
response_data = await r.json()
return response_data
except aiohttp.ClientError as e:
# ClientError covers all aiohttp requests issues
log.exception(f"Client error: {str(e)}")
# Handle aiohttp-specific connection issues, timeout etc.
raise HTTPException(
status_code=500, detail="Open WebUI: Server Connection Error"
)
except Exception as e:
log.exception(f"Unexpected error: {e}")
# Generic error handler in case parsing JSON or other steps fail
error_detail = f"Unexpected error: {str(e)}"
raise HTTPException(status_code=500, detail=error_detail)
@app.post("/chat/completions")
@app.post("/chat/completions/{url_idx}")
async def generate_chat_completion(
form_data: dict,
url_idx: Optional[int] = None,
user=Depends(get_verified_user),
bypass_filter: Optional[bool] = False,
):
idx = 0
payload = {**form_data}
@ -386,6 +491,7 @@ async def generate_chat_completion(
model_id = form_data.get("model")
model_info = Models.get_model_by_id(model_id)
# Check model info and override the payload
if model_info:
if model_info.base_model_id:
payload["model"] = model_info.base_model_id
@ -394,9 +500,52 @@ async def generate_chat_completion(
payload = apply_model_params_to_body_openai(params, payload)
payload = apply_model_system_prompt_to_body(params, payload, user)
model = app.state.MODELS[payload.get("model")]
idx = model["urlIdx"]
# Check if user has access to the model
if not bypass_filter and user.role == "user":
if not (
user.id == model_info.user_id
or has_access(
user.id, type="read", access_control=model_info.access_control
)
):
raise HTTPException(
status_code=403,
detail="Model not found",
)
elif not bypass_filter:
if user.role != "admin":
raise HTTPException(
status_code=403,
detail="Model not found",
)
# Attempt to get urlIdx from the model
models = await get_all_models()
# Find the model from the list
model = next(
(model for model in models["data"] if model["id"] == payload.get("model")),
None,
)
if model:
idx = model["urlIdx"]
else:
raise HTTPException(
status_code=404,
detail="Model not found",
)
# Get the API config for the model
api_config = app.state.config.OPENAI_API_CONFIGS.get(
app.state.config.OPENAI_API_BASE_URLS[idx], {}
)
prefix_id = api_config.get("prefix_id", None)
if prefix_id:
payload["model"] = payload["model"].replace(f"{prefix_id}.", "")
# Add user info to the payload if the model is a pipeline
if "pipeline" in model and model.get("pipeline"):
payload["user"] = {
"name": user.name,
@ -407,8 +556,9 @@ async def generate_chat_completion(
url = app.state.config.OPENAI_API_BASE_URLS[idx]
key = app.state.config.OPENAI_API_KEYS[idx]
# Fix: o1 models do not support the "max_tokens" parameter; use "max_completion_tokens" instead
is_o1 = payload["model"].lower().startswith("o1-")
# Change max_completion_tokens to max_tokens (backward compatible)
if "api.openai.com" not in url and not is_o1:
if "max_completion_tokens" in payload:
@ -437,6 +587,11 @@ async def generate_chat_completion(
if "openrouter.ai" in app.state.config.OPENAI_API_BASE_URLS[idx]:
headers["HTTP-Referer"] = "https://openwebui.com/"
headers["X-Title"] = "Open WebUI"
if ENABLE_FORWARD_USER_INFO_HEADERS:
headers["X-OpenWebUI-User-Name"] = user.name
headers["X-OpenWebUI-User-Id"] = user.id
headers["X-OpenWebUI-User-Email"] = user.email
headers["X-OpenWebUI-User-Role"] = user.role
r = None
session = None
@ -505,6 +660,11 @@ async def proxy(path: str, request: Request, user=Depends(get_verified_user)):
headers = {}
headers["Authorization"] = f"Bearer {key}"
headers["Content-Type"] = "application/json"
if ENABLE_FORWARD_USER_INFO_HEADERS:
headers["X-OpenWebUI-User-Name"] = user.name
headers["X-OpenWebUI-User-Id"] = user.id
headers["X-OpenWebUI-User-Email"] = user.email
headers["X-OpenWebUI-User-Role"] = user.role
r = None
session = None

View file

@ -159,7 +159,7 @@ class Loader:
elif file_ext in ["htm", "html"]:
loader = BSHTMLLoader(file_path, open_encoding="unicode_escape")
elif file_ext == "md":
            loader = TextLoader(file_path, autodetect_encoding=True)
elif file_content_type == "application/epub+zip":
loader = UnstructuredEPubLoader(file_path)
elif (

View file

@ -0,0 +1,98 @@
from typing import Any, Dict, Generator, List, Optional, Sequence, Union
from urllib.parse import parse_qs, urlparse
from langchain_core.documents import Document
ALLOWED_SCHEMES = {"http", "https"}
ALLOWED_NETLOCS = {
"youtu.be",
"m.youtube.com",
"youtube.com",
"www.youtube.com",
"www.youtube-nocookie.com",
"vid.plus",
}
def _parse_video_id(url: str) -> Optional[str]:
"""Parse a YouTube URL and return the video ID if valid, otherwise None."""
parsed_url = urlparse(url)
if parsed_url.scheme not in ALLOWED_SCHEMES:
return None
if parsed_url.netloc not in ALLOWED_NETLOCS:
return None
path = parsed_url.path
if path.endswith("/watch"):
query = parsed_url.query
parsed_query = parse_qs(query)
if "v" in parsed_query:
ids = parsed_query["v"]
video_id = ids if isinstance(ids, str) else ids[0]
else:
return None
else:
path = parsed_url.path.lstrip("/")
video_id = path.split("/")[-1]
if len(video_id) != 11: # Video IDs are 11 characters long
return None
return video_id
class YoutubeLoader:
"""Load `YouTube` video transcripts."""
def __init__(
self,
video_id: str,
language: Union[str, Sequence[str]] = "en",
):
"""Initialize with YouTube video ID."""
_video_id = _parse_video_id(video_id)
self.video_id = _video_id if _video_id is not None else video_id
self._metadata = {"source": video_id}
        if isinstance(language, str):
            self.language = [language]
        else:
            self.language = language
def load(self) -> List[Document]:
"""Load YouTube transcripts into `Document` objects."""
try:
from youtube_transcript_api import (
NoTranscriptFound,
TranscriptsDisabled,
YouTubeTranscriptApi,
)
except ImportError:
raise ImportError(
'Could not import "youtube_transcript_api" Python package. '
"Please install it with `pip install youtube-transcript-api`."
)
try:
transcript_list = YouTubeTranscriptApi.list_transcripts(self.video_id)
except Exception as e:
print(e)
return []
try:
transcript = transcript_list.find_transcript(self.language)
except NoTranscriptFound:
transcript = transcript_list.find_transcript(["en"])
transcript_pieces: List[Dict[str, Any]] = transcript.fetch()
transcript = " ".join(
map(
lambda transcript_piece: transcript_piece["text"].strip(" "),
transcript_pieces,
)
)
return [Document(page_content=transcript, metadata=self._metadata)]
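
A minimal usage sketch for the new loader (illustrative only; the URL is a placeholder, and the import path assumes the module location shown in this diff):

    from open_webui.apps.retrieval.loaders.youtube import YoutubeLoader

    loader = YoutubeLoader("https://www.youtube.com/watch?v=VIDEO_ID", language="en")
    docs = loader.load()  # returns [] when no transcript could be fetched
    if docs:
        print(docs[0].metadata)           # {"source": "<url or id passed in>"}
        print(docs[0].page_content[:200])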

View file

@ -23,6 +23,7 @@ from open_webui.apps.retrieval.vector.connector import VECTOR_DB_CLIENT
# Document loaders
from open_webui.apps.retrieval.loaders.main import Loader
from open_webui.apps.retrieval.loaders.youtube import YoutubeLoader
# Web search engines
from open_webui.apps.retrieval.web.main import SearchResult
@ -38,6 +39,7 @@ from open_webui.apps.retrieval.web.serper import search_serper
from open_webui.apps.retrieval.web.serply import search_serply
from open_webui.apps.retrieval.web.serpstack import search_serpstack
from open_webui.apps.retrieval.web.tavily import search_tavily
from open_webui.apps.retrieval.web.bing import search_bing
from open_webui.apps.retrieval.utils import (
@ -76,6 +78,8 @@ from open_webui.config import (
RAG_FILE_MAX_SIZE,
RAG_OPENAI_API_BASE_URL,
RAG_OPENAI_API_KEY,
RAG_OLLAMA_BASE_URL,
RAG_OLLAMA_API_KEY,
RAG_RELEVANCE_THRESHOLD,
RAG_RERANKING_MODEL,
RAG_RERANKING_MODEL_AUTO_UPDATE,
@ -87,6 +91,7 @@ from open_webui.config import (
RAG_WEB_SEARCH_DOMAIN_FILTER_LIST,
RAG_WEB_SEARCH_ENGINE,
RAG_WEB_SEARCH_RESULT_COUNT,
JINA_API_KEY,
SEARCHAPI_API_KEY,
SEARCHAPI_ENGINE,
SEARXNG_QUERY_URL,
@ -95,13 +100,20 @@ from open_webui.config import (
SERPSTACK_API_KEY,
SERPSTACK_HTTPS,
TAVILY_API_KEY,
BING_SEARCH_V7_ENDPOINT,
BING_SEARCH_V7_SUBSCRIPTION_KEY,
TIKA_SERVER_URL,
UPLOAD_DIR,
YOUTUBE_LOADER_LANGUAGE,
DEFAULT_LOCALE,
AppConfig,
)
from open_webui.constants import ERROR_MESSAGES
from open_webui.env import (
    ENV,
    SRC_LOG_LEVELS,
    DEVICE_TYPE,
    DOCKER,
)
from open_webui.utils.misc import (
calculate_sha256,
calculate_sha256_string,
@ -111,16 +123,17 @@ from open_webui.utils.misc import (
from open_webui.utils.utils import get_admin_user, get_verified_user
from langchain.text_splitter import RecursiveCharacterTextSplitter, TokenTextSplitter
from langchain_core.documents import Document
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])
app = FastAPI(
    docs_url="/docs" if ENV == "dev" else None,
    openapi_url="/openapi.json" if ENV == "dev" else None,
    redoc_url=None,
)
app.state.config = AppConfig()
@ -152,6 +165,9 @@ app.state.config.RAG_TEMPLATE = RAG_TEMPLATE
app.state.config.OPENAI_API_BASE_URL = RAG_OPENAI_API_BASE_URL
app.state.config.OPENAI_API_KEY = RAG_OPENAI_API_KEY
app.state.config.OLLAMA_BASE_URL = RAG_OLLAMA_BASE_URL
app.state.config.OLLAMA_API_KEY = RAG_OLLAMA_API_KEY
app.state.config.PDF_EXTRACT_IMAGES = PDF_EXTRACT_IMAGES
app.state.config.YOUTUBE_LOADER_LANGUAGE = YOUTUBE_LOADER_LANGUAGE
@ -174,6 +190,10 @@ app.state.config.SERPLY_API_KEY = SERPLY_API_KEY
app.state.config.TAVILY_API_KEY = TAVILY_API_KEY
app.state.config.SEARCHAPI_API_KEY = SEARCHAPI_API_KEY
app.state.config.SEARCHAPI_ENGINE = SEARCHAPI_ENGINE
app.state.config.JINA_API_KEY = JINA_API_KEY
app.state.config.BING_SEARCH_V7_ENDPOINT = BING_SEARCH_V7_ENDPOINT
app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY = BING_SEARCH_V7_SUBSCRIPTION_KEY
app.state.config.RAG_WEB_SEARCH_RESULT_COUNT = RAG_WEB_SEARCH_RESULT_COUNT
app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS = RAG_WEB_SEARCH_CONCURRENT_REQUESTS
@ -185,11 +205,15 @@ def update_embedding_model(
if embedding_model and app.state.config.RAG_EMBEDDING_ENGINE == "":
from sentence_transformers import SentenceTransformer
try:
app.state.sentence_transformer_ef = SentenceTransformer(
get_model_path(embedding_model, auto_update),
device=DEVICE_TYPE,
trust_remote_code=RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE,
)
except Exception as e:
log.debug(f"Error loading SentenceTransformer: {e}")
app.state.sentence_transformer_ef = None
else:
app.state.sentence_transformer_ef = None
@ -243,8 +267,16 @@ app.state.EMBEDDING_FUNCTION = get_embedding_function(
app.state.config.RAG_EMBEDDING_ENGINE,
app.state.config.RAG_EMBEDDING_MODEL,
app.state.sentence_transformer_ef,
    (
        app.state.config.OPENAI_API_BASE_URL
        if app.state.config.RAG_EMBEDDING_ENGINE == "openai"
        else app.state.config.OLLAMA_BASE_URL
    ),
(
app.state.config.OPENAI_API_KEY
if app.state.config.RAG_EMBEDDING_ENGINE == "openai"
else app.state.config.OLLAMA_API_KEY
),
app.state.config.RAG_EMBEDDING_BATCH_SIZE,
)
@ -294,6 +326,10 @@ async def get_embedding_config(user=Depends(get_admin_user)):
"url": app.state.config.OPENAI_API_BASE_URL,
"key": app.state.config.OPENAI_API_KEY,
},
"ollama_config": {
"url": app.state.config.OLLAMA_BASE_URL,
"key": app.state.config.OLLAMA_API_KEY,
},
}
@ -310,8 +346,14 @@ class OpenAIConfigForm(BaseModel):
key: str
class OllamaConfigForm(BaseModel):
url: str
key: str
class EmbeddingModelUpdateForm(BaseModel):
openai_config: Optional[OpenAIConfigForm] = None
ollama_config: Optional[OllamaConfigForm] = None
embedding_engine: str
embedding_model: str
embedding_batch_size: Optional[int] = 1
@ -332,6 +374,11 @@ async def update_embedding_config(
if form_data.openai_config is not None:
app.state.config.OPENAI_API_BASE_URL = form_data.openai_config.url
app.state.config.OPENAI_API_KEY = form_data.openai_config.key
if form_data.ollama_config is not None:
app.state.config.OLLAMA_BASE_URL = form_data.ollama_config.url
app.state.config.OLLAMA_API_KEY = form_data.ollama_config.key
app.state.config.RAG_EMBEDDING_BATCH_SIZE = form_data.embedding_batch_size
update_embedding_model(app.state.config.RAG_EMBEDDING_MODEL)
@ -340,8 +387,16 @@ async def update_embedding_config(
app.state.config.RAG_EMBEDDING_ENGINE,
app.state.config.RAG_EMBEDDING_MODEL,
app.state.sentence_transformer_ef,
            (
                app.state.config.OPENAI_API_BASE_URL
                if app.state.config.RAG_EMBEDDING_ENGINE == "openai"
                else app.state.config.OLLAMA_BASE_URL
            ),
(
app.state.config.OPENAI_API_KEY
if app.state.config.RAG_EMBEDDING_ENGINE == "openai"
else app.state.config.OLLAMA_API_KEY
),
app.state.config.RAG_EMBEDDING_BATCH_SIZE,
)
@ -354,6 +409,10 @@ async def update_embedding_config(
"url": app.state.config.OPENAI_API_BASE_URL,
"key": app.state.config.OPENAI_API_KEY,
},
"ollama_config": {
"url": app.state.config.OLLAMA_BASE_URL,
"key": app.state.config.OLLAMA_API_KEY,
},
}
except Exception as e:
log.exception(f"Problem updating embedding model: {e}")
@ -414,7 +473,7 @@ async def get_rag_config(user=Depends(get_admin_user)):
"translation": app.state.YOUTUBE_LOADER_TRANSLATION,
},
"web": {
"ssl_verification": app.state.config.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION,
"web_loader_ssl_verification": app.state.config.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION,
"search": {
"enabled": app.state.config.ENABLE_RAG_WEB_SEARCH,
"engine": app.state.config.RAG_WEB_SEARCH_ENGINE,
@ -430,6 +489,9 @@ async def get_rag_config(user=Depends(get_admin_user)):
"tavily_api_key": app.state.config.TAVILY_API_KEY,
"searchapi_api_key": app.state.config.SEARCHAPI_API_KEY,
"seaarchapi_engine": app.state.config.SEARCHAPI_ENGINE,
"jina_api_key": app.state.config.JINA_API_KEY,
"bing_search_v7_endpoint": app.state.config.BING_SEARCH_V7_ENDPOINT,
"bing_search_v7_subscription_key": app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
"result_count": app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
"concurrent_requests": app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS,
},
@ -473,6 +535,9 @@ class WebSearchConfig(BaseModel):
tavily_api_key: Optional[str] = None
searchapi_api_key: Optional[str] = None
searchapi_engine: Optional[str] = None
jina_api_key: Optional[str] = None
bing_search_v7_endpoint: Optional[str] = None
bing_search_v7_subscription_key: Optional[str] = None
result_count: Optional[int] = None
concurrent_requests: Optional[int] = None
@ -519,6 +584,7 @@ async def update_rag_config(form_data: ConfigUpdateForm, user=Depends(get_admin_
if form_data.web is not None:
app.state.config.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION = (
# Note: When UI "Bypass SSL verification for Websites"=True then ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION=False
form_data.web.web_loader_ssl_verification
)
@ -540,6 +606,15 @@ async def update_rag_config(form_data: ConfigUpdateForm, user=Depends(get_admin_
app.state.config.TAVILY_API_KEY = form_data.web.search.tavily_api_key
app.state.config.SEARCHAPI_API_KEY = form_data.web.search.searchapi_api_key
app.state.config.SEARCHAPI_ENGINE = form_data.web.search.searchapi_engine
app.state.config.JINA_API_KEY = form_data.web.search.jina_api_key
app.state.config.BING_SEARCH_V7_ENDPOINT = (
form_data.web.search.bing_search_v7_endpoint
)
app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY = (
form_data.web.search.bing_search_v7_subscription_key
)
app.state.config.RAG_WEB_SEARCH_RESULT_COUNT = form_data.web.search.result_count
app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS = (
form_data.web.search.concurrent_requests
@ -566,7 +641,7 @@ async def update_rag_config(form_data: ConfigUpdateForm, user=Depends(get_admin_
"translation": app.state.YOUTUBE_LOADER_TRANSLATION,
},
"web": {
"ssl_verification": app.state.config.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION,
"web_loader_ssl_verification": app.state.config.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION,
"search": {
"enabled": app.state.config.ENABLE_RAG_WEB_SEARCH,
"engine": app.state.config.RAG_WEB_SEARCH_ENGINE,
@ -582,6 +657,9 @@ async def update_rag_config(form_data: ConfigUpdateForm, user=Depends(get_admin_
"serachapi_api_key": app.state.config.SEARCHAPI_API_KEY,
"searchapi_engine": app.state.config.SEARCHAPI_ENGINE,
"tavily_api_key": app.state.config.TAVILY_API_KEY,
"jina_api_key": app.state.config.JINA_API_KEY,
"bing_search_v7_endpoint": app.state.config.BING_SEARCH_V7_ENDPOINT,
"bing_search_v7_subscription_key": app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
"result_count": app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
"concurrent_requests": app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS,
},
@ -643,6 +721,23 @@ async def update_query_settings(
####################################
def _get_docs_info(docs: list[Document]) -> str:
docs_info = set()
    # Try to select metadata that identifies the document.
for doc in docs:
metadata = getattr(doc, "metadata", {})
doc_name = metadata.get("name", "")
if not doc_name:
doc_name = metadata.get("title", "")
if not doc_name:
doc_name = metadata.get("source", "")
if doc_name:
docs_info.add(doc_name)
return ", ".join(docs_info)
def save_docs_to_vector_db(
docs,
collection_name,
@ -651,7 +746,9 @@ def save_docs_to_vector_db(
split: bool = True,
add: bool = False,
) -> bool:
log.info(f"save_docs_to_vector_db {docs} {collection_name}")
log.info(
f"save_docs_to_vector_db: document {_get_docs_info(docs)} {collection_name}"
)
# Check if entries with the same hash (metadata.hash) already exist
if metadata and "hash" in metadata:
@ -733,8 +830,16 @@ def save_docs_to_vector_db(
app.state.config.RAG_EMBEDDING_ENGINE,
app.state.config.RAG_EMBEDDING_MODEL,
app.state.sentence_transformer_ef,
            (
                app.state.config.OPENAI_API_BASE_URL
                if app.state.config.RAG_EMBEDDING_ENGINE == "openai"
                else app.state.config.OLLAMA_BASE_URL
            ),
(
app.state.config.OPENAI_API_KEY
if app.state.config.RAG_EMBEDDING_ENGINE == "openai"
else app.state.config.OLLAMA_API_KEY
),
app.state.config.RAG_EMBEDDING_BATCH_SIZE,
)
@ -959,12 +1064,10 @@ def process_youtube_video(form_data: ProcessUrlForm, user=Depends(get_verified_u
if not collection_name:
collection_name = calculate_sha256_string(form_data.url)[:63]
        loader = YoutubeLoader(
            form_data.url, language=app.state.config.YOUTUBE_LOADER_LANGUAGE
        )
docs = loader.load()
content = " ".join([doc.page_content for doc in docs])
log.debug(f"text_content: {content}")
@ -1150,7 +1253,20 @@ def search_web(engine: str, query: str) -> list[SearchResult]:
else:
raise Exception("No SEARCHAPI_API_KEY found in environment variables")
elif engine == "jina":
        return search_jina(
            app.state.config.JINA_API_KEY,
            query,
            app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
        )
elif engine == "bing":
return search_bing(
app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
app.state.config.BING_SEARCH_V7_ENDPOINT,
str(DEFAULT_LOCALE),
query,
app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
app.state.config.RAG_WEB_SEARCH_DOMAIN_FILTER_LIST,
)
else:
raise Exception("No search engine API key found in environment variables")
@ -1180,8 +1296,12 @@ def process_web_search(form_data: SearchForm, user=Depends(get_verified_user)):
urls = [result.link for result in web_results]
        loader = get_web_loader(
            urls,
            verify_ssl=app.state.config.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION,
            requests_per_second=app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS,
        )
        docs = loader.aload()
save_docs_to_vector_db(docs, collection_name, overwrite=True)

View file

@ -3,6 +3,7 @@ import os
import uuid
from typing import Optional, Union
import asyncio
import requests
from huggingface_hub import snapshot_download
@ -10,11 +11,6 @@ from langchain.retrievers import ContextualCompressionRetriever, EnsembleRetriev
from langchain_community.retrievers import BM25Retriever
from langchain_core.documents import Document
from open_webui.apps.ollama.main import (
GenerateEmbedForm,
generate_ollama_batch_embeddings,
)
from open_webui.apps.retrieval.vector.connector import VECTOR_DB_CLIENT
from open_webui.utils.misc import get_last_user_message
@ -76,7 +72,7 @@ def query_doc(
limit=k,
)
log.info(f"query_doc:result {result}")
log.info(f"query_doc:result {result.ids} {result.metadatas}")
return result
except Exception as e:
print(e)
@ -127,7 +123,10 @@ def query_doc_with_hybrid_search(
"metadatas": [[d.metadata for d in result]],
}
log.info(f"query_doc_with_hybrid_search:result {result}")
log.info(
"query_doc_with_hybrid_search:result "
+ f'{result["metadatas"]} {result["distances"]}'
)
return result
except Exception as e:
raise e
@ -178,14 +177,13 @@ def merge_and_sort_query_results(
def query_collection(
collection_names: list[str],
    queries: list[str],
embedding_function,
k: int,
) -> dict:
results = []
for query in queries:
query_embedding = embedding_function(query)
for collection_name in collection_names:
if collection_name:
try:
@ -206,7 +204,7 @@ def query_collection(
def query_collection_with_hybrid_search(
collection_names: list[str],
    queries: list[str],
embedding_function,
k: int,
reranking_function,
@ -216,6 +214,7 @@ def query_collection_with_hybrid_search(
error = False
for collection_name in collection_names:
try:
for query in queries:
result = query_doc_with_hybrid_search(
collection_name=collection_name,
query=query,
@ -281,8 +280,8 @@ def get_embedding_function(
embedding_engine,
embedding_model,
embedding_function,
    url,
    key,
embedding_batch_size,
):
if embedding_engine == "":
@ -292,8 +291,8 @@ def get_embedding_function(
engine=embedding_engine,
model=embedding_model,
text=query,
            url=url,
            key=key,
)
def generate_multiple(query, func):
@ -310,15 +309,14 @@ def get_embedding_function(
def get_rag_context(
files,
    queries,
embedding_function,
k,
reranking_function,
r,
hybrid_search,
):
log.debug(f"files: {files} {messages} {embedding_function} {reranking_function}")
query = get_last_user_message(messages)
log.debug(f"files: {files} {queries} {embedding_function} {reranking_function}")
extracted_collections = []
relevant_contexts = []
@ -360,7 +358,7 @@ def get_rag_context(
try:
context = query_collection_with_hybrid_search(
collection_names=collection_names,
                    queries=queries,
embedding_function=embedding_function,
k=k,
reranking_function=reranking_function,
@ -375,7 +373,7 @@ def get_rag_context(
if (not hybrid_search) or (context is None):
context = query_collection(
collection_names=collection_names,
                queries=queries,
embedding_function=embedding_function,
k=k,
)
@ -467,7 +465,7 @@ def get_model_path(model: str, update_model: bool = False):
def generate_openai_batch_embeddings(
    model: str, texts: list[str], url: str = "https://api.openai.com/v1", key: str = ""
) -> Optional[list[list[float]]]:
try:
r = requests.post(
@ -489,29 +487,50 @@ def generate_openai_batch_embeddings(
return None
def generate_ollama_batch_embeddings(
model: str, texts: list[str], url: str, key: str
) -> Optional[list[list[float]]]:
try:
r = requests.post(
f"{url}/api/embed",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {key}",
},
json={"input": texts, "model": model},
)
r.raise_for_status()
data = r.json()
        log.debug(f"generate_ollama_batch_embeddings: {data}")
        if "embeddings" in data:
            return data["embeddings"]
        else:
            raise ValueError("Something went wrong :/")
except Exception as e:
print(e)
return None
def generate_embeddings(engine: str, model: str, text: Union[str, list[str]], **kwargs):
url = kwargs.get("url", "")
key = kwargs.get("key", "")
if engine == "ollama":
        if isinstance(text, list):
            embeddings = generate_ollama_batch_embeddings(
                **{"model": model, "texts": text, "url": url, "key": key}
            )
        else:
            embeddings = generate_ollama_batch_embeddings(
                **{"model": model, "texts": [text], "url": url, "key": key}
            )
        return embeddings[0] if isinstance(text, str) else embeddings
    elif engine == "openai":
        if isinstance(text, list):
            embeddings = generate_openai_batch_embeddings(model, text, url, key)
        else:
            embeddings = generate_openai_batch_embeddings(model, [text], url, key)
        return embeddings[0] if isinstance(text, str) else embeddings
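
A quick sketch of how the reworked helper is called (illustrative only; the model names, URL, and key below are placeholders, not values taken from this diff):

    # Single string against a local Ollama instance
    emb = generate_embeddings(
        engine="ollama",
        model="nomic-embed-text",
        text="hello world",
        url="http://localhost:11434",
        key="",
    )

    # Batch of strings against an OpenAI-compatible endpoint
    embs = generate_embeddings(
        engine="openai",
        model="text-embedding-3-small",
        text=["hello", "world"],
        url="https://api.openai.com/v1",
        key="sk-...",
    )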

View file

@ -8,6 +8,14 @@ elif VECTOR_DB == "qdrant":
from open_webui.apps.retrieval.vector.dbs.qdrant import QdrantClient
VECTOR_DB_CLIENT = QdrantClient()
elif VECTOR_DB == "opensearch":
from open_webui.apps.retrieval.vector.dbs.opensearch import OpenSearchClient
VECTOR_DB_CLIENT = OpenSearchClient()
elif VECTOR_DB == "pgvector":
from open_webui.apps.retrieval.vector.dbs.pgvector import PgvectorClient
VECTOR_DB_CLIENT = PgvectorClient()
else:
from open_webui.apps.retrieval.vector.dbs.chroma import ChromaClient

View file

@ -13,11 +13,24 @@ from open_webui.config import (
CHROMA_HTTP_SSL,
CHROMA_TENANT,
CHROMA_DATABASE,
CHROMA_CLIENT_AUTH_PROVIDER,
CHROMA_CLIENT_AUTH_CREDENTIALS,
)
class ChromaClient:
def __init__(self):
settings_dict = {
"allow_reset": True,
"anonymized_telemetry": False,
}
if CHROMA_CLIENT_AUTH_PROVIDER is not None:
settings_dict["chroma_client_auth_provider"] = CHROMA_CLIENT_AUTH_PROVIDER
if CHROMA_CLIENT_AUTH_CREDENTIALS is not None:
settings_dict["chroma_client_auth_credentials"] = (
CHROMA_CLIENT_AUTH_CREDENTIALS
)
if CHROMA_HTTP_HOST != "":
self.client = chromadb.HttpClient(
host=CHROMA_HTTP_HOST,
@ -26,12 +39,12 @@ class ChromaClient:
ssl=CHROMA_HTTP_SSL,
tenant=CHROMA_TENANT,
database=CHROMA_DATABASE,
settings=Settings(allow_reset=True, anonymized_telemetry=False),
settings=Settings(**settings_dict),
)
else:
self.client = chromadb.PersistentClient(
path=CHROMA_DATA_PATH,
settings=Settings(allow_reset=True, anonymized_telemetry=False),
settings=Settings(**settings_dict),
tenant=CHROMA_TENANT,
database=CHROMA_DATABASE,
)

View file

@ -0,0 +1,178 @@
from opensearchpy import OpenSearch
from typing import Optional
from open_webui.apps.retrieval.vector.main import VectorItem, SearchResult, GetResult
from open_webui.config import (
OPENSEARCH_URI,
OPENSEARCH_SSL,
OPENSEARCH_CERT_VERIFY,
OPENSEARCH_USERNAME,
OPENSEARCH_PASSWORD,
)
class OpenSearchClient:
def __init__(self):
self.index_prefix = "open_webui"
self.client = OpenSearch(
hosts=[OPENSEARCH_URI],
use_ssl=OPENSEARCH_SSL,
verify_certs=OPENSEARCH_CERT_VERIFY,
http_auth=(OPENSEARCH_USERNAME, OPENSEARCH_PASSWORD),
)
def _result_to_get_result(self, result) -> GetResult:
ids = []
documents = []
metadatas = []
for hit in result["hits"]["hits"]:
ids.append(hit["_id"])
documents.append(hit["_source"].get("text"))
metadatas.append(hit["_source"].get("metadata"))
return GetResult(ids=ids, documents=documents, metadatas=metadatas)
def _result_to_search_result(self, result) -> SearchResult:
ids = []
distances = []
documents = []
metadatas = []
for hit in result["hits"]["hits"]:
ids.append(hit["_id"])
distances.append(hit["_score"])
documents.append(hit["_source"].get("text"))
metadatas.append(hit["_source"].get("metadata"))
return SearchResult(
ids=ids, distances=distances, documents=documents, metadatas=metadatas
)
def _create_index(self, index_name: str, dimension: int):
body = {
"mappings": {
"properties": {
"id": {"type": "keyword"},
"vector": {
"type": "dense_vector",
"dims": dimension, # Adjust based on your vector dimensions
"index": true,
"similarity": "faiss",
"method": {
"name": "hnsw",
"space_type": "ip", # Use inner product to approximate cosine similarity
"engine": "faiss",
"ef_construction": 128,
"m": 16,
},
},
"text": {"type": "text"},
"metadata": {"type": "object"},
}
}
}
self.client.indices.create(index=f"{self.index_prefix}_{index_name}", body=body)
def _create_batches(self, items: list[VectorItem], batch_size=100):
for i in range(0, len(items), batch_size):
yield items[i : i + batch_size]
def has_collection(self, index_name: str) -> bool:
# has_collection here means has index.
# We are simply adapting to the norms of the other DBs.
return self.client.indices.exists(index=f"{self.index_prefix}_{index_name}")
    def delete_collection(self, index_name: str):
# delete_collection here means delete index.
# We are simply adapting to the norms of the other DBs.
self.client.indices.delete(index=f"{self.index_prefix}_{index_name}")
def search(
self, index_name: str, vectors: list[list[float]], limit: int
) -> Optional[SearchResult]:
query = {
"size": limit,
"_source": ["text", "metadata"],
"query": {
"script_score": {
"query": {"match_all": {}},
"script": {
"source": "cosineSimilarity(params.vector, 'vector') + 1.0",
"params": {
"vector": vectors[0]
}, # Assuming single query vector
},
}
},
}
result = self.client.search(
index=f"{self.index_prefix}_{index_name}", body=query
)
return self._result_to_search_result(result)
def get_or_create_index(self, index_name: str, dimension: int):
        if not self.has_collection(index_name):
self._create_index(index_name, dimension)
def get(self, index_name: str) -> Optional[GetResult]:
query = {"query": {"match_all": {}}, "_source": ["text", "metadata"]}
result = self.client.search(
index=f"{self.index_prefix}_{index_name}", body=query
)
return self._result_to_get_result(result)
def insert(self, index_name: str, items: list[VectorItem]):
        if not self.has_collection(index_name):
self._create_index(index_name, dimension=len(items[0]["vector"]))
        for batch in self._create_batches(items):
            actions = []
            for item in batch:
                # The bulk API expects an action line followed by the document source
                actions.append(
                    {
                        "index": {
                            "_index": f"{self.index_prefix}_{index_name}",
                            "_id": item["id"],
                        }
                    }
                )
                actions.append(
                    {
                        "vector": item["vector"],
                        "text": item["text"],
                        "metadata": item["metadata"],
                    }
                )
            self.client.bulk(body=actions)
def upsert(self, index_name: str, items: list[VectorItem]):
        if not self.has_collection(index_name):
self._create_index(index_name, dimension=len(items[0]["vector"]))
        for batch in self._create_batches(items):
            actions = []
            for item in batch:
                # Indexing with the same _id overwrites the existing document, which
                # gives upsert semantics for this adapter
                actions.append(
                    {
                        "index": {
                            "_index": f"{self.index_prefix}_{index_name}",
                            "_id": item["id"],
                        }
                    }
                )
                actions.append(
                    {
                        "vector": item["vector"],
                        "text": item["text"],
                        "metadata": item["metadata"],
                    }
                )
            self.client.bulk(body=actions)
def delete(self, index_name: str, ids: list[str]):
actions = [
{"delete": {"_index": f"{self.index_prefix}_{index_name}", "_id": id}}
for id in ids
]
self.client.bulk(body=actions)
def reset(self):
indices = self.client.indices.get(index=f"{self.index_prefix}_*")
for index in indices:
self.client.indices.delete(index=index)
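
An illustrative smoke test for the client above (assumes a reachable OpenSearch cluster configured via OPENSEARCH_URI, OPENSEARCH_USERNAME, and OPENSEARCH_PASSWORD; the 3-dimensional vectors and names are placeholders):

    client = OpenSearchClient()
    items = [
        {
            "id": "doc-1",
            "text": "hello world",
            "vector": [0.1, 0.2, 0.3],
            "metadata": {"name": "demo"},
        }
    ]
    client.insert("demo_collection", items)
    result = client.search("demo_collection", vectors=[[0.1, 0.2, 0.3]], limit=1)
    print(result)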

View file

@ -0,0 +1,354 @@
from typing import Optional, List, Dict, Any
from sqlalchemy import (
cast,
column,
create_engine,
Column,
Integer,
select,
text,
Text,
values,
)
from sqlalchemy.sql import true
from sqlalchemy.pool import NullPool
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker
from sqlalchemy.dialects.postgresql import JSONB, array
from pgvector.sqlalchemy import Vector
from sqlalchemy.ext.mutable import MutableDict
from open_webui.apps.retrieval.vector.main import VectorItem, SearchResult, GetResult
from open_webui.config import PGVECTOR_DB_URL
VECTOR_LENGTH = 1536
Base = declarative_base()
class DocumentChunk(Base):
__tablename__ = "document_chunk"
id = Column(Text, primary_key=True)
vector = Column(Vector(dim=VECTOR_LENGTH), nullable=True)
collection_name = Column(Text, nullable=False)
text = Column(Text, nullable=True)
vmetadata = Column(MutableDict.as_mutable(JSONB), nullable=True)
class PgvectorClient:
def __init__(self) -> None:
# if no pgvector uri, use the existing database connection
if not PGVECTOR_DB_URL:
from open_webui.apps.webui.internal.db import Session
self.session = Session
else:
engine = create_engine(
PGVECTOR_DB_URL, pool_pre_ping=True, poolclass=NullPool
)
SessionLocal = sessionmaker(
autocommit=False, autoflush=False, bind=engine, expire_on_commit=False
)
self.session = scoped_session(SessionLocal)
try:
# Ensure the pgvector extension is available
self.session.execute(text("CREATE EXTENSION IF NOT EXISTS vector;"))
# Create the tables if they do not exist
# Base.metadata.create_all requires a bind (engine or connection)
# Get the connection from the session
connection = self.session.connection()
Base.metadata.create_all(bind=connection)
# Create an index on the vector column if it doesn't exist
self.session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_document_chunk_vector "
"ON document_chunk USING ivfflat (vector vector_cosine_ops) WITH (lists = 100);"
)
)
self.session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_document_chunk_collection_name "
"ON document_chunk (collection_name);"
)
)
self.session.commit()
print("Initialization complete.")
except Exception as e:
self.session.rollback()
print(f"Error during initialization: {e}")
raise
def adjust_vector_length(self, vector: List[float]) -> List[float]:
# Adjust vector to have length VECTOR_LENGTH
current_length = len(vector)
if current_length < VECTOR_LENGTH:
# Pad the vector with zeros
vector += [0.0] * (VECTOR_LENGTH - current_length)
elif current_length > VECTOR_LENGTH:
raise Exception(
f"Vector length {current_length} not supported. Max length must be <= {VECTOR_LENGTH}"
)
return vector
def insert(self, collection_name: str, items: List[VectorItem]) -> None:
try:
new_items = []
for item in items:
vector = self.adjust_vector_length(item["vector"])
new_chunk = DocumentChunk(
id=item["id"],
vector=vector,
collection_name=collection_name,
text=item["text"],
vmetadata=item["metadata"],
)
new_items.append(new_chunk)
self.session.bulk_save_objects(new_items)
self.session.commit()
print(
f"Inserted {len(new_items)} items into collection '{collection_name}'."
)
except Exception as e:
self.session.rollback()
print(f"Error during insert: {e}")
raise
def upsert(self, collection_name: str, items: List[VectorItem]) -> None:
try:
for item in items:
vector = self.adjust_vector_length(item["vector"])
existing = (
self.session.query(DocumentChunk)
.filter(DocumentChunk.id == item["id"])
.first()
)
if existing:
existing.vector = vector
existing.text = item["text"]
existing.vmetadata = item["metadata"]
existing.collection_name = (
collection_name # Update collection_name if necessary
)
else:
new_chunk = DocumentChunk(
id=item["id"],
vector=vector,
collection_name=collection_name,
text=item["text"],
vmetadata=item["metadata"],
)
self.session.add(new_chunk)
self.session.commit()
print(f"Upserted {len(items)} items into collection '{collection_name}'.")
except Exception as e:
self.session.rollback()
print(f"Error during upsert: {e}")
raise
def search(
self,
collection_name: str,
vectors: List[List[float]],
limit: Optional[int] = None,
) -> Optional[SearchResult]:
try:
if not vectors:
return None
# Adjust query vectors to VECTOR_LENGTH
vectors = [self.adjust_vector_length(vector) for vector in vectors]
num_queries = len(vectors)
def vector_expr(vector):
return cast(array(vector), Vector(VECTOR_LENGTH))
# Create the values for query vectors
qid_col = column("qid", Integer)
q_vector_col = column("q_vector", Vector(VECTOR_LENGTH))
query_vectors = (
values(qid_col, q_vector_col)
.data(
[(idx, vector_expr(vector)) for idx, vector in enumerate(vectors)]
)
.alias("query_vectors")
)
# Build the lateral subquery for each query vector
subq = (
select(
DocumentChunk.id,
DocumentChunk.text,
DocumentChunk.vmetadata,
(
DocumentChunk.vector.cosine_distance(query_vectors.c.q_vector)
).label("distance"),
)
.where(DocumentChunk.collection_name == collection_name)
.order_by(
(DocumentChunk.vector.cosine_distance(query_vectors.c.q_vector))
)
)
if limit is not None:
subq = subq.limit(limit)
subq = subq.lateral("result")
# Build the main query by joining query_vectors and the lateral subquery
stmt = (
select(
query_vectors.c.qid,
subq.c.id,
subq.c.text,
subq.c.vmetadata,
subq.c.distance,
)
.select_from(query_vectors)
.join(subq, true())
.order_by(query_vectors.c.qid, subq.c.distance)
)
result_proxy = self.session.execute(stmt)
results = result_proxy.all()
ids = [[] for _ in range(num_queries)]
distances = [[] for _ in range(num_queries)]
documents = [[] for _ in range(num_queries)]
metadatas = [[] for _ in range(num_queries)]
if not results:
return SearchResult(
ids=ids,
distances=distances,
documents=documents,
metadatas=metadatas,
)
for row in results:
qid = int(row.qid)
ids[qid].append(row.id)
distances[qid].append(row.distance)
documents[qid].append(row.text)
metadatas[qid].append(row.vmetadata)
return SearchResult(
ids=ids, distances=distances, documents=documents, metadatas=metadatas
)
except Exception as e:
print(f"Error during search: {e}")
return None
def query(
self, collection_name: str, filter: Dict[str, Any], limit: Optional[int] = None
) -> Optional[GetResult]:
try:
query = self.session.query(DocumentChunk).filter(
DocumentChunk.collection_name == collection_name
)
for key, value in filter.items():
query = query.filter(DocumentChunk.vmetadata[key].astext == str(value))
if limit is not None:
query = query.limit(limit)
results = query.all()
if not results:
return None
ids = [[result.id for result in results]]
documents = [[result.text for result in results]]
metadatas = [[result.vmetadata for result in results]]
return GetResult(
ids=ids,
documents=documents,
metadatas=metadatas,
)
except Exception as e:
print(f"Error during query: {e}")
return None
def get(
self, collection_name: str, limit: Optional[int] = None
) -> Optional[GetResult]:
try:
query = self.session.query(DocumentChunk).filter(
DocumentChunk.collection_name == collection_name
)
if limit is not None:
query = query.limit(limit)
results = query.all()
if not results:
return None
ids = [[result.id for result in results]]
documents = [[result.text for result in results]]
metadatas = [[result.vmetadata for result in results]]
return GetResult(ids=ids, documents=documents, metadatas=metadatas)
except Exception as e:
print(f"Error during get: {e}")
return None
def delete(
self,
collection_name: str,
ids: Optional[List[str]] = None,
filter: Optional[Dict[str, Any]] = None,
) -> None:
try:
query = self.session.query(DocumentChunk).filter(
DocumentChunk.collection_name == collection_name
)
if ids:
query = query.filter(DocumentChunk.id.in_(ids))
if filter:
for key, value in filter.items():
query = query.filter(
DocumentChunk.vmetadata[key].astext == str(value)
)
deleted = query.delete(synchronize_session=False)
self.session.commit()
print(f"Deleted {deleted} items from collection '{collection_name}'.")
except Exception as e:
self.session.rollback()
print(f"Error during delete: {e}")
raise
def reset(self) -> None:
try:
deleted = self.session.query(DocumentChunk).delete()
self.session.commit()
print(
f"Reset complete. Deleted {deleted} items from 'document_chunk' table."
)
except Exception as e:
self.session.rollback()
print(f"Error during reset: {e}")
raise
def close(self) -> None:
pass
def has_collection(self, collection_name: str) -> bool:
try:
exists = (
self.session.query(DocumentChunk)
.filter(DocumentChunk.collection_name == collection_name)
.first()
is not None
)
return exists
except Exception as e:
print(f"Error checking collection existence: {e}")
return False
def delete_collection(self, collection_name: str) -> None:
self.delete(collection_name)
print(f"Collection '{collection_name}' deleted.")

View file

@ -5,7 +5,7 @@ from qdrant_client.http.models import PointStruct
from qdrant_client.models import models
from open_webui.apps.retrieval.vector.main import VectorItem, SearchResult, GetResult
from open_webui.config import QDRANT_URI, QDRANT_API_KEY
NO_LIMIT = 999999999
@ -14,7 +14,12 @@ class QdrantClient:
def __init__(self):
self.collection_prefix = "open-webui"
self.QDRANT_URI = QDRANT_URI
        self.QDRANT_API_KEY = QDRANT_API_KEY
self.client = (
Qclient(url=self.QDRANT_URI, api_key=self.QDRANT_API_KEY)
if self.QDRANT_URI
else None
)
def _result_to_get_result(self, points) -> GetResult:
ids = []

View file

@ -0,0 +1,73 @@
import logging
import os
from pprint import pprint
from typing import Optional
import requests
from open_webui.apps.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS
import argparse
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])
"""
Documentation: https://docs.microsoft.com/en-us/bing/search-apis/bing-web-search/overview
"""
def search_bing(
subscription_key: str,
endpoint: str,
locale: str,
query: str,
count: int,
filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
mkt = locale
params = {"q": query, "mkt": mkt, "answerCount": count}
headers = {"Ocp-Apim-Subscription-Key": subscription_key}
try:
response = requests.get(endpoint, headers=headers, params=params)
response.raise_for_status()
json_response = response.json()
results = json_response.get("webPages", {}).get("value", [])
if filter_list:
results = get_filtered_results(results, filter_list)
return [
SearchResult(
link=result["url"],
title=result.get("name"),
snippet=result.get("snippet"),
)
for result in results
]
except Exception as ex:
log.error(f"Error: {ex}")
raise ex
def main():
parser = argparse.ArgumentParser(description="Search Bing from the command line.")
parser.add_argument(
"query",
type=str,
default="Top 10 international news today",
help="The search query.",
)
parser.add_argument(
"--count", type=int, default=10, help="Number of search results to return."
)
parser.add_argument(
"--filter", nargs="*", help="List of filters to apply to the search results."
)
parser.add_argument(
"--locale",
type=str,
default="en-US",
help="The locale to use for the search, maps to market in api",
)
args = parser.parse_args()
    # Credentials are read from the environment here for ad-hoc CLI testing
    results = search_bing(
        os.environ.get("BING_SEARCH_V7_SUBSCRIPTION_KEY", ""),
        os.environ.get("BING_SEARCH_V7_ENDPOINT", "https://api.bing.microsoft.com/v7.0/search"),
        args.locale,
        args.query,
        args.count,
        args.filter,
    )
pprint(results)

View file

@ -9,7 +9,7 @@ log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])
def search_jina(api_key: str, query: str, count: int) -> list[SearchResult]:
"""
Search using Jina's Search API and return the results as a list of SearchResult objects.
Args:
@ -20,9 +20,7 @@ def search_jina(query: str, count: int) -> list[SearchResult]:
list[SearchResult]: A list of search results
"""
jina_search_endpoint = "https://s.jina.ai/"
    headers = {"Accept": "application/json", "Authorization": f"Bearer {api_key}"}
url = str(URL(jina_search_endpoint + query))
response = requests.get(url, headers=headers)
response.raise_for_status()

View file

@ -0,0 +1,58 @@
{
"_type": "SearchResponse",
"queryContext": {
"originalQuery": "Top 10 international results"
},
"webPages": {
"webSearchUrl": "https://www.bing.com/search?q=Top+10+international+results",
"totalEstimatedMatches": 687,
"value": [
{
"id": "https://api.bing.microsoft.com/api/v7/#WebPages.0",
"name": "2024 Mexican Grand Prix - F1 results and latest standings ... - PlanetF1",
"url": "https://www.planetf1.com/news/f1-results-2024-mexican-grand-prix-race-standings",
"datePublished": "2024-10-27T00:00:00.0000000",
"datePublishedFreshnessText": "1 day ago",
"isFamilyFriendly": true,
"displayUrl": "https://www.planetf1.com/news/f1-results-2024-mexican-grand-prix-race-standings",
"snippet": "Nico Hulkenberg and Pierre Gasly completed the top 10. A full report of the Mexican Grand Prix is available at the bottom of this article. F1 results 2024 Mexican Grand Prix",
"dateLastCrawled": "2024-10-28T07:15:00.0000000Z",
"cachedPageUrl": "https://cc.bingj.com/cache.aspx?q=Top+10+international+results&d=916492551782&mkt=en-US&setlang=en-US&w=zBsfaAPyF2tUrHFHr_vFFdUm8sng4g34",
"language": "en",
"isNavigational": false,
"noCache": false
},
{
"id": "https://api.bing.microsoft.com/api/v7/#WebPages.1",
"name": "F1 Results Today: HUGE Verstappen penalties cause major title change",
"url": "https://www.gpfans.com/en/f1-news/1033512/f1-results-today-mexican-grand-prix-huge-max-verstappen-penalties-cause-major-title-change/",
"datePublished": "2024-10-27T00:00:00.0000000",
"datePublishedFreshnessText": "1 day ago",
"isFamilyFriendly": true,
"displayUrl": "https://www.gpfans.com/en/f1-news/1033512/f1-results-today-mexican-grand-prix-huge-max...",
"snippet": "Elsewhere, Mercedes duo Lewis Hamilton and George Russell came home in P4 and P5 respectively. Meanwhile, the surprise package of the day were Haas, with both Kevin Magnussen and Nico Hulkenberg finishing inside the points.. READ MORE: RB star issues apology after red flag CRASH at Mexican GP Mexican Grand Prix 2024 results. 1. Carlos Sainz [Ferrari] 2. Lando Norris [McLaren] - +4.705",
"dateLastCrawled": "2024-10-28T06:06:00.0000000Z",
"cachedPageUrl": "https://cc.bingj.com/cache.aspx?q=Top+10+international+results&d=2840656522642&mkt=en-US&setlang=en-US&w=-Tbkwxnq52jZCvG7l3CtgcwT1vwAjIUD",
"language": "en",
"isNavigational": false,
"noCache": false
},
{
"id": "https://api.bing.microsoft.com/api/v7/#WebPages.2",
"name": "International Power Rankings: England flying, Kangaroos cruising, Fiji rise",
"url": "https://www.loverugbyleague.com/post/international-power-rankings-england-flying-kangaroos-cruising-fiji-rise",
"datePublished": "2024-10-28T00:00:00.0000000",
"datePublishedFreshnessText": "7 hours ago",
"isFamilyFriendly": true,
"displayUrl": "https://www.loverugbyleague.com/post/international-power-rankings-england-flying...",
"snippet": "LRL RECOMMENDS: England player ratings from first Test against Samoa as omnificent George Williams scores perfect 10. 2. Australia (Men) SAME. The Kangaroos remain 2nd in our Power Rankings after their 22-10 win against New Zealand in Christchurch on Sunday. As was the case in their win against Tonga last week, Mal Meningas side weren ...",
"dateLastCrawled": "2024-10-28T07:09:00.0000000Z",
"cachedPageUrl": "https://cc.bingj.com/cache.aspx?q=Top+10+international+results&d=1535008462672&mkt=en-US&setlang=en-US&w=82ujhH4Kp0iuhCS7wh1xLUFYUeetaVVm",
"language": "en",
"isNavigational": false,
"noCache": false
}
],
"someResultsRemoved": true
}
}

View file

@ -1,3 +1,5 @@
# TODO: move socket to webui app
import asyncio
import socketio
import logging

View file

@ -12,6 +12,7 @@ from open_webui.apps.webui.routers import (
chats,
folders,
configs,
groups,
files,
functions,
memories,
@ -34,6 +35,7 @@ from open_webui.config import (
ENABLE_LOGIN_FORM,
ENABLE_MESSAGE_RATING,
ENABLE_SIGNUP,
ENABLE_API_KEY,
ENABLE_EVALUATION_ARENA_MODELS,
EVALUATION_ARENA_MODELS,
DEFAULT_ARENA_MODEL,
@ -50,9 +52,22 @@ from open_webui.config import (
WEBHOOK_URL,
WEBUI_AUTH,
WEBUI_BANNERS,
ENABLE_LDAP,
LDAP_SERVER_LABEL,
LDAP_SERVER_HOST,
LDAP_SERVER_PORT,
LDAP_ATTRIBUTE_FOR_USERNAME,
LDAP_SEARCH_FILTERS,
LDAP_SEARCH_BASE,
LDAP_APP_DN,
LDAP_APP_PASSWORD,
LDAP_USE_TLS,
LDAP_CA_CERT_FILE,
LDAP_CIPHERS,
AppConfig,
)
from open_webui.env import (
ENV,
WEBUI_AUTH_TRUSTED_EMAIL_HEADER,
WEBUI_AUTH_TRUSTED_NAME_HEADER,
)
@ -72,7 +87,11 @@ from open_webui.utils.payload import (
from open_webui.utils.tools import get_tools
app = FastAPI(
    docs_url="/docs" if ENV == "dev" else None,
    openapi_url="/openapi.json" if ENV == "dev" else None,
    redoc_url=None,
)
log = logging.getLogger(__name__)
@ -80,6 +99,8 @@ app.state.config = AppConfig()
app.state.config.ENABLE_SIGNUP = ENABLE_SIGNUP
app.state.config.ENABLE_LOGIN_FORM = ENABLE_LOGIN_FORM
app.state.config.ENABLE_API_KEY = ENABLE_API_KEY
app.state.config.JWT_EXPIRES_IN = JWT_EXPIRES_IN
app.state.AUTH_TRUSTED_EMAIL_HEADER = WEBUI_AUTH_TRUSTED_EMAIL_HEADER
app.state.AUTH_TRUSTED_NAME_HEADER = WEBUI_AUTH_TRUSTED_NAME_HEADER
@ -92,6 +113,8 @@ app.state.config.ADMIN_EMAIL = ADMIN_EMAIL
app.state.config.DEFAULT_MODELS = DEFAULT_MODELS
app.state.config.DEFAULT_PROMPT_SUGGESTIONS = DEFAULT_PROMPT_SUGGESTIONS
app.state.config.DEFAULT_USER_ROLE = DEFAULT_USER_ROLE
app.state.config.USER_PERMISSIONS = USER_PERMISSIONS
app.state.config.WEBHOOK_URL = WEBHOOK_URL
app.state.config.BANNERS = WEBUI_BANNERS
@ -111,7 +134,19 @@ app.state.config.OAUTH_ROLES_CLAIM = OAUTH_ROLES_CLAIM
app.state.config.OAUTH_ALLOWED_ROLES = OAUTH_ALLOWED_ROLES
app.state.config.OAUTH_ADMIN_ROLES = OAUTH_ADMIN_ROLES
app.state.MODELS = {}
app.state.config.ENABLE_LDAP = ENABLE_LDAP
app.state.config.LDAP_SERVER_LABEL = LDAP_SERVER_LABEL
app.state.config.LDAP_SERVER_HOST = LDAP_SERVER_HOST
app.state.config.LDAP_SERVER_PORT = LDAP_SERVER_PORT
app.state.config.LDAP_ATTRIBUTE_FOR_USERNAME = LDAP_ATTRIBUTE_FOR_USERNAME
app.state.config.LDAP_APP_DN = LDAP_APP_DN
app.state.config.LDAP_APP_PASSWORD = LDAP_APP_PASSWORD
app.state.config.LDAP_SEARCH_BASE = LDAP_SEARCH_BASE
app.state.config.LDAP_SEARCH_FILTERS = LDAP_SEARCH_FILTERS
app.state.config.LDAP_USE_TLS = LDAP_USE_TLS
app.state.config.LDAP_CA_CERT_FILE = LDAP_CA_CERT_FILE
app.state.config.LDAP_CIPHERS = LDAP_CIPHERS
app.state.TOOLS = {}
app.state.FUNCTIONS = {}
@ -135,13 +170,15 @@ app.include_router(models.router, prefix="/models", tags=["models"])
app.include_router(knowledge.router, prefix="/knowledge", tags=["knowledge"])
app.include_router(prompts.router, prefix="/prompts", tags=["prompts"])
app.include_router(tools.router, prefix="/tools", tags=["tools"])
app.include_router(memories.router, prefix="/memories", tags=["memories"])
app.include_router(folders.router, prefix="/folders", tags=["folders"])
app.include_router(groups.router, prefix="/groups", tags=["groups"])
app.include_router(files.router, prefix="/files", tags=["files"])
app.include_router(functions.router, prefix="/functions", tags=["functions"])
app.include_router(evaluations.router, prefix="/evaluations", tags=["evaluations"])
app.include_router(utils.router, prefix="/utils", tags=["utils"])
@ -336,7 +373,7 @@ def get_function_params(function_module, form_data, user, extra_params=None):
return params
async def generate_function_chat_completion(form_data, user, models: dict = {}):
model_id = form_data.get("model")
model_info = Models.get_model_by_id(model_id)
@ -372,6 +409,7 @@ async def generate_function_chat_completion(form_data, user):
"name": user.name,
"role": user.role,
},
"__metadata__": metadata,
}
extra_params["__tools__"] = get_tools(
app,
@ -379,7 +417,7 @@ async def generate_function_chat_completion(form_data, user):
user,
{
**extra_params,
"__model__": app.state.MODELS[form_data["model"]],
"__model__": models.get(form_data["model"], None),
"__messages__": form_data["messages"],
"__files__": files,
},

View file

@ -64,6 +64,11 @@ class SigninForm(BaseModel):
password: str
class LdapForm(BaseModel):
user: str
password: str
class ProfileImageUrlForm(BaseModel):
profile_image_url: str

View file

@ -203,15 +203,22 @@ class ChatTable:
def update_shared_chat_by_chat_id(self, chat_id: str) -> Optional[ChatModel]:
try:
            with get_db() as db:
                chat = db.get(Chat, chat_id)
                shared_chat = (
                    db.query(Chat).filter_by(user_id=f"shared-{chat_id}").first()
                )

                if shared_chat is None:
                    return self.insert_shared_chat_by_chat_id(chat_id)

                shared_chat.title = chat.title
                shared_chat.chat = chat.chat

                shared_chat.updated_at = int(time.time())
                db.commit()
                db.refresh(shared_chat)

                return ChatModel.model_validate(shared_chat)
except Exception:
return None

View file

@ -1,157 +0,0 @@
import json
import logging
import time
from typing import Optional
from open_webui.apps.webui.internal.db import Base, get_db
from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])
####################
# Documents DB Schema
####################
class Document(Base):
__tablename__ = "document"
collection_name = Column(String, primary_key=True)
name = Column(String, unique=True)
title = Column(Text)
filename = Column(Text)
content = Column(Text, nullable=True)
user_id = Column(String)
timestamp = Column(BigInteger)
class DocumentModel(BaseModel):
model_config = ConfigDict(from_attributes=True)
collection_name: str
name: str
title: str
filename: str
content: Optional[str] = None
user_id: str
timestamp: int # timestamp in epoch
####################
# Forms
####################
class DocumentResponse(BaseModel):
collection_name: str
name: str
title: str
filename: str
content: Optional[dict] = None
user_id: str
timestamp: int # timestamp in epoch
class DocumentUpdateForm(BaseModel):
name: str
title: str
class DocumentForm(DocumentUpdateForm):
collection_name: str
filename: str
content: Optional[str] = None
class DocumentsTable:
def insert_new_doc(
self, user_id: str, form_data: DocumentForm
) -> Optional[DocumentModel]:
with get_db() as db:
document = DocumentModel(
**{
**form_data.model_dump(),
"user_id": user_id,
"timestamp": int(time.time()),
}
)
try:
result = Document(**document.model_dump())
db.add(result)
db.commit()
db.refresh(result)
if result:
return DocumentModel.model_validate(result)
else:
return None
except Exception:
return None
def get_doc_by_name(self, name: str) -> Optional[DocumentModel]:
try:
with get_db() as db:
document = db.query(Document).filter_by(name=name).first()
return DocumentModel.model_validate(document) if document else None
except Exception:
return None
def get_docs(self) -> list[DocumentModel]:
with get_db() as db:
return [
DocumentModel.model_validate(doc) for doc in db.query(Document).all()
]
def update_doc_by_name(
self, name: str, form_data: DocumentUpdateForm
) -> Optional[DocumentModel]:
try:
with get_db() as db:
db.query(Document).filter_by(name=name).update(
{
"title": form_data.title,
"name": form_data.name,
"timestamp": int(time.time()),
}
)
db.commit()
return self.get_doc_by_name(form_data.name)
except Exception as e:
log.exception(e)
return None
def update_doc_content_by_name(
self, name: str, updated: dict
) -> Optional[DocumentModel]:
try:
doc = self.get_doc_by_name(name)
doc_content = json.loads(doc.content if doc.content else "{}")
doc_content = {**doc_content, **updated}
with get_db() as db:
db.query(Document).filter_by(name=name).update(
{
"content": json.dumps(doc_content),
"timestamp": int(time.time()),
}
)
db.commit()
return self.get_doc_by_name(name)
except Exception as e:
log.exception(e)
return None
def delete_doc_by_name(self, name: str) -> bool:
try:
with get_db() as db:
db.query(Document).filter_by(name=name).delete()
db.commit()
return True
except Exception:
return False
Documents = DocumentsTable()

View file

@ -0,0 +1,186 @@
import json
import logging
import time
from typing import Optional
import uuid
from open_webui.apps.webui.internal.db import Base, get_db
from open_webui.env import SRC_LOG_LEVELS
from open_webui.apps.webui.models.files import FileMetadataResponse
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text, JSON, func
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])
####################
# UserGroup DB Schema
####################
class Group(Base):
__tablename__ = "group"
id = Column(Text, unique=True, primary_key=True)
user_id = Column(Text)
name = Column(Text)
description = Column(Text)
data = Column(JSON, nullable=True)
meta = Column(JSON, nullable=True)
permissions = Column(JSON, nullable=True)
user_ids = Column(JSON, nullable=True)
created_at = Column(BigInteger)
updated_at = Column(BigInteger)
class GroupModel(BaseModel):
model_config = ConfigDict(from_attributes=True)
id: str
user_id: str
name: str
description: str
data: Optional[dict] = None
meta: Optional[dict] = None
permissions: Optional[dict] = None
user_ids: list[str] = []
created_at: int # timestamp in epoch
updated_at: int # timestamp in epoch
####################
# Forms
####################
class GroupResponse(BaseModel):
id: str
user_id: str
name: str
description: str
permissions: Optional[dict] = None
data: Optional[dict] = None
meta: Optional[dict] = None
user_ids: list[str] = []
created_at: int # timestamp in epoch
updated_at: int # timestamp in epoch
class GroupForm(BaseModel):
name: str
description: str
class GroupUpdateForm(GroupForm):
permissions: Optional[dict] = None
user_ids: Optional[list[str]] = None
admin_ids: Optional[list[str]] = None
class GroupTable:
def insert_new_group(
self, user_id: str, form_data: GroupForm
) -> Optional[GroupModel]:
with get_db() as db:
group = GroupModel(
**{
**form_data.model_dump(),
"id": str(uuid.uuid4()),
"user_id": user_id,
"created_at": int(time.time()),
"updated_at": int(time.time()),
}
)
try:
result = Group(**group.model_dump())
db.add(result)
db.commit()
db.refresh(result)
if result:
return GroupModel.model_validate(result)
else:
return None
except Exception:
return None
def get_groups(self) -> list[GroupModel]:
with get_db() as db:
return [
GroupModel.model_validate(group)
for group in db.query(Group).order_by(Group.updated_at.desc()).all()
]
def get_groups_by_member_id(self, user_id: str) -> list[GroupModel]:
with get_db() as db:
return [
GroupModel.model_validate(group)
for group in db.query(Group)
.filter(
func.json_array_length(Group.user_ids) > 0
) # Ensure array exists
.filter(
Group.user_ids.cast(String).like(f'%"{user_id}"%')
) # String-based check
.order_by(Group.updated_at.desc())
.all()
]
def get_group_by_id(self, id: str) -> Optional[GroupModel]:
try:
with get_db() as db:
group = db.query(Group).filter_by(id=id).first()
return GroupModel.model_validate(group) if group else None
except Exception:
return None
def update_group_by_id(
self, id: str, form_data: GroupUpdateForm, overwrite: bool = False
) -> Optional[GroupModel]:
try:
with get_db() as db:
db.query(Group).filter_by(id=id).update(
{
**form_data.model_dump(exclude_none=True),
"updated_at": int(time.time()),
}
)
db.commit()
return self.get_group_by_id(id=id)
except Exception as e:
log.exception(e)
return None
def delete_group_by_id(self, id: str) -> bool:
try:
with get_db() as db:
db.query(Group).filter_by(id=id).delete()
db.commit()
return True
except Exception:
return False
def delete_all_groups(self) -> bool:
with get_db() as db:
try:
db.query(Group).delete()
db.commit()
return True
except Exception:
return False
Groups = GroupTable()
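
An illustrative sketch of the new table in use (the IDs and names below are placeholders):

    group = Groups.insert_new_group(
        "admin-user-id",
        GroupForm(name="Engineering", description="Internal engineering staff"),
    )
    if group:
        Groups.update_group_by_id(
            group.id,
            GroupUpdateForm(
                name=group.name,
                description=group.description,
                user_ids=["user-1", "user-2"],
            ),
        )
        print(Groups.get_groups_by_member_id("user-1"))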

View file

@ -8,11 +8,13 @@ from open_webui.apps.webui.internal.db import Base, get_db
from open_webui.env import SRC_LOG_LEVELS
from open_webui.apps.webui.models.files import FileMetadataResponse
from open_webui.apps.webui.models.users import Users, UserResponse
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text, JSON
from open_webui.utils.access_control import has_access
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])
@ -34,6 +36,23 @@ class Knowledge(Base):
data = Column(JSON, nullable=True)
meta = Column(JSON, nullable=True)
access_control = Column(JSON, nullable=True) # Controls data access levels.
# Defines access control rules for this entry.
# - `None`: Public access, available to all users with the "user" role.
# - `{}`: Private access, restricted exclusively to the owner.
# - Custom permissions: Specific access control for reading and writing;
# Can specify group or user-level restrictions:
# {
# "read": {
# "group_ids": ["group_id1", "group_id2"],
# "user_ids": ["user_id1", "user_id2"]
# },
# "write": {
# "group_ids": ["group_id1", "group_id2"],
# "user_ids": ["user_id1", "user_id2"]
# }
# }
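    # Example (illustrative): one group may read, a single user may write:
    # access_control = {
    #     "read": {"group_ids": ["group_id1"], "user_ids": []},
    #     "write": {"group_ids": [], "user_ids": ["user_id1"]},
    # }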
created_at = Column(BigInteger)
updated_at = Column(BigInteger)
@ -50,6 +69,8 @@ class KnowledgeModel(BaseModel):
data: Optional[dict] = None
meta: Optional[dict] = None
access_control: Optional[dict] = None
created_at: int # timestamp in epoch
updated_at: int # timestamp in epoch
@ -59,15 +80,15 @@ class KnowledgeModel(BaseModel):
####################
class KnowledgeUserModel(KnowledgeModel):
    user: Optional[UserResponse] = None


class KnowledgeResponse(KnowledgeModel):
    files: Optional[list[FileMetadataResponse | dict]] = None

class KnowledgeUserResponse(KnowledgeUserModel):
files: Optional[list[FileMetadataResponse | dict]] = None
@ -75,12 +96,7 @@ class KnowledgeForm(BaseModel):
name: str
description: str
data: Optional[dict] = None
class KnowledgeUpdateForm(BaseModel):
name: Optional[str] = None
description: Optional[str] = None
data: Optional[dict] = None
access_control: Optional[dict] = None
class KnowledgeTable:
@ -110,13 +126,32 @@ class KnowledgeTable:
except Exception:
return None
def get_knowledge_items(self) -> list[KnowledgeModel]:
def get_knowledge_bases(self) -> list[KnowledgeUserModel]:
with get_db() as db:
knowledge_bases = []
for knowledge in (
db.query(Knowledge).order_by(Knowledge.updated_at.desc()).all()
):
user = Users.get_user_by_id(knowledge.user_id)
knowledge_bases.append(
KnowledgeUserModel.model_validate(
{
**KnowledgeModel.model_validate(knowledge).model_dump(),
"user": user.model_dump() if user else None,
}
)
)
return knowledge_bases
def get_knowledge_bases_by_user_id(
self, user_id: str, permission: str = "write"
) -> list[KnowledgeUserModel]:
knowledge_bases = self.get_knowledge_bases()
return [
KnowledgeModel.model_validate(knowledge)
for knowledge in db.query(Knowledge)
.order_by(Knowledge.updated_at.desc())
.all()
knowledge_base
for knowledge_base in knowledge_bases
if knowledge_base.user_id == user_id
or has_access(user_id, permission, knowledge_base.access_control)
]
def get_knowledge_by_id(self, id: str) -> Optional[KnowledgeModel]:
@ -128,14 +163,32 @@ class KnowledgeTable:
return None
def update_knowledge_by_id(
self, id: str, form_data: KnowledgeUpdateForm, overwrite: bool = False
self, id: str, form_data: KnowledgeForm, overwrite: bool = False
) -> Optional[KnowledgeModel]:
try:
with get_db() as db:
knowledge = self.get_knowledge_by_id(id=id)
db.query(Knowledge).filter_by(id=id).update(
{
**form_data.model_dump(exclude_none=True),
**form_data.model_dump(),
"updated_at": int(time.time()),
}
)
db.commit()
return self.get_knowledge_by_id(id=id)
except Exception as e:
log.exception(e)
return None
def update_knowledge_data_by_id(
self, id: str, data: dict
) -> Optional[KnowledgeModel]:
try:
with get_db() as db:
knowledge = self.get_knowledge_by_id(id=id)
db.query(Knowledge).filter_by(id=id).update(
{
"data": data,
"updated_at": int(time.time()),
}
)

View file


@ -4,8 +4,19 @@ from typing import Optional
from open_webui.apps.webui.internal.db import Base, JSONField, get_db
from open_webui.env import SRC_LOG_LEVELS
from open_webui.apps.webui.models.users import Users, UserResponse
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, Text
from sqlalchemy import or_, and_, func
from sqlalchemy.dialects import postgresql, sqlite
from sqlalchemy import BigInteger, Column, Text, JSON, Boolean
from open_webui.utils.access_control import has_access
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])
@ -67,6 +78,25 @@ class Model(Base):
Holds a JSON encoded blob of metadata, see `ModelMeta`.
"""
access_control = Column(JSON, nullable=True) # Controls data access levels.
# Defines access control rules for this entry.
# - `None`: Public access, available to all users with the "user" role.
# - `{}`: Private access, restricted exclusively to the owner.
# - Custom permissions: Specific access control for reading and writing;
# Can specify group or user-level restrictions:
# {
# "read": {
# "group_ids": ["group_id1", "group_id2"],
# "user_ids": ["user_id1", "user_id2"]
# },
# "write": {
# "group_ids": ["group_id1", "group_id2"],
# "user_ids": ["user_id1", "user_id2"]
# }
# }
is_active = Column(Boolean, default=True)
updated_at = Column(BigInteger)
created_at = Column(BigInteger)
@ -80,6 +110,9 @@ class ModelModel(BaseModel):
params: ModelParams
meta: ModelMeta
access_control: Optional[dict] = None
is_active: bool
updated_at: int # timestamp in epoch
created_at: int # timestamp in epoch
@ -91,12 +124,12 @@ class ModelModel(BaseModel):
####################
class ModelResponse(BaseModel):
id: str
name: str
meta: ModelMeta
updated_at: int # timestamp in epoch
created_at: int # timestamp in epoch
class ModelUserResponse(ModelModel):
user: Optional[UserResponse] = None
class ModelResponse(ModelModel):
pass
class ModelForm(BaseModel):
@ -105,6 +138,8 @@ class ModelForm(BaseModel):
name: str
meta: ModelMeta
params: ModelParams
access_control: Optional[dict] = None
is_active: bool = True
class ModelsTable:
@ -138,6 +173,39 @@ class ModelsTable:
with get_db() as db:
return [ModelModel.model_validate(model) for model in db.query(Model).all()]
def get_models(self) -> list[ModelUserResponse]:
with get_db() as db:
models = []
for model in db.query(Model).filter(Model.base_model_id != None).all():
user = Users.get_user_by_id(model.user_id)
models.append(
ModelUserResponse.model_validate(
{
**ModelModel.model_validate(model).model_dump(),
"user": user.model_dump() if user else None,
}
)
)
return models
def get_base_models(self) -> list[ModelModel]:
with get_db() as db:
return [
ModelModel.model_validate(model)
for model in db.query(Model).filter(Model.base_model_id == None).all()
]
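# Note: rows with base_model_id set are workspace models layered on top of a
# base model, while rows with base_model_id == None are the base models
# themselves; hence the complementary filters in get_models() and
# get_base_models() above.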
def get_models_by_user_id(
self, user_id: str, permission: str = "write"
) -> list[ModelUserResponse]:
models = self.get_models()
return [
model
for model in models
if model.user_id == user_id
or has_access(user_id, permission, model.access_control)
]
def get_model_by_id(self, id: str) -> Optional[ModelModel]:
try:
with get_db() as db:
@ -146,6 +214,23 @@ class ModelsTable:
except Exception:
return None
def toggle_model_by_id(self, id: str) -> Optional[ModelModel]:
with get_db() as db:
try:
is_active = db.query(Model).filter_by(id=id).first().is_active
db.query(Model).filter_by(id=id).update(
{
"is_active": not is_active,
"updated_at": int(time.time()),
}
)
db.commit()
return self.get_model_by_id(id)
except Exception:
return None
def update_model_by_id(self, id: str, model: ModelForm) -> Optional[ModelModel]:
try:
with get_db() as db:
@ -153,7 +238,7 @@ class ModelsTable:
result = (
db.query(Model)
.filter_by(id=id)
.update(model.model_dump(exclude={"id"}, exclude_none=True))
.update(model.model_dump(exclude={"id"}))
)
db.commit()
@ -175,5 +260,15 @@ class ModelsTable:
except Exception:
return False
def delete_all_models(self) -> bool:
try:
with get_db() as db:
db.query(Model).delete()
db.commit()
return True
except Exception:
return False
Models = ModelsTable()

View file


@ -2,8 +2,12 @@ import time
from typing import Optional
from open_webui.apps.webui.internal.db import Base, get_db
from open_webui.apps.webui.models.users import Users, UserResponse
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text
from sqlalchemy import BigInteger, Column, String, Text, JSON
from open_webui.utils.access_control import has_access
####################
# Prompts DB Schema
@ -19,6 +23,23 @@ class Prompt(Base):
content = Column(Text)
timestamp = Column(BigInteger)
access_control = Column(JSON, nullable=True) # Controls data access levels.
# Defines access control rules for this entry.
# - `None`: Public access, available to all users with the "user" role.
# - `{}`: Private access, restricted exclusively to the owner.
# - Custom permissions: Specific access control for reading and writing;
# Can specify group or user-level restrictions:
# {
# "read": {
# "group_ids": ["group_id1", "group_id2"],
# "user_ids": ["user_id1", "user_id2"]
# },
# "write": {
# "group_ids": ["group_id1", "group_id2"],
# "user_ids": ["user_id1", "user_id2"]
# }
# }
class PromptModel(BaseModel):
command: str
@ -27,6 +48,7 @@ class PromptModel(BaseModel):
content: str
timestamp: int # timestamp in epoch
access_control: Optional[dict] = None
model_config = ConfigDict(from_attributes=True)
@ -35,10 +57,15 @@ class PromptModel(BaseModel):
####################
class PromptUserResponse(PromptModel):
user: Optional[UserResponse] = None
class PromptForm(BaseModel):
command: str
title: str
content: str
access_control: Optional[dict] = None
class PromptsTable:
@ -48,16 +75,14 @@ class PromptsTable:
prompt = PromptModel(
**{
"user_id": user_id,
"command": form_data.command,
"title": form_data.title,
"content": form_data.content,
**form_data.model_dump(),
"timestamp": int(time.time()),
}
)
try:
with get_db() as db:
result = Prompt(**prompt.dict())
result = Prompt(**prompt.model_dump())
db.add(result)
db.commit()
db.refresh(result)
@ -76,10 +101,33 @@ class PromptsTable:
except Exception:
return None
def get_prompts(self) -> list[PromptModel]:
def get_prompts(self) -> list[PromptUserResponse]:
with get_db() as db:
prompts = []
for prompt in db.query(Prompt).order_by(Prompt.timestamp.desc()).all():
user = Users.get_user_by_id(prompt.user_id)
prompts.append(
PromptUserResponse.model_validate(
{
**PromptModel.model_validate(prompt).model_dump(),
"user": user.model_dump() if user else None,
}
)
)
return prompts
def get_prompts_by_user_id(
self, user_id: str, permission: str = "write"
) -> list[PromptUserResponse]:
prompts = self.get_prompts()
return [
PromptModel.model_validate(prompt) for prompt in db.query(Prompt).all()
prompt
for prompt in prompts
if prompt.user_id == user_id
or has_access(user_id, permission, prompt.access_control)
]
def update_prompt_by_command(
@ -90,6 +138,7 @@ class PromptsTable:
prompt = db.query(Prompt).filter_by(command=command).first()
prompt.title = form_data.title
prompt.content = form_data.content
prompt.access_control = form_data.access_control
prompt.timestamp = int(time.time())
db.commit()
return PromptModel.model_validate(prompt)

View file


@ -3,10 +3,13 @@ import time
from typing import Optional
from open_webui.apps.webui.internal.db import Base, JSONField, get_db
from open_webui.apps.webui.models.users import Users
from open_webui.apps.webui.models.users import Users, UserResponse
from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text
from sqlalchemy import BigInteger, Column, String, Text, JSON
from open_webui.utils.access_control import has_access
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])
@ -26,6 +29,24 @@ class Tool(Base):
specs = Column(JSONField)
meta = Column(JSONField)
valves = Column(JSONField)
access_control = Column(JSON, nullable=True) # Controls data access levels.
# Defines access control rules for this entry.
# - `None`: Public access, available to all users with the "user" role.
# - `{}`: Private access, restricted exclusively to the owner.
# - Custom permissions: Specific access control for reading and writing;
# Can specify group or user-level restrictions:
# {
# "read": {
# "group_ids": ["group_id1", "group_id2"],
# "user_ids": ["user_id1", "user_id2"]
# },
# "write": {
# "group_ids": ["group_id1", "group_id2"],
# "user_ids": ["user_id1", "user_id2"]
# }
# }
updated_at = Column(BigInteger)
created_at = Column(BigInteger)
@ -42,6 +63,8 @@ class ToolModel(BaseModel):
content: str
specs: list[dict]
meta: ToolMeta
access_control: Optional[dict] = None
updated_at: int # timestamp in epoch
created_at: int # timestamp in epoch
@ -58,15 +81,21 @@ class ToolResponse(BaseModel):
user_id: str
name: str
meta: ToolMeta
access_control: Optional[dict] = None
updated_at: int # timestamp in epoch
created_at: int # timestamp in epoch
class ToolUserResponse(ToolResponse):
user: Optional[UserResponse] = None
class ToolForm(BaseModel):
id: str
name: str
content: str
meta: ToolMeta
access_control: Optional[dict] = None
class ToolValves(BaseModel):
@ -109,9 +138,32 @@ class ToolsTable:
except Exception:
return None
def get_tools(self) -> list[ToolModel]:
def get_tools(self) -> list[ToolUserResponse]:
with get_db() as db:
return [ToolModel.model_validate(tool) for tool in db.query(Tool).all()]
tools = []
for tool in db.query(Tool).order_by(Tool.updated_at.desc()).all():
user = Users.get_user_by_id(tool.user_id)
tools.append(
ToolUserResponse.model_validate(
{
**ToolModel.model_validate(tool).model_dump(),
"user": user.model_dump() if user else None,
}
)
)
return tools
def get_tools_by_user_id(
self, user_id: str, permission: str = "write"
) -> list[ToolUserResponse]:
tools = self.get_tools()
return [
tool
for tool in tools
if tool.user_id == user_id
or has_access(user_id, permission, tool.access_control)
]
def get_tool_valves_by_id(self, id: str) -> Optional[dict]:
try:

View file


@ -62,6 +62,14 @@ class UserModel(BaseModel):
####################
class UserResponse(BaseModel):
id: str
name: str
email: str
role: str
profile_image_url: str
class UserRoleUpdateForm(BaseModel):
id: str
role: str

View file


@ -2,12 +2,14 @@ import re
import uuid
import time
import datetime
import logging
from open_webui.apps.webui.models.auths import (
AddUserForm,
ApiKey,
Auths,
Token,
LdapForm,
SigninForm,
SigninResponse,
SignupForm,
@ -16,13 +18,15 @@ from open_webui.apps.webui.models.auths import (
UserResponse,
)
from open_webui.apps.webui.models.users import Users
from open_webui.config import WEBUI_AUTH
from open_webui.constants import ERROR_MESSAGES, WEBHOOK_MESSAGES
from open_webui.env import (
WEBUI_AUTH,
WEBUI_AUTH_TRUSTED_EMAIL_HEADER,
WEBUI_AUTH_TRUSTED_NAME_HEADER,
WEBUI_SESSION_COOKIE_SAME_SITE,
WEBUI_SESSION_COOKIE_SECURE,
SRC_LOG_LEVELS,
)
from fastapi import APIRouter, Depends, HTTPException, Request, status
from fastapi.responses import Response
@ -37,10 +41,19 @@ from open_webui.utils.utils import (
get_password_hash,
)
from open_webui.utils.webhook import post_webhook
from typing import Optional
from open_webui.utils.access_control import get_permissions
from typing import Optional, List
from ssl import CERT_REQUIRED, PROTOCOL_TLS
from ldap3 import Server, Connection, ALL, Tls
from ldap3.utils.conv import escape_filter_chars
router = APIRouter()
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MAIN"])
############################
# GetSessionUser
############################
@ -48,6 +61,7 @@ router = APIRouter()
class SessionUserResponse(Token, UserResponse):
expires_at: Optional[int] = None
permissions: Optional[dict] = None
@router.get("/", response_model=SessionUserResponse)
@ -80,6 +94,10 @@ async def get_session_user(
secure=WEBUI_SESSION_COOKIE_SECURE,
)
user_permissions = get_permissions(
user.id, request.app.state.config.USER_PERMISSIONS
)
return {
"token": token,
"token_type": "Bearer",
@ -89,6 +107,7 @@ async def get_session_user(
"name": user.name,
"role": user.role,
"profile_image_url": user.profile_image_url,
"permissions": user_permissions,
}
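# Minimal sketch of the permission helpers used above (get_permissions and
# has_permission live in open_webui.utils.access_control and are not shown in
# this diff; the nested-dict shape and the dotted-key resolution below are
# assumptions).
def has_permission_sketch(permissions: dict, permission_key: str) -> bool:
    # Resolve a dotted key such as "chat.delete" against a nested dict like
    # {"chat": {"delete": True}, "workspace": {"models": False}}.
    node = permissions
    for part in permission_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return False
        node = node[part]
    return bool(node)

assert has_permission_sketch({"chat": {"delete": True}}, "chat.delete")
assert not has_permission_sketch({"chat": {}}, "chat.delete")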
@ -137,6 +156,140 @@ async def update_password(
raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
############################
# LDAP Authentication
############################
@router.post("/ldap", response_model=SigninResponse)
async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
ENABLE_LDAP = request.app.state.config.ENABLE_LDAP
LDAP_SERVER_LABEL = request.app.state.config.LDAP_SERVER_LABEL
LDAP_SERVER_HOST = request.app.state.config.LDAP_SERVER_HOST
LDAP_SERVER_PORT = request.app.state.config.LDAP_SERVER_PORT
LDAP_ATTRIBUTE_FOR_USERNAME = request.app.state.config.LDAP_ATTRIBUTE_FOR_USERNAME
LDAP_SEARCH_BASE = request.app.state.config.LDAP_SEARCH_BASE
LDAP_SEARCH_FILTERS = request.app.state.config.LDAP_SEARCH_FILTERS
LDAP_APP_DN = request.app.state.config.LDAP_APP_DN
LDAP_APP_PASSWORD = request.app.state.config.LDAP_APP_PASSWORD
LDAP_USE_TLS = request.app.state.config.LDAP_USE_TLS
LDAP_CA_CERT_FILE = request.app.state.config.LDAP_CA_CERT_FILE
LDAP_CIPHERS = (
request.app.state.config.LDAP_CIPHERS
if request.app.state.config.LDAP_CIPHERS
else "ALL"
)
if not ENABLE_LDAP:
raise HTTPException(400, detail="LDAP authentication is not enabled")
try:
tls = Tls(
validate=CERT_REQUIRED,
version=PROTOCOL_TLS,
ca_certs_file=LDAP_CA_CERT_FILE,
ciphers=LDAP_CIPHERS,
)
except Exception as e:
log.error(f"An error occurred on TLS: {str(e)}")
raise HTTPException(400, detail=str(e))
try:
server = Server(
host=LDAP_SERVER_HOST,
port=LDAP_SERVER_PORT,
get_info=ALL,
use_ssl=LDAP_USE_TLS,
tls=tls,
)
connection_app = Connection(
server,
LDAP_APP_DN,
LDAP_APP_PASSWORD,
auto_bind="NONE",
authentication="SIMPLE",
)
if not connection_app.bind():
raise HTTPException(400, detail="Application account bind failed")
search_success = connection_app.search(
search_base=LDAP_SEARCH_BASE,
search_filter=f"(&({LDAP_ATTRIBUTE_FOR_USERNAME}={escape_filter_chars(form_data.user.lower())}){LDAP_SEARCH_FILTERS})",
attributes=[f"{LDAP_ATTRIBUTE_FOR_USERNAME}", "mail", "cn"],
)
if not search_success:
raise HTTPException(400, detail="User not found in the LDAP server")
entry = connection_app.entries[0]
username = str(entry[f"{LDAP_ATTRIBUTE_FOR_USERNAME}"]).lower()
mail = str(entry["mail"])
cn = str(entry["cn"])
user_dn = entry.entry_dn
if username == form_data.user.lower():
connection_user = Connection(
server,
user_dn,
form_data.password,
auto_bind="NONE",
authentication="SIMPLE",
)
if not connection_user.bind():
raise HTTPException(400, f"Authentication failed for {form_data.user}")
user = Users.get_user_by_email(mail)
if not user:
try:
hashed = get_password_hash(form_data.password)
user = Auths.insert_new_auth(mail, hashed, cn)
if not user:
raise HTTPException(
500, detail=ERROR_MESSAGES.CREATE_USER_ERROR
)
except HTTPException:
raise
except Exception as err:
raise HTTPException(500, detail=ERROR_MESSAGES.DEFAULT(err))
user = Auths.authenticate_user(mail, password=str(form_data.password))
if user:
token = create_token(
data={"id": user.id},
expires_delta=parse_duration(
request.app.state.config.JWT_EXPIRES_IN
),
)
# Set the cookie token
response.set_cookie(
key="token",
value=token,
httponly=True, # Ensures the cookie is not accessible via JavaScript
)
return {
"token": token,
"token_type": "Bearer",
"id": user.id,
"email": user.email,
"name": user.name,
"role": user.role,
"profile_image_url": user.profile_image_url,
}
else:
raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
else:
raise HTTPException(
400,
f"User {form_data.user} does not match the record. Search result: {str(entry[f'{LDAP_ATTRIBUTE_FOR_USERNAME}'])}",
)
except Exception as e:
raise HTTPException(400, detail=str(e))
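# Usage sketch for the endpoint above (hypothetical client code; assumes the
# auths router is mounted under /api/v1/auths and that LdapForm carries the
# "user" and "password" fields referenced in the handler):
import requests

def ldap_sign_in(base_url: str, username: str, password: str) -> str:
    resp = requests.post(
        f"{base_url}/api/v1/auths/ldap",
        json={"user": username, "password": password},
    )
    resp.raise_for_status()
    # The bearer token is returned in the body and also set as a "token" cookie.
    return resp.json()["token"]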
############################
# SignIn
############################
@ -211,6 +364,10 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
secure=WEBUI_SESSION_COOKIE_SECURE,
)
user_permissions = get_permissions(
user.id, request.app.state.config.USER_PERMISSIONS
)
return {
"token": token,
"token_type": "Bearer",
@ -220,6 +377,7 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
"name": user.name,
"role": user.role,
"profile_image_url": user.profile_image_url,
"permissions": user_permissions,
}
else:
raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
@ -260,6 +418,11 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
if Users.get_num_users() == 0
else request.app.state.config.DEFAULT_USER_ROLE
)
if Users.get_num_users() == 0:
# Disable signup after the first user is created
request.app.state.config.ENABLE_SIGNUP = False
hashed = get_password_hash(form_data.password)
user = Auths.insert_new_auth(
form_data.email.lower(),
@ -307,6 +470,10 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
},
)
user_permissions = get_permissions(
user.id, request.app.state.config.USER_PERMISSIONS
)
return {
"token": token,
"token_type": "Bearer",
@ -316,6 +483,7 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
"name": user.name,
"role": user.role,
"profile_image_url": user.profile_image_url,
"permissions": user_permissions,
}
else:
raise HTTPException(500, detail=ERROR_MESSAGES.CREATE_USER_ERROR)
@ -413,6 +581,7 @@ async def get_admin_config(request: Request, user=Depends(get_admin_user)):
return {
"SHOW_ADMIN_DETAILS": request.app.state.config.SHOW_ADMIN_DETAILS,
"ENABLE_SIGNUP": request.app.state.config.ENABLE_SIGNUP,
"ENABLE_API_KEY": request.app.state.config.ENABLE_API_KEY,
"DEFAULT_USER_ROLE": request.app.state.config.DEFAULT_USER_ROLE,
"JWT_EXPIRES_IN": request.app.state.config.JWT_EXPIRES_IN,
"ENABLE_COMMUNITY_SHARING": request.app.state.config.ENABLE_COMMUNITY_SHARING,
@ -423,6 +592,7 @@ async def get_admin_config(request: Request, user=Depends(get_admin_user)):
class AdminConfig(BaseModel):
SHOW_ADMIN_DETAILS: bool
ENABLE_SIGNUP: bool
ENABLE_API_KEY: bool
DEFAULT_USER_ROLE: str
JWT_EXPIRES_IN: str
ENABLE_COMMUNITY_SHARING: bool
@ -435,6 +605,7 @@ async def update_admin_config(
):
request.app.state.config.SHOW_ADMIN_DETAILS = form_data.SHOW_ADMIN_DETAILS
request.app.state.config.ENABLE_SIGNUP = form_data.ENABLE_SIGNUP
request.app.state.config.ENABLE_API_KEY = form_data.ENABLE_API_KEY
if form_data.DEFAULT_USER_ROLE in ["pending", "user", "admin"]:
request.app.state.config.DEFAULT_USER_ROLE = form_data.DEFAULT_USER_ROLE
@ -453,6 +624,7 @@ async def update_admin_config(
return {
"SHOW_ADMIN_DETAILS": request.app.state.config.SHOW_ADMIN_DETAILS,
"ENABLE_SIGNUP": request.app.state.config.ENABLE_SIGNUP,
"ENABLE_API_KEY": request.app.state.config.ENABLE_API_KEY,
"DEFAULT_USER_ROLE": request.app.state.config.DEFAULT_USER_ROLE,
"JWT_EXPIRES_IN": request.app.state.config.JWT_EXPIRES_IN,
"ENABLE_COMMUNITY_SHARING": request.app.state.config.ENABLE_COMMUNITY_SHARING,
@ -460,6 +632,105 @@ async def update_admin_config(
}
class LdapServerConfig(BaseModel):
label: str
host: str
port: Optional[int] = None
attribute_for_username: str = "uid"
app_dn: str
app_dn_password: str
search_base: str
search_filters: str = ""
use_tls: bool = True
certificate_path: Optional[str] = None
ciphers: Optional[str] = "ALL"
@router.get("/admin/config/ldap/server", response_model=LdapServerConfig)
async def get_ldap_server(request: Request, user=Depends(get_admin_user)):
return {
"label": request.app.state.config.LDAP_SERVER_LABEL,
"host": request.app.state.config.LDAP_SERVER_HOST,
"port": request.app.state.config.LDAP_SERVER_PORT,
"attribute_for_username": request.app.state.config.LDAP_ATTRIBUTE_FOR_USERNAME,
"app_dn": request.app.state.config.LDAP_APP_DN,
"app_dn_password": request.app.state.config.LDAP_APP_PASSWORD,
"search_base": request.app.state.config.LDAP_SEARCH_BASE,
"search_filters": request.app.state.config.LDAP_SEARCH_FILTERS,
"use_tls": request.app.state.config.LDAP_USE_TLS,
"certificate_path": request.app.state.config.LDAP_CA_CERT_FILE,
"ciphers": request.app.state.config.LDAP_CIPHERS,
}
@router.post("/admin/config/ldap/server")
async def update_ldap_server(
request: Request, form_data: LdapServerConfig, user=Depends(get_admin_user)
):
required_fields = [
"label",
"host",
"attribute_for_username",
"app_dn",
"app_dn_password",
"search_base",
]
for key in required_fields:
value = getattr(form_data, key)
if not value:
raise HTTPException(400, detail=f"Required field {key} is empty")
if form_data.use_tls and not form_data.certificate_path:
raise HTTPException(
400, detail="TLS is enabled but certificate file path is missing"
)
request.app.state.config.LDAP_SERVER_LABEL = form_data.label
request.app.state.config.LDAP_SERVER_HOST = form_data.host
request.app.state.config.LDAP_SERVER_PORT = form_data.port
request.app.state.config.LDAP_ATTRIBUTE_FOR_USERNAME = (
form_data.attribute_for_username
)
request.app.state.config.LDAP_APP_DN = form_data.app_dn
request.app.state.config.LDAP_APP_PASSWORD = form_data.app_dn_password
request.app.state.config.LDAP_SEARCH_BASE = form_data.search_base
request.app.state.config.LDAP_SEARCH_FILTERS = form_data.search_filters
request.app.state.config.LDAP_USE_TLS = form_data.use_tls
request.app.state.config.LDAP_CA_CERT_FILE = form_data.certificate_path
request.app.state.config.LDAP_CIPHERS = form_data.ciphers
return {
"label": request.app.state.config.LDAP_SERVER_LABEL,
"host": request.app.state.config.LDAP_SERVER_HOST,
"port": request.app.state.config.LDAP_SERVER_PORT,
"attribute_for_username": request.app.state.config.LDAP_ATTRIBUTE_FOR_USERNAME,
"app_dn": request.app.state.config.LDAP_APP_DN,
"app_dn_password": request.app.state.config.LDAP_APP_PASSWORD,
"search_base": request.app.state.config.LDAP_SEARCH_BASE,
"search_filters": request.app.state.config.LDAP_SEARCH_FILTERS,
"use_tls": request.app.state.config.LDAP_USE_TLS,
"certificate_path": request.app.state.config.LDAP_CA_CERT_FILE,
"ciphers": request.app.state.config.LDAP_CIPHERS,
}
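# Usage sketch for the admin endpoint above (hypothetical values; assumes the
# router is mounted under /api/v1/auths and the caller holds an admin token).
# Note that use_tls=True requires certificate_path, and search_filters is
# concatenated into the (&(<username-attribute>=...)<filters>) expression built
# by the /ldap handler.
import requests

def configure_ldap_server(base_url: str, admin_token: str) -> dict:
    payload = {
        "label": "Corporate LDAP",
        "host": "ldap.example.com",
        "port": 636,
        "attribute_for_username": "uid",
        "app_dn": "cn=service,dc=example,dc=com",
        "app_dn_password": "********",
        "search_base": "ou=people,dc=example,dc=com",
        "search_filters": "(objectClass=person)",
        "use_tls": True,
        "certificate_path": "/etc/ssl/certs/ldap-ca.pem",
        "ciphers": "ALL",
    }
    resp = requests.post(
        f"{base_url}/api/v1/auths/admin/config/ldap/server",
        json=payload,
        headers={"Authorization": f"Bearer {admin_token}"},
    )
    resp.raise_for_status()
    return resp.json()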
@router.get("/admin/config/ldap")
async def get_ldap_config(request: Request, user=Depends(get_admin_user)):
return {"ENABLE_LDAP": request.app.state.config.ENABLE_LDAP}
class LdapConfigForm(BaseModel):
enable_ldap: Optional[bool] = None
@router.post("/admin/config/ldap")
async def update_ldap_config(
request: Request, form_data: LdapConfigForm, user=Depends(get_admin_user)
):
request.app.state.config.ENABLE_LDAP = form_data.enable_ldap
return {"ENABLE_LDAP": request.app.state.config.ENABLE_LDAP}
############################
# API Key
############################
@ -467,9 +738,16 @@ async def update_admin_config(
# create api key
@router.post("/api_key", response_model=ApiKey)
async def create_api_key_(user=Depends(get_current_user)):
async def generate_api_key(request: Request, user=Depends(get_current_user)):
if not request.app.state.config.ENABLE_API_KEY:
raise HTTPException(
status.HTTP_403_FORBIDDEN,
detail=ERROR_MESSAGES.API_KEY_CREATION_NOT_ALLOWED,
)
api_key = create_api_key()
success = Users.update_user_api_key_by_id(user.id, api_key)
if success:
return {
"api_key": api_key,

View file


@ -17,7 +17,10 @@ from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS
from fastapi import APIRouter, Depends, HTTPException, Request, status
from pydantic import BaseModel
from open_webui.utils.utils import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_permission
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])
@ -50,9 +53,10 @@ async def get_session_user_chat_list(
@router.delete("/", response_model=bool)
async def delete_all_user_chats(request: Request, user=Depends(get_verified_user)):
if user.role == "user" and not request.app.state.config.USER_PERMISSIONS.get(
"chat", {}
).get("deletion", {}):
if user.role == "user" and not has_permission(
user.id, "chat.delete", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
@ -385,8 +389,8 @@ async def delete_chat_by_id(request: Request, id: str, user=Depends(get_verified
return result
else:
if not request.app.state.config.USER_PERMISSIONS.get("chat", {}).get(
"deletion", {}
if not has_permission(
user.id, "chat.delete", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,

View file


@ -0,0 +1,120 @@
import os
from pathlib import Path
from typing import Optional
from open_webui.apps.webui.models.groups import (
Groups,
GroupForm,
GroupUpdateForm,
GroupResponse,
)
from open_webui.config import CACHE_DIR
from open_webui.constants import ERROR_MESSAGES
from fastapi import APIRouter, Depends, HTTPException, Request, status
from open_webui.utils.utils import get_admin_user, get_verified_user
router = APIRouter()
############################
# GetGroups
############################
@router.get("/", response_model=list[GroupResponse])
async def get_groups(user=Depends(get_verified_user)):
if user.role == "admin":
return Groups.get_groups()
else:
return Groups.get_groups_by_member_id(user.id)
############################
# CreateNewGroup
############################
@router.post("/create", response_model=Optional[GroupResponse])
async def create_new_group(form_data: GroupForm, user=Depends(get_admin_user)):
try:
group = Groups.insert_new_group(user.id, form_data)
if group:
return group
else:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT("Error creating group"),
)
except Exception as e:
print(e)
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT(e),
)
############################
# GetGroupById
############################
@router.get("/id/{id}", response_model=Optional[GroupResponse])
async def get_group_by_id(id: str, user=Depends(get_admin_user)):
group = Groups.get_group_by_id(id)
if group:
return group
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
############################
# UpdateGroupById
############################
@router.post("/id/{id}/update", response_model=Optional[GroupResponse])
async def update_group_by_id(
id: str, form_data: GroupUpdateForm, user=Depends(get_admin_user)
):
try:
group = Groups.update_group_by_id(id, form_data)
if group:
return group
else:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT("Error updating group"),
)
except Exception as e:
print(e)
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT(e),
)
############################
# DeleteGroupById
############################
@router.delete("/id/{id}/delete", response_model=bool)
async def delete_group_by_id(id: str, user=Depends(get_admin_user)):
try:
result = Groups.delete_group_by_id(id)
if result:
return result
else:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT("Error deleting group"),
)
except Exception as e:
print(e)
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT(e),
)
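# Usage sketch for the listing endpoint above (hypothetical client code; assumes
# this router is mounted under /api/v1/groups). Admins receive every group,
# other users only the groups that list them in user_ids.
import requests

def list_groups(base_url: str, token: str) -> list[dict]:
    resp = requests.get(
        f"{base_url}/api/v1/groups/",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()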

View file


@ -1,14 +1,14 @@
import json
from typing import Optional, Union
from pydantic import BaseModel
from fastapi import APIRouter, Depends, HTTPException, status
from fastapi import APIRouter, Depends, HTTPException, status, Request
import logging
from open_webui.apps.webui.models.knowledge import (
Knowledges,
KnowledgeUpdateForm,
KnowledgeForm,
KnowledgeResponse,
KnowledgeUserResponse,
)
from open_webui.apps.webui.models.files import Files, FileModel
from open_webui.apps.retrieval.vector.connector import VECTOR_DB_CLIENT
@ -17,6 +17,9 @@ from open_webui.apps.retrieval.main import process_file, ProcessFileForm
from open_webui.constants import ERROR_MESSAGES
from open_webui.utils.utils import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access, has_permission
from open_webui.env import SRC_LOG_LEVELS
@ -26,64 +29,103 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
router = APIRouter()
############################
# GetKnowledgeItems
# getKnowledgeBases
############################
@router.get(
"/", response_model=Optional[Union[list[KnowledgeResponse], KnowledgeResponse]]
)
async def get_knowledge_items(
id: Optional[str] = None, user=Depends(get_verified_user)
):
if id:
knowledge = Knowledges.get_knowledge_by_id(id=id)
if knowledge:
return knowledge
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
else:
@router.get("/", response_model=list[KnowledgeUserResponse])
async def get_knowledge(user=Depends(get_verified_user)):
knowledge_bases = []
for knowledge in Knowledges.get_knowledge_items():
if user.role == "admin":
knowledge_bases = Knowledges.get_knowledge_bases()
else:
knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "read")
# Get files for each knowledge base
knowledge_with_files = []
for knowledge_base in knowledge_bases:
files = []
if knowledge.data:
if knowledge_base.data:
files = Files.get_file_metadatas_by_ids(
knowledge.data.get("file_ids", [])
knowledge_base.data.get("file_ids", [])
)
# Check if all files exist
if len(files) != len(knowledge.data.get("file_ids", [])):
if len(files) != len(knowledge_base.data.get("file_ids", [])):
missing_files = list(
set(knowledge.data.get("file_ids", []))
set(knowledge_base.data.get("file_ids", []))
- set([file.id for file in files])
)
if missing_files:
data = knowledge.data or {}
data = knowledge_base.data or {}
file_ids = data.get("file_ids", [])
for missing_file in missing_files:
file_ids.remove(missing_file)
data["file_ids"] = file_ids
Knowledges.update_knowledge_by_id(
id=knowledge.id, form_data=KnowledgeUpdateForm(data=data)
Knowledges.update_knowledge_data_by_id(
id=knowledge_base.id, data=data
)
files = Files.get_file_metadatas_by_ids(file_ids)
knowledge_bases.append(
KnowledgeResponse(
**knowledge.model_dump(),
knowledge_with_files.append(
KnowledgeUserResponse(
**knowledge_base.model_dump(),
files=files,
)
)
return knowledge_bases
return knowledge_with_files
@router.get("/list", response_model=list[KnowledgeUserResponse])
async def get_knowledge_list(user=Depends(get_verified_user)):
knowledge_bases = []
if user.role == "admin":
knowledge_bases = Knowledges.get_knowledge_bases()
else:
knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "write")
# Get files for each knowledge base
knowledge_with_files = []
for knowledge_base in knowledge_bases:
files = []
if knowledge_base.data:
files = Files.get_file_metadatas_by_ids(
knowledge_base.data.get("file_ids", [])
)
# Check if all files exist
if len(files) != len(knowledge_base.data.get("file_ids", [])):
missing_files = list(
set(knowledge_base.data.get("file_ids", []))
- set([file.id for file in files])
)
if missing_files:
data = knowledge_base.data or {}
file_ids = data.get("file_ids", [])
for missing_file in missing_files:
file_ids.remove(missing_file)
data["file_ids"] = file_ids
Knowledges.update_knowledge_data_by_id(
id=knowledge_base.id, data=data
)
files = Files.get_file_metadatas_by_ids(file_ids)
knowledge_with_files.append(
KnowledgeUserResponse(
**knowledge_base.model_dump(),
files=files,
)
)
return knowledge_with_files
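# The two listing endpoints above differ only in the permission they require of
# non-admin users ("read" for "/", "write" for "/list"); the loop that prunes
# dangling file_ids out of knowledge_base.data before building the response is
# identical in both.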
############################
@ -92,7 +134,17 @@ async def get_knowledge_items(
@router.post("/create", response_model=Optional[KnowledgeResponse])
async def create_new_knowledge(form_data: KnowledgeForm, user=Depends(get_admin_user)):
async def create_new_knowledge(
request: Request, form_data: KnowledgeForm, user=Depends(get_verified_user)
):
if user.role != "admin" and not has_permission(
user.id, "workspace.knowledge", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
knowledge = Knowledges.insert_new_knowledge(user.id, form_data)
if knowledge:
@ -118,6 +170,13 @@ async def get_knowledge_by_id(id: str, user=Depends(get_verified_user)):
knowledge = Knowledges.get_knowledge_by_id(id=id)
if knowledge:
if (
user.role == "admin"
or knowledge.user_id == user.id
or has_access(user.id, "read", knowledge.access_control)
):
file_ids = knowledge.data.get("file_ids", []) if knowledge.data else []
files = Files.get_files_by_ids(file_ids)
@ -140,11 +199,23 @@ async def get_knowledge_by_id(id: str, user=Depends(get_verified_user)):
@router.post("/{id}/update", response_model=Optional[KnowledgeFilesResponse])
async def update_knowledge_by_id(
id: str,
form_data: KnowledgeUpdateForm,
user=Depends(get_admin_user),
form_data: KnowledgeForm,
user=Depends(get_verified_user),
):
knowledge = Knowledges.update_knowledge_by_id(id=id, form_data=form_data)
knowledge = Knowledges.get_knowledge_by_id(id=id)
if not knowledge:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if knowledge.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
)
knowledge = Knowledges.update_knowledge_by_id(id=id, form_data=form_data)
if knowledge:
file_ids = knowledge.data.get("file_ids", []) if knowledge.data else []
files = Files.get_files_by_ids(file_ids)
@ -173,9 +244,22 @@ class KnowledgeFileIdForm(BaseModel):
def add_file_to_knowledge_by_id(
id: str,
form_data: KnowledgeFileIdForm,
user=Depends(get_admin_user),
user=Depends(get_verified_user),
):
knowledge = Knowledges.get_knowledge_by_id(id=id)
if not knowledge:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if knowledge.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
)
file = Files.get_file_by_id(form_data.file_id)
if not file:
raise HTTPException(
@ -206,9 +290,7 @@ def add_file_to_knowledge_by_id(
file_ids.append(form_data.file_id)
data["file_ids"] = file_ids
knowledge = Knowledges.update_knowledge_by_id(
id=id, form_data=KnowledgeUpdateForm(data=data)
)
knowledge = Knowledges.update_knowledge_data_by_id(id=id, data=data)
if knowledge:
files = Files.get_files_by_ids(file_ids)
@ -238,9 +320,21 @@ def add_file_to_knowledge_by_id(
def update_file_from_knowledge_by_id(
id: str,
form_data: KnowledgeFileIdForm,
user=Depends(get_admin_user),
user=Depends(get_verified_user),
):
knowledge = Knowledges.get_knowledge_by_id(id=id)
if not knowledge:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if knowledge.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
)
file = Files.get_file_by_id(form_data.file_id)
if not file:
raise HTTPException(
@ -288,9 +382,21 @@ def update_file_from_knowledge_by_id(
def remove_file_from_knowledge_by_id(
id: str,
form_data: KnowledgeFileIdForm,
user=Depends(get_admin_user),
user=Depends(get_verified_user),
):
knowledge = Knowledges.get_knowledge_by_id(id=id)
if not knowledge:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if knowledge.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
)
file = Files.get_file_by_id(form_data.file_id)
if not file:
raise HTTPException(
@ -318,9 +424,7 @@ def remove_file_from_knowledge_by_id(
file_ids.remove(form_data.file_id)
data["file_ids"] = file_ids
knowledge = Knowledges.update_knowledge_by_id(
id=id, form_data=KnowledgeUpdateForm(data=data)
)
knowledge = Knowledges.update_knowledge_data_by_id(id=id, data=data)
if knowledge:
files = Files.get_files_by_ids(file_ids)
@ -346,32 +450,26 @@ def remove_file_from_knowledge_by_id(
)
############################
# ResetKnowledgeById
############################
@router.post("/{id}/reset", response_model=Optional[KnowledgeResponse])
async def reset_knowledge_by_id(id: str, user=Depends(get_admin_user)):
try:
VECTOR_DB_CLIENT.delete_collection(collection_name=id)
except Exception as e:
log.debug(e)
pass
knowledge = Knowledges.update_knowledge_by_id(
id=id, form_data=KnowledgeUpdateForm(data={"file_ids": []})
)
return knowledge
############################
# DeleteKnowledgeById
############################
@router.delete("/{id}/delete", response_model=bool)
async def delete_knowledge_by_id(id: str, user=Depends(get_admin_user)):
async def delete_knowledge_by_id(id: str, user=Depends(get_verified_user)):
knowledge = Knowledges.get_knowledge_by_id(id=id)
if not knowledge:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if knowledge.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
)
try:
VECTOR_DB_CLIENT.delete_collection(collection_name=id)
except Exception as e:
@ -379,3 +477,34 @@ async def delete_knowledge_by_id(id: str, user=Depends(get_admin_user)):
pass
result = Knowledges.delete_knowledge_by_id(id=id)
return result
############################
# ResetKnowledgeById
############################
@router.post("/{id}/reset", response_model=Optional[KnowledgeResponse])
async def reset_knowledge_by_id(id: str, user=Depends(get_verified_user)):
knowledge = Knowledges.get_knowledge_by_id(id=id)
if not knowledge:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if knowledge.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
)
try:
VECTOR_DB_CLIENT.delete_collection(collection_name=id)
except Exception as e:
log.debug(e)
pass
knowledge = Knowledges.update_knowledge_data_by_id(id=id, data={"file_ids": []})
return knowledge

View file


@ -4,53 +4,71 @@ from open_webui.apps.webui.models.models import (
ModelForm,
ModelModel,
ModelResponse,
ModelUserResponse,
Models,
)
from open_webui.constants import ERROR_MESSAGES
from fastapi import APIRouter, Depends, HTTPException, Request, status
from open_webui.utils.utils import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access, has_permission
router = APIRouter()
###########################
# getModels
# GetModels
###########################
@router.get("/", response_model=list[ModelResponse])
@router.get("/", response_model=list[ModelUserResponse])
async def get_models(id: Optional[str] = None, user=Depends(get_verified_user)):
if id:
model = Models.get_model_by_id(id)
if model:
return [model]
if user.role == "admin":
return Models.get_models()
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
else:
return Models.get_all_models()
return Models.get_models_by_user_id(user.id)
###########################
# GetBaseModels
###########################
@router.get("/base", response_model=list[ModelResponse])
async def get_base_models(user=Depends(get_admin_user)):
return Models.get_base_models()
############################
# AddNewModel
# CreateNewModel
############################
@router.post("/add", response_model=Optional[ModelModel])
async def add_new_model(
@router.post("/create", response_model=Optional[ModelModel])
async def create_new_model(
request: Request,
form_data: ModelForm,
user=Depends(get_admin_user),
user=Depends(get_verified_user),
):
if form_data.id in request.app.state.MODELS:
if user.role != "admin" and not has_permission(
user.id, "workspace.models", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
model = Models.get_model_by_id(form_data.id)
if model:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.MODEL_ID_TAKEN,
)
else:
model = Models.insert_new_model(form_data, user.id)
if model:
return model
else:
@ -60,37 +78,85 @@ async def add_new_model(
)
###########################
# GetModelById
###########################
# Note: We're not using the typical url path param here, but instead using a query parameter to allow '/' in the id
@router.get("/model", response_model=Optional[ModelResponse])
async def get_model_by_id(id: str, user=Depends(get_verified_user)):
model = Models.get_model_by_id(id)
if model:
if (
user.role == "admin"
or model.user_id == user.id
or has_access(user.id, "read", model.access_control)
):
return model
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
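# Usage sketch for the endpoint above (hypothetical client code; assumes the
# models router is mounted under /api/v1/models). Passing the id as a query
# parameter keeps ids containing "/" intact, as the note above explains.
import requests

def fetch_model(base_url: str, token: str, model_id: str) -> dict:
    resp = requests.get(
        f"{base_url}/api/v1/models/model",
        params={"id": model_id},          # e.g. "org/model:tag" survives routing
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()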
############################
# ToggleModelById
############################
@router.post("/model/toggle", response_model=Optional[ModelResponse])
async def toggle_model_by_id(id: str, user=Depends(get_verified_user)):
model = Models.get_model_by_id(id)
if model:
if (
user.role == "admin"
or model.user_id == user.id
or has_access(user.id, "write", model.access_control)
):
model = Models.toggle_model_by_id(id)
if model:
return model
else:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT("Error updating function"),
)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
############################
# UpdateModelById
############################
@router.post("/update", response_model=Optional[ModelModel])
@router.post("/model/update", response_model=Optional[ModelModel])
async def update_model_by_id(
request: Request,
id: str,
form_data: ModelForm,
user=Depends(get_admin_user),
user=Depends(get_verified_user),
):
model = Models.get_model_by_id(id)
if model:
if not model:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
model = Models.update_model_by_id(id, form_data)
return model
else:
if form_data.id in request.app.state.MODELS:
model = Models.insert_new_model(form_data, user.id)
if model:
return model
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.DEFAULT(),
)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.DEFAULT(),
)
############################
@ -98,7 +164,26 @@ async def update_model_by_id(
############################
@router.delete("/delete", response_model=bool)
async def delete_model_by_id(id: str, user=Depends(get_admin_user)):
@router.delete("/model/delete", response_model=bool)
async def delete_model_by_id(id: str, user=Depends(get_verified_user)):
model = Models.get_model_by_id(id)
if not model:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if model.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
result = Models.delete_model_by_id(id)
return result
@router.delete("/delete/all", response_model=bool)
async def delete_all_models(user=Depends(get_admin_user)):
result = Models.delete_all_models()
return result

View file


@ -1,9 +1,15 @@
from typing import Optional
from open_webui.apps.webui.models.prompts import PromptForm, PromptModel, Prompts
from open_webui.apps.webui.models.prompts import (
PromptForm,
PromptUserResponse,
PromptModel,
Prompts,
)
from open_webui.constants import ERROR_MESSAGES
from fastapi import APIRouter, Depends, HTTPException, status
from fastapi import APIRouter, Depends, HTTPException, status, Request
from open_webui.utils.utils import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access, has_permission
router = APIRouter()
@ -14,7 +20,22 @@ router = APIRouter()
@router.get("/", response_model=list[PromptModel])
async def get_prompts(user=Depends(get_verified_user)):
return Prompts.get_prompts()
if user.role == "admin":
prompts = Prompts.get_prompts()
else:
prompts = Prompts.get_prompts_by_user_id(user.id, "read")
return prompts
@router.get("/list", response_model=list[PromptUserResponse])
async def get_prompt_list(user=Depends(get_verified_user)):
if user.role == "admin":
prompts = Prompts.get_prompts()
else:
prompts = Prompts.get_prompts_by_user_id(user.id, "write")
return prompts
############################
@ -23,7 +44,17 @@ async def get_prompts(user=Depends(get_verified_user)):
@router.post("/create", response_model=Optional[PromptModel])
async def create_new_prompt(form_data: PromptForm, user=Depends(get_admin_user)):
async def create_new_prompt(
request: Request, form_data: PromptForm, user=Depends(get_verified_user)
):
if user.role != "admin" and not has_permission(
user.id, "workspace.prompts", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
prompt = Prompts.get_prompt_by_command(form_data.command)
if prompt is None:
prompt = Prompts.insert_new_prompt(user.id, form_data)
@ -50,6 +81,11 @@ async def get_prompt_by_command(command: str, user=Depends(get_verified_user)):
prompt = Prompts.get_prompt_by_command(f"/{command}")
if prompt:
if (
user.role == "admin"
or prompt.user_id == user.id
or has_access(user.id, "read", prompt.access_control)
):
return prompt
else:
raise HTTPException(
@ -67,8 +103,21 @@ async def get_prompt_by_command(command: str, user=Depends(get_verified_user)):
async def update_prompt_by_command(
command: str,
form_data: PromptForm,
user=Depends(get_admin_user),
user=Depends(get_verified_user),
):
prompt = Prompts.get_prompt_by_command(f"/{command}")
if not prompt:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if prompt.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
)
prompt = Prompts.update_prompt_by_command(f"/{command}", form_data)
if prompt:
return prompt
@ -85,6 +134,19 @@ async def update_prompt_by_command(
@router.delete("/command/{command}/delete", response_model=bool)
async def delete_prompt_by_command(command: str, user=Depends(get_admin_user)):
async def delete_prompt_by_command(command: str, user=Depends(get_verified_user)):
prompt = Prompts.get_prompt_by_command(f"/{command}")
if not prompt:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if prompt.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
)
result = Prompts.delete_prompt_by_command(f"/{command}")
return result
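# Usage sketch for prompt creation (hypothetical client code; assumes the
# prompts router is mounted under /api/v1/prompts). The access_control payload
# follows the shape documented in the model file: None for public, {} for
# private, or explicit read/write rules as below.
import requests

def create_prompt(base_url: str, token: str) -> dict:
    payload = {
        "command": "/summarize",
        "title": "Summarize",
        "content": "Summarize the following text: {{CLIPBOARD}}",
        "access_control": {
            "read": {"group_ids": ["group_id1"], "user_ids": []},
            "write": {"group_ids": [], "user_ids": []},
        },
    }
    resp = requests.post(
        f"{base_url}/api/v1/prompts/create",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()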

View file


@ -2,50 +2,82 @@ import os
from pathlib import Path
from typing import Optional
from open_webui.apps.webui.models.tools import ToolForm, ToolModel, ToolResponse, Tools
from open_webui.apps.webui.utils import load_toolkit_module_by_id, replace_imports
from open_webui.apps.webui.models.tools import (
ToolForm,
ToolModel,
ToolResponse,
ToolUserResponse,
Tools,
)
from open_webui.apps.webui.utils import load_tools_module_by_id, replace_imports
from open_webui.config import CACHE_DIR, DATA_DIR
from open_webui.constants import ERROR_MESSAGES
from fastapi import APIRouter, Depends, HTTPException, Request, status
from open_webui.utils.tools import get_tools_specs
from open_webui.utils.utils import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access, has_permission
router = APIRouter()
############################
# GetToolkits
# GetTools
############################
@router.get("/", response_model=list[ToolResponse])
async def get_toolkits(user=Depends(get_verified_user)):
toolkits = [toolkit for toolkit in Tools.get_tools()]
return toolkits
@router.get("/", response_model=list[ToolUserResponse])
async def get_tools(user=Depends(get_verified_user)):
if user.role == "admin":
tools = Tools.get_tools()
else:
tools = Tools.get_tools_by_user_id(user.id, "read")
return tools
############################
# ExportToolKits
# GetToolList
############################
@router.get("/list", response_model=list[ToolUserResponse])
async def get_tool_list(user=Depends(get_verified_user)):
if user.role == "admin":
tools = Tools.get_tools()
else:
tools = Tools.get_tools_by_user_id(user.id, "write")
return tools
############################
# ExportTools
############################
@router.get("/export", response_model=list[ToolModel])
async def get_toolkits(user=Depends(get_admin_user)):
toolkits = [toolkit for toolkit in Tools.get_tools()]
return toolkits
async def export_tools(user=Depends(get_admin_user)):
tools = Tools.get_tools()
return tools
############################
# CreateNewToolKit
# CreateNewTools
############################
@router.post("/create", response_model=Optional[ToolResponse])
async def create_new_toolkit(
async def create_new_tools(
request: Request,
form_data: ToolForm,
user=Depends(get_admin_user),
user=Depends(get_verified_user),
):
if user.role != "admin" and not has_permission(
user.id, "workspace.knowledge", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
if not form_data.id.isidentifier():
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
@ -54,30 +86,30 @@ async def create_new_toolkit(
form_data.id = form_data.id.lower()
toolkit = Tools.get_tool_by_id(form_data.id)
if toolkit is None:
tools = Tools.get_tool_by_id(form_data.id)
if tools is None:
try:
form_data.content = replace_imports(form_data.content)
toolkit_module, frontmatter = load_toolkit_module_by_id(
tools_module, frontmatter = load_tools_module_by_id(
form_data.id, content=form_data.content
)
form_data.meta.manifest = frontmatter
TOOLS = request.app.state.TOOLS
TOOLS[form_data.id] = toolkit_module
TOOLS[form_data.id] = tools_module
specs = get_tools_specs(TOOLS[form_data.id])
toolkit = Tools.insert_new_tool(user.id, form_data, specs)
tools = Tools.insert_new_tool(user.id, form_data, specs)
tool_cache_dir = Path(CACHE_DIR) / "tools" / form_data.id
tool_cache_dir.mkdir(parents=True, exist_ok=True)
if toolkit:
return toolkit
if tools:
return tools
else:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT("Error creating toolkit"),
detail=ERROR_MESSAGES.DEFAULT("Error creating tools"),
)
except Exception as e:
print(e)
@ -93,16 +125,21 @@ async def create_new_toolkit(
############################
# GetToolkitById
# GetToolsById
############################
@router.get("/id/{id}", response_model=Optional[ToolModel])
async def get_toolkit_by_id(id: str, user=Depends(get_admin_user)):
toolkit = Tools.get_tool_by_id(id)
async def get_tools_by_id(id: str, user=Depends(get_verified_user)):
tools = Tools.get_tool_by_id(id)
if toolkit:
return toolkit
if tools:
if (
user.role == "admin"
or tools.user_id == user.id
or has_access(user.id, "read", tools.access_control)
):
return tools
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
@ -111,26 +148,39 @@ async def get_toolkit_by_id(id: str, user=Depends(get_admin_user)):
############################
# UpdateToolkitById
# UpdateToolsById
############################
@router.post("/id/{id}/update", response_model=Optional[ToolModel])
async def update_toolkit_by_id(
async def update_tools_by_id(
request: Request,
id: str,
form_data: ToolForm,
user=Depends(get_admin_user),
user=Depends(get_verified_user),
):
tools = Tools.get_tool_by_id(id)
if not tools:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if tools.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
try:
form_data.content = replace_imports(form_data.content)
toolkit_module, frontmatter = load_toolkit_module_by_id(
tools_module, frontmatter = load_tools_module_by_id(
id, content=form_data.content
)
form_data.meta.manifest = frontmatter
TOOLS = request.app.state.TOOLS
TOOLS[id] = toolkit_module
TOOLS[id] = tools_module
specs = get_tools_specs(TOOLS[id])
@ -140,14 +190,14 @@ async def update_toolkit_by_id(
}
print(updated)
toolkit = Tools.update_tool_by_id(id, updated)
tools = Tools.update_tool_by_id(id, updated)
if toolkit:
return toolkit
if tools:
return tools
else:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.DEFAULT("Error updating toolkit"),
detail=ERROR_MESSAGES.DEFAULT("Error updating tools"),
)
except Exception as e:
@ -158,14 +208,28 @@ async def update_toolkit_by_id(
############################
# DeleteToolkitById
# DeleteToolsById
############################
@router.delete("/id/{id}/delete", response_model=bool)
async def delete_toolkit_by_id(request: Request, id: str, user=Depends(get_admin_user)):
result = Tools.delete_tool_by_id(id)
async def delete_tools_by_id(
request: Request, id: str, user=Depends(get_verified_user)
):
tools = Tools.get_tool_by_id(id)
if not tools:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.NOT_FOUND,
)
if tools.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
result = Tools.delete_tool_by_id(id)
if result:
TOOLS = request.app.state.TOOLS
if id in TOOLS:
@ -180,9 +244,9 @@ async def delete_toolkit_by_id(request: Request, id: str, user=Depends(get_admin
@router.get("/id/{id}/valves", response_model=Optional[dict])
async def get_toolkit_valves_by_id(id: str, user=Depends(get_admin_user)):
toolkit = Tools.get_tool_by_id(id)
if toolkit:
async def get_tools_valves_by_id(id: str, user=Depends(get_verified_user)):
tools = Tools.get_tool_by_id(id)
if tools:
try:
valves = Tools.get_tool_valves_by_id(id)
return valves
@ -204,19 +268,19 @@ async def get_toolkit_valves_by_id(id: str, user=Depends(get_admin_user)):
@router.get("/id/{id}/valves/spec", response_model=Optional[dict])
async def get_toolkit_valves_spec_by_id(
request: Request, id: str, user=Depends(get_admin_user)
async def get_tools_valves_spec_by_id(
request: Request, id: str, user=Depends(get_verified_user)
):
toolkit = Tools.get_tool_by_id(id)
if toolkit:
tools = Tools.get_tool_by_id(id)
if tools:
if id in request.app.state.TOOLS:
toolkit_module = request.app.state.TOOLS[id]
tools_module = request.app.state.TOOLS[id]
else:
toolkit_module, _ = load_toolkit_module_by_id(id)
request.app.state.TOOLS[id] = toolkit_module
tools_module, _ = load_tools_module_by_id(id)
request.app.state.TOOLS[id] = tools_module
if hasattr(toolkit_module, "Valves"):
Valves = toolkit_module.Valves
if hasattr(tools_module, "Valves"):
Valves = tools_module.Valves
return Valves.schema()
return None
else:
@ -232,19 +296,19 @@ async def get_toolkit_valves_spec_by_id(
@router.post("/id/{id}/valves/update", response_model=Optional[dict])
async def update_toolkit_valves_by_id(
request: Request, id: str, form_data: dict, user=Depends(get_admin_user)
async def update_tools_valves_by_id(
request: Request, id: str, form_data: dict, user=Depends(get_verified_user)
):
toolkit = Tools.get_tool_by_id(id)
if toolkit:
tools = Tools.get_tool_by_id(id)
if tools:
if id in request.app.state.TOOLS:
toolkit_module = request.app.state.TOOLS[id]
tools_module = request.app.state.TOOLS[id]
else:
toolkit_module, _ = load_toolkit_module_by_id(id)
request.app.state.TOOLS[id] = toolkit_module
tools_module, _ = load_tools_module_by_id(id)
request.app.state.TOOLS[id] = tools_module
if hasattr(toolkit_module, "Valves"):
Valves = toolkit_module.Valves
if hasattr(tools_module, "Valves"):
Valves = tools_module.Valves
try:
form_data = {k: v for k, v in form_data.items() if v is not None}
@ -276,9 +340,9 @@ async def update_toolkit_valves_by_id(
@router.get("/id/{id}/valves/user", response_model=Optional[dict])
async def get_toolkit_user_valves_by_id(id: str, user=Depends(get_verified_user)):
toolkit = Tools.get_tool_by_id(id)
if toolkit:
async def get_tools_user_valves_by_id(id: str, user=Depends(get_verified_user)):
tools = Tools.get_tool_by_id(id)
if tools:
try:
user_valves = Tools.get_user_valves_by_id_and_user_id(id, user.id)
return user_valves
@ -295,19 +359,19 @@ async def get_toolkit_user_valves_by_id(id: str, user=Depends(get_verified_user)
@router.get("/id/{id}/valves/user/spec", response_model=Optional[dict])
async def get_toolkit_user_valves_spec_by_id(
async def get_tools_user_valves_spec_by_id(
request: Request, id: str, user=Depends(get_verified_user)
):
toolkit = Tools.get_tool_by_id(id)
if toolkit:
tools = Tools.get_tool_by_id(id)
if tools:
if id in request.app.state.TOOLS:
toolkit_module = request.app.state.TOOLS[id]
tools_module = request.app.state.TOOLS[id]
else:
toolkit_module, _ = load_toolkit_module_by_id(id)
request.app.state.TOOLS[id] = toolkit_module
tools_module, _ = load_tools_module_by_id(id)
request.app.state.TOOLS[id] = tools_module
if hasattr(toolkit_module, "UserValves"):
UserValves = toolkit_module.UserValves
if hasattr(tools_module, "UserValves"):
UserValves = tools_module.UserValves
return UserValves.schema()
return None
else:
@ -318,20 +382,20 @@ async def get_toolkit_user_valves_spec_by_id(
@router.post("/id/{id}/valves/user/update", response_model=Optional[dict])
async def update_toolkit_user_valves_by_id(
async def update_tools_user_valves_by_id(
request: Request, id: str, form_data: dict, user=Depends(get_verified_user)
):
toolkit = Tools.get_tool_by_id(id)
tools = Tools.get_tool_by_id(id)
if toolkit:
if tools:
if id in request.app.state.TOOLS:
toolkit_module = request.app.state.TOOLS[id]
tools_module = request.app.state.TOOLS[id]
else:
toolkit_module, _ = load_toolkit_module_by_id(id)
request.app.state.TOOLS[id] = toolkit_module
tools_module, _ = load_tools_module_by_id(id)
request.app.state.TOOLS[id] = tools_module
if hasattr(toolkit_module, "UserValves"):
UserValves = toolkit_module.UserValves
if hasattr(tools_module, "UserValves"):
UserValves = tools_module.UserValves
try:
form_data = {k: v for k, v in form_data.items() if v is not None}

View file

@ -31,21 +31,58 @@ async def get_users(skip: int = 0, limit: int = 50, user=Depends(get_admin_user)
return Users.get_users(skip, limit)
############################
# User Groups
############################
@router.get("/groups")
async def get_user_groups(user=Depends(get_verified_user)):
return Users.get_user_groups(user.id)
############################
# User Permissions
############################
@router.get("/permissions/user")
@router.get("/permissions")
async def get_user_permissisions(user=Depends(get_verified_user)):
return Users.get_user_groups(user.id)
############################
# User Default Permissions
############################
class WorkspacePermissions(BaseModel):
models: bool
knowledge: bool
prompts: bool
tools: bool
class ChatPermissions(BaseModel):
file_upload: bool
delete: bool
edit: bool
temporary: bool
class UserPermissions(BaseModel):
workspace: WorkspacePermissions
chat: ChatPermissions
@router.get("/default/permissions")
async def get_user_permissions(request: Request, user=Depends(get_admin_user)):
return request.app.state.config.USER_PERMISSIONS
@router.post("/permissions/user")
@router.post("/default/permissions")
async def update_user_permissions(
request: Request, form_data: dict, user=Depends(get_admin_user)
request: Request, form_data: UserPermissions, user=Depends(get_admin_user)
):
request.app.state.config.USER_PERMISSIONS = form_data
request.app.state.config.USER_PERMISSIONS = form_data.model_dump()
return request.app.state.config.USER_PERMISSIONS
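For illustration, the nested payload accepted by the new default-permissions endpoint follows the UserPermissions model above. A minimal client sketch, assuming a running instance mounted under /api/v1 and an admin token placeholder (neither is part of this diff):
import requests  # illustrative client-side call, assuming a reachable instance

payload = {
    "workspace": {"models": False, "knowledge": False, "prompts": False, "tools": False},
    "chat": {"file_upload": True, "delete": True, "edit": True, "temporary": True},
}
# A missing key is now rejected, since the route validates the body against UserPermissions.
resp = requests.post(
    "http://localhost:8080/api/v1/users/default/permissions",  # assumed mount point
    json=payload,
    headers={"Authorization": "Bearer <admin-token>"},  # placeholder admin token
)
print(resp.json())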

View file

@ -63,7 +63,7 @@ def replace_imports(content):
return content
def load_toolkit_module_by_id(toolkit_id, content=None):
def load_tools_module_by_id(toolkit_id, content=None):
if content is None:
tool = Tools.get_tool_by_id(toolkit_id)

View file

@ -20,6 +20,7 @@ from open_webui.env import (
WEBUI_FAVICON_URL,
WEBUI_NAME,
log,
DATABASE_URL,
)
from pydantic import BaseModel
from sqlalchemy import JSON, Column, DateTime, Integer, func
@ -264,6 +265,13 @@ class AppConfig:
# WEBUI_AUTH (Required for security)
####################################
ENABLE_API_KEY = PersistentConfig(
"ENABLE_API_KEY",
"auth.api_key.enable",
os.environ.get("ENABLE_API_KEY", "True").lower() == "true",
)
JWT_EXPIRES_IN = PersistentConfig(
"JWT_EXPIRES_IN", "auth.jwt_expiry", os.environ.get("JWT_EXPIRES_IN", "-1")
)
@ -606,6 +614,12 @@ OLLAMA_BASE_URLS = PersistentConfig(
"OLLAMA_BASE_URLS", "ollama.base_urls", OLLAMA_BASE_URLS
)
OLLAMA_API_CONFIGS = PersistentConfig(
"OLLAMA_API_CONFIGS",
"ollama.api_configs",
{},
)
####################################
# OPENAI_API
####################################
@ -646,15 +660,20 @@ OPENAI_API_BASE_URLS = PersistentConfig(
"OPENAI_API_BASE_URLS", "openai.api_base_urls", OPENAI_API_BASE_URLS
)
OPENAI_API_KEY = ""
OPENAI_API_CONFIGS = PersistentConfig(
"OPENAI_API_CONFIGS",
"openai.api_configs",
{},
)
# Get the actual OpenAI API key based on the base URL
OPENAI_API_KEY = ""
try:
OPENAI_API_KEY = OPENAI_API_KEYS.value[
OPENAI_API_BASE_URLS.value.index("https://api.openai.com/v1")
]
except Exception:
pass
OPENAI_API_BASE_URL = "https://api.openai.com/v1"
####################################
@ -727,12 +746,36 @@ DEFAULT_USER_ROLE = PersistentConfig(
os.getenv("DEFAULT_USER_ROLE", "pending"),
)
USER_PERMISSIONS_CHAT_DELETION = (
os.environ.get("USER_PERMISSIONS_CHAT_DELETION", "True").lower() == "true"
USER_PERMISSIONS_WORKSPACE_MODELS_ACCESS = (
os.environ.get("USER_PERMISSIONS_WORKSPACE_MODELS_ACCESS", "False").lower()
== "true"
)
USER_PERMISSIONS_CHAT_EDITING = (
os.environ.get("USER_PERMISSIONS_CHAT_EDITING", "True").lower() == "true"
USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS = (
os.environ.get("USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS", "False").lower()
== "true"
)
USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS = (
os.environ.get("USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS", "False").lower()
== "true"
)
USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS = (
os.environ.get("USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS", "False").lower() == "true"
)
USER_PERMISSIONS_CHAT_FILE_UPLOAD = (
os.environ.get("USER_PERMISSIONS_CHAT_FILE_UPLOAD", "True").lower() == "true"
)
USER_PERMISSIONS_CHAT_DELETE = (
os.environ.get("USER_PERMISSIONS_CHAT_DELETE", "True").lower() == "true"
)
USER_PERMISSIONS_CHAT_EDIT = (
os.environ.get("USER_PERMISSIONS_CHAT_EDIT", "True").lower() == "true"
)
USER_PERMISSIONS_CHAT_TEMPORARY = (
@ -741,13 +784,20 @@ USER_PERMISSIONS_CHAT_TEMPORARY = (
USER_PERMISSIONS = PersistentConfig(
"USER_PERMISSIONS",
"ui.user_permissions",
"user.permissions",
{
"workspace": {
"models": USER_PERMISSIONS_WORKSPACE_MODELS_ACCESS,
"knowledge": USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS,
"prompts": USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS,
"tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS,
},
"chat": {
"deletion": USER_PERMISSIONS_CHAT_DELETION,
"editing": USER_PERMISSIONS_CHAT_EDITING,
"file_upload": USER_PERMISSIONS_CHAT_FILE_UPLOAD,
"delete": USER_PERMISSIONS_CHAT_DELETE,
"edit": USER_PERMISSIONS_CHAT_EDIT,
"temporary": USER_PERMISSIONS_CHAT_TEMPORARY,
}
},
},
)
@ -773,18 +823,6 @@ DEFAULT_ARENA_MODEL = {
},
}
ENABLE_MODEL_FILTER = PersistentConfig(
"ENABLE_MODEL_FILTER",
"model_filter.enable",
os.environ.get("ENABLE_MODEL_FILTER", "False").lower() == "true",
)
MODEL_FILTER_LIST = os.environ.get("MODEL_FILTER_LIST", "")
MODEL_FILTER_LIST = PersistentConfig(
"MODEL_FILTER_LIST",
"model_filter.list",
[model.strip() for model in MODEL_FILTER_LIST.split(";")],
)
WEBHOOK_URL = PersistentConfig(
"WEBHOOK_URL", "webhook_url", os.environ.get("WEBHOOK_URL", "")
)
@ -904,19 +942,55 @@ TAGS_GENERATION_PROMPT_TEMPLATE = PersistentConfig(
os.environ.get("TAGS_GENERATION_PROMPT_TEMPLATE", ""),
)
ENABLE_SEARCH_QUERY = PersistentConfig(
"ENABLE_SEARCH_QUERY",
"task.search.enable",
os.environ.get("ENABLE_SEARCH_QUERY", "True").lower() == "true",
ENABLE_TAGS_GENERATION = PersistentConfig(
"ENABLE_TAGS_GENERATION",
"task.tags.enable",
os.environ.get("ENABLE_TAGS_GENERATION", "True").lower() == "true",
)
SEARCH_QUERY_GENERATION_PROMPT_TEMPLATE = PersistentConfig(
"SEARCH_QUERY_GENERATION_PROMPT_TEMPLATE",
"task.search.prompt_template",
os.environ.get("SEARCH_QUERY_GENERATION_PROMPT_TEMPLATE", ""),
ENABLE_SEARCH_QUERY_GENERATION = PersistentConfig(
"ENABLE_SEARCH_QUERY_GENERATION",
"task.query.search.enable",
os.environ.get("ENABLE_SEARCH_QUERY_GENERATION", "True").lower() == "true",
)
ENABLE_RETRIEVAL_QUERY_GENERATION = PersistentConfig(
"ENABLE_RETRIEVAL_QUERY_GENERATION",
"task.query.retrieval.enable",
os.environ.get("ENABLE_RETRIEVAL_QUERY_GENERATION", "True").lower() == "true",
)
QUERY_GENERATION_PROMPT_TEMPLATE = PersistentConfig(
"QUERY_GENERATION_PROMPT_TEMPLATE",
"task.query.prompt_template",
os.environ.get("QUERY_GENERATION_PROMPT_TEMPLATE", ""),
)
DEFAULT_QUERY_GENERATION_PROMPT_TEMPLATE = """### Task:
Based on the chat history, determine whether a search is necessary, and if so, generate 1-3 broad search queries to retrieve comprehensive and updated information. If no search is required, return an empty list.
### Guidelines:
- Respond exclusively with a JSON object.
- If a search query is needed, return an object like: { "queries": ["query1", "query2"] } where each query is distinct and concise.
- If no search query is necessary, output should be: { "queries": [] }
- Default to suggesting a search query to ensure accurate and updated information, unless it is definitively clear no search is required.
- Be concise, focusing strictly on composing search queries with no additional commentary or text.
- When in doubt, prefer to suggest a search for comprehensiveness.
- Today's date is: {{CURRENT_DATE}}
### Output:
JSON format: {
"queries": ["query1", "query2"]
}
### Chat History:
<chat_history>
{{MESSAGES:END:6}}
</chat_history>
"""
TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE = PersistentConfig(
"TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE",
@ -937,6 +1011,8 @@ CHROMA_TENANT = os.environ.get("CHROMA_TENANT", chromadb.DEFAULT_TENANT)
CHROMA_DATABASE = os.environ.get("CHROMA_DATABASE", chromadb.DEFAULT_DATABASE)
CHROMA_HTTP_HOST = os.environ.get("CHROMA_HTTP_HOST", "")
CHROMA_HTTP_PORT = int(os.environ.get("CHROMA_HTTP_PORT", "8000"))
CHROMA_CLIENT_AUTH_PROVIDER = os.environ.get("CHROMA_CLIENT_AUTH_PROVIDER", "")
CHROMA_CLIENT_AUTH_CREDENTIALS = os.environ.get("CHROMA_CLIENT_AUTH_CREDENTIALS", "")
# Comma-separated list of header=value pairs
CHROMA_HTTP_HEADERS = os.environ.get("CHROMA_HTTP_HEADERS", "")
if CHROMA_HTTP_HEADERS:
@ -954,6 +1030,21 @@ MILVUS_URI = os.environ.get("MILVUS_URI", f"{DATA_DIR}/vector_db/milvus.db")
# Qdrant
QDRANT_URI = os.environ.get("QDRANT_URI", None)
QDRANT_API_KEY = os.environ.get("QDRANT_API_KEY", None)
# OpenSearch
OPENSEARCH_URI = os.environ.get("OPENSEARCH_URI", "https://localhost:9200")
OPENSEARCH_SSL = os.environ.get("OPENSEARCH_SSL", True)
OPENSEARCH_CERT_VERIFY = os.environ.get("OPENSEARCH_CERT_VERIFY", False)
OPENSEARCH_USERNAME = os.environ.get("OPENSEARCH_USERNAME", None)
OPENSEARCH_PASSWORD = os.environ.get("OPENSEARCH_PASSWORD", None)
# Pgvector
PGVECTOR_DB_URL = os.environ.get("PGVECTOR_DB_URL", DATABASE_URL)
if VECTOR_DB == "pgvector" and not PGVECTOR_DB_URL.startswith("postgres"):
raise ValueError(
"Pgvector requires setting PGVECTOR_DB_URL or using Postgres with vector extension as the primary database."
)
####################################
# Information Retrieval (RAG)
@ -1033,11 +1124,11 @@ RAG_EMBEDDING_MODEL = PersistentConfig(
log.info(f"Embedding model set: {RAG_EMBEDDING_MODEL.value}")
RAG_EMBEDDING_MODEL_AUTO_UPDATE = (
os.environ.get("RAG_EMBEDDING_MODEL_AUTO_UPDATE", "").lower() == "true"
os.environ.get("RAG_EMBEDDING_MODEL_AUTO_UPDATE", "True").lower() == "true"
)
RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE = (
os.environ.get("RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE", "").lower() == "true"
os.environ.get("RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE", "True").lower() == "true"
)
RAG_EMBEDDING_BATCH_SIZE = PersistentConfig(
@ -1058,11 +1149,11 @@ if RAG_RERANKING_MODEL.value != "":
log.info(f"Reranking model set: {RAG_RERANKING_MODEL.value}")
RAG_RERANKING_MODEL_AUTO_UPDATE = (
os.environ.get("RAG_RERANKING_MODEL_AUTO_UPDATE", "").lower() == "true"
os.environ.get("RAG_RERANKING_MODEL_AUTO_UPDATE", "True").lower() == "true"
)
RAG_RERANKING_MODEL_TRUST_REMOTE_CODE = (
os.environ.get("RAG_RERANKING_MODEL_TRUST_REMOTE_CODE", "").lower() == "true"
os.environ.get("RAG_RERANKING_MODEL_TRUST_REMOTE_CODE", "True").lower() == "true"
)
@ -1127,6 +1218,19 @@ RAG_OPENAI_API_KEY = PersistentConfig(
os.getenv("RAG_OPENAI_API_KEY", OPENAI_API_KEY),
)
RAG_OLLAMA_BASE_URL = PersistentConfig(
"RAG_OLLAMA_BASE_URL",
"rag.ollama.url",
os.getenv("RAG_OLLAMA_BASE_URL", OLLAMA_BASE_URL),
)
RAG_OLLAMA_API_KEY = PersistentConfig(
"RAG_OLLAMA_API_KEY",
"rag.ollama.key",
os.getenv("RAG_OLLAMA_API_KEY", ""),
)
ENABLE_RAG_LOCAL_WEB_FETCH = (
os.getenv("ENABLE_RAG_LOCAL_WEB_FETCH", "False").lower() == "true"
)
@ -1222,6 +1326,12 @@ TAVILY_API_KEY = PersistentConfig(
os.getenv("TAVILY_API_KEY", ""),
)
JINA_API_KEY = PersistentConfig(
"JINA_API_KEY",
"rag.web.search.jina_api_key",
os.getenv("JINA_API_KEY", ""),
)
SEARCHAPI_API_KEY = PersistentConfig(
"SEARCHAPI_API_KEY",
"rag.web.search.searchapi_api_key",
@ -1234,6 +1344,21 @@ SEARCHAPI_ENGINE = PersistentConfig(
os.getenv("SEARCHAPI_ENGINE", ""),
)
BING_SEARCH_V7_ENDPOINT = PersistentConfig(
"BING_SEARCH_V7_ENDPOINT",
"rag.web.search.bing_search_v7_endpoint",
os.environ.get(
"BING_SEARCH_V7_ENDPOINT", "https://api.bing.microsoft.com/v7.0/search"
),
)
BING_SEARCH_V7_SUBSCRIPTION_KEY = PersistentConfig(
"BING_SEARCH_V7_SUBSCRIPTION_KEY",
"rag.web.search.bing_search_v7_subscription_key",
os.environ.get("BING_SEARCH_V7_SUBSCRIPTION_KEY", ""),
)
RAG_WEB_SEARCH_RESULT_COUNT = PersistentConfig(
"RAG_WEB_SEARCH_RESULT_COUNT",
"rag.web.search.result_count",
@ -1285,7 +1410,7 @@ AUTOMATIC1111_CFG_SCALE = PersistentConfig(
AUTOMATIC1111_SAMPLER = PersistentConfig(
"AUTOMATIC1111_SAMPLERE",
"AUTOMATIC1111_SAMPLER",
"image_generation.automatic1111.sampler",
(
os.environ.get("AUTOMATIC1111_SAMPLER")
@ -1554,3 +1679,74 @@ AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT = PersistentConfig(
"AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT", "audio-24khz-160kbitrate-mono-mp3"
),
)
####################################
# LDAP
####################################
ENABLE_LDAP = PersistentConfig(
"ENABLE_LDAP",
"ldap.enable",
os.environ.get("ENABLE_LDAP", "false").lower() == "true",
)
LDAP_SERVER_LABEL = PersistentConfig(
"LDAP_SERVER_LABEL",
"ldap.server.label",
os.environ.get("LDAP_SERVER_LABEL", "LDAP Server"),
)
LDAP_SERVER_HOST = PersistentConfig(
"LDAP_SERVER_HOST",
"ldap.server.host",
os.environ.get("LDAP_SERVER_HOST", "localhost"),
)
LDAP_SERVER_PORT = PersistentConfig(
"LDAP_SERVER_PORT",
"ldap.server.port",
int(os.environ.get("LDAP_SERVER_PORT", "389")),
)
LDAP_ATTRIBUTE_FOR_USERNAME = PersistentConfig(
"LDAP_ATTRIBUTE_FOR_USERNAME",
"ldap.server.attribute_for_username",
os.environ.get("LDAP_ATTRIBUTE_FOR_USERNAME", "uid"),
)
LDAP_APP_DN = PersistentConfig(
"LDAP_APP_DN", "ldap.server.app_dn", os.environ.get("LDAP_APP_DN", "")
)
LDAP_APP_PASSWORD = PersistentConfig(
"LDAP_APP_PASSWORD",
"ldap.server.app_password",
os.environ.get("LDAP_APP_PASSWORD", ""),
)
LDAP_SEARCH_BASE = PersistentConfig(
"LDAP_SEARCH_BASE", "ldap.server.users_dn", os.environ.get("LDAP_SEARCH_BASE", "")
)
LDAP_SEARCH_FILTERS = PersistentConfig(
"LDAP_SEARCH_FILTER",
"ldap.server.search_filter",
os.environ.get("LDAP_SEARCH_FILTER", ""),
)
LDAP_USE_TLS = PersistentConfig(
"LDAP_USE_TLS",
"ldap.server.use_tls",
os.environ.get("LDAP_USE_TLS", "True").lower() == "true",
)
LDAP_CA_CERT_FILE = PersistentConfig(
"LDAP_CA_CERT_FILE",
"ldap.server.ca_cert_file",
os.environ.get("LDAP_CA_CERT_FILE", ""),
)
LDAP_CIPHERS = PersistentConfig(
"LDAP_CIPHERS", "ldap.server.ciphers", os.environ.get("LDAP_CIPHERS", "ALL")
)
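For illustration, the settings above map onto a plain ldap3 session roughly as follows. This is a hedged sketch of how the new dependency ties to the configuration, not the sign-in code added in this commit; the hostname, DNs, certificate path, and password are placeholders:
from ldap3 import Server, Connection, Tls  # ldap3==2.9.1, added to the requirements below
import ssl

tls = Tls(validate=ssl.CERT_REQUIRED, ca_certs_file="/path/to/ca.pem")  # LDAP_CA_CERT_FILE
server = Server("ldap.example.com", port=389, use_ssl=False, tls=tls)   # LDAP_SERVER_HOST / LDAP_SERVER_PORT
conn = Connection(                                                       # LDAP_APP_DN / LDAP_APP_PASSWORD
    server, user="cn=app,dc=example,dc=com", password="app-password", auto_bind=True
)
conn.search(                                                             # LDAP_SEARCH_BASE / LDAP_ATTRIBUTE_FOR_USERNAME
    "ou=users,dc=example,dc=com", "(uid=jdoe)", attributes=["uid", "cn", "mail"]
)
print(conn.entries)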

View file

@ -62,6 +62,7 @@ class ERROR_MESSAGES(str, Enum):
NOT_FOUND = "We could not find what you're looking for :/"
USER_NOT_FOUND = "We could not find what you're looking for :/"
API_KEY_NOT_FOUND = "Oops! It looks like there's a hiccup. The API key is missing. Please make sure to provide a valid API key to access this feature."
API_KEY_NOT_ALLOWED = "Use of API key is not enabled in the environment."
MALICIOUS = "Unusual activities detected, please try again in a few minutes."
@ -75,6 +76,7 @@ class ERROR_MESSAGES(str, Enum):
OPENAI_NOT_FOUND = lambda name="": "OpenAI API was not found"
OLLAMA_NOT_FOUND = "WebUI could not connect to Ollama"
CREATE_API_KEY_ERROR = "Oops! Something went wrong while creating your API key. Please try again later. If the issue persists, contact support for assistance."
API_KEY_CREATION_NOT_ALLOWED = "API key creation is not allowed in the environment."
EMPTY_CONTENT = "The content provided is empty. Please ensure that there is text or data present before proceeding."

View file

@ -195,6 +195,15 @@ CHANGELOG = changelog_json
SAFE_MODE = os.environ.get("SAFE_MODE", "false").lower() == "true"
####################################
# ENABLE_FORWARD_USER_INFO_HEADERS
####################################
ENABLE_FORWARD_USER_INFO_HEADERS = (
os.environ.get("ENABLE_FORWARD_USER_INFO_HEADERS", "False").lower() == "true"
)
####################################
# WEBUI_BUILD_HASH
####################################

File diff suppressed because it is too large

View file

@ -0,0 +1,85 @@
"""Add group table
Revision ID: 922e7a387820
Revises: 4ace53fd72c8
Create Date: 2024-11-14 03:00:00.000000
"""
from alembic import op
import sqlalchemy as sa
revision = "922e7a387820"
down_revision = "4ace53fd72c8"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"group",
sa.Column("id", sa.Text(), nullable=False, primary_key=True, unique=True),
sa.Column("user_id", sa.Text(), nullable=True),
sa.Column("name", sa.Text(), nullable=True),
sa.Column("description", sa.Text(), nullable=True),
sa.Column("data", sa.JSON(), nullable=True),
sa.Column("meta", sa.JSON(), nullable=True),
sa.Column("permissions", sa.JSON(), nullable=True),
sa.Column("user_ids", sa.JSON(), nullable=True),
sa.Column("created_at", sa.BigInteger(), nullable=True),
sa.Column("updated_at", sa.BigInteger(), nullable=True),
)
# Add 'access_control' column to 'model' table
op.add_column(
"model",
sa.Column("access_control", sa.JSON(), nullable=True),
)
# Add 'is_active' column to 'model' table
op.add_column(
"model",
sa.Column(
"is_active",
sa.Boolean(),
nullable=False,
server_default=sa.sql.expression.true(),
),
)
# Add 'access_control' column to 'knowledge' table
op.add_column(
"knowledge",
sa.Column("access_control", sa.JSON(), nullable=True),
)
# Add 'access_control' column to 'prompt' table
op.add_column(
"prompt",
sa.Column("access_control", sa.JSON(), nullable=True),
)
# Add 'access_control' column to 'tool' table
op.add_column(
"tool",
sa.Column("access_control", sa.JSON(), nullable=True),
)
def downgrade():
op.drop_table("group")
# Drop 'access_control' column from 'model' table
op.drop_column("model", "access_control")
# Drop 'is_active' column from 'model' table
op.drop_column("model", "is_active")
# Drop 'access_control' column from 'knowledge' table
op.drop_column("knowledge", "access_control")
# Drop 'access_control' column from 'prompt' table
op.drop_column("prompt", "access_control")
# Drop 'access_control' column from 'tool' table
op.drop_column("tool", "access_control")

View file

@ -51,7 +51,10 @@ class StorageProvider:
try:
self.s3_client.upload_file(file_path, self.bucket_name, filename)
return open(file_path, "rb").read(), file_path
return (
open(file_path, "rb").read(),
"s3://" + self.bucket_name + "/" + filename,
)
except ClientError as e:
raise RuntimeError(f"Error uploading file to S3: {e}")

View file

@ -0,0 +1,95 @@
from typing import Optional, Union, List, Dict, Any
from open_webui.apps.webui.models.groups import Groups
import json
def get_permissions(
user_id: str,
default_permissions: Dict[str, Any],
) -> Dict[str, Any]:
"""
Get all permissions for a user by combining the permissions of all groups the user is a member of.
If a permission is defined in multiple groups, the most permissive value is used (True > False).
Permissions are nested in a dict with the permission key as the key and a boolean as the value.
"""
def combine_permissions(
permissions: Dict[str, Any], group_permissions: Dict[str, Any]
) -> Dict[str, Any]:
"""Combine permissions from multiple groups by taking the most permissive value."""
for key, value in group_permissions.items():
if isinstance(value, dict):
if key not in permissions:
permissions[key] = {}
permissions[key] = combine_permissions(permissions[key], value)
else:
if key not in permissions:
permissions[key] = value
else:
permissions[key] = permissions[key] or value
return permissions
user_groups = Groups.get_groups_by_member_id(user_id)
# deep copy default permissions to avoid modifying the original dict
permissions = json.loads(json.dumps(default_permissions))
for group in user_groups:
group_permissions = group.permissions
permissions = combine_permissions(permissions, group_permissions)
return permissions
def has_permission(
user_id: str,
permission_key: str,
default_permissions: Dict[str, bool] = {},
) -> bool:
"""
Check if a user has a specific permission by checking the group permissions
falling back to default permissions if it is not found in any group.
Permission keys can be hierarchical and separated by dots ('.').
"""
def get_permission(permissions: Dict[str, bool], keys: List[str]) -> bool:
"""Traverse permissions dict using a list of keys (from dot-split permission_key)."""
for key in keys:
if key not in permissions:
return False # If any part of the hierarchy is missing, deny access
permissions = permissions[key] # Go one level deeper
return bool(permissions) # Return the boolean at the final level
permission_hierarchy = permission_key.split(".")
# Retrieve user group permissions
user_groups = Groups.get_groups_by_member_id(user_id)
for group in user_groups:
group_permissions = group.permissions
if get_permission(group_permissions, permission_hierarchy):
return True
# Check default permissions afterwards if the group permissions don't allow it
return get_permission(default_permissions, permission_hierarchy)
def has_access(
user_id: str,
type: str = "write",
access_control: Optional[dict] = None,
) -> bool:
if access_control is None:
return type == "read"
user_groups = Groups.get_groups_by_member_id(user_id)
user_group_ids = [group.id for group in user_groups]
permission_access = access_control.get(type, {})
permitted_group_ids = permission_access.get("group_ids", [])
permitted_user_ids = permission_access.get("user_ids", [])
return user_id in permitted_user_ids or any(
group_id in permitted_group_ids for group_id in user_group_ids
)
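For illustration, the merge rule implemented by get_permissions above ("most permissive wins", recursing into nested dicts) behaves as in this self-contained sketch; combine is a stripped-down restatement of the nested helper for demonstration, not code from this commit:
import json

defaults = {
    "workspace": {"models": False, "knowledge": False, "prompts": False, "tools": False},
    "chat": {"file_upload": True, "delete": True, "edit": True, "temporary": True},
}
group_permissions = {"workspace": {"tools": True}}  # one group grants tool access

def combine(base, extra):
    # Same idea as combine_permissions above: OR booleans, recurse into dicts.
    for key, value in extra.items():
        if isinstance(value, dict):
            base[key] = combine(base.get(key, {}), value)
        else:
            base[key] = base.get(key, False) or value
    return base

merged = combine(json.loads(json.dumps(defaults)), group_permissions)  # deep copy, then merge
print(merged["workspace"]["tools"])   # True: the group override wins
print(merged["workspace"]["models"])  # False: falls through to the default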

View file

@ -54,17 +54,17 @@ class PDFGenerator:
html_content = markdown(content, extensions=["pymdownx.extra"])
html_message = f"""
<div> {date_str} </div>
<div class="message">
<small> {date_str} </small>
<div>
<h2>
<strong>{role.title()}</strong>
<small class="text-muted">{model}</small>
<span style="font-size: 12px; color: #888;">{model}</span>
</h2>
</div>
<div class="markdown-section">
{html_content}
</div>
<pre class="markdown-section">
{content}
</pre>
</div>
"""
return html_message

View file

@ -20,6 +20,7 @@ def set_security_headers() -> Dict[str, str]:
This function reads specific environment variables and uses their values
to set corresponding security headers. The headers that can be set are:
- cache-control
- permissions-policy
- strict-transport-security
- referrer-policy
- x-content-type-options
@ -38,6 +39,7 @@ def set_security_headers() -> Dict[str, str]:
header_setters = {
"CACHE_CONTROL": set_cache_control,
"HSTS": set_hsts,
"PERMISSIONS_POLICY": set_permissions_policy,
"REFERRER_POLICY": set_referrer,
"XCONTENT_TYPE": set_xcontent_type,
"XDOWNLOAD_OPTIONS": set_xdownload_options,
@ -73,6 +75,15 @@ def set_xframe(value: str):
return {"X-Frame-Options": value}
# Set Permissions-Policy response header
def set_permissions_policy(value: str):
pattern = r"^(?:(accelerometer|autoplay|camera|clipboard-read|clipboard-write|fullscreen|geolocation|gyroscope|magnetometer|microphone|midi|payment|picture-in-picture|sync-xhr|usb|xr-spatial-tracking)=\((self)?\),?)*$"
match = re.match(pattern, value, re.IGNORECASE)
if not match:
value = "none"
return {"Permissions-Policy": value}
# Set Referrer-Policy response header
def set_referrer(value: str):
pattern = r"^(no-referrer|no-referrer-when-downgrade|origin|origin-when-cross-origin|same-origin|strict-origin|strict-origin-when-cross-origin|unsafe-url)$"

View file

@ -163,7 +163,7 @@ def emoji_generation_template(
return template
def search_query_generation_template(
def query_generation_template(
template: str, messages: list[dict], user: Optional[dict] = None
) -> str:
prompt = get_last_user_message(messages)

View file

@ -4,7 +4,7 @@ from typing import Awaitable, Callable, get_type_hints
from open_webui.apps.webui.models.tools import Tools
from open_webui.apps.webui.models.users import UserModel
from open_webui.apps.webui.utils import load_toolkit_module_by_id
from open_webui.apps.webui.utils import load_tools_module_by_id
from open_webui.utils.schemas import json_schema_to_model
log = logging.getLogger(__name__)
@ -32,15 +32,16 @@ def apply_extra_params_to_tool_function(
def get_tools(
webui_app, tool_ids: list[str], user: UserModel, extra_params: dict
) -> dict[str, dict]:
tools = {}
tools_dict = {}
for tool_id in tool_ids:
toolkit = Tools.get_tool_by_id(tool_id)
if toolkit is None:
tools = Tools.get_tool_by_id(tool_id)
if tools is None:
continue
module = webui_app.state.TOOLS.get(tool_id, None)
if module is None:
module, _ = load_toolkit_module_by_id(tool_id)
module, _ = load_tools_module_by_id(tool_id)
webui_app.state.TOOLS[tool_id] = module
extra_params["__id__"] = tool_id
@ -53,11 +54,19 @@ def get_tools(
**Tools.get_user_valves_by_id_and_user_id(tool_id, user.id)
)
for spec in toolkit.specs:
for spec in tools.specs:
# TODO: Fix hack for OpenAI API
for val in spec.get("parameters", {}).get("properties", {}).values():
if val["type"] == "str":
val["type"] = "string"
# Remove internal parameters
spec["parameters"]["properties"] = {
key: val
for key, val in spec["parameters"]["properties"].items()
if not key.startswith("__")
}
function_name = spec["name"]
# convert to function that takes only model params and inserts custom params
@ -77,13 +86,14 @@ def get_tools(
}
# TODO: if collision, prepend toolkit name
if function_name in tools:
log.warning(f"Tool {function_name} already exists in another toolkit!")
log.warning(f"Collision between {toolkit} and {tool_id}.")
log.warning(f"Discarding {toolkit}.{function_name}")
if function_name in tools_dict:
log.warning(f"Tool {function_name} already exists in another tools!")
log.warning(f"Collision between {tools} and {tool_id}.")
log.warning(f"Discarding {tools}.{function_name}")
else:
tools[function_name] = tool_dict
return tools
tools_dict[function_name] = tool_dict
return tools_dict
def doc_to_dict(docstring):

View file

@ -1,12 +1,15 @@
import logging
import uuid
from datetime import UTC, datetime, timedelta
from typing import Optional, Union
import jwt
from datetime import UTC, datetime, timedelta
from typing import Optional, Union, List, Dict
from open_webui.apps.webui.models.users import Users
from open_webui.constants import ERROR_MESSAGES
from open_webui.env import WEBUI_SECRET_KEY
from fastapi import Depends, HTTPException, Request, Response, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from passlib.context import CryptContext
@ -88,10 +91,21 @@ def get_current_user(
# auth by api key
if token.startswith("sk-"):
if not request.state.enable_api_key:
raise HTTPException(
status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.API_KEY_NOT_ALLOWED
)
return get_current_user_by_api_key(token)
# auth by jwt token
try:
data = decode_token(token)
except Exception as e:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid token",
)
if data is not None and "id" in data:
user = Users.get_user_by_id(data["id"])
if user is None:

View file

@ -1,7 +1,7 @@
fastapi==0.111.0
uvicorn[standard]==0.30.6
pydantic==2.9.2
python-multipart==0.0.9
python-multipart==0.0.17
Flask==3.0.3
Flask-Cors==5.0.0
@ -13,18 +13,20 @@ passlib[bcrypt]==1.7.4
requests==2.32.3
aiohttp==3.10.8
async-timeout
aiocache
sqlalchemy==2.0.32
alembic==1.13.2
peewee==3.17.6
peewee-migrate==1.12.2
psycopg2-binary==2.9.9
pgvector==0.3.5
PyMySQL==1.1.1
bcrypt==4.2.0
pymongo
redis
boto3==1.35.0
boto3==1.35.53
argon2-cffi==23.1.0
APScheduler==3.10.4
@ -35,14 +37,15 @@ anthropic
google-generativeai==0.7.2
tiktoken
langchain==0.2.15
langchain-community==0.2.12
langchain==0.3.7
langchain-community==0.3.7
langchain-chroma==0.1.4
fake-useragent==1.5.1
chromadb==0.5.9
pymilvus==2.4.7
chromadb==0.5.15
pymilvus==2.4.9
qdrant-client~=1.12.0
opensearch-py==2.7.1
sentence-transformers==3.2.0
colbert-ai==0.2.21
@ -51,7 +54,7 @@ einops==0.8.0
ftfy==6.2.3
pypdf==4.3.1
xhtml2pdf==0.2.16
fpdf2==2.7.9
pymdown-extensions==10.11.2
docx2txt==0.8
python-pptx==1.0.0
@ -65,11 +68,11 @@ pyxlsb==1.0.10
xlrd==2.0.1
validators==0.33.0
psutil
sentencepiece
soundfile==0.12.1
opencv-python-headless==4.10.0.84
rapidocr-onnxruntime==1.3.24
fpdf2==2.7.9
rank-bm25==0.2.2
faster-whisper==1.0.3
@ -79,12 +82,12 @@ authlib==1.3.2
black==24.8.0
langfuse==2.44.0
youtube-transcript-api==0.6.2
youtube-transcript-api==0.6.3
pytube==15.0.0
extract_msg
pydub
duckduckgo-search~=6.2.13
duckduckgo-search~=6.3.5
## Tests
docker~=7.1.0
@ -92,3 +95,6 @@ pytest~=8.3.2
pytest-docker~=3.1.1
googleapis-common-protos==1.63.2
## LDAP
ldap3==2.9.1

package-lock.json generated
View file

@ -1,18 +1,19 @@
{
"name": "open-webui",
"version": "0.3.35",
"version": "0.4.2",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "open-webui",
"version": "0.3.35",
"version": "0.4.2",
"dependencies": {
"@codemirror/lang-javascript": "^6.2.2",
"@codemirror/lang-python": "^6.1.6",
"@codemirror/language-data": "^6.5.1",
"@codemirror/theme-one-dark": "^6.1.2",
"@huggingface/transformers": "^3.0.0",
"@mediapipe/tasks-vision": "^0.10.17",
"@pyscript/core": "^0.4.32",
"@sveltejs/adapter-node": "^2.0.0",
"@xyflow/svelte": "^0.1.19",
@ -1749,6 +1750,11 @@
"@lezer/lr": "^1.4.0"
}
},
"node_modules/@mediapipe/tasks-vision": {
"version": "0.10.17",
"resolved": "https://registry.npmjs.org/@mediapipe/tasks-vision/-/tasks-vision-0.10.17.tgz",
"integrity": "sha512-CZWV/q6TTe8ta61cZXjfnnHsfWIdFhms03M9T7Cnd5y2mdpylJM0rF1qRq+wsQVRMLz1OYPVEBU9ph2Bx8cxrg=="
},
"node_modules/@melt-ui/svelte": {
"version": "0.76.0",
"resolved": "https://registry.npmjs.org/@melt-ui/svelte/-/svelte-0.76.0.tgz",
@ -3993,9 +3999,10 @@
"integrity": "sha512-VQ2MBenTq1fWZUH9DJNGti7kKv6EeAuYr3cLwxUWhIu1baTaXh4Ib5W2CqHVqib4/MqbYGJqiL3Zb8GJZr3l4g=="
},
"node_modules/cross-spawn": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz",
"integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==",
"version": "7.0.6",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz",
"integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==",
"license": "MIT",
"dependencies": {
"path-key": "^3.1.0",
"shebang-command": "^2.0.0",

View file

@ -1,6 +1,6 @@
{
"name": "open-webui",
"version": "0.3.35",
"version": "0.4.2",
"private": true,
"scripts": {
"dev": "npm run pyodide:fetch && vite dev --host",
@ -53,6 +53,7 @@
"@codemirror/language-data": "^6.5.1",
"@codemirror/theme-one-dark": "^6.1.2",
"@huggingface/transformers": "^3.0.0",
"@mediapipe/tasks-vision": "^0.10.17",
"@pyscript/core": "^0.4.32",
"@sveltejs/adapter-node": "^2.0.0",
"@xyflow/svelte": "^0.1.19",

View file

@ -9,7 +9,7 @@ dependencies = [
"fastapi==0.111.0",
"uvicorn[standard]==0.30.6",
"pydantic==2.9.2",
"python-multipart==0.0.9",
"python-multipart==0.0.17",
"Flask==3.0.3",
"Flask-Cors==5.0.0",
@ -21,18 +21,20 @@ dependencies = [
"requests==2.32.3",
"aiohttp==3.10.8",
"async-timeout",
"aiocache",
"sqlalchemy==2.0.32",
"alembic==1.13.2",
"peewee==3.17.6",
"peewee-migrate==1.12.2",
"psycopg2-binary==2.9.9",
"pgvector==0.3.5",
"PyMySQL==1.1.1",
"bcrypt==4.2.0",
"pymongo",
"redis",
"boto3==1.35.0",
"boto3==1.35.53",
"argon2-cffi==23.1.0",
"APScheduler==3.10.4",
@ -42,13 +44,15 @@ dependencies = [
"google-generativeai==0.7.2",
"tiktoken",
"langchain==0.2.15",
"langchain-community==0.2.12",
"langchain==0.3.7",
"langchain-community==0.3.7",
"langchain-chroma==0.1.4",
"fake-useragent==1.5.1",
"chromadb==0.5.9",
"pymilvus==2.4.7",
"chromadb==0.5.15",
"pymilvus==2.4.9",
"qdrant-client~=1.12.0",
"opensearch-py==2.7.1",
"sentence-transformers==3.2.0",
"colbert-ai==0.2.21",
@ -56,7 +60,7 @@ dependencies = [
"ftfy==6.2.3",
"pypdf==4.3.1",
"xhtml2pdf==0.2.16",
"fpdf2==2.7.9",
"pymdown-extensions==10.11.2",
"docx2txt==0.8",
"python-pptx==1.0.0",
@ -70,11 +74,11 @@ dependencies = [
"xlrd==2.0.1",
"validators==0.33.0",
"psutil",
"sentencepiece",
"soundfile==0.12.1",
"opencv-python-headless==4.10.0.84",
"rapidocr-onnxruntime==1.3.24",
"fpdf2==2.7.9",
"rank-bm25==0.2.2",
"faster-whisper==1.0.3",
@ -84,18 +88,20 @@ dependencies = [
"black==24.8.0",
"langfuse==2.44.0",
"youtube-transcript-api==0.6.2",
"youtube-transcript-api==0.6.3",
"pytube==15.0.0",
"extract_msg",
"pydub",
"duckduckgo-search~=6.2.13",
"duckduckgo-search~=6.3.5",
"docker~=7.1.0",
"pytest~=8.3.2",
"pytest-docker~=3.1.1",
"googleapis-common-protos==1.63.2"
"googleapis-common-protos==1.63.2",
"ldap3==2.9.1"
]
readme = "README.md"
requires-python = ">= 3.11, < 3.12.0a1"

View file

@ -16,6 +16,12 @@
font-display: swap;
}
@font-face {
font-family: 'InstrumentSerif';
src: url('/assets/fonts/InstrumentSerif-Regular.ttf');
font-display: swap;
}
html {
word-break: break-word;
}
@ -26,6 +32,10 @@ code {
width: auto;
}
.font-secondary {
font-family: 'InstrumentSerif', sans-serif;
}
math {
margin-top: 1rem;
}

View file

@ -110,6 +110,150 @@ export const getSessionUser = async (token: string) => {
return res;
};
export const ldapUserSignIn = async (user: string, password: string) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/auths/ldap`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
credentials: 'include',
body: JSON.stringify({
user: user,
password: password
})
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.log(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
return res;
};
export const getLdapConfig = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/auths/admin/config/ldap`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
...(token && { authorization: `Bearer ${token}` })
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.log(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
return res;
};
export const updateLdapConfig = async (token: string = '', enable_ldap: boolean) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/auths/admin/config/ldap`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(token && { authorization: `Bearer ${token}` })
},
body: JSON.stringify({
enable_ldap: enable_ldap
})
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.log(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
return res;
};
export const getLdapServer = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/auths/admin/config/ldap/server`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
...(token && { authorization: `Bearer ${token}` })
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.log(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
return res;
};
export const updateLdapServer = async (token: string = '', body: object) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/auths/admin/config/ldap/server`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(token && { authorization: `Bearer ${token}` })
},
body: JSON.stringify(body)
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.log(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
return res;
};
export const userSignIn = async (email: string, password: string) => {
let error = null;

View file

@ -0,0 +1,162 @@
import { WEBUI_API_BASE_URL } from '$lib/constants';
export const createNewGroup = async (token: string, group: object) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/groups/create`, {
method: 'POST',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
},
body: JSON.stringify({
...group
})
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const getGroups = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/groups/`, {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const getGroupById = async (token: string, id: string) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/groups/id/${id}`, {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const updateGroupById = async (token: string, id: string, group: object) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/groups/id/${id}/update`, {
method: 'POST',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
},
body: JSON.stringify({
...group
})
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const deleteGroupById = async (token: string, id: string) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/groups/id/${id}/delete`, {
method: 'DELETE',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};

View file

@ -1,9 +1,8 @@
import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';
export const getModels = async (token: string = '') => {
export const getModels = async (token: string = '', base: boolean = false) => {
let error = null;
const res = await fetch(`${WEBUI_BASE_URL}/api/models`, {
const res = await fetch(`${WEBUI_BASE_URL}/api/models${base ? '/base' : ''}`, {
method: 'GET',
headers: {
Accept: 'application/json',
@ -16,8 +15,8 @@ export const getModels = async (token: string = '') => {
return res.json();
})
.catch((err) => {
console.log(err);
error = err;
console.log(err);
return null;
});
@ -26,26 +25,10 @@ export const getModels = async (token: string = '') => {
}
let models = res?.data ?? [];
models = models
.filter((models) => models)
// Sort the models
.sort((a, b) => {
// Check if models have position property
const aHasPosition = a.info?.meta?.position !== undefined;
const bHasPosition = b.info?.meta?.position !== undefined;
// If both a and b have the position property
if (aHasPosition && bHasPosition) {
return a.info.meta.position - b.info.meta.position;
}
// If only a has the position property, it should come first
if (aHasPosition) return -1;
// If only b has the position property, it should come first
if (bHasPosition) return 1;
// Compare case-insensitively by name for models without position property
const lowerA = a.name.toLowerCase();
const lowerB = b.name.toLowerCase();
@ -365,15 +348,16 @@ export const generateEmoji = async (
return null;
};
export const generateSearchQuery = async (
export const generateQueries = async (
token: string = '',
model: string,
messages: object[],
prompt: string
prompt: string,
type?: string = 'web_search'
) => {
let error = null;
const res = await fetch(`${WEBUI_BASE_URL}/api/task/query/completions`, {
const res = await fetch(`${WEBUI_BASE_URL}/api/task/queries/completions`, {
method: 'POST',
headers: {
Accept: 'application/json',
@ -383,7 +367,8 @@ export const generateSearchQuery = async (
body: JSON.stringify({
model: model,
messages: messages,
prompt: prompt
prompt: prompt,
type: type
})
})
.then(async (res) => {
@ -402,7 +387,39 @@ export const generateSearchQuery = async (
throw error;
}
return res?.choices[0]?.message?.content.replace(/["']/g, '') ?? prompt;
try {
// Step 1: Safely extract the response string
const response = res?.choices[0]?.message?.content ?? '';
// Step 2: Attempt to fix common JSON format issues like single quotes
const sanitizedResponse = response.replace(/['`]/g, '"'); // Convert single quotes to double quotes for valid JSON
// Step 3: Find the relevant JSON block within the response
const jsonStartIndex = sanitizedResponse.indexOf('{');
const jsonEndIndex = sanitizedResponse.lastIndexOf('}');
// Step 4: Check if we found a valid JSON block (with both `{` and `}`)
if (jsonStartIndex !== -1 && jsonEndIndex !== -1) {
const jsonResponse = sanitizedResponse.substring(jsonStartIndex, jsonEndIndex + 1);
// Step 5: Parse the JSON block
const parsed = JSON.parse(jsonResponse);
// Step 6: If there's a "queries" key, return the queries array; otherwise, return an empty array
if (parsed && parsed.queries) {
return Array.isArray(parsed.queries) ? parsed.queries : [];
} else {
return [];
}
}
// If no valid JSON block found, return an empty array
return [];
} catch (e) {
// Catch and safely return empty array on any parsing errors
console.error('Failed to parse response: ', e);
return [];
}
};
export const generateMoACompletion = async (

View file

@ -1,6 +1,11 @@
import { WEBUI_API_BASE_URL } from '$lib/constants';
export const createNewKnowledge = async (token: string, name: string, description: string) => {
export const createNewKnowledge = async (
token: string,
name: string,
description: string,
accessControl: null | object
) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/knowledge/create`, {
@ -12,7 +17,8 @@ export const createNewKnowledge = async (token: string, name: string, descriptio
},
body: JSON.stringify({
name: name,
description: description
description: description,
access_control: accessControl
})
})
.then(async (res) => {
@ -32,7 +38,7 @@ export const createNewKnowledge = async (token: string, name: string, descriptio
return res;
};
export const getKnowledgeItems = async (token: string = '') => {
export const getKnowledgeBases = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/knowledge/`, {
@ -63,6 +69,37 @@ export const getKnowledgeItems = async (token: string = '') => {
return res;
};
export const getKnowledgeBaseList = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/knowledge/list`, {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const getKnowledgeById = async (token: string, id: string) => {
let error = null;
@ -99,6 +136,7 @@ type KnowledgeUpdateForm = {
name?: string;
description?: string;
data?: object;
access_control?: null | object;
};
export const updateKnowledgeById = async (token: string, id: string, form: KnowledgeUpdateForm) => {
@ -114,7 +152,8 @@ export const updateKnowledgeById = async (token: string, id: string, form: Knowl
body: JSON.stringify({
name: form?.name ? form.name : undefined,
description: form?.description ? form.description : undefined,
data: form?.data ? form.data : undefined
data: form?.data ? form.data : undefined,
access_control: form.access_control
})
})
.then(async (res) => {

View file

@ -1,38 +1,9 @@
import { WEBUI_API_BASE_URL } from '$lib/constants';
export const addNewModel = async (token: string, model: object) => {
export const getModels = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models/add`, {
method: 'POST',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
},
body: JSON.stringify(model)
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const getModelInfos = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models`, {
const res = await fetch(`${WEBUI_API_BASE_URL}/models/`, {
method: 'GET',
headers: {
Accept: 'application/json',
@ -60,13 +31,73 @@ export const getModelInfos = async (token: string = '') => {
return res;
};
export const getBaseModels = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models/base`, {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const createNewModel = async (token: string, model: object) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models/create`, {
method: 'POST',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
},
body: JSON.stringify(model)
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const getModelById = async (token: string, id: string) => {
let error = null;
const searchParams = new URLSearchParams();
searchParams.append('id', id);
const res = await fetch(`${WEBUI_API_BASE_URL}/models?${searchParams.toString()}`, {
const res = await fetch(`${WEBUI_API_BASE_URL}/models/model?${searchParams.toString()}`, {
method: 'GET',
headers: {
Accept: 'application/json',
@ -95,13 +126,48 @@ export const getModelById = async (token: string, id: string) => {
return res;
};
export const toggleModelById = async (token: string, id: string) => {
let error = null;
const searchParams = new URLSearchParams();
searchParams.append('id', id);
const res = await fetch(`${WEBUI_API_BASE_URL}/models/model/toggle?${searchParams.toString()}`, {
method: 'POST',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const updateModelById = async (token: string, id: string, model: object) => {
let error = null;
const searchParams = new URLSearchParams();
searchParams.append('id', id);
const res = await fetch(`${WEBUI_API_BASE_URL}/models/update?${searchParams.toString()}`, {
const res = await fetch(`${WEBUI_API_BASE_URL}/models/model/update?${searchParams.toString()}`, {
method: 'POST',
headers: {
Accept: 'application/json',
@ -137,7 +203,39 @@ export const deleteModelById = async (token: string, id: string) => {
const searchParams = new URLSearchParams();
searchParams.append('id', id);
const res = await fetch(`${WEBUI_API_BASE_URL}/models/delete?${searchParams.toString()}`, {
const res = await fetch(`${WEBUI_API_BASE_URL}/models/model/delete?${searchParams.toString()}`, {
method: 'DELETE',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const deleteAllModels = async (token: string) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models/delete/all`, {
method: 'DELETE',
headers: {
Accept: 'application/json',

View file

@ -1,5 +1,40 @@
import { OLLAMA_API_BASE_URL } from '$lib/constants';
export const verifyOllamaConnection = async (
token: string = '',
url: string = '',
key: string = ''
) => {
let error = null;
const res = await fetch(`${OLLAMA_API_BASE_URL}/verify`, {
method: 'POST',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
url,
key
})
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
error = `Ollama: ${err?.error?.message ?? 'Network Problem'}`;
return [];
});
if (error) {
throw error;
}
return res;
};
export const getOllamaConfig = async (token: string = '') => {
let error = null;
@ -32,7 +67,13 @@ export const getOllamaConfig = async (token: string = '') => {
return res;
};
export const updateOllamaConfig = async (token: string = '', enable_ollama_api: boolean) => {
type OllamaConfig = {
ENABLE_OLLAMA_API: boolean;
OLLAMA_BASE_URLS: string[];
OLLAMA_API_CONFIGS: object;
};
export const updateOllamaConfig = async (token: string = '', config: OllamaConfig) => {
let error = null;
const res = await fetch(`${OLLAMA_API_BASE_URL}/config/update`, {
@ -43,7 +84,7 @@ export const updateOllamaConfig = async (token: string = '', enable_ollama_api:
...(token && { authorization: `Bearer ${token}` })
},
body: JSON.stringify({
enable_ollama_api: enable_ollama_api
...config
})
})
.then(async (res) => {
@ -166,10 +207,10 @@ export const getOllamaVersion = async (token: string, urlIdx?: number) => {
return res?.version ?? false;
};
export const getOllamaModels = async (token: string = '') => {
export const getOllamaModels = async (token: string = '', urlIdx: null | number = null) => {
let error = null;
const res = await fetch(`${OLLAMA_API_BASE_URL}/api/tags`, {
const res = await fetch(`${OLLAMA_API_BASE_URL}/api/tags${urlIdx !== null ? `/${urlIdx}` : ''}`, {
method: 'GET',
headers: {
Accept: 'application/json',

View file

@ -32,7 +32,14 @@ export const getOpenAIConfig = async (token: string = '') => {
return res;
};
export const updateOpenAIConfig = async (token: string = '', enable_openai_api: boolean) => {
type OpenAIConfig = {
ENABLE_OPENAI_API: boolean;
OPENAI_API_BASE_URLS: string[];
OPENAI_API_KEYS: string[];
OPENAI_API_CONFIGS: object;
};
export const updateOpenAIConfig = async (token: string = '', config: OpenAIConfig) => {
let error = null;
const res = await fetch(`${OPENAI_API_BASE_URL}/config/update`, {
@ -43,7 +50,7 @@ export const updateOpenAIConfig = async (token: string = '', enable_openai_api:
...(token && { authorization: `Bearer ${token}` })
},
body: JSON.stringify({
enable_openai_api: enable_openai_api
...config
})
})
.then(async (res) => {
@ -231,41 +238,39 @@ export const getOpenAIModels = async (token: string, urlIdx?: number) => {
return res;
};
export const getOpenAIModelsDirect = async (
base_url: string = 'https://api.openai.com/v1',
api_key: string = ''
export const verifyOpenAIConnection = async (
token: string = '',
url: string = 'https://api.openai.com/v1',
key: string = ''
) => {
let error = null;
const res = await fetch(`${base_url}/models`, {
method: 'GET',
const res = await fetch(`${OPENAI_API_BASE_URL}/verify`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${api_key}`
}
Accept: 'application/json',
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
url,
key
})
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.log(err);
error = `OpenAI: ${err?.error?.message ?? 'Network Problem'}`;
return null;
return [];
});
if (error) {
throw error;
}
const models = Array.isArray(res) ? res : (res?.data ?? null);
return models
.map((model) => ({ id: model.id, name: model.name ?? model.id, external: true }))
.filter((model) => (base_url.includes('openai') ? model.name.includes('gpt') : true))
.sort((a, b) => {
return a.name.localeCompare(b.name);
});
return res;
};
export const generateOpenAIChatCompletion = async (

View file

@ -1,11 +1,13 @@
import { WEBUI_API_BASE_URL } from '$lib/constants';
export const createNewPrompt = async (
token: string,
command: string,
title: string,
content: string
) => {
type PromptItem = {
command: string;
title: string;
content: string;
access_control: null | object;
};
export const createNewPrompt = async (token: string, prompt: PromptItem) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/prompts/create`, {
@ -16,9 +18,8 @@ export const createNewPrompt = async (
authorization: `Bearer ${token}`
},
body: JSON.stringify({
command: `/${command}`,
title: title,
content: content
...prompt,
command: `/${prompt.command}`
})
})
.then(async (res) => {
@ -69,6 +70,37 @@ export const getPrompts = async (token: string = '') => {
return res;
};
export const getPromptList = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/prompts/list`, {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const getPromptByCommand = async (token: string, command: string) => {
let error = null;
@ -101,15 +133,10 @@ export const getPromptByCommand = async (token: string, command: string) => {
return res;
};
export const updatePromptByCommand = async (
token: string,
command: string,
title: string,
content: string
) => {
export const updatePromptByCommand = async (token: string, prompt: PromptItem) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/prompts/command/${command}/update`, {
const res = await fetch(`${WEBUI_API_BASE_URL}/prompts/command/${prompt.command}/update`, {
method: 'POST',
headers: {
Accept: 'application/json',
@ -117,9 +144,8 @@ export const updatePromptByCommand = async (
authorization: `Bearer ${token}`
},
body: JSON.stringify({
command: `/${command}`,
title: title,
content: content
...prompt,
command: `/${prompt.command}`
})
})
.then(async (res) => {


@ -62,6 +62,37 @@ export const getTools = async (token: string = '') => {
return res;
};
export const getToolList = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/tools/list`, {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err.detail;
console.log(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const exportTools = async (token: string = '') => {
let error = null;


@ -1,10 +1,10 @@
import { WEBUI_API_BASE_URL } from '$lib/constants';
import { getUserPosition } from '$lib/utils';
export const getUserPermissions = async (token: string) => {
export const getUserGroups = async (token: string) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/users/permissions/user`, {
const res = await fetch(`${WEBUI_API_BASE_URL}/users/groups`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
@ -28,10 +28,37 @@ export const getUserPermissions = async (token: string) => {
return res;
};
export const updateUserPermissions = async (token: string, permissions: object) => {
export const getUserDefaultPermissions = async (token: string) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/users/permissions/user`, {
const res = await fetch(`${WEBUI_API_BASE_URL}/users/default/permissions`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.log(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
return res;
};
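A brief sketch of how an admin view might consume the renamed helpers shown in this diff, assuming they are exported from $lib/apis/users; the old getUserPermissions/updateUserPermissions pair is replaced by default-permission variants, and group membership now comes from the new /users/groups route. The permissions payload is hypothetical, since its exact shape is defined by the backend.

import {
	getUserGroups,
	getUserDefaultPermissions,
	updateUserDefaultPermissions
} from '$lib/apis/users';

const loadAccessSettings = async () => {
	const groups = await getUserGroups(localStorage.token);
	const defaultPermissions = await getUserDefaultPermissions(localStorage.token);
	return { groups, defaultPermissions };
};

const saveDefaultPermissions = async (permissions: object) => {
	// Sends the updated default-permissions object to /users/default/permissions.
	await updateUserDefaultPermissions(localStorage.token, permissions);
};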
export const updateUserDefaultPermissions = async (token: string, permissions: object) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/users/default/permissions`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',


@ -22,7 +22,7 @@
});
</script>
<Modal bind:show>
<Modal bind:show size="lg">
<div class="px-5 pt-4 dark:text-gray-300 text-gray-700">
<div class="flex justify-between items-start">
<div class="text-xl font-semibold">
@ -59,7 +59,7 @@
</div>
<div class=" w-full p-4 px-5 text-gray-700 dark:text-gray-100">
<div class=" overflow-y-scroll max-h-80 scrollbar-hidden">
<div class=" overflow-y-scroll max-h-96 scrollbar-hidden">
<div class="mb-3">
{#if changelog}
{#each Object.keys(changelog) as version}
@ -111,7 +111,7 @@
await updateUserSettings(localStorage.token, { ui: $settings });
show = false;
}}
class=" px-4 py-2 bg-emerald-700 hover:bg-emerald-800 text-gray-100 transition rounded-lg"
class="px-3.5 py-1.5 text-sm font-medium bg-black hover:bg-gray-900 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition rounded-full"
>
<span class="relative">{$i18n.t("Okay, Let's Go!")}</span>
</button>


@ -0,0 +1,78 @@
<script>
import { getContext } from 'svelte';
const i18n = getContext('i18n');
import { WEBUI_BASE_URL } from '$lib/constants';
import Marquee from './common/Marquee.svelte';
import SlideShow from './common/SlideShow.svelte';
import ArrowRightCircle from './icons/ArrowRightCircle.svelte';
export let show = true;
export let getStartedHandler = () => {};
</script>
{#if show}
<div class="w-full h-screen max-h-[100dvh] text-white relative">
<div class="fixed m-10 z-50">
<div class="flex space-x-2">
<div class=" self-center">
<img
crossorigin="anonymous"
src="{WEBUI_BASE_URL}/static/favicon.png"
class=" w-6 rounded-full"
alt="logo"
/>
</div>
</div>
</div>
<SlideShow duration={5000} />
<div
class="w-full h-full absolute top-0 left-0 bg-gradient-to-t from-20% from-black to-transparent"
></div>
<div class="w-full h-full absolute top-0 left-0 backdrop-blur-sm bg-black/50"></div>
<div class="relative bg-transparent w-full min-h-screen flex z-10">
<div class="flex flex-col justify-end w-full items-center pb-10 text-center">
<div class="text-5xl lg:text-7xl font-secondary">
<Marquee
duration={5000}
words={[
$i18n.t('Explore the cosmos'),
$i18n.t('Unlock mysteries'),
$i18n.t('Chart new frontiers'),
$i18n.t('Dive into knowledge'),
$i18n.t('Discover wonders'),
$i18n.t('Ignite curiosity'),
$i18n.t('Forge new paths'),
$i18n.t('Unravel secrets'),
$i18n.t('Pioneer insights'),
$i18n.t('Embark on adventures')
]}
/>
<div class="mt-0.5">{$i18n.t(`wherever you are`)}</div>
</div>
<div class="flex justify-center mt-8">
<div class="flex flex-col justify-center items-center">
<button
class="relative z-20 flex p-1 rounded-full bg-white/5 hover:bg-white/10 transition font-medium text-sm"
on:click={() => {
getStartedHandler();
}}
>
<ArrowRightCircle className="size-6" />
</button>
<div class="mt-1.5 font-primary text-base font-medium">{$i18n.t(`Get started`)}</div>
</div>
</div>
</div>
<!-- <div class="absolute bottom-12 left-0 right-0 w-full"></div> -->
</div>
</div>
{/if}


@ -1,677 +1,100 @@
<script lang="ts">
import fileSaver from 'file-saver';
const { saveAs } = fileSaver;
import { onMount, getContext } from 'svelte';
import dayjs from 'dayjs';
import relativeTime from 'dayjs/plugin/relativeTime';
dayjs.extend(relativeTime);
import * as ort from 'onnxruntime-web';
import { AutoModel, AutoTokenizer } from '@huggingface/transformers';
const EMBEDDING_MODEL = 'TaylorAI/bge-micro-v2';
let tokenizer = null;
let model = null;
import { models } from '$lib/stores';
import { deleteFeedbackById, exportAllFeedbacks, getAllFeedbacks } from '$lib/apis/evaluations';
import FeedbackMenu from './Evaluations/FeedbackMenu.svelte';
import EllipsisHorizontal from '../icons/EllipsisHorizontal.svelte';
import Tooltip from '../common/Tooltip.svelte';
import Badge from '../common/Badge.svelte';
import Pagination from '../common/Pagination.svelte';
import MagnifyingGlass from '../icons/MagnifyingGlass.svelte';
import Share from '../icons/Share.svelte';
import CloudArrowUp from '../icons/CloudArrowUp.svelte';
<script>
import { getContext, tick, onMount } from 'svelte';
import { toast } from 'svelte-sonner';
import Spinner from '../common/Spinner.svelte';
import DocumentArrowUpSolid from '../icons/DocumentArrowUpSolid.svelte';
import DocumentArrowDown from '../icons/DocumentArrowDown.svelte';
import ArrowDownTray from '../icons/ArrowDownTray.svelte';
import Leaderboard from './Evaluations/Leaderboard.svelte';
import Feedbacks from './Evaluations/Feedbacks.svelte';
import { getAllFeedbacks } from '$lib/apis/evaluations';
const i18n = getContext('i18n');
let rankedModels = [];
let feedbacks = [];
let query = '';
let page = 1;
let tagEmbeddings = new Map();
let selectedTab = 'leaderboard';
let loaded = false;
let loadingLeaderboard = true;
let debounceTimer;
$: paginatedFeedbacks = feedbacks.slice((page - 1) * 10, page * 10);
type Feedback = {
id: string;
data: {
rating: number;
model_id: string;
sibling_model_ids: string[] | null;
reason: string;
comment: string;
tags: string[];
};
user: {
name: string;
profile_image_url: string;
};
updated_at: number;
};
type ModelStats = {
rating: number;
won: number;
lost: number;
};
//////////////////////
//
// Rank models by Elo rating
//
//////////////////////
const rankHandler = async (similarities: Map<string, number> = new Map()) => {
const modelStats = calculateModelStats(feedbacks, similarities);
rankedModels = $models
.filter((m) => m?.owned_by !== 'arena' && (m?.info?.meta?.hidden ?? false) !== true)
.map((model) => {
const stats = modelStats.get(model.id);
return {
...model,
rating: stats ? Math.round(stats.rating) : '-',
stats: {
count: stats ? stats.won + stats.lost : 0,
won: stats ? stats.won.toString() : '-',
lost: stats ? stats.lost.toString() : '-'
}
};
})
.sort((a, b) => {
if (a.rating === '-' && b.rating !== '-') return 1;
if (b.rating === '-' && a.rating !== '-') return -1;
if (a.rating !== '-' && b.rating !== '-') return b.rating - a.rating;
return a.name.localeCompare(b.name);
});
loadingLeaderboard = false;
};
function calculateModelStats(
feedbacks: Feedback[],
similarities: Map<string, number>
): Map<string, ModelStats> {
const stats = new Map<string, ModelStats>();
const K = 32;
function getOrDefaultStats(modelId: string): ModelStats {
return stats.get(modelId) || { rating: 1000, won: 0, lost: 0 };
}
function updateStats(modelId: string, ratingChange: number, outcome: number) {
const currentStats = getOrDefaultStats(modelId);
currentStats.rating += ratingChange;
if (outcome === 1) currentStats.won++;
else if (outcome === 0) currentStats.lost++;
stats.set(modelId, currentStats);
}
function calculateEloChange(
ratingA: number,
ratingB: number,
outcome: number,
similarity: number
): number {
const expectedScore = 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
return K * (outcome - expectedScore) * similarity;
}
feedbacks.forEach((feedback) => {
const modelA = feedback.data.model_id;
const statsA = getOrDefaultStats(modelA);
let outcome: number;
switch (feedback.data.rating.toString()) {
case '1':
outcome = 1;
break;
case '-1':
outcome = 0;
break;
default:
return; // Skip invalid ratings
}
// If the query is empty, set similarity to 1, else get the similarity from the map
const similarity = query !== '' ? similarities.get(feedback.id) || 0 : 1;
const opponents = feedback.data.sibling_model_ids || [];
opponents.forEach((modelB) => {
const statsB = getOrDefaultStats(modelB);
const changeA = calculateEloChange(statsA.rating, statsB.rating, outcome, similarity);
const changeB = calculateEloChange(statsB.rating, statsA.rating, 1 - outcome, similarity);
updateStats(modelA, changeA, outcome);
updateStats(modelB, changeB, 1 - outcome);
});
});
return stats;
}
//////////////////////
//
// Calculate cosine similarity
//
//////////////////////
const cosineSimilarity = (vecA, vecB) => {
// Ensure the lengths of the vectors are the same
if (vecA.length !== vecB.length) {
throw new Error('Vectors must be the same length');
}
// Calculate the dot product
let dotProduct = 0;
let normA = 0;
let normB = 0;
for (let i = 0; i < vecA.length; i++) {
dotProduct += vecA[i] * vecB[i];
normA += vecA[i] ** 2;
normB += vecB[i] ** 2;
}
// Calculate the magnitudes
normA = Math.sqrt(normA);
normB = Math.sqrt(normB);
// Avoid division by zero
if (normA === 0 || normB === 0) {
return 0;
}
// Return the cosine similarity
return dotProduct / (normA * normB);
};
const calculateMaxSimilarity = (queryEmbedding, tagEmbeddings: Map<string, number[]>) => {
let maxSimilarity = 0;
for (const tagEmbedding of tagEmbeddings.values()) {
const similarity = cosineSimilarity(queryEmbedding, tagEmbedding);
maxSimilarity = Math.max(maxSimilarity, similarity);
}
return maxSimilarity;
};
//////////////////////
//
// Embedding functions
//
//////////////////////
const getEmbeddings = async (text: string) => {
const tokens = await tokenizer(text);
const output = await model(tokens);
// Perform mean pooling on the last hidden states
const embeddings = output.last_hidden_state.mean(1);
return embeddings.ort_tensor.data;
};
const getTagEmbeddings = async (tags: string[]) => {
const embeddings = new Map();
for (const tag of tags) {
if (!tagEmbeddings.has(tag)) {
tagEmbeddings.set(tag, await getEmbeddings(tag));
}
embeddings.set(tag, tagEmbeddings.get(tag));
}
return embeddings;
};
const debouncedQueryHandler = async () => {
loadingLeaderboard = true;
if (query.trim() === '') {
rankHandler();
return;
}
clearTimeout(debounceTimer);
debounceTimer = setTimeout(async () => {
const queryEmbedding = await getEmbeddings(query);
const similarities = new Map<string, number>();
for (const feedback of feedbacks) {
const feedbackTags = feedback.data.tags || [];
const tagEmbeddings = await getTagEmbeddings(feedbackTags);
const maxSimilarity = calculateMaxSimilarity(queryEmbedding, tagEmbeddings);
similarities.set(feedback.id, maxSimilarity);
}
rankHandler(similarities);
}, 1500); // Debounce for 1.5 seconds
};
$: query, debouncedQueryHandler();
//////////////////////
//
// CRUD operations
//
//////////////////////
const deleteFeedbackHandler = async (feedbackId: string) => {
const response = await deleteFeedbackById(localStorage.token, feedbackId).catch((err) => {
toast.error(err);
return null;
});
if (response) {
feedbacks = feedbacks.filter((f) => f.id !== feedbackId);
}
};
const shareHandler = async () => {
toast.success($i18n.t('Redirecting you to OpenWebUI Community'));
// remove snapshot from feedbacks
const feedbacksToShare = feedbacks.map((f) => {
const { snapshot, user, ...rest } = f;
return rest;
});
console.log(feedbacksToShare);
const url = 'https://openwebui.com';
const tab = await window.open(`${url}/leaderboard`, '_blank');
// Define the event handler function
const messageHandler = (event) => {
if (event.origin !== url) return;
if (event.data === 'loaded') {
tab.postMessage(JSON.stringify(feedbacksToShare), '*');
// Remove the event listener after handling the message
window.removeEventListener('message', messageHandler);
}
};
window.addEventListener('message', messageHandler, false);
};
const exportHandler = async () => {
const _feedbacks = await exportAllFeedbacks(localStorage.token).catch((err) => {
toast.error(err);
return null;
});
if (_feedbacks) {
let blob = new Blob([JSON.stringify(_feedbacks)], {
type: 'application/json'
});
saveAs(blob, `feedback-history-export-${Date.now()}.json`);
}
};
const loadEmbeddingModel = async () => {
// Check if the tokenizer and model are already loaded and stored in the window object
if (!window.tokenizer) {
window.tokenizer = await AutoTokenizer.from_pretrained(EMBEDDING_MODEL);
}
if (!window.model) {
window.model = await AutoModel.from_pretrained(EMBEDDING_MODEL);
}
// Use the tokenizer and model from the window object
tokenizer = window.tokenizer;
model = window.model;
// Pre-compute embeddings for all unique tags
const allTags = new Set(feedbacks.flatMap((feedback) => feedback.data.tags || []));
await getTagEmbeddings(Array.from(allTags));
};
let feedbacks = [];
onMount(async () => {
feedbacks = await getAllFeedbacks(localStorage.token);
loaded = true;
rankHandler();
const containerElement = document.getElementById('users-tabs-container');
if (containerElement) {
containerElement.addEventListener('wheel', function (event) {
if (event.deltaY !== 0) {
// Adjust horizontal scroll position based on vertical scroll
containerElement.scrollLeft += event.deltaY;
}
});
}
});
</script>
{#if loaded}
<div class="mt-0.5 mb-2 gap-1 flex flex-col md:flex-row justify-between">
<div class="flex md:self-center text-lg font-medium px-0.5 shrink-0 items-center">
<div class=" gap-1">
{$i18n.t('Leaderboard')}
</div>
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
<span class="text-lg font-medium text-gray-500 dark:text-gray-300 mr-1.5"
>{rankedModels.length}</span
>
</div>
<div class=" flex space-x-2">
<Tooltip content={$i18n.t('Re-rank models by topic similarity')}>
<div class="flex flex-1">
<div class=" self-center ml-1 mr-3">
<MagnifyingGlass className="size-3" />
</div>
<input
class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-none bg-transparent"
bind:value={query}
placeholder={$i18n.t('Search')}
on:focus={() => {
loadEmbeddingModel();
}}
/>
</div>
</Tooltip>
</div>
</div>
<div class="flex flex-col lg:flex-row w-full h-full pb-2 lg:space-x-4">
<div
class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full rounded pt-0.5"
id="users-tabs-container"
class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
>
{#if loadingLeaderboard}
<div class=" absolute top-0 bottom-0 left-0 right-0 flex">
<div class="m-auto">
<Spinner />
</div>
</div>
{/if}
{#if (rankedModels ?? []).length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 py-1">
{$i18n.t('No models found')}
</div>
{:else}
<table
class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full rounded {loadingLeaderboard
? 'opacity-20'
: ''}"
>
<thead
class="text-xs text-gray-700 uppercase bg-gray-50 dark:bg-gray-850 dark:text-gray-400 -translate-y-0.5"
>
<tr class="">
<th scope="col" class="px-3 py-1.5 cursor-pointer select-none w-3">
{$i18n.t('RK')}
</th>
<th scope="col" class="px-3 py-1.5 cursor-pointer select-none">
{$i18n.t('Model')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-fit">
{$i18n.t('Rating')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-5">
{$i18n.t('Won')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-5">
{$i18n.t('Lost')}
</th>
</tr>
</thead>
<tbody class="">
{#each rankedModels as model, modelIdx (model.id)}
<tr class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs group">
<td class="px-3 py-1.5 text-left font-medium text-gray-900 dark:text-white w-fit">
<div class=" line-clamp-1">
{model?.rating !== '-' ? modelIdx + 1 : '-'}
</div>
</td>
<td class="px-3 py-1.5 flex flex-col justify-center">
<div class="flex items-center gap-2">
<div class="flex-shrink-0">
<img
src={model?.info?.meta?.profile_image_url ?? '/favicon.png'}
alt={model.name}
class="size-5 rounded-full object-cover shrink-0"
/>
</div>
<div class="font-medium text-gray-800 dark:text-gray-200 pr-4">
{model.name}
</div>
</div>
</td>
<td class="px-3 py-1.5 text-right font-medium text-gray-900 dark:text-white w-max">
{model.rating}
</td>
<td class=" px-3 py-1.5 text-right font-semibold text-green-500">
<div class=" w-10">
{#if model.stats.won === '-'}
-
{:else}
<span class="hidden group-hover:inline"
>{((model.stats.won / model.stats.count) * 100).toFixed(1)}%</span
>
<span class=" group-hover:hidden">{model.stats.won}</span>
{/if}
</div>
</td>
<td class="px-3 py-1.5 text-right font-semibold text-red-500">
<div class=" w-10">
{#if model.stats.lost === '-'}
-
{:else}
<span class="hidden group-hover:inline"
>{((model.stats.lost / model.stats.count) * 100).toFixed(1)}%</span
>
<span class=" group-hover:hidden">{model.stats.lost}</span>
{/if}
</div>
</td>
</tr>
{/each}
</tbody>
</table>
{/if}
</div>
<div class=" text-gray-500 text-xs mt-1.5 w-full flex justify-end">
<div class=" text-right">
<div class="line-clamp-1">
{$i18n.t(
'The evaluation leaderboard is based on the Elo rating system and is updated in real-time.'
)}
</div>
{$i18n.t(
'The leaderboard is currently in beta, and we may adjust the rating calculations as we refine the algorithm.'
)}
</div>
</div>
<div class="pb-4"></div>
<div class="mt-0.5 mb-2 gap-1 flex flex-col md:flex-row justify-between">
<div class="flex md:self-center text-lg font-medium px-0.5">
{$i18n.t('Feedback History')}
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
<span class="text-lg font-medium text-gray-500 dark:text-gray-300">{feedbacks.length}</span>
</div>
<div>
<div>
<Tooltip content={$i18n.t('Export')}>
<button
class=" p-2 rounded-xl hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 transition font-medium text-sm flex items-center space-x-1"
class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab ===
'leaderboard'
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
exportHandler();
selectedTab = 'leaderboard';
}}
>
<ArrowDownTray className="size-3" />
</button>
</Tooltip>
</div>
</div>
</div>
<div
class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full rounded pt-0.5"
<div class=" self-center mr-2">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="size-4"
>
{#if (feedbacks ?? []).length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 py-1">
{$i18n.t('No feedbacks found')}
</div>
{:else}
<table
class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full rounded"
>
<thead
class="text-xs text-gray-700 uppercase bg-gray-50 dark:bg-gray-850 dark:text-gray-400 -translate-y-0.5"
>
<tr class="">
<th scope="col" class="px-3 text-right cursor-pointer select-none w-0">
{$i18n.t('User')}
</th>
<th scope="col" class="px-3 pr-1.5 cursor-pointer select-none">
{$i18n.t('Models')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-fit">
{$i18n.t('Result')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-0">
{$i18n.t('Updated At')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-0"> </th>
</tr>
</thead>
<tbody class="">
{#each paginatedFeedbacks as feedback (feedback.id)}
<tr class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs">
<td class=" py-0.5 text-right font-semibold">
<div class="flex justify-center">
<Tooltip content={feedback?.user?.name}>
<div class="flex-shrink-0">
<img
src={feedback?.user?.profile_image_url ?? '/user.png'}
alt={feedback?.user?.name}
class="size-5 rounded-full object-cover shrink-0"
<path
fill-rule="evenodd"
d="M4 2a1.5 1.5 0 0 0-1.5 1.5v9A1.5 1.5 0 0 0 4 14h8a1.5 1.5 0 0 0 1.5-1.5V6.621a1.5 1.5 0 0 0-.44-1.06L9.94 2.439A1.5 1.5 0 0 0 8.878 2H4Zm6 5.75a.75.75 0 0 1 1.5 0v3.5a.75.75 0 0 1-1.5 0v-3.5Zm-2.75 1.5a.75.75 0 0 1 1.5 0v2a.75.75 0 0 1-1.5 0v-2Zm-2 .75a.75.75 0 0 0-.75.75v.5a.75.75 0 0 0 1.5 0v-.5a.75.75 0 0 0-.75-.75Z"
clip-rule="evenodd"
/>
</svg>
</div>
</Tooltip>
</div>
</td>
<div class=" self-center">{$i18n.t('Leaderboard')}</div>
</button>
<td class=" py-1 pl-3 flex flex-col">
<div class="flex flex-col items-start gap-0.5 h-full">
<div class="flex flex-col h-full">
{#if feedback.data?.sibling_model_ids}
<div class="font-semibold text-gray-600 dark:text-gray-400 flex-1">
{feedback.data?.model_id}
</div>
<Tooltip content={feedback.data.sibling_model_ids.join(', ')}>
<div class=" text-[0.65rem] text-gray-600 dark:text-gray-400 line-clamp-1">
{#if feedback.data.sibling_model_ids.length > 2}
<!-- {$i18n.t('and {{COUNT}} more')} -->
{feedback.data.sibling_model_ids.slice(0, 2).join(', ')}, {$i18n.t(
'and {{COUNT}} more',
{ COUNT: feedback.data.sibling_model_ids.length - 2 }
)}
{:else}
{feedback.data.sibling_model_ids.join(', ')}
{/if}
</div>
</Tooltip>
{:else}
<div
class=" text-sm font-medium text-gray-600 dark:text-gray-400 flex-1 py-1.5"
>
{feedback.data?.model_id}
</div>
{/if}
</div>
</div>
</td>
<td class="px-3 py-1 text-right font-medium text-gray-900 dark:text-white w-max">
<div class=" flex justify-end">
{#if feedback.data.rating.toString() === '1'}
<Badge type="info" content={$i18n.t('Won')} />
{:else if feedback.data.rating.toString() === '0'}
<Badge type="muted" content={$i18n.t('Draw')} />
{:else if feedback.data.rating.toString() === '-1'}
<Badge type="error" content={$i18n.t('Lost')} />
{/if}
</div>
</td>
<td class=" px-3 py-1 text-right font-medium">
{dayjs(feedback.updated_at * 1000).fromNow()}
</td>
<td class=" px-3 py-1 text-right font-semibold">
<FeedbackMenu
on:delete={(e) => {
deleteFeedbackHandler(feedback.id);
<button
class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab ===
'feedbacks'
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'feedbacks';
}}
>
<button
class="self-center w-fit text-sm p-1.5 dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
<div class=" self-center mr-2">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="size-4"
>
<EllipsisHorizontal />
<path
fill-rule="evenodd"
d="M5.25 2A2.25 2.25 0 0 0 3 4.25v9a.75.75 0 0 0 1.183.613l1.692-1.195 1.692 1.195a.75.75 0 0 0 .866 0l1.692-1.195 1.693 1.195A.75.75 0 0 0 13 13.25v-9A2.25 2.25 0 0 0 10.75 2h-5.5Zm3.03 3.28a.75.75 0 0 0-1.06-1.06L4.97 6.47a.75.75 0 0 0 0 1.06l2.25 2.25a.75.75 0 0 0 1.06-1.06l-.97-.97h1.315c.76 0 1.375.616 1.375 1.375a.75.75 0 0 0 1.5 0A2.875 2.875 0 0 0 8.625 6.25H7.311l.97-.97Z"
clip-rule="evenodd"
/>
</svg>
</div>
<div class=" self-center">{$i18n.t('Feedbacks')}</div>
</button>
</FeedbackMenu>
</td>
</tr>
{/each}
</tbody>
</table>
</div>
<div class="flex-1 mt-1 lg:mt-0 overflow-y-scroll">
{#if selectedTab === 'leaderboard'}
<Leaderboard {feedbacks} />
{:else if selectedTab === 'feedbacks'}
<Feedbacks {feedbacks} />
{/if}
</div>
{#if feedbacks.length > 0}
<div class=" flex flex-col justify-end w-full text-right gap-1">
<div class="line-clamp-1 text-gray-500 text-xs">
{$i18n.t('Help us create the best community leaderboard by sharing your feedback history!')}
</div>
<div class="flex space-x-1 ml-auto">
<Tooltip
content={$i18n.t(
'To protect your privacy, only ratings, model IDs, tags, and metadata are shared from your feedback—your chat logs remain private and are not included.'
)}
>
<button
class="flex text-xs items-center px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-gray-200 transition"
on:click={async () => {
shareHandler();
}}
>
<div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Share to OpenWebUI Community')}
</div>
<div class=" self-center">
<CloudArrowUp className="size-3" strokeWidth="3" />
</div>
</button>
</Tooltip>
</div>
</div>
{/if}
{#if feedbacks.length > 10}
<Pagination bind:page count={feedbacks.length} perPage={10} />
{/if}
<div class="pb-12"></div>
{/if}


@ -0,0 +1,283 @@
<script lang="ts">
import { toast } from 'svelte-sonner';
import fileSaver from 'file-saver';
const { saveAs } = fileSaver;
import dayjs from 'dayjs';
import relativeTime from 'dayjs/plugin/relativeTime';
dayjs.extend(relativeTime);
import { onMount, getContext } from 'svelte';
const i18n = getContext('i18n');
import { deleteFeedbackById, exportAllFeedbacks, getAllFeedbacks } from '$lib/apis/evaluations';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte';
import Badge from '$lib/components/common/Badge.svelte';
import CloudArrowUp from '$lib/components/icons/CloudArrowUp.svelte';
import Pagination from '$lib/components/common/Pagination.svelte';
import FeedbackMenu from './FeedbackMenu.svelte';
import EllipsisHorizontal from '$lib/components/icons/EllipsisHorizontal.svelte';
export let feedbacks = [];
let page = 1;
$: paginatedFeedbacks = feedbacks.slice((page - 1) * 10, page * 10);
type Feedback = {
id: string;
data: {
rating: number;
model_id: string;
sibling_model_ids: string[] | null;
reason: string;
comment: string;
tags: string[];
};
user: {
name: string;
profile_image_url: string;
};
updated_at: number;
};
type ModelStats = {
rating: number;
won: number;
lost: number;
};
//////////////////////
//
// CRUD operations
//
//////////////////////
const deleteFeedbackHandler = async (feedbackId: string) => {
const response = await deleteFeedbackById(localStorage.token, feedbackId).catch((err) => {
toast.error(err);
return null;
});
if (response) {
feedbacks = feedbacks.filter((f) => f.id !== feedbackId);
}
};
const shareHandler = async () => {
toast.success($i18n.t('Redirecting you to OpenWebUI Community'));
// remove snapshot from feedbacks
const feedbacksToShare = feedbacks.map((f) => {
const { snapshot, user, ...rest } = f;
return rest;
});
console.log(feedbacksToShare);
const url = 'https://openwebui.com';
const tab = await window.open(`${url}/leaderboard`, '_blank');
// Define the event handler function
const messageHandler = (event) => {
if (event.origin !== url) return;
if (event.data === 'loaded') {
tab.postMessage(JSON.stringify(feedbacksToShare), '*');
// Remove the event listener after handling the message
window.removeEventListener('message', messageHandler);
}
};
window.addEventListener('message', messageHandler, false);
};
const exportHandler = async () => {
const _feedbacks = await exportAllFeedbacks(localStorage.token).catch((err) => {
toast.error(err);
return null;
});
if (_feedbacks) {
let blob = new Blob([JSON.stringify(_feedbacks)], {
type: 'application/json'
});
saveAs(blob, `feedback-history-export-${Date.now()}.json`);
}
};
</script>
<div class="mt-0.5 mb-2 gap-1 flex flex-row justify-between">
<div class="flex md:self-center text-lg font-medium px-0.5">
{$i18n.t('Feedback History')}
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
<span class="text-lg font-medium text-gray-500 dark:text-gray-300">{feedbacks.length}</span>
</div>
<div>
<div>
<Tooltip content={$i18n.t('Export')}>
<button
class=" p-2 rounded-xl hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 transition font-medium text-sm flex items-center space-x-1"
on:click={() => {
exportHandler();
}}
>
<ArrowDownTray className="size-3" />
</button>
</Tooltip>
</div>
</div>
</div>
<div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full rounded pt-0.5">
{#if (feedbacks ?? []).length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 py-1">
{$i18n.t('No feedbacks found')}
</div>
{:else}
<table
class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full rounded"
>
<thead
class="text-xs text-gray-700 uppercase bg-gray-50 dark:bg-gray-850 dark:text-gray-400 -translate-y-0.5"
>
<tr class="">
<th scope="col" class="px-3 text-right cursor-pointer select-none w-0">
{$i18n.t('User')}
</th>
<th scope="col" class="px-3 pr-1.5 cursor-pointer select-none">
{$i18n.t('Models')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-fit">
{$i18n.t('Result')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-0">
{$i18n.t('Updated At')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-0"> </th>
</tr>
</thead>
<tbody class="">
{#each paginatedFeedbacks as feedback (feedback.id)}
<tr class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs">
<td class=" py-0.5 text-right font-semibold">
<div class="flex justify-center">
<Tooltip content={feedback?.user?.name}>
<div class="flex-shrink-0">
<img
src={feedback?.user?.profile_image_url ?? '/user.png'}
alt={feedback?.user?.name}
class="size-5 rounded-full object-cover shrink-0"
/>
</div>
</Tooltip>
</div>
</td>
<td class=" py-1 pl-3 flex flex-col">
<div class="flex flex-col items-start gap-0.5 h-full">
<div class="flex flex-col h-full">
{#if feedback.data?.sibling_model_ids}
<div class="font-semibold text-gray-600 dark:text-gray-400 flex-1">
{feedback.data?.model_id}
</div>
<Tooltip content={feedback.data.sibling_model_ids.join(', ')}>
<div class=" text-[0.65rem] text-gray-600 dark:text-gray-400 line-clamp-1">
{#if feedback.data.sibling_model_ids.length > 2}
<!-- {$i18n.t('and {{COUNT}} more')} -->
{feedback.data.sibling_model_ids.slice(0, 2).join(', ')}, {$i18n.t(
'and {{COUNT}} more',
{ COUNT: feedback.data.sibling_model_ids.length - 2 }
)}
{:else}
{feedback.data.sibling_model_ids.join(', ')}
{/if}
</div>
</Tooltip>
{:else}
<div
class=" text-sm font-medium text-gray-600 dark:text-gray-400 flex-1 py-1.5"
>
{feedback.data?.model_id}
</div>
{/if}
</div>
</div>
</td>
<td class="px-3 py-1 text-right font-medium text-gray-900 dark:text-white w-max">
<div class=" flex justify-end">
{#if feedback.data.rating.toString() === '1'}
<Badge type="info" content={$i18n.t('Won')} />
{:else if feedback.data.rating.toString() === '0'}
<Badge type="muted" content={$i18n.t('Draw')} />
{:else if feedback.data.rating.toString() === '-1'}
<Badge type="error" content={$i18n.t('Lost')} />
{/if}
</div>
</td>
<td class=" px-3 py-1 text-right font-medium">
{dayjs(feedback.updated_at * 1000).fromNow()}
</td>
<td class=" px-3 py-1 text-right font-semibold">
<FeedbackMenu
on:delete={(e) => {
deleteFeedbackHandler(feedback.id);
}}
>
<button
class="self-center w-fit text-sm p-1.5 dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
>
<EllipsisHorizontal />
</button>
</FeedbackMenu>
</td>
</tr>
{/each}
</tbody>
</table>
{/if}
</div>
{#if feedbacks.length > 0}
<div class=" flex flex-col justify-end w-full text-right gap-1">
<div class="line-clamp-1 text-gray-500 text-xs">
{$i18n.t('Help us create the best community leaderboard by sharing your feedback history!')}
</div>
<div class="flex space-x-1 ml-auto">
<Tooltip
content={$i18n.t(
'To protect your privacy, only ratings, model IDs, tags, and metadata are shared from your feedback—your chat logs remain private and are not included.'
)}
>
<button
class="flex text-xs items-center px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-gray-200 transition"
on:click={async () => {
shareHandler();
}}
>
<div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Share to OpenWebUI Community')}
</div>
<div class=" self-center">
<CloudArrowUp className="size-3" strokeWidth="3" />
</div>
</button>
</Tooltip>
</div>
</div>
{/if}
{#if feedbacks.length > 10}
<Pagination bind:page count={feedbacks.length} perPage={10} />
{/if}


@ -0,0 +1,410 @@
<script lang="ts">
import * as ort from 'onnxruntime-web';
import { AutoModel, AutoTokenizer } from '@huggingface/transformers';
import { onMount, getContext } from 'svelte';
import { models } from '$lib/stores';
import Spinner from '$lib/components/common/Spinner.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import MagnifyingGlass from '$lib/components/icons/MagnifyingGlass.svelte';
const i18n = getContext('i18n');
const EMBEDDING_MODEL = 'TaylorAI/bge-micro-v2';
let tokenizer = null;
let model = null;
export let feedbacks = [];
let rankedModels = [];
let query = '';
let tagEmbeddings = new Map();
let loadingLeaderboard = true;
let debounceTimer;
type Feedback = {
id: string;
data: {
rating: number;
model_id: string;
sibling_model_ids: string[] | null;
reason: string;
comment: string;
tags: string[];
};
user: {
name: string;
profile_image_url: string;
};
updated_at: number;
};
type ModelStats = {
rating: number;
won: number;
lost: number;
};
//////////////////////
//
// Rank models by Elo rating
//
//////////////////////
const rankHandler = async (similarities: Map<string, number> = new Map()) => {
const modelStats = calculateModelStats(feedbacks, similarities);
rankedModels = $models
.filter((m) => m?.owned_by !== 'arena' && (m?.info?.meta?.hidden ?? false) !== true)
.map((model) => {
const stats = modelStats.get(model.id);
return {
...model,
rating: stats ? Math.round(stats.rating) : '-',
stats: {
count: stats ? stats.won + stats.lost : 0,
won: stats ? stats.won.toString() : '-',
lost: stats ? stats.lost.toString() : '-'
}
};
})
.sort((a, b) => {
if (a.rating === '-' && b.rating !== '-') return 1;
if (b.rating === '-' && a.rating !== '-') return -1;
if (a.rating !== '-' && b.rating !== '-') return b.rating - a.rating;
return a.name.localeCompare(b.name);
});
loadingLeaderboard = false;
};
function calculateModelStats(
feedbacks: Feedback[],
similarities: Map<string, number>
): Map<string, ModelStats> {
const stats = new Map<string, ModelStats>();
const K = 32;
function getOrDefaultStats(modelId: string): ModelStats {
return stats.get(modelId) || { rating: 1000, won: 0, lost: 0 };
}
function updateStats(modelId: string, ratingChange: number, outcome: number) {
const currentStats = getOrDefaultStats(modelId);
currentStats.rating += ratingChange;
if (outcome === 1) currentStats.won++;
else if (outcome === 0) currentStats.lost++;
stats.set(modelId, currentStats);
}
function calculateEloChange(
ratingA: number,
ratingB: number,
outcome: number,
similarity: number
): number {
const expectedScore = 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
return K * (outcome - expectedScore) * similarity;
}
feedbacks.forEach((feedback) => {
const modelA = feedback.data.model_id;
const statsA = getOrDefaultStats(modelA);
let outcome: number;
switch (feedback.data.rating.toString()) {
case '1':
outcome = 1;
break;
case '-1':
outcome = 0;
break;
default:
return; // Skip invalid ratings
}
// If the query is empty, set similarity to 1, else get the similarity from the map
const similarity = query !== '' ? similarities.get(feedback.id) || 0 : 1;
const opponents = feedback.data.sibling_model_ids || [];
opponents.forEach((modelB) => {
const statsB = getOrDefaultStats(modelB);
const changeA = calculateEloChange(statsA.rating, statsB.rating, outcome, similarity);
const changeB = calculateEloChange(statsB.rating, statsA.rating, 1 - outcome, similarity);
updateStats(modelA, changeA, outcome);
updateStats(modelB, changeB, 1 - outcome);
});
});
return stats;
}
//////////////////////
//
// Calculate cosine similarity
//
//////////////////////
const cosineSimilarity = (vecA, vecB) => {
// Ensure the lengths of the vectors are the same
if (vecA.length !== vecB.length) {
throw new Error('Vectors must be the same length');
}
// Calculate the dot product
let dotProduct = 0;
let normA = 0;
let normB = 0;
for (let i = 0; i < vecA.length; i++) {
dotProduct += vecA[i] * vecB[i];
normA += vecA[i] ** 2;
normB += vecB[i] ** 2;
}
// Calculate the magnitudes
normA = Math.sqrt(normA);
normB = Math.sqrt(normB);
// Avoid division by zero
if (normA === 0 || normB === 0) {
return 0;
}
// Return the cosine similarity
return dotProduct / (normA * normB);
};
const calculateMaxSimilarity = (queryEmbedding, tagEmbeddings: Map<string, number[]>) => {
let maxSimilarity = 0;
for (const tagEmbedding of tagEmbeddings.values()) {
const similarity = cosineSimilarity(queryEmbedding, tagEmbedding);
maxSimilarity = Math.max(maxSimilarity, similarity);
}
return maxSimilarity;
};
//////////////////////
//
// Embedding functions
//
//////////////////////
const loadEmbeddingModel = async () => {
// Check if the tokenizer and model are already loaded and stored in the window object
if (!window.tokenizer) {
window.tokenizer = await AutoTokenizer.from_pretrained(EMBEDDING_MODEL);
}
if (!window.model) {
window.model = await AutoModel.from_pretrained(EMBEDDING_MODEL);
}
// Use the tokenizer and model from the window object
tokenizer = window.tokenizer;
model = window.model;
// Pre-compute embeddings for all unique tags
const allTags = new Set(feedbacks.flatMap((feedback) => feedback.data.tags || []));
await getTagEmbeddings(Array.from(allTags));
};
const getEmbeddings = async (text: string) => {
const tokens = await tokenizer(text);
const output = await model(tokens);
// Perform mean pooling on the last hidden states
const embeddings = output.last_hidden_state.mean(1);
return embeddings.ort_tensor.data;
};
const getTagEmbeddings = async (tags: string[]) => {
const embeddings = new Map();
for (const tag of tags) {
if (!tagEmbeddings.has(tag)) {
tagEmbeddings.set(tag, await getEmbeddings(tag));
}
embeddings.set(tag, tagEmbeddings.get(tag));
}
return embeddings;
};
const debouncedQueryHandler = async () => {
loadingLeaderboard = true;
if (query.trim() === '') {
rankHandler();
return;
}
clearTimeout(debounceTimer);
debounceTimer = setTimeout(async () => {
const queryEmbedding = await getEmbeddings(query);
const similarities = new Map<string, number>();
for (const feedback of feedbacks) {
const feedbackTags = feedback.data.tags || [];
const tagEmbeddings = await getTagEmbeddings(feedbackTags);
const maxSimilarity = calculateMaxSimilarity(queryEmbedding, tagEmbeddings);
similarities.set(feedback.id, maxSimilarity);
}
rankHandler(similarities);
}, 1500); // Debounce for 1.5 seconds
};
$: query, debouncedQueryHandler();
onMount(async () => {
rankHandler();
});
</script>
<div class="mt-0.5 mb-2 gap-1 flex flex-col md:flex-row justify-between">
<div class="flex md:self-center text-lg font-medium px-0.5 shrink-0 items-center">
<div class=" gap-1">
{$i18n.t('Leaderboard')}
</div>
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
<span class="text-lg font-medium text-gray-500 dark:text-gray-300 mr-1.5"
>{rankedModels.length}</span
>
</div>
<div class=" flex space-x-2">
<Tooltip content={$i18n.t('Re-rank models by topic similarity')}>
<div class="flex flex-1">
<div class=" self-center ml-1 mr-3">
<MagnifyingGlass className="size-3" />
</div>
<input
class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-none bg-transparent"
bind:value={query}
placeholder={$i18n.t('Search')}
on:focus={() => {
loadEmbeddingModel();
}}
/>
</div>
</Tooltip>
</div>
</div>
<div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full rounded pt-0.5">
{#if loadingLeaderboard}
<div class=" absolute top-0 bottom-0 left-0 right-0 flex">
<div class="m-auto">
<Spinner />
</div>
</div>
{/if}
{#if (rankedModels ?? []).length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 py-1">
{$i18n.t('No models found')}
</div>
{:else}
<table
class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full rounded {loadingLeaderboard
? 'opacity-20'
: ''}"
>
<thead
class="text-xs text-gray-700 uppercase bg-gray-50 dark:bg-gray-850 dark:text-gray-400 -translate-y-0.5"
>
<tr class="">
<th scope="col" class="px-3 py-1.5 cursor-pointer select-none w-3">
{$i18n.t('RK')}
</th>
<th scope="col" class="px-3 py-1.5 cursor-pointer select-none">
{$i18n.t('Model')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-fit">
{$i18n.t('Rating')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-5">
{$i18n.t('Won')}
</th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-5">
{$i18n.t('Lost')}
</th>
</tr>
</thead>
<tbody class="">
{#each rankedModels as model, modelIdx (model.id)}
<tr class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs group">
<td class="px-3 py-1.5 text-left font-medium text-gray-900 dark:text-white w-fit">
<div class=" line-clamp-1">
{model?.rating !== '-' ? modelIdx + 1 : '-'}
</div>
</td>
<td class="px-3 py-1.5 flex flex-col justify-center">
<div class="flex items-center gap-2">
<div class="flex-shrink-0">
<img
src={model?.info?.meta?.profile_image_url ?? '/favicon.png'}
alt={model.name}
class="size-5 rounded-full object-cover shrink-0"
/>
</div>
<div class="font-medium text-gray-800 dark:text-gray-200 pr-4">
{model.name}
</div>
</div>
</td>
<td class="px-3 py-1.5 text-right font-medium text-gray-900 dark:text-white w-max">
{model.rating}
</td>
<td class=" px-3 py-1.5 text-right font-semibold text-green-500">
<div class=" w-10">
{#if model.stats.won === '-'}
-
{:else}
<span class="hidden group-hover:inline"
>{((model.stats.won / model.stats.count) * 100).toFixed(1)}%</span
>
<span class=" group-hover:hidden">{model.stats.won}</span>
{/if}
</div>
</td>
<td class="px-3 py-1.5 text-right font-semibold text-red-500">
<div class=" w-10">
{#if model.stats.lost === '-'}
-
{:else}
<span class="hidden group-hover:inline"
>{((model.stats.lost / model.stats.count) * 100).toFixed(1)}%</span
>
<span class=" group-hover:hidden">{model.stats.lost}</span>
{/if}
</div>
</td>
</tr>
{/each}
</tbody>
</table>
{/if}
</div>
<div class=" text-gray-500 text-xs mt-1.5 w-full flex justify-end">
<div class=" text-right">
<div class="line-clamp-1">
{$i18n.t(
'The evaluation leaderboard is based on the Elo rating system and is updated in real-time.'
)}
</div>
{$i18n.t(
'The leaderboard is currently in beta, and we may adjust the rating calculations as we refine the algorithm.'
)}
</div>
</div>
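For reference, a small worked example of the Elo update performed by calculateEloChange in the Leaderboard component above, using the component's defaults of K = 32, a starting rating of 1000, and similarity = 1 (the value used when the search query is empty):

// Two models meet at the default rating of 1000.
const K = 32;
const ratingA = 1000;
const ratingB = 1000;

// Expected score of A against B: 1 / (1 + 10^((ratingB - ratingA) / 400)) = 0.5 for equal ratings.
const expectedA = 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));

// A wins (outcome = 1): A gains K * (1 - 0.5) = 16 points, and B, with outcome 0, loses 16.
const changeA = K * (1 - expectedA); // +16
const changeB = K * (0 - (1 - expectedA)); // -16 (B's expected score is 1 - expectedA)

console.log({ expectedA, changeA, changeB }); // { expectedA: 0.5, changeA: 16, changeB: -16 }

With unequal ratings the expected score shifts toward the higher-rated model, so upsets move more points; the similarity factor in the component simply scales the change when feedback is re-weighted by topic search.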


@ -5,7 +5,6 @@
import { WEBUI_NAME, config, functions, models } from '$lib/stores';
import { onMount, getContext, tick } from 'svelte';
import { createNewPrompt, deletePromptByCommand, getPrompts } from '$lib/apis/prompts';
import { goto } from '$app/navigation';
import {
@ -25,11 +24,14 @@
import FunctionMenu from './Functions/FunctionMenu.svelte';
import EllipsisHorizontal from '../icons/EllipsisHorizontal.svelte';
import Switch from '../common/Switch.svelte';
import ValvesModal from './common/ValvesModal.svelte';
import ManifestModal from './common/ManifestModal.svelte';
import ValvesModal from '../workspace/common/ValvesModal.svelte';
import ManifestModal from '../workspace/common/ManifestModal.svelte';
import Heart from '../icons/Heart.svelte';
import DeleteConfirmDialog from '$lib/components/common/ConfirmDialog.svelte';
import GarbageBin from '../icons/GarbageBin.svelte';
import Search from '../icons/Search.svelte';
import Plus from '../icons/Plus.svelte';
import ChevronRight from '../icons/ChevronRight.svelte';
const i18n = getContext('i18n');
@ -48,12 +50,14 @@
let showDeleteConfirm = false;
let filteredItems = [];
$: filteredItems = $functions.filter(
$: filteredItems = $functions
.filter(
(f) =>
query === '' ||
f.name.toLowerCase().includes(query.toLowerCase()) ||
f.id.toLowerCase().includes(query.toLowerCase())
);
)
.sort((a, b) => a.type.localeCompare(b.type) || a.name.localeCompare(b.name));
const shareHandler = async (func) => {
const item = await getFunctionById(localStorage.token, func.id).catch((error) => {
@ -94,7 +98,7 @@
id: `${_function.id}_clone`,
name: `${_function.name} (Clone)`
});
goto('/workspace/functions/create');
goto('/admin/functions/create');
}
};
@ -182,21 +186,19 @@
</title>
</svelte:head>
<div class=" flex w-full space-x-2 mb-2.5">
<div class="flex flex-col gap-1 mt-1.5 mb-2">
<div class="flex justify-between items-center">
<div class="flex md:self-center text-xl items-center font-medium px-0.5">
{$i18n.t('Functions')}
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
<span class="text-base font-lg text-gray-500 dark:text-gray-300">{filteredItems.length}</span>
</div>
</div>
<div class=" flex w-full space-x-2">
<div class="flex flex-1">
<div class=" self-center ml-1 mr-3">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
fill-rule="evenodd"
d="M9 3.5a5.5 5.5 0 100 11 5.5 5.5 0 000-11zM2 9a7 7 0 1112.452 4.391l3.328 3.329a.75.75 0 11-1.06 1.06l-3.329-3.328A7 7 0 012 9z"
clip-rule="evenodd"
/>
</svg>
<Search className="size-3.5" />
</div>
<input
class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-none bg-transparent"
@ -207,43 +209,23 @@
<div>
<a
class=" px-2 py-2 rounded-xl border border-gray-200 dark:border-gray-600 dark:border-0 hover:bg-gray-100 dark:bg-gray-800 dark:hover:bg-gray-700 transition font-medium text-sm flex items-center space-x-1"
href="/workspace/functions/create"
class=" px-2 py-2 rounded-xl hover:bg-gray-700/10 dark:hover:bg-gray-100/10 dark:text-gray-300 dark:hover:text-white transition font-medium text-sm flex items-center space-x-1"
href="/admin/functions/create"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M8.75 3.75a.75.75 0 0 0-1.5 0v3.5h-3.5a.75.75 0 0 0 0 1.5h3.5v3.5a.75.75 0 0 0 1.5 0v-3.5h3.5a.75.75 0 0 0 0-1.5h-3.5v-3.5Z"
/>
</svg>
<Plus className="size-3.5" />
</a>
</div>
</div>
<div class="mb-3.5">
<div class="flex justify-between items-center">
<div class="flex md:self-center text-base font-medium px-0.5">
{$i18n.t('Functions')}
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
<span class="text-base font-medium text-gray-500 dark:text-gray-300"
>{filteredItems.length}</span
>
</div>
</div>
</div>
<div class="my-3 mb-5">
<div class="mb-5">
{#each filteredItems as func}
<div
class=" flex space-x-4 cursor-pointer w-full px-3 py-2 dark:hover:bg-white/5 hover:bg-black/5 rounded-xl"
>
<a
class=" flex flex-1 space-x-3.5 cursor-pointer w-full"
href={`/workspace/functions/edit?id=${encodeURIComponent(func.id)}`}
href={`/admin/functions/edit?id=${encodeURIComponent(func.id)}`}
>
<div class="flex items-center text-left">
<div class=" flex-1 self-center pl-1">
@ -340,7 +322,7 @@
<FunctionMenu
{func}
editHandler={() => {
goto(`/workspace/functions/edit?id=${encodeURIComponent(func.id)}`);
goto(`/admin/functions/edit?id=${encodeURIComponent(func.id)}`);
}}
shareHandler={() => {
shareHandler(func);
@ -470,40 +452,27 @@
{#if $config?.features.enable_community_sharing}
<div class=" my-16">
<div class=" text-lg font-semibold mb-3 line-clamp-1">
<div class=" text-xl font-medium mb-1 line-clamp-1">
{$i18n.t('Made by OpenWebUI Community')}
</div>
<a
class=" flex space-x-4 cursor-pointer w-full mb-2 px-3 py-2"
class=" flex cursor-pointer items-center justify-between hover:bg-gray-50 dark:hover:bg-gray-850 w-full mb-2 px-3.5 py-1.5 rounded-xl transition"
href="https://openwebui.com/#open-webui-community"
target="_blank"
>
<div class=" self-center w-10 flex-shrink-0">
<div
class="w-full h-10 flex justify-center rounded-full bg-transparent dark:bg-gray-700 border border-dashed border-gray-200"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
fill="currentColor"
class="w-6"
>
<path
fill-rule="evenodd"
d="M12 3.75a.75.75 0 01.75.75v6.75h6.75a.75.75 0 010 1.5h-6.75v6.75a.75.75 0 01-1.5 0v-6.75H4.5a.75.75 0 010-1.5h6.75V4.5a.75.75 0 01.75-.75z"
clip-rule="evenodd"
/>
</svg>
</div>
</div>
<div class=" self-center">
<div class=" font-semibold line-clamp-1">{$i18n.t('Discover a function')}</div>
<div class=" text-sm line-clamp-1">
{$i18n.t('Discover, download, and explore custom functions')}
</div>
</div>
<div>
<div>
<ChevronRight />
</div>
</div>
</a>
</div>
{/if}


@ -305,7 +305,7 @@ class Pipe:
<button
class="w-full text-left text-sm py-1.5 px-1 rounded-lg dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-gray-850"
on:click={() => {
goto('/workspace/functions');
goto('/admin/functions');
}}
type="button"
>
@ -315,13 +315,15 @@ class Pipe:
</div>
<div class="flex-1">
<Tooltip content={$i18n.t('e.g. My Filter')} placement="top-start">
<input
class="w-full text-2xl font-medium bg-transparent outline-none font-primary"
type="text"
placeholder={$i18n.t('Function Name (e.g. My Filter)')}
placeholder={$i18n.t('Function Name')}
bind:value={name}
required
/>
</Tooltip>
</div>
<div>
@ -329,31 +331,37 @@ class Pipe:
</div>
</div>
<div class=" flex gap-2 px-1">
<div class=" flex gap-2 px-1 items-center">
{#if edit}
<div class="text-sm text-gray-500 flex-shrink-0">
{id}
</div>
{:else}
<Tooltip className="w-full" content={$i18n.t('e.g. my_filter')} placement="top-start">
<input
class="w-full text-sm disabled:text-gray-500 bg-transparent outline-none"
type="text"
placeholder={$i18n.t('Function ID (e.g. my_filter)')}
placeholder={$i18n.t('Function ID')}
bind:value={id}
required
disabled={edit}
/>
</Tooltip>
{/if}
<Tooltip
className="w-full self-center items-center flex"
content={$i18n.t('e.g. A filter to remove profanity from text')}
placement="top-start"
>
<input
class="w-full text-sm bg-transparent outline-none"
type="text"
placeholder={$i18n.t(
'Function Description (e.g. A filter to remove profanity from text)'
)}
placeholder={$i18n.t('Function Description')}
bind:value={meta.description}
required
/>
</Tooltip>
</div>
</div>


@ -2,11 +2,11 @@
import { getContext, tick, onMount } from 'svelte';
import { toast } from 'svelte-sonner';
import { config } from '$lib/stores';
import { getBackendConfig } from '$lib/apis';
import Database from './Settings/Database.svelte';
import General from './Settings/General.svelte';
import Users from './Settings/Users.svelte';
import Pipelines from './Settings/Pipelines.svelte';
import Audio from './Settings/Audio.svelte';
import Images from './Settings/Images.svelte';
@ -15,8 +15,7 @@
import Connections from './Settings/Connections.svelte';
import Documents from './Settings/Documents.svelte';
import WebSearch from './Settings/WebSearch.svelte';
import { config } from '$lib/stores';
import { getBackendConfig } from '$lib/apis';
import ChartBar from '../icons/ChartBar.svelte';
import DocumentChartBar from '../icons/DocumentChartBar.svelte';
import Evaluations from './Settings/Evaluations.svelte';
@ -39,16 +38,16 @@
});
</script>
<div class="flex flex-col lg:flex-row w-full h-full py-2 lg:space-x-4">
<div class="flex flex-col lg:flex-row w-full h-full pb-2 lg:space-x-4">
<div
id="admin-settings-tabs-container"
class="tabs flex flex-row overflow-x-auto space-x-1 max-w-full lg:space-x-0 lg:space-y-1 lg:flex-col lg:flex-none lg:w-44 dark:text-gray-200 text-xs text-left scrollbar-none"
class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 lg:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 lg:flex-none flex text-right transition {selectedTab ===
'general'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'general';
}}
@ -71,34 +70,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'users'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
on:click={() => {
selectedTab = 'users';
}}
>
<div class=" self-center mr-2">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M8 8a2.5 2.5 0 1 0 0-5 2.5 2.5 0 0 0 0 5ZM3.156 11.763c.16-.629.44-1.21.813-1.72a2.5 2.5 0 0 0-2.725 1.377c-.136.287.102.58.418.58h1.449c.01-.077.025-.156.045-.237ZM12.847 11.763c.02.08.036.16.046.237h1.446c.316 0 .554-.293.417-.579a2.5 2.5 0 0 0-2.722-1.378c.374.51.653 1.09.813 1.72ZM14 7.5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0ZM3.5 9a1.5 1.5 0 1 0 0-3 1.5 1.5 0 0 0 0 3ZM5 13c-.552 0-1.013-.455-.876-.99a4.002 4.002 0 0 1 7.753 0c.136.535-.324.99-.877.99H5Z"
/>
</svg>
</div>
<div class=" self-center">{$i18n.t('Users')}</div>
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'connections'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'connections';
}}
@ -119,10 +94,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'models'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'models';
}}
@ -145,10 +120,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'evaluations'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'evaluations';
}}
@ -160,10 +135,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'documents'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'documents';
}}
@ -190,10 +165,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'web'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'web';
}}
@ -214,10 +189,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'interface'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'interface';
}}
@ -240,10 +215,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'audio'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'audio';
}}
@ -267,10 +242,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'images'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'images';
}}
@ -293,10 +268,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'pipelines'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'pipelines';
}}
@ -323,10 +298,10 @@
</button>
<button
class="px-2.5 py-2 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
class="px-0.5 py-1 min-w-fit rounded-lg flex-1 md:flex-none flex text-right transition {selectedTab ===
'db'
? 'bg-gray-100 dark:bg-gray-800'
: ' hover:bg-gray-50 dark:hover:bg-gray-850'}"
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'db';
}}
@ -351,7 +326,7 @@
</button>
</div>
<div class="flex-1 mt-3 lg:mt-0 overflow-y-scroll">
<div class="flex-1 mt-3 lg:mt-0 overflow-y-scroll pr-1 scrollbar-hidden">
{#if selectedTab === 'general'}
<General
saveHandler={async () => {
@ -361,12 +336,6 @@
await config.set(await getBackendConfig());
}}
/>
{:else if selectedTab === 'users'}
<Users
saveHandler={() => {
toast.success($i18n.t('Settings saved successfully!'));
}}
/>
{:else if selectedTab === 'connections'}
<Connections
on:save={() => {

View file

@ -181,7 +181,7 @@
<div>
<div class="mt-1 flex gap-2 mb-1">
<input
class="flex-1 w-full rounded-lg py-2 pl-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
class="flex-1 w-full bg-transparent outline-none"
placeholder={$i18n.t('API Base URL')}
bind:value={STT_OPENAI_API_BASE_URL}
required
@ -322,6 +322,7 @@
}}
>
<option value="">{$i18n.t('Web API')}</option>
<option value="transformers">{$i18n.t('Transformers')} ({$i18n.t('Local')})</option>
<option value="openai">{$i18n.t('OpenAI')}</option>
<option value="elevenlabs">{$i18n.t('ElevenLabs')}</option>
<option value="azure">{$i18n.t('Azure AI Speech')}</option>
@ -333,7 +334,7 @@
<div>
<div class="mt-1 flex gap-2 mb-1">
<input
class="flex-1 w-full rounded-lg py-2 pl-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
class="flex-1 w-full bg-transparent outline-none"
placeholder={$i18n.t('API Base URL')}
bind:value={TTS_OPENAI_API_BASE_URL}
required
@ -396,6 +397,47 @@
</div>
</div>
</div>
{:else if TTS_ENGINE === 'transformers'}
<div>
<div class=" mb-1.5 text-sm font-medium">{$i18n.t('TTS Model')}</div>
<div class="flex w-full">
<div class="flex-1">
<input
list="model-list"
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
bind:value={TTS_MODEL}
placeholder="CMU ARCTIC speaker embedding name"
/>
<datalist id="model-list">
<option value="tts-1" />
</datalist>
</div>
</div>
<div class="mt-2 mb-1 text-xs text-gray-400 dark:text-gray-500">
{$i18n.t(`Open WebUI uses SpeechT5 and CMU Arctic speaker embeddings.`)}
To learn more about SpeechT5,
<a
class=" hover:underline dark:text-gray-200 text-gray-800"
href="https://github.com/microsoft/SpeechT5"
target="_blank"
>
{$i18n.t(`click here`, {
name: 'SpeechT5'
})}.
</a>
To see the available CMU Arctic speaker embeddings,
<a
class=" hover:underline dark:text-gray-200 text-gray-800"
href="https://huggingface.co/datasets/Matthijs/cmu-arctic-xvectors"
target="_blank"
>
{$i18n.t(`click here`)}.
</a>
</div>
</div>
{:else if TTS_ENGINE === 'openai'}
<div class=" flex gap-2">
<div class="w-full">

View file

@ -1,31 +1,23 @@
<script lang="ts">
import { models, user } from '$lib/stores';
import { toast } from 'svelte-sonner';
import { createEventDispatcher, onMount, getContext, tick } from 'svelte';
const dispatch = createEventDispatcher();
import {
getOllamaConfig,
getOllamaUrls,
getOllamaVersion,
updateOllamaConfig,
updateOllamaUrls
} from '$lib/apis/ollama';
import {
getOpenAIConfig,
getOpenAIKeys,
getOpenAIModels,
getOpenAIUrls,
updateOpenAIConfig,
updateOpenAIKeys,
updateOpenAIUrls
} from '$lib/apis/openai';
import { toast } from 'svelte-sonner';
import { getOllamaConfig, updateOllamaConfig } from '$lib/apis/ollama';
import { getOpenAIConfig, updateOpenAIConfig, getOpenAIModels } from '$lib/apis/openai';
import { getModels as _getModels } from '$lib/apis';
import { models, user } from '$lib/stores';
import Switch from '$lib/components/common/Switch.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import { getModels as _getModels } from '$lib/apis';
import SensitiveInput from '$lib/components/common/SensitiveInput.svelte';
import Plus from '$lib/components/icons/Plus.svelte';
import OpenAIConnection from './Connections/OpenAIConnection.svelte';
import AddConnectionModal from './Connections/AddConnectionModal.svelte';
import OllamaConnection from './Connections/OllamaConnection.svelte';
const i18n = getContext('i18n');
@ -36,57 +28,24 @@
// External
let OLLAMA_BASE_URLS = [''];
let OLLAMA_API_CONFIGS = {};
let OPENAI_API_KEYS = [''];
let OPENAI_API_BASE_URLS = [''];
let OPENAI_API_CONFIGS = {};
let ENABLE_OPENAI_API: null | boolean = null;
let ENABLE_OLLAMA_API: null | boolean = null;
let pipelineUrls = {};
let ENABLE_OPENAI_API = null;
let ENABLE_OLLAMA_API = null;
const verifyOpenAIHandler = async (idx) => {
OPENAI_API_BASE_URLS = OPENAI_API_BASE_URLS.map((url) => url.replace(/\/$/, ''));
OPENAI_API_BASE_URLS = await updateOpenAIUrls(localStorage.token, OPENAI_API_BASE_URLS);
OPENAI_API_KEYS = await updateOpenAIKeys(localStorage.token, OPENAI_API_KEYS);
const res = await getOpenAIModels(localStorage.token, idx).catch((error) => {
toast.error(error);
return null;
});
if (res) {
toast.success($i18n.t('Server connection verified'));
if (res.pipelines) {
pipelineUrls[OPENAI_API_BASE_URLS[idx]] = true;
}
}
await models.set(await getModels());
};
const verifyOllamaHandler = async (idx) => {
OLLAMA_BASE_URLS = OLLAMA_BASE_URLS.filter((url) => url !== '').map((url) =>
url.replace(/\/$/, '')
);
OLLAMA_BASE_URLS = await updateOllamaUrls(localStorage.token, OLLAMA_BASE_URLS);
const res = await getOllamaVersion(localStorage.token, idx).catch((error) => {
toast.error(error);
return null;
});
if (res) {
toast.success($i18n.t('Server connection verified'));
}
await models.set(await getModels());
};
let showAddOpenAIConnectionModal = false;
let showAddOllamaConnectionModal = false;
const updateOpenAIHandler = async () => {
OPENAI_API_BASE_URLS = OPENAI_API_BASE_URLS.map((url) => url.replace(/\/$/, ''));
if (ENABLE_OPENAI_API !== null) {
OPENAI_API_BASE_URLS = OPENAI_API_BASE_URLS.filter(
(url, urlIdx) => OPENAI_API_BASE_URLS.indexOf(url) === urlIdx && url !== ''
).map((url) => url.replace(/\/$/, ''));
// Check if API KEYS length is the same as API URLS length
if (OPENAI_API_KEYS.length !== OPENAI_API_BASE_URLS.length) {
@ -104,319 +63,258 @@
}
}
OPENAI_API_BASE_URLS = await updateOpenAIUrls(localStorage.token, OPENAI_API_BASE_URLS);
OPENAI_API_KEYS = await updateOpenAIKeys(localStorage.token, OPENAI_API_KEYS);
const res = await updateOpenAIConfig(localStorage.token, {
ENABLE_OPENAI_API: ENABLE_OPENAI_API,
OPENAI_API_BASE_URLS: OPENAI_API_BASE_URLS,
OPENAI_API_KEYS: OPENAI_API_KEYS,
OPENAI_API_CONFIGS: OPENAI_API_CONFIGS
}).catch((error) => {
toast.error(error);
});
if (res) {
toast.success($i18n.t('OpenAI API settings updated'));
await models.set(await getModels());
}
}
};
const updateOllamaUrlsHandler = async () => {
OLLAMA_BASE_URLS = OLLAMA_BASE_URLS.filter((url) => url !== '').map((url) =>
url.replace(/\/$/, '')
);
const updateOllamaHandler = async () => {
if (ENABLE_OLLAMA_API !== null) {
// Remove duplicate URLs
OLLAMA_BASE_URLS = OLLAMA_BASE_URLS.filter(
(url, urlIdx) => OLLAMA_BASE_URLS.indexOf(url) === urlIdx && url !== ''
).map((url) => url.replace(/\/$/, ''));
console.log(OLLAMA_BASE_URLS);
if (OLLAMA_BASE_URLS.length === 0) {
ENABLE_OLLAMA_API = false;
await updateOllamaConfig(localStorage.token, ENABLE_OLLAMA_API);
toast.info($i18n.t('Ollama API disabled'));
} else {
OLLAMA_BASE_URLS = await updateOllamaUrls(localStorage.token, OLLAMA_BASE_URLS);
}
const ollamaVersion = await getOllamaVersion(localStorage.token).catch((error) => {
const res = await updateOllamaConfig(localStorage.token, {
ENABLE_OLLAMA_API: ENABLE_OLLAMA_API,
OLLAMA_BASE_URLS: OLLAMA_BASE_URLS,
OLLAMA_API_CONFIGS: OLLAMA_API_CONFIGS
}).catch((error) => {
toast.error(error);
return null;
});
if (ollamaVersion) {
toast.success($i18n.t('Server connection verified'));
if (res) {
toast.success($i18n.t('Ollama API settings updated'));
await models.set(await getModels());
}
}
};
const addOpenAIConnectionHandler = async (connection) => {
OPENAI_API_BASE_URLS = [...OPENAI_API_BASE_URLS, connection.url];
OPENAI_API_KEYS = [...OPENAI_API_KEYS, connection.key];
OPENAI_API_CONFIGS[connection.url] = connection.config;
await updateOpenAIHandler();
};
const addOllamaConnectionHandler = async (connection) => {
OLLAMA_BASE_URLS = [...OLLAMA_BASE_URLS, connection.url];
OLLAMA_API_CONFIGS[connection.url] = connection.config;
await updateOllamaHandler();
};
onMount(async () => {
if ($user.role === 'admin') {
let ollamaConfig = {};
let openaiConfig = {};
await Promise.all([
(async () => {
OLLAMA_BASE_URLS = await getOllamaUrls(localStorage.token);
ollamaConfig = await getOllamaConfig(localStorage.token);
})(),
(async () => {
OPENAI_API_BASE_URLS = await getOpenAIUrls(localStorage.token);
})(),
(async () => {
OPENAI_API_KEYS = await getOpenAIKeys(localStorage.token);
openaiConfig = await getOpenAIConfig(localStorage.token);
})()
]);
const ollamaConfig = await getOllamaConfig(localStorage.token);
const openaiConfig = await getOpenAIConfig(localStorage.token);
ENABLE_OPENAI_API = openaiConfig.ENABLE_OPENAI_API;
ENABLE_OLLAMA_API = ollamaConfig.ENABLE_OLLAMA_API;
OPENAI_API_BASE_URLS = openaiConfig.OPENAI_API_BASE_URLS;
OPENAI_API_KEYS = openaiConfig.OPENAI_API_KEYS;
OPENAI_API_CONFIGS = openaiConfig.OPENAI_API_CONFIGS;
OLLAMA_BASE_URLS = ollamaConfig.OLLAMA_BASE_URLS;
OLLAMA_API_CONFIGS = ollamaConfig.OLLAMA_API_CONFIGS;
if (ENABLE_OPENAI_API) {
for (const url of OPENAI_API_BASE_URLS) {
if (!OPENAI_API_CONFIGS[url]) {
OPENAI_API_CONFIGS[url] = {};
}
}
OPENAI_API_BASE_URLS.forEach(async (url, idx) => {
OPENAI_API_CONFIGS[url] = OPENAI_API_CONFIGS[url] || {};
if (!(OPENAI_API_CONFIGS[url]?.enable ?? true)) {
return;
}
const res = await getOpenAIModels(localStorage.token, idx);
if (res.pipelines) {
pipelineUrls[url] = true;
}
});
}
if (ENABLE_OLLAMA_API) {
for (const url of OLLAMA_BASE_URLS) {
if (!OLLAMA_API_CONFIGS[url]) {
OLLAMA_API_CONFIGS[url] = {};
}
}
}
}
});
</script>
<AddConnectionModal
bind:show={showAddOpenAIConnectionModal}
onSubmit={addOpenAIConnectionHandler}
/>
<AddConnectionModal
ollama
bind:show={showAddOllamaConnectionModal}
onSubmit={addOllamaConnectionHandler}
/>
<form
class="flex flex-col h-full justify-between text-sm"
on:submit|preventDefault={() => {
updateOpenAIHandler();
updateOllamaUrlsHandler();
updateOllamaHandler();
dispatch('save');
}}
>
<div class="space-y-3 overflow-y-scroll scrollbar-hidden h-full">
<div class=" overflow-y-scroll scrollbar-hidden h-full">
{#if ENABLE_OPENAI_API !== null && ENABLE_OLLAMA_API !== null}
<div class=" space-y-3">
<div class="my-2">
<div class="mt-2 space-y-2 pr-1.5">
<div class="flex justify-between items-center text-sm">
<div class=" font-medium">{$i18n.t('OpenAI API')}</div>
<div class="mt-1">
<div class="flex items-center">
<div class="">
<Switch
bind:state={ENABLE_OPENAI_API}
on:change={async () => {
updateOpenAIConfig(localStorage.token, ENABLE_OPENAI_API);
updateOpenAIHandler();
}}
/>
</div>
</div>
</div>
{#if ENABLE_OPENAI_API}
<div class="flex flex-col gap-1">
{#each OPENAI_API_BASE_URLS as url, idx}
<div class="flex w-full gap-2">
<div class="flex-1 relative">
<input
class="w-full rounded-lg py-2 px-4 {pipelineUrls[url]
? 'pr-8'
: ''} text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
placeholder={$i18n.t('API Base URL')}
bind:value={url}
autocomplete="off"
/>
<hr class=" border-gray-50 dark:border-gray-850" />
{#if pipelineUrls[url]}
<div class=" absolute top-2.5 right-2.5">
<Tooltip content="Pipelines">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
fill="currentColor"
class="size-4"
>
<path
d="M11.644 1.59a.75.75 0 0 1 .712 0l9.75 5.25a.75.75 0 0 1 0 1.32l-9.75 5.25a.75.75 0 0 1-.712 0l-9.75-5.25a.75.75 0 0 1 0-1.32l9.75-5.25Z"
/>
<path
d="m3.265 10.602 7.668 4.129a2.25 2.25 0 0 0 2.134 0l7.668-4.13 1.37.739a.75.75 0 0 1 0 1.32l-9.75 5.25a.75.75 0 0 1-.71 0l-9.75-5.25a.75.75 0 0 1 0-1.32l1.37-.738Z"
/>
<path
d="m10.933 19.231-7.668-4.13-1.37.739a.75.75 0 0 0 0 1.32l9.75 5.25c.221.12.489.12.71 0l9.75-5.25a.75.75 0 0 0 0-1.32l-1.37-.738-7.668 4.13a2.25 2.25 0 0 1-2.134-.001Z"
/>
</svg>
</Tooltip>
</div>
{/if}
</div>
<div class="">
<div class="flex justify-between items-center">
<div class="font-medium">{$i18n.t('Manage OpenAI API Connections')}</div>
<SensitiveInput
placeholder={$i18n.t('API Key')}
bind:value={OPENAI_API_KEYS[idx]}
/>
<div class="self-center flex items-center">
{#if idx === 0}
<Tooltip content={$i18n.t(`Add Connection`)}>
<button
class="px-1"
on:click={() => {
OPENAI_API_BASE_URLS = [...OPENAI_API_BASE_URLS, ''];
OPENAI_API_KEYS = [...OPENAI_API_KEYS, ''];
showAddOpenAIConnectionModal = true;
}}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M8.75 3.75a.75.75 0 0 0-1.5 0v3.5h-3.5a.75.75 0 0 0 0 1.5h3.5v3.5a.75.75 0 0 0 1.5 0v-3.5h3.5a.75.75 0 0 0 0-1.5h-3.5v-3.5Z"
/>
</svg>
<Plus />
</button>
{:else}
<button
class="px-1"
on:click={() => {
</Tooltip>
</div>
<div class="flex flex-col gap-1.5 mt-1.5">
{#each OPENAI_API_BASE_URLS as url, idx}
<OpenAIConnection
pipeline={pipelineUrls[url] ? true : false}
bind:url
bind:key={OPENAI_API_KEYS[idx]}
bind:config={OPENAI_API_CONFIGS[url]}
onSubmit={() => {
updateOpenAIHandler();
}}
onDelete={() => {
OPENAI_API_BASE_URLS = OPENAI_API_BASE_URLS.filter(
(url, urlIdx) => idx !== urlIdx
);
OPENAI_API_KEYS = OPENAI_API_KEYS.filter((key, keyIdx) => idx !== keyIdx);
}}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path d="M3.75 7.25a.75.75 0 0 0 0 1.5h8.5a.75.75 0 0 0 0-1.5h-8.5Z" />
</svg>
</button>
{/if}
</div>
<div class="flex">
<Tooltip content="Verify connection" className="self-start mt-0.5">
<button
class="self-center p-2 bg-gray-200 hover:bg-gray-300 dark:bg-gray-900 dark:hover:bg-gray-850 rounded-lg transition"
on:click={() => {
verifyOpenAIHandler(idx);
}}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
fill-rule="evenodd"
d="M15.312 11.424a5.5 5.5 0 01-9.201 2.466l-.312-.311h2.433a.75.75 0 000-1.5H3.989a.75.75 0 00-.75.75v4.242a.75.75 0 001.5 0v-2.43l.31.31a7 7 0 0011.712-3.138.75.75 0 00-1.449-.39zm1.23-3.723a.75.75 0 00.219-.53V2.929a.75.75 0 00-1.5 0V5.36l-.31-.31A7 7 0 003.239 8.188a.75.75 0 101.448.389A5.5 5.5 0 0113.89 6.11l.311.31h-2.432a.75.75 0 000 1.5h4.243a.75.75 0 00.53-.219z"
clip-rule="evenodd"
/>
</svg>
</button>
</Tooltip>
</div>
</div>
<div class=" mb-1 text-xs text-gray-400 dark:text-gray-500">
{$i18n.t('WebUI will make requests to')}
<span class=" text-gray-200">'{url}/models'</span>
</div>
{/each}
</div>
</div>
{/if}
</div>
</div>
<hr class=" dark:border-gray-850" />
<hr class=" border-gray-50 dark:border-gray-850" />
<div class="pr-1.5 space-y-2">
<div class="flex justify-between items-center text-sm">
<div class="pr-1.5 my-2">
<div class="flex justify-between items-center text-sm mb-2">
<div class=" font-medium">{$i18n.t('Ollama API')}</div>
<div class="mt-1">
<Switch
bind:state={ENABLE_OLLAMA_API}
on:change={async () => {
updateOllamaConfig(localStorage.token, ENABLE_OLLAMA_API);
if (OLLAMA_BASE_URLS.length === 0) {
OLLAMA_BASE_URLS = [''];
}
updateOllamaHandler();
}}
/>
</div>
</div>
{#if ENABLE_OLLAMA_API}
<div class="flex w-full gap-1.5">
<div class="flex-1 flex flex-col gap-2">
{#each OLLAMA_BASE_URLS as url, idx}
<div class="flex gap-1.5">
<input
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
placeholder={$i18n.t('Enter URL (e.g. http://localhost:11434)')}
bind:value={url}
/>
<hr class=" border-gray-50 dark:border-gray-850 my-2" />
<div class="self-center flex items-center">
{#if idx === 0}
<div class="">
<div class="flex justify-between items-center">
<div class="font-medium">{$i18n.t('Manage Ollama API Connections')}</div>
<Tooltip content={$i18n.t(`Add Connection`)}>
<button
class="px-1"
on:click={() => {
OLLAMA_BASE_URLS = [...OLLAMA_BASE_URLS, ''];
showAddOllamaConnectionModal = true;
}}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M8.75 3.75a.75.75 0 0 0-1.5 0v3.5h-3.5a.75.75 0 0 0 0 1.5h3.5v3.5a.75.75 0 0 0 1.5 0v-3.5h3.5a.75.75 0 0 0 0-1.5h-3.5v-3.5Z"
/>
</svg>
</button>
{:else}
<button
class="px-1"
on:click={() => {
OLLAMA_BASE_URLS = OLLAMA_BASE_URLS.filter(
(url, urlIdx) => idx !== urlIdx
);
}}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path d="M3.75 7.25a.75.75 0 0 0 0 1.5h8.5a.75.75 0 0 0 0-1.5h-8.5Z" />
</svg>
</button>
{/if}
</div>
<div class="flex">
<Tooltip content="Verify connection" className="self-start mt-0.5">
<button
class="self-center p-2 bg-gray-200 hover:bg-gray-300 dark:bg-gray-900 dark:hover:bg-gray-850 rounded-lg transition"
on:click={() => {
verifyOllamaHandler(idx);
}}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
fill-rule="evenodd"
d="M15.312 11.424a5.5 5.5 0 01-9.201 2.466l-.312-.311h2.433a.75.75 0 000-1.5H3.989a.75.75 0 00-.75.75v4.242a.75.75 0 001.5 0v-2.43l.31.31a7 7 0 0011.712-3.138.75.75 0 00-1.449-.39zm1.23-3.723a.75.75 0 00.219-.53V2.929a.75.75 0 00-1.5 0V5.36l-.31-.31A7 7 0 003.239 8.188a.75.75 0 101.448.389A5.5 5.5 0 0113.89 6.11l.311.31h-2.432a.75.75 0 000 1.5h4.243a.75.75 0 00.53-.219z"
clip-rule="evenodd"
/>
</svg>
<Plus />
</button>
</Tooltip>
</div>
</div>
<div class="flex w-full gap-1.5">
<div class="flex-1 flex flex-col gap-1.5 mt-1.5">
{#each OLLAMA_BASE_URLS as url, idx}
<OllamaConnection
bind:url
bind:config={OLLAMA_API_CONFIGS[url]}
{idx}
onSubmit={() => {
updateOllamaHandler();
}}
onDelete={() => {
OLLAMA_BASE_URLS = OLLAMA_BASE_URLS.filter((url, urlIdx) => idx !== urlIdx);
}}
/>
{/each}
</div>
</div>
<div class="mt-2 text-xs text-gray-400 dark:text-gray-500">
<div class="mt-1 text-xs text-gray-400 dark:text-gray-500">
{$i18n.t('Trouble accessing Ollama?')}
<a
class=" text-gray-300 font-medium underline"
@ -426,6 +324,7 @@
{$i18n.t('Click here for help.')}
</a>
</div>
</div>
{/if}
</div>
{:else}

View file

@ -0,0 +1,365 @@
<script lang="ts">
import { toast } from 'svelte-sonner';
import { getContext, onMount } from 'svelte';
const i18n = getContext('i18n');
import { models } from '$lib/stores';
import { verifyOpenAIConnection } from '$lib/apis/openai';
import { verifyOllamaConnection } from '$lib/apis/ollama';
import Modal from '$lib/components/common/Modal.svelte';
import Plus from '$lib/components/icons/Plus.svelte';
import Minus from '$lib/components/icons/Minus.svelte';
import PencilSolid from '$lib/components/icons/PencilSolid.svelte';
import SensitiveInput from '$lib/components/common/SensitiveInput.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Switch from '$lib/components/common/Switch.svelte';
export let onSubmit: Function = () => {};
export let onDelete: Function = () => {};
export let show = false;
export let edit = false;
export let ollama = false;
export let connection = null;
let url = '';
let key = '';
let prefixId = '';
let enable = true;
let modelId = '';
let modelIds = [];
let loading = false;
const verifyOllamaHandler = async () => {
const res = await verifyOllamaConnection(localStorage.token, url, key).catch((error) => {
toast.error(error);
});
if (res) {
toast.success($i18n.t('Server connection verified'));
}
};
const verifyOpenAIHandler = async () => {
const res = await verifyOpenAIConnection(localStorage.token, url, key).catch((error) => {
toast.error(error);
});
if (res) {
toast.success($i18n.t('Server connection verified'));
}
};
const verifyHandler = () => {
if (ollama) {
verifyOllamaHandler();
} else {
verifyOpenAIHandler();
}
};
const addModelHandler = () => {
if (modelId) {
modelIds = [...modelIds, modelId];
modelId = '';
}
};
const submitHandler = async () => {
loading = true;
if (!ollama && (!url || !key)) {
loading = false;
toast.error('URL and Key are required');
return;
}
const connection = {
url,
key,
config: {
enable: enable,
prefix_id: prefixId,
model_ids: modelIds
}
};
await onSubmit(connection);
loading = false;
show = false;
url = '';
key = '';
prefixId = '';
modelIds = [];
};
const init = () => {
if (connection) {
url = connection.url;
key = connection.key;
enable = connection.config?.enable ?? true;
prefixId = connection.config?.prefix_id ?? '';
modelIds = connection.config?.model_ids ?? [];
}
};
$: if (show) {
init();
}
onMount(() => {
init();
});
</script>
<Modal size="sm" bind:show>
<div>
<div class=" flex justify-between dark:text-gray-100 px-5 pt-4 pb-2">
<div class=" text-lg font-medium self-center font-primary">
{#if edit}
{$i18n.t('Edit Connection')}
{:else}
{$i18n.t('Add Connection')}
{/if}
</div>
<button
class="self-center"
on:click={() => {
show = false;
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-5 h-5"
>
<path
d="M6.28 5.22a.75.75 0 00-1.06 1.06L8.94 10l-3.72 3.72a.75.75 0 101.06 1.06L10 11.06l3.72 3.72a.75.75 0 101.06-1.06L11.06 10l3.72-3.72a.75.75 0 00-1.06-1.06L10 8.94 6.28 5.22z"
/>
</svg>
</button>
</div>
<div class="flex flex-col md:flex-row w-full px-4 pb-4 md:space-x-4 dark:text-gray-200">
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<form
class="flex flex-col w-full"
on:submit={(e) => {
e.preventDefault();
submitHandler();
}}
>
<div class="px-1">
<div class="flex gap-2">
<div class="flex flex-col w-full">
<div class=" mb-0.5 text-xs text-gray-500">{$i18n.t('URL')}</div>
<div class="flex-1">
<input
class="w-full text-sm bg-transparent placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none"
type="text"
bind:value={url}
placeholder={$i18n.t('API Base URL')}
autocomplete="off"
required
/>
</div>
</div>
<Tooltip content="Verify Connection" className="self-end -mb-1">
<button
class="self-center p-1 bg-transparent hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 rounded-lg transition"
on:click={() => {
verifyHandler();
}}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
fill-rule="evenodd"
d="M15.312 11.424a5.5 5.5 0 01-9.201 2.466l-.312-.311h2.433a.75.75 0 000-1.5H3.989a.75.75 0 00-.75.75v4.242a.75.75 0 001.5 0v-2.43l.31.31a7 7 0 0011.712-3.138.75.75 0 00-1.449-.39zm1.23-3.723a.75.75 0 00.219-.53V2.929a.75.75 0 00-1.5 0V5.36l-.31-.31A7 7 0 003.239 8.188a.75.75 0 101.448.389A5.5 5.5 0 0113.89 6.11l.311.31h-2.432a.75.75 0 000 1.5h4.243a.75.75 0 00.53-.219z"
clip-rule="evenodd"
/>
</svg>
</button>
</Tooltip>
<div class="flex flex-col flex-shrink-0 self-end">
<Tooltip content={enable ? $i18n.t('Enabled') : $i18n.t('Disabled')}>
<Switch bind:state={enable} />
</Tooltip>
</div>
</div>
<div class="flex gap-2 mt-2">
<div class="flex flex-col w-full">
<div class=" mb-0.5 text-xs text-gray-500">{$i18n.t('Key')}</div>
<div class="flex-1">
<SensitiveInput
className="w-full text-sm bg-transparent placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none"
bind:value={key}
placeholder={$i18n.t('API Key')}
required={!ollama}
/>
</div>
</div>
<div class="flex flex-col w-full">
<div class=" mb-1 text-xs text-gray-500">{$i18n.t('Prefix ID')}</div>
<div class="flex-1">
<Tooltip
content={$i18n.t(
'Prefix ID is used to avoid conflicts with other connections by adding a prefix to the model IDs - leave empty to disable'
)}
>
<input
class="w-full text-sm bg-transparent placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none"
type="text"
bind:value={prefixId}
placeholder={$i18n.t('Prefix ID')}
autocomplete="off"
/>
</Tooltip>
</div>
</div>
</div>
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="flex flex-col w-full">
<div class="mb-1 flex justify-between">
<div class="text-xs text-gray-500">{$i18n.t('Model IDs')}</div>
</div>
{#if modelIds.length > 0}
<div class="flex flex-col">
{#each modelIds as modelId, modelIdx}
<div class=" flex gap-2 w-full justify-between items-center">
<div class=" text-sm flex-1 py-1 rounded-lg">
{modelId}
</div>
<div class="flex-shrink-0">
<button
type="button"
on:click={() => {
modelIds = modelIds.filter((_, idx) => idx !== modelIdx);
}}
>
<Minus strokeWidth="2" className="size-3.5" />
</button>
</div>
</div>
{/each}
</div>
{:else}
<div class="text-gray-500 text-xs text-center py-2 px-10">
{#if ollama}
{$i18n.t('Leave empty to include all models from "{{URL}}/api/tags" endpoint', {
URL: url
})}
{:else}
{$i18n.t('Leave empty to include all models from "{{URL}}/models" endpoint', {
URL: url
})}
{/if}
</div>
{/if}
</div>
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="flex items-center">
<input
class="w-full py-1 text-sm rounded-lg bg-transparent {modelId
? ''
: 'text-gray-500'} placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none"
bind:value={modelId}
placeholder={$i18n.t('Add a model ID')}
/>
<div>
<button
type="button"
on:click={() => {
addModelHandler();
}}
>
<Plus className="size-3.5" strokeWidth="2" />
</button>
</div>
</div>
</div>
<div class="flex justify-end pt-3 text-sm font-medium gap-1.5">
{#if edit}
<button
class="px-3.5 py-1.5 text-sm font-medium dark:bg-black dark:hover:bg-gray-900 dark:text-white bg-white text-black hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center"
type="button"
on:click={() => {
onDelete();
show = false;
}}
>
{$i18n.t('Delete')}
</button>
{/if}
<button
class="px-3.5 py-1.5 text-sm font-medium bg-black hover:bg-gray-900 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center {loading
? ' cursor-not-allowed'
: ''}"
type="submit"
disabled={loading}
>
{$i18n.t('Save')}
{#if loading}
<div class="ml-2 self-center">
<svg
class=" w-4 h-4"
viewBox="0 0 24 24"
fill="currentColor"
xmlns="http://www.w3.org/2000/svg"
><style>
.spinner_ajPY {
transform-origin: center;
animation: spinner_AtaB 0.75s infinite linear;
}
@keyframes spinner_AtaB {
100% {
transform: rotate(360deg);
}
}
</style><path
d="M12,1A11,11,0,1,0,23,12,11,11,0,0,0,12,1Zm0,19a8,8,0,1,1,8-8A8,8,0,0,1,12,20Z"
opacity=".25"
/><path
d="M10.14,1.16a11,11,0,0,0-9,8.92A1.59,1.59,0,0,0,2.46,12,1.52,1.52,0,0,0,4.11,10.7a8,8,0,0,1,6.66-6.61A1.42,1.42,0,0,0,12,2.69h0A1.57,1.57,0,0,0,10.14,1.16Z"
class="spinner_ajPY"
/></svg
>
</div>
{/if}
</button>
</div>
</form>
</div>
</div>
</div>
</Modal>

File diff suppressed because it is too large

View file

@ -0,0 +1,89 @@
<script lang="ts">
import { getContext, tick } from 'svelte';
const i18n = getContext('i18n');
import Tooltip from '$lib/components/common/Tooltip.svelte';
import SensitiveInput from '$lib/components/common/SensitiveInput.svelte';
import AddConnectionModal from './AddConnectionModal.svelte';
import Cog6 from '$lib/components/icons/Cog6.svelte';
import Wrench from '$lib/components/icons/Wrench.svelte';
import ManageOllamaModal from './ManageOllamaModal.svelte';
export let onDelete = () => {};
export let onSubmit = () => {};
export let url = '';
export let idx = 0;
export let config = {};
let showManageModal = false;
let showConfigModal = false;
</script>
<AddConnectionModal
ollama
edit
bind:show={showConfigModal}
connection={{
url,
key: config?.key ?? '',
config: config
}}
{onDelete}
onSubmit={(connection) => {
url = connection.url;
config = { ...connection.config, key: connection.key };
onSubmit(connection);
}}
/>
<ManageOllamaModal bind:show={showManageModal} urlIdx={idx} />
<div class="flex gap-1.5">
<Tooltip
className="w-full relative"
content={$i18n.t(`WebUI will make requests to "{{url}}/api/chat"`, {
url
})}
placement="top-start"
>
{#if !(config?.enable ?? true)}
<div
class="absolute top-0 bottom-0 left-0 right-0 opacity-60 bg-white dark:bg-gray-900 z-10"
></div>
{/if}
<input
class="w-full text-sm bg-transparent outline-none"
placeholder={$i18n.t('Enter URL (e.g. http://localhost:11434)')}
bind:value={url}
/>
</Tooltip>
<div class="flex gap-1">
<Tooltip content={$i18n.t('Manage')} className="self-start">
<button
class="self-center p-1 bg-transparent hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 rounded-lg transition"
on:click={() => {
showManageModal = true;
}}
type="button"
>
<Wrench />
</button>
</Tooltip>
<Tooltip content={$i18n.t('Configure')} className="self-start">
<button
class="self-center p-1 bg-transparent hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 rounded-lg transition"
on:click={() => {
showConfigModal = true;
}}
type="button"
>
<Cog6 />
</button>
</Tooltip>
</div>
</div>

View file

@ -0,0 +1,107 @@
<script lang="ts">
import { getContext, tick } from 'svelte';
const i18n = getContext('i18n');
import Tooltip from '$lib/components/common/Tooltip.svelte';
import SensitiveInput from '$lib/components/common/SensitiveInput.svelte';
import Cog6 from '$lib/components/icons/Cog6.svelte';
import AddConnectionModal from './AddConnectionModal.svelte';
import { connect } from 'socket.io-client';
export let onDelete = () => {};
export let onSubmit = () => {};
export let pipeline = false;
export let url = '';
export let key = '';
export let config = {};
let showConfigModal = false;
</script>
<AddConnectionModal
edit
bind:show={showConfigModal}
connection={{
url,
key,
config
}}
{onDelete}
onSubmit={(connection) => {
url = connection.url;
key = connection.key;
config = connection.config;
onSubmit(connection);
}}
/>
<div class="flex w-full gap-2 items-center">
<Tooltip
className="w-full relative"
content={$i18n.t(`WebUI will make requests to "{{url}}/chat/completions"`, {
url
})}
placement="top-start"
>
{#if !(config?.enable ?? true)}
<div
class="absolute top-0 bottom-0 left-0 right-0 opacity-60 bg-white dark:bg-gray-900 z-10"
></div>
{/if}
<div class="flex w-full">
<div class="flex-1 relative">
<input
class=" outline-none w-full bg-transparent {pipeline ? 'pr-8' : ''}"
placeholder={$i18n.t('API Base URL')}
bind:value={url}
autocomplete="off"
/>
{#if pipeline}
<div class=" absolute top-2.5 right-2.5">
<Tooltip content="Pipelines">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
fill="currentColor"
class="size-4"
>
<path
d="M11.644 1.59a.75.75 0 0 1 .712 0l9.75 5.25a.75.75 0 0 1 0 1.32l-9.75 5.25a.75.75 0 0 1-.712 0l-9.75-5.25a.75.75 0 0 1 0-1.32l9.75-5.25Z"
/>
<path
d="m3.265 10.602 7.668 4.129a2.25 2.25 0 0 0 2.134 0l7.668-4.13 1.37.739a.75.75 0 0 1 0 1.32l-9.75 5.25a.75.75 0 0 1-.71 0l-9.75-5.25a.75.75 0 0 1 0-1.32l1.37-.738Z"
/>
<path
d="m10.933 19.231-7.668-4.13-1.37.739a.75.75 0 0 0 0 1.32l9.75 5.25c.221.12.489.12.71 0l9.75-5.25a.75.75 0 0 0 0-1.32l-1.37-.738-7.668 4.13a2.25 2.25 0 0 1-2.134-.001Z"
/>
</svg>
</Tooltip>
</div>
{/if}
</div>
<SensitiveInput
inputClassName=" outline-none bg-transparent w-full"
placeholder={$i18n.t('API Key')}
bind:value={key}
/>
</div>
</Tooltip>
<div class="flex gap-1">
<Tooltip content={$i18n.t('Configure')} className="self-start">
<button
class="self-center p-1 bg-transparent hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 rounded-lg transition"
on:click={() => {
showConfigModal = true;
}}
type="button"
>
<Cog6 />
</button>
</Tooltip>
</div>
</div>

View file

@ -181,37 +181,6 @@
</div>
</button>
{/if}
<hr class=" dark:border-gray-850 my-1" />
<button
type="button"
class=" flex rounded-md py-2 px-3 w-full hover:bg-gray-200 dark:hover:bg-gray-800 transition"
on:click={() => {
downloadLiteLLMConfig(localStorage.token).catch((error) => {
toast.error(error);
});
}}
>
<div class=" self-center mr-3">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path d="M2 3a1 1 0 0 1 1-1h10a1 1 0 0 1 1 1v1a1 1 0 0 1-1 1H3a1 1 0 0 1-1-1V3Z" />
<path
fill-rule="evenodd"
d="M13 6H3v6a2 2 0 0 0 2 2h6a2 2 0 0 0 2-2V6ZM8.75 7.75a.75.75 0 0 0-1.5 0v2.69L6.03 9.22a.75.75 0 0 0-1.06 1.06l2.5 2.5a.75.75 0 0 0 1.06 0l2.5-2.5a.75.75 0 1 0-1.06-1.06l-1.22 1.22V7.75Z"
clip-rule="evenodd"
/>
</svg>
</div>
<div class=" self-center text-sm font-medium">
{$i18n.t('Export LiteLLM config.yaml')}
</div>
</button>
</div>
</div>

View file

@ -19,7 +19,7 @@
} from '$lib/apis/retrieval';
import { knowledge, models } from '$lib/stores';
import { getKnowledgeItems } from '$lib/apis/knowledge';
import { getKnowledgeBases } from '$lib/apis/knowledge';
import { uploadDir, deleteAllFiles, deleteFileById } from '$lib/apis/files';
import ResetUploadDirConfirmDialog from '$lib/components/common/ConfirmDialog.svelte';
@ -56,8 +56,11 @@
let chunkOverlap = 0;
let pdfExtractImages = true;
let OpenAIKey = '';
let OpenAIUrl = '';
let OpenAIKey = '';
let OllamaUrl = '';
let OllamaKey = '';
let querySettings = {
template: '',
@ -104,19 +107,15 @@
const res = await updateEmbeddingConfig(localStorage.token, {
embedding_engine: embeddingEngine,
embedding_model: embeddingModel,
...(embeddingEngine === 'openai' || embeddingEngine === 'ollama'
? {
embedding_batch_size: embeddingBatchSize
}
: {}),
...(embeddingEngine === 'openai'
? {
embedding_batch_size: embeddingBatchSize,
ollama_config: {
key: OllamaKey,
url: OllamaUrl
},
openai_config: {
key: OpenAIKey,
url: OpenAIUrl
}
}
: {})
}).catch(async (error) => {
toast.error(error);
await setEmbeddingConfig();
@ -206,6 +205,9 @@
OpenAIKey = embeddingConfig.openai_config.key;
OpenAIUrl = embeddingConfig.openai_config.url;
OllamaKey = embeddingConfig.ollama_config.key;
OllamaUrl = embeddingConfig.ollama_config.url;
}
};
@ -310,9 +312,9 @@
</div>
{#if embeddingEngine === 'openai'}
<div class="my-0.5 flex gap-2">
<div class="my-0.5 flex gap-2 pr-2">
<input
class="flex-1 w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
class="flex-1 w-full rounded-lg text-sm bg-transparent outline-none"
placeholder={$i18n.t('API Base URL')}
bind:value={OpenAIUrl}
required
@ -320,7 +322,23 @@
<SensitiveInput placeholder={$i18n.t('API Key')} bind:value={OpenAIKey} />
</div>
{:else if embeddingEngine === 'ollama'}
<div class="my-0.5 flex gap-2 pr-2">
<input
class="flex-1 w-full rounded-lg text-sm bg-transparent outline-none"
placeholder={$i18n.t('API Base URL')}
bind:value={OllamaUrl}
required
/>
<SensitiveInput
placeholder={$i18n.t('API Key')}
bind:value={OllamaKey}
required={false}
/>
</div>
{/if}
{#if embeddingEngine === 'ollama' || embeddingEngine === 'openai'}
<div class="flex mt-0.5 space-x-2">
<div class=" self-center text-xs font-medium">{$i18n.t('Embedding Batch Size')}</div>
@ -376,19 +394,12 @@
{#if embeddingEngine === 'ollama'}
<div class="flex w-full">
<div class="flex-1 mr-2">
<select
<input
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
bind:value={embeddingModel}
placeholder={$i18n.t('Select a model')}
placeholder={$i18n.t('Set embedding model')}
required
>
{#if !embeddingModel}
<option value="" disabled selected>{$i18n.t('Select a model')}</option>
{/if}
{#each $models.filter((m) => m.id && m.ollama && !(m?.preset ?? false)) as model}
<option value={model.id} class="bg-gray-50 dark:bg-gray-700">{model.name}</option>
{/each}
</select>
/>
</div>
</div>
{:else}

View file

@ -11,7 +11,7 @@
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Plus from '$lib/components/icons/Plus.svelte';
import Model from './Evaluations/Model.svelte';
import ModelModal from './Evaluations/ModelModal.svelte';
import ArenaModelModal from './Evaluations/ArenaModelModal.svelte';
import { getConfig, updateConfig } from '$lib/apis/evaluations';
const i18n = getContext('i18n');
@ -65,7 +65,7 @@
});
</script>
<ModelModal
<ArenaModelModal
bind:show={showAddModel}
on:submit={async (e) => {
addModelHandler(e.detail);

View file

@ -9,6 +9,7 @@
import Minus from '$lib/components/icons/Minus.svelte';
import PencilSolid from '$lib/components/icons/PencilSolid.svelte';
import { toast } from 'svelte-sonner';
import AccessControl from '$lib/components/workspace/common/AccessControl.svelte';
export let show = false;
export let edit = false;
@ -39,6 +40,8 @@
let modelIds = [];
let filterMode = 'include';
let accessControl = {};
let imageInputElement;
let loading = false;
@ -74,7 +77,8 @@
profile_image_url: profileImageUrl,
description: description || null,
model_ids: modelIds.length > 0 ? modelIds : null,
filter_mode: modelIds.length > 0 ? (filterMode ? filterMode : null) : null
filter_mode: modelIds.length > 0 ? (filterMode ? filterMode : null) : null,
access_control: accessControl
}
};
@ -98,6 +102,7 @@
description = model.meta.description;
modelIds = model.meta.model_ids || [];
filterMode = model.meta?.filter_mode ?? 'include';
accessControl = 'access_control' in model.meta ? model.meta.access_control : {};
}
};
@ -283,6 +288,14 @@
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="my-2 -mx-2">
<div class="px-3 py-2 bg-gray-50 dark:bg-gray-950 rounded-lg">
<AccessControl bind:accessControl />
</div>
</div>
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="flex flex-col w-full">
<div class="mb-1 flex justify-between">
<div class="text-xs text-gray-500">{$i18n.t('Models')}</div>

View file

@ -4,13 +4,13 @@
const i18n = getContext('i18n');
import Cog6 from '$lib/components/icons/Cog6.svelte';
import ModelModal from './ModelModal.svelte';
import ArenaModelModal from './ArenaModelModal.svelte';
export let model;
let showModel = false;
</script>
<ModelModal
<ArenaModelModal
bind:show={showModel}
edit={true}
{model}

View file

@ -1,7 +1,16 @@
<script lang="ts">
import { getBackendConfig, getWebhookUrl, updateWebhookUrl } from '$lib/apis';
import { getAdminConfig, updateAdminConfig } from '$lib/apis/auths';
import {
getAdminConfig,
getLdapConfig,
getLdapServer,
updateAdminConfig,
updateLdapConfig,
updateLdapServer
} from '$lib/apis/auths';
import SensitiveInput from '$lib/components/common/SensitiveInput.svelte';
import Switch from '$lib/components/common/Switch.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import { config } from '$lib/stores';
import { onMount, getContext } from 'svelte';
import { toast } from 'svelte-sonner';
@ -13,9 +22,37 @@
let adminConfig = null;
let webhookUrl = '';
// LDAP
let ENABLE_LDAP = false;
let LDAP_SERVER = {
label: '',
host: '',
port: '',
attribute_for_username: 'uid',
app_dn: '',
app_dn_password: '',
search_base: '',
search_filters: '',
use_tls: false,
certificate_path: '',
ciphers: ''
};
const updateLdapServerHandler = async () => {
if (!ENABLE_LDAP) return;
const res = await updateLdapServer(localStorage.token, LDAP_SERVER).catch((error) => {
toast.error(error);
return null;
});
if (res) {
toast.success($i18n.t('LDAP server updated'));
}
};
const updateHandler = async () => {
webhookUrl = await updateWebhookUrl(localStorage.token, webhookUrl);
const res = await updateAdminConfig(localStorage.token, adminConfig);
await updateLdapServerHandler();
if (res) {
saveHandler();
@ -32,8 +69,14 @@
(async () => {
webhookUrl = await getWebhookUrl(localStorage.token);
})(),
(async () => {
LDAP_SERVER = await getLdapServer(localStorage.token);
})()
]);
const ldapConfig = await getLdapConfig(localStorage.token);
ENABLE_LDAP = ldapConfig.ENABLE_LDAP;
});
</script>
@ -69,7 +112,13 @@
</div>
</div>
<hr class=" dark:border-gray-850 my-2" />
<div class=" flex w-full justify-between pr-2">
<div class=" self-center text-xs font-medium">{$i18n.t('Enable API Key Auth')}</div>
<Switch bind:state={adminConfig.ENABLE_API_KEY} />
</div>
<hr class=" border-gray-50 dark:border-gray-850 my-2" />
<div class="my-3 flex w-full items-center justify-between pr-2">
<div class=" self-center text-xs font-medium">
@ -91,7 +140,7 @@
<Switch bind:state={adminConfig.ENABLE_MESSAGE_RATING} />
</div>
<hr class=" dark:border-gray-850 my-2" />
<hr class=" border-gray-50 dark:border-gray-850 my-2" />
<div class=" w-full justify-between">
<div class="flex w-full justify-between">
@ -115,7 +164,7 @@
</div>
</div>
<hr class=" dark:border-gray-850 my-2" />
<hr class=" border-gray-50 dark:border-gray-850 my-2" />
<div class=" w-full justify-between">
<div class="flex w-full justify-between">
@ -133,6 +182,196 @@
</div>
</div>
{/if}
<hr class=" border-gray-50 dark:border-gray-850" />
<div class=" space-y-3">
<div class="mt-2 space-y-2 pr-1.5">
<div class="flex justify-between items-center text-sm">
<div class=" font-medium">{$i18n.t('LDAP')}</div>
<div class="mt-1">
<Switch
bind:state={ENABLE_LDAP}
on:change={async () => {
updateLdapConfig(localStorage.token, ENABLE_LDAP);
}}
/>
</div>
</div>
{#if ENABLE_LDAP}
<div class="flex flex-col gap-1">
<div class="flex w-full gap-2">
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Label')}
</div>
<input
class="w-full bg-transparent outline-none py-0.5"
required
placeholder={$i18n.t('Enter server label')}
bind:value={LDAP_SERVER.label}
/>
</div>
<div class="w-full"></div>
</div>
<div class="flex w-full gap-2">
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Host')}
</div>
<input
class="w-full bg-transparent outline-none py-0.5"
required
placeholder={$i18n.t('Enter server host')}
bind:value={LDAP_SERVER.host}
/>
</div>
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Port')}
</div>
<Tooltip
placement="top-start"
content={$i18n.t('Default to 389 or 636 if TLS is enabled')}
className="w-full"
>
<input
class="w-full bg-transparent outline-none py-0.5"
type="number"
placeholder={$i18n.t('Enter server port')}
bind:value={LDAP_SERVER.port}
/>
</Tooltip>
</div>
</div>
<div class="flex w-full gap-2">
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Application DN')}
</div>
<Tooltip
content={$i18n.t('The Application Account DN you bind with for search')}
placement="top-start"
>
<input
class="w-full bg-transparent outline-none py-0.5"
required
placeholder={$i18n.t('Enter Application DN')}
bind:value={LDAP_SERVER.app_dn}
/>
</Tooltip>
</div>
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Application DN Password')}
</div>
<SensitiveInput
placeholder={$i18n.t('Enter Application DN Password')}
bind:value={LDAP_SERVER.app_dn_password}
/>
</div>
</div>
<div class="flex w-full gap-2">
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Attribute for Username')}
</div>
<Tooltip
content={$i18n.t(
'The LDAP attribute that maps to the username that users use to sign in.'
)}
placement="top-start"
>
<input
class="w-full bg-transparent outline-none py-0.5"
required
placeholder={$i18n.t('Example: sAMAccountName or uid or userPrincipalName')}
bind:value={LDAP_SERVER.attribute_for_username}
/>
</Tooltip>
</div>
</div>
<div class="flex w-full gap-2">
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Search Base')}
</div>
<Tooltip content={$i18n.t('The base to search for users')} placement="top-start">
<input
class="w-full bg-transparent outline-none py-0.5"
required
placeholder={$i18n.t('Example: ou=users,dc=foo,dc=example')}
bind:value={LDAP_SERVER.search_base}
/>
</Tooltip>
</div>
</div>
<div class="flex w-full gap-2">
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Search Filters')}
</div>
<input
class="w-full bg-transparent outline-none py-0.5"
placeholder={$i18n.t('Example: (&(objectClass=inetOrgPerson)(uid=%s))')}
bind:value={LDAP_SERVER.search_filters}
/>
</div>
</div>
<div class="text-xs text-gray-400 dark:text-gray-500">
<a
class=" text-gray-300 font-medium underline"
href="https://ldap.com/ldap-filters/"
target="_blank"
>
{$i18n.t('Click here for filter guides.')}
</a>
</div>
<div>
<div class="flex justify-between items-center text-sm">
<div class=" font-medium">{$i18n.t('TLS')}</div>
<div class="mt-1">
<Switch bind:state={LDAP_SERVER.use_tls} />
</div>
</div>
{#if LDAP_SERVER.use_tls}
<div class="flex w-full gap-2">
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1 mt-1">
{$i18n.t('Certificate Path')}
</div>
<input
class="w-full bg-transparent outline-none py-0.5"
required
placeholder={$i18n.t('Enter certificate path')}
bind:value={LDAP_SERVER.certificate_path}
/>
</div>
</div>
<div class="flex w-full gap-2">
<div class="w-full">
<div class=" self-center text-xs font-medium min-w-fit mb-1">
{$i18n.t('Ciphers')}
</div>
<Tooltip content={$i18n.t('Default to ALL')} placement="top-start">
<input
class="w-full bg-transparent outline-none py-0.5"
placeholder={$i18n.t('Example: ALL')}
bind:value={LDAP_SERVER.ciphers}
/>
</Tooltip>
</div>
<div class="w-full"></div>
</div>
{/if}
</div>
</div>
{/if}
</div>
</div>
</div>
<div class="flex justify-end pt-3 text-sm font-medium">

View file

@ -566,7 +566,7 @@
<div class="flex gap-2 mb-1">
<input
class="flex-1 w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
class="flex-1 w-full text-sm bg-transparent outline-none"
placeholder={$i18n.t('API Base URL')}
bind:value={config.openai.OPENAI_API_BASE_URL}
required

View file

@ -25,8 +25,10 @@
TASK_MODEL_EXTERNAL: '',
TITLE_GENERATION_PROMPT_TEMPLATE: '',
TAGS_GENERATION_PROMPT_TEMPLATE: '',
ENABLE_SEARCH_QUERY: true,
SEARCH_QUERY_GENERATION_PROMPT_TEMPLATE: ''
ENABLE_TAGS_GENERATION: true,
ENABLE_SEARCH_QUERY_GENERATION: true,
ENABLE_RETRIEVAL_QUERY_GENERATION: true,
QUERY_GENERATION_PROMPT_TEMPLATE: ''
};
let promptSuggestions = [];
@ -133,6 +135,17 @@
</Tooltip>
</div>
<hr class=" dark:border-gray-850 my-3" />
<div class="my-3 flex w-full items-center justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Enable Tags Generation')}
</div>
<Switch bind:state={taskConfig.ENABLE_TAGS_GENERATION} />
</div>
{#if taskConfig.ENABLE_TAGS_GENERATION}
<div class="mt-3">
<div class=" mb-2.5 text-xs font-medium">{$i18n.t('Tags Generation Prompt')}</div>
@ -142,31 +155,6 @@
>
<Textarea
bind:value={taskConfig.TAGS_GENERATION_PROMPT_TEMPLATE}
placeholder={$i18n.t('Leave empty to use the default prompt, or enter a custom prompt')}
/>
</Tooltip>
</div>
<hr class=" dark:border-gray-850 my-3" />
<div class="my-3 flex w-full items-center justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Enable Web Search Query Generation')}
</div>
<Switch bind:state={taskConfig.ENABLE_SEARCH_QUERY} />
</div>
{#if taskConfig.ENABLE_SEARCH_QUERY}
<div class="">
<div class=" mb-2.5 text-xs font-medium">{$i18n.t('Search Query Generation Prompt')}</div>
<Tooltip
content={$i18n.t('Leave empty to use the default prompt, or enter a custom prompt')}
placement="top-start"
>
<Textarea
bind:value={taskConfig.SEARCH_QUERY_GENERATION_PROMPT_TEMPLATE}
placeholder={$i18n.t(
'Leave empty to use the default prompt, or enter a custom prompt'
)}
@ -174,6 +162,38 @@
</Tooltip>
</div>
{/if}
<hr class=" dark:border-gray-850 my-3" />
<div class="my-3 flex w-full items-center justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Enable Retrieval Query Generation')}
</div>
<Switch bind:state={taskConfig.ENABLE_RETRIEVAL_QUERY_GENERATION} />
</div>
<div class="my-3 flex w-full items-center justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Enable Web Search Query Generation')}
</div>
<Switch bind:state={taskConfig.ENABLE_SEARCH_QUERY_GENERATION} />
</div>
<div class="">
<div class=" mb-2.5 text-xs font-medium">{$i18n.t('Query Generation Prompt')}</div>
<Tooltip
content={$i18n.t('Leave empty to use the default prompt, or enter a custom prompt')}
placement="top-start"
>
<Textarea
bind:value={taskConfig.QUERY_GENERATION_PROMPT_TEMPLATE}
placeholder={$i18n.t('Leave empty to use the default prompt, or enter a custom prompt')}
/>
</Tooltip>
</div>
</div>
<hr class=" dark:border-gray-850 my-3" />

File diff suppressed because it is too large

View file

@ -1,214 +0,0 @@
<script lang="ts">
import { getBackendConfig, getModelFilterConfig, updateModelFilterConfig } from '$lib/apis';
import { getSignUpEnabledStatus, toggleSignUpEnabledStatus } from '$lib/apis/auths';
import { getUserPermissions, updateUserPermissions } from '$lib/apis/users';
import { onMount, getContext } from 'svelte';
import { models, config } from '$lib/stores';
import Switch from '$lib/components/common/Switch.svelte';
import { setDefaultModels } from '$lib/apis/configs';
const i18n = getContext('i18n');
export let saveHandler: Function;
let defaultModelId = '';
let whitelistEnabled = false;
let whitelistModels = [''];
let permissions = {
chat: {
deletion: true,
edit: true,
temporary: true
}
};
let chatDeletion = true;
let chatEdit = true;
let chatTemporary = true;
onMount(async () => {
permissions = await getUserPermissions(localStorage.token);
chatDeletion = permissions?.chat?.deletion ?? true;
chatEdit = permissions?.chat?.editing ?? true;
chatTemporary = permissions?.chat?.temporary ?? true;
const res = await getModelFilterConfig(localStorage.token);
if (res) {
whitelistEnabled = res.enabled;
whitelistModels = res.models.length > 0 ? res.models : [''];
}
defaultModelId = $config.default_models ? $config?.default_models.split(',')[0] : '';
});
</script>
<form
class="flex flex-col h-full justify-between space-y-3 text-sm"
on:submit|preventDefault={async () => {
// console.log('submit');
await setDefaultModels(localStorage.token, defaultModelId);
await updateUserPermissions(localStorage.token, {
chat: {
deletion: chatDeletion,
editing: chatEdit,
temporary: chatTemporary
}
});
await updateModelFilterConfig(localStorage.token, whitelistEnabled, whitelistModels);
saveHandler();
await config.set(await getBackendConfig());
}}
>
<div class=" space-y-3 overflow-y-scroll max-h-full">
<div>
<div class=" mb-2 text-sm font-medium">{$i18n.t('User Permissions')}</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">{$i18n.t('Allow Chat Deletion')}</div>
<Switch bind:state={chatDeletion} />
</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">{$i18n.t('Allow Chat Editing')}</div>
<Switch bind:state={chatEdit} />
</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">{$i18n.t('Allow Temporary Chat')}</div>
<Switch bind:state={chatTemporary} />
</div>
</div>
<hr class=" dark:border-gray-850 my-2" />
<div class="mt-2 space-y-3">
<div>
<div class="mb-2">
<div class="flex justify-between items-center text-xs">
<div class=" text-sm font-medium">{$i18n.t('Manage Models')}</div>
</div>
</div>
<div class=" space-y-1 mb-3">
<div class="mb-2">
<div class="flex justify-between items-center text-xs">
<div class=" text-xs font-medium">{$i18n.t('Default Model')}</div>
</div>
</div>
<div class="flex-1 mr-2">
<select
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
bind:value={defaultModelId}
placeholder="Select a model"
>
<option value="" disabled selected>{$i18n.t('Select a model')}</option>
{#each $models.filter((model) => model.id) as model}
<option value={model.id} class="bg-gray-100 dark:bg-gray-700">{model.name}</option>
{/each}
</select>
</div>
</div>
<div class=" space-y-1">
<div class="mb-2">
<div class="flex justify-between items-center text-xs my-3 pr-2">
<div class=" text-xs font-medium">{$i18n.t('Model Whitelisting')}</div>
<Switch bind:state={whitelistEnabled} />
</div>
</div>
{#if whitelistEnabled}
<div>
<div class=" space-y-1.5">
{#each whitelistModels as modelId, modelIdx}
<div class="flex w-full">
<div class="flex-1 mr-2">
<select
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
bind:value={modelId}
placeholder="Select a model"
>
<option value="" disabled selected>{$i18n.t('Select a model')}</option>
{#each $models.filter((model) => model.id) as model}
<option value={model.id} class="bg-gray-100 dark:bg-gray-700"
>{model.name}</option
>
{/each}
</select>
</div>
{#if modelIdx === 0}
<button
class="px-2.5 bg-gray-100 hover:bg-gray-200 text-gray-800 dark:bg-gray-900 dark:text-white rounded-lg transition"
type="button"
on:click={() => {
if (whitelistModels.at(-1) !== '') {
whitelistModels = [...whitelistModels, ''];
}
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M8.75 3.75a.75.75 0 0 0-1.5 0v3.5h-3.5a.75.75 0 0 0 0 1.5h3.5v3.5a.75.75 0 0 0 1.5 0v-3.5h3.5a.75.75 0 0 0 0-1.5h-3.5v-3.5Z"
/>
</svg>
</button>
{:else}
<button
class="px-2.5 bg-gray-100 hover:bg-gray-200 text-gray-800 dark:bg-gray-900 dark:text-white rounded-lg transition"
type="button"
on:click={() => {
whitelistModels.splice(modelIdx, 1);
whitelistModels = whitelistModels;
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path d="M3.75 7.25a.75.75 0 0 0 0 1.5h8.5a.75.75 0 0 0 0-1.5h-8.5Z" />
</svg>
</button>
{/if}
</div>
{/each}
</div>
<div class="flex justify-end items-center text-xs mt-1.5 text-right">
<div class=" text-xs font-medium">
{whitelistModels.length}
{$i18n.t('Model(s) Whitelisted')}
</div>
</div>
</div>
{/if}
</div>
</div>
</div>
</div>
<div class="flex justify-end pt-3 text-sm font-medium">
<button
class="px-3.5 py-1.5 text-sm font-medium bg-black hover:bg-gray-900 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition rounded-full"
type="submit"
>
{$i18n.t('Save')}
</button>
</div>
</form>

View file

@ -23,7 +23,8 @@
'searchapi',
'duckduckgo',
'tavily',
'jina'
'jina',
'bing'
];
let youtubeLanguage = 'en';
@ -234,6 +235,46 @@
bind:value={webConfig.search.tavily_api_key}
/>
</div>
{:else if webConfig.search.engine === 'jina'}
<div>
<div class=" self-center text-xs font-medium mb-1">
{$i18n.t('Jina API Key')}
</div>
<SensitiveInput
placeholder={$i18n.t('Enter Jina API Key')}
bind:value={webConfig.search.jina_api_key}
/>
</div>
{:else if webConfig.search.engine === 'bing'}
<div>
<div class=" self-center text-xs font-medium mb-1">
{$i18n.t('Bing Search V7 Endpoint')}
</div>
<div class="flex w-full">
<div class="flex-1">
<input
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-none"
type="text"
placeholder={$i18n.t('Enter Bing Search V7 Endpoint')}
bind:value={webConfig.search.bing_search_v7_endpoint}
autocomplete="off"
/>
</div>
</div>
</div>
<div class="mt-2">
<div class=" self-center text-xs font-medium mb-1">
{$i18n.t('Bing Search V7 Subscription Key')}
</div>
<SensitiveInput
placeholder={$i18n.t('Enter Bing Search V7 Subscription Key')}
bind:value={webConfig.search.bing_search_v7_subscription_key}
/>
</div>
{/if}
</div>
{/if}
@ -285,12 +326,12 @@
<button
class="p-1 px-3 text-xs flex rounded transition"
on:click={() => {
webConfig.ssl_verification = !webConfig.ssl_verification;
webConfig.web_loader_ssl_verification = !webConfig.web_loader_ssl_verification;
submitHandler();
}}
type="button"
>
{#if webConfig.ssl_verification === true}
{#if webConfig.web_loader_ssl_verification === false}
<span class="ml-2 self-center">{$i18n.t('On')}</span>
{:else}
<span class="ml-2 self-center">{$i18n.t('Off')}</span>

View file

@ -0,0 +1,110 @@
<script>
import { getContext, tick, onMount } from 'svelte';
import { toast } from 'svelte-sonner';
import { goto } from '$app/navigation';
import { user } from '$lib/stores';
import { getUsers } from '$lib/apis/users';
import UserList from './Users/UserList.svelte';
import Groups from './Users/Groups.svelte';
const i18n = getContext('i18n');
let users = [];
let selectedTab = 'overview';
let loaded = false;
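// Re-fetch the user list whenever the selected tab changes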
$: if (selectedTab) {
getUsersHandler();
}
const getUsersHandler = async () => {
users = await getUsers(localStorage.token);
};
onMount(async () => {
if ($user?.role !== 'admin') {
await goto('/');
} else {
users = await getUsers(localStorage.token);
}
loaded = true;
const containerElement = document.getElementById('users-tabs-container');
if (containerElement) {
containerElement.addEventListener('wheel', function (event) {
if (event.deltaY !== 0) {
// Adjust horizontal scroll position based on vertical scroll
containerElement.scrollLeft += event.deltaY;
}
});
}
});
</script>
<div class="flex flex-col lg:flex-row w-full h-full pb-2 lg:space-x-4">
<div
id="users-tabs-container"
class=" flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
>
<button
class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab ===
'overview'
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'overview';
}}
>
<div class=" self-center mr-2">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="size-4"
>
<path
d="M8.5 4.5a2.5 2.5 0 1 1-5 0 2.5 2.5 0 0 1 5 0ZM10.9 12.006c.11.542-.348.994-.9.994H2c-.553 0-1.01-.452-.902-.994a5.002 5.002 0 0 1 9.803 0ZM14.002 12h-1.59a2.556 2.556 0 0 0-.04-.29 6.476 6.476 0 0 0-1.167-2.603 3.002 3.002 0 0 1 3.633 1.911c.18.522-.283.982-.836.982ZM12 8a2 2 0 1 0 0-4 2 2 0 0 0 0 4Z"
/>
</svg>
</div>
<div class=" self-center">{$i18n.t('Overview')}</div>
</button>
<button
class="px-0.5 py-1 min-w-fit rounded-lg lg:flex-none flex text-right transition {selectedTab ===
'groups'
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'groups';
}}
>
<div class=" self-center mr-2">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="size-4"
>
<path
d="M8 8a2.5 2.5 0 1 0 0-5 2.5 2.5 0 0 0 0 5ZM3.156 11.763c.16-.629.44-1.21.813-1.72a2.5 2.5 0 0 0-2.725 1.377c-.136.287.102.58.418.58h1.449c.01-.077.025-.156.045-.237ZM12.847 11.763c.02.08.036.16.046.237h1.446c.316 0 .554-.293.417-.579a2.5 2.5 0 0 0-2.722-1.378c.374.51.653 1.09.813 1.72ZM14 7.5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0ZM3.5 9a1.5 1.5 0 1 0 0-3 1.5 1.5 0 0 0 0 3ZM5 13c-.552 0-1.013-.455-.876-.99a4.002 4.002 0 0 1 7.753 0c.136.535-.324.99-.877.99H5Z"
/>
</svg>
</div>
<div class=" self-center">{$i18n.t('Groups')}</div>
</button>
</div>
<div class="flex-1 mt-1 lg:mt-0 overflow-y-scroll">
{#if selectedTab === 'overview'}
<UserList {users} />
{:else if selectedTab === 'groups'}
<Groups {users} />
{/if}
</div>
</div>

View file

@ -0,0 +1,237 @@
<script>
import { toast } from 'svelte-sonner';
import dayjs from 'dayjs';
import relativeTime from 'dayjs/plugin/relativeTime';
dayjs.extend(relativeTime);
import { onMount, getContext } from 'svelte';
import { goto } from '$app/navigation';
import { WEBUI_NAME, config, user, showSidebar, knowledge } from '$lib/stores';
import { WEBUI_BASE_URL } from '$lib/constants';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Plus from '$lib/components/icons/Plus.svelte';
import Badge from '$lib/components/common/Badge.svelte';
import UsersSolid from '$lib/components/icons/UsersSolid.svelte';
import ChevronRight from '$lib/components/icons/ChevronRight.svelte';
import EllipsisHorizontal from '$lib/components/icons/EllipsisHorizontal.svelte';
import User from '$lib/components/icons/User.svelte';
import UserCircleSolid from '$lib/components/icons/UserCircleSolid.svelte';
import GroupModal from './Groups/EditGroupModal.svelte';
import Pencil from '$lib/components/icons/Pencil.svelte';
import GroupItem from './Groups/GroupItem.svelte';
import AddGroupModal from './Groups/AddGroupModal.svelte';
import { createNewGroup, getGroups } from '$lib/apis/groups';
import { getUserDefaultPermissions, updateUserDefaultPermissions } from '$lib/apis/users';
const i18n = getContext('i18n');
let loaded = false;
export let users = [];
let groups = [];
let filteredGroups;
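// Case-insensitive filter of the group list by name, driven by the search box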
$: filteredGroups = groups.filter((group) => {
if (search === '') {
return true;
} else {
const name = group.name.toLowerCase();
const query = search.toLowerCase();
return name.includes(query);
}
});
let search = '';
let defaultPermissions = {
workspace: {
models: false,
knowledge: false,
prompts: false,
tools: false
},
chat: {
file_upload: true,
delete: true,
edit: true,
temporary: true
}
};
let showCreateGroupModal = false;
let showDefaultPermissionsModal = false;
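// Reload the group list from the backend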
const setGroups = async () => {
groups = await getGroups(localStorage.token);
};
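// Create a new group via the API and refresh the list; API errors surface as toasts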
const addGroupHandler = async (group) => {
const res = await createNewGroup(localStorage.token, group).catch((error) => {
toast.error(error);
return null;
});
if (res) {
toast.success($i18n.t('Group created successfully'));
groups = await getGroups(localStorage.token);
}
};
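// Persist the default permissions that apply to all users with the "user" role, then re-fetch them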
const updateDefaultPermissionsHandler = async (group) => {
console.log(group.permissions);
const res = await updateUserDefaultPermissions(localStorage.token, group.permissions).catch(
(error) => {
toast.error(error);
return null;
}
);
if (res) {
toast.success($i18n.t('Default permissions updated successfully'));
defaultPermissions = await getUserDefaultPermissions(localStorage.token);
}
};
onMount(async () => {
if ($user?.role !== 'admin') {
await goto('/');
} else {
await setGroups();
defaultPermissions = await getUserDefaultPermissions(localStorage.token);
}
loaded = true;
});
</script>
{#if loaded}
<AddGroupModal bind:show={showCreateGroupModal} onSubmit={addGroupHandler} />
<div class="mt-0.5 mb-2 gap-1 flex flex-col md:flex-row justify-between">
<div class="flex md:self-center text-lg font-medium px-0.5">
{$i18n.t('Groups')}
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
<span class="text-lg font-medium text-gray-500 dark:text-gray-300">{groups.length}</span>
</div>
<div class="flex gap-1">
<div class=" flex w-full space-x-2">
<div class="flex flex-1">
<div class=" self-center ml-1 mr-3">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
fill-rule="evenodd"
d="M9 3.5a5.5 5.5 0 100 11 5.5 5.5 0 000-11zM2 9a7 7 0 1112.452 4.391l3.328 3.329a.75.75 0 11-1.06 1.06l-3.329-3.328A7 7 0 012 9z"
clip-rule="evenodd"
/>
</svg>
</div>
<input
class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-none bg-transparent"
bind:value={search}
placeholder={$i18n.t('Search')}
/>
</div>
<div>
<Tooltip content={$i18n.t('Create Group')}>
<button
class=" p-2 rounded-xl hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 transition font-medium text-sm flex items-center space-x-1"
on:click={() => {
showCreateGroupModal = !showCreateGroupModal;
}}
>
<Plus className="size-3.5" />
</button>
</Tooltip>
</div>
</div>
</div>
</div>
<div>
{#if filteredGroups.length === 0}
<div class="flex flex-col items-center justify-center h-40">
<div class=" text-xl font-medium">
{$i18n.t('Organize your users')}
</div>
<div class="mt-1 text-sm dark:text-gray-300">
				{$i18n.t('Use groups to organize your users and assign permissions.')}
</div>
<div class="mt-3">
<button
class=" px-4 py-1.5 text-sm rounded-full bg-black hover:bg-gray-800 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition font-medium flex items-center space-x-1"
aria-label={$i18n.t('Create Group')}
on:click={() => {
showCreateGroupModal = true;
}}
>
{$i18n.t('Create Group')}
</button>
</div>
</div>
{:else}
<div>
<div class=" flex items-center gap-3 justify-between text-xs uppercase px-1 font-bold">
<div class="w-full">Group</div>
<div class="w-full">Users</div>
<div class="w-full"></div>
</div>
<hr class="mt-1.5 border-gray-50 dark:border-gray-850" />
{#each filteredGroups as group}
<div class="my-2">
<GroupItem {group} {users} {setGroups} />
</div>
{/each}
</div>
{/if}
<hr class="mb-2 border-gray-50 dark:border-gray-850" />
<GroupModal
bind:show={showDefaultPermissionsModal}
tabs={['permissions']}
bind:permissions={defaultPermissions}
custom={false}
onSubmit={updateDefaultPermissionsHandler}
/>
<button
class="flex items-center justify-between rounded-lg w-full transition pt-1"
on:click={() => {
showDefaultPermissionsModal = true;
}}
>
<div class="flex items-center gap-2.5">
<div class="p-1.5 bg-black/5 dark:bg-white/10 rounded-full">
<UsersSolid className="size-4" />
</div>
<div class="text-left">
<div class=" text-sm font-medium">{$i18n.t('Default permissions')}</div>
<div class="flex text-xs mt-0.5">
{$i18n.t('applies to all users with the "user" role')}
</div>
</div>
</div>
<div>
<ChevronRight strokeWidth="2.5" />
</div>
</button>
</div>
{/if}

View file

@ -0,0 +1,149 @@
<script lang="ts">
import { toast } from 'svelte-sonner';
import { getContext, onMount } from 'svelte';
const i18n = getContext('i18n');
import Modal from '$lib/components/common/Modal.svelte';
import Textarea from '$lib/components/common/Textarea.svelte';
export let onSubmit: Function = () => {};
export let show = false;
let name = '';
let description = '';
let userIds = [];
let loading = false;
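// Build the group payload, hand it to the parent via onSubmit, then reset the form and close the modal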
const submitHandler = async () => {
loading = true;
const group = {
name,
description
};
await onSubmit(group);
loading = false;
show = false;
name = '';
description = '';
userIds = [];
};
onMount(() => {
console.log('mounted');
});
</script>
<Modal size="sm" bind:show>
<div>
<div class=" flex justify-between dark:text-gray-100 px-5 pt-4 mb-1.5">
<div class=" text-lg font-medium self-center font-primary">
{$i18n.t('Add User Group')}
</div>
<button
class="self-center"
on:click={() => {
show = false;
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-5 h-5"
>
<path
d="M6.28 5.22a.75.75 0 00-1.06 1.06L8.94 10l-3.72 3.72a.75.75 0 101.06 1.06L10 11.06l3.72 3.72a.75.75 0 101.06-1.06L11.06 10l3.72-3.72a.75.75 0 00-1.06-1.06L10 8.94 6.28 5.22z"
/>
</svg>
</button>
</div>
<div class="flex flex-col md:flex-row w-full px-4 pb-4 md:space-x-4 dark:text-gray-200">
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<form
class="flex flex-col w-full"
on:submit={(e) => {
e.preventDefault();
submitHandler();
}}
>
<div class="px-1 flex flex-col w-full">
<div class="flex gap-2">
<div class="flex flex-col w-full">
<div class=" mb-0.5 text-xs text-gray-500">{$i18n.t('Name')}</div>
<div class="flex-1">
<input
class="w-full text-sm bg-transparent placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none"
type="text"
bind:value={name}
placeholder={$i18n.t('Group Name')}
autocomplete="off"
required
/>
</div>
</div>
</div>
<div class="flex flex-col w-full mt-2">
<div class=" mb-1 text-xs text-gray-500">{$i18n.t('Description')}</div>
<div class="flex-1">
<Textarea
className="w-full text-sm bg-transparent placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none resize-none"
rows={2}
bind:value={description}
placeholder={$i18n.t('Group Description')}
/>
</div>
</div>
</div>
<div class="flex justify-end pt-3 text-sm font-medium gap-1.5">
<button
class="px-3.5 py-1.5 text-sm font-medium bg-black hover:bg-gray-900 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center {loading
? ' cursor-not-allowed'
: ''}"
type="submit"
disabled={loading}
>
{$i18n.t('Create')}
{#if loading}
<div class="ml-2 self-center">
<svg
class=" w-4 h-4"
viewBox="0 0 24 24"
fill="currentColor"
xmlns="http://www.w3.org/2000/svg"
><style>
.spinner_ajPY {
transform-origin: center;
animation: spinner_AtaB 0.75s infinite linear;
}
@keyframes spinner_AtaB {
100% {
transform: rotate(360deg);
}
}
</style><path
d="M12,1A11,11,0,1,0,23,12,11,11,0,0,0,12,1Zm0,19a8,8,0,1,1,8-8A8,8,0,0,1,12,20Z"
opacity=".25"
/><path
d="M10.14,1.16a11,11,0,0,0-9,8.92A1.59,1.59,0,0,0,2.46,12,1.52,1.52,0,0,0,4.11,10.7a8,8,0,0,1,6.66-6.61A1.42,1.42,0,0,0,12,2.69h0A1.57,1.57,0,0,0,10.14,1.16Z"
class="spinner_ajPY"
/></svg
>
</div>
{/if}
</button>
</div>
</form>
</div>
</div>
</div>
</Modal>

View file

@ -0,0 +1,61 @@
<script lang="ts">
import { getContext } from 'svelte';
import Textarea from '$lib/components/common/Textarea.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
const i18n = getContext('i18n');
export let name = '';
export let color = '';
export let description = '';
</script>
<div class="flex gap-2">
<div class="flex flex-col w-full">
<div class=" mb-0.5 text-xs text-gray-500">{$i18n.t('Name')}</div>
<div class="flex-1">
<input
class="w-full text-sm bg-transparent placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none"
type="text"
bind:value={name}
placeholder={$i18n.t('Group Name')}
autocomplete="off"
required
/>
</div>
</div>
</div>
<!-- <div class="flex flex-col w-full mt-2">
<div class=" mb-1 text-xs text-gray-500">{$i18n.t('Color')}</div>
<div class="flex-1">
<Tooltip content={$i18n.t('Hex Color - Leave empty for default color')} placement="top-start">
<div class="flex gap-0.5">
<div class="text-gray-500">#</div>
<input
class="w-full text-sm bg-transparent placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none"
type="text"
bind:value={color}
placeholder={$i18n.t('Hex Color')}
autocomplete="off"
/>
</div>
</Tooltip>
</div>
</div> -->
<div class="flex flex-col w-full mt-2">
<div class=" mb-1 text-xs text-gray-500">{$i18n.t('Description')}</div>
<div class="flex-1">
<Textarea
className="w-full text-sm bg-transparent placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none resize-none"
rows={4}
bind:value={description}
placeholder={$i18n.t('Group Description')}
/>
</div>
</div>

View file

@ -0,0 +1,328 @@
<script lang="ts">
import { toast } from 'svelte-sonner';
import { getContext, onMount } from 'svelte';
const i18n = getContext('i18n');
import Modal from '$lib/components/common/Modal.svelte';
import Display from './Display.svelte';
import Permissions from './Permissions.svelte';
import Users from './Users.svelte';
import UserPlusSolid from '$lib/components/icons/UserPlusSolid.svelte';
import WrenchSolid from '$lib/components/icons/WrenchSolid.svelte';
export let onSubmit: Function = () => {};
export let onDelete: Function = () => {};
export let show = false;
export let edit = false;
export let users = [];
export let group = null;
export let custom = true;
export let tabs = ['general', 'permissions', 'users'];
let selectedTab = 'general';
let loading = false;
export let name = '';
export let description = '';
export let permissions = {
workspace: {
models: false,
knowledge: false,
prompts: false,
tools: false
},
chat: {
file_upload: true,
delete: true,
edit: true,
temporary: true
}
};
export let userIds = [];
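// Collect the edited fields into a group payload and delegate persistence to the parent via onSubmit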
const submitHandler = async () => {
loading = true;
const group = {
name,
description,
permissions,
user_ids: userIds
};
await onSubmit(group);
loading = false;
show = false;
};
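// Populate the local form state from the bound group, falling back to default permissions and an empty member list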
const init = () => {
if (group) {
name = group.name;
description = group.description;
permissions = group?.permissions ?? {
workspace: {
models: false,
knowledge: false,
prompts: false,
tools: false
},
chat: {
file_upload: true,
delete: true,
edit: true,
temporary: true
}
};
userIds = group?.user_ids ?? [];
}
};
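// Re-initialize the form whenever the modal is (re)opened so stale edits are discarded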
$: if (show) {
init();
}
onMount(() => {
console.log(tabs);
selectedTab = tabs[0];
init();
});
</script>
<Modal size="md" bind:show>
<div>
<div class=" flex justify-between dark:text-gray-100 px-5 pt-4 mb-1.5">
<div class=" text-lg font-medium self-center font-primary">
{#if custom}
{#if edit}
{$i18n.t('Edit User Group')}
{:else}
{$i18n.t('Add User Group')}
{/if}
{:else}
{$i18n.t('Edit Default Permissions')}
{/if}
</div>
<button
class="self-center"
on:click={() => {
show = false;
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-5 h-5"
>
<path
d="M6.28 5.22a.75.75 0 00-1.06 1.06L8.94 10l-3.72 3.72a.75.75 0 101.06 1.06L10 11.06l3.72 3.72a.75.75 0 101.06-1.06L11.06 10l3.72-3.72a.75.75 0 00-1.06-1.06L10 8.94 6.28 5.22z"
/>
</svg>
</button>
</div>
<div class="flex flex-col md:flex-row w-full px-4 pb-4 md:space-x-4 dark:text-gray-200">
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<form
class="flex flex-col w-full"
on:submit={(e) => {
e.preventDefault();
submitHandler();
}}
>
<div class="flex flex-col lg:flex-row w-full h-full pb-2 lg:space-x-4">
<div
id="admin-settings-tabs-container"
class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
>
{#if tabs.includes('general')}
<button
class="px-0.5 py-1 max-w-fit w-fit rounded-lg flex-1 lg:flex-none flex text-right transition {selectedTab ===
'general'
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'general';
}}
type="button"
>
<div class=" self-center mr-2">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-4 h-4"
>
<path
fill-rule="evenodd"
d="M6.955 1.45A.5.5 0 0 1 7.452 1h1.096a.5.5 0 0 1 .497.45l.17 1.699c.484.12.94.312 1.356.562l1.321-1.081a.5.5 0 0 1 .67.033l.774.775a.5.5 0 0 1 .034.67l-1.08 1.32c.25.417.44.873.561 1.357l1.699.17a.5.5 0 0 1 .45.497v1.096a.5.5 0 0 1-.45.497l-1.699.17c-.12.484-.312.94-.562 1.356l1.082 1.322a.5.5 0 0 1-.034.67l-.774.774a.5.5 0 0 1-.67.033l-1.322-1.08c-.416.25-.872.44-1.356.561l-.17 1.699a.5.5 0 0 1-.497.45H7.452a.5.5 0 0 1-.497-.45l-.17-1.699a4.973 4.973 0 0 1-1.356-.562L4.108 13.37a.5.5 0 0 1-.67-.033l-.774-.775a.5.5 0 0 1-.034-.67l1.08-1.32a4.971 4.971 0 0 1-.561-1.357l-1.699-.17A.5.5 0 0 1 1 8.548V7.452a.5.5 0 0 1 .45-.497l1.699-.17c.12-.484.312-.94.562-1.356L2.629 4.107a.5.5 0 0 1 .034-.67l.774-.774a.5.5 0 0 1 .67-.033L5.43 3.71a4.97 4.97 0 0 1 1.356-.561l.17-1.699ZM6 8c0 .538.212 1.026.558 1.385l.057.057a2 2 0 0 0 2.828-2.828l-.058-.056A2 2 0 0 0 6 8Z"
clip-rule="evenodd"
/>
</svg>
</div>
<div class=" self-center">{$i18n.t('General')}</div>
</button>
{/if}
{#if tabs.includes('permissions')}
<button
class="px-0.5 py-1 max-w-fit w-fit rounded-lg flex-1 lg:flex-none flex text-right transition {selectedTab ===
'permissions'
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'permissions';
}}
type="button"
>
<div class=" self-center mr-2">
<WrenchSolid />
</div>
<div class=" self-center">{$i18n.t('Permissions')}</div>
</button>
{/if}
{#if tabs.includes('users')}
<button
class="px-0.5 py-1 max-w-fit w-fit rounded-lg flex-1 lg:flex-none flex text-right transition {selectedTab ===
'users'
? ''
: ' text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'users';
}}
type="button"
>
<div class=" self-center mr-2">
<UserPlusSolid />
</div>
<div class=" self-center">{$i18n.t('Users')} ({userIds.length})</div>
</button>
{/if}
</div>
<div
class="flex-1 mt-1 lg:mt-1 lg:h-[22rem] lg:max-h-[22rem] overflow-y-auto scrollbar-hidden"
>
{#if selectedTab == 'general'}
<Display bind:name bind:description />
{:else if selectedTab == 'permissions'}
<Permissions bind:permissions />
{:else if selectedTab == 'users'}
<Users bind:userIds {users} />
{/if}
</div>
</div>
<!-- <div
class=" tabs flex flex-row overflow-x-auto gap-2.5 text-sm font-medium border-b border-b-gray-800 scrollbar-hidden"
>
{#if tabs.includes('display')}
<button
class="px-0.5 pb-1.5 min-w-fit flex text-right transition border-b-2 {selectedTab ===
'display'
? ' dark:border-white'
: 'border-transparent text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'display';
}}
type="button"
>
{$i18n.t('Display')}
</button>
{/if}
{#if tabs.includes('permissions')}
<button
class="px-0.5 pb-1.5 min-w-fit flex text-right transition border-b-2 {selectedTab ===
'permissions'
? ' dark:border-white'
: 'border-transparent text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'permissions';
}}
type="button"
>
{$i18n.t('Permissions')}
</button>
{/if}
{#if tabs.includes('users')}
<button
class="px-0.5 pb-1.5 min-w-fit flex text-right transition border-b-2 {selectedTab ===
'users'
? ' dark:border-white'
: ' border-transparent text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'}"
on:click={() => {
selectedTab = 'users';
}}
type="button"
>
{$i18n.t('Users')} ({userIds.length})
</button>
{/if}
</div> -->
<div class="flex justify-end pt-3 text-sm font-medium gap-1.5">
{#if edit}
<button
class="px-3.5 py-1.5 text-sm font-medium dark:bg-black dark:hover:bg-gray-900 dark:text-white bg-white text-black hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center"
type="button"
on:click={() => {
onDelete();
show = false;
}}
>
{$i18n.t('Delete')}
</button>
{/if}
<button
class="px-3.5 py-1.5 text-sm font-medium bg-black hover:bg-gray-900 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center {loading
? ' cursor-not-allowed'
: ''}"
type="submit"
disabled={loading}
>
{$i18n.t('Save')}
{#if loading}
<div class="ml-2 self-center">
<svg
class=" w-4 h-4"
viewBox="0 0 24 24"
fill="currentColor"
xmlns="http://www.w3.org/2000/svg"
><style>
.spinner_ajPY {
transform-origin: center;
animation: spinner_AtaB 0.75s infinite linear;
}
@keyframes spinner_AtaB {
100% {
transform: rotate(360deg);
}
}
</style><path
d="M12,1A11,11,0,1,0,23,12,11,11,0,0,0,12,1Zm0,19a8,8,0,1,1,8-8A8,8,0,0,1,12,20Z"
opacity=".25"
/><path
d="M10.14,1.16a11,11,0,0,0-9,8.92A1.59,1.59,0,0,0,2.46,12,1.52,1.52,0,0,0,4.11,10.7a8,8,0,0,1,6.66-6.61A1.42,1.42,0,0,0,12,2.69h0A1.57,1.57,0,0,0,10.14,1.16Z"
class="spinner_ajPY"
/></svg
>
</div>
{/if}
</button>
</div>
</form>
</div>
</div>
</div>
</Modal>

View file

@ -0,0 +1,84 @@
<script>
import { toast } from 'svelte-sonner';
import { getContext } from 'svelte';
const i18n = getContext('i18n');
import { deleteGroupById, updateGroupById } from '$lib/apis/groups';
import Pencil from '$lib/components/icons/Pencil.svelte';
import User from '$lib/components/icons/User.svelte';
import UserCircleSolid from '$lib/components/icons/UserCircleSolid.svelte';
import GroupModal from './EditGroupModal.svelte';
export let users = [];
export let group = {
name: 'Admins',
user_ids: [1, 2, 3]
};
export let setGroups = () => {};
let showEdit = false;
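// Persist changes to this group and refresh the parent's list on success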
const updateHandler = async (_group) => {
const res = await updateGroupById(localStorage.token, group.id, _group).catch((error) => {
toast.error(error);
return null;
});
if (res) {
toast.success($i18n.t('Group updated successfully'));
setGroups();
}
};
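// Delete this group and refresh the parent's list on success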
const deleteHandler = async () => {
const res = await deleteGroupById(localStorage.token, group.id).catch((error) => {
toast.error(error);
return null;
});
if (res) {
toast.success($i18n.t('Group deleted successfully'));
setGroups();
}
};
</script>
<GroupModal
bind:show={showEdit}
edit
{users}
{group}
onSubmit={updateHandler}
onDelete={deleteHandler}
/>
<button
class="flex items-center gap-3 justify-between px-1 text-xs w-full transition"
on:click={() => {
showEdit = true;
}}
>
<div class="flex items-center gap-1.5 w-full font-medium">
<div>
<UserCircleSolid className="size-4" />
</div>
{group.name}
</div>
<div class="flex items-center gap-1.5 w-full font-medium">
{group.user_ids.length}
<div>
<User className="size-3.5" />
</div>
</div>
<div class="w-full flex justify-end">
<div class=" rounded-lg p-1 hover:bg-gray-100 dark:hover:bg-gray-850 transition">
<Pencil className="size-3.5" />
</div>
</div>
</button>

View file

@ -0,0 +1,204 @@
<script lang="ts">
import { getContext } from 'svelte';
const i18n = getContext('i18n');
import Switch from '$lib/components/common/Switch.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
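// Two-way bound permissions object; the switches below mutate it directly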
export let permissions = {
workspace: {
models: false,
knowledge: false,
prompts: false,
tools: false
},
chat: {
delete: true,
edit: true,
temporary: true,
file_upload: true
}
};
</script>
<div>
<!-- <div>
<div class=" mb-2 text-sm font-medium">{$i18n.t('Model Permissions')}</div>
<div class="mb-2">
<div class="flex justify-between items-center text-xs pr-2">
<div class=" text-xs font-medium">{$i18n.t('Model Filtering')}</div>
<Switch bind:state={permissions.model.filter} />
</div>
</div>
{#if permissions.model.filter}
<div class="mb-2">
<div class=" space-y-1.5">
<div class="flex flex-col w-full">
<div class="mb-1 flex justify-between">
<div class="text-xs text-gray-500">{$i18n.t('Model IDs')}</div>
</div>
{#if model_ids.length > 0}
<div class="flex flex-col">
{#each model_ids as modelId, modelIdx}
<div class=" flex gap-2 w-full justify-between items-center">
<div class=" text-sm flex-1 rounded-lg">
{modelId}
</div>
<div class="flex-shrink-0">
<button
type="button"
on:click={() => {
model_ids = model_ids.filter((_, idx) => idx !== modelIdx);
}}
>
<Minus strokeWidth="2" className="size-3.5" />
</button>
</div>
</div>
{/each}
</div>
{:else}
<div class="text-gray-500 text-xs text-center py-2 px-10">
{$i18n.t('No model IDs')}
</div>
{/if}
</div>
</div>
<hr class=" border-gray-100 dark:border-gray-700/10 mt-2.5 mb-1 w-full" />
<div class="flex items-center">
<select
class="w-full py-1 text-sm rounded-lg bg-transparent {selectedModelId
? ''
: 'text-gray-500'} placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-none"
bind:value={selectedModelId}
>
<option value="">{$i18n.t('Select a model')}</option>
{#each $models.filter((m) => m?.owned_by !== 'arena') as model}
<option value={model.id} class="bg-gray-50 dark:bg-gray-700">{model.name}</option>
{/each}
</select>
<div>
<button
type="button"
on:click={() => {
if (selectedModelId && !permissions.model.model_ids.includes(selectedModelId)) {
permissions.model.model_ids = [...permissions.model.model_ids, selectedModelId];
selectedModelId = '';
}
}}
>
<Plus className="size-3.5" strokeWidth="2" />
</button>
</div>
</div>
</div>
{/if}
<div class=" space-y-1 mb-3">
<div class="">
<div class="flex justify-between items-center text-xs">
<div class=" text-xs font-medium">{$i18n.t('Default Model')}</div>
</div>
</div>
<div class="flex-1 mr-2">
<select
class="w-full bg-transparent outline-none py-0.5 text-sm"
bind:value={permissions.model.default_id}
placeholder="Select a model"
>
<option value="" disabled selected>{$i18n.t('Select a model')}</option>
{#each permissions.model.filter ? $models.filter( (model) => filterModelIds.includes(model.id) ) : $models.filter((model) => model.id) as model}
<option value={model.id} class="bg-gray-100 dark:bg-gray-700">{model.name}</option>
{/each}
</select>
</div>
</div>
</div>
<hr class=" border-gray-50 dark:border-gray-850 my-2" /> -->
<div>
<div class=" mb-2 text-sm font-medium">{$i18n.t('Workspace Permissions')}</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">
{$i18n.t('Models Access')}
</div>
<Switch bind:state={permissions.workspace.models} />
</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">
{$i18n.t('Knowledge Access')}
</div>
<Switch bind:state={permissions.workspace.knowledge} />
</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">
{$i18n.t('Prompts Access')}
</div>
<Switch bind:state={permissions.workspace.prompts} />
</div>
<div class=" ">
<Tooltip
className=" flex w-full justify-between my-2 pr-2"
content={$i18n.t(
'Warning: Enabling this will allow users to upload arbitrary code on the server.'
)}
placement="top-start"
>
<div class=" self-center text-xs font-medium">
{$i18n.t('Tools Access')}
</div>
<Switch bind:state={permissions.workspace.tools} />
</Tooltip>
</div>
</div>
<hr class=" border-gray-50 dark:border-gray-850 my-2" />
<div>
<div class=" mb-2 text-sm font-medium">{$i18n.t('Chat Permissions')}</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">
{$i18n.t('Allow File Upload')}
</div>
<Switch bind:state={permissions.chat.file_upload} />
</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">
{$i18n.t('Allow Chat Delete')}
</div>
<Switch bind:state={permissions.chat.delete} />
</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">
{$i18n.t('Allow Chat Edit')}
</div>
<Switch bind:state={permissions.chat.edit} />
</div>
<div class=" flex w-full justify-between my-2 pr-2">
<div class=" self-center text-xs font-medium">
{$i18n.t('Allow Temporary Chat')}
</div>
<Switch bind:state={permissions.chat.temporary} />
</div>
</div>
</div>

Some files were not shown because too many files have changed in this diff.