Compare commits

...

50 commits

Author SHA1 Message Date
Brice Ruth
6c58110679
Merge 5041ad01f5 into 3b3e12b43a 2025-12-11 10:05:57 +01:00
Timothy Jaeryang Baek
3b3e12b43a refac
2025-12-11 01:09:14 -05:00
Timothy Jaeryang Baek
4d9a51ba33 refac 2025-12-11 00:11:12 -05:00
Timothy Jaeryang Baek
4b4241273d refac: styling 2025-12-11 00:07:32 -05:00
Timothy Jaeryang Baek
db95e96688 chore: dep 2025-12-10 23:59:52 -05:00
Shirasawa
99c820d607
fix: fixed the issue of mismatched spaces in audio MIME types (#17771) 2025-12-10 23:59:10 -05:00
Timothy Jaeryang Baek
282c541427 refac 2025-12-10 23:56:20 -05:00
Timothy Jaeryang Baek
b364cf43d3 feat: resizable sidebar
Co-Authored-By: ALiNew <42788336+sukjinkim@users.noreply.github.com>
2025-12-10 23:54:36 -05:00
Timothy Jaeryang Baek
b9676cf36f refac: styling 2025-12-10 23:35:46 -05:00
G30
258caaeced
fix: resolve layout shift in knowledge items with long names (#19832)
Co-authored-by: Tim Baek <tim@openwebui.com>
2025-12-10 23:34:36 -05:00
Timothy Jaeryang Baek
6e99b10163 refac 2025-12-10 23:31:11 -05:00
Timothy Jaeryang Baek
a2a9a9bcf4 refac 2025-12-10 23:28:40 -05:00
Timothy Jaeryang Baek
0addc1ea46 refac 2025-12-10 23:28:33 -05:00
Timothy Jaeryang Baek
6812d3b9d1 refac 2025-12-10 23:20:38 -05:00
Timothy Jaeryang Baek
ceae3d48e6 enh/refac: kb pagination 2025-12-10 23:19:19 -05:00
Timothy Jaeryang Baek
3ed1df2e53 refac: search notes db query
2025-12-10 21:06:53 -05:00
Timothy Jaeryang Baek
68219d84a9 refac 2025-12-10 17:08:31 -05:00
Timothy Jaeryang Baek
6068e23590 refac 2025-12-10 17:08:18 -05:00
Timothy Jaeryang Baek
d7467a86e2 refac 2025-12-10 17:03:51 -05:00
Timothy Jaeryang Baek
d098c57d4d refac 2025-12-10 17:00:28 -05:00
Timothy Jaeryang Baek
693636d971 enh/refac: show read only kbs 2025-12-10 16:58:53 -05:00
Timothy Jaeryang Baek
a6ef82c5ed refac: styling 2025-12-10 16:43:43 -05:00
Timothy Jaeryang Baek
79cfe29bb2 refac: channel_file and knowledge table migration 2025-12-10 16:41:22 -05:00
Timothy Jaeryang Baek
d1d42128e5 refac/fix: channel files 2025-12-10 15:53:45 -05:00
Timothy Jaeryang Baek
2bccf8350d enh: channel files 2025-12-10 15:48:42 -05:00
Timothy Jaeryang Baek
c15201620d refac: kb files 2025-12-10 15:48:27 -05:00
Andreas
f31ca75892
Fix typo in user permission environment variables (#19860) 2025-12-10 15:09:15 -05:00
Timothy Jaeryang Baek
a7993f6f4e refac
2025-12-10 12:22:40 -05:00
Luke Garceau
5041ad01f5 Merge branch 'dev' of github.com:open-webui/open-webui into feat/google-oauth-groups-dev 2025-12-09 16:43:31 -05:00
Luke Garceau
4b46f7d802
Merge branch 'dev' into feat/google-oauth-groups-dev 2025-11-28 14:51:27 -05:00
Luke Garceau
b1800aa224
Merge branch 'main' into feat/google-oauth-groups-dev 2025-11-27 19:02:44 -05:00
Luke Garceau
41e724cdaf resolve incorrect tabulation 2025-11-27 19:00:26 -05:00
Luke Garceau
9358dc2848
Merge branch 'open-webui:main' into main 2025-11-27 18:57:05 -05:00
Luke Garceau
159ef78f6f Merge remote-tracking branch 'origin/dev' into feat/google-oauth-groups-dev
# Conflicts:
#	backend/open_webui/utils/oauth.py
2025-11-27 17:18:02 -05:00
Luke Garceau
89a5dbda45 Merge branch 'main' into feat/google-oauth-groups-dev
# Conflicts:
#	backend/open_webui/utils/oauth.py
#	uv.lock
2025-11-27 16:53:12 -05:00
Luke Garceau
9eb4484e56
Merge pull request #2 from lgarceau768/main
Version 0.6.34 Updates
2025-11-27 15:46:49 -05:00
Tim Baek
140605e660
Merge pull request #19462 from open-webui/dev
0.6.40
2025-11-25 06:01:33 -05:00
Tim Baek
9899293f05
Merge pull request #19448 from open-webui/dev
0.6.39
2025-11-25 05:31:34 -05:00
Tim Baek
e3faec62c5
Merge pull request #19416 from open-webui/dev
0.6.38
2025-11-24 07:00:31 -05:00
Tim Baek
fc05e0a6c5
Merge pull request #19405 from open-webui/dev
chore: format
2025-11-23 22:16:33 -05:00
Tim Baek
fe6783c166
Merge pull request #19030 from open-webui/dev
0.6.37
2025-11-23 22:10:05 -05:00
Luke Garceau
d2776965dc
Merge branch 'main' into main 2025-11-22 11:24:58 -05:00
Luke Garceau
6c86ff7d2e
Merge pull request #1 from lgarceau768/feat/google-groups
Google Groups Functionality on top of version 0.6.34
2025-11-05 12:05:23 -05:00
Luke Garceau
6dbc01c31b - resolve merge conflicts 2025-11-05 10:44:30 -05:00
Brice Ruth
04811dd15d
update tests for adjusted query string & payload 2025-06-17 09:13:33 -05:00
Brice Ruth
30f4950c5c
fix google cloud identity query string 2025-06-17 09:13:32 -05:00
Brice Ruth
8d6cf357aa
feat: Add Google Cloud Identity API support for OAuth group-based roles
Enables Google Workspace group-based role assignment by integrating with
Google Cloud Identity API to fetch user groups in real-time.

Key improvements:
- Fetches groups directly from Google API using cloud-identity.groups.readonly scope
- Enables admin role assignment based on Google group membership
- Maintains full backward compatibility with existing OAuth configurations
- Includes comprehensive test suite with proper async mocking
- Complete documentation with Google Cloud Console setup guide

Addresses limitation where Google Workspace doesn't include group membership
claims in OAuth JWT tokens, preventing group-based role assignment.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-17 09:13:31 -05:00
Brice Ruth
cc6a1a7d9f
update tests for adjusted query string & payload 2025-06-16 18:18:52 -05:00
Brice Ruth
64ce040388
fix google cloud identity query string 2025-06-16 17:49:42 -05:00
Brice Ruth
a909fd9296
feat: Add Google Cloud Identity API support for OAuth group-based roles
Enables Google Workspace group-based role assignment by integrating with
Google Cloud Identity API to fetch user groups in real-time.

Key improvements:
- Fetches groups directly from Google API using cloud-identity.groups.readonly scope
- Enables admin role assignment based on Google group membership
- Maintains full backward compatibility with existing OAuth configurations
- Includes comprehensive test suite with proper async mocking
- Complete documentation with Google Cloud Console setup guide

Addresses limitation where Google Workspace doesn't include group membership
claims in OAuth JWT tokens, preventing group-based role assignment.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-16 11:35:47 -05:00
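For context on the two commits above: they describe fetching a user's Google Workspace groups from the Google Cloud Identity API at login, and the test file at the end of this compare exercises an OAuthManager._fetch_google_groups_via_cloud_identity(access_token, user_email) helper that calls a searchTransitiveGroups endpoint with a Bearer token and returns the member group emails (or an empty list on any error). A minimal standalone sketch of that flow follows; the exact endpoint path and query encoding are assumptions drawn from Google's public Cloud Identity API, not from this diff.

import urllib.parse

import aiohttp


# Hypothetical sketch only; the real helper lives in open_webui/utils/oauth.py and may differ.
async def fetch_google_groups(access_token: str, user_email: str) -> list[str]:
    # Cloud Identity "searchTransitiveGroups" lookup keyed by the member's email.
    query = urllib.parse.quote(f"member_key_id == '{user_email}'")
    url = (
        "https://cloudidentity.googleapis.com/v1/groups/-/memberships:"
        f"searchTransitiveGroups?query={query}"
    )
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    try:
        async with aiohttp.ClientSession() as session:
            async with session.get(url, headers=headers) as response:
                if response.status != 200:
                    # e.g. 403 when the cloud-identity.groups.readonly scope is missing
                    return []
                data = await response.json()
                # Each membership carries the group's email address in groupKey.id
                return [
                    m["groupKey"]["id"]
                    for m in data.get("memberships", [])
                    if m.get("groupKey", {}).get("id")
                ]
    except aiohttp.ClientError:
        return []
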
56 changed files with 2804 additions and 963 deletions

View file

@@ -1306,7 +1306,7 @@ USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING = (
USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING = (
os.environ.get(
- "USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING", "False"
+ "USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING", "False"
).lower()
== "true"
)
@@ -1345,7 +1345,7 @@ USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING = (
USER_PERMISSIONS_NOTES_ALLOW_SHARING = (
- os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING", "False").lower()
+ os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_SHARING", "False").lower()
== "true"
)
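
A minimal sketch of what this rename changes in practice, assuming the same os.environ.get(...).lower() == "true" pattern used throughout config.py (the value set below is illustrative only):

import os

# Illustrative: the documented variable name now matches what the code reads.
# Before the fix, the code looked up the *_ALLOW_PUBLIC_SHARING name instead,
# so setting this variable had no effect.
os.environ["USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING"] = "True"

USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING", "False").lower()
    == "true"
)
print(USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING)  # True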

View file

@@ -0,0 +1,54 @@
"""Add channel file table
Revision ID: 6283dc0e4d8d
Revises: 3e0e00844bb0
Create Date: 2025-12-10 15:11:39.424601
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
import open_webui.internal.db
# revision identifiers, used by Alembic.
revision: str = "6283dc0e4d8d"
down_revision: Union[str, None] = "3e0e00844bb0"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.create_table(
"channel_file",
sa.Column("id", sa.Text(), primary_key=True),
sa.Column("user_id", sa.Text(), nullable=False),
sa.Column(
"channel_id",
sa.Text(),
sa.ForeignKey("channel.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column(
"file_id",
sa.Text(),
sa.ForeignKey("file.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("created_at", sa.BigInteger(), nullable=False),
sa.Column("updated_at", sa.BigInteger(), nullable=False),
# indexes
sa.Index("ix_channel_file_channel_id", "channel_id"),
sa.Index("ix_channel_file_file_id", "file_id"),
sa.Index("ix_channel_file_user_id", "user_id"),
# unique constraints
sa.UniqueConstraint(
"channel_id", "file_id", name="uq_channel_file_channel_file"
), # prevent duplicate entries
)
def downgrade() -> None:
op.drop_table("channel_file")

View file

@@ -0,0 +1,49 @@
"""Update channel file and knowledge table
Revision ID: 81cc2ce44d79
Revises: 6283dc0e4d8d
Create Date: 2025-12-10 16:07:58.001282
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
import open_webui.internal.db
# revision identifiers, used by Alembic.
revision: str = "81cc2ce44d79"
down_revision: Union[str, None] = "6283dc0e4d8d"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Add message_id column to channel_file table
with op.batch_alter_table("channel_file", schema=None) as batch_op:
batch_op.add_column(
sa.Column(
"message_id",
sa.Text(),
sa.ForeignKey(
"message.id", ondelete="CASCADE", name="fk_channel_file_message_id"
),
nullable=True,
)
)
# Add data column to knowledge table
with op.batch_alter_table("knowledge", schema=None) as batch_op:
batch_op.add_column(sa.Column("data", sa.JSON(), nullable=True))
def downgrade() -> None:
# Remove message_id column from channel_file table
with op.batch_alter_table("channel_file", schema=None) as batch_op:
batch_op.drop_column("message_id")
# Remove data column from knowledge table
with op.batch_alter_table("knowledge", schema=None) as batch_op:
batch_op.drop_column("data")

View file

@@ -10,7 +10,18 @@ from pydantic import BaseModel, ConfigDict
from sqlalchemy.dialects.postgresql import JSONB
- from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON, case, cast
+ from sqlalchemy import (
+ BigInteger,
+ Boolean,
+ Column,
+ ForeignKey,
+ String,
+ Text,
+ JSON,
+ UniqueConstraint,
+ case,
+ cast,
+ )
from sqlalchemy import or_, func, select, and_, text
from sqlalchemy.sql import exists
@@ -137,6 +148,41 @@ class ChannelMemberModel(BaseModel):
updated_at: Optional[int] = None # timestamp in epoch (time_ns)
class ChannelFile(Base):
__tablename__ = "channel_file"
id = Column(Text, unique=True, primary_key=True)
user_id = Column(Text, nullable=False)
channel_id = Column(
Text, ForeignKey("channel.id", ondelete="CASCADE"), nullable=False
)
message_id = Column(
Text, ForeignKey("message.id", ondelete="CASCADE"), nullable=True
)
file_id = Column(Text, ForeignKey("file.id", ondelete="CASCADE"), nullable=False)
created_at = Column(BigInteger, nullable=False)
updated_at = Column(BigInteger, nullable=False)
__table_args__ = (
UniqueConstraint("channel_id", "file_id", name="uq_channel_file_channel_file"),
)
class ChannelFileModel(BaseModel):
model_config = ConfigDict(from_attributes=True)
id: str
channel_id: str
file_id: str
user_id: str
created_at: int # timestamp in epoch (time_ns)
updated_at: int # timestamp in epoch (time_ns)
class ChannelWebhook(Base):
__tablename__ = "channel_webhook"
@@ -642,6 +688,135 @@ class ChannelTable:
channel = db.query(Channel).filter(Channel.id == id).first()
return ChannelModel.model_validate(channel) if channel else None
def get_channels_by_file_id(self, file_id: str) -> list[ChannelModel]:
with get_db() as db:
channel_files = (
db.query(ChannelFile).filter(ChannelFile.file_id == file_id).all()
)
channel_ids = [cf.channel_id for cf in channel_files]
channels = db.query(Channel).filter(Channel.id.in_(channel_ids)).all()
return [ChannelModel.model_validate(channel) for channel in channels]
def get_channels_by_file_id_and_user_id(
self, file_id: str, user_id: str
) -> list[ChannelModel]:
with get_db() as db:
# 1. Determine which channels have this file
channel_file_rows = (
db.query(ChannelFile).filter(ChannelFile.file_id == file_id).all()
)
channel_ids = [row.channel_id for row in channel_file_rows]
if not channel_ids:
return []
# 2. Load all channel rows that still exist
channels = (
db.query(Channel)
.filter(
Channel.id.in_(channel_ids),
Channel.deleted_at.is_(None),
Channel.archived_at.is_(None),
)
.all()
)
if not channels:
return []
# Preload user's group membership
user_group_ids = [g.id for g in Groups.get_groups_by_member_id(user_id)]
allowed_channels = []
for channel in channels:
# --- Case A: group or dm => user must be an active member ---
if channel.type in ["group", "dm"]:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel.id,
ChannelMember.user_id == user_id,
ChannelMember.is_active.is_(True),
)
.first()
)
if membership:
allowed_channels.append(ChannelModel.model_validate(channel))
continue
# --- Case B: standard channel => rely on ACL permissions ---
query = db.query(Channel).filter(Channel.id == channel.id)
query = self._has_permission(
db,
query,
{"user_id": user_id, "group_ids": user_group_ids},
permission="read",
)
allowed = query.first()
if allowed:
allowed_channels.append(ChannelModel.model_validate(allowed))
return allowed_channels
def get_channel_by_id_and_user_id(
self, id: str, user_id: str
) -> Optional[ChannelModel]:
with get_db() as db:
# Fetch the channel
channel: Channel = (
db.query(Channel)
.filter(
Channel.id == id,
Channel.deleted_at.is_(None),
Channel.archived_at.is_(None),
)
.first()
)
if not channel:
return None
# If the channel is a group or dm, read access requires membership (active)
if channel.type in ["group", "dm"]:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == id,
ChannelMember.user_id == user_id,
ChannelMember.is_active.is_(True),
)
.first()
)
if membership:
return ChannelModel.model_validate(channel)
else:
return None
# For channels that are NOT group/dm, fall back to ACL-based read access
query = db.query(Channel).filter(Channel.id == id)
# Determine user groups
user_group_ids = [
group.id for group in Groups.get_groups_by_member_id(user_id)
]
# Apply ACL rules
query = self._has_permission(
db,
query,
{"user_id": user_id, "group_ids": user_group_ids},
permission="read",
)
channel_allowed = query.first()
return (
ChannelModel.model_validate(channel_allowed)
if channel_allowed
else None
)
def update_channel_by_id(
self, id: str, form_data: ChannelForm
) -> Optional[ChannelModel]:
@@ -663,6 +838,65 @@ class ChannelTable:
db.commit()
return ChannelModel.model_validate(channel) if channel else None
def add_file_to_channel_by_id(
self, channel_id: str, file_id: str, user_id: str
) -> Optional[ChannelFileModel]:
with get_db() as db:
channel_file = ChannelFileModel(
**{
"id": str(uuid.uuid4()),
"channel_id": channel_id,
"file_id": file_id,
"user_id": user_id,
"created_at": int(time.time()),
"updated_at": int(time.time()),
}
)
try:
result = ChannelFile(**channel_file.model_dump())
db.add(result)
db.commit()
db.refresh(result)
if result:
return ChannelFileModel.model_validate(result)
else:
return None
except Exception:
return None
def set_file_message_id_in_channel_by_id(
self, channel_id: str, file_id: str, message_id: str
) -> bool:
try:
with get_db() as db:
channel_file = (
db.query(ChannelFile)
.filter_by(channel_id=channel_id, file_id=file_id)
.first()
)
if not channel_file:
return False
channel_file.message_id = message_id
channel_file.updated_at = int(time.time())
db.commit()
return True
except Exception:
return False
def remove_file_from_channel_by_id(self, channel_id: str, file_id: str) -> bool:
try:
with get_db() as db:
db.query(ChannelFile).filter_by(
channel_id=channel_id, file_id=file_id
).delete()
db.commit()
return True
except Exception:
return False
def delete_channel_by_id(self, id: str):
with get_db() as db:
db.query(Channel).filter(Channel.id == id).delete()

View file

@@ -126,6 +126,49 @@ class ChatTitleIdResponse(BaseModel):
created_at: int
class ChatListResponse(BaseModel):
items: list[ChatModel]
total: int
class ChatUsageStatsResponse(BaseModel):
id: str # chat id
models: dict = {} # models used in the chat with their usage counts
message_count: int # number of messages in the chat
history_models: dict = {} # models used in the chat history with their usage counts
history_message_count: int # number of messages in the chat history
history_user_message_count: int # number of user messages in the chat history
history_assistant_message_count: (
int # number of assistant messages in the chat history
)
average_response_time: (
float # average response time of assistant messages in seconds
)
average_user_message_content_length: (
float # average length of user message contents
)
average_assistant_message_content_length: (
float # average length of assistant message contents
)
tags: list[str] = [] # tags associated with the chat
last_message_at: int # timestamp of the last message
updated_at: int
created_at: int
model_config = ConfigDict(extra="allow")
class ChatUsageStatsListResponse(BaseModel):
items: list[ChatUsageStatsResponse]
total: int
model_config = ConfigDict(extra="allow")
class ChatTable:
def _clean_null_bytes(self, obj):
"""
@@ -675,14 +718,31 @@ class ChatTable:
)
return [ChatModel.model_validate(chat) for chat in all_chats]
- def get_chats_by_user_id(self, user_id: str) -> list[ChatModel]:
+ def get_chats_by_user_id(
+ self, user_id: str, skip: Optional[int] = None, limit: Optional[int] = None
+ ) -> ChatListResponse:
with get_db() as db:
- all_chats = (
+ query = (
db.query(Chat)
.filter_by(user_id=user_id)
.order_by(Chat.updated_at.desc())
)
- return [ChatModel.model_validate(chat) for chat in all_chats]
+ total = query.count()
+ if skip is not None:
+ query = query.offset(skip)
+ if limit is not None:
+ query = query.limit(limit)
+ all_chats = query.all()
+ return ChatListResponse(
+ **{
+ "items": [ChatModel.model_validate(chat) for chat in all_chats],
+ "total": total,
+ }
+ )
def get_pinned_chats_by_user_id(self, user_id: str) -> list[ChatModel]:
with get_db() as db:

View file

@@ -104,6 +104,11 @@ class FileUpdateForm(BaseModel):
meta: Optional[dict] = None
class FileListResponse(BaseModel):
items: list[FileModel]
total: int
class FilesTable:
def insert_new_file(self, user_id: str, form_data: FileForm) -> Optional[FileModel]:
with get_db() as db:

View file

@@ -5,6 +5,7 @@ from typing import Optional
import uuid
from open_webui.internal.db import Base, get_db
from open_webui.env import SRC_LOG_LEVELS
from open_webui.models.files import (
@@ -30,6 +31,8 @@ from sqlalchemy import (
)
from open_webui.utils.access_control import has_access
from open_webui.utils.db.access_control import has_permission
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])
@@ -132,7 +135,7 @@ class KnowledgeResponse(KnowledgeModel):
class KnowledgeUserResponse(KnowledgeUserModel):
- files: Optional[list[FileMetadataResponse | dict]] = None
+ pass
class KnowledgeForm(BaseModel):
@@ -145,6 +148,11 @@ class FileUserResponse(FileModelResponse):
user: Optional[UserResponse] = None
class KnowledgeListResponse(BaseModel):
items: list[KnowledgeUserModel]
total: int
class KnowledgeFileListResponse(BaseModel):
items: list[FileUserResponse]
total: int
@@ -177,12 +185,13 @@ class KnowledgeTable:
except Exception:
return None
- def get_knowledge_bases(self) -> list[KnowledgeUserModel]:
+ def get_knowledge_bases(
+ self, skip: int = 0, limit: int = 30
+ ) -> list[KnowledgeUserModel]:
with get_db() as db:
all_knowledge = (
db.query(Knowledge).order_by(Knowledge.updated_at.desc()).all()
)
user_ids = list(set(knowledge.user_id for knowledge in all_knowledge))
users = Users.get_users_by_user_ids(user_ids) if user_ids else []
@@ -201,6 +210,126 @@ class KnowledgeTable:
)
return knowledge_bases
def search_knowledge_bases(
self, user_id: str, filter: dict, skip: int = 0, limit: int = 30
) -> KnowledgeListResponse:
try:
with get_db() as db:
query = db.query(Knowledge, User).outerjoin(
User, User.id == Knowledge.user_id
)
if filter:
query_key = filter.get("query")
if query_key:
query = query.filter(
or_(
Knowledge.name.ilike(f"%{query_key}%"),
Knowledge.description.ilike(f"%{query_key}%"),
)
)
view_option = filter.get("view_option")
if view_option == "created":
query = query.filter(Knowledge.user_id == user_id)
elif view_option == "shared":
query = query.filter(Knowledge.user_id != user_id)
query = has_permission(db, Knowledge, query, filter)
query = query.order_by(Knowledge.updated_at.desc())
total = query.count()
if skip:
query = query.offset(skip)
if limit:
query = query.limit(limit)
items = query.all()
knowledge_bases = []
for knowledge_base, user in items:
knowledge_bases.append(
KnowledgeUserModel.model_validate(
{
**KnowledgeModel.model_validate(
knowledge_base
).model_dump(),
"user": (
UserModel.model_validate(user).model_dump()
if user
else None
),
}
)
)
return KnowledgeListResponse(items=knowledge_bases, total=total)
except Exception as e:
print(e)
return KnowledgeListResponse(items=[], total=0)
def search_knowledge_files(
self, filter: dict, skip: int = 0, limit: int = 30
) -> KnowledgeFileListResponse:
"""
Scalable version: search files across all knowledge bases the user has
READ access to, without loading all KBs or using large IN() lists.
"""
try:
with get_db() as db:
# Base query: join Knowledge → KnowledgeFile → File
query = (
db.query(File, User)
.join(KnowledgeFile, File.id == KnowledgeFile.file_id)
.join(Knowledge, KnowledgeFile.knowledge_id == Knowledge.id)
.outerjoin(User, User.id == KnowledgeFile.user_id)
)
# Apply access-control directly to the joined query
# This makes the database handle filtering, even with 10k+ KBs
query = has_permission(db, Knowledge, query, filter)
# Apply filename search
if filter:
q = filter.get("query")
if q:
query = query.filter(File.filename.ilike(f"%{q}%"))
# Order by file changes
query = query.order_by(File.updated_at.desc())
# Count before pagination
total = query.count()
if skip:
query = query.offset(skip)
if limit:
query = query.limit(limit)
rows = query.all()
items = []
for file, user in rows:
items.append(
FileUserResponse(
**FileModel.model_validate(file).model_dump(),
user=(
UserResponse(
**UserModel.model_validate(user).model_dump()
)
if user
else None
),
)
)
return KnowledgeFileListResponse(items=items, total=total)
except Exception as e:
print("search_knowledge_files error:", e)
return KnowledgeFileListResponse(items=[], total=0)
def check_access_by_user_id(self, id, user_id, permission="write") -> bool:
knowledge = self.get_knowledge_by_id(id)
if not knowledge:
@@ -232,6 +361,21 @@ class KnowledgeTable:
except Exception:
return None
def get_knowledge_by_id_and_user_id(
self, id: str, user_id: str
) -> Optional[KnowledgeModel]:
knowledge = self.get_knowledge_by_id(id)
if not knowledge:
return None
if knowledge.user_id == user_id:
return knowledge
user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
if has_access(user_id, "write", knowledge.access_control, user_group_ids):
return knowledge
return None
def get_knowledges_by_file_id(self, file_id: str) -> list[KnowledgeModel]:
try:
with get_db() as db:

View file

@@ -255,7 +255,9 @@ class NoteTable:
query = query.filter(
or_(
Note.title.ilike(f"%{query_key}%"),
- Note.data["content"]["md"].ilike(f"%{query_key}%"),
+ cast(Note.data["content"]["md"], Text).ilike(
+ f"%{query_key}%"
+ ),
)
)

View file

@@ -33,6 +33,7 @@ from fastapi.responses import FileResponse
from pydantic import BaseModel
from open_webui.utils.misc import strict_match_mime_type
from open_webui.utils.auth import get_admin_user, get_verified_user
from open_webui.utils.headers import include_user_info_headers
from open_webui.config import (
@@ -1155,17 +1156,9 @@ def transcription(
stt_supported_content_types = getattr(
request.app.state.config, "STT_SUPPORTED_CONTENT_TYPES", []
- )
- if not any(
- fnmatch(file.content_type, content_type)
- for content_type in (
- stt_supported_content_types
- if stt_supported_content_types
- and any(t.strip() for t in stt_supported_content_types)
- else ["audio/*", "video/webm"]
- )
- ):
+ ) or ["audio/*", "video/webm"]
+ if not strict_match_mime_type(stt_supported_content_types, file.content_type):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.FILE_NOT_SUPPORTED,
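
This hunk (and the matching one in the files router later in this compare) swaps the fnmatch-based check for a strict_match_mime_type helper plus an explicit ["audio/*", "video/webm"] fallback; per the commit "fix: fixed the issue of mismatched spaces in audio MIME types (#17771)", the intent is to tolerate stray whitespace in the configured types. The helper's implementation is not shown in this compare, so the following is only a rough sketch under that assumption:

from typing import Optional


# Hypothetical stand-in for open_webui.utils.misc.strict_match_mime_type; the real helper may differ.
def strict_match_mime_type_sketch(supported_types: list[str], content_type: Optional[str]) -> bool:
    if not content_type:
        return False
    content_type = content_type.strip().lower()
    for supported in supported_types:
        supported = supported.strip().lower()  # tolerate accidental spaces in config values
        if not supported:
            continue
        if supported == content_type:
            return True
        # "audio/*"-style wildcard: match on the major type only
        if supported.endswith("/*") and content_type.split("/", 1)[0] == supported[:-2]:
            return True
    return False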

View file

@@ -1093,6 +1093,15 @@ async def post_new_message(
try:
message, channel = await new_message_handler(request, id, form_data, user)
try:
if files := message.data.get("files", []):
for file in files:
Channels.set_file_message_id_in_channel_by_id(
channel.id, file.get("id", ""), message.id
)
except Exception as e:
log.debug(e)
active_user_ids = get_user_ids_from_room(f"channel:{channel.id}")
async def background_handler():

View file

@@ -8,6 +8,7 @@ from open_webui.socket.main import get_event_emitter
from open_webui.models.chats import (
ChatForm,
ChatImportForm,
ChatUsageStatsListResponse,
ChatsImportForm,
ChatResponse,
Chats,
@@ -68,16 +69,25 @@ def get_session_user_chat_list(
############################
- # GetChatList
+ # GetChatUsageStats
+ # EXPERIMENTAL: may be removed in future releases
############################
- @router.get("/stats/usage", response_model=list[ChatTitleIdResponse])
- def get_session_user_chat_usage(
+ @router.get("/stats/usage", response_model=ChatUsageStatsListResponse)
+ def get_session_user_chat_usage_stats(
+ items_per_page: Optional[int] = 50,
+ page: Optional[int] = 1,
user=Depends(get_verified_user),
):
try:
- chats = Chats.get_chats_by_user_id(user.id)
+ limit = items_per_page
+ skip = (page - 1) * limit
+ result = Chats.get_chats_by_user_id(user.id, skip=skip, limit=limit)
+ chats = result.items
+ total = result.total
chat_stats = []
for chat in chats:
@@ -86,37 +96,96 @@ def get_session_user_chat_usage(
if messages_map and message_id:
try:
history_models = {}
history_message_count = len(messages_map)
history_user_messages = []
history_assistant_messages = []
for message in messages_map.values():
if message.get("role", "") == "user":
history_user_messages.append(message)
elif message.get("role", "") == "assistant":
history_assistant_messages.append(message)
model = message.get("model", None)
if model:
if model not in history_models:
history_models[model] = 0
history_models[model] += 1
average_user_message_content_length = (
sum(
len(message.get("content", ""))
for message in history_user_messages
)
/ len(history_user_messages)
if len(history_user_messages) > 0
else 0
)
average_assistant_message_content_length = (
sum(
len(message.get("content", ""))
for message in history_assistant_messages
)
/ len(history_assistant_messages)
if len(history_assistant_messages) > 0
else 0
)
response_times = []
for message in history_assistant_messages:
user_message_id = message.get("parentId", None)
if user_message_id and user_message_id in messages_map:
user_message = messages_map[user_message_id]
response_time = message.get(
"timestamp", 0
) - user_message.get("timestamp", 0)
response_times.append(response_time)
average_response_time = (
sum(response_times) / len(response_times)
if len(response_times) > 0
else 0
)
message_list = get_message_list(messages_map, message_id)
message_count = len(message_list)
- last_assistant_message = next(
- (
- message
- for message in reversed(message_list)
- if message["role"] == "assistant"
- ),
- None,
- )
- annotation = message.get("annotation", {})
- model_id = (
- last_assistant_message.get("model", None)
- if last_assistant_message
- else None
- )
+ models = {}
+ for message in reversed(message_list):
+ if message.get("role") == "assistant":
+ model = message.get("model", None)
+ if model:
+ if model not in models:
+ models[model] = 0
+ models[model] += 1
chat_stats.append(
{
"id": chat.id,
- "model_id": model_id,
+ "models": models,
"message_count": message_count,
"history_models": history_models,
"history_message_count": history_message_count,
"history_user_message_count": len(history_user_messages),
"history_assistant_message_count": len(
history_assistant_messages
),
"average_response_time": average_response_time,
"average_user_message_content_length": average_user_message_content_length,
"average_assistant_message_content_length": average_assistant_message_content_length,
"tags": chat.meta.get("tags", []),
- "model_ids": chat.chat.get("models", []),
+ "last_message_at": message_list[-1].get("timestamp", None),
"updated_at": chat.updated_at,
"created_at": chat.created_at,
}
)
except Exception as e:
pass
- return chat_stats
+ return ChatUsageStatsListResponse(items=chat_stats, total=total)
except Exception as e:
log.exception(e)

View file

@@ -27,6 +27,7 @@ from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.vector.factory import VECTOR_DB_CLIENT
from open_webui.models.channels import Channels
from open_webui.models.users import Users
from open_webui.models.files import (
FileForm,
@@ -38,7 +39,6 @@ from open_webui.models.knowledge import Knowledges
from open_webui.models.groups import Groups
- from open_webui.routers.knowledge import get_knowledge, get_knowledge_list
from open_webui.routers.retrieval import ProcessFileForm, process_file
from open_webui.routers.audio import transcribe
@@ -47,7 +47,7 @@ from open_webui.storage.provider import Storage
from open_webui.utils.auth import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access
from open_webui.utils.misc import strict_match_mime_type
from pydantic import BaseModel
log = logging.getLogger(__name__)
@@ -91,6 +91,10 @@ def has_access_to_file(
if knowledge_base.id == knowledge_base_id:
return True
channels = Channels.get_channels_by_file_id_and_user_id(file_id, user.id)
if access_type == "read" and channels:
return True
return False
@@ -104,17 +108,9 @@ def process_uploaded_file(request, file, file_path, file_item, file_metadata, us
if file.content_type:
stt_supported_content_types = getattr(
request.app.state.config, "STT_SUPPORTED_CONTENT_TYPES", []
- )
- if any(
- fnmatch(file.content_type, content_type)
- for content_type in (
- stt_supported_content_types
- if stt_supported_content_types
- and any(t.strip() for t in stt_supported_content_types)
- else ["audio/*", "video/webm"]
- )
- ):
+ ) or ["audio/*", "video/webm"]
+ if strict_match_mime_type(stt_supported_content_types, file.content_type):
file_path = Storage.get_file(file_path)
result = transcribe(request, file_path, file_metadata, user)
@@ -138,6 +134,7 @@ def process_uploaded_file(request, file, file_path, file_item, file_metadata, us
f"File type {file.content_type} is not provided, but trying to process anyway"
)
process_file(request, ProcessFileForm(file_id=file_item.id), user=user)
except Exception as e:
log.error(f"Error processing file: {file_item.id}")
Files.update_file_data_by_id(
@@ -247,6 +244,13 @@ def upload_file_handler(
),
)
if "channel_id" in file_metadata:
channel = Channels.get_channel_by_id_and_user_id(
file_metadata["channel_id"], user.id
)
if channel:
Channels.add_file_to_channel_by_id(channel.id, file_item.id, user.id)
if process:
if background_tasks and process_in_background:
background_tasks.add_task(

View file

@@ -4,6 +4,7 @@ from fastapi import APIRouter, Depends, HTTPException, status, Request, Query
from fastapi.concurrency import run_in_threadpool
import logging
from open_webui.models.groups import Groups
from open_webui.models.knowledge import (
KnowledgeFileListResponse,
Knowledges,
@@ -40,41 +41,115 @@ router = APIRouter()
# getKnowledgeBases
############################
PAGE_ITEM_COUNT = 30
@router.get("/", response_model=list[KnowledgeUserResponse])
async def get_knowledge(user=Depends(get_verified_user)):
# Return knowledge bases with read access
knowledge_bases = []
if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
knowledge_bases = Knowledges.get_knowledge_bases()
else:
knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "read")
return [
KnowledgeUserResponse(
**knowledge_base.model_dump(),
files=Knowledges.get_file_metadatas_by_id(knowledge_base.id),
)
for knowledge_base in knowledge_bases
]
@router.get("/list", response_model=list[KnowledgeUserResponse]) class KnowledgeAccessResponse(KnowledgeUserResponse):
async def get_knowledge_list(user=Depends(get_verified_user)): write_access: Optional[bool] = False
# Return knowledge bases with write access
knowledge_bases = []
if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
knowledge_bases = Knowledges.get_knowledge_bases()
else:
knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "write")
return [
KnowledgeUserResponse( class KnowledgeAccessListResponse(BaseModel):
**knowledge_base.model_dump(), items: list[KnowledgeAccessResponse]
files=Knowledges.get_file_metadatas_by_id(knowledge_base.id), total: int
)
for knowledge_base in knowledge_bases
] @router.get("/", response_model=KnowledgeAccessListResponse)
async def get_knowledge_bases(page: Optional[int] = 1, user=Depends(get_verified_user)):
page = max(page, 1)
limit = PAGE_ITEM_COUNT
skip = (page - 1) * limit
filter = {}
if not user.role == "admin" or not BYPASS_ADMIN_ACCESS_CONTROL:
groups = Groups.get_groups_by_member_id(user.id)
if groups:
filter["group_ids"] = [group.id for group in groups]
filter["user_id"] = user.id
result = Knowledges.search_knowledge_bases(
user.id, filter=filter, skip=skip, limit=limit
)
return KnowledgeAccessListResponse(
items=[
KnowledgeAccessResponse(
**knowledge_base.model_dump(),
write_access=(
user.id == knowledge_base.user_id
or has_access(user.id, "write", knowledge_base.access_control)
),
)
for knowledge_base in result.items
],
total=result.total,
)
@router.get("/search", response_model=KnowledgeAccessListResponse)
async def search_knowledge_bases(
query: Optional[str] = None,
view_option: Optional[str] = None,
page: Optional[int] = 1,
user=Depends(get_verified_user),
):
page = max(page, 1)
limit = PAGE_ITEM_COUNT
skip = (page - 1) * limit
filter = {}
if query:
filter["query"] = query
if view_option:
filter["view_option"] = view_option
if not user.role == "admin" or not BYPASS_ADMIN_ACCESS_CONTROL:
groups = Groups.get_groups_by_member_id(user.id)
if groups:
filter["group_ids"] = [group.id for group in groups]
filter["user_id"] = user.id
result = Knowledges.search_knowledge_bases(
user.id, filter=filter, skip=skip, limit=limit
)
return KnowledgeAccessListResponse(
items=[
KnowledgeAccessResponse(
**knowledge_base.model_dump(),
write_access=(
user.id == knowledge_base.user_id
or has_access(user.id, "write", knowledge_base.access_control)
),
)
for knowledge_base in result.items
],
total=result.total,
)
@router.get("/search/files", response_model=KnowledgeFileListResponse)
async def search_knowledge_files(
query: Optional[str] = None,
page: Optional[int] = 1,
user=Depends(get_verified_user),
):
page = max(page, 1)
limit = PAGE_ITEM_COUNT
skip = (page - 1) * limit
filter = {}
if query:
filter["query"] = query
groups = Groups.get_groups_by_member_id(user.id)
if groups:
filter["group_ids"] = [group.id for group in groups]
filter["user_id"] = user.id
return Knowledges.search_knowledge_files(filter=filter, skip=skip, limit=limit)
############################
@@ -186,7 +261,8 @@ async def reindex_knowledge_files(request: Request, user=Depends(get_verified_us
class KnowledgeFilesResponse(KnowledgeResponse):
- files: list[FileMetadataResponse]
+ files: Optional[list[FileMetadataResponse]] = None
+ write_access: Optional[bool] = False
@router.get("/{id}", response_model=Optional[KnowledgeFilesResponse])
@@ -202,7 +278,10 @@ async def get_knowledge_by_id(id: str, user=Depends(get_verified_user)):
return KnowledgeFilesResponse(
**knowledge.model_dump(),
- files=Knowledges.get_file_metadatas_by_id(knowledge.id),
+ write_access=(
+ user.id == knowledge.user_id
+ or has_access(user.id, "write", knowledge.access_control)
+ ),
)
else:
raise HTTPException(
@@ -363,11 +442,6 @@ def add_file_to_knowledge_by_id(
detail=ERROR_MESSAGES.FILE_NOT_PROCESSED,
)
- # Add file to knowledge base
- Knowledges.add_file_to_knowledge_by_id(
- knowledge_id=id, file_id=form_data.file_id, user_id=user.id
- )
# Add content to the vector database
try:
process_file(
@@ -375,6 +449,11 @@ def add_file_to_knowledge_by_id(
ProcessFileForm(file_id=form_data.file_id, collection_name=id),
user=user,
)
# Add file to knowledge base
Knowledges.add_file_to_knowledge_by_id(
knowledge_id=id, file_id=form_data.file_id, user_id=user.id
)
except Exception as e:
log.debug(e)
raise HTTPException(

View file

@@ -0,0 +1,266 @@
import pytest
from unittest.mock import AsyncMock, patch, MagicMock
import aiohttp
from open_webui.utils.oauth import OAuthManager
from open_webui.config import AppConfig
class TestOAuthGoogleGroups:
"""Basic tests for Google OAuth Groups functionality"""
def setup_method(self):
"""Setup test fixtures"""
self.oauth_manager = OAuthManager(app=MagicMock())
@pytest.mark.asyncio
async def test_fetch_google_groups_success(self):
"""Test successful Google groups fetching with proper aiohttp mocking"""
# Mock response data from Google Cloud Identity API
mock_response_data = {
"memberships": [
{
"groupKey": {"id": "admin@company.com"},
"group": "groups/123",
"displayName": "Admin Group"
},
{
"groupKey": {"id": "users@company.com"},
"group": "groups/456",
"displayName": "Users Group"
}
]
}
# Create properly structured async mocks
mock_response = MagicMock()
mock_response.status = 200
mock_response.json = AsyncMock(return_value=mock_response_data)
# Mock the async context manager for session.get()
mock_get_context = MagicMock()
mock_get_context.__aenter__ = AsyncMock(return_value=mock_response)
mock_get_context.__aexit__ = AsyncMock(return_value=None)
# Mock the session
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=mock_get_context)
# Mock the async context manager for ClientSession
mock_session_context = MagicMock()
mock_session_context.__aenter__ = AsyncMock(return_value=mock_session)
mock_session_context.__aexit__ = AsyncMock(return_value=None)
with patch("aiohttp.ClientSession", return_value=mock_session_context):
groups = await self.oauth_manager._fetch_google_groups_via_cloud_identity(
access_token="test_token",
user_email="user@company.com"
)
# Verify the results
assert groups == ["admin@company.com", "users@company.com"]
# Verify the HTTP call was made correctly
mock_session.get.assert_called_once()
call_args = mock_session.get.call_args
# Check the URL contains the user email (URL encoded)
url_arg = call_args[0][0] # First positional argument
assert "user%40company.com" in url_arg # @ is encoded as %40
assert "searchTransitiveGroups" in url_arg
# Check headers contain the bearer token
headers_arg = call_args[1]["headers"] # headers keyword argument
assert headers_arg["Authorization"] == "Bearer test_token"
assert headers_arg["Content-Type"] == "application/json"
@pytest.mark.asyncio
async def test_fetch_google_groups_api_error(self):
"""Test handling of API errors when fetching groups"""
# Mock failed response
mock_response = MagicMock()
mock_response.status = 403
mock_response.text = AsyncMock(return_value="Permission denied")
# Mock the async context manager for session.get()
mock_get_context = MagicMock()
mock_get_context.__aenter__ = AsyncMock(return_value=mock_response)
mock_get_context.__aexit__ = AsyncMock(return_value=None)
# Mock the session
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=mock_get_context)
# Mock the async context manager for ClientSession
mock_session_context = MagicMock()
mock_session_context.__aenter__ = AsyncMock(return_value=mock_session)
mock_session_context.__aexit__ = AsyncMock(return_value=None)
with patch("aiohttp.ClientSession", return_value=mock_session_context):
groups = await self.oauth_manager._fetch_google_groups_via_cloud_identity(
access_token="test_token",
user_email="user@company.com"
)
# Should return empty list on error
assert groups == []
@pytest.mark.asyncio
async def test_fetch_google_groups_network_error(self):
"""Test handling of network errors when fetching groups"""
# Mock the session that raises an exception when get() is called
mock_session = MagicMock()
mock_session.get.side_effect = aiohttp.ClientError("Network error")
# Mock the async context manager for ClientSession
mock_session_context = MagicMock()
mock_session_context.__aenter__ = AsyncMock(return_value=mock_session)
mock_session_context.__aexit__ = AsyncMock(return_value=None)
with patch("aiohttp.ClientSession", return_value=mock_session_context):
groups = await self.oauth_manager._fetch_google_groups_via_cloud_identity(
access_token="test_token",
user_email="user@company.com"
)
# Should return empty list on network error
assert groups == []
@pytest.mark.asyncio
async def test_get_user_role_with_google_groups(self):
"""Test role assignment using Google groups"""
# Mock configuration
mock_config = MagicMock()
mock_config.ENABLE_OAUTH_ROLE_MANAGEMENT = True
mock_config.OAUTH_ROLES_CLAIM = "groups"
mock_config.OAUTH_ALLOWED_ROLES = ["users@company.com"]
mock_config.OAUTH_ADMIN_ROLES = ["admin@company.com"]
mock_config.DEFAULT_USER_ROLE = "pending"
mock_config.OAUTH_EMAIL_CLAIM = "email"
user_data = {"email": "user@company.com"}
# Mock Google OAuth scope check and Users class
with patch("open_webui.utils.oauth.auth_manager_config", mock_config), \
patch("open_webui.utils.oauth.GOOGLE_OAUTH_SCOPE") as mock_scope, \
patch("open_webui.utils.oauth.Users") as mock_users, \
patch.object(self.oauth_manager, "_fetch_google_groups_via_cloud_identity") as mock_fetch:
mock_scope.value = "openid email profile https://www.googleapis.com/auth/cloud-identity.groups.readonly"
mock_fetch.return_value = ["admin@company.com", "users@company.com"]
mock_users.get_num_users.return_value = 5 # Not first user
role = await self.oauth_manager.get_user_role(
user=None,
user_data=user_data,
provider="google",
access_token="test_token"
)
# Should assign admin role since user is in admin group
assert role == "admin"
mock_fetch.assert_called_once_with("test_token", "user@company.com")
@pytest.mark.asyncio
async def test_get_user_role_fallback_to_claims(self):
"""Test fallback to traditional claims when Google groups fail"""
mock_config = MagicMock()
mock_config.ENABLE_OAUTH_ROLE_MANAGEMENT = True
mock_config.OAUTH_ROLES_CLAIM = "groups"
mock_config.OAUTH_ALLOWED_ROLES = ["users"]
mock_config.OAUTH_ADMIN_ROLES = ["admin"]
mock_config.DEFAULT_USER_ROLE = "pending"
mock_config.OAUTH_EMAIL_CLAIM = "email"
user_data = {
"email": "user@company.com",
"groups": ["users"]
}
with patch("open_webui.utils.oauth.auth_manager_config", mock_config), \
patch("open_webui.utils.oauth.GOOGLE_OAUTH_SCOPE") as mock_scope, \
patch("open_webui.utils.oauth.Users") as mock_users, \
patch.object(self.oauth_manager, "_fetch_google_groups_via_cloud_identity") as mock_fetch:
# Mock scope without Cloud Identity
mock_scope.value = "openid email profile"
mock_users.get_num_users.return_value = 5 # Not first user
role = await self.oauth_manager.get_user_role(
user=None,
user_data=user_data,
provider="google",
access_token="test_token"
)
# Should use traditional claims since Cloud Identity scope not present
assert role == "user"
mock_fetch.assert_not_called()
@pytest.mark.asyncio
async def test_get_user_role_non_google_provider(self):
"""Test that non-Google providers use traditional claims"""
mock_config = MagicMock()
mock_config.ENABLE_OAUTH_ROLE_MANAGEMENT = True
mock_config.OAUTH_ROLES_CLAIM = "roles"
mock_config.OAUTH_ALLOWED_ROLES = ["user"]
mock_config.OAUTH_ADMIN_ROLES = ["admin"]
mock_config.DEFAULT_USER_ROLE = "pending"
user_data = {"roles": ["user"]}
with patch("open_webui.utils.oauth.auth_manager_config", mock_config), \
patch("open_webui.utils.oauth.Users") as mock_users, \
patch.object(self.oauth_manager, "_fetch_google_groups_via_cloud_identity") as mock_fetch:
mock_users.get_num_users.return_value = 5 # Not first user
role = await self.oauth_manager.get_user_role(
user=None,
user_data=user_data,
provider="microsoft",
access_token="test_token"
)
# Should use traditional claims for non-Google providers
assert role == "user"
mock_fetch.assert_not_called()
@pytest.mark.asyncio
async def test_update_user_groups_with_google_groups(self):
"""Test group management using Google groups from user_data"""
mock_config = MagicMock()
mock_config.OAUTH_GROUPS_CLAIM = "groups"
mock_config.OAUTH_BLOCKED_GROUPS = "[]"
mock_config.ENABLE_OAUTH_GROUP_CREATION = False
# Mock user with Google groups data
mock_user = MagicMock()
mock_user.id = "user123"
user_data = {
"google_groups": ["developers@company.com", "employees@company.com"]
}
# Mock existing groups and user groups
mock_existing_group = MagicMock()
mock_existing_group.name = "developers@company.com"
mock_existing_group.id = "group1"
mock_existing_group.user_ids = []
mock_existing_group.permissions = {"read": True}
mock_existing_group.description = "Developers group"
with patch("open_webui.utils.oauth.auth_manager_config", mock_config), \
patch("open_webui.utils.oauth.Groups") as mock_groups:
mock_groups.get_groups_by_member_id.return_value = []
mock_groups.get_groups.return_value = [mock_existing_group]
await self.oauth_manager.update_user_groups(
user=mock_user,
user_data=user_data,
default_permissions={"read": True}
)
# Should use Google groups instead of traditional claims
mock_groups.get_groups_by_member_id.assert_called_once_with("user123")
mock_groups.update_group_by_id.assert_called()


@ -0,0 +1,130 @@
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy import or_, func, select, and_, text, cast
def has_permission(db, DocumentModel, query, filter: dict, permission: str = "read"):
group_ids = filter.get("group_ids", [])
user_id = filter.get("user_id")
dialect_name = db.bind.dialect.name
conditions = []
# Handle read_only permission separately
if permission == "read_only":
# For read_only, we want items where:
# 1. User has explicit read permission (via groups or user-level)
# 2. BUT does NOT have write permission
# 3. Public items are NOT considered read_only
read_conditions = []
# Group-level read permission
if group_ids:
group_read_conditions = []
for gid in group_ids:
if dialect_name == "sqlite":
group_read_conditions.append(
DocumentModel.access_control["read"]["group_ids"].contains(
[gid]
)
)
elif dialect_name == "postgresql":
group_read_conditions.append(
cast(
DocumentModel.access_control["read"]["group_ids"],
JSONB,
).contains([gid])
)
if group_read_conditions:
read_conditions.append(or_(*group_read_conditions))
# Combine read conditions
if read_conditions:
has_read = or_(*read_conditions)
else:
# If no read conditions, return empty result
return query.filter(False)
# Now exclude items where user has write permission
write_exclusions = []
# Exclude items owned by user (they have implicit write)
if user_id:
write_exclusions.append(DocumentModel.user_id != user_id)
# Exclude items where user has explicit write permission via groups
if group_ids:
group_write_conditions = []
for gid in group_ids:
if dialect_name == "sqlite":
group_write_conditions.append(
DocumentModel.access_control["write"]["group_ids"].contains(
[gid]
)
)
elif dialect_name == "postgresql":
group_write_conditions.append(
cast(
DocumentModel.access_control["write"]["group_ids"],
JSONB,
).contains([gid])
)
if group_write_conditions:
# User should NOT have write permission
write_exclusions.append(~or_(*group_write_conditions))
# Exclude public items (items without access_control)
write_exclusions.append(DocumentModel.access_control.isnot(None))
write_exclusions.append(cast(DocumentModel.access_control, String) != "null")
# Combine: has read AND does not have write AND not public
if write_exclusions:
query = query.filter(and_(has_read, *write_exclusions))
else:
query = query.filter(has_read)
return query
# Original logic for other permissions (read, write, etc.)
# Public access conditions
if group_ids or user_id:
conditions.extend(
[
DocumentModel.access_control.is_(None),
cast(DocumentModel.access_control, String) == "null",
]
)
# User-level permission (owner has all permissions)
if user_id:
conditions.append(DocumentModel.user_id == user_id)
# Group-level permission
if group_ids:
group_conditions = []
for gid in group_ids:
if dialect_name == "sqlite":
group_conditions.append(
DocumentModel.access_control[permission]["group_ids"].contains(
[gid]
)
)
elif dialect_name == "postgresql":
group_conditions.append(
cast(
DocumentModel.access_control[permission]["group_ids"],
JSONB,
).contains([gid])
)
conditions.append(or_(*group_conditions))
if conditions:
query = query.filter(or_(*conditions))
return query
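
A minimal usage sketch for the helper above (illustrative only; the session, the `Knowledge` model, and the filter values are assumptions rather than part of this change):

```python
# Hypothetical example: restrict a query to knowledge bases the user can read
# through group membership but cannot write to (the "read_only" semantics above).
from sqlalchemy.orm import Session


def list_read_only_knowledge(db: Session, Knowledge, user_id: str, group_ids: list[str]):
    query = db.query(Knowledge)
    query = has_permission(
        db,
        Knowledge,
        query,
        filter={"user_id": user_id, "group_ids": group_ids},
        permission="read_only",
    )
    return query.all()
```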


@ -9,6 +9,7 @@ from pathlib import Path
 from typing import Callable, Optional, Sequence, Union
 import json
 import aiohttp
+import mimeparse
 import collections.abc
@@ -577,6 +578,37 @@ def throttle(interval: float = 10.0):
     return decorator


+def strict_match_mime_type(supported: list[str] | str, header: str) -> Optional[str]:
+    """
+    Strictly match the mime type with the supported mime types.
+
+    :param supported: The supported mime types.
+    :param header: The header to match.
+    :return: The matched mime type or None if no match is found.
+    """
+    try:
+        if isinstance(supported, str):
+            supported = supported.split(",")
+        supported = [s for s in supported if s.strip() and "/" in s]
+
+        match = mimeparse.best_match(supported, header)
+        if not match:
+            return None
+
+        _, _, match_params = mimeparse.parse_mime_type(match)
+        _, _, header_params = mimeparse.parse_mime_type(header)
+        for k, v in match_params.items():
+            if header_params.get(k) != v:
+                return None
+
+        return match
+    except Exception as e:
+        log.exception(f"Failed to match mime type {header}: {e}")
+        return None
+
+
 def extract_urls(text: str) -> list[str]:
     # Regex pattern to match URLs
     url_pattern = re.compile(
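
For reference, a small usage sketch of the helper added above (not part of the diff; the import path and MIME values are assumptions for illustration):

```python
# Illustrative only: strict matching also compares MIME parameters.
from open_webui.utils.misc import strict_match_mime_type

# No parameters on the supported side, so a plain type/subtype match wins.
strict_match_mime_type(["audio/webm", "audio/mp4"], "audio/webm;codecs=opus")
# -> "audio/webm"

# The supported entry requires codecs=opus, but the header lacks it, so it is rejected.
strict_match_mime_type("audio/webm;codecs=opus", "audio/webm")
# -> None
```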


@ -7,6 +7,7 @@ import sys
 import urllib
 import uuid
 import json
+from urllib.parse import quote
 from datetime import datetime, timedelta
 import re
@@ -15,6 +16,7 @@ import time
 import secrets
 from cryptography.fernet import Fernet
 from typing import Literal
+from urllib.parse import quote

 import aiohttp
 from authlib.integrations.starlette_client import OAuth
@@ -58,6 +60,7 @@ from open_webui.config import (
     OAUTH_AUDIENCE,
     WEBHOOK_URL,
     JWT_EXPIRES_IN,
+    GOOGLE_OAUTH_SCOPE,
     AppConfig,
 )
 from open_webui.constants import ERROR_MESSAGES, WEBHOOK_MESSAGES
@@ -1001,7 +1004,7 @@ class OAuthManager:
             log.error(f"Exception during token refresh for provider {provider}: {e}")
             return None

-    def get_user_role(self, user, user_data):
+    async def get_user_role(self, user, user_data, provider=None, access_token=None):
         user_count = Users.get_num_users()
         if user and user_count == 1:
             # If the user is the only user, assign the role "admin" - actually repairs role for single user on login
@ -1021,12 +1024,45 @@ class OAuthManager:
# Default/fallback role if no matching roles are found # Default/fallback role if no matching roles are found
role = auth_manager_config.DEFAULT_USER_ROLE role = auth_manager_config.DEFAULT_USER_ROLE
# Next block extracts the roles from the user data, accepting nested claims of any depth # Check if this is Google OAuth with Cloud Identity scope
if oauth_claim and oauth_allowed_roles and oauth_admin_roles: if (
claim_data = user_data provider == "google"
nested_claims = oauth_claim.split(".") and access_token
for nested_claim in nested_claims: and "https://www.googleapis.com/auth/cloud-identity.groups.readonly"
claim_data = claim_data.get(nested_claim, {}) in GOOGLE_OAUTH_SCOPE.value
):
log.debug(
"Google OAuth with Cloud Identity scope detected - fetching groups via API"
)
user_email = user_data.get(auth_manager_config.OAUTH_EMAIL_CLAIM, "")
if user_email:
try:
google_groups = (
await self._fetch_google_groups_via_cloud_identity(
access_token, user_email
)
)
# Store groups in user_data for potential group management later
if "google_groups" not in user_data:
user_data["google_groups"] = google_groups
# Use Google groups as oauth_roles for role determination
oauth_roles = google_groups
log.debug(f"Using Google groups as roles: {oauth_roles}")
except Exception as e:
log.error(f"Failed to fetch Google groups: {e}")
# Fall back to default behavior with claims
oauth_roles = []
# If not using Google groups or Google groups fetch failed, use traditional claims method
if not oauth_roles:
# Next block extracts the roles from the user data, accepting nested claims of any depth
if oauth_claim and oauth_allowed_roles and oauth_admin_roles:
claim_data = user_data
nested_claims = oauth_claim.split(".")
for nested_claim in nested_claims:
claim_data = claim_data.get(nested_claim, {})
# Try flat claim structure as alternative # Try flat claim structure as alternative
if not claim_data: if not claim_data:
@ -1045,6 +1081,47 @@ class OAuthManager:
elif isinstance(claim_data, int): elif isinstance(claim_data, int):
oauth_roles = [str(claim_data)] oauth_roles = [str(claim_data)]
# Check if this is Google OAuth with Cloud Identity scope
if (
provider == "google"
and access_token
and "https://www.googleapis.com/auth/cloud-identity.groups.readonly"
in GOOGLE_OAUTH_SCOPE.value
):
log.debug(
"Google OAuth with Cloud Identity scope detected - fetching groups via API"
)
user_email = user_data.get(auth_manager_config.OAUTH_EMAIL_CLAIM, "")
if user_email:
try:
google_groups = (
await self._fetch_google_groups_via_cloud_identity(
access_token, user_email
)
)
# Store groups in user_data for potential group management later
if "google_groups" not in user_data:
user_data["google_groups"] = google_groups
# Use Google groups as oauth_roles for role determination
oauth_roles = google_groups
log.debug(f"Using Google groups as roles: {oauth_roles}")
except Exception as e:
log.error(f"Failed to fetch Google groups: {e}")
# Fall back to default behavior with claims
oauth_roles = []
# If not using Google groups or Google groups fetch failed, use traditional claims method
if not oauth_roles:
# Next block extracts the roles from the user data, accepting nested claims of any depth
if oauth_claim and oauth_allowed_roles and oauth_admin_roles:
claim_data = user_data
nested_claims = oauth_claim.split(".")
for nested_claim in nested_claims:
claim_data = claim_data.get(nested_claim, {})
oauth_roles = claim_data if isinstance(claim_data, list) else []
log.debug(f"Oauth Roles claim: {oauth_claim}") log.debug(f"Oauth Roles claim: {oauth_claim}")
log.debug(f"User roles from oauth: {oauth_roles}") log.debug(f"User roles from oauth: {oauth_roles}")
log.debug(f"Accepted user roles: {oauth_allowed_roles}") log.debug(f"Accepted user roles: {oauth_allowed_roles}")
@ -1062,7 +1139,9 @@ class OAuthManager:
for admin_role in oauth_admin_roles: for admin_role in oauth_admin_roles:
# If the user has any of the admin roles, assign the role "admin" # If the user has any of the admin roles, assign the role "admin"
if admin_role in oauth_roles: if admin_role in oauth_roles:
log.debug("Assigned user the admin role") log.debug(
f"Assigned user the admin role based on group: {admin_role}"
)
role = "admin" role = "admin"
break break
else: else:
@ -1075,7 +1154,88 @@ class OAuthManager:
return role return role
def update_user_groups(self, user, user_data, default_permissions): async def _fetch_google_groups_via_cloud_identity(
self, access_token: str, user_email: str
) -> list[str]:
"""
Fetch Google Workspace groups for a user via Cloud Identity API.
Args:
access_token: OAuth access token with cloud-identity.groups.readonly scope
user_email: User's email address
Returns:
List of group email addresses the user belongs to
"""
groups = []
base_url = "https://content-cloudidentity.googleapis.com/v1/groups/-/memberships:searchTransitiveGroups"
# Create the query string with proper URL encoding
query_string = f"member_key_id == '{user_email}' && 'cloudidentity.googleapis.com/groups.security' in labels"
encoded_query = quote(query_string)
headers = {
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json",
}
page_token = ""
try:
async with aiohttp.ClientSession(trust_env=True) as session:
while True:
# Build URL with query parameter
url = f"{base_url}?query={encoded_query}"
# Add page token to URL if present
if page_token:
url += f"&pageToken={quote(page_token)}"
log.debug("Fetching Google groups via Cloud Identity API")
async with session.get(
url, headers=headers, ssl=AIOHTTP_CLIENT_SESSION_SSL
) as resp:
if resp.status == 200:
data = await resp.json()
# Extract group emails from memberships
memberships = data.get("memberships", [])
log.debug(f"Found {len(memberships)} memberships")
for membership in memberships:
group_key = membership.get("groupKey", {})
group_email = group_key.get("id", "")
if group_email:
groups.append(group_email)
log.debug(f"Found group membership: {group_email}")
# Check for next page
page_token = data.get("nextPageToken", "")
if not page_token:
break
else:
error_text = await resp.text()
log.error(
f"Failed to fetch Google groups (status {resp.status})"
)
# Log error details without sensitive information
try:
error_json = json.loads(error_text)
if "error" in error_json:
log.error(f"API error: {error_json['error'].get('message', 'Unknown error')}")
except json.JSONDecodeError:
log.error("Error response contains non-JSON data")
break
except Exception as e:
log.error(f"Error fetching Google groups via Cloud Identity API: {e}")
log.info(f"Retrieved {len(groups)} Google groups for user {user_email}")
return groups
async def update_user_groups(
self, user, user_data, default_permissions, provider=None, access_token=None
):
log.debug("Running OAUTH Group management") log.debug("Running OAUTH Group management")
oauth_claim = auth_manager_config.OAUTH_GROUPS_CLAIM oauth_claim = auth_manager_config.OAUTH_GROUPS_CLAIM
@ -1086,23 +1246,31 @@ class OAuthManager:
blocked_groups = [] blocked_groups = []
user_oauth_groups = [] user_oauth_groups = []
# Nested claim search for groups claim
if oauth_claim:
claim_data = user_data
nested_claims = oauth_claim.split(".")
for nested_claim in nested_claims:
claim_data = claim_data.get(nested_claim, {})
if isinstance(claim_data, list): # Check if Google groups were fetched via Cloud Identity API
user_oauth_groups = claim_data if "google_groups" in user_data:
elif isinstance(claim_data, str): log.debug(
# Split by the configured separator if present "Using Google groups from Cloud Identity API for group management"
if OAUTH_GROUPS_SEPARATOR in claim_data: )
user_oauth_groups = claim_data.split(OAUTH_GROUPS_SEPARATOR) user_oauth_groups = user_data["google_groups"]
else:
# Nested claim search for groups claim (traditional method)
if oauth_claim:
claim_data = user_data
nested_claims = oauth_claim.split(".")
for nested_claim in nested_claims:
claim_data = claim_data.get(nested_claim, {})
if isinstance(claim_data, list):
user_oauth_groups = claim_data
elif isinstance(claim_data, str):
# Split by the configured separator if present
if OAUTH_GROUPS_SEPARATOR in claim_data:
user_oauth_groups = claim_data.split(OAUTH_GROUPS_SEPARATOR)
else:
user_oauth_groups = [claim_data]
else: else:
user_oauth_groups = [claim_data] user_oauth_groups = []
else:
user_oauth_groups = []
user_current_groups: list[GroupModel] = Groups.get_groups_by_member_id(user.id) user_current_groups: list[GroupModel] = Groups.get_groups_by_member_id(user.id)
all_available_groups: list[GroupModel] = Groups.get_all_groups() all_available_groups: list[GroupModel] = Groups.get_all_groups()
@ -1272,7 +1440,7 @@ class OAuthManager:
client = self.get_client(provider) client = self.get_client(provider)
if client is None: if client is None:
raise HTTPException(404) raise HTTPException(404)
kwargs = {} kwargs = {}
if (auth_manager_config.OAUTH_AUDIENCE): if (auth_manager_config.OAUTH_AUDIENCE):
kwargs["audience"] = auth_manager_config.OAUTH_AUDIENCE kwargs["audience"] = auth_manager_config.OAUTH_AUDIENCE
@ -1307,9 +1475,8 @@ class OAuthManager:
exc_info=True, exc_info=True,
) )
raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED) raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
# Try to get userinfo from the token first, some providers include it there
user_data: UserInfo = token.get("userinfo") user_data: UserInfo = token.get("userinfo")
if ( if (
(not user_data) (not user_data)
or (auth_manager_config.OAUTH_EMAIL_CLAIM not in user_data) or (auth_manager_config.OAUTH_EMAIL_CLAIM not in user_data)
@ -1395,8 +1562,7 @@ class OAuthManager:
# If allowed domains are configured, check if the email domain is in the list # If allowed domains are configured, check if the email domain is in the list
if ( if (
"*" not in auth_manager_config.OAUTH_ALLOWED_DOMAINS "*" not in auth_manager_config.OAUTH_ALLOWED_DOMAINS
and email.split("@")[-1] and email.split("@")[-1] not in auth_manager_config.OAUTH_ALLOWED_DOMAINS
not in auth_manager_config.OAUTH_ALLOWED_DOMAINS
): ):
log.warning( log.warning(
f"OAuth callback failed, e-mail domain is not in the list of allowed domains: {user_data}" f"OAuth callback failed, e-mail domain is not in the list of allowed domains: {user_data}"
@ -1404,7 +1570,8 @@ class OAuthManager:
raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED) raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
# Check if the user exists # Check if the user exists
user = Users.get_user_by_oauth_sub(provider, sub) user = Users.get_user_by_oauth_sub(provider_sub)
if not user: if not user:
# If the user does not exist, check if merging is enabled # If the user does not exist, check if merging is enabled
if auth_manager_config.OAUTH_MERGE_ACCOUNTS_BY_EMAIL: if auth_manager_config.OAUTH_MERGE_ACCOUNTS_BY_EMAIL:
@ -1415,7 +1582,9 @@ class OAuthManager:
Users.update_user_oauth_by_id(user.id, provider, sub) Users.update_user_oauth_by_id(user.id, provider, sub)
if user: if user:
determined_role = self.get_user_role(user, user_data) determined_role = await self.get_user_role(
user, user_data, provider, token.get("access_token")
)
if user.role != determined_role: if user.role != determined_role:
Users.update_user_role_by_id(user.id, determined_role) Users.update_user_role_by_id(user.id, determined_role)
# Update the user object in memory as well, # Update the user object in memory as well,
@ -1426,8 +1595,7 @@ class OAuthManager:
picture_claim = auth_manager_config.OAUTH_PICTURE_CLAIM picture_claim = auth_manager_config.OAUTH_PICTURE_CLAIM
if picture_claim: if picture_claim:
new_picture_url = user_data.get( new_picture_url = user_data.get(
picture_claim, picture_claim, OAUTH_PROVIDERS[provider].get("picture_url", "")
OAUTH_PROVIDERS[provider].get("picture_url", ""),
) )
processed_picture_url = await self._process_picture_url( processed_picture_url = await self._process_picture_url(
new_picture_url, token.get("access_token") new_picture_url, token.get("access_token")
@ -1437,7 +1605,7 @@ class OAuthManager:
user.id, processed_picture_url user.id, processed_picture_url
) )
log.debug(f"Updated profile picture for user {user.email}") log.debug(f"Updated profile picture for user {user.email}")
else: if not user:
# If the user does not exist, check if signups are enabled # If the user does not exist, check if signups are enabled
if auth_manager_config.ENABLE_OAUTH_SIGNUP: if auth_manager_config.ENABLE_OAUTH_SIGNUP:
# Check if an existing user with the same email already exists # Check if an existing user with the same email already exists
@ -1448,14 +1616,14 @@ class OAuthManager:
picture_claim = auth_manager_config.OAUTH_PICTURE_CLAIM picture_claim = auth_manager_config.OAUTH_PICTURE_CLAIM
if picture_claim: if picture_claim:
picture_url = user_data.get( picture_url = user_data.get(
picture_claim, picture_claim, OAUTH_PROVIDERS[provider].get("picture_url", "")
OAUTH_PROVIDERS[provider].get("picture_url", ""),
) )
picture_url = await self._process_picture_url( picture_url = await self._process_picture_url(
picture_url, token.get("access_token") picture_url, token.get("access_token")
) )
else: else:
picture_url = "/user.png" picture_url = "/user.png"
username_claim = auth_manager_config.OAUTH_USERNAME_CLAIM username_claim = auth_manager_config.OAUTH_USERNAME_CLAIM
name = user_data.get(username_claim) name = user_data.get(username_claim)
@ -1463,34 +1631,37 @@ class OAuthManager:
log.warning("Username claim is missing, using email as name") log.warning("Username claim is missing, using email as name")
name = email name = email
user = Auths.insert_new_auth( role = await self.get_user_role(
email=email, None, user_data, provider, token.get("access_token")
password=get_password_hash( )
str(uuid.uuid4())
), # Random password, not used
name=name,
profile_image_url=picture_url,
role=self.get_user_role(None, user_data),
oauth=oauth_data,
)
if auth_manager_config.WEBHOOK_URL: user = Auths.insert_new_auth(
await post_webhook( email=email,
WEBUI_NAME, password=get_password_hash(
auth_manager_config.WEBHOOK_URL, str(uuid.uuid4())
WEBHOOK_MESSAGES.USER_SIGNUP(user.name), ), # Random password, not used
{ name=name,
"action": "signup", profile_image_url=picture_url,
"message": WEBHOOK_MESSAGES.USER_SIGNUP(user.name), role=role,
"user": user.model_dump_json(exclude_none=True), oauth_sub=provider_sub,
}, )
)
if auth_manager_config.WEBHOOK_URL:
await post_webhook(
WEBUI_NAME,
auth_manager_config.WEBHOOK_URL,
WEBHOOK_MESSAGES.USER_SIGNUP(user.name),
{
"action": "signup",
"message": WEBHOOK_MESSAGES.USER_SIGNUP(user.name),
"user": user.model_dump_json(exclude_none=True),
},
)
apply_default_group_assignment( apply_default_group_assignment(
request.app.state.config.DEFAULT_GROUP_ID, request.app.state.config.DEFAULT_GROUP_ID,
user.id, user.id,
) )
else: else:
raise HTTPException( raise HTTPException(
status.HTTP_403_FORBIDDEN, status.HTTP_403_FORBIDDEN,
@ -1501,14 +1672,14 @@ class OAuthManager:
data={"id": user.id}, data={"id": user.id},
expires_delta=parse_duration(auth_manager_config.JWT_EXPIRES_IN), expires_delta=parse_duration(auth_manager_config.JWT_EXPIRES_IN),
) )
if (
auth_manager_config.ENABLE_OAUTH_GROUP_MANAGEMENT if auth_manager_config.ENABLE_OAUTH_GROUP_MANAGEMENT and user.role != "admin":
and user.role != "admin" await self.update_user_groups(
):
self.update_user_groups(
user=user, user=user,
user_data=user_data, user_data=user_data,
default_permissions=request.app.state.config.USER_PERMISSIONS, default_permissions=request.app.state.config.USER_PERMISSIONS,
provider=provider,
access_token=token.get("access_token"),
) )
except Exception as e: except Exception as e:


@ -20,6 +20,7 @@ aiofiles
 starlette-compress==1.6.1
 httpx[socks,http2,zstd,cli,brotli]==0.28.1
 starsessions[redis]==2.2.1
+python-mimeparse==2.0.0
 sqlalchemy==2.0.44
 alembic==1.17.2


@ -0,0 +1,95 @@
# Google OAuth with Cloud Identity Groups Support
This example demonstrates how to configure Open WebUI to use Google OAuth with the Cloud Identity API for group-based role management.
## Configuration
### Environment Variables
```bash
# Google OAuth Configuration
GOOGLE_CLIENT_ID="your-google-client-id.apps.googleusercontent.com"
GOOGLE_CLIENT_SECRET="your-google-client-secret"
# IMPORTANT: Include the Cloud Identity Groups scope
GOOGLE_OAUTH_SCOPE="openid email profile https://www.googleapis.com/auth/cloud-identity.groups.readonly"
# Enable OAuth features
ENABLE_OAUTH_SIGNUP=true
ENABLE_OAUTH_ROLE_MANAGEMENT=true
ENABLE_OAUTH_GROUP_MANAGEMENT=true
# Configure admin roles using Google group emails
OAUTH_ADMIN_ROLES="admin@yourcompany.com,superadmin@yourcompany.com"
OAUTH_ALLOWED_ROLES="users@yourcompany.com,employees@yourcompany.com"
# Optional: Configure group creation
ENABLE_OAUTH_GROUP_CREATION=true
```
## How It Works
1. **Scope Detection**: When a user logs in with Google OAuth, the system checks if the `https://www.googleapis.com/auth/cloud-identity.groups.readonly` scope is present in `GOOGLE_OAUTH_SCOPE`.
2. **Groups Fetching**: If the scope is present, the system uses the Google Cloud Identity API to fetch all groups the user belongs to, instead of relying on claims in the OAuth token.
3. **Role Assignment** (see the sketch after this list):
- If the user belongs to any group listed in `OAUTH_ADMIN_ROLES`, they get admin privileges
- If the user belongs to any group listed in `OAUTH_ALLOWED_ROLES`, they get user privileges
- Default role is applied if no matching groups are found
4. **Group Management**: If `ENABLE_OAUTH_GROUP_MANAGEMENT` is enabled, Open WebUI groups are synchronized with Google Workspace groups.
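
The role decision in step 3 boils down to a simple membership check. A minimal sketch (illustrative only; the function name and defaults are assumptions, not part of Open WebUI's API):

```python
# Illustrative sketch of the role mapping described above.
def resolve_role(
    user_groups: list[str],
    admin_roles: list[str],
    allowed_roles: list[str],
    default_role: str = "pending",
) -> str:
    if any(group in admin_roles for group in user_groups):
        return "admin"  # member of a group listed in OAUTH_ADMIN_ROLES
    if any(group in allowed_roles for group in user_groups):
        return "user"  # member of a group listed in OAUTH_ALLOWED_ROLES
    return default_role  # DEFAULT_USER_ROLE applies otherwise


resolve_role(
    ["developers@yourcompany.com", "users@yourcompany.com"],
    admin_roles=["admin@yourcompany.com", "superadmin@yourcompany.com"],
    allowed_roles=["users@yourcompany.com", "employees@yourcompany.com"],
)
# -> "user"
```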
## Google Cloud Console Setup
1. **Enable APIs**:
- Cloud Identity API
- Cloud Identity Groups API
2. **OAuth 2.0 Setup**:
- Create OAuth 2.0 credentials
- Add authorized redirect URIs
- Configure consent screen
3. **Required Scopes**:
```
openid
email
profile
https://www.googleapis.com/auth/cloud-identity.groups.readonly
```
## Example Groups Structure
```
Your Google Workspace:
├── admin@yourcompany.com (Admin group)
├── superadmin@yourcompany.com (Super admin group)
├── users@yourcompany.com (Regular users)
├── employees@yourcompany.com (All employees)
└── developers@yourcompany.com (Development team)
```
## Fallback Behavior
If the Cloud Identity scope is not present or the API call fails, the system falls back to the traditional method of reading roles from OAuth token claims.
## Security Considerations
- The Cloud Identity API requires proper authentication and authorization
- Only users with appropriate permissions can access group membership information
- Groups are fetched server-side, not exposed to the client
- Access tokens are handled securely and not logged
## Troubleshooting
1. **Groups not detected**: Ensure the Cloud Identity API is enabled and the OAuth client has the required scope
2. **Permission denied**: Verify the service account or OAuth client has Cloud Identity API access
3. **No admin role**: Check that the user belongs to a group listed in `OAUTH_ADMIN_ROLES`
## Benefits Over Token Claims
- **Real-time**: Groups are fetched fresh on each login
- **Complete**: Gets all group memberships, including nested groups
- **Accurate**: No dependency on ID token size limits
- **Flexible**: Can handle complex group hierarchies in Google Workspace


@@ -28,6 +28,7 @@ dependencies = [
     "starlette-compress==1.6.1",
     "httpx[socks,http2,zstd,cli,brotli]==0.28.1",
     "starsessions[redis]==2.2.1",
+    "python-mimeparse==2.0.0",
     "sqlalchemy==2.0.44",
     "alembic==1.17.2",


@ -38,10 +38,13 @@ export const createNewKnowledge = async (
 	return res;
 };

-export const getKnowledgeBases = async (token: string = '') => {
+export const getKnowledgeBases = async (token: string = '', page: number | null = null) => {
 	let error = null;

-	const res = await fetch(`${WEBUI_API_BASE_URL}/knowledge/`, {
+	const searchParams = new URLSearchParams();
+	if (page) searchParams.append('page', page.toString());
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/knowledge/?${searchParams.toString()}`, {
 		method: 'GET',
 		headers: {
 			Accept: 'application/json',
@@ -69,10 +72,20 @@ export const getKnowledgeBases = async (token: string = '') => {
 	return res;
 };

-export const getKnowledgeBaseList = async (token: string = '') => {
+export const searchKnowledgeBases = async (
+	token: string = '',
+	query: string | null = null,
+	viewOption: string | null = null,
+	page: number | null = null
+) => {
 	let error = null;

-	const res = await fetch(`${WEBUI_API_BASE_URL}/knowledge/list`, {
+	const searchParams = new URLSearchParams();
+	if (query) searchParams.append('query', query);
+	if (viewOption) searchParams.append('view_option', viewOption);
+	if (page) searchParams.append('page', page.toString());
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/knowledge/search?${searchParams.toString()}`, {
 		method: 'GET',
 		headers: {
 			Accept: 'application/json',
@@ -100,6 +113,55 @@ export const getKnowledgeBaseList = async (token: string = '') => {
 	return res;
 };

+export const searchKnowledgeFiles = async (
+	token: string,
+	query: string | null = null,
+	viewOption: string | null = null,
+	orderBy: string | null = null,
+	direction: string | null = null,
+	page: number = 1
+) => {
+	let error = null;
+
+	const searchParams = new URLSearchParams();
+	if (query) searchParams.append('query', query);
+	if (viewOption) searchParams.append('view_option', viewOption);
+	if (orderBy) searchParams.append('order_by', orderBy);
+	if (direction) searchParams.append('direction', direction);
+	searchParams.append('page', page.toString());
+
+	const res = await fetch(
+		`${WEBUI_API_BASE_URL}/knowledge/search/files?${searchParams.toString()}`,
+		{
+			method: 'GET',
+			headers: {
+				Accept: 'application/json',
+				'Content-Type': 'application/json',
+				authorization: `Bearer ${token}`
+			}
+		}
+	)
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.then((json) => {
+			return json;
+		})
+		.catch((err) => {
+			error = err.detail;
+			console.error(err);
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
+
 export const getKnowledgeById = async (token: string, id: string) => {
 	let error = null;


@@ -290,7 +290,7 @@
 <div
 	class="h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar
-		? 'md:max-w-[calc(100%-260px)]'
+		? 'md:max-w-[calc(100%-var(--sidebar-width))]'
 		: ''} w-full max-w-full flex flex-col"
 	id="channel-container"
 >
@@ -365,6 +365,7 @@
 				bind:chatInputElement
 				bind:replyToMessage
 				{typingUsers}
+				{channel}
 				userSuggestions={true}
 				channelSuggestions={true}
 				disabled={!channel?.write_access}


@ -42,9 +42,10 @@
 	import XMark from '../icons/XMark.svelte';

 	export let placeholder = $i18n.t('Type here...');
+	export let chatInputElement;

 	export let id = null;
-	export let chatInputElement;
+	export let channel = null;

 	export let typingUsers = [];
 	export let inputLoading = false;
@@ -459,15 +460,16 @@
 			try {
 				// During the file upload, file content is automatically extracted.
 				// If the file is an audio file, provide the language for STT.
-				let metadata = null;
-				if (
-					(file.type.startsWith('audio/') || file.type.startsWith('video/')) &&
-					$settings?.audio?.stt?.language
-				) {
-					metadata = {
-						language: $settings?.audio?.stt?.language
-					};
-				}
+				let metadata = {
+					channel_id: channel.id,
+					// If the file is an audio file, provide the language for STT.
+					...((file.type.startsWith('audio/') || file.type.startsWith('video/')) &&
+					$settings?.audio?.stt?.language
+						? {
+								language: $settings?.audio?.stt?.language
+							}
+						: {})
+				};

 				const uploadedFile = await uploadFile(localStorage.token, file, metadata, process);


@@ -2384,7 +2384,7 @@
 <div
 	class="h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar
-		? ' md:max-w-[calc(100%-260px)]'
+		? ' md:max-w-[calc(100%-var(--sidebar-width))]'
 		: ' '} w-full max-w-full flex flex-col"
 	id="chat-container"
 >


@@ -1,10 +1,37 @@
-<script>
+<script lang="ts">
 	import { embed, showControls, showEmbeds } from '$lib/stores';
 	import FullHeightIframe from '$lib/components/common/FullHeightIframe.svelte';
 	import XMark from '$lib/components/icons/XMark.svelte';

 	export let overlay = false;
+
+	const getSrcUrl = (url: string, chatId?: string, messageId?: string) => {
+		try {
+			const parsed = new URL(url);
+			if (chatId) {
+				parsed.searchParams.set('chat_id', chatId);
+			}
+			if (messageId) {
+				parsed.searchParams.set('message_id', messageId);
+			}
+			return parsed.toString();
+		} catch {
+			// Fallback for relative URLs or invalid input
+			const hasQuery = url.includes('?');
+			const parts = [];
+			if (chatId) parts.push(`chat_id=${encodeURIComponent(chatId)}`);
+			if (messageId) parts.push(`message_id=${encodeURIComponent(messageId)}`);
+			if (parts.length === 0) return url;
+			return url + (hasQuery ? '&' : '?') + parts.join('&');
+		}
+	};
 </script>

 {#if $embed}
@@ -40,7 +67,11 @@
 			<div class=" absolute top-0 left-0 right-0 bottom-0 z-10"></div>
 		{/if}

-		<FullHeightIframe src={$embed?.url} iframeClassName="w-full h-full" />
+		<FullHeightIframe
+			src={getSrcUrl($embed?.url ?? '', $embed?.chatId, $embed?.messageId)}
+			payload={$embed?.source ?? null}
+			iframeClassName="w-full h-full"
+		/>
 	</div>
 </div>
 {/if}


@ -28,9 +28,6 @@
 	await Promise.all([
 		(async () => {
 			prompts.set(await getPrompts(localStorage.token));
-		})(),
-		(async () => {
-			knowledge.set(await getKnowledgeBases(localStorage.token));
 		})()
 	]);
 	loading = false;
@@ -103,7 +100,6 @@
 	bind:this={suggestionElement}
 	{query}
 	bind:filteredItems
-	knowledge={$knowledge ?? []}
 	onSelect={(e) => {
 		const { type, data } = e;


@ -1,19 +1,21 @@
<script lang="ts"> <script lang="ts">
import { toast } from 'svelte-sonner'; import { toast } from 'svelte-sonner';
import Fuse from 'fuse.js';
import dayjs from 'dayjs'; import dayjs from 'dayjs';
import relativeTime from 'dayjs/plugin/relativeTime'; import relativeTime from 'dayjs/plugin/relativeTime';
dayjs.extend(relativeTime); dayjs.extend(relativeTime);
import { tick, getContext, onMount, onDestroy } from 'svelte'; import { tick, getContext, onMount, onDestroy } from 'svelte';
import { removeLastWordFromString, isValidHttpUrl, isYoutubeUrl } from '$lib/utils';
import { folders } from '$lib/stores';
import { getFolders } from '$lib/apis/folders';
import { searchKnowledgeBases, searchKnowledgeFiles } from '$lib/apis/knowledge';
import { removeLastWordFromString, isValidHttpUrl, isYoutubeUrl, decodeString } from '$lib/utils';
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import DocumentPage from '$lib/components/icons/DocumentPage.svelte'; import DocumentPage from '$lib/components/icons/DocumentPage.svelte';
import Database from '$lib/components/icons/Database.svelte'; import Database from '$lib/components/icons/Database.svelte';
import GlobeAlt from '$lib/components/icons/GlobeAlt.svelte'; import GlobeAlt from '$lib/components/icons/GlobeAlt.svelte';
import Youtube from '$lib/components/icons/Youtube.svelte'; import Youtube from '$lib/components/icons/Youtube.svelte';
import { folders } from '$lib/stores';
import Folder from '$lib/components/icons/Folder.svelte'; import Folder from '$lib/components/icons/Folder.svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
@ -21,35 +23,24 @@
export let query = ''; export let query = '';
export let onSelect = (e) => {}; export let onSelect = (e) => {};
export let knowledge = [];
let selectedIdx = 0; let selectedIdx = 0;
let items = []; let items = [];
let fuse = null;
export let filteredItems = []; export let filteredItems = [];
$: if (fuse) { $: filteredItems = [
filteredItems = [ ...(query.startsWith('http')
...(query ? isYoutubeUrl(query)
? fuse.search(query).map((e) => { ? [{ type: 'youtube', name: query, description: query }]
return e.item; : [
}) {
: items), type: 'web',
name: query,
...(query.startsWith('http') description: query
? isYoutubeUrl(query) }
? [{ type: 'youtube', name: query, description: query }] ]
: [ : []),
{ ...items
type: 'web', ];
name: query,
description: query
}
]
: [])
];
}
$: if (query) { $: if (query) {
selectedIdx = 0; selectedIdx = 0;
@ -71,58 +62,70 @@
item.click(); item.click();
} }
}; };
const decodeString = (str: string) => {
try { let folderItems = [];
return decodeURIComponent(str); let knowledgeItems = [];
} catch (e) { let fileItems = [];
return str;
$: items = [...folderItems, ...knowledgeItems, ...fileItems];
$: if (query !== null) {
getItems();
}
const getItems = () => {
getFolderItems();
getKnowledgeItems();
getKnowledgeFileItems();
};
const getFolderItems = async () => {
folderItems = $folders
.map((folder) => ({
...folder,
type: 'folder',
description: $i18n.t('Folder'),
title: folder.name
}))
.filter((folder) => folder.name.toLowerCase().includes(query.toLowerCase()));
};
const getKnowledgeItems = async () => {
const res = await searchKnowledgeBases(localStorage.token, query).catch(() => {
return null;
});
if (res) {
knowledgeItems = res.items.map((item) => {
return {
...item,
type: 'collection'
};
});
}
};
const getKnowledgeFileItems = async () => {
const res = await searchKnowledgeFiles(localStorage.token, query).catch(() => {
return null;
});
if (res) {
fileItems = res.items.map((item) => {
return {
...item,
type: 'file',
name: item.filename,
description: item.collection ? item.collection.name : ''
};
});
} }
}; };
onMount(async () => { onMount(async () => {
let collections = knowledge if ($folders === null) {
.filter((item) => !item?.meta?.document) await folders.set(await getFolders(localStorage.token));
.map((item) => ({ }
...item,
type: 'collection'
}));
let collection_files =
knowledge.length > 0
? [
...knowledge
.reduce((a, item) => {
return [
...new Set([
...a,
...(item?.files ?? []).map((file) => ({
...file,
collection: { name: item.name, description: item.description } // DO NOT REMOVE, USED IN FILE DESCRIPTION/ATTACHMENT
}))
])
];
}, [])
.map((file) => ({
...file,
name: file?.meta?.name,
description: `${file?.collection?.description}`,
knowledge: true, // DO NOT REMOVE, USED TO INDICATE KNOWLEDGE BASE FILE
type: 'file'
}))
]
: [];
let folder_items = $folders.map((folder) => ({
...folder,
type: 'folder',
description: $i18n.t('Folder'),
title: folder.name
}));
items = [...folder_items, ...collections, ...collection_files];
fuse = new Fuse(items, {
keys: ['name', 'description']
});
await tick(); await tick();
}); });
@ -142,12 +145,20 @@
}); });
</script> </script>
<div class="px-2 text-xs text-gray-500 py-1">
{$i18n.t('Knowledge')}
</div>
{#if filteredItems.length > 0 || query.startsWith('http')} {#if filteredItems.length > 0 || query.startsWith('http')}
{#each filteredItems as item, idx} {#each filteredItems as item, idx}
{#if idx === 0 || item?.type !== items[idx - 1]?.type}
<div class="px-2 text-xs text-gray-500 py-1">
{#if item?.type === 'folder'}
{$i18n.t('Folders')}
{:else if item?.type === 'collection'}
{$i18n.t('Collections')}
{:else if item?.type === 'file'}
{$i18n.t('Files')}
{/if}
</div>
{/if}
{#if !['youtube', 'web'].includes(item.type)} {#if !['youtube', 'web'].includes(item.type)}
<button <button
class=" px-2 py-1 rounded-xl w-full text-left flex justify-between items-center {idx === class=" px-2 py-1 rounded-xl w-full text-left flex justify-between items-center {idx ===


@@ -18,7 +18,7 @@
 <div
 	bind:this={overlayElement}
 	class="fixed {$showSidebar
-		? 'left-0 md:left-[260px] md:w-[calc(100%-260px)]'
+		? 'left-0 md:left-[var(--sidebar-width)] md:w-[calc(100%-var(--sidebar-width))]'
 		: 'left-0'} fixed top-0 right-0 bottom-0 w-full h-full flex z-9999 touch-none pointer-events-none"
 	id="dropzone"
 	role="region"


@ -73,16 +73,6 @@
} }
}; };
const init = async () => {
if ($knowledge === null) {
await knowledge.set(await getKnowledgeBases(localStorage.token));
}
};
$: if (show) {
init();
}
const onSelect = (item) => { const onSelect = (item) => {
if (files.find((f) => f.id === item.id)) { if (files.find((f) => f.id === item.id)) {
return; return;
@ -249,37 +239,35 @@
</Tooltip> </Tooltip>
{/if} {/if}
{#if ($knowledge ?? []).length > 0} <Tooltip
<Tooltip content={fileUploadCapableModels.length !== selectedModels.length
content={fileUploadCapableModels.length !== selectedModels.length ? $i18n.t('Model(s) do not support file upload')
? $i18n.t('Model(s) do not support file upload') : !fileUploadEnabled
: !fileUploadEnabled ? $i18n.t('You do not have permission to upload files.')
? $i18n.t('You do not have permission to upload files.') : ''}
: ''} className="w-full"
className="w-full" >
<button
class="flex gap-2 w-full items-center px-3 py-1.5 text-sm cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800/50 rounded-xl {!fileUploadEnabled
? 'opacity-50'
: ''}"
on:click={() => {
tab = 'knowledge';
}}
> >
<button <Database />
class="flex gap-2 w-full items-center px-3 py-1.5 text-sm cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800/50 rounded-xl {!fileUploadEnabled
? 'opacity-50'
: ''}"
on:click={() => {
tab = 'knowledge';
}}
>
<Database />
<div class="flex items-center w-full justify-between"> <div class="flex items-center w-full justify-between">
<div class=" line-clamp-1"> <div class=" line-clamp-1">
{$i18n.t('Attach Knowledge')} {$i18n.t('Attach Knowledge')}
</div>
<div class="text-gray-500">
<ChevronRight />
</div>
</div> </div>
</button>
</Tooltip> <div class="text-gray-500">
{/if} <ChevronRight />
</div>
</div>
</button>
</Tooltip>
{#if ($chats ?? []).length > 0} {#if ($chats ?? []).length > 0}
<Tooltip <Tooltip


@ -4,114 +4,296 @@
import { decodeString } from '$lib/utils'; import { decodeString } from '$lib/utils';
import { knowledge } from '$lib/stores'; import { knowledge } from '$lib/stores';
import { getKnowledgeBases } from '$lib/apis/knowledge'; import { getKnowledgeBases, searchKnowledgeFilesById } from '$lib/apis/knowledge';
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import Database from '$lib/components/icons/Database.svelte'; import Database from '$lib/components/icons/Database.svelte';
import DocumentPage from '$lib/components/icons/DocumentPage.svelte'; import DocumentPage from '$lib/components/icons/DocumentPage.svelte';
import Spinner from '$lib/components/common/Spinner.svelte'; import Spinner from '$lib/components/common/Spinner.svelte';
import Loader from '$lib/components/common/Loader.svelte';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
import ChevronRight from '$lib/components/icons/ChevronRight.svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
export let onSelect = (e) => {}; export let onSelect = (e) => {};
let loaded = false; let loaded = false;
let items = [];
let selectedIdx = 0; let selectedIdx = 0;
onMount(async () => { let selectedItem = null;
if ($knowledge === null) {
await knowledge.set(await getKnowledgeBases(localStorage.token)); let selectedFileItemsPage = 1;
let selectedFileItems = null;
let selectedFileItemsTotal = null;
let selectedFileItemsLoading = false;
let selectedFileAllItemsLoaded = false;
$: if (selectedItem) {
initSelectedFileItems();
}
const initSelectedFileItems = async () => {
selectedFileItemsPage = 1;
selectedFileItems = null;
selectedFileItemsTotal = null;
selectedFileAllItemsLoaded = false;
selectedFileItemsLoading = false;
await tick();
await getSelectedFileItemsPage();
};
const loadMoreSelectedFileItems = async () => {
if (selectedFileAllItemsLoaded) return;
selectedFileItemsPage += 1;
await getSelectedFileItemsPage();
};
const getSelectedFileItemsPage = async () => {
if (!selectedItem) return;
selectedFileItemsLoading = true;
const res = await searchKnowledgeFilesById(
localStorage.token,
selectedItem.id,
null,
null,
null,
null,
selectedFileItemsPage
).catch(() => {
return null;
});
if (res) {
selectedFileItemsTotal = res.total;
const pageItems = res.items;
if ((pageItems ?? []).length === 0) {
selectedFileAllItemsLoaded = true;
} else {
selectedFileAllItemsLoaded = false;
}
if (selectedFileItems) {
selectedFileItems = [...selectedFileItems, ...pageItems];
} else {
selectedFileItems = pageItems;
}
} }
let collections = $knowledge selectedFileItemsLoading = false;
.filter((item) => !item?.meta?.document) return res;
.map((item) => ({ };
...item,
type: 'collection'
}));
``;
let collection_files =
$knowledge.length > 0
? [
...$knowledge
.reduce((a, item) => {
return [
...new Set([
...a,
...(item?.files ?? []).map((file) => ({
...file,
collection: { name: item.name, description: item.description } // DO NOT REMOVE, USED IN FILE DESCRIPTION/ATTACHMENT
}))
])
];
}, [])
.map((file) => ({
...file,
name: file?.meta?.name,
description: `${file?.collection?.name} - ${file?.collection?.description}`,
knowledge: true, // DO NOT REMOVE, USED TO INDICATE KNOWLEDGE BASE FILE
type: 'file'
}))
]
: [];
items = [...collections, ...collection_files]; let page = 1;
let items = null;
let total = null;
let itemsLoading = false;
let allItemsLoaded = false;
$: if (loaded) {
init();
}
const init = async () => {
reset();
await tick(); await tick();
await getItemsPage();
};
const reset = () => {
page = 1;
items = null;
total = null;
allItemsLoaded = false;
itemsLoading = false;
};
const loadMoreItems = async () => {
if (allItemsLoaded) return;
page += 1;
await getItemsPage();
};
const getItemsPage = async () => {
itemsLoading = true;
const res = await getKnowledgeBases(localStorage.token, page).catch(() => {
return null;
});
if (res) {
console.log(res);
total = res.total;
const pageItems = res.items;
if ((pageItems ?? []).length === 0) {
allItemsLoaded = true;
} else {
allItemsLoaded = false;
}
if (items) {
items = [...items, ...pageItems];
} else {
items = pageItems;
}
}
itemsLoading = false;
return res;
};
onMount(async () => {
await tick();
loaded = true; loaded = true;
}); });
</script> </script>
{#if loaded} {#if loaded && items !== null}
<div class="flex flex-col gap-0.5"> <div class="flex flex-col gap-0.5">
{#each items as item, idx} {#if items.length === 0}
<button <div class="py-4 text-center text-sm text-gray-500 dark:text-gray-400">
class=" px-2.5 py-1 rounded-xl w-full text-left flex justify-between items-center text-sm {idx === {$i18n.t('No knowledge bases found.')}
selectedIdx </div>
? ' bg-gray-50 dark:bg-gray-800 dark:text-gray-100 selected-command-option-button' {:else}
: ''}" {#each items as item, idx (item.id)}
type="button" <div
on:click={() => { class=" px-2.5 py-1 rounded-xl w-full text-left flex justify-between items-center text-sm {idx ===
console.log(item); selectedIdx
onSelect(item); ? ' bg-gray-50 dark:bg-gray-800 dark:text-gray-100 selected-command-option-button'
}} : ''}"
on:mousemove={() => { >
selectedIdx = idx; <button
}} class="w-full flex-1"
on:mouseleave={() => { type="button"
if (idx === 0) { on:click={() => {
selectedIdx = -1; onSelect({
} type: 'collection',
}} ...item
data-selected={idx === selectedIdx} });
> }}
<div class=" text-black dark:text-gray-100 flex items-center gap-1"> on:mousemove={() => {
<Tooltip selectedIdx = idx;
content={item?.legacy }}
? $i18n.t('Legacy') on:mouseleave={() => {
: item?.type === 'file' if (idx === 0) {
? $i18n.t('File') selectedIdx = -1;
: item?.type === 'collection' }
? $i18n.t('Collection') }}
: ''} data-selected={idx === selectedIdx}
placement="top"
> >
{#if item?.type === 'collection'} <div class=" text-black dark:text-gray-100 flex items-center gap-1 shrink-0">
<Database className="size-4" /> <Tooltip content={$i18n.t('Collection')} placement="top">
{:else} <Database className="size-4" />
<DocumentPage className="size-4" /> </Tooltip>
{/if}
</Tooltip>
<Tooltip content={item.description || decodeString(item?.name)} placement="top-start"> <Tooltip content={item.description || decodeString(item?.name)} placement="top-start">
<div class="line-clamp-1 flex-1"> <div class="line-clamp-1 flex-1 text-sm">
{decodeString(item?.name)} {decodeString(item?.name)}
</div>
</Tooltip>
</div> </div>
</button>
<Tooltip content={$i18n.t('Show Files')} placement="top">
<button
type="button"
class=" ml-2 opacity-50 hover:opacity-100 transition"
on:click={() => {
if (selectedItem && selectedItem.id === item.id) {
selectedItem = null;
} else {
selectedItem = item;
}
}}
>
{#if selectedItem && selectedItem.id === item.id}
<ChevronDown className="size-3" />
{:else}
<ChevronRight className="size-3" />
{/if}
</button>
</Tooltip> </Tooltip>
</div> </div>
</button>
{/each} {#if selectedItem && selectedItem.id === item.id}
<div class="pl-3 mb-1 flex flex-col gap-0.5">
{#if selectedFileItems === null && selectedFileItemsTotal === null}
<div class=" py-1 flex justify-center">
<Spinner className="size-3" />
</div>
{:else if selectedFileItemsTotal === 0}
<div class=" text-xs text-gray-500 dark:text-gray-400 italic py-0.5 px-2">
{$i18n.t('No files in this knowledge base.')}
</div>
{:else}
{#each selectedFileItems as file, fileIdx (file.id)}
<button
class=" px-2.5 py-1 rounded-xl w-full text-left flex justify-between items-center text-sm hover:bg-gray-50 hover:dark:bg-gray-800 hover:dark:text-gray-100"
type="button"
on:click={() => {
console.log(file);
onSelect({
type: 'file',
name: file?.meta?.name,
...file
});
}}
>
<div class=" flex items-center gap-1.5">
<Tooltip content={$i18n.t('Collection')} placement="top">
<DocumentPage className="size-4" />
</Tooltip>
<Tooltip content={decodeString(file?.meta?.name)} placement="top-start">
<div class="line-clamp-1 flex-1 text-sm">
{decodeString(file?.meta?.name)}
</div>
</Tooltip>
</div>
</button>
{/each}
{#if !selectedFileAllItemsLoaded && !selectedFileItemsLoading}
<Loader
on:visible={async (e) => {
if (!selectedFileItemsLoading) {
await loadMoreSelectedFileItems();
}
}}
>
<div
class="w-full flex justify-center py-4 text-xs animate-pulse items-center gap-2"
>
<Spinner className=" size-3" />
<div class=" ">{$i18n.t('Loading...')}</div>
</div>
</Loader>
{/if}
{/if}
</div>
{/if}
{/each}
{#if !allItemsLoaded}
<Loader
on:visible={(e) => {
if (!itemsLoading) {
loadMoreItems();
}
}}
>
<div class="w-full flex justify-center py-4 text-xs animate-pulse items-center gap-2">
<Spinner className=" size-4" />
<div class=" ">{$i18n.t('Loading...')}</div>
</div>
</Loader>
{/if}
{/if}
</div> </div>
{:else} {:else}
<div class="py-4.5"> <div class="py-4.5">


@ -1,11 +1,14 @@
<script lang="ts"> <script lang="ts">
import { getContext } from 'svelte'; import { getContext } from 'svelte';
import CitationModal from './Citations/CitationModal.svelte';
import { embed, showControls, showEmbeds } from '$lib/stores'; import { embed, showControls, showEmbeds } from '$lib/stores';
import CitationModal from './Citations/CitationModal.svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
export let id = ''; export let id = '';
export let chatId = '';
export let sources = []; export let sources = [];
export let readOnly = false; export let readOnly = false;
@ -35,8 +38,11 @@
showControls.set(true); showControls.set(true);
showEmbeds.set(true); showEmbeds.set(true);
				embed.set({
					title: citations[sourceIdx]?.source?.name || 'Embedded Content',
					url: embedUrl
				});

				embed.set({
					url: embedUrl,
					title: citations[sourceIdx]?.source?.name || 'Embedded Content',
					source: citations[sourceIdx],
					chatId: chatId,
					messageId: id
				});
} }
} else { } else {
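After this change the embed store carries the source citation and the chat and message context alongside the URL and title. A rough TypeScript shape inferred from the call above; the type name and the optionality of the new fields are assumptions, not something declared in the repository:

// Approximate shape of the value written to the embed store above.
type EmbedState = {
	url: string;
	title: string;
	source?: unknown; // citations[sourceIdx] from the component above
	chatId?: string;
	messageId?: string;
};

const example: EmbedState = {
	url: 'https://example.com/doc',
	title: 'Embedded Content',
	chatId: 'chat-123',
	messageId: 'msg-456'
};
console.log(example);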


@ -39,8 +39,6 @@
}; };
</script> </script>
{sourceIds}
{#if sourceIds} {#if sourceIds}
{#if (token?.ids ?? []).length == 1} {#if (token?.ids ?? []).length == 1}
<Source id={token.ids[0] - 1} title={sourceIds[token.ids[0] - 1]} {onClick} /> <Source id={token.ids[0] - 1} title={sourceIds[token.ids[0] - 1]} {onClick} />


@ -824,6 +824,7 @@
<Citations <Citations
bind:this={citationsElement} bind:this={citationsElement}
id={message?.id} id={message?.id}
{chatId}
sources={message?.sources ?? message?.citations} sources={message?.sources ?? message?.citations}
{readOnly} {readOnly}
/> />


@ -21,6 +21,8 @@
'strict-origin-when-cross-origin'; 'strict-origin-when-cross-origin';
export let allowFullscreen = true; export let allowFullscreen = true;
export let payload = null; // payload to send into the iframe on request
let iframe: HTMLIFrameElement | null = null; let iframe: HTMLIFrameElement | null = null;
let iframeSrc: string | null = null; let iframeSrc: string | null = null;
let iframeDoc: string | null = null; let iframeDoc: string | null = null;
@ -142,13 +144,29 @@ window.Chart = parent.Chart; // Chart previously assigned on parent
} }
} }
// Handle height messages from the iframe (we also verify the sender)
function onMessage(e: MessageEvent) { function onMessage(e: MessageEvent) {
if (!iframe || e.source !== iframe.contentWindow) return; if (!iframe || e.source !== iframe.contentWindow) return;
const data = e.data as { type?: string; height?: number };
const data = e.data || {};
if (data?.type === 'iframe:height' && typeof data.height === 'number') { if (data?.type === 'iframe:height' && typeof data.height === 'number') {
iframe.style.height = Math.max(0, data.height) + 'px'; iframe.style.height = Math.max(0, data.height) + 'px';
} }
// Pong message for testing connectivity
if (data?.type === 'pong') {
console.log('Received pong from iframe:', data);
// Optional: reply back
iframe.contentWindow?.postMessage({ type: 'pong:ack' }, '*');
}
// Send payload data if requested
if (data?.type === 'payload') {
iframe.contentWindow?.postMessage(
{ type: 'payload', requestId: data?.requestId ?? null, payload: payload },
'*'
);
}
} }
// When the iframe loads, try same-origin resize (cross-origin will noop) // When the iframe loads, try same-origin resize (cross-origin will noop)
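The handler above implies a small postMessage contract between the host component and the embedded document: the child reports its height, can ping for connectivity, and can request the payload prop by sending a payload message with a requestId. A minimal sketch of a cooperating embedded page follows, assuming only the message types visible in the diff (iframe:height, payload, pong); the helper names and the use of crypto.randomUUID() for request IDs are illustrative, not part of the repository.

// Sketch of an embedded page cooperating with the parent handler above.
// Assumption: this script runs inside the iframe rendered by that component.

// Report content height so the parent can resize the iframe element.
const reportHeight = () => {
	window.parent.postMessage(
		{ type: 'iframe:height', height: document.documentElement.scrollHeight },
		'*'
	);
};

// Ask the parent for its payload prop; the parent echoes back
// { type: 'payload', requestId, payload } as in the handler above.
const requestPayload = (): Promise<unknown> =>
	new Promise((resolve) => {
		const requestId = crypto.randomUUID(); // illustrative; any unique id works
		const onParentMessage = (e: MessageEvent) => {
			const data = e.data || {};
			if (data?.type === 'payload' && data?.requestId === requestId) {
				window.removeEventListener('message', onParentMessage);
				resolve(data.payload);
			}
		};
		window.addEventListener('message', onParentMessage);
		window.parent.postMessage({ type: 'payload', requestId }, '*');
	});

window.addEventListener('load', () => {
	reportHeight();
	requestPayload().then((payload) => console.log('payload from parent', payload));
});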


@ -36,7 +36,7 @@
let filter = {}; let filter = {};
$: filter = { $: filter = {
...(query ? { query } : {}), ...(query ? { query: query } : {}),
...(orderBy ? { order_by: orderBy } : {}), ...(orderBy ? { order_by: orderBy } : {}),
...(direction ? { direction } : {}) ...(direction ? { direction } : {})
}; };


@ -121,6 +121,7 @@
class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-hidden bg-transparent" class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-hidden bg-transparent"
bind:value={query} bind:value={query}
placeholder={$i18n.t('Search Chats')} placeholder={$i18n.t('Search Chats')}
maxlength="500"
/> />
{#if query} {#if query}


@ -25,7 +25,8 @@
isApp, isApp,
models, models,
selectedFolder, selectedFolder,
WEBUI_NAME WEBUI_NAME,
sidebarWidth
} from '$lib/stores'; } from '$lib/stores';
import { onMount, getContext, tick, onDestroy } from 'svelte'; import { onMount, getContext, tick, onDestroy } from 'svelte';
@ -371,8 +372,55 @@
selectedChatId = null; selectedChatId = null;
}; };
const MIN_WIDTH = 220;
const MAX_WIDTH = 480;
let isResizing = false;
let startWidth = 0;
let startClientX = 0;
const resizeStartHandler = (e: MouseEvent) => {
if ($mobile) return;
isResizing = true;
startClientX = e.clientX;
startWidth = $sidebarWidth ?? 260;
document.body.style.userSelect = 'none';
};
const resizeEndHandler = () => {
if (!isResizing) return;
isResizing = false;
document.body.style.userSelect = '';
localStorage.setItem('sidebarWidth', String($sidebarWidth));
};
const resizeSidebarHandler = (endClientX) => {
const dx = endClientX - startClientX;
const newSidebarWidth = Math.min(MAX_WIDTH, Math.max(MIN_WIDTH, startWidth + dx));
sidebarWidth.set(newSidebarWidth);
document.documentElement.style.setProperty('--sidebar-width', `${newSidebarWidth}px`);
};
let unsubscribers = []; let unsubscribers = [];
onMount(async () => { onMount(async () => {
try {
const width = Number(localStorage.getItem('sidebarWidth'));
if (!Number.isNaN(width) && width >= MIN_WIDTH && width <= MAX_WIDTH) {
sidebarWidth.set(width);
}
} catch {}
document.documentElement.style.setProperty('--sidebar-width', `${$sidebarWidth}px`);
sidebarWidth.subscribe((w) => {
document.documentElement.style.setProperty('--sidebar-width', `${w}px`);
});
await showSidebar.set(!$mobile ? localStorage.sidebar === 'true' : false); await showSidebar.set(!$mobile ? localStorage.sidebar === 'true' : false);
unsubscribers = [ unsubscribers = [
@ -570,6 +618,16 @@
}} }}
/> />
<svelte:window
on:mousemove={(e) => {
if (!isResizing) return;
resizeSidebarHandler(e.clientX);
}}
on:mouseup={() => {
resizeEndHandler();
}}
/>
{#if !$mobile && !$showSidebar} {#if !$mobile && !$showSidebar}
<div <div
class=" pt-[7px] pb-2 px-2 flex flex-col justify-between text-black dark:text-white hover:bg-gray-50/30 dark:hover:bg-gray-950/30 h-full z-10 transition-all border-e-[0.5px] border-gray-50 dark:border-gray-850/30" class=" pt-[7px] pb-2 px-2 flex flex-col justify-between text-black dark:text-white hover:bg-gray-50/30 dark:hover:bg-gray-950/30 h-full z-10 transition-all border-e-[0.5px] border-gray-50 dark:border-gray-850/30"
@ -775,7 +833,7 @@
data-state={$showSidebar} data-state={$showSidebar}
> >
<div <div
class=" my-auto flex flex-col justify-between h-screen max-h-[100dvh] w-[260px] overflow-x-hidden scrollbar-hidden z-50 {$showSidebar class=" my-auto flex flex-col justify-between h-screen max-h-[100dvh] w-[var(--sidebar-width)] overflow-x-hidden scrollbar-hidden z-50 {$showSidebar
? '' ? ''
: 'invisible'}" : 'invisible'}"
> >
@ -1321,4 +1379,17 @@
</div> </div>
</div> </div>
</div> </div>
{#if !$mobile}
<div
class="relative flex items-center justify-center group border-l border-gray-50 dark:border-gray-850/30 hover:border-gray-200 dark:hover:border-gray-800 transition z-20"
id="sidebar-resizer"
on:mousedown={resizeStartHandler}
role="separator"
>
<div
class=" absolute -left-1.5 -right-1.5 -top-0 -bottom-0 z-20 cursor-col-resize bg-transparent"
/>
</div>
{/if}
{/if} {/if}
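The resizer above clamps the drag position between MIN_WIDTH and MAX_WIDTH, mirrors the value into the --sidebar-width CSS custom property, and persists it to localStorage on mouseup, which is why the layout components further down can switch from a hard-coded 260px to var(--sidebar-width). A condensed, standalone sketch of that clamp-and-persist pattern; the constants and the storage key follow the diff, but the createPersistedWidth helper itself is assumed for illustration:

import { writable } from 'svelte/store';

// Standalone version of the clamp-and-persist pattern used by the sidebar resizer.
const MIN_WIDTH = 220;
const MAX_WIDTH = 480;
const DEFAULT_WIDTH = 260;

export const createPersistedWidth = (key = 'sidebarWidth') => {
	let initial = DEFAULT_WIDTH;
	try {
		const stored = Number(localStorage.getItem(key));
		if (!Number.isNaN(stored) && stored >= MIN_WIDTH && stored <= MAX_WIDTH) {
			initial = stored;
		}
	} catch {}

	const width = writable(initial);

	// Keep the CSS variable and localStorage in sync with the store value.
	width.subscribe((w) => {
		document.documentElement.style.setProperty('--sidebar-width', `${w}px`);
		try {
			localStorage.setItem(key, String(w));
		} catch {}
	});

	return {
		subscribe: width.subscribe,
		set: (w: number) => width.set(Math.min(MAX_WIDTH, Math.max(MIN_WIDTH, w)))
	};
};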


@ -209,6 +209,7 @@
class="w-full rounded-r-xl py-1.5 pl-2.5 text-sm bg-transparent dark:text-gray-300 outline-hidden" class="w-full rounded-r-xl py-1.5 pl-2.5 text-sm bg-transparent dark:text-gray-300 outline-hidden"
placeholder={placeholder ? placeholder : $i18n.t('Search')} placeholder={placeholder ? placeholder : $i18n.t('Search')}
autocomplete="off" autocomplete="off"
maxlength="500"
bind:value bind:value
on:input={() => { on:input={() => {
dispatch('input'); dispatch('input');


@ -337,7 +337,7 @@
> >
<Plus className="size-3" strokeWidth="2.5" /> <Plus className="size-3" strokeWidth="2.5" />
<div class=" md:ml-1 text-xs">{$i18n.t('New Note')}</div> <div class=" ml-1 text-xs">{$i18n.t('New Note')}</div>
</button> </button>
</div> </div>
</div> </div>


@ -1,6 +1,4 @@
<script lang="ts"> <script lang="ts">
import Fuse from 'fuse.js';
import dayjs from 'dayjs'; import dayjs from 'dayjs';
import relativeTime from 'dayjs/plugin/relativeTime'; import relativeTime from 'dayjs/plugin/relativeTime';
dayjs.extend(relativeTime); dayjs.extend(relativeTime);
@ -10,11 +8,7 @@
const i18n = getContext('i18n'); const i18n = getContext('i18n');
import { WEBUI_NAME, knowledge, user } from '$lib/stores'; import { WEBUI_NAME, knowledge, user } from '$lib/stores';
import { import { deleteKnowledgeById, searchKnowledgeBases } from '$lib/apis/knowledge';
getKnowledgeBases,
deleteKnowledgeById,
getKnowledgeBaseList
} from '$lib/apis/knowledge';
import { goto } from '$app/navigation'; import { goto } from '$app/navigation';
import { capitalizeFirstLetter } from '$lib/utils'; import { capitalizeFirstLetter } from '$lib/utils';
@ -28,75 +22,90 @@
import Tooltip from '../common/Tooltip.svelte'; import Tooltip from '../common/Tooltip.svelte';
import XMark from '../icons/XMark.svelte'; import XMark from '../icons/XMark.svelte';
import ViewSelector from './common/ViewSelector.svelte'; import ViewSelector from './common/ViewSelector.svelte';
import Loader from '../common/Loader.svelte';
let loaded = false; let loaded = false;
	let query = '';
	let selectedItem = null;
	let showDeleteConfirm = false;
	let tagsContainerElement: HTMLDivElement;
	let viewOption = '';

	let fuse = null;
	let knowledgeBases = [];

	let items = [];
	let filteredItems = [];

	const setFuse = async () => {
		items = knowledgeBases.filter(
			(item) =>
				viewOption === '' ||
				(viewOption === 'created' && item.user_id === $user?.id) ||
				(viewOption === 'shared' && item.user_id !== $user?.id)
		);

		fuse = new Fuse(items, {
			keys: [
				'name',
				'description',
				'user.name', // Ensures Fuse looks into item.user.name
				'user.email' // Ensures Fuse looks into item.user.email
			],
			threshold: 0.3
		});

		await tick();
		setFilteredItems();
	};

	$: if (knowledgeBases.length > 0 && viewOption !== undefined) {
		// Added a check for non-empty array, good practice
		setFuse();
	} else {
		fuse = null; // Reset fuse if knowledgeBases is empty
	}

	const setFilteredItems = () => {
		filteredItems = query ? fuse.search(query).map((result) => result.item) : items;
	};

	$: if (query !== undefined && fuse) {
		setFilteredItems();
	}

	const deleteHandler = async (item) => {
		const res = await deleteKnowledgeById(localStorage.token, item.id).catch((e) => {
			toast.error(`${e}`);
		});

		if (res) {
			knowledgeBases = await getKnowledgeBaseList(localStorage.token);
			knowledge.set(await getKnowledgeBases(localStorage.token));
			toast.success($i18n.t('Knowledge deleted successfully.'));
		}
	};

	onMount(async () => {
		viewOption = localStorage?.workspaceViewOption || '';
		knowledgeBases = await getKnowledgeBaseList(localStorage.token);
		loaded = true;
	});

	let showDeleteConfirm = false;
	let tagsContainerElement: HTMLDivElement;

	let selectedItem = null;

	let page = 1;
	let query = '';
	let viewOption = '';

	let items = null;
	let total = null;

	let allItemsLoaded = false;
	let itemsLoading = false;

	$: if (loaded && query !== undefined && viewOption !== undefined) {
		init();
	}

	const reset = () => {
		page = 1;
		items = null;
		total = null;
		allItemsLoaded = false;
		itemsLoading = false;
	};

	const loadMoreItems = async () => {
		if (allItemsLoaded) return;
		page += 1;
		await getItemsPage();
	};

	const init = async () => {
		reset();
		await getItemsPage();
	};

	const getItemsPage = async () => {
		itemsLoading = true;
		const res = await searchKnowledgeBases(localStorage.token, query, viewOption, page).catch(
			() => {
				return [];
			}
		);

		if (res) {
			console.log(res);
			total = res.total;
			const pageItems = res.items;

			if ((pageItems ?? []).length === 0) {
				allItemsLoaded = true;
			} else {
				allItemsLoaded = false;
			}

			if (items) {
				items = [...items, ...pageItems];
			} else {
				items = pageItems;
			}
		}
		itemsLoading = false;

		return res;
	};

	const deleteHandler = async (item) => {
		const res = await deleteKnowledgeById(localStorage.token, item.id).catch((e) => {
			toast.error(`${e}`);
		});

		if (res) {
			toast.success($i18n.t('Knowledge deleted successfully.'));
			init();
		}
	};

	onMount(async () => {
		viewOption = localStorage?.workspaceViewOption || '';
		loaded = true;
	});
</script> </script>
@ -123,7 +132,7 @@
</div> </div>
<div class="text-lg font-medium text-gray-500 dark:text-gray-500"> <div class="text-lg font-medium text-gray-500 dark:text-gray-500">
{filteredItems.length} {total}
</div> </div>
</div> </div>
@ -192,11 +201,11 @@
</div> </div>
</div> </div>
{#if (filteredItems ?? []).length !== 0} {#if items !== null && total !== null}
<!-- The Aleph dreams itself into being, and the void learns its own name --> {#if (items ?? []).length !== 0}
<div class=" my-2 px-3 grid grid-cols-1 lg:grid-cols-2 gap-2"> <!-- The Aleph dreams itself into being, and the void learns its own name -->
{#each filteredItems as item} <div class=" my-2 px-3 grid grid-cols-1 lg:grid-cols-2 gap-2">
<Tooltip content={item?.description ?? item.name}> {#each items as item}
<button <button
class=" flex space-x-4 cursor-pointer text-left w-full px-3 py-2.5 dark:hover:bg-gray-850/50 hover:bg-gray-50 transition rounded-2xl" class=" flex space-x-4 cursor-pointer text-left w-full px-3 py-2.5 dark:hover:bg-gray-850/50 hover:bg-gray-50 transition rounded-2xl"
on:click={() => { on:click={() => {
@ -212,42 +221,49 @@
}} }}
> >
<div class=" w-full"> <div class=" w-full">
<div class=" self-center flex-1"> <div class=" self-center flex-1 justify-between">
<div class="flex items-center justify-between -my-1"> <div class="flex items-center justify-between -my-1 h-8">
<div class=" flex gap-2 items-center"> <div class=" flex gap-2 items-center justify-between w-full">
<div> <div>
{#if item?.meta?.document} <Badge type="success" content={$i18n.t('Collection')} />
<Badge type="muted" content={$i18n.t('Document')} />
{:else}
<Badge type="success" content={$i18n.t('Collection')} />
{/if}
</div> </div>
<div class=" text-xs text-gray-500 line-clamp-1"> {#if !item?.write_access}
{$i18n.t('Updated')} <div>
{dayjs(item.updated_at * 1000).fromNow()} <Badge type="muted" content={$i18n.t('Read Only')} />
</div> </div>
{/if}
</div> </div>
{#if item?.write_access}
<div class="flex items-center gap-2"> <div class="flex items-center gap-2">
<div class=" flex self-center"> <div class=" flex self-center">
<ItemMenu <ItemMenu
on:delete={() => { on:delete={() => {
selectedItem = item; selectedItem = item;
showDeleteConfirm = true; showDeleteConfirm = true;
}} }}
/> />
</div>
</div> </div>
</div> {/if}
</div> </div>
<div class=" flex items-center gap-1 justify-between px-1.5"> <div class=" flex items-center gap-1 justify-between px-1.5">
<div class=" flex items-center gap-2"> <Tooltip content={item?.description ?? item.name}>
<div class=" text-sm font-medium line-clamp-1 capitalize">{item.name}</div> <div class=" flex items-center gap-2">
</div> <div class=" text-sm font-medium line-clamp-1 capitalize">{item.name}</div>
</div>
</Tooltip>
<div> <div class="flex items-center gap-2 shrink-0">
<div class="text-xs text-gray-500"> <Tooltip content={dayjs(item.updated_at * 1000).format('LLLL')}>
<div class=" text-xs text-gray-500 line-clamp-1 hidden sm:block">
{$i18n.t('Updated')}
{dayjs(item.updated_at * 1000).fromNow()}
</div>
</Tooltip>
<div class="text-xs text-gray-500 shrink-0">
<Tooltip <Tooltip
content={item?.user?.email ?? $i18n.t('Deleted User')} content={item?.user?.email ?? $i18n.t('Deleted User')}
className="flex shrink-0" className="flex shrink-0"
@ -265,18 +281,37 @@
</div> </div>
</div> </div>
</button> </button>
</Tooltip> {/each}
{/each} </div>
</div>
{:else} {#if !allItemsLoaded}
<div class=" w-full h-full flex flex-col justify-center items-center my-16 mb-24"> <Loader
<div class="max-w-md text-center"> on:visible={(e) => {
<div class=" text-3xl mb-3">😕</div> if (!itemsLoading) {
<div class=" text-lg font-medium mb-1">{$i18n.t('No knowledge found')}</div> loadMoreItems();
<div class=" text-gray-500 text-center text-xs"> }
{$i18n.t('Try adjusting your search or filter to find what you are looking for.')} }}
>
<div class="w-full flex justify-center py-4 text-xs animate-pulse items-center gap-2">
<Spinner className=" size-4" />
<div class=" ">{$i18n.t('Loading...')}</div>
</div>
</Loader>
{/if}
{:else}
<div class=" w-full h-full flex flex-col justify-center items-center my-16 mb-24">
<div class="max-w-md text-center">
<div class=" text-3xl mb-3">😕</div>
<div class=" text-lg font-medium mb-1">{$i18n.t('No knowledge found')}</div>
<div class=" text-gray-500 text-center text-xs">
{$i18n.t('Try adjusting your search or filter to find what you are looking for.')}
</div>
</div> </div>
</div> </div>
{/if}
{:else}
<div class="w-full h-full flex justify-center items-center py-10">
<Spinner className="size-4" />
</div> </div>
{/if} {/if}
</div> </div>
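The knowledge workspace above drops client-side Fuse filtering in favour of server-side pagination: searchKnowledgeBases(token, query, viewOption, page) returns { total, items }, pages are appended until an empty page marks the end, and a Loader sentinel at the bottom of the grid triggers the next fetch. A generic sketch of that loader pattern, assuming only the { total, items } response shape shown in the diff; the createPagedLoader helper is illustrative rather than code from the repository:

// Generic page-based loader mirroring the reset/init/loadMoreItems flow above.
type Page<T> = { total: number; items: T[] };

export const createPagedLoader = <T>(fetchPage: (page: number) => Promise<Page<T>>) => {
	let page = 1;
	let items: T[] = [];
	let total: number | null = null;
	let allLoaded = false;
	let loading = false;

	// Fetch the next page and append it; an empty page means everything is loaded.
	const load = async () => {
		if (loading || allLoaded) return { items, total };
		loading = true;
		try {
			const res = await fetchPage(page);
			total = res.total;
			if ((res.items ?? []).length === 0) {
				allLoaded = true;
			} else {
				items = [...items, ...res.items];
				page += 1;
			}
		} finally {
			loading = false;
		}
		return { items, total };
	};

	const reset = () => {
		page = 1;
		items = [];
		total = null;
		allLoaded = false;
	};

	return { load, reset };
};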


@ -1,11 +1,13 @@
<script> <script>
import { toast } from 'svelte-sonner';
import { goto } from '$app/navigation'; import { goto } from '$app/navigation';
import { getContext } from 'svelte'; import { getContext } from 'svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
import { createNewKnowledge, getKnowledgeBases } from '$lib/apis/knowledge'; import { user } from '$lib/stores';
import { toast } from 'svelte-sonner'; import { createNewKnowledge } from '$lib/apis/knowledge';
import { knowledge, user } from '$lib/stores';
import AccessControl from '../common/AccessControl.svelte'; import AccessControl from '../common/AccessControl.svelte';
import Spinner from '$lib/components/common/Spinner.svelte'; import Spinner from '$lib/components/common/Spinner.svelte';
@ -37,7 +39,6 @@
if (res) { if (res) {
toast.success($i18n.t('Knowledge created successfully.')); toast.success($i18n.t('Knowledge created successfully.'));
knowledge.set(await getKnowledgeBases(localStorage.token));
goto(`/workspace/knowledge/${res.id}`); goto(`/workspace/knowledge/${res.id}`);
} }


@ -27,7 +27,6 @@
import { import {
addFileToKnowledgeById, addFileToKnowledgeById,
getKnowledgeById, getKnowledgeById,
getKnowledgeBases,
removeFileFromKnowledgeById, removeFileFromKnowledgeById,
resetKnowledgeById, resetKnowledgeById,
updateFileFromKnowledgeById, updateFileFromKnowledgeById,
@ -206,16 +205,16 @@
fileItems = [...(fileItems ?? []), fileItem]; fileItems = [...(fileItems ?? []), fileItem];
try { try {
// If the file is an audio file, provide the language for STT. let metadata = {
let metadata = null; knowledge_id: knowledge.id,
if ( // If the file is an audio file, provide the language for STT.
(file.type.startsWith('audio/') || file.type.startsWith('video/')) && ...((file.type.startsWith('audio/') || file.type.startsWith('video/')) &&
$settings?.audio?.stt?.language $settings?.audio?.stt?.language
) { ? {
metadata = { language: $settings?.audio?.stt?.language
language: $settings?.audio?.stt?.language }
}; : {})
} };
const uploadedFile = await uploadFile(localStorage.token, file, metadata).catch((e) => { const uploadedFile = await uploadFile(localStorage.token, file, metadata).catch((e) => {
toast.error(`${e}`); toast.error(`${e}`);
@ -423,13 +422,13 @@
// Helper function to maintain file paths within zip // Helper function to maintain file paths within zip
const syncDirectoryHandler = async () => { const syncDirectoryHandler = async () => {
if ((knowledge?.files ?? []).length > 0) { if (fileItems.length > 0) {
const res = await resetKnowledgeById(localStorage.token, id).catch((e) => { const res = await resetKnowledgeById(localStorage.token, id).catch((e) => {
toast.error(`${e}`); toast.error(`${e}`);
}); });
if (res) { if (res) {
knowledge = res; fileItems = [];
toast.success($i18n.t('Knowledge reset successfully.')); toast.success($i18n.t('Knowledge reset successfully.'));
// Upload directory // Upload directory
@ -441,16 +440,14 @@
}; };
const addFileHandler = async (fileId) => { const addFileHandler = async (fileId) => {
const updatedKnowledge = await addFileToKnowledgeById(localStorage.token, id, fileId).catch( const res = await addFileToKnowledgeById(localStorage.token, id, fileId).catch((e) => {
(e) => { toast.error(`${e}`);
toast.error(`${e}`); return null;
return null; });
}
);
if (updatedKnowledge) { if (res) {
knowledge = updatedKnowledge;
toast.success($i18n.t('File added successfully.')); toast.success($i18n.t('File added successfully.'));
init();
} else { } else {
toast.error($i18n.t('Failed to add file.')); toast.error($i18n.t('Failed to add file.'));
fileItems = fileItems.filter((file) => file.id !== fileId); fileItems = fileItems.filter((file) => file.id !== fileId);
@ -462,13 +459,12 @@
console.log('Starting file deletion process for:', fileId); console.log('Starting file deletion process for:', fileId);
// Remove from knowledge base only // Remove from knowledge base only
const updatedKnowledge = await removeFileFromKnowledgeById(localStorage.token, id, fileId); const res = await removeFileFromKnowledgeById(localStorage.token, id, fileId);
console.log('Knowledge base updated:', res);
console.log('Knowledge base updated:', updatedKnowledge); if (res) {
if (updatedKnowledge) {
knowledge = updatedKnowledge;
toast.success($i18n.t('File removed successfully.')); toast.success($i18n.t('File removed successfully.'));
await init();
} }
} catch (e) { } catch (e) {
console.error('Error in deleteFileHandler:', e); console.error('Error in deleteFileHandler:', e);
@ -537,7 +533,6 @@
if (res) { if (res) {
toast.success($i18n.t('Knowledge updated successfully')); toast.success($i18n.t('Knowledge updated successfully'));
_knowledge.set(await getKnowledgeBases(localStorage.token));
} }
}, 1000); }, 1000);
}; };
@ -569,6 +564,11 @@
e.preventDefault(); e.preventDefault();
dragged = false; dragged = false;
if (!knowledge?.write_access) {
toast.error($i18n.t('You do not have permission to upload files to this knowledge base.'));
return;
}
const handleUploadingFileFolder = (items) => { const handleUploadingFileFolder = (items) => {
for (const item of items) { for (const item of items) {
if (item.isFile) { if (item.isFile) {
@ -750,37 +750,44 @@
class="text-left w-full font-medium text-lg font-primary bg-transparent outline-hidden flex-1" class="text-left w-full font-medium text-lg font-primary bg-transparent outline-hidden flex-1"
bind:value={knowledge.name} bind:value={knowledge.name}
placeholder={$i18n.t('Knowledge Name')} placeholder={$i18n.t('Knowledge Name')}
disabled={!knowledge?.write_access}
on:input={() => { on:input={() => {
changeDebounceHandler(); changeDebounceHandler();
}} }}
/> />
<div class="shrink-0 mr-2.5"> <div class="shrink-0 mr-2.5">
{#if (knowledge?.files ?? []).length} {#if fileItemsTotal}
<div class="text-xs text-gray-500"> <div class="text-xs text-gray-500">
{$i18n.t('{{count}} files', { {$i18n.t('{{count}} files', {
count: (knowledge?.files ?? []).length count: fileItemsTotal
})} })}
</div> </div>
{/if} {/if}
</div> </div>
</div> </div>
<div class="self-center shrink-0"> {#if knowledge?.write_access}
<button <div class="self-center shrink-0">
class="bg-gray-50 hover:bg-gray-100 text-black dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-white transition px-2 py-1 rounded-full flex gap-1 items-center" <button
type="button" class="bg-gray-50 hover:bg-gray-100 text-black dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-white transition px-2 py-1 rounded-full flex gap-1 items-center"
on:click={() => { type="button"
showAccessControlModal = true; on:click={() => {
}} showAccessControlModal = true;
> }}
<LockClosed strokeWidth="2.5" className="size-3.5" /> >
<LockClosed strokeWidth="2.5" className="size-3.5" />
<div class="text-sm font-medium shrink-0"> <div class="text-sm font-medium shrink-0">
{$i18n.t('Access')} {$i18n.t('Access')}
</div> </div>
</button> </button>
</div> </div>
{:else}
<div class="text-xs shrink-0 text-gray-500">
{$i18n.t('Read Only')}
</div>
{/if}
</div> </div>
<div class="flex w-full"> <div class="flex w-full">
@ -789,6 +796,7 @@
class="text-left text-xs w-full text-gray-500 bg-transparent outline-hidden" class="text-left text-xs w-full text-gray-500 bg-transparent outline-hidden"
bind:value={knowledge.description} bind:value={knowledge.description}
placeholder={$i18n.t('Knowledge Description')} placeholder={$i18n.t('Knowledge Description')}
disabled={!knowledge?.write_access}
on:input={() => { on:input={() => {
changeDebounceHandler(); changeDebounceHandler();
}} }}
@ -815,22 +823,24 @@
}} }}
/> />
<div> {#if knowledge?.write_access}
<AddContentMenu <div>
on:upload={(e) => { <AddContentMenu
if (e.detail.type === 'directory') { on:upload={(e) => {
uploadDirectoryHandler(); if (e.detail.type === 'directory') {
} else if (e.detail.type === 'text') { uploadDirectoryHandler();
showAddTextContentModal = true; } else if (e.detail.type === 'text') {
} else { showAddTextContentModal = true;
document.getElementById('files-input').click(); } else {
} document.getElementById('files-input').click();
}} }
on:sync={(e) => { }}
showSyncConfirmModal = true; on:sync={(e) => {
}} showSyncConfirmModal = true;
/> }}
</div> />
</div>
{/if}
</div> </div>
</div> </div>
@ -899,6 +909,7 @@
<div class=" flex overflow-y-auto h-full w-full scrollbar-hidden text-xs"> <div class=" flex overflow-y-auto h-full w-full scrollbar-hidden text-xs">
<Files <Files
files={fileItems} files={fileItems}
{knowledge}
{selectedFileId} {selectedFileId}
onClick={(fileId) => { onClick={(fileId) => {
selectedFileId = fileId; selectedFileId = fileId;
@ -962,28 +973,31 @@
{selectedFile?.meta?.name} {selectedFile?.meta?.name}
</div> </div>
<div> {#if knowledge?.write_access}
<button <div>
class="flex self-center w-fit text-sm py-1 px-2.5 dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-lg disabled:opacity-50 disabled:cursor-not-allowed" <button
disabled={isSaving} class="flex self-center w-fit text-sm py-1 px-2.5 dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-lg disabled:opacity-50 disabled:cursor-not-allowed"
on:click={() => { disabled={isSaving}
updateFileContentHandler(); on:click={() => {
}} updateFileContentHandler();
> }}
{$i18n.t('Save')} >
{#if isSaving} {$i18n.t('Save')}
<div class="ml-2 self-center"> {#if isSaving}
<Spinner /> <div class="ml-2 self-center">
</div> <Spinner />
{/if} </div>
</button> {/if}
</div> </button>
</div>
{/if}
</div> </div>
{#key selectedFile.id} {#key selectedFile.id}
<textarea <textarea
class="w-full h-full text-sm outline-none resize-none px-3 py-2" class="w-full h-full text-sm outline-none resize-none px-3 py-2"
bind:value={selectedFileContent} bind:value={selectedFileContent}
disabled={!knowledge?.write_access}
placeholder={$i18n.t('Add content here')} placeholder={$i18n.t('Add content here')}
/> />
{/key} {/key}
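The upload path above now builds the file metadata in a single expression: every upload is tagged with knowledge_id, and the STT language is spread in only for audio or video files when a language is configured. A condensed sketch of that conditional-spread construction; the buildUploadMetadata helper and its arguments are assumptions for illustration, not code from the repository:

// Mirrors the metadata object passed to uploadFile above.
const buildUploadMetadata = (file: File, knowledgeId: string, sttLanguage?: string) => ({
	knowledge_id: knowledgeId,
	...((file.type.startsWith('audio/') || file.type.startsWith('video/')) && sttLanguage
		? { language: sttLanguage }
		: {})
});

// Usage sketch: an audio file picks up the language, other files do not.
console.log(buildUploadMetadata(new File([], 'a.mp3', { type: 'audio/mpeg' }), 'kb-1', 'en'));
console.log(buildUploadMetadata(new File([], 'notes.txt', { type: 'text/plain' }), 'kb-1', 'en'));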


@ -16,6 +16,7 @@
import XMark from '$lib/components/icons/XMark.svelte'; import XMark from '$lib/components/icons/XMark.svelte';
import Spinner from '$lib/components/common/Spinner.svelte'; import Spinner from '$lib/components/common/Spinner.svelte';
export let knowledge = null;
export let selectedFileId = null; export let selectedFileId = null;
export let files = []; export let files = [];
@ -42,15 +43,17 @@
<div class="flex gap-2 items-center line-clamp-1"> <div class="flex gap-2 items-center line-clamp-1">
<div class="shrink-0"> <div class="shrink-0">
{#if file?.status !== 'uploading'} {#if file?.status !== 'uploading'}
<DocumentPage className="size-3" /> <DocumentPage className="size-3.5" />
{:else} {:else}
<Spinner className="size-3" /> <Spinner className="size-3.5" />
{/if} {/if}
</div> </div>
<div class="line-clamp-1"> <div class="line-clamp-1 text-sm">
{file?.name ?? file?.meta?.name} {file?.name ?? file?.meta?.name}
<span class="text-xs text-gray-500">{formatFileSize(file?.meta?.size)}</span> {#if file?.meta?.size}
<span class="text-xs text-gray-500">{formatFileSize(file?.meta?.size)}</span>
{/if}
</div> </div>
</div> </div>
</div> </div>
@ -77,19 +80,21 @@
</div> </div>
</button> </button>
<div class="flex items-center"> {#if knowledge?.write_access}
<Tooltip content={$i18n.t('Delete')}> <div class="flex items-center">
<button <Tooltip content={$i18n.t('Delete')}>
class="p-1 rounded-full hover:bg-gray-100 dark:hover:bg-gray-850 transition" <button
type="button" class="p-1 rounded-full hover:bg-gray-100 dark:hover:bg-gray-850 transition"
on:click={() => { type="button"
onDelete(file?.id ?? file?.tempId); on:click={() => {
}} onDelete(file?.id ?? file?.tempId);
> }}
<XMark /> >
</button> <XMark />
</Tooltip> </button>
</div> </Tooltip>
</div>
{/if}
</div> </div>
{/each} {/each}
</div> </div>


@ -68,13 +68,18 @@
let models = null; let models = null;
let total = null; let total = null;
let searchDebounceTimer;
$: if ( $: if (
page !== undefined && page !== undefined &&
query !== undefined && query !== undefined &&
selectedTag !== undefined && selectedTag !== undefined &&
viewOption !== undefined viewOption !== undefined
) { ) {
getModelList(); clearTimeout(searchDebounceTimer);
searchDebounceTimer = setTimeout(() => {
getModelList();
}, 300);
} }
const getModelList = async () => { const getModelList = async () => {
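The reactive block above coalesces rapid page, query, tag, and view changes into a single getModelList call 300 ms after the last change. The same debounce written as a standalone helper; debounce here is assumed for illustration and is not a utility exported by the codebase:

// Generic trailing-edge debounce equivalent to the inline timer above.
const debounce = <A extends unknown[]>(fn: (...args: A) => void, wait = 300) => {
	let timer: ReturnType<typeof setTimeout> | undefined;
	return (...args: A) => {
		clearTimeout(timer);
		timer = setTimeout(() => fn(...args), wait);
	};
};

// Usage sketch: the reactive statement would then reduce to a single call.
const refreshModels = debounce(() => console.log('fetch the model list here'), 300);
refreshModels();
refreshModels(); // only the last call within 300 ms actually runs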
@ -381,6 +386,7 @@
class=" w-full text-sm py-1 rounded-r-xl outline-hidden bg-transparent" class=" w-full text-sm py-1 rounded-r-xl outline-hidden bg-transparent"
bind:value={query} bind:value={query}
placeholder={$i18n.t('Search Models')} placeholder={$i18n.t('Search Models')}
maxlength="500"
/> />
{#if query} {#if query}
@ -430,213 +436,221 @@
</div> </div>
</div> </div>
{#if (models ?? []).length !== 0} {#if models !== null}
<div class=" px-3 my-2 gap-1 lg:gap-2 grid lg:grid-cols-2" id="model-list"> {#if (models ?? []).length !== 0}
{#each models as model (model.id)} <div class=" px-3 my-2 gap-1 lg:gap-2 grid lg:grid-cols-2" id="model-list">
<!-- svelte-ignore a11y_no_static_element_interactions --> {#each models as model (model.id)}
<!-- svelte-ignore a11y_click_events_have_key_events --> <!-- svelte-ignore a11y_no_static_element_interactions -->
<div <!-- svelte-ignore a11y_click_events_have_key_events -->
class=" flex cursor-pointer dark:hover:bg-gray-850/50 hover:bg-gray-50 transition rounded-2xl w-full p-2.5" <div
id="model-item-{model.id}" class=" flex cursor-pointer dark:hover:bg-gray-850/50 hover:bg-gray-50 transition rounded-2xl w-full p-2.5"
on:click={() => { id="model-item-{model.id}"
if ( on:click={() => {
$user?.role === 'admin' || if (
model.user_id === $user?.id || $user?.role === 'admin' ||
model.access_control.write.group_ids.some((wg) => groupIds.includes(wg)) model.user_id === $user?.id ||
) { model.access_control.write.group_ids.some((wg) => groupIds.includes(wg))
goto(`/workspace/models/edit?id=${encodeURIComponent(model.id)}`); ) {
} goto(`/workspace/models/edit?id=${encodeURIComponent(model.id)}`);
}} }
> }}
<div class="flex group/item gap-3.5 w-full"> >
<div class="self-center pl-0.5"> <div class="flex group/item gap-3.5 w-full">
<div class="flex bg-white rounded-2xl"> <div class="self-center pl-0.5">
<div <div class="flex bg-white rounded-2xl">
class="{model.is_active <div
? '' class="{model.is_active
: 'opacity-50 dark:opacity-50'} bg-transparent rounded-2xl" ? ''
> : 'opacity-50 dark:opacity-50'} bg-transparent rounded-2xl"
<img >
src={`${WEBUI_API_BASE_URL}/models/model/profile/image?id=${model.id}&lang=${$i18n.language}`} <img
alt="modelfile profile" src={`${WEBUI_API_BASE_URL}/models/model/profile/image?id=${model.id}&lang=${$i18n.language}`}
class=" rounded-2xl size-12 object-cover" alt="modelfile profile"
/> class=" rounded-2xl size-12 object-cover"
/>
</div>
</div> </div>
</div> </div>
</div>
<div class=" shrink-0 flex w-full min-w-0 flex-1 pr-1 self-center"> <div class=" shrink-0 flex w-full min-w-0 flex-1 pr-1 self-center">
<div class="flex h-full w-full flex-1 flex-col justify-start self-center group"> <div class="flex h-full w-full flex-1 flex-col justify-start self-center group">
<div class="flex-1 w-full"> <div class="flex-1 w-full">
<div class="flex items-center justify-between w-full"> <div class="flex items-center justify-between w-full">
<Tooltip content={model.name} className=" w-fit" placement="top-start"> <Tooltip content={model.name} className=" w-fit" placement="top-start">
<a <a
class=" font-medium line-clamp-1 hover:underline capitalize" class=" font-medium line-clamp-1 hover:underline capitalize"
href={`/?models=${encodeURIComponent(model.id)}`} href={`/?models=${encodeURIComponent(model.id)}`}
> >
{model.name} {model.name}
</a> </a>
</Tooltip> </Tooltip>
<div class=" flex items-center gap-1"> <div class=" flex items-center gap-1">
<div <div
class="flex justify-end w-full {model.is_active ? '' : 'text-gray-500'}" class="flex justify-end w-full {model.is_active ? '' : 'text-gray-500'}"
> >
<div class="flex justify-between items-center w-full"> <div class="flex justify-between items-center w-full">
<div class=""></div> <div class=""></div>
<div class="flex flex-row gap-0.5 items-center"> <div class="flex flex-row gap-0.5 items-center">
{#if shiftKey} {#if shiftKey}
<Tooltip <Tooltip
content={model?.meta?.hidden ? $i18n.t('Show') : $i18n.t('Hide')} content={model?.meta?.hidden
> ? $i18n.t('Show')
<button : $i18n.t('Hide')}
class="self-center w-fit text-sm p-1.5 dark:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl" >
type="button" <button
on:click={(e) => { class="self-center w-fit text-sm p-1.5 dark:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
e.stopPropagation(); type="button"
on:click={(e) => {
e.stopPropagation();
hideModelHandler(model);
}}
>
{#if model?.meta?.hidden}
<EyeSlash />
{:else}
<Eye />
{/if}
</button>
</Tooltip>
<Tooltip content={$i18n.t('Delete')}>
<button
class="self-center w-fit text-sm p-1.5 dark:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
type="button"
on:click={(e) => {
e.stopPropagation();
deleteModelHandler(model);
}}
>
<GarbageBin />
</button>
</Tooltip>
{:else}
<ModelMenu
user={$user}
{model}
editHandler={() => {
goto(
`/workspace/models/edit?id=${encodeURIComponent(model.id)}`
);
}}
shareHandler={() => {
shareModelHandler(model);
}}
cloneHandler={() => {
cloneModelHandler(model);
}}
exportHandler={() => {
exportModelHandler(model);
}}
hideHandler={() => {
hideModelHandler(model); hideModelHandler(model);
}} }}
> copyLinkHandler={() => {
{#if model?.meta?.hidden} copyLinkHandler(model);
<EyeSlash />
{:else}
<Eye />
{/if}
</button>
</Tooltip>
<Tooltip content={$i18n.t('Delete')}>
<button
class="self-center w-fit text-sm p-1.5 dark:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
type="button"
on:click={(e) => {
e.stopPropagation();
deleteModelHandler(model);
}} }}
deleteHandler={() => {
selectedModel = model;
showModelDeleteConfirm = true;
}}
onClose={() => {}}
> >
<GarbageBin /> <div
</button> class="self-center w-fit p-1 text-sm dark:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
</Tooltip> >
<EllipsisHorizontal className="size-5" />
</div>
</ModelMenu>
{/if}
</div>
</div>
</div>
<button
on:click={(e) => {
e.stopPropagation();
}}
>
<Tooltip
content={model.is_active ? $i18n.t('Enabled') : $i18n.t('Disabled')}
>
<Switch
bind:state={model.is_active}
on:change={async () => {
toggleModelById(localStorage.token, model.id);
_models.set(
await getModels(
localStorage.token,
$config?.features?.enable_direct_connections &&
($settings?.directConnections ?? null)
)
);
}}
/>
</Tooltip>
</button>
</div>
</div>
<div class=" flex gap-1 pr-2 -mt-1 items-center">
<Tooltip
content={model?.user?.email ?? $i18n.t('Deleted User')}
className="flex shrink-0"
placement="top-start"
>
<div class="shrink-0 text-gray-500 text-xs">
{$i18n.t('By {{name}}', {
name: capitalizeFirstLetter(
model?.user?.name ?? model?.user?.email ?? $i18n.t('Deleted User')
)
})}
</div>
</Tooltip>
<div>·</div>
<Tooltip
content={marked.parse(model?.meta?.description ?? model.id)}
className=" w-fit text-left"
placement="top-start"
>
<div class="flex gap-1 text-xs overflow-hidden">
<div class="line-clamp-1">
{#if (model?.meta?.description ?? '').trim()}
{model?.meta?.description}
{:else} {:else}
<ModelMenu {model.id}
user={$user}
{model}
editHandler={() => {
goto(
`/workspace/models/edit?id=${encodeURIComponent(model.id)}`
);
}}
shareHandler={() => {
shareModelHandler(model);
}}
cloneHandler={() => {
cloneModelHandler(model);
}}
exportHandler={() => {
exportModelHandler(model);
}}
hideHandler={() => {
hideModelHandler(model);
}}
copyLinkHandler={() => {
copyLinkHandler(model);
}}
deleteHandler={() => {
selectedModel = model;
showModelDeleteConfirm = true;
}}
onClose={() => {}}
>
<div
class="self-center w-fit p-1 text-sm dark:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
>
<EllipsisHorizontal className="size-5" />
</div>
</ModelMenu>
{/if} {/if}
</div> </div>
</div> </div>
</div> </Tooltip>
<button
on:click={(e) => {
e.stopPropagation();
}}
>
<Tooltip
content={model.is_active ? $i18n.t('Enabled') : $i18n.t('Disabled')}
>
<Switch
bind:state={model.is_active}
on:change={async () => {
toggleModelById(localStorage.token, model.id);
_models.set(
await getModels(
localStorage.token,
$config?.features?.enable_direct_connections &&
($settings?.directConnections ?? null)
)
);
}}
/>
</Tooltip>
</button>
</div> </div>
</div> </div>
<div class=" flex gap-1 pr-2 -mt-1 items-center">
<Tooltip
content={model?.user?.email ?? $i18n.t('Deleted User')}
className="flex shrink-0"
placement="top-start"
>
<div class="shrink-0 text-gray-500 text-xs">
{$i18n.t('By {{name}}', {
name: capitalizeFirstLetter(
model?.user?.name ?? model?.user?.email ?? $i18n.t('Deleted User')
)
})}
</div>
</Tooltip>
<div>·</div>
<Tooltip
content={marked.parse(model?.meta?.description ?? model.id)}
className=" w-fit text-left"
placement="top-start"
>
<div class="flex gap-1 text-xs overflow-hidden">
<div class="line-clamp-1">
{#if (model?.meta?.description ?? '').trim()}
{model?.meta?.description}
{:else}
{model.id}
{/if}
</div>
</div>
</Tooltip>
</div>
</div> </div>
</div> </div>
</div> </div>
</div> </div>
</div> {/each}
{/each} </div>
</div>
{#if total > 30} {#if total > 30}
<Pagination bind:page count={total} perPage={30} /> <Pagination bind:page count={total} perPage={30} />
{/if} {/if}
{:else} {:else}
<div class=" w-full h-full flex flex-col justify-center items-center my-16 mb-24"> <div class=" w-full h-full flex flex-col justify-center items-center my-16 mb-24">
<div class="max-w-md text-center"> <div class="max-w-md text-center">
<div class=" text-3xl mb-3">😕</div> <div class=" text-3xl mb-3">😕</div>
<div class=" text-lg font-medium mb-1">{$i18n.t('No models found')}</div> <div class=" text-lg font-medium mb-1">{$i18n.t('No models found')}</div>
<div class=" text-gray-500 text-center text-xs"> <div class=" text-gray-500 text-center text-xs">
{$i18n.t('Try adjusting your search or filter to find what you are looking for.')} {$i18n.t('Try adjusting your search or filter to find what you are looking for.')}
</div>
</div> </div>
</div> </div>
{/if}
{:else}
<div class="w-full h-full flex justify-center items-center py-10">
<Spinner className="size-4" />
</div> </div>
{/if} {/if}
</div> </div>


@ -2,7 +2,7 @@
import { getContext, onMount } from 'svelte'; import { getContext, onMount } from 'svelte';
import { config, knowledge, settings, user } from '$lib/stores'; import { config, knowledge, settings, user } from '$lib/stores';
import Selector from './Knowledge/Selector.svelte'; import KnowledgeSelector from './Knowledge/KnowledgeSelector.svelte';
import FileItem from '$lib/components/common/FileItem.svelte'; import FileItem from '$lib/components/common/FileItem.svelte';
import { getKnowledgeBases } from '$lib/apis/knowledge'; import { getKnowledgeBases } from '$lib/apis/knowledge';
@ -128,9 +128,6 @@
}; };
onMount(async () => { onMount(async () => {
if (!$knowledge) {
knowledge.set(await getKnowledgeBases(localStorage.token));
}
loaded = true; loaded = true;
}); });
</script> </script>
@ -190,8 +187,7 @@
{#if loaded} {#if loaded}
<div class="flex flex-wrap flex-row text-sm gap-1"> <div class="flex flex-wrap flex-row text-sm gap-1">
<Selector <KnowledgeSelector
knowledgeItems={$knowledge || []}
on:select={(e) => { on:select={(e) => {
const item = e.detail; const item = e.detail;
@ -210,7 +206,7 @@
> >
{$i18n.t('Select Knowledge')} {$i18n.t('Select Knowledge')}
</div> </div>
</Selector> </KnowledgeSelector>
{#if $user?.role === 'admin' || $user?.permissions?.chat?.file_upload} {#if $user?.role === 'admin' || $user?.permissions?.chat?.file_upload}
<button <button


@ -0,0 +1,195 @@
<script lang="ts">
import dayjs from 'dayjs';
import { DropdownMenu } from 'bits-ui';
import { onMount, getContext, createEventDispatcher } from 'svelte';
import { searchNotes } from '$lib/apis/notes';
import { searchKnowledgeBases, searchKnowledgeFiles } from '$lib/apis/knowledge';
import { flyAndScale } from '$lib/utils/transitions';
import { decodeString } from '$lib/utils';
import Dropdown from '$lib/components/common/Dropdown.svelte';
import Search from '$lib/components/icons/Search.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Database from '$lib/components/icons/Database.svelte';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
import ChevronRight from '$lib/components/icons/ChevronRight.svelte';
import PageEdit from '$lib/components/icons/PageEdit.svelte';
import DocumentPage from '$lib/components/icons/DocumentPage.svelte';
const i18n = getContext('i18n');
const dispatch = createEventDispatcher();
export let onClose: Function = () => {};
let show = false;
let query = '';
let noteItems = [];
let knowledgeItems = [];
let fileItems = [];
let items = [];
$: items = [...noteItems, ...knowledgeItems, ...fileItems];
$: if (query !== null) {
getItems();
}
const getItems = () => {
getNoteItems();
getKnowledgeItems();
getKnowledgeFileItems();
};
const getNoteItems = async () => {
const res = await searchNotes(localStorage.token, query).catch(() => {
return null;
});
if (res) {
noteItems = res.items.map((note) => {
return {
...note,
type: 'note',
name: note.title,
description: dayjs(note.updated_at / 1000000).fromNow()
};
});
}
};
const getKnowledgeItems = async () => {
const res = await searchKnowledgeBases(localStorage.token, query).catch(() => {
return null;
});
if (res) {
knowledgeItems = res.items.map((note) => {
return {
...note,
type: 'collection'
};
});
}
};
const getKnowledgeFileItems = async () => {
const res = await searchKnowledgeFiles(localStorage.token, query).catch(() => {
return null;
});
if (res) {
fileItems = res.items.map((file) => {
return {
...file,
type: 'file',
name: file.meta?.name || file.filename,
description: file.description || ''
};
});
}
};
onMount(async () => {
getItems();
});
</script>
<Dropdown
bind:show
on:change={(e) => {
if (e.detail === false) {
onClose();
query = '';
}
}}
>
<slot />
<div slot="content">
<DropdownMenu.Content
class=" text-black dark:text-white rounded-2xl shadow-lg border border-gray-200 dark:border-gray-800 flex flex-col bg-white dark:bg-gray-850 w-70 p-1.5"
sideOffset={8}
side="bottom"
align="start"
transition={flyAndScale}
>
<div class=" flex w-full space-x-2 px-2 pb-0.5">
<div class="flex flex-1">
<div class=" self-center mr-2">
<Search className="size-3.5" />
</div>
<input
class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-hidden bg-transparent"
bind:value={query}
placeholder={$i18n.t('Search')}
/>
</div>
</div>
<div class="max-h-56 overflow-y-scroll gap-0.5 flex flex-col">
{#if items.length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 pt-4 pb-6">
{$i18n.t('No knowledge found')}
</div>
{:else}
{#each items as item, i}
{#if i === 0 || item?.type !== items[i - 1]?.type}
<div class="px-2 text-xs text-gray-500 py-1">
{#if item?.type === 'note'}
{$i18n.t('Notes')}
{:else if item?.type === 'collection'}
{$i18n.t('Collections')}
{:else if item?.type === 'file'}
{$i18n.t('Files')}
{/if}
</div>
{/if}
<div
class=" px-2.5 py-1 rounded-xl w-full text-left flex justify-between items-center text-sm hover:bg-gray-50 hover:dark:bg-gray-800 hover:dark:text-gray-100 selected-command-option-button"
>
<button
class="w-full flex-1"
type="button"
on:click={() => {
dispatch('select', item);
show = false;
}}
>
<div class=" text-black dark:text-gray-100 flex items-center gap-1 shrink-0">
{#if item.type === 'note'}
<Tooltip content={$i18n.t('Note')} placement="top">
<PageEdit className="size-4" />
</Tooltip>
{:else if item.type === 'collection'}
<Tooltip content={$i18n.t('Collection')} placement="top">
<Database className="size-4" />
</Tooltip>
{:else if item.type === 'file'}
<Tooltip content={$i18n.t('File')} placement="top">
<DocumentPage className="size-4" />
</Tooltip>
{/if}
<Tooltip
content={item.description || decodeString(item?.name)}
placement="top-start"
>
<div class="line-clamp-1 flex-1 text-sm text-left">
{decodeString(item?.name)}
</div>
</Tooltip>
</div>
</button>
</div>
{/each}
{/if}
</div>
</DropdownMenu.Content>
</div>
</Dropdown>


@ -1,186 +0,0 @@
<script lang="ts">
import Fuse from 'fuse.js';
import { DropdownMenu } from 'bits-ui';
import { onMount, getContext, createEventDispatcher } from 'svelte';
import { flyAndScale } from '$lib/utils/transitions';
import { knowledge } from '$lib/stores';
import Dropdown from '$lib/components/common/Dropdown.svelte';
import Search from '$lib/components/icons/Search.svelte';
import { getNoteList } from '$lib/apis/notes';
import dayjs from 'dayjs';
const i18n = getContext('i18n');
const dispatch = createEventDispatcher();
export let onClose: Function = () => {};
export let knowledgeItems = [];
let query = '';
let items = [];
let filteredItems = [];
let fuse = null;
$: if (fuse) {
filteredItems = query
? fuse.search(query).map((e) => {
return e.item;
})
: items;
}
const decodeString = (str: string) => {
try {
return decodeURIComponent(str);
} catch (e) {
return str;
}
};
onMount(async () => {
let notes = await getNoteList(localStorage.token).catch(() => {
return [];
});
notes = notes.map((note) => {
return {
...note,
type: 'note',
name: note.title,
description: dayjs(note.updated_at / 1000000).fromNow()
};
});
let collections = knowledgeItems
.filter((item) => !item?.meta?.document)
.map((item) => ({
...item,
type: 'collection'
}));
let collection_files =
knowledgeItems.length > 0
? [
...knowledgeItems
.reduce((a, item) => {
return [
...new Set([
...a,
...(item?.files ?? []).map((file) => ({
...file,
collection: { name: item.name, description: item.description } // DO NOT REMOVE, USED IN FILE DESCRIPTION/ATTACHMENT
}))
])
];
}, [])
.map((file) => ({
...file,
name: file?.meta?.name,
description: `${file?.collection?.name} - ${file?.collection?.description}`,
type: 'file'
}))
]
: [];
items = [...notes, ...collections, ...collection_files];
fuse = new Fuse(items, {
keys: ['name', 'description']
});
});
</script>
<Dropdown
on:change={(e) => {
if (e.detail === false) {
onClose();
query = '';
}
}}
>
<slot />
<div slot="content">
<DropdownMenu.Content
class="w-full max-w-96 rounded-xl p-1 border border-gray-100 dark:border-gray-800 z-[99999999] bg-white dark:bg-gray-850 dark:text-white shadow-lg"
sideOffset={8}
side="bottom"
align="start"
transition={flyAndScale}
>
<div class=" flex w-full space-x-2 py-0.5 px-2 pb-2">
<div class="flex flex-1">
<div class=" self-center ml-1 mr-3">
<Search />
</div>
<input
class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-hidden bg-transparent"
bind:value={query}
placeholder={$i18n.t('Search Knowledge')}
/>
</div>
</div>
<div class="max-h-56 overflow-y-scroll">
{#if filteredItems.length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 py-4">
{$i18n.t('No knowledge found')}
</div>
{:else}
{#each filteredItems as item}
<DropdownMenu.Item
class="flex gap-2.5 items-center px-3 py-2 text-sm cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 rounded-md"
on:click={() => {
dispatch('select', item);
}}
>
<div>
<div class=" font-medium text-black dark:text-gray-100 flex items-center gap-1">
{#if item.legacy}
<div
class="bg-gray-500/20 text-gray-700 dark:text-gray-200 rounded-sm uppercase text-xs font-semibold px-1 shrink-0"
>
Legacy
</div>
{:else if item?.meta?.document}
<div
class="bg-gray-500/20 text-gray-700 dark:text-gray-200 rounded-sm uppercase text-xs font-semibold px-1 shrink-0"
>
Document
</div>
{:else if item?.type === 'file'}
<div
class="bg-gray-500/20 text-gray-700 dark:text-gray-200 rounded-sm uppercase text-xs font-semibold px-1 shrink-0"
>
File
</div>
{:else if item?.type === 'note'}
<div
class="bg-blue-500/20 text-blue-700 dark:text-blue-200 rounded-sm uppercase text-xs font-semibold px-1 shrink-0"
>
Note
</div>
{:else}
<div
class="bg-green-500/20 text-green-700 dark:text-green-200 rounded-sm uppercase text-xs font-semibold px-1 shrink-0"
>
Collection
</div>
{/if}
<div class="line-clamp-1">
{decodeString(item?.name)}
</div>
</div>
<div class=" text-xs text-gray-600 dark:text-gray-100 line-clamp-1">
{item?.description}
</div>
</div>
</DropdownMenu.Item>
{/each}
{/if}
</div>
</DropdownMenu.Content>
</div>
</Dropdown>


@ -2,12 +2,11 @@
import { toast } from 'svelte-sonner'; import { toast } from 'svelte-sonner';
import { onMount, getContext, tick } from 'svelte'; import { onMount, getContext, tick } from 'svelte';
import { models, tools, functions, knowledge as knowledgeCollections, user } from '$lib/stores'; import { models, tools, functions, user } from '$lib/stores';
import { WEBUI_BASE_URL } from '$lib/constants'; import { WEBUI_BASE_URL } from '$lib/constants';
import { getTools } from '$lib/apis/tools'; import { getTools } from '$lib/apis/tools';
import { getFunctions } from '$lib/apis/functions'; import { getFunctions } from '$lib/apis/functions';
import { getKnowledgeBases } from '$lib/apis/knowledge';
import AdvancedParams from '$lib/components/chat/Settings/Advanced/AdvancedParams.svelte'; import AdvancedParams from '$lib/components/chat/Settings/Advanced/AdvancedParams.svelte';
import Tags from '$lib/components/common/Tags.svelte'; import Tags from '$lib/components/common/Tags.svelte';
@ -223,7 +222,6 @@
onMount(async () => { onMount(async () => {
await tools.set(await getTools(localStorage.token)); await tools.set(await getTools(localStorage.token));
await functions.set(await getFunctions(localStorage.token)); await functions.set(await getFunctions(localStorage.token));
await knowledgeCollections.set([...(await getKnowledgeBases(localStorage.token))]);
// Scroll to top 'workspace-container' element // Scroll to top 'workspace-container' element
const workspaceContainer = document.getElementById('workspace-container'); const workspaceContainer = document.getElementById('workspace-container');


@ -75,6 +75,8 @@ export const settings: Writable<Settings> = writable({});
export const audioQueue = writable(null); export const audioQueue = writable(null);
export const sidebarWidth = writable(260);
export const showSidebar = writable(false); export const showSidebar = writable(false);
export const showSearch = writable(false); export const showSearch = writable(false);
export const showSettings = writable(false); export const showSettings = writable(false);


@ -383,7 +383,7 @@
{:else} {:else}
<div <div
class="w-full flex-1 h-full flex items-center justify-center {$showSidebar class="w-full flex-1 h-full flex items-center justify-center {$showSidebar
? ' md:max-w-[calc(100%-260px)]' ? ' md:max-w-[calc(100%-var(--sidebar-width))]'
: ' '}" : ' '}"
> >
<Spinner className="size-5" /> <Spinner className="size-5" />


@ -29,7 +29,7 @@
{#if loaded} {#if loaded}
<div <div
class=" flex flex-col h-screen max-h-[100dvh] flex-1 transition-width duration-200 ease-in-out {$showSidebar class=" flex flex-col h-screen max-h-[100dvh] flex-1 transition-width duration-200 ease-in-out {$showSidebar
? 'md:max-w-[calc(100%-260px)]' ? 'md:max-w-[calc(100%-var(--sidebar-width))]'
: ' md:max-w-[calc(100%-49px)]'} w-full max-w-full" : ' md:max-w-[calc(100%-49px)]'} w-full max-w-full"
> >
<nav class=" px-2.5 pt-1.5 backdrop-blur-xl drag-region"> <nav class=" px-2.5 pt-1.5 backdrop-blur-xl drag-region">


@ -18,7 +18,7 @@
<div <div
class=" flex flex-col w-full h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar class=" flex flex-col w-full h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar
? 'md:max-w-[calc(100%-260px)]' ? 'md:max-w-[calc(100%-var(--sidebar-width))]'
: ''} max-w-full" : ''} max-w-full"
> >
<nav class=" px-2.5 pt-1.5 backdrop-blur-xl w-full drag-region"> <nav class=" px-2.5 pt-1.5 backdrop-blur-xl w-full drag-region">


@ -41,7 +41,7 @@
{#if loaded} {#if loaded}
<div <div
class=" flex flex-col w-full h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar class=" flex flex-col w-full h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar
? 'md:max-w-[calc(100%-260px)]' ? 'md:max-w-[calc(100%-var(--sidebar-width))]'
: ''} max-w-full" : ''} max-w-full"
> >
<nav class=" px-2 pt-1.5 backdrop-blur-xl w-full drag-region"> <nav class=" px-2 pt-1.5 backdrop-blur-xl w-full drag-region">


@ -20,7 +20,7 @@
{#if loaded} {#if loaded}
<div <div
id="note-container" id="note-container"
class="w-full h-full {$showSidebar ? 'md:max-w-[calc(100%-260px)]' : ''}" class="w-full h-full {$showSidebar ? 'md:max-w-[calc(100%-var(--sidebar-width))]' : ''}"
> >
<NoteEditor id={$page.params.id} /> <NoteEditor id={$page.params.id} />
</div> </div>


@ -18,7 +18,7 @@
<div <div
class=" flex flex-col w-full h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar class=" flex flex-col w-full h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar
? 'md:max-w-[calc(100%-260px)]' ? 'md:max-w-[calc(100%-var(--sidebar-width))]'
: ''} max-w-full" : ''} max-w-full"
> >
<nav class=" px-2.5 pt-1.5 backdrop-blur-xl w-full drag-region"> <nav class=" px-2.5 pt-1.5 backdrop-blur-xl w-full drag-region">


@ -52,7 +52,7 @@
{#if loaded} {#if loaded}
<div <div
class=" relative flex flex-col w-full h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar class=" relative flex flex-col w-full h-screen max-h-[100dvh] transition-width duration-200 ease-in-out {$showSidebar
? 'md:max-w-[calc(100%-260px)]' ? 'md:max-w-[calc(100%-var(--sidebar-width))]'
: ''} max-w-full" : ''} max-w-full"
> >
<nav class=" px-2.5 pt-1.5 backdrop-blur-xl drag-region"> <nav class=" px-2.5 pt-1.5 backdrop-blur-xl drag-region">