Compare commits

...

20 commits

Author SHA1 Message Date
jamie-dit
c21c2d2d10
Merge b766a23e36 into 68219d84a9 2025-12-11 12:24:14 +11:00
Timothy Jaeryang Baek
68219d84a9 refac 2025-12-10 17:08:31 -05:00
Timothy Jaeryang Baek
6068e23590 refac 2025-12-10 17:08:18 -05:00
Timothy Jaeryang Baek
d7467a86e2 refac 2025-12-10 17:03:51 -05:00
Timothy Jaeryang Baek
d098c57d4d refac 2025-12-10 17:00:28 -05:00
Timothy Jaeryang Baek
693636d971 enh/refac: show read only kbs 2025-12-10 16:58:53 -05:00
Timothy Jaeryang Baek
a6ef82c5ed refac: styling 2025-12-10 16:43:43 -05:00
Timothy Jaeryang Baek
79cfe29bb2 refac: channel_file and knowledge table migration 2025-12-10 16:41:22 -05:00
Timothy Jaeryang Baek
d1d42128e5 refac/fix: channel files 2025-12-10 15:53:45 -05:00
Timothy Jaeryang Baek
2bccf8350d enh: channel files 2025-12-10 15:48:42 -05:00
Timothy Jaeryang Baek
c15201620d refac: kb files 2025-12-10 15:48:27 -05:00
Andreas
f31ca75892
Fix typo in user permission environment variables (#19860) 2025-12-10 15:09:15 -05:00
Timothy Jaeryang Baek
a7993f6f4e refac
2025-12-10 12:22:40 -05:00
jamie
b766a23e36
fix: MCP OAuth discovery via Protected Resource metadata flow
When an MCP server's OAuth authorization server is on a different domain
(e.g., Todoist MCP at ai.todoist.net with OAuth at todoist.com), the
current implementation fails because it only looks for OAuth metadata at
the MCP server's domain.

This commit implements the full MCP Protected Resource discovery flow as
specified in the MCP authorization spec:

1. Make an unauthenticated request to the MCP endpoint
2. Parse the WWW-Authenticate header to get the resource_metadata URL
3. Fetch the Protected Resource metadata
4. Extract the authorization_servers array
5. Use those servers for OAuth metadata discovery

The fix is backwards-compatible: if Protected Resource discovery fails,
it falls back to the existing behavior.

Fixes #19794
2025-12-07 12:53:22 +11:00
Tim Baek
6f1486ffd0
Merge pull request #19466 from open-webui/dev
0.6.41
2025-12-02 17:28:46 -05:00
Tim Baek
140605e660
Merge pull request #19462 from open-webui/dev
0.6.40
2025-11-25 06:01:33 -05:00
Tim Baek
9899293f05
Merge pull request #19448 from open-webui/dev
0.6.39
2025-11-25 05:31:34 -05:00
Tim Baek
e3faec62c5
Merge pull request #19416 from open-webui/dev
0.6.38
2025-11-24 07:00:31 -05:00
Tim Baek
fc05e0a6c5
Merge pull request #19405 from open-webui/dev
chore: format
2025-11-23 22:16:33 -05:00
Tim Baek
fe6783c166
Merge pull request #19030 from open-webui/dev
0.6.37
2025-11-23 22:10:05 -05:00
17 changed files with 803 additions and 179 deletions

View file

@@ -1306,7 +1306,7 @@ USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING = (
 USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING = (
     os.environ.get(
-        "USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING", "False"
+        "USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING", "False"
     ).lower()
     == "true"
 )
@@ -1345,7 +1345,7 @@ USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING = (
 USER_PERMISSIONS_NOTES_ALLOW_SHARING = (
-    os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING", "False").lower()
+    os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_SHARING", "False").lower()
     == "true"
 )
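
The corrected lookup can be exercised in isolation; this is a minimal sketch of the same case-insensitive boolean-parsing pattern, with the env var set inline purely for illustration:

```python
import os

# Illustrative only: after the typo fix, the env var name matches the
# setting it controls (the stray "PUBLIC" is gone from the lookup key).
os.environ["USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING"] = "True"

USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING = (
    os.environ.get(
        "USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING", "False"
    ).lower()
    == "true"
)

# The .lower() == "true" comparison accepts "True", "true", "TRUE", etc.
print(USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING)
```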

View file

@@ -0,0 +1,54 @@
"""Add channel file table
Revision ID: 6283dc0e4d8d
Revises: 3e0e00844bb0
Create Date: 2025-12-10 15:11:39.424601
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
import open_webui.internal.db
# revision identifiers, used by Alembic.
revision: str = "6283dc0e4d8d"
down_revision: Union[str, None] = "3e0e00844bb0"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.create_table(
"channel_file",
sa.Column("id", sa.Text(), primary_key=True),
sa.Column("user_id", sa.Text(), nullable=False),
sa.Column(
"channel_id",
sa.Text(),
sa.ForeignKey("channel.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column(
"file_id",
sa.Text(),
sa.ForeignKey("file.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("created_at", sa.BigInteger(), nullable=False),
sa.Column("updated_at", sa.BigInteger(), nullable=False),
# indexes
sa.Index("ix_channel_file_channel_id", "channel_id"),
sa.Index("ix_channel_file_file_id", "file_id"),
sa.Index("ix_channel_file_user_id", "user_id"),
# unique constraints
sa.UniqueConstraint(
"channel_id", "file_id", name="uq_channel_file_channel_file"
), # prevent duplicate entries
)
def downgrade() -> None:
op.drop_table("channel_file")

View file

@@ -0,0 +1,49 @@
"""Update channel file and knowledge table
Revision ID: 81cc2ce44d79
Revises: 6283dc0e4d8d
Create Date: 2025-12-10 16:07:58.001282
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
import open_webui.internal.db
# revision identifiers, used by Alembic.
revision: str = "81cc2ce44d79"
down_revision: Union[str, None] = "6283dc0e4d8d"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Add message_id column to channel_file table
with op.batch_alter_table("channel_file", schema=None) as batch_op:
batch_op.add_column(
sa.Column(
"message_id",
sa.Text(),
sa.ForeignKey(
"message.id", ondelete="CASCADE", name="fk_channel_file_message_id"
),
nullable=True,
)
)
# Add data column to knowledge table
with op.batch_alter_table("knowledge", schema=None) as batch_op:
batch_op.add_column(sa.Column("data", sa.JSON(), nullable=True))
def downgrade() -> None:
# Remove message_id column from channel_file table
with op.batch_alter_table("channel_file", schema=None) as batch_op:
batch_op.drop_column("message_id")
# Remove data column from knowledge table
with op.batch_alter_table("knowledge", schema=None) as batch_op:
batch_op.drop_column("data")

View file

@@ -10,7 +10,18 @@ from pydantic import BaseModel, ConfigDict
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON, case, cast
from sqlalchemy import (
BigInteger,
Boolean,
Column,
ForeignKey,
String,
Text,
JSON,
UniqueConstraint,
case,
cast,
)
from sqlalchemy import or_, func, select, and_, text
from sqlalchemy.sql import exists
@@ -137,6 +148,41 @@ class ChannelMemberModel(BaseModel):
updated_at: Optional[int] = None # timestamp in epoch (time_ns)
class ChannelFile(Base):
__tablename__ = "channel_file"
id = Column(Text, unique=True, primary_key=True)
user_id = Column(Text, nullable=False)
channel_id = Column(
Text, ForeignKey("channel.id", ondelete="CASCADE"), nullable=False
)
message_id = Column(
Text, ForeignKey("message.id", ondelete="CASCADE"), nullable=True
)
file_id = Column(Text, ForeignKey("file.id", ondelete="CASCADE"), nullable=False)
created_at = Column(BigInteger, nullable=False)
updated_at = Column(BigInteger, nullable=False)
__table_args__ = (
UniqueConstraint("channel_id", "file_id", name="uq_channel_file_channel_file"),
)
class ChannelFileModel(BaseModel):
model_config = ConfigDict(from_attributes=True)
id: str
channel_id: str
file_id: str
user_id: str
created_at: int # timestamp in epoch (time_ns)
updated_at: int # timestamp in epoch (time_ns)
class ChannelWebhook(Base):
__tablename__ = "channel_webhook"
@@ -642,6 +688,135 @@ class ChannelTable:
channel = db.query(Channel).filter(Channel.id == id).first()
return ChannelModel.model_validate(channel) if channel else None
def get_channels_by_file_id(self, file_id: str) -> list[ChannelModel]:
with get_db() as db:
channel_files = (
db.query(ChannelFile).filter(ChannelFile.file_id == file_id).all()
)
channel_ids = [cf.channel_id for cf in channel_files]
channels = db.query(Channel).filter(Channel.id.in_(channel_ids)).all()
return [ChannelModel.model_validate(channel) for channel in channels]
def get_channels_by_file_id_and_user_id(
self, file_id: str, user_id: str
) -> list[ChannelModel]:
with get_db() as db:
# 1. Determine which channels have this file
channel_file_rows = (
db.query(ChannelFile).filter(ChannelFile.file_id == file_id).all()
)
channel_ids = [row.channel_id for row in channel_file_rows]
if not channel_ids:
return []
# 2. Load all channel rows that still exist
channels = (
db.query(Channel)
.filter(
Channel.id.in_(channel_ids),
Channel.deleted_at.is_(None),
Channel.archived_at.is_(None),
)
.all()
)
if not channels:
return []
# Preload user's group membership
user_group_ids = [g.id for g in Groups.get_groups_by_member_id(user_id)]
allowed_channels = []
for channel in channels:
# --- Case A: group or dm => user must be an active member ---
if channel.type in ["group", "dm"]:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel.id,
ChannelMember.user_id == user_id,
ChannelMember.is_active.is_(True),
)
.first()
)
if membership:
allowed_channels.append(ChannelModel.model_validate(channel))
continue
# --- Case B: standard channel => rely on ACL permissions ---
query = db.query(Channel).filter(Channel.id == channel.id)
query = self._has_permission(
db,
query,
{"user_id": user_id, "group_ids": user_group_ids},
permission="read",
)
allowed = query.first()
if allowed:
allowed_channels.append(ChannelModel.model_validate(allowed))
return allowed_channels
def get_channel_by_id_and_user_id(
self, id: str, user_id: str
) -> Optional[ChannelModel]:
with get_db() as db:
# Fetch the channel
channel: Channel = (
db.query(Channel)
.filter(
Channel.id == id,
Channel.deleted_at.is_(None),
Channel.archived_at.is_(None),
)
.first()
)
if not channel:
return None
# If the channel is a group or dm, read access requires membership (active)
if channel.type in ["group", "dm"]:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == id,
ChannelMember.user_id == user_id,
ChannelMember.is_active.is_(True),
)
.first()
)
if membership:
return ChannelModel.model_validate(channel)
else:
return None
# For channels that are NOT group/dm, fall back to ACL-based read access
query = db.query(Channel).filter(Channel.id == id)
# Determine user groups
user_group_ids = [
group.id for group in Groups.get_groups_by_member_id(user_id)
]
# Apply ACL rules
query = self._has_permission(
db,
query,
{"user_id": user_id, "group_ids": user_group_ids},
permission="read",
)
channel_allowed = query.first()
return (
ChannelModel.model_validate(channel_allowed)
if channel_allowed
else None
)
def update_channel_by_id(
self, id: str, form_data: ChannelForm
) -> Optional[ChannelModel]:
@@ -663,6 +838,65 @@ class ChannelTable:
db.commit()
return ChannelModel.model_validate(channel) if channel else None
def add_file_to_channel_by_id(
self, channel_id: str, file_id: str, user_id: str
) -> Optional[ChannelFileModel]:
with get_db() as db:
channel_file = ChannelFileModel(
**{
"id": str(uuid.uuid4()),
"channel_id": channel_id,
"file_id": file_id,
"user_id": user_id,
"created_at": int(time.time()),
"updated_at": int(time.time()),
}
)
try:
result = ChannelFile(**channel_file.model_dump())
db.add(result)
db.commit()
db.refresh(result)
if result:
return ChannelFileModel.model_validate(result)
else:
return None
except Exception:
return None
def set_file_message_id_in_channel_by_id(
self, channel_id: str, file_id: str, message_id: str
) -> bool:
try:
with get_db() as db:
channel_file = (
db.query(ChannelFile)
.filter_by(channel_id=channel_id, file_id=file_id)
.first()
)
if not channel_file:
return False
channel_file.message_id = message_id
channel_file.updated_at = int(time.time())
db.commit()
return True
except Exception:
return False
def remove_file_from_channel_by_id(self, channel_id: str, file_id: str) -> bool:
try:
with get_db() as db:
db.query(ChannelFile).filter_by(
channel_id=channel_id, file_id=file_id
).delete()
db.commit()
return True
except Exception:
return False
def delete_channel_by_id(self, id: str):
with get_db() as db:
db.query(Channel).filter(Channel.id == id).delete()

View file

@@ -126,6 +126,49 @@ class ChatTitleIdResponse(BaseModel):
created_at: int
class ChatListResponse(BaseModel):
items: list[ChatModel]
total: int
class ChatUsageStatsResponse(BaseModel):
id: str # chat id
models: dict = {} # models used in the chat with their usage counts
message_count: int # number of messages in the chat
history_models: dict = {} # models used in the chat history with their usage counts
history_message_count: int # number of messages in the chat history
history_user_message_count: int # number of user messages in the chat history
history_assistant_message_count: (
int # number of assistant messages in the chat history
)
average_response_time: (
float # average response time of assistant messages in seconds
)
average_user_message_content_length: (
float # average length of user message contents
)
average_assistant_message_content_length: (
float # average length of assistant message contents
)
tags: list[str] = [] # tags associated with the chat
last_message_at: int # timestamp of the last message
updated_at: int
created_at: int
model_config = ConfigDict(extra="allow")
class ChatUsageStatsListResponse(BaseModel):
items: list[ChatUsageStatsResponse]
total: int
model_config = ConfigDict(extra="allow")
class ChatTable:
def _clean_null_bytes(self, obj):
"""
@@ -675,14 +718,31 @@ class ChatTable:
)
return [ChatModel.model_validate(chat) for chat in all_chats]
def get_chats_by_user_id(self, user_id: str) -> list[ChatModel]:
def get_chats_by_user_id(
self, user_id: str, skip: Optional[int] = None, limit: Optional[int] = None
) -> ChatListResponse:
with get_db() as db:
all_chats = (
query = (
db.query(Chat)
.filter_by(user_id=user_id)
.order_by(Chat.updated_at.desc())
)
return [ChatModel.model_validate(chat) for chat in all_chats]
total = query.count()
if skip is not None:
query = query.offset(skip)
if limit is not None:
query = query.limit(limit)
all_chats = query.all()
return ChatListResponse(
**{
"items": [ChatModel.model_validate(chat) for chat in all_chats],
"total": total,
}
)
def get_pinned_chats_by_user_id(self, user_id: str) -> list[ChatModel]:
with get_db() as db:

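The skip/limit pagination added to get_chats_by_user_id pairs with the page/items_per_page query parameters that the router hunk below introduces. The offset math can be sketched on its own (the helper name is illustrative, not part of the PR):

```python
def page_to_skip_limit(page: int, items_per_page: int) -> tuple[int, int]:
    """Translate a 1-based page number into the (skip, limit) pair that
    the paginated get_chats_by_user_id accepts; mirrors the router's
    skip = (page - 1) * limit arithmetic."""
    limit = items_per_page
    skip = (page - 1) * limit
    return skip, limit


print(page_to_skip_limit(1, 50))  # first page starts at offset 0
print(page_to_skip_limit(3, 50))  # third page skips the first 100 rows
```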
View file

@@ -232,6 +232,21 @@ class KnowledgeTable:
except Exception:
return None
def get_knowledge_by_id_and_user_id(
self, id: str, user_id: str
) -> Optional[KnowledgeModel]:
knowledge = self.get_knowledge_by_id(id)
if not knowledge:
return None
if knowledge.user_id == user_id:
return knowledge
user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
if has_access(user_id, "write", knowledge.access_control, user_group_ids):
return knowledge
return None
def get_knowledges_by_file_id(self, file_id: str) -> list[KnowledgeModel]:
try:
with get_db() as db:

View file

@@ -1093,6 +1093,15 @@ async def post_new_message(
try:
message, channel = await new_message_handler(request, id, form_data, user)
try:
if files := message.data.get("files", []):
for file in files:
Channels.set_file_message_id_in_channel_by_id(
channel.id, file.get("id", ""), message.id
)
except Exception as e:
log.debug(e)
active_user_ids = get_user_ids_from_room(f"channel:{channel.id}")
async def background_handler():

View file

@@ -8,6 +8,7 @@ from open_webui.socket.main import get_event_emitter
from open_webui.models.chats import (
ChatForm,
ChatImportForm,
ChatUsageStatsListResponse,
ChatsImportForm,
ChatResponse,
Chats,
@@ -68,16 +69,25 @@ def get_session_user_chat_list(
############################
# GetChatList
# GetChatUsageStats
# EXPERIMENTAL: may be removed in future releases
############################
@router.get("/stats/usage", response_model=list[ChatTitleIdResponse])
def get_session_user_chat_usage(
@router.get("/stats/usage", response_model=ChatUsageStatsListResponse)
def get_session_user_chat_usage_stats(
items_per_page: Optional[int] = 50,
page: Optional[int] = 1,
user=Depends(get_verified_user),
):
try:
chats = Chats.get_chats_by_user_id(user.id)
limit = items_per_page
skip = (page - 1) * limit
result = Chats.get_chats_by_user_id(user.id, skip=skip, limit=limit)
chats = result.items
total = result.total
chat_stats = []
for chat in chats:
@@ -86,37 +96,96 @@ def get_session_user_chat_usage(
if messages_map and message_id:
try:
history_models = {}
history_message_count = len(messages_map)
history_user_messages = []
history_assistant_messages = []
for message in messages_map.values():
if message.get("role", "") == "user":
history_user_messages.append(message)
elif message.get("role", "") == "assistant":
history_assistant_messages.append(message)
model = message.get("model", None)
if model:
if model not in history_models:
history_models[model] = 0
history_models[model] += 1
average_user_message_content_length = (
sum(
len(message.get("content", ""))
for message in history_user_messages
)
/ len(history_user_messages)
if len(history_user_messages) > 0
else 0
)
average_assistant_message_content_length = (
sum(
len(message.get("content", ""))
for message in history_assistant_messages
)
/ len(history_assistant_messages)
if len(history_assistant_messages) > 0
else 0
)
response_times = []
for message in history_assistant_messages:
user_message_id = message.get("parentId", None)
if user_message_id and user_message_id in messages_map:
user_message = messages_map[user_message_id]
response_time = message.get(
"timestamp", 0
) - user_message.get("timestamp", 0)
response_times.append(response_time)
average_response_time = (
sum(response_times) / len(response_times)
if len(response_times) > 0
else 0
)
message_list = get_message_list(messages_map, message_id)
message_count = len(message_list)
last_assistant_message = next(
(
message
for message in reversed(message_list)
if message["role"] == "assistant"
),
None,
)
models = {}
for message in reversed(message_list):
if message.get("role") == "assistant":
model = message.get("model", None)
if model:
if model not in models:
models[model] = 0
models[model] += 1
annotation = message.get("annotation", {})
model_id = (
last_assistant_message.get("model", None)
if last_assistant_message
else None
)
chat_stats.append(
{
"id": chat.id,
"model_id": model_id,
"models": models,
"message_count": message_count,
"history_models": history_models,
"history_message_count": history_message_count,
"history_user_message_count": len(history_user_messages),
"history_assistant_message_count": len(
history_assistant_messages
),
"average_response_time": average_response_time,
"average_user_message_content_length": average_user_message_content_length,
"average_assistant_message_content_length": average_assistant_message_content_length,
"tags": chat.meta.get("tags", []),
"model_ids": chat.chat.get("models", []),
"last_message_at": message_list[-1].get("timestamp", None),
"updated_at": chat.updated_at,
"created_at": chat.created_at,
}
)
except Exception as e:
pass
return chat_stats
return ChatUsageStatsListResponse(items=chat_stats, total=total)
except Exception as e:
log.exception(e)

View file

@@ -27,6 +27,7 @@ from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.vector.factory import VECTOR_DB_CLIENT
from open_webui.models.channels import Channels
from open_webui.models.users import Users
from open_webui.models.files import (
FileForm,
@@ -91,6 +92,10 @@ def has_access_to_file(
if knowledge_base.id == knowledge_base_id:
return True
channels = Channels.get_channels_by_file_id_and_user_id(file_id, user.id)
if access_type == "read" and channels:
return True
return False
@@ -138,6 +143,7 @@ def process_uploaded_file(request, file, file_path, file_item, file_metadata, us
f"File type {file.content_type} is not provided, but trying to process anyway"
)
process_file(request, ProcessFileForm(file_id=file_item.id), user=user)
except Exception as e:
log.error(f"Error processing file: {file_item.id}")
Files.update_file_data_by_id(
@@ -247,6 +253,13 @@ def upload_file_handler(
),
)
if "channel_id" in file_metadata:
channel = Channels.get_channel_by_id_and_user_id(
file_metadata["channel_id"], user.id
)
if channel:
Channels.add_file_to_channel_by_id(channel.id, file_item.id, user.id)
if process:
if background_tasks and process_in_background:
background_tasks.add_task(

View file

@@ -41,7 +41,11 @@ router = APIRouter()
############################
@router.get("/", response_model=list[KnowledgeUserResponse])
class KnowledgeAccessResponse(KnowledgeUserResponse):
write_access: Optional[bool] = False
@router.get("/", response_model=list[KnowledgeAccessResponse])
async def get_knowledge(user=Depends(get_verified_user)):
# Return knowledge bases with read access
knowledge_bases = []
@@ -51,27 +55,35 @@ async def get_knowledge(user=Depends(get_verified_user)):
knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "read")
return [
KnowledgeUserResponse(
KnowledgeAccessResponse(
**knowledge_base.model_dump(),
files=Knowledges.get_file_metadatas_by_id(knowledge_base.id),
write_access=(
user.id == knowledge_base.user_id
or has_access(user.id, "write", knowledge_base.access_control)
),
)
for knowledge_base in knowledge_bases
]
@router.get("/list", response_model=list[KnowledgeUserResponse])
@router.get("/list", response_model=list[KnowledgeAccessResponse])
async def get_knowledge_list(user=Depends(get_verified_user)):
# Return knowledge bases with write access
knowledge_bases = []
if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
knowledge_bases = Knowledges.get_knowledge_bases()
else:
knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "write")
knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "read")
return [
KnowledgeUserResponse(
KnowledgeAccessResponse(
**knowledge_base.model_dump(),
files=Knowledges.get_file_metadatas_by_id(knowledge_base.id),
write_access=(
user.id == knowledge_base.user_id
or has_access(user.id, "write", knowledge_base.access_control)
),
)
for knowledge_base in knowledge_bases
]
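Both endpoints now decorate each knowledge base with a `write_access` flag: true for the owner, or for a user with an explicit "write" grant. A condensed sketch of that computation; the `access_control` shape below is a simplified stand-in (the real `has_access` also resolves group membership and handles the public/private cases):

```python
def has_access(user_id, access_type, access_control):
    # Simplified: assumes access_control like {"write": {"user_ids": [...]}}.
    if not access_control:
        return False
    return user_id in access_control.get(access_type, {}).get("user_ids", [])

def write_access(user_id, owner_id, access_control):
    # Mirrors the flag computed in the responses above.
    return user_id == owner_id or has_access(user_id, "write", access_control)
```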
@@ -187,6 +199,7 @@ async def reindex_knowledge_files(request: Request, user=Depends(get_verified_us
class KnowledgeFilesResponse(KnowledgeResponse):
files: list[FileMetadataResponse]
write_access: Optional[bool] = False
@router.get("/{id}", response_model=Optional[KnowledgeFilesResponse])
@@ -203,6 +216,10 @@ async def get_knowledge_by_id(id: str, user=Depends(get_verified_user)):
return KnowledgeFilesResponse(
**knowledge.model_dump(),
files=Knowledges.get_file_metadatas_by_id(knowledge.id),
write_access=(
user.id == knowledge.user_id
or has_access(user.id, "write", knowledge.access_control)
),
)
else:
raise HTTPException(
@@ -363,11 +380,6 @@ def add_file_to_knowledge_by_id(
detail=ERROR_MESSAGES.FILE_NOT_PROCESSED,
)
# Add file to knowledge base
Knowledges.add_file_to_knowledge_by_id(
knowledge_id=id, file_id=form_data.file_id, user_id=user.id
)
# Add content to the vector database
try:
process_file(
@@ -375,6 +387,11 @@ def add_file_to_knowledge_by_id(
ProcessFileForm(file_id=form_data.file_id, collection_name=id),
user=user,
)
# Add file to knowledge base
Knowledges.add_file_to_knowledge_by_id(
knowledge_id=id, file_id=form_data.file_id, user_id=user.id
)
except Exception as e:
log.debug(e)
raise HTTPException(
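Note the reordering in this pair of hunks: `Knowledges.add_file_to_knowledge_by_id` now runs after `process_file`, so a failed embedding run no longer leaves a dangling file linked to the knowledge base. The control flow, condensed into a hypothetical helper with injected callables:

```python
def add_file(knowledge_id, file_id, process_file, register):
    # Vectorize first; this may raise and abort the whole operation.
    process_file(file_id, collection_name=knowledge_id)
    # Only link the file to the KB once processing succeeded.
    register(knowledge_id, file_id)
```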


@@ -241,6 +241,61 @@ def is_in_blocked_groups(group_name: str, groups: list) -> bool:
return False
async def discover_authorization_server_from_mcp(mcp_server_url: str) -> list[str]:
"""
Discover OAuth authorization servers by following the MCP Protected Resource flow.
According to the MCP spec (https://modelcontextprotocol.io/specification/2025-03-26/basic/authorization):
1. Make an unauthenticated request to the MCP endpoint
2. Parse WWW-Authenticate header to get resource_metadata URL
3. Fetch Protected Resource metadata to get authorization_servers
Returns:
List of authorization server base URLs, or empty list if discovery fails
"""
authorization_servers = []
try:
# Step 1: Make unauthenticated request to MCP endpoint to get WWW-Authenticate header
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.post(
mcp_server_url,
json={"jsonrpc": "2.0", "method": "initialize", "params": {}, "id": 1},
headers={"Content-Type": "application/json"},
ssl=AIOHTTP_CLIENT_SESSION_SSL,
) as response:
if response.status == 401:
www_auth = response.headers.get("WWW-Authenticate", "")
# Parse resource_metadata from WWW-Authenticate header
# Format: Bearer resource_metadata="https://..."
match = re.search(r'resource_metadata="([^"]+)"', www_auth)
if match:
resource_metadata_url = match.group(1)
log.debug(f"Found resource_metadata URL: {resource_metadata_url}")
# Step 2: Fetch Protected Resource metadata
async with session.get(
resource_metadata_url, ssl=AIOHTTP_CLIENT_SESSION_SSL
) as resource_response:
if resource_response.status == 200:
resource_metadata = await resource_response.json()
# Step 3: Extract authorization_servers
servers = resource_metadata.get(
"authorization_servers", []
)
if servers:
authorization_servers = servers
log.debug(
f"Discovered authorization servers: {servers}"
)
except Exception as e:
log.debug(f"MCP Protected Resource discovery failed: {e}")
return authorization_servers
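The pivotal step in the function above is pulling the `resource_metadata` URL out of the `WWW-Authenticate` challenge (per the OAuth protected-resource metadata convention the MCP spec builds on). Isolated as a small helper for illustration:

```python
import re

def parse_resource_metadata(www_authenticate: str):
    # Expected challenge form: Bearer resource_metadata="https://..."
    match = re.search(r'resource_metadata="([^"]+)"', www_authenticate)
    return match.group(1) if match else None
```

A 401 response without this parameter simply yields None, and discovery falls through to the standard well-known URLs.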
def get_parsed_and_base_url(server_url) -> tuple[urllib.parse.ParseResult, str]:
parsed = urllib.parse.urlparse(server_url)
base_url = f"{parsed.scheme}://{parsed.netloc}"
@@ -303,9 +358,29 @@ async def get_oauth_client_info_with_dynamic_client_registration(
response_types=["code"],
)
# First, try MCP Protected Resource discovery flow
# This handles cases where the OAuth server is on a different domain than the MCP server
# (e.g., Todoist MCP at ai.todoist.net, OAuth at todoist.com)
authorization_servers = await discover_authorization_server_from_mcp(
oauth_server_url
)
# Build discovery URLs - prioritize authorization servers from MCP discovery
all_discovery_urls = []
for auth_server in authorization_servers:
auth_server = auth_server.rstrip("/")
all_discovery_urls.extend(
[
f"{auth_server}/.well-known/oauth-authorization-server",
f"{auth_server}/.well-known/openid-configuration",
]
)
# Fall back to standard discovery URLs based on the MCP server URL
all_discovery_urls.extend(get_discovery_urls(oauth_server_url))
# Attempt to fetch OAuth server metadata to get registration endpoint & scopes
discovery_urls = get_discovery_urls(oauth_server_url)
for url in discovery_urls:
for url in all_discovery_urls:
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.get(
url, ssl=AIOHTTP_CLIENT_SESSION_SSL


@@ -365,6 +365,7 @@
bind:chatInputElement
bind:replyToMessage
{typingUsers}
{channel}
userSuggestions={true}
channelSuggestions={true}
disabled={!channel?.write_access}


@@ -42,9 +42,10 @@
import XMark from '../icons/XMark.svelte';
export let placeholder = $i18n.t('Type here...');
export let chatInputElement;
export let id = null;
export let chatInputElement;
export let channel = null;
export let typingUsers = [];
export let inputLoading = false;
@@ -459,15 +460,16 @@
try {
// During the file upload, file content is automatically extracted.
// If the file is an audio file, provide the language for STT.
let metadata = null;
if (
(file.type.startsWith('audio/') || file.type.startsWith('video/')) &&
let metadata = {
channel_id: channel.id,
// If the file is an audio file, provide the language for STT.
...((file.type.startsWith('audio/') || file.type.startsWith('video/')) &&
$settings?.audio?.stt?.language
) {
metadata = {
language: $settings?.audio?.stt?.language
};
}
? {
language: $settings?.audio?.stt?.language
}
: {})
};
const uploadedFile = await uploadFile(localStorage.token, file, metadata, process);


@@ -337,7 +337,7 @@
>
<Plus className="size-3" strokeWidth="2.5" />
<div class=" md:ml-1 text-xs">{$i18n.t('New Note')}</div>
<div class=" ml-1 text-xs">{$i18n.t('New Note')}</div>
</button>
</div>
</div>


@@ -196,39 +196,35 @@
<div class=" my-2 px-3 grid grid-cols-1 lg:grid-cols-2 gap-2">
{#each filteredItems as item}
<Tooltip content={item?.description ?? item.name}>
<button
class=" flex space-x-4 cursor-pointer text-left w-full px-3 py-2.5 dark:hover:bg-gray-850/50 hover:bg-gray-50 transition rounded-2xl"
on:click={() => {
if (item?.meta?.document) {
toast.error(
$i18n.t(
'Only collections can be edited, create a new knowledge base to edit/add documents.'
)
);
} else {
goto(`/workspace/knowledge/${item.id}`);
}
}}
>
<div class=" w-full">
<div class=" self-center flex-1">
<div class="flex items-center justify-between -my-1">
<div class=" flex gap-2 items-center">
<div>
{#if item?.meta?.document}
<Badge type="muted" content={$i18n.t('Document')} />
{:else}
<Badge type="success" content={$i18n.t('Collection')} />
{/if}
</div>
<div class=" text-xs text-gray-500 line-clamp-1">
{$i18n.t('Updated')}
{dayjs(item.updated_at * 1000).fromNow()}
</div>
<button
class=" flex space-x-4 cursor-pointer text-left w-full px-3 py-2.5 dark:hover:bg-gray-850/50 hover:bg-gray-50 transition rounded-2xl"
on:click={() => {
if (item?.meta?.document) {
toast.error(
$i18n.t(
'Only collections can be edited, create a new knowledge base to edit/add documents.'
)
);
} else {
goto(`/workspace/knowledge/${item.id}`);
}
}}
>
<div class=" w-full">
<div class=" self-center flex-1 justify-between">
<div class="flex items-center justify-between -my-1 h-8">
<div class=" flex gap-2 items-center justify-between w-full">
<div>
<Badge type="success" content={$i18n.t('Collection')} />
</div>
{#if !item?.write_access}
<div>
<Badge type="muted" content={$i18n.t('Read Only')} />
</div>
{/if}
</div>
{#if item?.write_access}
<div class="flex items-center gap-2">
<div class=" flex self-center">
<ItemMenu
@@ -239,33 +235,42 @@
/>
</div>
</div>
</div>
{/if}
</div>
<div class=" flex items-center gap-1 justify-between px-1.5">
<div class=" flex items-center gap-1 justify-between px-1.5">
<Tooltip content={item?.description ?? item.name}>
<div class=" flex items-center gap-2">
<div class=" text-sm font-medium line-clamp-1 capitalize">{item.name}</div>
</div>
</Tooltip>
<div>
<div class="text-xs text-gray-500">
<Tooltip
content={item?.user?.email ?? $i18n.t('Deleted User')}
className="flex shrink-0"
placement="top-start"
>
{$i18n.t('By {{name}}', {
name: capitalizeFirstLetter(
item?.user?.name ?? item?.user?.email ?? $i18n.t('Deleted User')
)
})}
</Tooltip>
<div class="flex items-center gap-2">
<Tooltip content={dayjs(item.updated_at * 1000).format('LLLL')}>
<div class=" text-xs text-gray-500 line-clamp-1">
{$i18n.t('Updated')}
{dayjs(item.updated_at * 1000).fromNow()}
</div>
</Tooltip>
<div class="text-xs text-gray-500">
<Tooltip
content={item?.user?.email ?? $i18n.t('Deleted User')}
className="flex shrink-0"
placement="top-start"
>
{$i18n.t('By {{name}}', {
name: capitalizeFirstLetter(
item?.user?.name ?? item?.user?.email ?? $i18n.t('Deleted User')
)
})}
</Tooltip>
</div>
</div>
</div>
</div>
</button>
</Tooltip>
</div>
</button>
{/each}
</div>
{:else}


@@ -206,16 +206,16 @@
fileItems = [...(fileItems ?? []), fileItem];
try {
// If the file is an audio file, provide the language for STT.
let metadata = null;
if (
(file.type.startsWith('audio/') || file.type.startsWith('video/')) &&
let metadata = {
knowledge_id: knowledge.id,
// If the file is an audio file, provide the language for STT.
...((file.type.startsWith('audio/') || file.type.startsWith('video/')) &&
$settings?.audio?.stt?.language
) {
metadata = {
language: $settings?.audio?.stt?.language
};
}
? {
language: $settings?.audio?.stt?.language
}
: {})
};
const uploadedFile = await uploadFile(localStorage.token, file, metadata).catch((e) => {
toast.error(`${e}`);
@@ -429,7 +429,7 @@
});
if (res) {
knowledge = res;
fileItems = [];
toast.success($i18n.t('Knowledge reset successfully.'));
// Upload directory
@@ -441,16 +441,14 @@
};
const addFileHandler = async (fileId) => {
const updatedKnowledge = await addFileToKnowledgeById(localStorage.token, id, fileId).catch(
(e) => {
toast.error(`${e}`);
return null;
}
);
const res = await addFileToKnowledgeById(localStorage.token, id, fileId).catch((e) => {
toast.error(`${e}`);
return null;
});
if (updatedKnowledge) {
knowledge = updatedKnowledge;
if (res) {
toast.success($i18n.t('File added successfully.'));
init();
} else {
toast.error($i18n.t('Failed to add file.'));
fileItems = fileItems.filter((file) => file.id !== fileId);
@@ -462,13 +460,12 @@
console.log('Starting file deletion process for:', fileId);
// Remove from knowledge base only
const updatedKnowledge = await removeFileFromKnowledgeById(localStorage.token, id, fileId);
const res = await removeFileFromKnowledgeById(localStorage.token, id, fileId);
console.log('Knowledge base updated:', res);
console.log('Knowledge base updated:', updatedKnowledge);
if (updatedKnowledge) {
knowledge = updatedKnowledge;
if (res) {
toast.success($i18n.t('File removed successfully.'));
await init();
}
} catch (e) {
console.error('Error in deleteFileHandler:', e);
@@ -569,6 +566,11 @@
e.preventDefault();
dragged = false;
if (!knowledge?.write_access) {
toast.error($i18n.t('You do not have permission to upload files to this knowledge base.'));
return;
}
const handleUploadingFileFolder = (items) => {
for (const item of items) {
if (item.isFile) {
@@ -750,6 +752,7 @@
class="text-left w-full font-medium text-lg font-primary bg-transparent outline-hidden flex-1"
bind:value={knowledge.name}
placeholder={$i18n.t('Knowledge Name')}
disabled={!knowledge?.write_access}
on:input={() => {
changeDebounceHandler();
}}
@@ -766,21 +769,27 @@
</div>
</div>
<div class="self-center shrink-0">
<button
class="bg-gray-50 hover:bg-gray-100 text-black dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-white transition px-2 py-1 rounded-full flex gap-1 items-center"
type="button"
on:click={() => {
showAccessControlModal = true;
}}
>
<LockClosed strokeWidth="2.5" className="size-3.5" />
{#if knowledge?.write_access}
<div class="self-center shrink-0">
<button
class="bg-gray-50 hover:bg-gray-100 text-black dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-white transition px-2 py-1 rounded-full flex gap-1 items-center"
type="button"
on:click={() => {
showAccessControlModal = true;
}}
>
<LockClosed strokeWidth="2.5" className="size-3.5" />
<div class="text-sm font-medium shrink-0">
{$i18n.t('Access')}
</div>
</button>
</div>
<div class="text-sm font-medium shrink-0">
{$i18n.t('Access')}
</div>
</button>
</div>
{:else}
<div class="text-xs shrink-0 text-gray-500">
{$i18n.t('Read Only')}
</div>
{/if}
</div>
<div class="flex w-full">
@@ -789,6 +798,7 @@
class="text-left text-xs w-full text-gray-500 bg-transparent outline-hidden"
bind:value={knowledge.description}
placeholder={$i18n.t('Knowledge Description')}
disabled={!knowledge?.write_access}
on:input={() => {
changeDebounceHandler();
}}
@@ -815,22 +825,24 @@
}}
/>
<div>
<AddContentMenu
on:upload={(e) => {
if (e.detail.type === 'directory') {
uploadDirectoryHandler();
} else if (e.detail.type === 'text') {
showAddTextContentModal = true;
} else {
document.getElementById('files-input').click();
}
}}
on:sync={(e) => {
showSyncConfirmModal = true;
}}
/>
</div>
{#if knowledge?.write_access}
<div>
<AddContentMenu
on:upload={(e) => {
if (e.detail.type === 'directory') {
uploadDirectoryHandler();
} else if (e.detail.type === 'text') {
showAddTextContentModal = true;
} else {
document.getElementById('files-input').click();
}
}}
on:sync={(e) => {
showSyncConfirmModal = true;
}}
/>
</div>
{/if}
</div>
</div>
@@ -899,6 +911,7 @@
<div class=" flex overflow-y-auto h-full w-full scrollbar-hidden text-xs">
<Files
files={fileItems}
{knowledge}
{selectedFileId}
onClick={(fileId) => {
selectedFileId = fileId;
@@ -962,28 +975,31 @@
{selectedFile?.meta?.name}
</div>
<div>
<button
class="flex self-center w-fit text-sm py-1 px-2.5 dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-lg disabled:opacity-50 disabled:cursor-not-allowed"
disabled={isSaving}
on:click={() => {
updateFileContentHandler();
}}
>
{$i18n.t('Save')}
{#if isSaving}
<div class="ml-2 self-center">
<Spinner />
</div>
{/if}
</button>
</div>
{#if knowledge?.write_access}
<div>
<button
class="flex self-center w-fit text-sm py-1 px-2.5 dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-lg disabled:opacity-50 disabled:cursor-not-allowed"
disabled={isSaving}
on:click={() => {
updateFileContentHandler();
}}
>
{$i18n.t('Save')}
{#if isSaving}
<div class="ml-2 self-center">
<Spinner />
</div>
{/if}
</button>
</div>
{/if}
</div>
{#key selectedFile.id}
<textarea
class="w-full h-full text-sm outline-none resize-none px-3 py-2"
bind:value={selectedFileContent}
disabled={!knowledge?.write_access}
placeholder={$i18n.t('Add content here')}
/>
{/key}


@@ -16,6 +16,7 @@
import XMark from '$lib/components/icons/XMark.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';
export let knowledge = null;
export let selectedFileId = null;
export let files = [];
@@ -50,7 +51,9 @@
<div class="line-clamp-1">
{file?.name ?? file?.meta?.name}
<span class="text-xs text-gray-500">{formatFileSize(file?.meta?.size)}</span>
{#if file?.meta?.size}
<span class="text-xs text-gray-500">{formatFileSize(file?.meta?.size)}</span>
{/if}
</div>
</div>
</div>
@@ -77,19 +80,21 @@
</div>
</button>
<div class="flex items-center">
<Tooltip content={$i18n.t('Delete')}>
<button
class="p-1 rounded-full hover:bg-gray-100 dark:hover:bg-gray-850 transition"
type="button"
on:click={() => {
onDelete(file?.id ?? file?.tempId);
}}
>
<XMark />
</button>
</Tooltip>
</div>
{#if knowledge?.write_access}
<div class="flex items-center">
<Tooltip content={$i18n.t('Delete')}>
<button
class="p-1 rounded-full hover:bg-gray-100 dark:hover:bg-gray-850 transition"
type="button"
on:click={() => {
onDelete(file?.id ?? file?.tempId);
}}
>
<XMark />
</button>
</Tooltip>
</div>
{/if}
</div>
{/each}
</div>