Compare commits

...

181 commits

Author SHA1 Message Date
Tim Baek
6f1486ffd0
Merge pull request #19466 from open-webui/dev
0.6.41
2025-12-02 17:28:46 -05:00
Timothy Jaeryang Baek
73f7e91dec chore: format
2025-12-02 17:16:12 -05:00
Classic298
8361f73ca6
chore: 0.6.41 Changelog (#19473)
* Update CHANGELOG.md
2025-12-02 17:15:19 -05:00
Timothy Jaeryang Baek
11efb982c1 refac 2025-12-02 17:14:45 -05:00
Timothy Jaeryang Baek
9d87688ecc chore: bump 2025-12-02 16:53:05 -05:00
Classic298
4f9677ffcf
Update translation.json (#19697)
* Update translation.json
2025-12-02 16:48:11 -05:00
Classic298
a49e1d87ad
fix: Default Group ID assignment on SSO/OAUTH and LDAP (#19685)
* fix (#99)

Co-authored-by: Tim Baek <tim@openwebui.com>
Co-authored-by: Claude <noreply@anthropic.com>

* Update auths.py

* unified logic

* PUSH

* remove getattr

* rem getattr

* whitespace

* Update oauth.py

* trusted header group sync

Added default group re-application after trusted header group sync

* not apply after syncs

* .

* rem

---------

Co-authored-by: Tim Baek <tim@openwebui.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 16:48:00 -05:00
Timothy Jaeryang Baek
9a65ed2260 chore: format 2025-12-02 16:06:57 -05:00
Classic298
864d54095f
Update translation.json (#19696) 2025-12-02 16:06:06 -05:00
Classic298
b29fdc2a0c
Update milvus_multitenancy.py (#19695) 2025-12-02 15:38:06 -05:00
Classic298
12f237ff80
fix: Update milvus.py (#19602)
* Update milvus.py

---------

Co-authored-by: Tim Baek <tim@openwebui.com>
2025-12-02 15:30:31 -05:00
Timothy Jaeryang Baek
192c2af7ba refac 2025-12-02 15:17:47 -05:00
Matthew Kusz
17bfd38696
Fix dropdown backgrounds (#19693) 2025-12-02 15:16:36 -05:00
Henne
a7e614ca4c
feat: Adds document intelligence model configuration (#19692)
* Adds document intelligence model configuration

Enables the configuration of the Document Intelligence model to be used by the RAG pipeline.

This allows users to specify the model they want to use for document processing, providing flexibility and control over the extraction process.

* Added title to Document Intelligence model config
2025-12-02 14:41:09 -05:00
Timothy Jaeryang Baek
e5c6b739c2 refac 2025-12-02 11:43:00 -05:00
Timothy Jaeryang Baek
34169b3581 refac 2025-12-02 11:31:23 -05:00
Timothy Jaeryang Baek
01868e856a enh: group members endpoint 2025-12-02 11:24:23 -05:00
Timothy Jaeryang Baek
e301d1962e refac/perf: has_access_to_file optimization 2025-12-02 11:11:17 -05:00
Timothy Jaeryang Baek
9f6c91987f refac 2025-12-02 11:00:34 -05:00
Timothy Jaeryang Baek
d19023288e feat/enh: kb files db migration 2025-12-02 10:53:32 -05:00
Timothy Jaeryang Baek
29236aefe8 refac
2025-12-02 10:25:38 -05:00
Timothy Jaeryang Baek
6ce9afd95d refac 2025-12-02 09:21:03 -05:00
Timothy Jaeryang Baek
39f7575b64 refac: show connection type for custom models 2025-12-02 06:19:48 -05:00
Timothy Jaeryang Baek
954aaa6bdc refac: styling 2025-12-02 05:47:05 -05:00
Timothy Jaeryang Baek
aa589fcbd9 refac 2025-12-02 05:36:45 -05:00
Timothy Jaeryang Baek
9f42b9369f refac 2025-12-02 05:29:34 -05:00
Timothy Jaeryang Baek
143d3fbce2 refac 2025-12-02 04:18:19 -05:00
Poccia
6e531679f4
fix/adjust web search to properly block domains (#19670)
Co-authored-by: Tim Baek <tim@openwebui.com>
2025-12-02 04:17:32 -05:00
Timothy Jaeryang Baek
562f22960c refac 2025-12-02 04:07:02 -05:00
Timothy Jaeryang Baek
5388cc1bc6 refac 2025-12-02 04:03:44 -05:00
Classic298
0a14196afb
Update milvus_multitenancy.py (#19680) 2025-12-02 03:57:14 -05:00
Timothy Jaeryang Baek
7b16637043 feat: signin rate limit 2025-12-02 03:52:38 -05:00
Timothy Jaeryang Baek
734c04ebf0 refac 2025-12-02 02:53:49 -05:00
Classic298
4f50571b53
Chore: dep bump (#19667)
* Update pyproject.toml

* Update requirements-min.txt

* Update requirements.txt

---------

Co-authored-by: Tim Baek <tim@openwebui.com>
2025-12-02 02:34:57 -05:00
Timothy Jaeryang Baek
52ccab8fc0 refac
2025-12-01 13:52:09 -05:00
Timothy Jaeryang Baek
f5e8d4d5a0 refac 2025-12-01 13:34:57 -05:00
Timothy Jaeryang Baek
51621ba91a feat/enh: user status 2025-12-01 13:18:59 -05:00
Timothy Jaeryang Baek
dba86bc980 fix: audit 2025-12-01 10:59:01 -05:00
Shirasawa
21f3411692
i18n: improve Chinese translation (#19651) 2025-12-01 10:58:06 -05:00
Timothy Jaeryang Baek
91473c788c chore: otel bump 2025-12-01 10:57:00 -05:00
Timothy Jaeryang Baek
25f0c26b25 chore: otel bump 2025-12-01 10:29:20 -05:00
Timothy Jaeryang Baek
9791c9bd8b refac
2025-11-30 15:56:42 -05:00
Timothy Jaeryang Baek
c62609faba refac 2025-11-30 14:51:44 -05:00
Timothy Jaeryang Baek
88decab9be refac 2025-11-30 14:23:08 -05:00
Timothy Jaeryang Baek
d499c3aed8 refac 2025-11-30 14:17:54 -05:00
Timothy Jaeryang Baek
277f3a91f1 refac 2025-11-30 14:06:16 -05:00
joaoback
1818f2b3d9
Update translation.json (pt-BR) (#19603)
translations of the new items that have been included
2025-11-30 11:09:16 -05:00
Timothy Jaeryang Baek
a0826ec9fe feat/enh: dm from user profile preview 2025-11-30 11:04:06 -05:00
Timothy Jaeryang Baek
3c846617cd refac 2025-11-30 10:45:54 -05:00
Timothy Jaeryang Baek
39645102d1 refac 2025-11-30 10:40:24 -05:00
Timothy Jaeryang Baek
3f1d9ccbf8 feat/enh: add/remove users from group channel 2025-11-30 10:33:50 -05:00
Timothy Jaeryang Baek
781aeebd2a refac
2025-11-30 08:28:19 -05:00
Timothy Jaeryang Baek
f589b7c189 feat/enh: group channel 2025-11-30 08:24:27 -05:00
Timothy Jaeryang Baek
696f356881 refac 2025-11-30 08:12:22 -05:00
Timothy Jaeryang Baek
515f85fe1c refac 2025-11-30 05:17:52 -05:00
Timothy Jaeryang Baek
4d74e6cefa refac: styling 2025-11-30 05:01:41 -05:00
Timothy Jaeryang Baek
3ebb3e2143 refac: styling 2025-11-30 03:56:12 -05:00
Timothy Jaeryang Baek
69b82edd63 refac 2025-11-30 03:50:14 -05:00
Timothy Jaeryang Baek
9d39b9b42c refac: styling 2025-11-30 03:47:34 -05:00
Timothy Jaeryang Baek
e65d92fc6f refac 2025-11-30 02:32:34 -05:00
Timothy Jaeryang Baek
f3c8c7045d refac
2025-11-29 14:19:55 -05:00
Timothy Jaeryang Baek
c9185aaf44 refac 2025-11-29 14:18:22 -05:00
Timothy Jaeryang Baek
05e79bdd0c enh: message reaction user names 2025-11-29 13:54:52 -05:00
Timothy Jaeryang Baek
fb6b18faef refac: knowledge file delete behaviour 2025-11-29 13:19:06 -05:00
Tim Baek
b56adf01e3
Merge pull request #19584 from Classic298/patch-1
refac: improve weaker model's ability of understanding that the image was created successfully
2025-11-29 13:09:45 -05:00
Timothy Jaeryang Baek
356e982d30 refac 2025-11-29 12:57:17 -05:00
Classic298
bb4b547574
Update middleware.py 2025-11-29 11:11:21 +01:00
Timothy Jaeryang Baek
20340c3e4e refac
2025-11-29 01:05:39 -05:00
Timothy Jaeryang Baek
6c53bf7175 refac: styling
2025-11-28 23:10:01 -05:00
Timothy Jaeryang Baek
ff121413da refac 2025-11-28 22:54:00 -05:00
Timothy Jaeryang Baek
c1d760692f refac: db group 2025-11-28 22:48:58 -05:00
Timothy Jaeryang Baek
a7c7993bbf refac/fix: temp chat image generation
2025-11-28 11:11:56 -05:00
Timothy Jaeryang Baek
0f3156651c refac/fix: ollama model delete 2025-11-28 11:01:22 -05:00
Timothy Jaeryang Baek
c8071a3180 refac 2025-11-28 10:51:30 -05:00
Timothy Jaeryang Baek
25994dd3da refac/enh: channel message 2025-11-28 10:45:48 -05:00
Timothy Jaeryang Baek
b9e849f17d refac: styling 2025-11-28 10:08:37 -05:00
Timothy Jaeryang Baek
80fbb29ccc refac: styling 2025-11-28 10:07:57 -05:00
Timothy Jaeryang Baek
7b1895ec8a refac 2025-11-28 10:04:06 -05:00
Timothy Jaeryang Baek
aae2fce173 feat/enh: pinned messages in channels 2025-11-28 09:58:44 -05:00
Timothy Jaeryang Baek
451907cc92 refac 2025-11-28 09:32:37 -05:00
Timothy Jaeryang Baek
1b095d12ff refac: admin user list active indicator 2025-11-28 08:45:31 -05:00
Timothy Jaeryang Baek
0518749d51 refac: pin icons
2025-11-28 08:01:42 -05:00
Tim Baek
fc06c16dd4
Merge pull request #19573 from open-webui/update-user-table
refac/db: update user table
2025-11-28 07:55:00 -05:00
Timothy Jaeryang Baek
33b59adf27 refac 2025-11-28 07:42:45 -05:00
Timothy Jaeryang Baek
70948f8803 enh/refac: deprecate USER_POOL 2025-11-28 07:39:02 -05:00
Timothy Jaeryang Baek
c2634d45ad refac 2025-11-28 07:27:55 -05:00
Timothy Jaeryang Baek
8ef482a52a refac: user oauth display 2025-11-28 06:59:59 -05:00
Timothy Jaeryang Baek
dcf50c4758 refac: api_key table migration 2025-11-28 06:49:10 -05:00
Timothy Jaeryang Baek
742832a850 refac 2025-11-28 06:41:41 -05:00
Timothy Jaeryang Baek
0a4358c3d1 refac: oauth_sub -> oauth migration 2025-11-28 06:39:36 -05:00
Timothy Jaeryang Baek
369298a83e refac: user table db migration 2025-11-28 06:29:41 -05:00
Timothy Jaeryang Baek
b99c9b277a refac: styling 2025-11-28 04:29:50 -05:00
Timothy Jaeryang Baek
4b6773885c enh: dm active user indicator 2025-11-28 04:24:25 -05:00
Timothy Jaeryang Baek
d232e433e8 refac: profile preview 2025-11-28 04:22:54 -05:00
Timothy Jaeryang Baek
848f3fd4d8 refac: hide active user count in sidebar user menu 2025-11-28 03:40:16 -05:00
Timothy Jaeryang Baek
453ea9b9a1 refac/fix: db migration issue 2025-11-28 03:10:48 -05:00
Timothy Jaeryang Baek
6ee50770cd refac 2025-11-28 02:44:36 -05:00
Timothy Jaeryang Baek
15dc607779 refac: rm print 2025-11-28 02:34:25 -05:00
Timothy Jaeryang Baek
32c888c280 refac 2025-11-28 01:40:52 -05:00
Timothy Jaeryang Baek
99a7823e01 refac: db 2025-11-28 01:17:43 -05:00
RomualdYT
022f9ff3a5
Update french translation.json (#19547)
2025-11-27 09:21:27 -05:00
Timothy Jaeryang Baek
ad86707605 refac 2025-11-27 08:20:14 -05:00
Timothy Jaeryang Baek
289801b608 refac: styling 2025-11-27 08:12:06 -05:00
Timothy Jaeryang Baek
6bb204eb80 refac 2025-11-27 08:07:39 -05:00
Timothy Jaeryang Baek
560702a8f7 refac 2025-11-27 08:04:41 -05:00
Timothy Jaeryang Baek
6752772c1d chore: format 2025-11-27 07:57:31 -05:00
Timothy Jaeryang Baek
d645cdbaf3 refac 2025-11-27 07:49:19 -05:00
Timothy Jaeryang Baek
3b4d7d568b refac 2025-11-27 07:43:10 -05:00
Timothy Jaeryang Baek
d5d0e72590 refac 2025-11-27 07:39:00 -05:00
Timothy Jaeryang Baek
f1a7de94ba refac 2025-11-27 07:33:33 -05:00
Timothy Jaeryang Baek
acccb9afdd feat: dm channels 2025-11-27 07:27:32 -05:00
Timothy Jaeryang Baek
f2c56fc839 refac 2025-11-27 06:03:22 -05:00
Timothy Jaeryang Baek
dd6b808e69 refac 2025-11-27 05:25:38 -05:00
Timothy Jaeryang Baek
7a374ca2a5 refac 2025-11-27 05:11:17 -05:00
Timothy Jaeryang Baek
421aba7cd7 refac: hide channel add button for users 2025-11-27 04:49:29 -05:00
Timothy Jaeryang Baek
09b6ea38c5 feat/enh: group export endpoint 2025-11-27 04:44:01 -05:00
Timothy Jaeryang Baek
28659f6af5 refac/fix: files batch/add endpoint 2025-11-27 04:35:12 -05:00
Timothy Jaeryang Baek
64b4d5d9c2 feat/enh: channels unread messages count 2025-11-27 04:31:04 -05:00
Aleix Dorca
c7a48c50a3
Update catalan translation.json (#19536) 2025-11-27 03:02:08 -05:00
Timothy Jaeryang Baek
b5e5617a41 enh: redis dict for internal models state
Co-Authored-By: cw.a <57549718+acwoo97@users.noreply.github.com>
2025-11-27 01:33:52 -05:00
Timothy Jaeryang Baek
ff4b1b9824 refac: chat history data structure 2025-11-27 00:10:53 -05:00
stevessr
86cdcda29a
fix: button without type (#19534) 2025-11-27 00:01:36 -05:00
Timothy Jaeryang Baek
5a32ea9b49 refac 2025-11-26 23:54:55 -05:00
Timothy Jaeryang Baek
457af65df6 enh/feat: toggle folders & user perm 2025-11-26 22:47:48 -05:00
Tobias Genannt
04b337323a
fix: correct role check on OAuth login (#19476)
When a user's role is switched from admin to user in the OAuth provider,
their groups are not correctly updated when ENABLE_OAUTH_GROUP_MANAGEMENT
is enabled.
2025-11-26 21:48:06 -05:00
Timothy Jaeryang Baek
384753c6ca refac/enh: drop profile_image_url field in responses 2025-11-26 21:47:20 -05:00
Timothy Jaeryang Baek
3fe5a47050 refac/enh: knowledge base name on icon hover 2025-11-26 21:33:27 -05:00
Timothy Jaeryang Baek
d1bbf6ba92 refac 2025-11-26 21:29:52 -05:00
Timothy Jaeryang Baek
9f89cc5adc refac 2025-11-26 21:28:32 -05:00
Shirasawa
fa0efae4d5
i18n: improve Chinese translation (#19497) 2025-11-26 17:42:56 -05:00
gerhardj-b
f2d6a425de
feat: also consider OAUTH_ROLES_SEPARATOR for string claims themselves (#19514) 2025-11-26 17:38:26 -05:00
Classic298
d071cdf7d4
chore: update transformers dependency to fix issue #19512 (#19513)
* Update pyproject.toml

* Update requirements.txt

* Update requirements.txt

* Update pyproject.toml
2025-11-26 16:41:56 -05:00
Classic298
4b21704498
chore: Update pymilvus dep (#19507)
* Update requirements.txt

* Update pyproject.toml
2025-11-26 16:41:20 -05:00
Classic298
9fca4969db
chore: dep bump pypdf to ver 6.4.0 (#19508)
* Update pyproject.toml

* Update requirements.txt
2025-11-26 16:41:12 -05:00
Timothy Jaeryang Baek
4370dee79e fix: async save docs to vector db
2025-11-25 17:19:33 -05:00
Classic298
c631659327
i18n: de-de (#19471) 2025-11-25 16:31:38 -05:00
Classic298
4df5b7eb2e
fix: update dependency to prevent rediss:// failure (#19488)
* Update pyproject.toml

* Update requirements.txt

* Update requirements-min.txt
2025-11-25 16:28:58 -05:00
Timothy Jaeryang Baek
8b2015a97b refac 2025-11-25 16:28:06 -05:00
Timothy Jaeryang Baek
477097c2e4 refac 2025-11-25 16:27:27 -05:00
Timothy Jaeryang Baek
c5b73d7184 refac/fix: function name filter type 2025-11-25 16:25:40 -05:00
Timothy Jaeryang Baek
c7eb713689 fix: user preview profile image
2025-11-25 08:00:30 -05:00
Aleix Dorca
1bfe2c92ba
Merge pull request #19464 from aleixdorca/dev
i18n: Update Catalan translation.json
2025-11-25 07:03:49 -05:00
Timothy Jaeryang Baek
69722ba973 fix/refac: workspace shared model list 2025-11-25 06:32:27 -05:00
Tim Baek
140605e660
Merge pull request #19462 from open-webui/dev
0.6.40
2025-11-25 06:01:33 -05:00
Timothy Jaeryang Baek
f3547568e4 refac: channel user list order by 2025-11-25 05:53:31 -05:00
Classic298
15c6860a49
Update CHANGELOG.md (#19463)
* Update CHANGELOG.md

* Update CHANGELOG.md
2025-11-25 05:50:39 -05:00
Timothy Jaeryang Baek
363ef194d8 chore: bump python-socketio==5.14.0 2025-11-25 05:49:30 -05:00
Timothy Jaeryang Baek
33a52628e6 chore: bump 2025-11-25 05:48:12 -05:00
Timothy Jaeryang Baek
35ab6b7667 fix: postgres user list issue 2025-11-25 05:47:04 -05:00
Timothy Jaeryang Baek
97ba5b8436 fix: changelog 2025-11-25 05:42:18 -05:00
Tim Baek
9899293f05
Merge pull request #19448 from open-webui/dev
0.6.39
2025-11-25 05:31:34 -05:00
Timothy Jaeryang Baek
3fa484f290 doc: changelog 2025-11-25 05:29:16 -05:00
Timothy Jaeryang Baek
03dc4d7182 refac/enh: copy formatted table 2025-11-25 05:25:49 -05:00
Classic298
82a5f11b72
CHANGELOG: 0.6.39 (#19446)
* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md
2025-11-25 05:13:23 -05:00
Timothy Jaeryang Baek
4847bdcc9b chore: format 2025-11-25 05:12:45 -05:00
Timothy Jaeryang Baek
0fa97bde00 fix: i18n 2025-11-25 05:11:37 -05:00
Timothy Jaeryang Baek
d5c3e9ea42 refac 2025-11-25 05:10:08 -05:00
Timothy Jaeryang Baek
6235243b62 refac 2025-11-25 05:07:53 -05:00
Timothy Jaeryang Baek
63ca0a3519 refac 2025-11-25 04:56:26 -05:00
Classic298
6a095099d5
chore: add chardet (#19458)
* Update pyproject.toml

* Update requirements-min.txt

* Update requirements.txt

* Update requirements-min.txt

* Update requirements.txt

* Update pyproject.toml
2025-11-25 04:52:25 -05:00
Timothy Jaeryang Baek
f22d92e102 refac: styling 2025-11-25 04:52:03 -05:00
Timothy Jaeryang Baek
84ca2258be refac 2025-11-25 04:45:52 -05:00
Timothy Jaeryang Baek
e6d8f89850 chore: version bump 2025-11-25 04:38:13 -05:00
Timothy Jaeryang Baek
c0e1203538 feat: user list in channels 2025-11-25 04:38:07 -05:00
Timothy Jaeryang Baek
baa1e07aec refac 2025-11-25 04:37:58 -05:00
Timothy Jaeryang Baek
f2ee70cbfc fix: ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION env var 2025-11-25 04:15:41 -05:00
Timothy Jaeryang Baek
3b5710d0cd feat/enh: show user count in channels 2025-11-25 03:46:30 -05:00
Timothy Jaeryang Baek
a7ee36266a refac: styling 2025-11-25 03:15:40 -05:00
Timothy Jaeryang Baek
f0c7bd3f79 refac 2025-11-25 03:12:21 -05:00
Timothy Jaeryang Baek
743199f2d0 feat/enh: tool server function name filter list 2025-11-25 02:31:34 -05:00
Timothy Jaeryang Baek
488631db98 refac 2025-11-25 02:05:27 -05:00
Timothy Jaeryang Baek
2328dc284e feat/enh: async embedding processing setting
Co-Authored-By: Classic298 <27028174+Classic298@users.noreply.github.com>
2025-11-25 01:55:43 -05:00
Timothy Jaeryang Baek
b1c1e68e56 refac/fix: group member user list 2025-11-25 01:37:33 -05:00
Timothy Jaeryang Baek
38c6b0bff6 fix: inline citations 2025-11-24 23:04:01 -05:00
Timothy Jaeryang Baek
9c19d0abd4 refac/breaking: docling params 2025-11-24 16:01:13 -05:00
Classic298
b875a438f0
Update translation.json (#19445)
Co-authored-by: Tim Baek <tim@openwebui.com>
2025-11-24 15:39:53 -05:00
Timothy Jaeryang Baek
f0d75e3a48 refac/fix: db operations 2025-11-24 15:39:13 -05:00
Classic298
0a687980ee
Update knowledge.py (#19434) 2025-11-24 15:22:23 -05:00
Alexandr Promakh
a7b611c0e5
fix: "No connection adapters were found" routers/images.py (#19435) 2025-11-24 14:51:52 -05:00
Tim Baek
e567f42020
Merge pull request #19428 from joaoback/patch-16
Update translation.json (pt-BR)
2025-11-24 14:51:17 -05:00
joaoback
3b23b96a27
Update translation.json (pt-BR)
New translations of the items added in the latest version.
2025-11-24 12:11:22 -03:00
234 changed files with 10165 additions and 4299 deletions


@@ -5,6 +5,109 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.41] - 2025-12-02
### Added
- 🚦 Sign-in rate limiting was implemented to protect against brute force attacks, limiting login attempts to 15 per 3-minute window per email address using Redis with automatic fallback to in-memory storage when Redis is unavailable. [Commit](https://github.com/open-webui/open-webui/commit/7b166370432414ce8f186747fb098e0c70fb2d6b)
- 📂 Administrators can now globally disable the folders feature and control user-level folder permissions through the admin panel, enabling minimalist interface configurations for deployments that don't require workspace organization features. [#19529](https://github.com/open-webui/open-webui/pull/19529), [#19210](https://github.com/open-webui/open-webui/discussions/19210), [#18459](https://github.com/open-webui/open-webui/discussions/18459), [#18299](https://github.com/open-webui/open-webui/discussions/18299)
- 👥 Group channels were introduced as a new channel type enabling membership-based collaboration spaces where users explicitly join as members rather than accessing through permissions, with support for public or private visibility, automatic member inclusion from specified user groups, member role tracking with invitation metadata, and post-creation member management allowing channel managers to add or remove members through the channel info modal. [Commit](https://github.com/open-webui/open-webui/commit/f589b7c1895a6a77166c047891acfa21bc0936c4), [Commit](https://github.com/open-webui/open-webui/commit/3f1d9ccbf8443a2fa5278f36202bad930a216680)
- 💬 Direct Message channels were introduced with a dedicated channel type selector and multi-user member selection interface, enabling private conversations between specific users without requiring full channel visibility. [Commit](https://github.com/open-webui/open-webui/commit/64b4d5d9c280b926746584aaf92b447d09deb386)
- 📨 Direct Message channels now support a complete user-to-user messaging system with member-based access control, automatic deduplication for one-on-one conversations, optional channel naming, and distinct visual presentation using participant avatars instead of channel icons. [Commit](https://github.com/open-webui/open-webui/commit/acccb9afdd557274d6296c70258bb897bbb6652f)
- 🙈 Users can now hide Direct Message channels from their sidebar while preserving message history, with automatic reactivation when new messages arrive from other participants, providing a cleaner interface for managing active conversations. [Commit](https://github.com/open-webui/open-webui/commit/acccb9afdd557274d6296c70258bb897bbb6652f)
- ☑️ A comprehensive user selection component was added to the channel creation modal, featuring search functionality, sortable user lists, pagination support, and multi-select checkboxes for building Direct Message participant lists. [Commit](https://github.com/open-webui/open-webui/commit/acccb9afdd557274d6296c70258bb897bbb6652f)
- 🔴 Channel unread message count tracking was implemented with visual badge indicators in the sidebar, automatically updating counts in real-time and marking messages as read when users view channels, with join/leave functionality to manage membership status. [Commit](https://github.com/open-webui/open-webui/commit/64b4d5d9c280b926746584aaf92b447d09deb386)
- 📌 Message pinning functionality was added to channels, allowing users to pin important messages for easy reference with visual highlighting, a dedicated pinned messages modal accessible from the navbar, and complete backend support for tracking pinned status, pin timestamp, and the user who pinned each message. [Commit](https://github.com/open-webui/open-webui/commit/64b4d5d9c280b926746584aaf92b447d09deb386), [Commit](https://github.com/open-webui/open-webui/commit/aae2fce17355419d9c29f8100409108037895201)
- 🟢 Direct Message channels now display an active status indicator for one-on-one conversations, showing a green dot when the other participant is currently online or a gray dot when offline. [Commit](https://github.com/open-webui/open-webui/commit/4b6773885cd7527c5a56b963781dac5e95105eec), [Commit](https://github.com/open-webui/open-webui/commit/39645102d14f34e71b34e5ddce0625790be33f6f)
- 🆔 Users can now start Direct Message conversations directly from user profile previews by clicking the "Message" button, enabling quick access to private messaging without navigating away from the current channel. [Commit](https://github.com/open-webui/open-webui/commit/a0826ec9fedb56320532616d568fa59dda831d4e)
- ⚡ Channel messages now appear instantly when sent using optimistic UI rendering, displaying with a pending state while the server confirms delivery, providing a more responsive messaging experience. [Commit](https://github.com/open-webui/open-webui/commit/25994dd3da90600401f53596d4e4fb067c1b8eaa)
- 👍 Channel message reactions now display the names of users who reacted when hovering over the emoji, showing up to three names with a count for additional reactors. [Commit](https://github.com/open-webui/open-webui/commit/05e79bdd0c7af70b631e958924e3656db1013b80)
- 🛠️ Channel creators can now edit and delete their own group and DM channels without requiring administrator privileges, enabling users to manage the channels they create independently. [Commit](https://github.com/open-webui/open-webui/commit/f589b7c1895a6a77166c047891acfa21bc0936c4)
- 🔌 A new API endpoint was added to directly get or create a Direct Message channel with a specific user by their ID, streamlining programmatic DM channel creation for integrations and frontend workflows. [Commit](https://github.com/open-webui/open-webui/commit/f589b7c1895a6a77166c047891acfa21bc0936c4)
- 💭 Users can now set a custom status with an emoji and message that displays in profile previews, the sidebar user menu, and Direct Message channel items in the sidebar, with the ability to clear status at any time, providing visibility into availability or current focus similar to team communication platforms. [Commit](https://github.com/open-webui/open-webui/commit/51621ba91a982e52da168ce823abffd11ad3e4fa), [Commit](https://github.com/open-webui/open-webui/commit/f5e8d4d5a004115489c35725408b057e24dfe318)
- 📤 A group export API endpoint was added, enabling administrators to export complete group data including member lists for backup and migration purposes. [Commit](https://github.com/open-webui/open-webui/commit/09b6ea38c579659f8ca43ae5ea3746df3ac561ad)
- 📡 A new API endpoint was added to retrieve all users belonging to a specific group, enabling programmatic access to group membership information for administrative workflows. [Commit](https://github.com/open-webui/open-webui/commit/01868e856a10f474f74fbd1b4425dafdf949222f)
- 👁️ The admin user list now displays an active status indicator next to each user, showing a visual green dot for users who have been active within the last three minutes. [Commit](https://github.com/open-webui/open-webui/commit/1b095d12ff2465b83afa94af89ded9593f8a8655)
- 🔑 The admin user edit modal now displays OAuth identity information with a per-provider breakdown, showing each linked identity provider and its associated subject identifier separately. [#19573](https://github.com/open-webui/open-webui/pull/19573)
- 🧩 OAuth role claim parsing now respects the "OAUTH_ROLES_SEPARATOR" configuration, enabling proper parsing of roles returned as comma-separated strings and providing consistent behavior with group claim handling. [#19514](https://github.com/open-webui/open-webui/pull/19514)
- 🎛️ Channel feature access can now be controlled through both the "USER_PERMISSIONS_FEATURES_CHANNELS" environment variable and group permission toggles in the admin panel, allowing administrators to restrict channel functionality for specific users or groups while defaulting to enabled for all users. [Commit](https://github.com/open-webui/open-webui/commit/f589b7c1895a6a77166c047891acfa21bc0936c4)
- 🎨 The model editor interface was refined with access control settings moved to a dedicated modal, group member counts now displayed when configuring permissions, reorganized layout with improved visual hierarchy, and redesigned prompt suggestions cards with tooltips for field guidance. [Commit](https://github.com/open-webui/open-webui/commit/e65d92fc6f49da5ca059e1c65a729e7973354b99), [Commit](https://github.com/open-webui/open-webui/commit/9d39b9b42c653ee2acf2674b2df343ecbceb4954)
- 🏗️ Knowledge base file management was rebuilt with a dedicated database table replacing the previous JSON array storage, enabling pagination support for large knowledge bases, significantly faster file listing performance, and more reliable file-knowledge base relationship tracking. [Commit](https://github.com/open-webui/open-webui/commit/d19023288e2ca40f86e2dc3fd9f230540f3e70d7)
- ☁️ Azure Document Intelligence model selection was added, allowing administrators to specify which model to use for document processing via the "DOCUMENT_INTELLIGENCE_MODEL" environment variable or admin UI setting, with "prebuilt-layout" as the default. [#19692](https://github.com/open-webui/open-webui/pull/19692), [Docs:#872](https://github.com/open-webui/docs/pull/872)
- 🚀 Milvus multitenancy vector database performance was improved by removing manual flush calls after upsert operations, eliminating rate limit errors and reducing load on etcd and MinIO/S3 storage by allowing Milvus to manage segment persistence automatically via its WAL and auto-flush policies. [#19680](https://github.com/open-webui/open-webui/pull/19680)
- ✨ Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌍 Translations for German, French, Portuguese (Brazil), Catalan, Simplified Chinese, and Traditional Chinese were enhanced and expanded.
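The separator-aware role parsing mentioned above can be sketched as follows. The list-comprehension pattern mirrors the one used for "OAUTH_ALLOWED_ROLES" in the config, with the surrounding `PersistentConfig` machinery omitted; `parse_roles` is an illustrative helper name, not Open WebUI's actual API.

```python
import os


def parse_roles(claim_value: str, separator: str = ",") -> list[str]:
    """Split a role claim string on the configured separator, dropping blanks."""
    return [role.strip() for role in claim_value.split(separator) if role.strip()]


# Mirrors the config's defaults: separator from OAUTH_ROLES_SEPARATOR, allowed
# roles defaulting to "user" and "admin" joined by that separator.
separator = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")
allowed_roles = parse_roles(
    os.environ.get("OAUTH_ALLOWED_ROLES", f"user{separator}admin"), separator
)
```

With a provider that returns roles as a single string such as `"admin; editor"`, setting `OAUTH_ROLES_SEPARATOR=";"` yields `["admin", "editor"]` instead of one unsplit role.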
### Fixed
- 🔄 Tool call response token duplication was fixed by removing redundant message history additions in non-native function calling mode, resolving an issue where tool results were included twice in the context and causing 2x token consumption. [#19656](https://github.com/open-webui/open-webui/issues/19656), [Commit](https://github.com/open-webui/open-webui/commit/52ccab8)
- 🛡️ Web search domain filtering was corrected to properly block results when any resolved hostname or IP address matches a blocked domain, preventing blocked sites from appearing in search results due to permissive hostname resolution logic that previously allowed results through if any single resolved address passed the filter. [#19670](https://github.com/open-webui/open-webui/pull/19670), [#19669](https://github.com/open-webui/open-webui/issues/19669)
- 🧠 Custom models based on Ollama or OpenAI now properly inherit the connection type from their base model, ensuring they appear correctly in the "Local" or "External" model selection tabs instead of only appearing under "All". [#19183](https://github.com/open-webui/open-webui/issues/19183), [Commit](https://github.com/open-webui/open-webui/commit/39f7575)
- 🐍 SentenceTransformers embedding initialization was fixed by updating the transformers dependency to version 4.57.3, resolving a regression in v0.6.40 where document ingestion failed with "'NoneType' object has no attribute 'encode'" errors due to a bug in transformers 4.57.2. [#19512](https://github.com/open-webui/open-webui/issues/19512), [#19513](https://github.com/open-webui/open-webui/pull/19513)
- 📈 Active user count accuracy was significantly improved by replacing the socket-based USER_POOL tracking with a database-backed heartbeat mechanism, resolving long-standing issues where Redis deployments displayed inflated user counts due to stale sessions never being cleaned up on disconnect. [#16074](https://github.com/open-webui/open-webui/discussions/16074), [Commit](https://github.com/open-webui/open-webui/commit/70948f8803e417459d5203839f8077fdbfbbb213)
- 👥 Default group assignment now applies consistently across all user registration methods including OAuth/SSO, LDAP, and admin-created users, fixing an issue where the "DEFAULT_GROUP_ID" setting was only being applied to users who signed up via the email/password signup form. [#19685](https://github.com/open-webui/open-webui/pull/19685)
- 🔦 Model list filtering in workspaces was corrected to properly include models shared with user groups, ensuring members can view models they have write access to through group permissions. [#19461](https://github.com/open-webui/open-webui/issues/19461), [Commit](https://github.com/open-webui/open-webui/commit/69722ba973768a5f689f2e2351bf583a8db9bba8)
- 🖼️ User profile image display in preview contexts was fixed by resolving a Pydantic validation error that prevented proper rendering. [Commit](https://github.com/open-webui/open-webui/commit/c7eb7136893b0ddfdc5d55ffc7a05bd84a00f5d6)
- 🔒 Redis TLS connection failures were resolved by updating the python-socketio dependency to version 5.15.0, restoring support for the "rediss://" URL schema. [#19480](https://github.com/open-webui/open-webui/issues/19480), [#19488](https://github.com/open-webui/open-webui/pull/19488)
- 📝 MCP tool server configuration was corrected to properly handle the "Function Name Filter List" as both string and list types, preventing AttributeError when the field is empty and ensuring backward compatibility. [#19486](https://github.com/open-webui/open-webui/issues/19486), [Commit](https://github.com/open-webui/open-webui/commit/c5b73d71843edc024325d4a6e625ec939a747279), [Commit](https://github.com/open-webui/open-webui/commit/477097c2e42985c14892301d0127314629d07df1)
- 📎 Web page attachment failures causing TypeError on metadata checks were resolved by correcting async threadpool parameter passing in vector database operations. [#19493](https://github.com/open-webui/open-webui/issues/19493), [Commit](https://github.com/open-webui/open-webui/commit/4370dee79e19d77062c03fba81780cb3b779fca3)
- 💾 Model allowlist persistence in multi-worker deployments was fixed by implementing Redis-based shared state for the internal models dictionary, ensuring configuration changes are consistently visible across all worker processes. [#19395](https://github.com/open-webui/open-webui/issues/19395), [Commit](https://github.com/open-webui/open-webui/commit/b5e5617d7f7ad3e4eec9f15f4cc7f07cb5afc2fa)
- ⏳ Chat history infinite loading was prevented by enhancing message data structure to properly track parent message relationships, resolving issues where missing parentId fields caused perpetual loading states. [#19225](https://github.com/open-webui/open-webui/issues/19225), [Commit](https://github.com/open-webui/open-webui/commit/ff4b1b9862d15adfa15eac17d2ce066c3d8ae38f)
- 🩹 Database migration robustness was improved by automatically detecting and correcting missing primary key constraints on the user table, ensuring successful schema upgrades for databases with non-standard configurations. [#19487](https://github.com/open-webui/open-webui/discussions/19487), [Commit](https://github.com/open-webui/open-webui/commit/453ea9b9a167c0b03d86c46e6efd086bf10056ce)
- 🏷️ OAuth group assignment now updates correctly on first login when users transition from admin to user role, ensuring group memberships reflect immediately when group management is enabled. [#19475](https://github.com/open-webui/open-webui/issues/19475), [#19476](https://github.com/open-webui/open-webui/pull/19476)
- 💡 Knowledge base file tooltips now properly display the parent collection name when referencing files with the hash symbol, preventing confusion between identically-named files in different collections. [#19491](https://github.com/open-webui/open-webui/issues/19491), [Commit](https://github.com/open-webui/open-webui/commit/3fe5a47b0ff84ac97f8e4ff56a19fa2ec065bf66)
- 🔐 Knowledge base file access inconsistencies were resolved where authorized non-admin users received "Not found" or permission errors for certain files, caused by race conditions during upload that produced mismatched collection_name values; file access validation now properly checks against knowledge base file associations. [#18689](https://github.com/open-webui/open-webui/issues/18689), [#19523](https://github.com/open-webui/open-webui/pull/19523), [Commit](https://github.com/open-webui/open-webui/commit/e301d1962e45900ababd3eabb7e9a2ad275a5761)
- 📦 Knowledge API batch file addition endpoint was corrected to properly handle async operations, resolving 500 Internal Server Error responses when adding multiple files simultaneously. [#19538](https://github.com/open-webui/open-webui/issues/19538), [Commit](https://github.com/open-webui/open-webui/commit/28659f60d94feb4f6a99bb1a5b54d7f45e5ea10f)
- 🤖 Embedding model auto-update functionality was fixed to properly respect the "RAG_EMBEDDING_MODEL_AUTO_UPDATE" setting by correctly passing the flag to the model path resolver, ensuring models update as expected when the auto-update option is enabled. [#19687](https://github.com/open-webui/open-webui/pull/19687)
- 📉 API response payload sizes were dramatically reduced by removing base64-encoded profile images from most endpoints, eliminating multi-megabyte responses caused by high-resolution avatars and enabling better browser caching. [#19519](https://github.com/open-webui/open-webui/issues/19519), [Commit](https://github.com/open-webui/open-webui/commit/384753c4c17f62a68d38af4bbcf55a21ee08e0f2)
- 📞 Redundant API calls on the admin user overview page were eliminated by consolidating reactive statements, reducing four duplicate requests to a single efficient call and significantly improving page load performance. [#19509](https://github.com/open-webui/open-webui/issues/19509), [Commit](https://github.com/open-webui/open-webui/commit/9f89cc5e9f7e1c6c9e2bc91177e08df7c79f66f9)
- 🧹 Duplicate API calls on the workspace models page were eliminated by removing redundant model list fetching, reducing two identical requests to a single call and improving page responsiveness. [#19517](https://github.com/open-webui/open-webui/issues/19517), [Commit](https://github.com/open-webui/open-webui/commit/d1bbf6be7a4d1d53fa8ad46ca4f62fc4b2e6a8cb)
- 🔘 The model valves button was corrected to prevent unintended form submission by adding explicit button type attribute, ensuring it no longer triggers message sending when the input area contains text. [#19534](https://github.com/open-webui/open-webui/pull/19534)
- 🗑️ Ollama model deletion was fixed by correcting the request payload format and ensuring the model selector properly displays the placeholder option. [Commit](https://github.com/open-webui/open-webui/commit/0f3156651c64bc5af188a65fc2908bdcecf30c74)
- 🎨 Image generation in temporary chats was fixed by correctly handling local chat sessions that are not persisted to the database. [Commit](https://github.com/open-webui/open-webui/commit/a7c7993bbf3a21cb7ba416525b89233cf2ad877f)
- 🕵️‍♂️ Audit logging was fixed by correctly awaiting the async user authentication call, resolving failures where coroutine objects were passed instead of user data. [#19658](https://github.com/open-webui/open-webui/pull/19658), [Commit](https://github.com/open-webui/open-webui/commit/dba86bc)
- 🌙 Dark mode select dropdown styling was corrected to use proper background colors, fixing an issue where dropdown borders and hover states appeared white instead of matching the dark theme. [#19693](https://github.com/open-webui/open-webui/pull/19693), [#19442](https://github.com/open-webui/open-webui/issues/19442)
- 🔍 Milvus vector database query filtering was fixed by correcting string quote handling in filter expressions and using the proper parameter name for queries, resolving false "duplicate content detected" errors that prevented uploading multiple files to knowledge bases. [#19602](https://github.com/open-webui/open-webui/pull/19602), [#18119](https://github.com/open-webui/open-webui/issues/18119), [#16345](https://github.com/open-webui/open-webui/issues/16345), [#17088](https://github.com/open-webui/open-webui/issues/17088), [#18485](https://github.com/open-webui/open-webui/issues/18485)
- 🆙 Milvus multitenancy vector database was updated to use query_iterator() for improved robustness and consistency with the standard Milvus implementation, fixing the same false duplicate detection errors and improving handling of large result sets in multi-tenant deployments. [#19695](https://github.com/open-webui/open-webui/pull/19695)
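To illustrate the stricter blocking rule in the web search domain-filter fix above: a result is now rejected if any of its resolved hostnames or IP addresses matches the blocklist, where previously a single passing address let the result through. This is a sketch under assumed function and parameter names, not Open WebUI's actual filter code.

```python
def is_blocked(resolved_addresses: list[str], blocked_domains: set[str]) -> bool:
    """Block the result if ANY resolved hostname or IP matches a blocked domain."""
    def matches(addr: str) -> bool:
        # Match the domain itself or any of its subdomains.
        return any(addr == d or addr.endswith("." + d) for d in blocked_domains)

    return any(matches(addr) for addr in resolved_addresses)


def was_allowed_permissive(resolved: list[str], blocked: set[str]) -> bool:
    """The old, permissive behavior for contrast: allowed if ANY address passed."""
    return any(
        not any(a == d or a.endswith("." + d) for d in blocked) for a in resolved
    )
```

For example, a host resolving to both a clean CDN address and an address under a blocked domain was previously allowed through; under the corrected logic it is blocked.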
### Changed
- ⚠️ **IMPORTANT for Multi-Instance Deployments** — This release includes database schema changes; multi-worker, multi-server, or load-balanced deployments must update all instances simultaneously rather than performing rolling updates, as running mixed versions will cause application failures due to schema incompatibility between old and new instances.
- 👮 Channel creation is now restricted to administrators only, with the channel add button hidden for regular users to maintain organizational control over communication channels. [Commit](https://github.com/open-webui/open-webui/commit/421aba7cd7cd708168b1f2565026c74525a67905)
- ➖ The active user count indicator was removed from the bottom-left user menu in the sidebar to streamline the interface. [Commit](https://github.com/open-webui/open-webui/commit/848f3fd4d86ca66656e0ff0335773945af8d7d8d)
- 🗂️ The user table was restructured with API keys migrated to a dedicated table supporting future multi-key functionality, OAuth data storage converted to a JSON structure enabling multiple identity providers per user account, and internal column types optimized from TEXT to JSON for the "info" and "settings" fields, with automatic migration preserving all existing data and associations. [#19573](https://github.com/open-webui/open-webui/pull/19573)
- 🔄 The knowledge base API was restructured to support the new file relationship model.
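To illustrate the multi-provider OAuth storage described above, here is a purely hypothetical shape for the per-user JSON structure (provider name mapped to its subject identifier); the actual schema, keys, and values are defined by the migration in #19573.

```python
# Hypothetical example data; real column contents are defined by PR #19573.
oauth_identities = {
    "google": {"sub": "108236470113274"},
    "github": {"sub": "4821734"},
}


def linked_providers(identities: dict) -> list[str]:
    """Providers a user has linked, as surfaced per-provider in the admin edit modal."""
    return sorted(identities)
```

The point of the JSON structure is that adding a second provider is a key insertion rather than a schema change, which is what enables multiple identity providers per account.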
## [0.6.40] - 2025-11-25
### Fixed
- 🗄️ A critical PostgreSQL user listing performance issue was resolved by removing a redundant count operation that caused severe database slowdowns and potential timeouts when viewing user lists in admin panels.
## [0.6.39] - 2025-11-25
### Added
- 💬 A user list modal was added to channels, displaying all users with access and featuring search, sorting, and pagination capabilities. [Commit](https://github.com/open-webui/open-webui/commit/c0e120353824be00a2ef63cbde8be5d625bd6fd0)
- 💬 Channel navigation now displays the total number of users with access to the channel. [Commit](https://github.com/open-webui/open-webui/commit/3b5710d0cd445cf86423187f5ee7c40472a0df0b)
- 🔌 Tool servers and MCP connections now support function name filtering, allowing administrators to selectively enable or block specific functions using allow/block lists. [Commit](https://github.com/open-webui/open-webui/commit/743199f2d097ae1458381bce450d9025a0ab3f3d)
- ⚡ A toggle to disable parallel embedding processing was added via "ENABLE_ASYNC_EMBEDDING", allowing sequential processing for rate-limited or resource-constrained local embedding setups. [#19444](https://github.com/open-webui/open-webui/pull/19444)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Localization improvements were made for German (de-DE) and Portuguese (Brazil) translations.
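The effect of the "ENABLE_ASYNC_EMBEDDING" toggle can be sketched as below: with the flag off, embedding batches are processed one at a time instead of concurrently, which is gentler on rate-limited or resource-constrained local backends. The `embed_all` and `fake_embed` names are illustrative, not Open WebUI's actual implementation.

```python
import asyncio


async def embed_all(batches, embed_batch, parallel: bool = True):
    """Embed all batches, either concurrently or strictly one at a time."""
    if parallel:
        return await asyncio.gather(*(embed_batch(b) for b in batches))
    results = []
    for b in batches:  # sequential: only one in-flight request at a time
        results.append(await embed_batch(b))
    return results


async def fake_embed(batch):
    # Stand-in for a real embedding call; returns one "vector" per text.
    return [len(text) for text in batch]


vectors = asyncio.run(embed_all([["a", "bb"], ["ccc"]], fake_embed, parallel=False))
print(vectors)  # [[1, 2], [3]]
```

Both modes produce the same results in the same order; only the concurrency differs.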
### Fixed
- 📝 Inline citations now render correctly within markdown lists and nested elements instead of displaying as "undefined" values. [#19452](https://github.com/open-webui/open-webui/issues/19452)
- 👥 Group member selection now works correctly without randomly selecting other users or causing the user list to jump around. [#19426](https://github.com/open-webui/open-webui/issues/19426)
- 👥 Admin panel user list now displays the correct total user count and properly paginates 30 items per page after fixing database query issues with group member joins. [#19429](https://github.com/open-webui/open-webui/issues/19429)
- 🔍 Knowledge base reindexing now works correctly after resolving async execution chain issues by implementing threadpool workers for embedding operations. [#19434](https://github.com/open-webui/open-webui/pull/19434)
- 🖼️ OpenAI image generation now works correctly after fixing a connection adapter error caused by incorrect URL formatting. [#19435](https://github.com/open-webui/open-webui/pull/19435)
### Changed
- 🔧 BREAKING: Docling configuration has been consolidated from individual environment variables into a single "DOCLING_PARAMS" JSON configuration and now supports API key authentication via "DOCLING_API_KEY", requiring users to migrate existing Docling settings to the new format. [#16841](https://github.com/open-webui/open-webui/issues/16841), [#19427](https://github.com/open-webui/open-webui/pull/19427)
- 🔧 The environment variable "REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE" has been renamed to "ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION" for naming consistency.
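For the Docling migration above, individual settings now move into a single JSON blob. A minimal sketch of the parse-with-fallback pattern follows; the example keys here are assumptions, so check the linked PR for the authoritative parameter names.

```python
import json
import os

# Hypothetical example values; consult PR #19427 for the supported keys.
os.environ["DOCLING_PARAMS"] = json.dumps(
    {"do_ocr": True, "ocr_engine": "tesseract"}
)

raw = os.getenv("DOCLING_PARAMS", "")
try:
    docling_params = json.loads(raw)
except json.JSONDecodeError:
    docling_params = {}  # fall back to an empty config on missing or malformed JSON
```

An unset or malformed `DOCLING_PARAMS` degrades to `{}` rather than raising at startup.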
## [0.6.38] - 2025-11-24
### Fixed
```diff
@@ -583,14 +583,16 @@ OAUTH_ROLES_CLAIM = PersistentConfig(
     os.environ.get("OAUTH_ROLES_CLAIM", "roles"),
 )
-SEP = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")
+OAUTH_ROLES_SEPARATOR = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")
 OAUTH_ALLOWED_ROLES = PersistentConfig(
     "OAUTH_ALLOWED_ROLES",
     "oauth.allowed_roles",
     [
         role.strip()
-        for role in os.environ.get("OAUTH_ALLOWED_ROLES", f"user{SEP}admin").split(SEP)
+        for role in os.environ.get(
+            "OAUTH_ALLOWED_ROLES", f"user{OAUTH_ROLES_SEPARATOR}admin"
+        ).split(OAUTH_ROLES_SEPARATOR)
         if role
     ],
 )
@@ -600,7 +602,9 @@ OAUTH_ADMIN_ROLES = PersistentConfig(
     "oauth.admin_roles",
     [
         role.strip()
-        for role in os.environ.get("OAUTH_ADMIN_ROLES", "admin").split(SEP)
+        for role in os.environ.get("OAUTH_ADMIN_ROLES", "admin").split(
+            OAUTH_ROLES_SEPARATOR
+        )
         if role
     ],
 )
@@ -1443,10 +1447,18 @@ USER_PERMISSIONS_FEATURES_CODE_INTERPRETER = (
     == "true"
 )
+USER_PERMISSIONS_FEATURES_FOLDERS = (
+    os.environ.get("USER_PERMISSIONS_FEATURES_FOLDERS", "True").lower() == "true"
+)
 USER_PERMISSIONS_FEATURES_NOTES = (
     os.environ.get("USER_PERMISSIONS_FEATURES_NOTES", "True").lower() == "true"
 )
+USER_PERMISSIONS_FEATURES_CHANNELS = (
+    os.environ.get("USER_PERMISSIONS_FEATURES_CHANNELS", "True").lower() == "true"
+)
 USER_PERMISSIONS_FEATURES_API_KEYS = (
     os.environ.get("USER_PERMISSIONS_FEATURES_API_KEYS", "False").lower() == "true"
 )
@@ -1499,12 +1511,16 @@ DEFAULT_USER_PERMISSIONS = {
         "temporary_enforced": USER_PERMISSIONS_CHAT_TEMPORARY_ENFORCED,
     },
     "features": {
+        # General features
         "api_keys": USER_PERMISSIONS_FEATURES_API_KEYS,
+        "notes": USER_PERMISSIONS_FEATURES_NOTES,
+        "folders": USER_PERMISSIONS_FEATURES_FOLDERS,
+        "channels": USER_PERMISSIONS_FEATURES_CHANNELS,
         "direct_tool_servers": USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS,
+        # Chat features
         "web_search": USER_PERMISSIONS_FEATURES_WEB_SEARCH,
         "image_generation": USER_PERMISSIONS_FEATURES_IMAGE_GENERATION,
         "code_interpreter": USER_PERMISSIONS_FEATURES_CODE_INTERPRETER,
-        "notes": USER_PERMISSIONS_FEATURES_NOTES,
     },
 }
@@ -1514,6 +1530,12 @@ USER_PERMISSIONS = PersistentConfig(
     DEFAULT_USER_PERMISSIONS,
 )
+ENABLE_FOLDERS = PersistentConfig(
+    "ENABLE_FOLDERS",
+    "folders.enable",
+    os.environ.get("ENABLE_FOLDERS", "True").lower() == "true",
+)
 ENABLE_CHANNELS = PersistentConfig(
     "ENABLE_CHANNELS",
     "channels.enable",
@@ -2538,6 +2560,12 @@ DOCLING_SERVER_URL = PersistentConfig(
     os.getenv("DOCLING_SERVER_URL", "http://docling:5001"),
 )
+DOCLING_API_KEY = PersistentConfig(
+    "DOCLING_API_KEY",
+    "rag.docling_api_key",
+    os.getenv("DOCLING_API_KEY", ""),
+)
 docling_params = os.getenv("DOCLING_PARAMS", "")
 try:
     docling_params = json.loads(docling_params)
@@ -2550,88 +2578,6 @@ DOCLING_PARAMS = PersistentConfig(
     docling_params,
 )
-DOCLING_DO_OCR = PersistentConfig(
-    "DOCLING_DO_OCR",
-    "rag.docling_do_ocr",
-    os.getenv("DOCLING_DO_OCR", "True").lower() == "true",
-)
-DOCLING_FORCE_OCR = PersistentConfig(
-    "DOCLING_FORCE_OCR",
-    "rag.docling_force_ocr",
-    os.getenv("DOCLING_FORCE_OCR", "False").lower() == "true",
-)
-DOCLING_OCR_ENGINE = PersistentConfig(
-    "DOCLING_OCR_ENGINE",
-    "rag.docling_ocr_engine",
-    os.getenv("DOCLING_OCR_ENGINE", "tesseract"),
-)
-DOCLING_OCR_LANG = PersistentConfig(
-    "DOCLING_OCR_LANG",
-    "rag.docling_ocr_lang",
-    os.getenv("DOCLING_OCR_LANG", "eng,fra,deu,spa"),
-)
-DOCLING_PDF_BACKEND = PersistentConfig(
-    "DOCLING_PDF_BACKEND",
-    "rag.docling_pdf_backend",
-    os.getenv("DOCLING_PDF_BACKEND", "dlparse_v4"),
-)
-DOCLING_TABLE_MODE = PersistentConfig(
-    "DOCLING_TABLE_MODE",
-    "rag.docling_table_mode",
-    os.getenv("DOCLING_TABLE_MODE", "accurate"),
-)
-DOCLING_PIPELINE = PersistentConfig(
-    "DOCLING_PIPELINE",
-    "rag.docling_pipeline",
-    os.getenv("DOCLING_PIPELINE", "standard"),
-)
-DOCLING_DO_PICTURE_DESCRIPTION = PersistentConfig(
-    "DOCLING_DO_PICTURE_DESCRIPTION",
-    "rag.docling_do_picture_description",
-    os.getenv("DOCLING_DO_PICTURE_DESCRIPTION", "False").lower() == "true",
-)
-DOCLING_PICTURE_DESCRIPTION_MODE = PersistentConfig(
-    "DOCLING_PICTURE_DESCRIPTION_MODE",
-    "rag.docling_picture_description_mode",
-    os.getenv("DOCLING_PICTURE_DESCRIPTION_MODE", ""),
-)
-docling_picture_description_local = os.getenv("DOCLING_PICTURE_DESCRIPTION_LOCAL", "")
-try:
-    docling_picture_description_local = json.loads(docling_picture_description_local)
-except json.JSONDecodeError:
-    docling_picture_description_local = {}
-DOCLING_PICTURE_DESCRIPTION_LOCAL = PersistentConfig(
-    "DOCLING_PICTURE_DESCRIPTION_LOCAL",
-    "rag.docling_picture_description_local",
-    docling_picture_description_local,
-)
-docling_picture_description_api = os.getenv("DOCLING_PICTURE_DESCRIPTION_API", "")
-try:
-    docling_picture_description_api = json.loads(docling_picture_description_api)
-except json.JSONDecodeError:
-    docling_picture_description_api = {}
-DOCLING_PICTURE_DESCRIPTION_API = PersistentConfig(
-    "DOCLING_PICTURE_DESCRIPTION_API",
-    "rag.docling_picture_description_api",
-    docling_picture_description_api,
-)
 DOCUMENT_INTELLIGENCE_ENDPOINT = PersistentConfig(
     "DOCUMENT_INTELLIGENCE_ENDPOINT",
     "rag.document_intelligence_endpoint",
@@ -2644,6 +2590,12 @@ DOCUMENT_INTELLIGENCE_KEY = PersistentConfig(
     os.getenv("DOCUMENT_INTELLIGENCE_KEY", ""),
 )
+DOCUMENT_INTELLIGENCE_MODEL = PersistentConfig(
+    "DOCUMENT_INTELLIGENCE_MODEL",
+    "rag.document_intelligence_model",
+    os.getenv("DOCUMENT_INTELLIGENCE_MODEL", "prebuilt-layout"),
+)
 MISTRAL_OCR_API_BASE_URL = PersistentConfig(
     "MISTRAL_OCR_API_BASE_URL",
     "rag.MISTRAL_OCR_API_BASE_URL",
@@ -2789,6 +2741,12 @@ RAG_EMBEDDING_BATCH_SIZE = PersistentConfig(
     ),
 )
+ENABLE_ASYNC_EMBEDDING = PersistentConfig(
+    "ENABLE_ASYNC_EMBEDDING",
+    "rag.enable_async_embedding",
+    os.environ.get("ENABLE_ASYNC_EMBEDDING", "True").lower() == "true",
+)
 RAG_EMBEDDING_QUERY_PREFIX = os.environ.get("RAG_EMBEDDING_QUERY_PREFIX", None)
 RAG_EMBEDDING_CONTENT_PREFIX = os.environ.get("RAG_EMBEDDING_CONTENT_PREFIX", None)
```
```diff
@@ -561,7 +561,8 @@ else:
 ####################################
 ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION = (
-    os.environ.get("REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE", "False").lower() == "true"
+    os.environ.get("ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION", "False").lower()
+    == "true"
 )
 CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE = os.environ.get(
```
@ -61,11 +61,11 @@ from open_webui.utils import logger
from open_webui.utils.audit import AuditLevel, AuditLoggingMiddleware from open_webui.utils.audit import AuditLevel, AuditLoggingMiddleware
from open_webui.utils.logger import start_logger from open_webui.utils.logger import start_logger
from open_webui.socket.main import ( from open_webui.socket.main import (
MODELS,
app as socket_app, app as socket_app,
periodic_usage_pool_cleanup, periodic_usage_pool_cleanup,
get_event_emitter, get_event_emitter,
get_models_in_use, get_models_in_use,
get_active_user_ids,
) )
from open_webui.routers import ( from open_webui.routers import (
audio, audio,
@ -230,6 +230,7 @@ from open_webui.config import (
RAG_RERANKING_MODEL_TRUST_REMOTE_CODE, RAG_RERANKING_MODEL_TRUST_REMOTE_CODE,
RAG_EMBEDDING_ENGINE, RAG_EMBEDDING_ENGINE,
RAG_EMBEDDING_BATCH_SIZE, RAG_EMBEDDING_BATCH_SIZE,
ENABLE_ASYNC_EMBEDDING,
RAG_TOP_K, RAG_TOP_K,
RAG_TOP_K_RERANKER, RAG_TOP_K_RERANKER,
RAG_RELEVANCE_THRESHOLD, RAG_RELEVANCE_THRESHOLD,
@ -268,20 +269,11 @@ from open_webui.config import (
EXTERNAL_DOCUMENT_LOADER_API_KEY, EXTERNAL_DOCUMENT_LOADER_API_KEY,
TIKA_SERVER_URL, TIKA_SERVER_URL,
DOCLING_SERVER_URL, DOCLING_SERVER_URL,
DOCLING_API_KEY,
DOCLING_PARAMS, DOCLING_PARAMS,
DOCLING_DO_OCR,
DOCLING_FORCE_OCR,
DOCLING_OCR_ENGINE,
DOCLING_OCR_LANG,
DOCLING_PDF_BACKEND,
DOCLING_TABLE_MODE,
DOCLING_PIPELINE,
DOCLING_DO_PICTURE_DESCRIPTION,
DOCLING_PICTURE_DESCRIPTION_MODE,
DOCLING_PICTURE_DESCRIPTION_LOCAL,
DOCLING_PICTURE_DESCRIPTION_API,
DOCUMENT_INTELLIGENCE_ENDPOINT, DOCUMENT_INTELLIGENCE_ENDPOINT,
DOCUMENT_INTELLIGENCE_KEY, DOCUMENT_INTELLIGENCE_KEY,
DOCUMENT_INTELLIGENCE_MODEL,
MISTRAL_OCR_API_BASE_URL, MISTRAL_OCR_API_BASE_URL,
MISTRAL_OCR_API_KEY, MISTRAL_OCR_API_KEY,
RAG_TEXT_SPLITTER, RAG_TEXT_SPLITTER,
@ -361,6 +353,7 @@ from open_webui.config import (
ENABLE_API_KEYS, ENABLE_API_KEYS,
ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS, ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS,
API_KEYS_ALLOWED_ENDPOINTS, API_KEYS_ALLOWED_ENDPOINTS,
ENABLE_FOLDERS,
ENABLE_CHANNELS, ENABLE_CHANNELS,
ENABLE_NOTES, ENABLE_NOTES,
ENABLE_COMMUNITY_SHARING, ENABLE_COMMUNITY_SHARING,
@ -776,6 +769,7 @@ app.state.config.WEBHOOK_URL = WEBHOOK_URL
app.state.config.BANNERS = WEBUI_BANNERS app.state.config.BANNERS = WEBUI_BANNERS
app.state.config.ENABLE_FOLDERS = ENABLE_FOLDERS
app.state.config.ENABLE_CHANNELS = ENABLE_CHANNELS app.state.config.ENABLE_CHANNELS = ENABLE_CHANNELS
app.state.config.ENABLE_NOTES = ENABLE_NOTES app.state.config.ENABLE_NOTES = ENABLE_NOTES
app.state.config.ENABLE_COMMUNITY_SHARING = ENABLE_COMMUNITY_SHARING app.state.config.ENABLE_COMMUNITY_SHARING = ENABLE_COMMUNITY_SHARING
@@ -874,20 +868,11 @@ app.state.config.EXTERNAL_DOCUMENT_LOADER_URL = EXTERNAL_DOCUMENT_LOADER_URL
app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY = EXTERNAL_DOCUMENT_LOADER_API_KEY
app.state.config.TIKA_SERVER_URL = TIKA_SERVER_URL
app.state.config.DOCLING_SERVER_URL = DOCLING_SERVER_URL
+app.state.config.DOCLING_API_KEY = DOCLING_API_KEY
app.state.config.DOCLING_PARAMS = DOCLING_PARAMS
-app.state.config.DOCLING_DO_OCR = DOCLING_DO_OCR
-app.state.config.DOCLING_FORCE_OCR = DOCLING_FORCE_OCR
-app.state.config.DOCLING_OCR_ENGINE = DOCLING_OCR_ENGINE
-app.state.config.DOCLING_OCR_LANG = DOCLING_OCR_LANG
-app.state.config.DOCLING_PDF_BACKEND = DOCLING_PDF_BACKEND
-app.state.config.DOCLING_TABLE_MODE = DOCLING_TABLE_MODE
-app.state.config.DOCLING_PIPELINE = DOCLING_PIPELINE
-app.state.config.DOCLING_DO_PICTURE_DESCRIPTION = DOCLING_DO_PICTURE_DESCRIPTION
-app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE = DOCLING_PICTURE_DESCRIPTION_MODE
-app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = DOCLING_PICTURE_DESCRIPTION_LOCAL
-app.state.config.DOCLING_PICTURE_DESCRIPTION_API = DOCLING_PICTURE_DESCRIPTION_API
app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = DOCUMENT_INTELLIGENCE_ENDPOINT
app.state.config.DOCUMENT_INTELLIGENCE_KEY = DOCUMENT_INTELLIGENCE_KEY
+app.state.config.DOCUMENT_INTELLIGENCE_MODEL = DOCUMENT_INTELLIGENCE_MODEL
app.state.config.MISTRAL_OCR_API_BASE_URL = MISTRAL_OCR_API_BASE_URL
app.state.config.MISTRAL_OCR_API_KEY = MISTRAL_OCR_API_KEY
app.state.config.MINERU_API_MODE = MINERU_API_MODE
@@ -904,6 +889,7 @@ app.state.config.CHUNK_OVERLAP = CHUNK_OVERLAP
app.state.config.RAG_EMBEDDING_ENGINE = RAG_EMBEDDING_ENGINE
app.state.config.RAG_EMBEDDING_MODEL = RAG_EMBEDDING_MODEL
app.state.config.RAG_EMBEDDING_BATCH_SIZE = RAG_EMBEDDING_BATCH_SIZE
+app.state.config.ENABLE_ASYNC_EMBEDDING = ENABLE_ASYNC_EMBEDDING
app.state.config.RAG_RERANKING_ENGINE = RAG_RERANKING_ENGINE
app.state.config.RAG_RERANKING_MODEL = RAG_RERANKING_MODEL
@@ -998,9 +984,7 @@ app.state.YOUTUBE_LOADER_TRANSLATION = None
try:
    app.state.ef = get_ef(
-        app.state.config.RAG_EMBEDDING_ENGINE,
-        app.state.config.RAG_EMBEDDING_MODEL,
-        RAG_EMBEDDING_MODEL_AUTO_UPDATE,
+        app.state.config.RAG_EMBEDDING_ENGINE, app.state.config.RAG_EMBEDDING_MODEL
    )
    if (
        app.state.config.ENABLE_RAG_HYBRID_SEARCH
@@ -1011,7 +995,6 @@ try:
            app.state.config.RAG_RERANKING_MODEL,
            app.state.config.RAG_EXTERNAL_RERANKER_URL,
            app.state.config.RAG_EXTERNAL_RERANKER_API_KEY,
-            RAG_RERANKING_MODEL_AUTO_UPDATE,
        )
    else:
        app.state.rf = None
@@ -1233,7 +1216,7 @@ app.state.config.VOICE_MODE_PROMPT_TEMPLATE = VOICE_MODE_PROMPT_TEMPLATE
#
########################################

-app.state.MODELS = {}
+app.state.MODELS = MODELS

# Add the middleware to the app
if ENABLE_COMPRESSION_MIDDLEWARE:
@@ -1593,6 +1576,7 @@ async def chat_completion(
        "user_id": user.id,
        "chat_id": form_data.pop("chat_id", None),
        "message_id": form_data.pop("id", None),
+        "parent_message_id": form_data.pop("parent_id", None),
        "session_id": form_data.pop("session_id", None),
        "filter_ids": form_data.pop("filter_ids", []),
        "tool_ids": form_data.get("tool_ids", None),
@@ -1649,6 +1633,7 @@ async def chat_completion(
            metadata["chat_id"],
            metadata["message_id"],
            {
+                "parentId": metadata.get("parent_message_id", None),
                "model": model_id,
            },
        )
@@ -1681,6 +1666,7 @@ async def chat_completion(
            metadata["chat_id"],
            metadata["message_id"],
            {
+                "parentId": metadata.get("parent_message_id", None),
                "error": {"content": str(e)},
            },
        )
@@ -1860,6 +1846,7 @@ async def get_app_config(request: Request):
        **(
            {
                "enable_direct_connections": app.state.config.ENABLE_DIRECT_CONNECTIONS,
+                "enable_folders": app.state.config.ENABLE_FOLDERS,
                "enable_channels": app.state.config.ENABLE_CHANNELS,
                "enable_notes": app.state.config.ENABLE_NOTES,
                "enable_web_search": app.state.config.ENABLE_WEB_SEARCH,
@@ -2032,7 +2019,10 @@ async def get_current_usage(user=Depends(get_verified_user)):
    This is an experimental endpoint and subject to change.
    """
    try:
-        return {"model_ids": get_models_in_use(), "user_ids": get_active_user_ids()}
+        return {
+            "model_ids": get_models_in_use(),
+            "user_count": Users.get_active_user_count(),
+        }
    except Exception as e:
        log.error(f"Error getting usage statistics: {e}")
        raise HTTPException(status_code=500, detail="Internal Server Error")
@@ -2095,7 +2085,7 @@ except Exception as e:
    )

-async def register_client(self, request, client_id: str) -> bool:
+async def register_client(request, client_id: str) -> bool:
    server_type, server_id = client_id.split(":", 1)
    connection = None


@@ -0,0 +1,103 @@
"""Update messages and channel member table

Revision ID: 2f1211949ecc
Revises: 37f288994c47
Create Date: 2025-11-27 03:07:56.200231

"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
import open_webui.internal.db
# revision identifiers, used by Alembic.
revision: str = "2f1211949ecc"
down_revision: Union[str, None] = "37f288994c47"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# New columns to be added to channel_member table
op.add_column("channel_member", sa.Column("status", sa.Text(), nullable=True))
op.add_column(
"channel_member",
sa.Column(
"is_active",
sa.Boolean(),
nullable=False,
default=True,
server_default=sa.sql.expression.true(),
),
)
op.add_column(
"channel_member",
sa.Column(
"is_channel_muted",
sa.Boolean(),
nullable=False,
default=False,
server_default=sa.sql.expression.false(),
),
)
op.add_column(
"channel_member",
sa.Column(
"is_channel_pinned",
sa.Boolean(),
nullable=False,
default=False,
server_default=sa.sql.expression.false(),
),
)
op.add_column("channel_member", sa.Column("data", sa.JSON(), nullable=True))
op.add_column("channel_member", sa.Column("meta", sa.JSON(), nullable=True))
op.add_column(
"channel_member", sa.Column("joined_at", sa.BigInteger(), nullable=False)
)
op.add_column(
"channel_member", sa.Column("left_at", sa.BigInteger(), nullable=True)
)
op.add_column(
"channel_member", sa.Column("last_read_at", sa.BigInteger(), nullable=True)
)
op.add_column(
"channel_member", sa.Column("updated_at", sa.BigInteger(), nullable=True)
)
# New columns to be added to message table
op.add_column(
"message",
sa.Column(
"is_pinned",
sa.Boolean(),
nullable=False,
default=False,
server_default=sa.sql.expression.false(),
),
)
op.add_column("message", sa.Column("pinned_at", sa.BigInteger(), nullable=True))
op.add_column("message", sa.Column("pinned_by", sa.Text(), nullable=True))
def downgrade() -> None:
op.drop_column("channel_member", "updated_at")
op.drop_column("channel_member", "last_read_at")
op.drop_column("channel_member", "meta")
op.drop_column("channel_member", "data")
op.drop_column("channel_member", "is_channel_pinned")
op.drop_column("channel_member", "is_channel_muted")
op.drop_column("message", "pinned_by")
op.drop_column("message", "pinned_at")
op.drop_column("message", "is_pinned")
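A quick way to see why the boolean columns in this migration pair `nullable=False` with a `server_default`: on a table that already has rows, SQLite (like most databases) refuses to add a NOT NULL column unless it can backfill existing rows with a default. A minimal standalone sketch, independent of Alembic:

```python
import sqlite3

# Populated table, as an existing deployment would have.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO message (id) VALUES ('m1')")

# Without a default, adding a NOT NULL column to a populated table fails.
try:
    conn.execute("ALTER TABLE message ADD COLUMN is_pinned BOOLEAN NOT NULL")
except sqlite3.OperationalError:
    pass  # "Cannot add a NOT NULL column with default value NULL"

# With a server-side default, existing rows are backfilled automatically,
# which is what server_default=sa.sql.expression.false() achieves above.
conn.execute("ALTER TABLE message ADD COLUMN is_pinned BOOLEAN NOT NULL DEFAULT 0")
row = conn.execute("SELECT is_pinned FROM message WHERE id = 'm1'").fetchone()
print(row[0])  # 0
```

The Python-side `default=` only applies to inserts made through SQLAlchemy; the `server_default` is what the ALTER TABLE itself needs.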


@@ -20,18 +20,46 @@ depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
+    # Ensure 'id' column in 'user' table is unique and primary key (ForeignKey constraint)
+    inspector = sa.inspect(op.get_bind())
+    columns = inspector.get_columns("user")
+    pk_columns = inspector.get_pk_constraint("user")["constrained_columns"]
+    id_column = next((col for col in columns if col["name"] == "id"), None)
+
+    if id_column and not id_column.get("unique", False):
+        unique_constraints = inspector.get_unique_constraints("user")
+        unique_columns = {tuple(u["column_names"]) for u in unique_constraints}
+
+        with op.batch_alter_table("user") as batch_op:
+            # If primary key is wrong, drop it
+            if pk_columns and pk_columns != ["id"]:
+                batch_op.drop_constraint(
+                    inspector.get_pk_constraint("user")["name"], type_="primary"
+                )
+            # Add unique constraint if missing
+            if ("id",) not in unique_columns:
+                batch_op.create_unique_constraint("uq_user_id", ["id"])
+            # Re-create correct primary key
+            batch_op.create_primary_key("pk_user_id", ["id"])
+
    # Create oauth_session table
    op.create_table(
        "oauth_session",
-        sa.Column("id", sa.Text(), nullable=False),
-        sa.Column("user_id", sa.Text(), nullable=False),
+        sa.Column("id", sa.Text(), primary_key=True, nullable=False, unique=True),
+        sa.Column(
+            "user_id",
+            sa.Text(),
+            sa.ForeignKey("user.id", ondelete="CASCADE"),
+            nullable=False,
+        ),
        sa.Column("provider", sa.Text(), nullable=False),
        sa.Column("token", sa.Text(), nullable=False),
        sa.Column("expires_at", sa.BigInteger(), nullable=False),
        sa.Column("created_at", sa.BigInteger(), nullable=False),
        sa.Column("updated_at", sa.BigInteger(), nullable=False),
-        sa.PrimaryKeyConstraint("id"),
-        sa.ForeignKeyConstraint(["user_id"], ["user.id"], ondelete="CASCADE"),
    )

    # Create indexes for better performance


@@ -0,0 +1,169 @@
"""Add knowledge_file table

Revision ID: 3e0e00844bb0
Revises: 90ef40d4714e
Create Date: 2025-12-02 06:54:19.401334

"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
from sqlalchemy import inspect
import open_webui.internal.db
import time
import json
import uuid
# revision identifiers, used by Alembic.
revision: str = "3e0e00844bb0"
down_revision: Union[str, None] = "90ef40d4714e"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.create_table(
"knowledge_file",
sa.Column("id", sa.Text(), primary_key=True),
sa.Column("user_id", sa.Text(), nullable=False),
sa.Column(
"knowledge_id",
sa.Text(),
sa.ForeignKey("knowledge.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column(
"file_id",
sa.Text(),
sa.ForeignKey("file.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("created_at", sa.BigInteger(), nullable=False),
sa.Column("updated_at", sa.BigInteger(), nullable=False),
# indexes
sa.Index("ix_knowledge_file_knowledge_id", "knowledge_id"),
sa.Index("ix_knowledge_file_file_id", "file_id"),
sa.Index("ix_knowledge_file_user_id", "user_id"),
# unique constraints
sa.UniqueConstraint(
"knowledge_id", "file_id", name="uq_knowledge_file_knowledge_file"
), # prevent duplicate entries
)
connection = op.get_bind()
    # 2. Read existing knowledge rows with their JSON data column
knowledge_table = sa.Table(
"knowledge",
sa.MetaData(),
sa.Column("id", sa.Text()),
sa.Column("user_id", sa.Text()),
sa.Column("data", sa.JSON()), # JSON stored as text in SQLite + PG
)
results = connection.execute(
sa.select(
knowledge_table.c.id, knowledge_table.c.user_id, knowledge_table.c.data
)
).fetchall()
    # 3. Insert rows into the knowledge_file table
kf_table = sa.Table(
"knowledge_file",
sa.MetaData(),
sa.Column("id", sa.Text()),
sa.Column("user_id", sa.Text()),
sa.Column("knowledge_id", sa.Text()),
sa.Column("file_id", sa.Text()),
sa.Column("created_at", sa.BigInteger()),
sa.Column("updated_at", sa.BigInteger()),
)
file_table = sa.Table(
"file",
sa.MetaData(),
sa.Column("id", sa.Text()),
)
now = int(time.time())
for knowledge_id, user_id, data in results:
if not data:
continue
if isinstance(data, str):
try:
data = json.loads(data)
except Exception:
continue # skip invalid JSON
if not isinstance(data, dict):
continue
file_ids = data.get("file_ids", [])
for file_id in file_ids:
file_exists = connection.execute(
sa.select(file_table.c.id).where(file_table.c.id == file_id)
).fetchone()
if not file_exists:
continue # skip non-existing files
row = {
"id": str(uuid.uuid4()),
"user_id": user_id,
"knowledge_id": knowledge_id,
"file_id": file_id,
"created_at": now,
"updated_at": now,
}
connection.execute(kf_table.insert().values(**row))
with op.batch_alter_table("knowledge") as batch:
batch.drop_column("data")
def downgrade() -> None:
# 1. Add back the old data column
op.add_column("knowledge", sa.Column("data", sa.JSON(), nullable=True))
connection = op.get_bind()
# 2. Read knowledge_file entries and reconstruct data JSON
knowledge_table = sa.Table(
"knowledge",
sa.MetaData(),
sa.Column("id", sa.Text()),
sa.Column("data", sa.JSON()),
)
kf_table = sa.Table(
"knowledge_file",
sa.MetaData(),
sa.Column("id", sa.Text()),
sa.Column("knowledge_id", sa.Text()),
sa.Column("file_id", sa.Text()),
)
results = connection.execute(sa.select(knowledge_table.c.id)).fetchall()
for (knowledge_id,) in results:
file_ids = connection.execute(
sa.select(kf_table.c.file_id).where(kf_table.c.knowledge_id == knowledge_id)
).fetchall()
file_ids_list = [fid for (fid,) in file_ids]
data_json = {"file_ids": file_ids_list}
connection.execute(
knowledge_table.update()
.where(knowledge_table.c.id == knowledge_id)
.values(data=data_json)
)
# 3. Drop the knowledge_file table
op.drop_table("knowledge_file")
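The data-migration step in `upgrade()` above can be sketched as a pure function: each knowledge row's JSON `data` blob is exploded into one join-table row per valid file reference, skipping invalid JSON and dangling file ids exactly as the migration does. This is an illustrative re-implementation for reasoning about the transform (the helper name `explode_file_ids` is mine, not part of the migration):

```python
import json
import uuid

def explode_file_ids(knowledge_rows, existing_file_ids, now=0):
    """Turn (knowledge_id, user_id, data) tuples into knowledge_file rows."""
    rows = []
    for knowledge_id, user_id, data in knowledge_rows:
        if isinstance(data, str):
            try:
                data = json.loads(data)
            except Exception:
                continue  # skip invalid JSON, as the migration does
        if not isinstance(data, dict):
            continue
        for file_id in data.get("file_ids", []):
            if file_id not in existing_file_ids:
                continue  # skip references to deleted files
            rows.append({
                "id": str(uuid.uuid4()),
                "user_id": user_id,
                "knowledge_id": knowledge_id,
                "file_id": file_id,
                "created_at": now,
                "updated_at": now,
            })
    return rows

rows = explode_file_ids(
    [("k1", "u1", '{"file_ids": ["f1", "f2", "missing"]}'),
     ("k2", "u1", "not json")],
    existing_file_ids={"f1", "f2"},
)
print([r["file_id"] for r in rows])  # ['f1', 'f2']
```

The unique constraint `uq_knowledge_file_knowledge_file` then guards against the same (knowledge_id, file_id) pair being inserted twice on re-runs.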


@@ -0,0 +1,81 @@
"""Update channel and channel members table

Revision ID: 90ef40d4714e
Revises: b10670c03dd5
Create Date: 2025-11-30 06:33:38.790341

"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
import open_webui.internal.db
# revision identifiers, used by Alembic.
revision: str = "90ef40d4714e"
down_revision: Union[str, None] = "b10670c03dd5"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Update 'channel' table
op.add_column("channel", sa.Column("is_private", sa.Boolean(), nullable=True))
op.add_column("channel", sa.Column("archived_at", sa.BigInteger(), nullable=True))
op.add_column("channel", sa.Column("archived_by", sa.Text(), nullable=True))
op.add_column("channel", sa.Column("deleted_at", sa.BigInteger(), nullable=True))
op.add_column("channel", sa.Column("deleted_by", sa.Text(), nullable=True))
op.add_column("channel", sa.Column("updated_by", sa.Text(), nullable=True))
# Update 'channel_member' table
op.add_column("channel_member", sa.Column("role", sa.Text(), nullable=True))
op.add_column("channel_member", sa.Column("invited_by", sa.Text(), nullable=True))
op.add_column(
"channel_member", sa.Column("invited_at", sa.BigInteger(), nullable=True)
)
# Create 'channel_webhook' table
op.create_table(
"channel_webhook",
sa.Column("id", sa.Text(), primary_key=True, unique=True, nullable=False),
sa.Column("user_id", sa.Text(), nullable=False),
sa.Column(
"channel_id",
sa.Text(),
sa.ForeignKey("channel.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("name", sa.Text(), nullable=False),
sa.Column("profile_image_url", sa.Text(), nullable=True),
sa.Column("token", sa.Text(), nullable=False),
sa.Column("last_used_at", sa.BigInteger(), nullable=True),
sa.Column("created_at", sa.BigInteger(), nullable=False),
sa.Column("updated_at", sa.BigInteger(), nullable=False),
)
pass
def downgrade() -> None:
# Downgrade 'channel' table
op.drop_column("channel", "is_private")
op.drop_column("channel", "archived_at")
op.drop_column("channel", "archived_by")
op.drop_column("channel", "deleted_at")
op.drop_column("channel", "deleted_by")
op.drop_column("channel", "updated_by")
# Downgrade 'channel_member' table
op.drop_column("channel_member", "role")
op.drop_column("channel_member", "invited_by")
op.drop_column("channel_member", "invited_at")
# Drop 'channel_webhook' table
op.drop_table("channel_webhook")
pass


@@ -0,0 +1,251 @@
"""Update user table

Revision ID: b10670c03dd5
Revises: 2f1211949ecc
Create Date: 2025-11-28 04:55:31.737538

"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
import open_webui.internal.db
import json
import time
# revision identifiers, used by Alembic.
revision: str = "b10670c03dd5"
down_revision: Union[str, None] = "2f1211949ecc"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def _drop_sqlite_indexes_for_column(table_name, column_name, conn):
"""
SQLite requires manual removal of any indexes referencing a column
before ALTER TABLE ... DROP COLUMN can succeed.
"""
indexes = conn.execute(sa.text(f"PRAGMA index_list('{table_name}')")).fetchall()
for idx in indexes:
index_name = idx[1] # index name
# Get indexed columns
idx_info = conn.execute(
sa.text(f"PRAGMA index_info('{index_name}')")
).fetchall()
indexed_cols = [row[2] for row in idx_info] # col names
if column_name in indexed_cols:
conn.execute(sa.text(f"DROP INDEX IF EXISTS {index_name}"))
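The PRAGMA-based helper above can be exercised against an in-memory database to confirm the behavior it relies on: `PRAGMA index_list` enumerates a table's indexes, `PRAGMA index_info` reveals which columns each one covers, and dropping the matching indexes is what lets a later `DROP COLUMN` succeed. A standalone sketch using plain `sqlite3`, not the migration code itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id TEXT PRIMARY KEY, api_key TEXT)")
conn.execute("CREATE INDEX idx_user_api_key ON user (api_key)")

# Find and drop every index that references the doomed column.
dropped = []
for idx in conn.execute("PRAGMA index_list('user')").fetchall():
    index_name = idx[1]  # (seq, name, unique, origin, partial)
    cols = [row[2] for row in conn.execute(f"PRAGMA index_info('{index_name}')")]
    if "api_key" in cols:
        conn.execute(f"DROP INDEX IF EXISTS {index_name}")
        dropped.append(index_name)

try:
    conn.execute('ALTER TABLE "user" DROP COLUMN api_key')  # SQLite >= 3.35
except sqlite3.OperationalError:
    pass  # very old SQLite builds lack DROP COLUMN entirely

print(dropped)  # ['idx_user_api_key']
```

Note that the implicit `sqlite_autoindex_*` created for the TEXT primary key covers only `id`, so it is correctly left alone.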
def _convert_column_to_json(table: str, column: str):
conn = op.get_bind()
dialect = conn.dialect.name
# SQLite cannot ALTER COLUMN → must recreate column
if dialect == "sqlite":
# 1. Add temporary column
op.add_column(table, sa.Column(f"{column}_json", sa.JSON(), nullable=True))
# 2. Load old data
rows = conn.execute(sa.text(f'SELECT id, {column} FROM "{table}"')).fetchall()
for row in rows:
uid, raw = row
if raw is None:
parsed = None
else:
try:
parsed = json.loads(raw)
except Exception:
parsed = None # fallback safe behavior
conn.execute(
sa.text(f'UPDATE "{table}" SET {column}_json = :val WHERE id = :id'),
{"val": json.dumps(parsed) if parsed else None, "id": uid},
)
# 3. Drop old TEXT column
op.drop_column(table, column)
# 4. Rename new JSON column → original name
op.alter_column(table, f"{column}_json", new_column_name=column)
else:
# PostgreSQL supports direct CAST
op.alter_column(
table,
column,
type_=sa.JSON(),
postgresql_using=f"{column}::json",
)
def _convert_column_to_text(table: str, column: str):
conn = op.get_bind()
dialect = conn.dialect.name
if dialect == "sqlite":
op.add_column(table, sa.Column(f"{column}_text", sa.Text(), nullable=True))
rows = conn.execute(sa.text(f'SELECT id, {column} FROM "{table}"')).fetchall()
for uid, raw in rows:
conn.execute(
sa.text(f'UPDATE "{table}" SET {column}_text = :val WHERE id = :id'),
{"val": json.dumps(raw) if raw else None, "id": uid},
)
op.drop_column(table, column)
op.alter_column(table, f"{column}_text", new_column_name=column)
else:
op.alter_column(
table,
column,
type_=sa.Text(),
postgresql_using=f"to_json({column})::text",
)
def upgrade() -> None:
op.add_column(
"user", sa.Column("profile_banner_image_url", sa.Text(), nullable=True)
)
op.add_column("user", sa.Column("timezone", sa.String(), nullable=True))
op.add_column("user", sa.Column("presence_state", sa.String(), nullable=True))
op.add_column("user", sa.Column("status_emoji", sa.String(), nullable=True))
op.add_column("user", sa.Column("status_message", sa.Text(), nullable=True))
op.add_column(
"user", sa.Column("status_expires_at", sa.BigInteger(), nullable=True)
)
op.add_column("user", sa.Column("oauth", sa.JSON(), nullable=True))
# Convert info (TEXT/JSONField) → JSON
_convert_column_to_json("user", "info")
# Convert settings (TEXT/JSONField) → JSON
_convert_column_to_json("user", "settings")
op.create_table(
"api_key",
sa.Column("id", sa.Text(), primary_key=True, unique=True),
sa.Column("user_id", sa.Text(), sa.ForeignKey("user.id", ondelete="CASCADE")),
sa.Column("key", sa.Text(), unique=True, nullable=False),
sa.Column("data", sa.JSON(), nullable=True),
sa.Column("expires_at", sa.BigInteger(), nullable=True),
sa.Column("last_used_at", sa.BigInteger(), nullable=True),
sa.Column("created_at", sa.BigInteger(), nullable=False),
sa.Column("updated_at", sa.BigInteger(), nullable=False),
)
conn = op.get_bind()
users = conn.execute(
sa.text('SELECT id, oauth_sub FROM "user" WHERE oauth_sub IS NOT NULL')
).fetchall()
for uid, oauth_sub in users:
if oauth_sub:
# Example formats supported:
# provider@sub
# plain sub (stored as {"oidc": {"sub": sub}})
if "@" in oauth_sub:
provider, sub = oauth_sub.split("@", 1)
else:
provider, sub = "oidc", oauth_sub
oauth_json = json.dumps({provider: {"sub": sub}})
conn.execute(
sa.text('UPDATE "user" SET oauth = :oauth WHERE id = :id'),
{"oauth": oauth_json, "id": uid},
)
users_with_keys = conn.execute(
sa.text('SELECT id, api_key FROM "user" WHERE api_key IS NOT NULL')
).fetchall()
now = int(time.time())
for uid, api_key in users_with_keys:
if api_key:
conn.execute(
sa.text(
"""
INSERT INTO api_key (id, user_id, key, created_at, updated_at)
VALUES (:id, :user_id, :key, :created_at, :updated_at)
"""
),
{
"id": f"key_{uid}",
"user_id": uid,
"key": api_key,
"created_at": now,
"updated_at": now,
},
)
if conn.dialect.name == "sqlite":
_drop_sqlite_indexes_for_column("user", "api_key", conn)
_drop_sqlite_indexes_for_column("user", "oauth_sub", conn)
with op.batch_alter_table("user") as batch_op:
batch_op.drop_column("api_key")
batch_op.drop_column("oauth_sub")
def downgrade() -> None:
# --- 1. Restore old oauth_sub column ---
op.add_column("user", sa.Column("oauth_sub", sa.Text(), nullable=True))
conn = op.get_bind()
users = conn.execute(
sa.text('SELECT id, oauth FROM "user" WHERE oauth IS NOT NULL')
).fetchall()
for uid, oauth in users:
try:
data = json.loads(oauth)
provider = list(data.keys())[0]
sub = data[provider].get("sub")
oauth_sub = f"{provider}@{sub}"
except Exception:
oauth_sub = None
conn.execute(
sa.text('UPDATE "user" SET oauth_sub = :oauth_sub WHERE id = :id'),
{"oauth_sub": oauth_sub, "id": uid},
)
op.drop_column("user", "oauth")
# --- 2. Restore api_key field ---
op.add_column("user", sa.Column("api_key", sa.String(), nullable=True))
# Restore values from api_key
keys = conn.execute(sa.text("SELECT user_id, key FROM api_key")).fetchall()
for uid, key in keys:
conn.execute(
sa.text('UPDATE "user" SET api_key = :key WHERE id = :id'),
{"key": key, "id": uid},
)
# Drop new table
op.drop_table("api_key")
with op.batch_alter_table("user") as batch_op:
batch_op.drop_column("profile_banner_image_url")
batch_op.drop_column("timezone")
batch_op.drop_column("presence_state")
batch_op.drop_column("status_emoji")
batch_op.drop_column("status_message")
batch_op.drop_column("status_expires_at")
# Convert info (JSON) → TEXT
_convert_column_to_text("user", "info")
# Convert settings (JSON) → TEXT
_convert_column_to_text("user", "settings")
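The `oauth_sub` conversion in `upgrade()` and its reverse in `downgrade()` amount to a small string-to-dict round trip: the legacy `provider@sub` string becomes `{provider: {"sub": sub}}`, and a bare sub falls back to the `"oidc"` provider. A hedged standalone sketch (the helper names are illustrative only, not part of the migration):

```python
import json

def oauth_sub_to_json(oauth_sub):
    """Mirror of the upgrade path: 'provider@sub' -> {provider: {'sub': sub}}."""
    if "@" in oauth_sub:
        provider, sub = oauth_sub.split("@", 1)
    else:
        provider, sub = "oidc", oauth_sub  # plain sub defaults to 'oidc'
    return {provider: {"sub": sub}}

def json_to_oauth_sub(oauth):
    """Mirror of the downgrade path: take the first provider entry back out."""
    data = json.loads(oauth) if isinstance(oauth, str) else oauth
    provider = list(data.keys())[0]
    return f"{provider}@{data[provider].get('sub')}"

print(json_to_oauth_sub(oauth_sub_to_json("google@123")))  # google@123
```

As in the migration, the downgrade direction is lossy when a user has more than one provider entry: only the first key survives as `oauth_sub`.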


@@ -3,7 +3,7 @@ import uuid
from typing import Optional

from open_webui.internal.db import Base, get_db
-from open_webui.models.users import UserModel, Users
+from open_webui.models.users import UserModel, UserProfileImageResponse, Users
from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel
from sqlalchemy import Boolean, Column, String, Text
@@ -46,15 +46,7 @@ class ApiKey(BaseModel):
    api_key: Optional[str] = None

-class UserResponse(BaseModel):
-    id: str
-    email: str
-    name: str
-    role: str
-    profile_image_url: str
-
-class SigninResponse(Token, UserResponse):
+class SigninResponse(Token, UserProfileImageResponse):
    pass
@@ -96,7 +88,7 @@
        name: str,
        profile_image_url: str = "/user.png",
        role: str = "pending",
-        oauth_sub: Optional[str] = None,
+        oauth: Optional[dict] = None,
    ) -> Optional[UserModel]:
        with get_db() as db:
            log.info("insert_new_auth")
@@ -110,7 +102,7 @@
            db.add(result)

            user = Users.insert_new_user(
-                id, name, email, profile_image_url, role, oauth_sub
+                id, name, email, profile_image_url, role, oauth=oauth
            )

            db.commit()


@@ -4,10 +4,13 @@ import uuid
from typing import Optional

from open_webui.internal.db import Base, get_db
-from open_webui.utils.access_control import has_access
+from open_webui.models.groups import Groups
from pydantic import BaseModel, ConfigDict

-from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON
+from sqlalchemy.dialects.postgresql import JSONB
+from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON, case, cast
from sqlalchemy import or_, func, select, and_, text
from sqlalchemy.sql import exists
@@ -26,12 +29,23 @@ class Channel(Base):
    name = Column(Text)
    description = Column(Text, nullable=True)

+    # Used to indicate if the channel is private (for 'group' type channels)
+    is_private = Column(Boolean, nullable=True)
+
    data = Column(JSON, nullable=True)
    meta = Column(JSON, nullable=True)
    access_control = Column(JSON, nullable=True)

    created_at = Column(BigInteger)
    updated_at = Column(BigInteger)
+    updated_by = Column(Text, nullable=True)
+
+    archived_at = Column(BigInteger, nullable=True)
+    archived_by = Column(Text, nullable=True)
+
+    deleted_at = Column(BigInteger, nullable=True)
+    deleted_by = Column(Text, nullable=True)


class ChannelModel(BaseModel):
@@ -39,17 +53,122 @@ class ChannelModel(BaseModel):
    id: str
    user_id: str
    type: Optional[str] = None
    name: str
    description: Optional[str] = None
+    is_private: Optional[bool] = None
    data: Optional[dict] = None
    meta: Optional[dict] = None
    access_control: Optional[dict] = None
-    created_at: int  # timestamp in epoch
-    updated_at: int  # timestamp in epoch
+    created_at: int  # timestamp in epoch (time_ns)
+    updated_at: int  # timestamp in epoch (time_ns)
+    updated_by: Optional[str] = None
+
+    archived_at: Optional[int] = None  # timestamp in epoch (time_ns)
+    archived_by: Optional[str] = None
+
+    deleted_at: Optional[int] = None  # timestamp in epoch (time_ns)
+    deleted_by: Optional[str] = None
+
+
+class ChannelMember(Base):
+    __tablename__ = "channel_member"
+
+    id = Column(Text, primary_key=True, unique=True)
+    channel_id = Column(Text, nullable=False)
+    user_id = Column(Text, nullable=False)
+
+    role = Column(Text, nullable=True)
+    status = Column(Text, nullable=True)
+
+    is_active = Column(Boolean, nullable=False, default=True)
+    is_channel_muted = Column(Boolean, nullable=False, default=False)
+    is_channel_pinned = Column(Boolean, nullable=False, default=False)
+
+    data = Column(JSON, nullable=True)
+    meta = Column(JSON, nullable=True)
+
+    invited_at = Column(BigInteger, nullable=True)
+    invited_by = Column(Text, nullable=True)
+
+    joined_at = Column(BigInteger)
+    left_at = Column(BigInteger, nullable=True)
+    last_read_at = Column(BigInteger, nullable=True)
+
+    created_at = Column(BigInteger)
+    updated_at = Column(BigInteger)
+
+
+class ChannelMemberModel(BaseModel):
+    model_config = ConfigDict(from_attributes=True)
+
+    id: str
+    channel_id: str
+    user_id: str
+
+    role: Optional[str] = None
+    status: Optional[str] = None
+
+    is_active: bool = True
+    is_channel_muted: bool = False
+    is_channel_pinned: bool = False
+
+    data: Optional[dict] = None
+    meta: Optional[dict] = None
+
+    invited_at: Optional[int] = None  # timestamp in epoch (time_ns)
+    invited_by: Optional[str] = None
+
+    joined_at: Optional[int] = None  # timestamp in epoch (time_ns)
+    left_at: Optional[int] = None  # timestamp in epoch (time_ns)
+    last_read_at: Optional[int] = None  # timestamp in epoch (time_ns)
+
+    created_at: Optional[int] = None  # timestamp in epoch (time_ns)
+    updated_at: Optional[int] = None  # timestamp in epoch (time_ns)
+
+
+class ChannelWebhook(Base):
+    __tablename__ = "channel_webhook"
+
+    id = Column(Text, primary_key=True, unique=True)
+    channel_id = Column(Text, nullable=False)
+    user_id = Column(Text, nullable=False)
+
+    name = Column(Text, nullable=False)
+    profile_image_url = Column(Text, nullable=True)
+
+    token = Column(Text, nullable=False)
+    last_used_at = Column(BigInteger, nullable=True)
+
+    created_at = Column(BigInteger, nullable=False)
+    updated_at = Column(BigInteger, nullable=False)
+
+
+class ChannelWebhookModel(BaseModel):
+    model_config = ConfigDict(from_attributes=True)
+
+    id: str
+    channel_id: str
+    user_id: str
+
+    name: str
+    profile_image_url: Optional[str] = None
+
+    token: str
+    last_used_at: Optional[int] = None  # timestamp in epoch (time_ns)
+
+    created_at: int  # timestamp in epoch (time_ns)
+    updated_at: int  # timestamp in epoch (time_ns)


####################
@ -58,26 +177,94 @@ class ChannelModel(BaseModel):
class ChannelResponse(ChannelModel): class ChannelResponse(ChannelModel):
is_manager: bool = False
write_access: bool = False write_access: bool = False
user_count: Optional[int] = None
class ChannelForm(BaseModel): class ChannelForm(BaseModel):
name: str name: str = ""
description: Optional[str] = None description: Optional[str] = None
is_private: Optional[bool] = None
data: Optional[dict] = None data: Optional[dict] = None
meta: Optional[dict] = None meta: Optional[dict] = None
access_control: Optional[dict] = None access_control: Optional[dict] = None
group_ids: Optional[list[str]] = None
user_ids: Optional[list[str]] = None
class CreateChannelForm(ChannelForm):
type: Optional[str] = None
class ChannelTable: class ChannelTable:
def _collect_unique_user_ids(
self,
invited_by: str,
user_ids: Optional[list[str]] = None,
group_ids: Optional[list[str]] = None,
) -> set[str]:
"""
Collect unique user ids from:
- invited_by
- user_ids
- each group in group_ids
Returns a set for efficient SQL diffing.
"""
users = set(user_ids or [])
users.add(invited_by)
for group_id in group_ids or []:
users.update(Groups.get_group_user_ids_by_id(group_id))
return users
def _create_membership_models(
self,
channel_id: str,
invited_by: str,
user_ids: set[str],
) -> list[ChannelMember]:
"""
Takes a set of NEW user IDs (already filtered to exclude existing members).
Returns ORM ChannelMember objects to be added.
"""
now = int(time.time_ns())
memberships = []
for uid in user_ids:
model = ChannelMemberModel(
**{
"id": str(uuid.uuid4()),
"channel_id": channel_id,
"user_id": uid,
"status": "joined",
"is_active": True,
"is_channel_muted": False,
"is_channel_pinned": False,
"invited_at": now,
"invited_by": invited_by,
"joined_at": now,
"left_at": None,
"last_read_at": now,
"created_at": now,
"updated_at": now,
}
)
memberships.append(ChannelMember(**model.model_dump()))
return memberships
def insert_new_channel(
self, form_data: CreateChannelForm, user_id: str
) -> Optional[ChannelModel]:
with get_db() as db:
channel = ChannelModel(
**{
**form_data.model_dump(),
"type": form_data.type if form_data.type else None,
"name": form_data.name.lower(),
"id": str(uuid.uuid4()),
"user_id": user_id,
@@ -85,9 +272,21 @@ class ChannelTable:
"updated_at": int(time.time_ns()),
}
)
new_channel = Channel(**channel.model_dump())
if form_data.type in ["group", "dm"]:
users = self._collect_unique_user_ids(
invited_by=user_id,
user_ids=form_data.user_ids,
group_ids=form_data.group_ids,
)
memberships = self._create_membership_models(
channel_id=new_channel.id,
invited_by=user_id,
user_ids=users,
)
db.add_all(memberships)
db.add(new_channel)
db.commit()
return channel
@@ -97,16 +296,346 @@ class ChannelTable:
channels = db.query(Channel).all()
return [ChannelModel.model_validate(channel) for channel in channels]
def _has_permission(self, db, query, filter: dict, permission: str = "read"):
group_ids = filter.get("group_ids", [])
user_id = filter.get("user_id")
dialect_name = db.bind.dialect.name
# Public access
conditions = []
if group_ids or user_id:
conditions.extend(
[
Channel.access_control.is_(None),
cast(Channel.access_control, String) == "null",
]
)
# User-level permission
if user_id:
conditions.append(Channel.user_id == user_id)
# Group-level permission
if group_ids:
group_conditions = []
for gid in group_ids:
if dialect_name == "sqlite":
group_conditions.append(
Channel.access_control[permission]["group_ids"].contains([gid])
)
elif dialect_name == "postgresql":
group_conditions.append(
cast(
Channel.access_control[permission]["group_ids"],
JSONB,
).contains([gid])
)
conditions.append(or_(*group_conditions))
if conditions:
query = query.filter(or_(*conditions))
return query
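In plain terms, the filter admits a channel when any one of three OR-ed conditions holds: the channel is public (`access_control` is NULL), the user owns it, or one of the user's groups appears in the permission's `group_ids` list. A minimal in-memory restatement over hypothetical dict-shaped rows, not the ORM:

```python
def channel_visible(channel, user_id, group_ids, permission="read"):
    ac = channel.get("access_control")
    if ac is None:                         # public channel
        return True
    if channel.get("user_id") == user_id:  # owner always passes
        return True
    allowed = ac.get(permission, {}).get("group_ids", [])
    return bool(set(allowed) & set(group_ids))  # any shared group

public = {"user_id": "a", "access_control": None}
private = {"user_id": "a", "access_control": {"read": {"group_ids": ["g1"]}}}
assert channel_visible(public, "b", [])
assert channel_visible(private, "b", ["g1"])
assert not channel_visible(private, "b", ["g2"])
```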
def get_channels_by_user_id(self, user_id: str) -> list[ChannelModel]:
with get_db() as db:
user_group_ids = [
group.id for group in Groups.get_groups_by_member_id(user_id)
]
membership_channels = (
db.query(Channel)
.join(ChannelMember, Channel.id == ChannelMember.channel_id)
.filter(
Channel.deleted_at.is_(None),
Channel.archived_at.is_(None),
Channel.type.in_(["group", "dm"]),
ChannelMember.user_id == user_id,
ChannelMember.is_active.is_(True),
)
.all()
)
query = db.query(Channel).filter(
Channel.deleted_at.is_(None),
Channel.archived_at.is_(None),
or_(
Channel.type.is_(None), # True NULL/None
Channel.type == "", # Empty string
and_(Channel.type != "group", Channel.type != "dm"),
),
)
query = self._has_permission(
db, query, {"user_id": user_id, "group_ids": user_group_ids}
)
standard_channels = query.all()
all_channels = membership_channels + standard_channels
return [ChannelModel.model_validate(c) for c in all_channels]
def get_dm_channel_by_user_ids(self, user_ids: list[str]) -> Optional[ChannelModel]:
with get_db() as db:
# Ensure uniqueness in case a list with duplicates is passed
unique_user_ids = list(set(user_ids))
match_count = func.sum(
case(
(ChannelMember.user_id.in_(unique_user_ids), 1),
else_=0,
)
)
subquery = (
db.query(ChannelMember.channel_id)
.group_by(ChannelMember.channel_id)
# 1. Channel must have exactly len(user_ids) members
.having(func.count(ChannelMember.user_id) == len(unique_user_ids))
# 2. All those members must be in unique_user_ids
.having(match_count == len(unique_user_ids))
.subquery()
)
channel = (
db.query(Channel)
.filter(
Channel.id.in_(subquery),
Channel.type == "dm",
)
.first()
)
return ChannelModel.model_validate(channel) if channel else None
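The two HAVING clauses together enforce exact set equality between a channel's member list and the requested users; restated in plain Python:

```python
def exact_dm_match(member_ids, requested_ids):
    requested = set(requested_ids)  # dedupe, as the query does
    members = list(member_ids)
    # 1. member count must equal len(requested)
    # 2. every member must be one of the requested users
    return len(members) == len(requested) and all(m in requested for m in members)

assert exact_dm_match(["u1", "u2"], ["u2", "u1", "u1"])      # duplicates deduped
assert not exact_dm_match(["u1", "u2", "u3"], ["u1", "u2"])  # superset fails
assert not exact_dm_match(["u1"], ["u1", "u2"])              # subset fails
```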
def add_members_to_channel(
self,
channel_id: str,
invited_by: str,
user_ids: Optional[list[str]] = None,
group_ids: Optional[list[str]] = None,
) -> list[ChannelMemberModel]:
with get_db() as db:
# 1. Collect all user_ids including groups + inviter
requested_users = self._collect_unique_user_ids(
invited_by, user_ids, group_ids
)
existing_users = {
row.user_id
for row in db.query(ChannelMember.user_id)
.filter(ChannelMember.channel_id == channel_id)
.all()
}
new_user_ids = requested_users - existing_users
if not new_user_ids:
return [] # Nothing to add
new_memberships = self._create_membership_models(
channel_id, invited_by, new_user_ids
)
db.add_all(new_memberships)
db.commit()
return [
ChannelMemberModel.model_validate(membership)
for membership in new_memberships
]
def remove_members_from_channel(
self,
channel_id: str,
user_ids: list[str],
) -> int:
with get_db() as db:
result = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id.in_(user_ids),
)
.delete(synchronize_session=False)
)
db.commit()
return result # number of rows deleted
def is_user_channel_manager(self, channel_id: str, user_id: str) -> bool:
with get_db() as db:
# Check if the user is the creator of the channel
# or has a 'manager' role in ChannelMember
channel = db.query(Channel).filter(Channel.id == channel_id).first()
if channel and channel.user_id == user_id:
return True
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id == user_id,
ChannelMember.role == "manager",
)
.first()
)
return membership is not None
def join_channel(
self, channel_id: str, user_id: str
) -> Optional[ChannelMemberModel]:
with get_db() as db:
# Check if the membership already exists
existing_membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id == user_id,
)
.first()
)
if existing_membership:
return ChannelMemberModel.model_validate(existing_membership)
# Create new membership
channel_member = ChannelMemberModel(
**{
"id": str(uuid.uuid4()),
"channel_id": channel_id,
"user_id": user_id,
"status": "joined",
"is_active": True,
"is_channel_muted": False,
"is_channel_pinned": False,
"joined_at": int(time.time_ns()),
"left_at": None,
"last_read_at": int(time.time_ns()),
"created_at": int(time.time_ns()),
"updated_at": int(time.time_ns()),
}
)
new_membership = ChannelMember(**channel_member.model_dump())
db.add(new_membership)
db.commit()
return channel_member
def leave_channel(self, channel_id: str, user_id: str) -> bool:
with get_db() as db:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id == user_id,
)
.first()
)
if not membership:
return False
membership.status = "left"
membership.is_active = False
membership.left_at = int(time.time_ns())
membership.updated_at = int(time.time_ns())
db.commit()
return True
def get_member_by_channel_and_user_id(
self, channel_id: str, user_id: str
) -> Optional[ChannelMemberModel]:
with get_db() as db:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id == user_id,
)
.first()
)
return ChannelMemberModel.model_validate(membership) if membership else None
def get_members_by_channel_id(self, channel_id: str) -> list[ChannelMemberModel]:
with get_db() as db:
memberships = (
db.query(ChannelMember)
.filter(ChannelMember.channel_id == channel_id)
.all()
)
return [
ChannelMemberModel.model_validate(membership)
for membership in memberships
]
def pin_channel(self, channel_id: str, user_id: str, is_pinned: bool) -> bool:
with get_db() as db:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id == user_id,
)
.first()
)
if not membership:
return False
membership.is_channel_pinned = is_pinned
membership.updated_at = int(time.time_ns())
db.commit()
return True
def update_member_last_read_at(self, channel_id: str, user_id: str) -> bool:
with get_db() as db:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id == user_id,
)
.first()
)
if not membership:
return False
membership.last_read_at = int(time.time_ns())
membership.updated_at = int(time.time_ns())
db.commit()
return True
def update_member_active_status(
self, channel_id: str, user_id: str, is_active: bool
) -> bool:
with get_db() as db:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id == user_id,
)
.first()
)
if not membership:
return False
membership.is_active = is_active
membership.updated_at = int(time.time_ns())
db.commit()
return True
def is_user_channel_member(self, channel_id: str, user_id: str) -> bool:
with get_db() as db:
membership = (
db.query(ChannelMember)
.filter(
ChannelMember.channel_id == channel_id,
ChannelMember.user_id == user_id,
)
.first()
)
return membership is not None
def get_channel_by_id(self, id: str) -> Optional[ChannelModel]:
with get_db() as db:
@@ -122,8 +651,12 @@ class ChannelTable:
return None
channel.name = form_data.name
channel.description = form_data.description
channel.is_private = form_data.is_private
channel.data = form_data.data
channel.meta = form_data.meta
channel.access_control = form_data.access_control
channel.updated_at = int(time.time_ns())
View file
@@ -11,7 +11,18 @@ from open_webui.models.files import FileMetadataResponse
from pydantic import BaseModel, ConfigDict
from sqlalchemy import (
BigInteger,
Column,
String,
Text,
JSON,
and_,
func,
ForeignKey,
cast,
or_,
)
log = logging.getLogger(__name__)
@@ -41,7 +52,6 @@ class Group(Base):
class GroupModel(BaseModel):
id: str
user_id: str
@@ -56,6 +66,8 @@ class GroupModel(BaseModel):
created_at: int # timestamp in epoch
updated_at: int # timestamp in epoch
model_config = ConfigDict(from_attributes=True)
class GroupMember(Base):
__tablename__ = "group_member"
@@ -84,17 +96,8 @@ class GroupMemberModel(BaseModel):
####################
class GroupResponse(GroupModel):
member_count: Optional[int] = None
class GroupForm(BaseModel):
@@ -112,6 +115,11 @@ class GroupUpdateForm(GroupForm):
pass
class GroupListResponse(BaseModel):
items: list[GroupResponse] = []
total: int = 0
class GroupTable:
def insert_new_group(
self, user_id: str, form_data: GroupForm
@@ -140,13 +148,87 @@ class GroupTable:
except Exception:
return None
def get_all_groups(self) -> list[GroupModel]:
with get_db() as db:
groups = db.query(Group).order_by(Group.updated_at.desc()).all()
return [GroupModel.model_validate(group) for group in groups]
def get_groups(self, filter) -> list[GroupResponse]:
with get_db() as db:
query = db.query(Group)
if filter:
if "query" in filter:
query = query.filter(Group.name.ilike(f"%{filter['query']}%"))
if "member_id" in filter:
query = query.join(
GroupMember, GroupMember.group_id == Group.id
).filter(GroupMember.user_id == filter["member_id"])
if "share" in filter:
share_value = filter["share"]
json_share = Group.data["config"]["share"].as_boolean()
if share_value:
query = query.filter(
or_(
Group.data.is_(None),
json_share.is_(None),
json_share == True,
)
)
else:
query = query.filter(
and_(Group.data.isnot(None), json_share == False)
)
groups = query.order_by(Group.updated_at.desc()).all()
return [
GroupResponse.model_validate(
{
**GroupModel.model_validate(group).model_dump(),
"member_count": self.get_group_member_count_by_id(group.id),
}
)
for group in groups
]
def search_groups(
self, filter: Optional[dict] = None, skip: int = 0, limit: int = 30
) -> GroupListResponse:
with get_db() as db:
query = db.query(Group)
if filter:
if "query" in filter:
query = query.filter(Group.name.ilike(f"%{filter['query']}%"))
if "member_id" in filter:
query = query.join(
GroupMember, GroupMember.group_id == Group.id
).filter(GroupMember.user_id == filter["member_id"])
if "share" in filter:
# 'share' is stored in data JSON, support both sqlite and postgres
share_value = filter["share"]
log.debug(f"Filtering groups by share: {share_value}")
query = query.filter(
Group.data.op("->>")("share") == str(share_value).lower()
)
total = query.count()
query = query.order_by(Group.updated_at.desc())
groups = query.offset(skip).limit(limit).all()
return {
"items": [
GroupResponse.model_validate(
{
**GroupModel.model_validate(group).model_dump(),
"member_count": self.get_group_member_count_by_id(group.id),
}
)
for group in groups
],
"total": total,
}
def get_groups_by_member_id(self, user_id: str) -> list[GroupModel]:
with get_db() as db:
return [
@@ -177,6 +259,23 @@ class GroupTable:
return [m[0] for m in members]
def get_group_user_ids_by_ids(self, group_ids: list[str]) -> dict[str, list[str]]:
with get_db() as db:
members = (
db.query(GroupMember.group_id, GroupMember.user_id)
.filter(GroupMember.group_id.in_(group_ids))
.all()
)
group_user_ids: dict[str, list[str]] = {
group_id: [] for group_id in group_ids
}
for group_id, user_id in members:
group_user_ids[group_id].append(user_id)
return group_user_ids
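The pre-seeded dict above guarantees every requested group id is a key even when it has no members; isolated from the ORM, the grouping step is just:

```python
rows = [("g1", "u1"), ("g1", "u2"), ("g2", "u3")]  # (group_id, user_id) pairs
group_ids = ["g1", "g2", "g3"]

group_user_ids = {gid: [] for gid in group_ids}  # seed so empty groups appear
for gid, uid in rows:
    group_user_ids[gid].append(uid)
# "g3" had no member rows but still gets an (empty) entry
```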
def set_group_user_ids_by_id(self, group_id: str, user_ids: list[str]) -> None:
with get_db() as db:
# Delete existing members
@@ -276,7 +375,7 @@ class GroupTable:
) -> list[GroupModel]:
# check for existing groups
existing_groups = self.get_all_groups()
existing_group_names = {group.name for group in existing_groups}
new_groups = []
View file
@@ -7,13 +7,21 @@ import uuid
from open_webui.internal.db import Base, get_db
from open_webui.env import SRC_LOG_LEVELS
from open_webui.models.files import File, FileModel, FileMetadataResponse
from open_webui.models.groups import Groups
from open_webui.models.users import Users, UserResponse
from pydantic import BaseModel, ConfigDict
from sqlalchemy import (
BigInteger,
Column,
ForeignKey,
String,
Text,
JSON,
UniqueConstraint,
)
from open_webui.utils.access_control import has_access
@@ -34,9 +42,7 @@ class Knowledge(Base):
name = Column(Text)
description = Column(Text)
meta = Column(JSON, nullable=True)
access_control = Column(JSON, nullable=True) # Controls data access levels.
# Defines access control rules for this entry.
# - `None`: Public access, available to all users with the "user" role.
@@ -67,7 +73,6 @@ class KnowledgeModel(BaseModel):
name: str
description: str
meta: Optional[dict] = None
access_control: Optional[dict] = None
@@ -76,11 +81,42 @@ class KnowledgeModel(BaseModel):
updated_at: int # timestamp in epoch
class KnowledgeFile(Base):
__tablename__ = "knowledge_file"
id = Column(Text, unique=True, primary_key=True)
knowledge_id = Column(
Text, ForeignKey("knowledge.id", ondelete="CASCADE"), nullable=False
)
file_id = Column(Text, ForeignKey("file.id", ondelete="CASCADE"), nullable=False)
user_id = Column(Text, nullable=False)
created_at = Column(BigInteger, nullable=False)
updated_at = Column(BigInteger, nullable=False)
__table_args__ = (
UniqueConstraint(
"knowledge_id", "file_id", name="uq_knowledge_file_knowledge_file"
),
)
class KnowledgeFileModel(BaseModel):
id: str
knowledge_id: str
file_id: str
user_id: str
created_at: int # timestamp in epoch
updated_at: int # timestamp in epoch
model_config = ConfigDict(from_attributes=True)
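The composite `UniqueConstraint` above makes a second link of the same file to the same knowledge base a database error rather than a silent duplicate row; a quick sqlite3 sketch of that behaviour (table DDL simplified for the demo):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    """CREATE TABLE knowledge_file (
        id TEXT PRIMARY KEY,
        knowledge_id TEXT NOT NULL,
        file_id TEXT NOT NULL,
        UNIQUE (knowledge_id, file_id)
    )"""
)
db.execute("INSERT INTO knowledge_file VALUES ('1', 'k1', 'f1')")
try:
    db.execute("INSERT INTO knowledge_file VALUES ('2', 'k1', 'f1')")
    duplicate_rejected = False
except sqlite3.IntegrityError:  # (knowledge_id, file_id) already linked
    duplicate_rejected = True
```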
####################
# Forms
####################
class KnowledgeUserModel(KnowledgeModel):
user: Optional[UserResponse] = None
@@ -96,7 +132,6 @@ class KnowledgeUserResponse(KnowledgeUserModel):
class KnowledgeForm(BaseModel):
name: str
description: str
access_control: Optional[dict] = None
@@ -182,6 +217,100 @@ class KnowledgeTable:
except Exception:
return None
def get_knowledges_by_file_id(self, file_id: str) -> list[KnowledgeModel]:
try:
with get_db() as db:
knowledges = (
db.query(Knowledge)
.join(KnowledgeFile, Knowledge.id == KnowledgeFile.knowledge_id)
.filter(KnowledgeFile.file_id == file_id)
.all()
)
return [
KnowledgeModel.model_validate(knowledge) for knowledge in knowledges
]
except Exception:
return []
def get_files_by_id(self, knowledge_id: str) -> list[FileModel]:
try:
with get_db() as db:
files = (
db.query(File)
.join(KnowledgeFile, File.id == KnowledgeFile.file_id)
.filter(KnowledgeFile.knowledge_id == knowledge_id)
.all()
)
return [FileModel.model_validate(file) for file in files]
except Exception:
return []
def get_file_metadatas_by_id(self, knowledge_id: str) -> list[FileMetadataResponse]:
try:
# No DB session needed here; get_files_by_id opens its own
files = self.get_files_by_id(knowledge_id)
return [FileMetadataResponse(**file.model_dump()) for file in files]
except Exception:
return []
def add_file_to_knowledge_by_id(
self, knowledge_id: str, file_id: str, user_id: str
) -> Optional[KnowledgeFileModel]:
with get_db() as db:
knowledge_file = KnowledgeFileModel(
**{
"id": str(uuid.uuid4()),
"knowledge_id": knowledge_id,
"file_id": file_id,
"user_id": user_id,
"created_at": int(time.time()),
"updated_at": int(time.time()),
}
)
try:
result = KnowledgeFile(**knowledge_file.model_dump())
db.add(result)
db.commit()
db.refresh(result)
if result:
return KnowledgeFileModel.model_validate(result)
else:
return None
except Exception:
return None
def remove_file_from_knowledge_by_id(self, knowledge_id: str, file_id: str) -> bool:
try:
with get_db() as db:
db.query(KnowledgeFile).filter_by(
knowledge_id=knowledge_id, file_id=file_id
).delete()
db.commit()
return True
except Exception:
return False
def reset_knowledge_by_id(self, id: str) -> Optional[KnowledgeModel]:
try:
with get_db() as db:
# Delete all knowledge_file entries for this knowledge_id
db.query(KnowledgeFile).filter_by(knowledge_id=id).delete()
db.commit()
# Update the knowledge entry's updated_at timestamp
db.query(Knowledge).filter_by(id=id).update(
{
"updated_at": int(time.time()),
}
)
db.commit()
return self.get_knowledge_by_id(id=id)
except Exception as e:
log.exception(e)
return None
def update_knowledge_by_id(
self, id: str, form_data: KnowledgeForm, overwrite: bool = False
) -> Optional[KnowledgeModel]:
View file
@@ -5,7 +5,8 @@ from typing import Optional
from open_webui.internal.db import Base, get_db
from open_webui.models.tags import TagModel, Tag, Tags
from open_webui.models.users import Users, User, UserNameResponse
from open_webui.models.channels import Channels, ChannelMember
from pydantic import BaseModel, ConfigDict
@@ -39,7 +40,7 @@ class MessageReactionModel(BaseModel):
class Message(Base):
__tablename__ = "message"
id = Column(Text, primary_key=True, unique=True)
user_id = Column(Text)
channel_id = Column(Text, nullable=True)
@@ -47,6 +48,11 @@ class Message(Base):
reply_to_id = Column(Text, nullable=True)
parent_id = Column(Text, nullable=True)
# Pins
is_pinned = Column(Boolean, nullable=False, default=False)
pinned_at = Column(BigInteger, nullable=True)
pinned_by = Column(Text, nullable=True)
content = Column(Text)
data = Column(JSON, nullable=True)
meta = Column(JSON, nullable=True)
@@ -65,12 +71,17 @@ class MessageModel(BaseModel):
reply_to_id: Optional[str] = None
parent_id: Optional[str] = None
# Pins
is_pinned: bool = False
pinned_by: Optional[str] = None
pinned_at: Optional[int] = None # timestamp in epoch (time_ns)
content: str
data: Optional[dict] = None
meta: Optional[dict] = None
created_at: int # timestamp in epoch (time_ns)
updated_at: int # timestamp in epoch (time_ns)
####################
@@ -79,6 +90,7 @@ class MessageModel(BaseModel):
class MessageForm(BaseModel):
temp_id: Optional[str] = None
content: str
reply_to_id: Optional[str] = None
parent_id: Optional[str] = None
@@ -88,7 +100,7 @@ class MessageForm(BaseModel):
class Reactions(BaseModel):
name: str
users: list[dict]
count: int
@@ -100,6 +112,10 @@ class MessageReplyToResponse(MessageUserResponse):
reply_to_message: Optional[MessageUserResponse] = None
class MessageWithReactionsResponse(MessageUserResponse):
reactions: list[Reactions]
class MessageResponse(MessageReplyToResponse):
latest_reply_at: Optional[int]
reply_count: int
@@ -111,9 +127,11 @@ class MessageTable:
self, form_data: MessageForm, channel_id: str, user_id: str
) -> Optional[MessageModel]:
with get_db() as db:
channel_member = Channels.join_channel(channel_id, user_id)
id = str(uuid.uuid4())
ts = int(time.time_ns())
message = MessageModel(
**{
"id": id,
@@ -121,6 +139,9 @@ class MessageTable:
"channel_id": channel_id,
"reply_to_id": form_data.reply_to_id,
"parent_id": form_data.parent_id,
"is_pinned": False,
"pinned_at": None,
"pinned_by": None,
"content": form_data.content,
"data": form_data.data,
"meta": form_data.meta,
@@ -128,8 +149,8 @@ class MessageTable:
"updated_at": ts,
}
)
result = Message(**message.model_dump())
db.add(result)
db.commit()
db.refresh(result)
@@ -280,6 +301,30 @@ class MessageTable:
)
return messages
def get_last_message_by_channel_id(self, channel_id: str) -> Optional[MessageModel]:
with get_db() as db:
message = (
db.query(Message)
.filter_by(channel_id=channel_id)
.order_by(Message.created_at.desc())
.first()
)
return MessageModel.model_validate(message) if message else None
def get_pinned_messages_by_channel_id(
self, channel_id: str, skip: int = 0, limit: int = 50
) -> list[MessageModel]:
with get_db() as db:
all_messages = (
db.query(Message)
.filter_by(channel_id=channel_id, is_pinned=True)
.order_by(Message.pinned_at.desc())
.offset(skip)
.limit(limit)
.all()
)
return [MessageModel.model_validate(message) for message in all_messages]
def update_message_by_id(
self, id: str, form_data: MessageForm
) -> Optional[MessageModel]:
@@ -299,10 +344,44 @@ class MessageTable:
db.refresh(message)
return MessageModel.model_validate(message) if message else None
def update_is_pinned_by_id(
self, id: str, is_pinned: bool, pinned_by: Optional[str] = None
) -> Optional[MessageModel]:
with get_db() as db:
message = db.get(Message, id)
if not message:
return None
message.is_pinned = is_pinned
message.pinned_at = int(time.time_ns()) if is_pinned else None
message.pinned_by = pinned_by if is_pinned else None
db.commit()
db.refresh(message)
return MessageModel.model_validate(message) if message else None
def get_unread_message_count(
self, channel_id: str, user_id: str, last_read_at: Optional[int] = None
) -> int:
with get_db() as db:
query = db.query(Message).filter(
Message.channel_id == channel_id,
Message.parent_id.is_(None), # only count top-level messages
Message.created_at > (last_read_at if last_read_at else 0),
)
if user_id:
query = query.filter(Message.user_id != user_id)
return query.count()
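The count above deliberately excludes thread replies (`parent_id` set) and the reader's own messages; the same rule in plain Python over hypothetical message dicts:

```python
def unread_count(messages, user_id, last_read_at=None):
    cutoff = last_read_at or 0
    return sum(
        1
        for m in messages
        if m["parent_id"] is None     # top-level messages only
        and m["created_at"] > cutoff  # newer than the read marker
        and m["user_id"] != user_id   # own messages are never unread
    )

msgs = [
    {"parent_id": None, "created_at": 10, "user_id": "alice"},
    {"parent_id": None, "created_at": 20, "user_id": "bob"},
    {"parent_id": "m1", "created_at": 30, "user_id": "bob"},  # thread reply
]
assert unread_count(msgs, "alice", last_read_at=15) == 1
```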
def add_reaction_to_message(
self, id: str, user_id: str, name: str
) -> Optional[MessageReactionModel]:
with get_db() as db:
# check for existing reaction
existing_reaction = (
db.query(MessageReaction)
.filter_by(message_id=id, user_id=user_id, name=name)
.first()
)
if existing_reaction:
return MessageReactionModel.model_validate(existing_reaction)
reaction_id = str(uuid.uuid4())
reaction = MessageReactionModel(
id=reaction_id,
@@ -319,17 +398,30 @@ class MessageTable:
def get_reactions_by_message_id(self, id: str) -> list[Reactions]:
with get_db() as db:
# JOIN User so all user info is fetched in one query
results = (
db.query(MessageReaction, User)
.join(User, MessageReaction.user_id == User.id)
.filter(MessageReaction.message_id == id)
.all()
)
reactions = {}
for reaction, user in results:
if reaction.name not in reactions:
reactions[reaction.name] = {
"name": reaction.name,
"users": [],
"count": 0,
}
reactions[reaction.name]["users"].append(
{
"id": user.id,
"name": user.name,
}
)
reactions[reaction.name]["count"] += 1
return [Reactions(**reaction) for reaction in reactions.values()]
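The aggregation loop above folds one row per (reaction, user) pair into one entry per emoji name; isolated from the ORM it behaves like this:

```python
rows = [
    ("tada", {"id": "u1", "name": "Ada"}),
    ("tada", {"id": "u2", "name": "Bo"}),
    ("eyes", {"id": "u1", "name": "Ada"}),
]

reactions = {}
for name, user in rows:
    entry = reactions.setdefault(name, {"name": name, "users": [], "count": 0})
    entry["users"].append(user)  # who reacted, with display name
    entry["count"] += 1
# "tada" aggregates two users; "eyes" aggregates one
```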
View file
@@ -13,6 +13,8 @@ from pydantic import BaseModel, ConfigDict
from sqlalchemy import String, cast, or_, and_, func
from sqlalchemy.dialects import postgresql, sqlite
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy import BigInteger, Column, Text, JSON, Boolean
@@ -53,7 +55,7 @@ class ModelMeta(BaseModel):
class Model(Base):
__tablename__ = "model"
id = Column(Text, primary_key=True, unique=True)
"""
The model's id as used in the API. If set to an existing model, it will override the model.
"""
@@ -220,6 +222,48 @@ class ModelsTable:
or has_access(user_id, permission, model.access_control, user_group_ids)
]
def _has_permission(self, db, query, filter: dict, permission: str = "read"):
group_ids = filter.get("group_ids", [])
user_id = filter.get("user_id")
dialect_name = db.bind.dialect.name
# Public access
conditions = []
if group_ids or user_id:
conditions.extend(
[
Model.access_control.is_(None),
cast(Model.access_control, String) == "null",
]
)
# User-level permission
if user_id:
conditions.append(Model.user_id == user_id)
# Group-level permission
if group_ids:
group_conditions = []
for gid in group_ids:
if dialect_name == "sqlite":
group_conditions.append(
Model.access_control[permission]["group_ids"].contains([gid])
)
elif dialect_name == "postgresql":
group_conditions.append(
cast(
Model.access_control[permission]["group_ids"],
JSONB,
).contains([gid])
)
conditions.append(or_(*group_conditions))
if conditions:
query = query.filter(or_(*conditions))
return query
def search_models(
self, user_id: str, filter: dict = {}, skip: int = 0, limit: int = 30
) -> ModelListResponse:
@@ -238,16 +282,20 @@ class ModelsTable:
)
)
if filter.get("user_id"):
query = query.filter(Model.user_id == filter.get("user_id"))
view_option = filter.get("view_option")
if view_option == "created":
query = query.filter(Model.user_id == user_id)
elif view_option == "shared":
query = query.filter(Model.user_id != user_id)
# Apply access control filtering
query = self._has_permission(
db,
query,
filter,
permission="write",
)
tag = filter.get("tag") tag = filter.get("tag")
if tag: if tag:
# TODO: This is a simple implementation and should be improved for performance # TODO: This is a simple implementation and should be improved for performance
@@ -290,10 +338,15 @@ class ModelsTable:
            models = []
            for model, user in items:
                models.append(
                    ModelUserResponse(
                        **ModelModel.model_validate(model).model_dump(),
                        user=(
                            UserResponse(**UserModel.model_validate(user).model_dump())
                            if user
                            else None
                        ),
                    )
                )

            return ModelListResponse(items=models, total=total)


@@ -23,7 +23,7 @@ from sqlalchemy.sql import exists
class Note(Base):
    __tablename__ = "note"

    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text)
    title = Column(Text)


@@ -25,7 +25,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class OAuthSession(Base):
    __tablename__ = "oauth_session"

    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text, nullable=False)
    provider = Column(Text, nullable=False)
    token = Column(


@@ -24,7 +24,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Tool(Base):
    __tablename__ = "tool"

    id = Column(String, primary_key=True, unique=True)
    user_id = Column(String)
    name = Column(Text)
    content = Column(Text)


@@ -7,12 +7,27 @@ from open_webui.internal.db import Base, JSONField, get_db
from open_webui.env import DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL
from open_webui.models.chats import Chats
from open_webui.models.groups import Groups, GroupMember
from open_webui.models.channels import ChannelMember

from open_webui.utils.misc import throttle

from pydantic import BaseModel, ConfigDict
from sqlalchemy import (
    BigInteger,
    JSON,
    Column,
    String,
    Boolean,
    Text,
    Date,
    exists,
    select,
    cast,
)
from sqlalchemy import or_, case
from sqlalchemy.dialects.postgresql import JSONB

import datetime
@@ -21,59 +36,71 @@ import datetime
####################


class UserSettings(BaseModel):
    ui: Optional[dict] = {}
    model_config = ConfigDict(extra="allow")
    pass


class User(Base):
    __tablename__ = "user"

    id = Column(String, primary_key=True, unique=True)
    email = Column(String)
    username = Column(String(50), nullable=True)
    role = Column(String)

    name = Column(String)
    profile_image_url = Column(Text)
    profile_banner_image_url = Column(Text, nullable=True)

    bio = Column(Text, nullable=True)
    gender = Column(Text, nullable=True)
    date_of_birth = Column(Date, nullable=True)
    timezone = Column(String, nullable=True)

    presence_state = Column(String, nullable=True)
    status_emoji = Column(String, nullable=True)
    status_message = Column(Text, nullable=True)
    status_expires_at = Column(BigInteger, nullable=True)

    info = Column(JSON, nullable=True)
    settings = Column(JSON, nullable=True)
    oauth = Column(JSON, nullable=True)

    last_active_at = Column(BigInteger)
    updated_at = Column(BigInteger)
    created_at = Column(BigInteger)
class UserModel(BaseModel):
    id: str
    email: str
    username: Optional[str] = None
    role: str = "pending"

    name: str
    profile_image_url: str
    profile_banner_image_url: Optional[str] = None

    bio: Optional[str] = None
    gender: Optional[str] = None
    date_of_birth: Optional[datetime.date] = None
    timezone: Optional[str] = None

    presence_state: Optional[str] = None
    status_emoji: Optional[str] = None
    status_message: Optional[str] = None
    status_expires_at: Optional[int] = None

    info: Optional[dict] = None
    settings: Optional[UserSettings] = None
    oauth: Optional[dict] = None

    last_active_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch

@@ -82,6 +109,38 @@ class UserModel(BaseModel):
    model_config = ConfigDict(from_attributes=True)
class UserStatusModel(UserModel):
    is_active: bool = False
    model_config = ConfigDict(from_attributes=True)


class ApiKey(Base):
    __tablename__ = "api_key"

    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text, nullable=False)
    key = Column(Text, unique=True, nullable=False)
    data = Column(JSON, nullable=True)

    expires_at = Column(BigInteger, nullable=True)
    last_used_at = Column(BigInteger, nullable=True)

    created_at = Column(BigInteger, nullable=False)
    updated_at = Column(BigInteger, nullable=False)


class ApiKeyModel(BaseModel):
    id: str
    user_id: str
    key: str
    data: Optional[dict] = None

    expires_at: Optional[int] = None
    last_used_at: Optional[int] = None

    created_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch

    model_config = ConfigDict(from_attributes=True)
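The new `api_key` table stores `expires_at` as an epoch timestamp, with `None` meaning the key never expires. A hypothetical validity check along those lines (the helper name and signature are illustrative, not part of the diff):

```python
import time


def is_api_key_valid(expires_at, now=None):
    """An API key with expires_at=None never expires; otherwise compare epoch seconds."""
    now = int(time.time()) if now is None else now
    return expires_at is None or now < expires_at


print(is_api_key_valid(None, now=1000))  # True: no expiry set
print(is_api_key_valid(999, now=1000))   # False: already expired
print(is_api_key_valid(1001, now=1000))  # True: still valid
```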
####################
# Forms
####################

@@ -99,12 +158,27 @@ class UserGroupIdsModel(UserModel):
    group_ids: list[str] = []


class UserModelResponse(UserModel):
    model_config = ConfigDict(extra="allow")


class UserListResponse(BaseModel):
    users: list[UserModelResponse]
    total: int


class UserGroupIdsListResponse(BaseModel):
    users: list[UserGroupIdsModel]
    total: int


class UserStatus(BaseModel):
    status_emoji: Optional[str] = None
    status_message: Optional[str] = None
    status_expires_at: Optional[int] = None


class UserInfoResponse(UserStatus):
    id: str
    name: str
    email: str

@@ -116,6 +190,12 @@ class UserIdNameResponse(BaseModel):
    name: str


class UserIdNameStatusResponse(UserStatus):
    id: str
    name: str
    is_active: Optional[bool] = None


class UserInfoListResponse(BaseModel):
    users: list[UserInfoResponse]
    total: int
@@ -126,18 +206,18 @@ class UserIdNameListResponse(BaseModel):
    total: int


class UserNameResponse(BaseModel):
    id: str
    name: str
    role: str


class UserResponse(UserNameResponse):
    email: str


class UserProfileImageResponse(UserNameResponse):
    email: str
    profile_image_url: str
@@ -162,20 +242,20 @@ class UsersTable:
        email: str,
        profile_image_url: str = "/user.png",
        role: str = "pending",
        oauth: Optional[dict] = None,
    ) -> Optional[UserModel]:
        with get_db() as db:
            user = UserModel(
                **{
                    "id": id,
                    "email": email,
                    "name": name,
                    "role": role,
                    "profile_image_url": profile_image_url,
                    "last_active_at": int(time.time()),
                    "created_at": int(time.time()),
                    "updated_at": int(time.time()),
                    "oauth": oauth,
                }
            )
            result = User(**user.model_dump())
@@ -198,8 +278,13 @@ class UsersTable:
    def get_user_by_api_key(self, api_key: str) -> Optional[UserModel]:
        try:
            with get_db() as db:
                user = (
                    db.query(User)
                    .join(ApiKey, User.id == ApiKey.user_id)
                    .filter(ApiKey.key == api_key)
                    .first()
                )
                return UserModel.model_validate(user) if user else None
        except Exception:
            return None
@@ -211,12 +296,23 @@ class UsersTable:
        except Exception:
            return None

    def get_user_by_oauth_sub(self, provider: str, sub: str) -> Optional[UserModel]:
        try:
            with get_db() as db:  # type: Session
                dialect_name = db.bind.dialect.name

                query = db.query(User)
                if dialect_name == "sqlite":
                    query = query.filter(User.oauth.contains({provider: {"sub": sub}}))
                elif dialect_name == "postgresql":
                    query = query.filter(
                        User.oauth[provider].cast(JSONB)["sub"].astext == sub
                    )

                user = query.first()
                return UserModel.model_validate(user) if user else None
        except Exception:
            return None
    def get_users(
@@ -227,9 +323,7 @@ class UsersTable:
    ) -> dict:
        with get_db() as db:
            query = db.query(User)

            if filter:
                query_key = filter.get("query")

@@ -241,23 +335,76 @@ class UsersTable:
                    )
                )
                channel_id = filter.get("channel_id")
                if channel_id:
                    query = query.filter(
                        exists(
                            select(ChannelMember.id).where(
                                ChannelMember.user_id == User.id,
                                ChannelMember.channel_id == channel_id,
                            )
                        )
                    )

                user_ids = filter.get("user_ids")
                group_ids = filter.get("group_ids")

                if isinstance(user_ids, list) and isinstance(group_ids, list):
                    # If both are empty lists, return no users
                    if not user_ids and not group_ids:
                        return {"users": [], "total": 0}

                if user_ids:
                    query = query.filter(User.id.in_(user_ids))

                if group_ids:
                    query = query.filter(
                        exists(
                            select(GroupMember.id).where(
                                GroupMember.user_id == User.id,
                                GroupMember.group_id.in_(group_ids),
                            )
                        )
                    )

                roles = filter.get("roles")
                if roles:
                    include_roles = [role for role in roles if not role.startswith("!")]
                    exclude_roles = [role[1:] for role in roles if role.startswith("!")]

                    if include_roles:
                        query = query.filter(User.role.in_(include_roles))
                    if exclude_roles:
                        query = query.filter(~User.role.in_(exclude_roles))
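The `!` prefix convention for the `roles` filter (plain entries include a role, `!`-prefixed entries exclude it) can be isolated as a small helper. This sketches only the parsing step, not the ORM filter; the helper name is illustrative:

```python
def split_role_filter(roles):
    """Split a role filter list into include and "!"-prefixed exclude lists."""
    include = [r for r in roles if not r.startswith("!")]
    exclude = [r[1:] for r in roles if r.startswith("!")]
    return include, exclude


include, exclude = split_role_filter(["admin", "!pending", "user"])
print(include)  # ['admin', 'user']
print(exclude)  # ['pending']
```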
                order_by = filter.get("order_by")
                direction = filter.get("direction")

                if order_by and order_by.startswith("group_id:"):
                    group_id = order_by.split(":", 1)[1]

                    # Subquery that checks if the user belongs to the group
                    membership_exists = exists(
                        select(GroupMember.id).where(
                            GroupMember.user_id == User.id,
                            GroupMember.group_id == group_id,
                        )
                    )

                    # CASE: user in group → 1, user not in group → 0
                    group_sort = case((membership_exists, 1), else_=0)

                    if direction == "asc":
                        query = query.order_by(group_sort.asc(), User.name.asc())
                    else:
                        query = query.order_by(group_sort.desc(), User.name.asc())
                elif order_by == "name":
                    if direction == "asc":
                        query = query.order_by(User.name.asc())
                    else:
                        query = query.order_by(User.name.desc())
                elif order_by == "email":
                    if direction == "asc":
                        query = query.order_by(User.email.asc())
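The `CASE`-based ordering sorts group members ahead of non-members (for the default descending direction) while keeping names ascending as a tiebreaker. A plain-Python analogue of that sort key, with no SQL involved (function and field names are illustrative):

```python
def order_users(users, group_members, direction="desc"):
    """Mimic ORDER BY membership CASE then name: "desc" puts group members first."""

    def key(u):
        in_group = 1 if u["id"] in group_members else 0
        # Membership drives the primary sort; name is always ascending.
        return (-in_group if direction == "desc" else in_group, u["name"])

    return sorted(users, key=key)


users = [
    {"id": "u1", "name": "zoe"},
    {"id": "u2", "name": "amy"},
    {"id": "u3", "name": "bob"},
]
ordered = order_users(users, group_members={"u1", "u3"})
print([u["id"] for u in ordered])  # members first, by name: u3 (bob), u1 (zoe), then u2
```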
@@ -293,9 +440,10 @@ class UsersTable:
            # Count BEFORE pagination
            total = query.count()

            # Explicit None checks so skip=0 and limit=0 are still applied
            if skip is not None:
                query = query.offset(skip)
            if limit is not None:
                query = query.limit(limit)

            users = query.all()

@@ -304,7 +452,17 @@ class UsersTable:
                "total": total,
            }
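The pagination fix matters because the previous `if skip:` treated `skip=0` (and `limit=0`) as falsy and silently dropped them. A sketch of the corrected semantics using list slicing in place of `offset`/`limit`:

```python
def paginate(items, skip=None, limit=None):
    """`is not None` checks keep skip=0 and limit=0 meaningful, unlike truthiness."""
    if skip is not None:
        items = items[skip:]
    if limit is not None:
        items = items[:limit]
    return items


print(paginate(list(range(5)), skip=0, limit=2))  # [0, 1]
print(paginate(list(range(5)), skip=3))           # [3, 4]
print(paginate(list(range(5)), limit=0))          # [] -- truthiness would return all 5
```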
    def get_users_by_group_id(self, group_id: str) -> list[UserModel]:
        with get_db() as db:
            users = (
                db.query(User)
                .join(GroupMember, User.id == GroupMember.user_id)
                .filter(GroupMember.group_id == group_id)
                .all()
            )
            return [UserModel.model_validate(user) for user in users]

    def get_users_by_user_ids(self, user_ids: list[str]) -> list[UserStatusModel]:
        with get_db() as db:
            users = db.query(User).filter(User.id.in_(user_ids)).all()
            return [UserModel.model_validate(user) for user in users]
@@ -360,6 +518,21 @@ class UsersTable:
        except Exception:
            return None

    def update_user_status_by_id(
        self, id: str, form_data: UserStatus
    ) -> Optional[UserModel]:
        try:
            with get_db() as db:
                db.query(User).filter_by(id=id).update(
                    {**form_data.model_dump(exclude_none=True)}
                )
                db.commit()

                user = db.query(User).filter_by(id=id).first()
                return UserModel.model_validate(user)
        except Exception:
            return None

    def update_user_profile_image_url_by_id(
        self, id: str, profile_image_url: str
    ) -> Optional[UserModel]:
@@ -376,7 +549,7 @@ class UsersTable:
            return None

    @throttle(DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL)
    def update_last_active_by_id(self, id: str) -> Optional[UserModel]:
        try:
            with get_db() as db:
                db.query(User).filter_by(id=id).update(
@@ -389,16 +562,35 @@ class UsersTable:
        except Exception:
            return None
    def update_user_oauth_by_id(
        self, id: str, provider: str, sub: str
    ) -> Optional[UserModel]:
        """
        Update or insert an OAuth provider/sub pair into the user's oauth JSON field.

        Example resulting structure:
        {
            "google": { "sub": "123" },
            "github": { "sub": "abc" }
        }
        """
        try:
            with get_db() as db:
                user = db.query(User).filter_by(id=id).first()
                if not user:
                    return None

                # Load existing oauth JSON or create empty
                oauth = user.oauth or {}

                # Update or insert provider entry
                oauth[provider] = {"sub": sub}

                # Persist updated JSON
                db.query(User).filter_by(id=id).update({"oauth": oauth})
                db.commit()

                user = db.query(User).filter_by(id=id).first()
                return UserModel.model_validate(user)
        except Exception:
            return None
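Stripped of the ORM plumbing, the merge `update_user_oauth_by_id` performs is a plain dict upsert on the `oauth` JSON field, producing the structure shown in its docstring. A standalone sketch (the helper name is illustrative):

```python
def upsert_oauth(oauth, provider, sub):
    """Merge a provider/sub pair into a user's oauth JSON mapping."""
    oauth = dict(oauth or {})  # tolerate None and avoid mutating the caller's dict
    oauth[provider] = {"sub": sub}
    return oauth


oauth = upsert_oauth(None, "google", "123")
oauth = upsert_oauth(oauth, "github", "abc")
print(oauth)  # {'google': {'sub': '123'}, 'github': {'sub': 'abc'}}
```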
@@ -452,23 +644,45 @@ class UsersTable:
        except Exception:
            return False

    def get_user_api_key_by_id(self, id: str) -> Optional[str]:
        try:
            with get_db() as db:
                api_key = db.query(ApiKey).filter_by(user_id=id).first()
                return api_key.key if api_key else None
        except Exception:
            return None

    def update_user_api_key_by_id(self, id: str, api_key: str) -> bool:
        try:
            with get_db() as db:
                db.query(ApiKey).filter_by(user_id=id).delete()
                db.commit()

                now = int(time.time())
                new_api_key = ApiKey(
                    id=f"key_{id}",
                    user_id=id,
                    key=api_key,
                    created_at=now,
                    updated_at=now,
                )
                db.add(new_api_key)
                db.commit()
                return True
        except Exception:
            return False

    def delete_user_api_key_by_id(self, id: str) -> bool:
        try:
            with get_db() as db:
                db.query(ApiKey).filter_by(user_id=id).delete()
                db.commit()
                return True
        except Exception:
            return False
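The delete-then-insert rotation in `update_user_api_key_by_id` guarantees at most one key per user in the new `api_key` table. A stdlib `sqlite3` sketch of the same pattern, with the schema simplified from the diff's table:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE api_key (id TEXT PRIMARY KEY, user_id TEXT NOT NULL, "
    "key TEXT UNIQUE NOT NULL, created_at INTEGER, updated_at INTEGER)"
)


def rotate_api_key(user_id, key):
    # Delete-then-insert keeps at most one key per user, mirroring the diff's approach.
    now = int(time.time())
    con.execute("DELETE FROM api_key WHERE user_id = ?", (user_id,))
    con.execute(
        "INSERT INTO api_key (id, user_id, key, created_at, updated_at) "
        "VALUES (?, ?, ?, ?, ?)",
        (f"key_{user_id}", user_id, key, now, now),
    )
    con.commit()


rotate_api_key("u1", "sk-old")
rotate_api_key("u1", "sk-new")
rows = con.execute("SELECT key FROM api_key WHERE user_id = 'u1'").fetchall()
print(rows)  # only the most recent key survives
```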
    def get_valid_user_ids(self, user_ids: list[str]) -> list[str]:
        with get_db() as db:
            users = db.query(User).filter(User.id.in_(user_ids)).all()

@@ -482,5 +696,23 @@ class UsersTable:
        else:
            return None

    def get_active_user_count(self) -> int:
        with get_db() as db:
            # Consider user active if last_active_at within the last 3 minutes
            three_minutes_ago = int(time.time()) - 180
            count = (
                db.query(User).filter(User.last_active_at >= three_minutes_ago).count()
            )
            return count

    def is_user_active(self, user_id: str) -> bool:
        with get_db() as db:
            user = db.query(User).filter_by(id=user_id).first()
            if user and user.last_active_at:
                # Consider user active if last_active_at within the last 3 minutes
                three_minutes_ago = int(time.time()) - 180
                return user.last_active_at >= three_minutes_ago
            return False


Users = UsersTable()
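Both activity helpers above share the same 3-minute window rule. Sketched as a standalone predicate (the window constant comes from the diff; the function name is illustrative):

```python
import time

ACTIVE_WINDOW_SECONDS = 180  # 3 minutes, as in the diff


def is_active(last_active_at, now=None):
    """A user counts as active if last_active_at falls within the window."""
    if not last_active_at:
        return False
    now = int(time.time()) if now is None else now
    return last_active_at >= now - ACTIVE_WINDOW_SECONDS


print(is_active(1000, now=1100))  # True: 100s ago
print(is_active(1000, now=1300))  # False: 300s ago
```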


@@ -132,8 +132,9 @@ class TikaLoader:

class DoclingLoader:
    def __init__(self, url, api_key=None, file_path=None, mime_type=None, params=None):
        self.url = url.rstrip("/")
        self.api_key = api_key
        self.file_path = file_path
        self.mime_type = mime_type

@@ -141,6 +142,10 @@ class DoclingLoader:
    def load(self) -> list[Document]:
        with open(self.file_path, "rb") as f:
            headers = {}
            if self.api_key:
                headers["Authorization"] = f"Bearer {self.api_key}"

            files = {
                "files": (
                    self.file_path,
@@ -149,60 +154,15 @@ class DoclingLoader:
                )
            }

            r = requests.post(
                f"{self.url}/v1/convert/file",
                files=files,
                data={
                    "image_export_mode": "placeholder",
                    **self.params,
                },
                headers=headers,
            )
            if r.ok:
                result = r.json()
                document_data = result.get("document", {})
@@ -211,7 +171,6 @@ class DoclingLoader:
                metadata = {"Content-Type": self.mime_type} if self.mime_type else {}

                log.debug("Docling extracted text: %s", text)
                return [Document(page_content=text, metadata=metadata)]
            else:
                error_msg = f"Error calling Docling API: {r.reason}"

@@ -340,6 +299,7 @@ class Loader:
                loader = DoclingLoader(
                    url=self.kwargs.get("DOCLING_SERVER_URL"),
                    api_key=self.kwargs.get("DOCLING_API_KEY", None),
                    file_path=file_path,
                    mime_type=file_content_type,
                    params=params,
@@ -362,12 +322,14 @@ class Loader:
                    file_path=file_path,
                    api_endpoint=self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT"),
                    api_key=self.kwargs.get("DOCUMENT_INTELLIGENCE_KEY"),
                    api_model=self.kwargs.get("DOCUMENT_INTELLIGENCE_MODEL"),
                )
            else:
                loader = AzureAIDocumentIntelligenceLoader(
                    file_path=file_path,
                    api_endpoint=self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT"),
                    azure_credential=DefaultAzureCredential(),
                    api_model=self.kwargs.get("DOCUMENT_INTELLIGENCE_MODEL"),
                )
        elif self.engine == "mineru" and file_ext in [
            "pdf"


@@ -782,6 +782,7 @@ def get_embedding_function(
    key,
    embedding_batch_size,
    azure_api_version=None,
    enable_async=True,
) -> Awaitable:
    if embedding_engine == "":
        # Sentence transformers: CPU-bound sync operation
@@ -816,16 +817,26 @@ def get_embedding_function(
                query[i : i + embedding_batch_size]
                for i in range(0, len(query), embedding_batch_size)
            ]

            if enable_async:
                log.debug(
                    f"generate_multiple_async: Processing {len(batches)} batches in parallel"
                )
                # Execute all batches in parallel
                tasks = [
                    embedding_function(batch, prefix=prefix, user=user)
                    for batch in batches
                ]
                batch_results = await asyncio.gather(*tasks)
            else:
                log.debug(
                    f"generate_multiple_async: Processing {len(batches)} batches sequentially"
                )
                batch_results = []
                for batch in batches:
                    batch_results.append(
                        await embedding_function(batch, prefix=prefix, user=user)
                    )

            # Flatten results
            embeddings = []
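The new `enable_async` flag toggles between firing all embedding batches concurrently with `asyncio.gather` and awaiting them one at a time. A self-contained sketch of that control flow, with a dummy coroutine standing in for the real embedding call:

```python
import asyncio


async def embed_batch(batch):
    # Stand-in for a remote embedding call; returns one "embedding" per text.
    await asyncio.sleep(0)
    return [len(text) for text in batch]


async def embed_all(texts, batch_size, enable_async=True):
    batches = [texts[i : i + batch_size] for i in range(0, len(texts), batch_size)]
    if enable_async:
        # All batches in flight at once.
        results = await asyncio.gather(*(embed_batch(b) for b in batches))
    else:
        # One batch at a time: same order, concurrency limited to 1.
        results = [await embed_batch(b) for b in batches]
    return [e for batch in results for e in batch]  # flatten


out = asyncio.run(embed_all(["a", "bb", "ccc"], batch_size=2, enable_async=False))
print(out)  # [1, 2, 3]
```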
@@ -1077,21 +1088,17 @@ async def get_sources_from_items(
                    or knowledge_base.user_id == user.id
                    or has_access(user.id, "read", knowledge_base.access_control)
                ):
                    files = Knowledges.get_files_by_id(knowledge_base.id)

                    documents = []
                    metadatas = []
                    for file in files:
                        documents.append(file.data.get("content", ""))
                        metadatas.append(
                            {
                                "file_id": file.id,
                                "name": file.filename,
                                "source": file.filename,
                            }
                        )


@@ -200,23 +200,24 @@ class MilvusClient(VectorDBBase):
    def query(self, collection_name: str, filter: dict, limit: int = -1):
        connections.connect(uri=MILVUS_URI, token=MILVUS_TOKEN, db_name=MILVUS_DB)

        collection_name = collection_name.replace("-", "_")
        if not self.has_collection(collection_name):
            log.warning(
                f"Query attempted on non-existent collection: {self.collection_prefix}_{collection_name}"
            )
            return None

        filter_expressions = []
        for key, value in filter.items():
            if isinstance(value, str):
                filter_expressions.append(f'metadata["{key}"] == "{value}"')
            else:
                filter_expressions.append(f'metadata["{key}"] == {value}')

        filter_string = " && ".join(filter_expressions)

        collection = Collection(f"{self.collection_prefix}_{collection_name}")
        collection.load()

        all_results = []
        try:
            log.info(
@@ -224,24 +225,25 @@ class MilvusClient(VectorDBBase):
            )
            iterator = collection.query_iterator(
                expr=filter_string,
                output_fields=[
                    "id",
                    "data",
                    "metadata",
                ],
                limit=limit if limit > 0 else -1,
            )

            while True:
                batch = iterator.next()
                if not batch:
                    iterator.close()
                    break
                all_results.extend(batch)

            log.debug(f"Total results from query: {len(all_results)}")
            return self._result_to_get_result([all_results] if all_results else [[]])
        except Exception as e:
            log.exception(

@@ -157,7 +157,6 @@ class MilvusClient(VectorDBBase):
            for item in items
        ]
        collection.insert(entities)

    def search(
        self, collection_name: str, vectors: List[List[float]], limit: int
@@ -263,15 +262,23 @@ class MilvusClient(VectorDBBase):
            else:
                expr.append(f"metadata['{key}'] == {value}")

        iterator = collection.query_iterator(
            expr=" and ".join(expr),
            output_fields=["id", "text", "metadata"],
            limit=limit if limit else -1,
        )

        all_results = []
        while True:
            batch = iterator.next()
            if not batch:
                iterator.close()
                break
            all_results.extend(batch)

        ids = [res["id"] for res in all_results]
        documents = [res["text"] for res in all_results]
        metadatas = [res["metadata"] for res in all_results]

        return GetResult(ids=[ids], documents=[documents], metadatas=[metadatas])
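Both Milvus changes drain `query_iterator` the same way: call `next()` until an empty batch comes back, then `close()`. A sketch of that drain loop with a fake iterator standing in for the pymilvus object:

```python
class FakeIterator:
    """Minimal stand-in for a Milvus query_iterator: next() yields batches, then []."""

    def __init__(self, batches):
        self._batches = list(batches)
        self.closed = False

    def next(self):
        return self._batches.pop(0) if self._batches else []

    def close(self):
        self.closed = True


def drain(iterator):
    results = []
    while True:
        batch = iterator.next()
        if not batch:  # empty batch signals exhaustion
            iterator.close()
            break
        results.extend(batch)
    return results


it = FakeIterator([[1, 2], [3]])
print(drain(it), it.closed)  # [1, 2, 3] True
```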


@@ -5,7 +5,8 @@ from urllib.parse import urlparse
from pydantic import BaseModel

from open_webui.retrieval.web.utils import resolve_hostname
from open_webui.utils.misc import is_string_allowed


def get_filtered_results(results, filter_list):
@@ -32,7 +33,7 @@ def get_filtered_results(results, filter_list):
        except Exception:
            pass

        if is_string_allowed(hostnames, filter_list):
            filtered_results.append(result)
            continue
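The filter semantics behind `is_string_allowed` (plain entries allow a suffix, `!`-prefixed entries block one) are easiest to see standalone. This sketch is based on the implementation removed further down in this diff; the relocated version in `open_webui.utils.misc` may differ, e.g. the call site above appears to pass a list of hostnames rather than a single string:

```python
def is_string_allowed(string, filter_list=None):
    """Suffix-based allow/block filter: "example.com" allows, "!ads.example.com" blocks."""
    if not filter_list:
        return True
    allow = [d for d in filter_list if not d.startswith("!")]
    block = [d[1:] for d in filter_list if d.startswith("!")]
    # A non-empty allow list requires a match; block entries always win.
    if allow and not any(string.endswith(a) for a in allow):
        return False
    return not any(string.endswith(b) for b in block)


print(is_string_allowed("docs.example.com", ["example.com", "!ads.example.com"]))  # True
print(is_string_allowed("ads.example.com", ["example.com", "!ads.example.com"]))   # False
```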


@@ -42,7 +42,7 @@ from open_webui.config import (
    WEB_FETCH_FILTER_LIST,
)
from open_webui.env import SRC_LOG_LEVELS
from open_webui.utils.misc import is_string_allowed

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])

@@ -59,39 +59,6 @@ def resolve_hostname(hostname):
    return ipv4_addresses, ipv6_addresses


def validate_url(url: Union[str, Sequence[str]]):
    if isinstance(url, str):
        if isinstance(validators.url(url), validators.ValidationError):


@@ -6,6 +6,7 @@ import logging
 from aiohttp import ClientSession
 import urllib
 from open_webui.models.auths import (
     AddUserForm,
     ApiKey,
@@ -16,9 +17,13 @@ from open_webui.models.auths import (
     SigninResponse,
     SignupForm,
     UpdatePasswordForm,
+    UserResponse,
 )
-from open_webui.models.users import Users, UpdateProfileForm
+from open_webui.models.users import (
+    UserProfileImageResponse,
+    Users,
+    UpdateProfileForm,
+    UserStatus,
+)
 from open_webui.models.groups import Groups
 from open_webui.models.oauth_sessions import OAuthSessions
@@ -60,6 +65,11 @@ from open_webui.utils.auth import (
 )
 from open_webui.utils.webhook import post_webhook
 from open_webui.utils.access_control import get_permissions, has_permission
+from open_webui.utils.groups import apply_default_group_assignment
+from open_webui.utils.redis import get_redis_client
+from open_webui.utils.rate_limit import RateLimiter
 from typing import Optional, List
@@ -73,17 +83,21 @@ router = APIRouter()
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MAIN"])

+signin_rate_limiter = RateLimiter(
+    redis_client=get_redis_client(), limit=5 * 3, window=60 * 3
+)
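The `RateLimiter` wired up above is Redis-backed and its internals are not shown in this diff. As a rough illustration of the same idea, a hypothetical in-memory sliding-window limiter that allows at most `limit` attempts per `window` seconds per key might look like:

```python
import time
from collections import defaultdict


class InMemoryRateLimiter:
    """Illustrative stand-in for the Redis-backed RateLimiter used above."""

    def __init__(self, limit: int, window: int):
        self.limit = limit
        self.window = window  # seconds
        self.hits = defaultdict(list)  # key -> timestamps of recent attempts

    def is_limited(self, key: str) -> bool:
        now = time.time()
        # Drop attempts that fell out of the window.
        self.hits[key] = [t for t in self.hits[key] if now - t < self.window]
        if len(self.hits[key]) >= self.limit:
            return True  # over the limit; do not record another attempt
        self.hits[key].append(now)
        return False


limiter = InMemoryRateLimiter(limit=3, window=60)
results = [limiter.is_limited("user@example.com") for _ in range(4)]
print(results)  # [False, False, False, True]
```

The signin handler keys the limiter on the lowercased email, so repeated failures against one account are throttled without affecting other accounts.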
 ############################
 # GetSessionUser
 ############################

-class SessionUserResponse(Token, UserResponse):
+class SessionUserResponse(Token, UserProfileImageResponse):
     expires_at: Optional[int] = None
     permissions: Optional[dict] = None

-class SessionUserInfoResponse(SessionUserResponse):
+class SessionUserInfoResponse(SessionUserResponse, UserStatus):
     bio: Optional[str] = None
     gender: Optional[str] = None
     date_of_birth: Optional[datetime.date] = None
@@ -140,6 +154,9 @@ async def get_session_user(
         "bio": user.bio,
         "gender": user.gender,
         "date_of_birth": user.date_of_birth,
+        "status_emoji": user.status_emoji,
+        "status_message": user.status_message,
+        "status_expires_at": user.status_expires_at,
         "permissions": user_permissions,
     }
@@ -149,7 +166,7 @@ async def get_session_user(
 ############################

-@router.post("/update/profile", response_model=UserResponse)
+@router.post("/update/profile", response_model=UserProfileImageResponse)
 async def update_profile(
     form_data: UpdateProfileForm, session_user=Depends(get_verified_user)
 ):
@@ -401,6 +418,11 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
                 500, detail=ERROR_MESSAGES.CREATE_USER_ERROR
             )

+        apply_default_group_assignment(
+            request.app.state.config.DEFAULT_GROUP_ID,
+            user.id,
+        )
     except HTTPException:
         raise
     except Exception as err:
@@ -449,7 +471,6 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
         ):
             if ENABLE_LDAP_GROUP_CREATION:
                 Groups.create_groups_by_group_names(user.id, user_groups)
-
             try:
                 Groups.sync_groups_by_group_names(user.id, user_groups)
                 log.info(
@@ -544,6 +565,12 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
                 admin_email.lower(), lambda pw: verify_password(admin_password, pw)
             )
         else:
+            if signin_rate_limiter.is_limited(form_data.email.lower()):
+                raise HTTPException(
+                    status_code=status.HTTP_429_TOO_MANY_REQUESTS,
+                    detail=ERROR_MESSAGES.RATE_LIMIT_EXCEEDED,
+                )
+
             password_bytes = form_data.password.encode("utf-8")
             if len(password_bytes) > 72:
                 # TODO: Implement other hashing algorithms that support longer passwords
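The 72-byte guard above exists because bcrypt only considers the first 72 bytes of a password, and the UTF-8 byte length can exceed the character length. A quick standalone check of the same arithmetic:

```python
# A 96-character password whose UTF-8 encoding is longer than 96 bytes,
# because "ö" encodes to 2 bytes (illustrative value only).
password = "p@sswörd" * 12
password_bytes = password.encode("utf-8")

print(len(password))        # 96 characters
print(len(password_bytes))  # 108 bytes

# Mirrors the guard in the signin handler: reject instead of letting
# bcrypt silently truncate at byte 72.
too_long = len(password_bytes) > 72
print(too_long)  # True
```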
@@ -700,9 +727,10 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
             # Disable signup after the first user is created
             request.app.state.config.ENABLE_SIGNUP = False

-        default_group_id = getattr(request.app.state.config, "DEFAULT_GROUP_ID", "")
-        if default_group_id and default_group_id:
-            Groups.add_users_to_group(default_group_id, [user.id])
+        apply_default_group_assignment(
+            request.app.state.config.DEFAULT_GROUP_ID,
+            user.id,
+        )

         return {
             "token": token,
@@ -807,7 +835,9 @@ async def signout(request: Request, response: Response):

 @router.post("/add", response_model=SigninResponse)
-async def add_user(form_data: AddUserForm, user=Depends(get_admin_user)):
+async def add_user(
+    request: Request, form_data: AddUserForm, user=Depends(get_admin_user)
+):
     if not validate_email_format(form_data.email.lower()):
         raise HTTPException(
             status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.INVALID_EMAIL_FORMAT
@@ -832,6 +862,11 @@ async def add_user(form_data: AddUserForm, user=Depends(get_admin_user)):
         )

         if user:
+            apply_default_group_assignment(
+                request.app.state.config.DEFAULT_GROUP_ID,
+                user.id,
+            )
+
             token = create_token(data={"id": user.id})
             return {
                 "token": token,
@@ -901,6 +936,7 @@ async def get_admin_config(request: Request, user=Depends(get_admin_user)):
         "JWT_EXPIRES_IN": request.app.state.config.JWT_EXPIRES_IN,
         "ENABLE_COMMUNITY_SHARING": request.app.state.config.ENABLE_COMMUNITY_SHARING,
         "ENABLE_MESSAGE_RATING": request.app.state.config.ENABLE_MESSAGE_RATING,
+        "ENABLE_FOLDERS": request.app.state.config.ENABLE_FOLDERS,
         "ENABLE_CHANNELS": request.app.state.config.ENABLE_CHANNELS,
         "ENABLE_NOTES": request.app.state.config.ENABLE_NOTES,
         "ENABLE_USER_WEBHOOKS": request.app.state.config.ENABLE_USER_WEBHOOKS,
@@ -922,6 +958,7 @@ class AdminConfig(BaseModel):
     JWT_EXPIRES_IN: str
     ENABLE_COMMUNITY_SHARING: bool
     ENABLE_MESSAGE_RATING: bool
+    ENABLE_FOLDERS: bool
     ENABLE_CHANNELS: bool
     ENABLE_NOTES: bool
     ENABLE_USER_WEBHOOKS: bool
@@ -946,6 +983,7 @@ async def update_admin_config(
         form_data.API_KEYS_ALLOWED_ENDPOINTS
     )

+    request.app.state.config.ENABLE_FOLDERS = form_data.ENABLE_FOLDERS
     request.app.state.config.ENABLE_CHANNELS = form_data.ENABLE_CHANNELS
     request.app.state.config.ENABLE_NOTES = form_data.ENABLE_NOTES
@@ -988,6 +1026,7 @@ async def update_admin_config(
         "JWT_EXPIRES_IN": request.app.state.config.JWT_EXPIRES_IN,
         "ENABLE_COMMUNITY_SHARING": request.app.state.config.ENABLE_COMMUNITY_SHARING,
         "ENABLE_MESSAGE_RATING": request.app.state.config.ENABLE_MESSAGE_RATING,
+        "ENABLE_FOLDERS": request.app.state.config.ENABLE_FOLDERS,
         "ENABLE_CHANNELS": request.app.state.config.ENABLE_CHANNELS,
         "ENABLE_NOTES": request.app.state.config.ENABLE_NOTES,
         "ENABLE_USER_WEBHOOKS": request.app.state.config.ENABLE_USER_WEBHOOKS,
@@ -1130,8 +1169,7 @@ async def generate_api_key(request: Request, user=Depends(get_current_user)):
 # delete api key
 @router.delete("/api_key", response_model=bool)
 async def delete_api_key(user=Depends(get_current_user)):
-    success = Users.update_user_api_key_by_id(user.id, None)
-    return success
+    return Users.delete_user_api_key_by_id(user.id)

 # get api key


@@ -7,8 +7,20 @@ from fastapi import APIRouter, Depends, HTTPException, Request, status, Backgrou
 from pydantic import BaseModel

-from open_webui.socket.main import sio, get_user_ids_from_room
-from open_webui.models.users import Users, UserNameResponse
+from open_webui.socket.main import (
+    emit_to_users,
+    enter_room_for_users,
+    sio,
+    get_user_ids_from_room,
+)
+from open_webui.models.users import (
+    UserIdNameResponse,
+    UserIdNameStatusResponse,
+    UserListResponse,
+    UserModelResponse,
+    Users,
+    UserNameResponse,
+)
 from open_webui.models.groups import Groups
 from open_webui.models.channels import (
@@ -16,11 +28,13 @@ from open_webui.models.channels import (
     ChannelModel,
     ChannelForm,
     ChannelResponse,
+    CreateChannelForm,
 )
 from open_webui.models.messages import (
     Messages,
     MessageModel,
     MessageResponse,
+    MessageWithReactionsResponse,
     MessageForm,
 )
@@ -38,7 +52,12 @@ from open_webui.utils.chat import generate_chat_completion
 from open_webui.utils.auth import get_admin_user, get_verified_user
-from open_webui.utils.access_control import has_access, get_users_with_access
+from open_webui.utils.access_control import (
+    has_access,
+    get_users_with_access,
+    get_permitted_group_and_user_ids,
+    has_permission,
+)
 from open_webui.utils.webhook import post_webhook
 from open_webui.utils.channels import extract_mentions, replace_mentions
@@ -52,9 +71,64 @@ router = APIRouter()
 ############################

-@router.get("/", response_model=list[ChannelModel])
-async def get_channels(user=Depends(get_verified_user)):
-    return Channels.get_channels_by_user_id(user.id)
+class ChannelListItemResponse(ChannelModel):
+    user_ids: Optional[list[str]] = None  # 'dm' channels only
+    users: Optional[list[UserIdNameStatusResponse]] = None  # 'dm' channels only
+    last_message_at: Optional[int] = None  # timestamp in epoch (time_ns)
+    unread_count: int = 0
+
+
+@router.get("/", response_model=list[ChannelListItemResponse])
+async def get_channels(request: Request, user=Depends(get_verified_user)):
+    if user.role != "admin" and not has_permission(
+        user.id, "features.channels", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+
+    channels = Channels.get_channels_by_user_id(user.id)
+
+    channel_list = []
+    for channel in channels:
+        last_message = Messages.get_last_message_by_channel_id(channel.id)
+        last_message_at = last_message.created_at if last_message else None
+
+        channel_member = Channels.get_member_by_channel_and_user_id(channel.id, user.id)
+        unread_count = (
+            Messages.get_unread_message_count(
+                channel.id, user.id, channel_member.last_read_at
+            )
+            if channel_member
+            else 0
+        )
+
+        user_ids = None
+        users = None
+        if channel.type == "dm":
+            user_ids = [
+                member.user_id
+                for member in Channels.get_members_by_channel_id(channel.id)
+            ]
+            users = [
+                UserIdNameStatusResponse(
+                    **{**user.model_dump(), "is_active": Users.is_user_active(user.id)}
+                )
+                for user in Users.get_users_by_user_ids(user_ids)
+            ]
+
+        channel_list.append(
+            ChannelListItemResponse(
+                **channel.model_dump(),
+                user_ids=user_ids,
+                users=users,
+                last_message_at=last_message_at,
+                unread_count=unread_count,
+            )
+        )
+
+    return channel_list


 @router.get("/list", response_model=list[ChannelModel])
@@ -64,16 +138,141 @@ async def get_all_channels(user=Depends(get_verified_user)):
     return Channels.get_channels_by_user_id(user.id)
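The exact query behind `Messages.get_unread_message_count` is not part of this diff; a plausible reading, given that timestamps in these responses are epoch nanoseconds (`time_ns`) and that `last_read_at` can be `None` for a member who has never read the channel, is "count messages created after the read marker":

```python
from typing import Optional


def unread(message_times: list[int], last_read_at: Optional[int]) -> int:
    # Hypothetical sketch of the unread-count semantics; not the actual query.
    if last_read_at is None:
        return len(message_times)  # no read marker → everything is unread
    return sum(1 for t in message_times if t > last_read_at)


times = [1_000, 2_000, 3_000, 4_000]  # created_at values in epoch nanoseconds
print(unread(times, 2_000))  # 2
print(unread(times, None))   # 4
```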
############################
# GetDMChannelByUserId
############################
@router.get("/users/{user_id}", response_model=Optional[ChannelModel])
async def get_dm_channel_by_user_id(
request: Request, user_id: str, user=Depends(get_verified_user)
):
if user.role != "admin" and not has_permission(
user.id, "features.channels", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
try:
existing_channel = Channels.get_dm_channel_by_user_ids([user.id, user_id])
if existing_channel:
participant_ids = [
member.user_id
for member in Channels.get_members_by_channel_id(existing_channel.id)
]
await emit_to_users(
"events:channel",
{"data": {"type": "channel:created"}},
participant_ids,
)
await enter_room_for_users(
f"channel:{existing_channel.id}", participant_ids
)
Channels.update_member_active_status(existing_channel.id, user.id, True)
return ChannelModel(**existing_channel.model_dump())
channel = Channels.insert_new_channel(
CreateChannelForm(
type="dm",
name="",
user_ids=[user_id],
),
user.id,
)
if channel:
participant_ids = [
member.user_id
for member in Channels.get_members_by_channel_id(channel.id)
]
await emit_to_users(
"events:channel",
{"data": {"type": "channel:created"}},
participant_ids,
)
await enter_room_for_users(f"channel:{channel.id}", participant_ids)
return ChannelModel(**channel.model_dump())
else:
raise Exception("Error creating channel")
except Exception as e:
log.exception(e)
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
)
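`Channels.get_dm_channel_by_user_ids` is called with `[user.id, user_id]` before any channel is created, so the lookup presumably treats the DM's member set as unordered. A hypothetical sketch of such an order-insensitive index (names and data made up):

```python
# frozenset of member ids -> channel id, so [a, b] and [b, a] resolve
# to the same DM channel (illustrative stand-in for the DB lookup).
dm_index = {}


def dm_key(user_ids: list[str]) -> frozenset:
    return frozenset(user_ids)


dm_index[dm_key(["alice", "bob"])] = "channel-1"

print(dm_index.get(dm_key(["bob", "alice"])))   # channel-1 (order-insensitive)
print(dm_index.get(dm_key(["alice", "carol"])))  # None → create a new channel
```

This mirrors the endpoint's flow: reuse the existing DM when the member set already has one, otherwise insert a new `type="dm"` channel.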
 ############################
 # CreateNewChannel
 ############################

 @router.post("/create", response_model=Optional[ChannelModel])
-async def create_new_channel(form_data: ChannelForm, user=Depends(get_admin_user)):
+async def create_new_channel(
+    request: Request, form_data: CreateChannelForm, user=Depends(get_verified_user)
+):
+    if user.role != "admin" and not has_permission(
+        user.id, "features.channels", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+
+    if form_data.type not in ["group", "dm"] and user.role != "admin":
+        # Only admins can create standard channels (joined by default)
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+
     try:
-        channel = Channels.insert_new_channel(None, form_data, user.id)
+        if form_data.type == "dm":
existing_channel = Channels.get_dm_channel_by_user_ids(
[user.id, *form_data.user_ids]
)
if existing_channel:
participant_ids = [
member.user_id
for member in Channels.get_members_by_channel_id(
existing_channel.id
)
]
await emit_to_users(
"events:channel",
{"data": {"type": "channel:created"}},
participant_ids,
)
await enter_room_for_users(
f"channel:{existing_channel.id}", participant_ids
)
Channels.update_member_active_status(existing_channel.id, user.id, True)
return ChannelModel(**existing_channel.model_dump())
channel = Channels.insert_new_channel(form_data, user.id)
if channel:
participant_ids = [
member.user_id
for member in Channels.get_members_by_channel_id(channel.id)
]
await emit_to_users(
"events:channel",
{"data": {"type": "channel:created"}},
participant_ids,
)
await enter_room_for_users(f"channel:{channel.id}", participant_ids)
             return ChannelModel(**channel.model_dump())
+        else:
+            raise Exception("Error creating channel")
     except Exception as e:
         log.exception(e)
         raise HTTPException(
@@ -86,7 +285,15 @@ async def create_new_channel(form_data: ChannelForm, user=Depends(get_admin_user
 ############################

-@router.get("/{id}", response_model=Optional[ChannelResponse])
+class ChannelFullResponse(ChannelResponse):
+    user_ids: Optional[list[str]] = None  # 'group'/'dm' channels only
+    users: Optional[list[UserIdNameStatusResponse]] = None  # 'group'/'dm' channels only
+    last_read_at: Optional[int] = None  # timestamp in epoch (time_ns)
+    unread_count: int = 0
+
+
+@router.get("/{id}", response_model=Optional[ChannelFullResponse])
 async def get_channel_by_id(id: str, user=Depends(get_verified_user)):
     channel = Channels.get_channel_by_id(id)
     if not channel:
@@ -94,6 +301,44 @@ async def get_channel_by_id(id: str, user=Depends(get_verified_user)):
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )
user_ids = None
users = None
if channel.type in ["group", "dm"]:
if not Channels.is_user_channel_member(channel.id, user.id):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
)
user_ids = [
member.user_id for member in Channels.get_members_by_channel_id(channel.id)
]
users = [
UserIdNameStatusResponse(
**{**user.model_dump(), "is_active": Users.is_user_active(user.id)}
)
for user in Users.get_users_by_user_ids(user_ids)
]
channel_member = Channels.get_member_by_channel_and_user_id(channel.id, user.id)
unread_count = Messages.get_unread_message_count(
channel.id, user.id, channel_member.last_read_at if channel_member else None
)
return ChannelFullResponse(
**{
**channel.model_dump(),
"user_ids": user_ids,
"users": users,
"is_manager": Channels.is_user_channel_manager(channel.id, user.id),
"write_access": True,
"user_count": len(user_ids),
"last_read_at": channel_member.last_read_at if channel_member else None,
"unread_count": unread_count,
}
)
+    else:
         if user.role != "admin" and not has_access(
             user.id, type="read", access_control=channel.access_control
         ):
@@ -105,14 +350,240 @@ async def get_channel_by_id(id: str, user=Depends(get_verified_user)):
             user.id, type="write", access_control=channel.access_control, strict=False
         )

-        return ChannelResponse(
+        user_count = len(get_users_with_access("read", channel.access_control))
+
+        channel_member = Channels.get_member_by_channel_and_user_id(channel.id, user.id)
+        unread_count = Messages.get_unread_message_count(
+            channel.id, user.id, channel_member.last_read_at if channel_member else None
+        )
+
+        return ChannelFullResponse(
             **{
                 **channel.model_dump(),
+                "user_ids": user_ids,
+                "users": users,
+                "is_manager": Channels.is_user_channel_manager(channel.id, user.id),
                 "write_access": write_access or user.role == "admin",
+                "user_count": user_count,
+                "last_read_at": channel_member.last_read_at if channel_member else None,
+                "unread_count": unread_count,
             }
         )
############################
# GetChannelMembersById
############################
PAGE_ITEM_COUNT = 30
@router.get("/{id}/members", response_model=UserListResponse)
async def get_channel_members_by_id(
id: str,
query: Optional[str] = None,
order_by: Optional[str] = None,
direction: Optional[str] = None,
page: Optional[int] = 1,
user=Depends(get_verified_user),
):
channel = Channels.get_channel_by_id(id)
if not channel:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
)
limit = PAGE_ITEM_COUNT
page = max(1, page)
skip = (page - 1) * limit
if channel.type in ["group", "dm"]:
if not Channels.is_user_channel_member(channel.id, user.id):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
)
if channel.type == "dm":
user_ids = [
member.user_id for member in Channels.get_members_by_channel_id(channel.id)
]
users = Users.get_users_by_user_ids(user_ids)
total = len(users)
return {
"users": [
UserModelResponse(
**user.model_dump(), is_active=Users.is_user_active(user.id)
)
for user in users
],
"total": total,
}
else:
filter = {}
if query:
filter["query"] = query
if order_by:
filter["order_by"] = order_by
if direction:
filter["direction"] = direction
if channel.type == "group":
filter["channel_id"] = channel.id
else:
filter["roles"] = ["!pending"]
permitted_ids = get_permitted_group_and_user_ids(
"read", channel.access_control
)
if permitted_ids:
filter["user_ids"] = permitted_ids.get("user_ids")
filter["group_ids"] = permitted_ids.get("group_ids")
result = Users.get_users(filter=filter, skip=skip, limit=limit)
users = result["users"]
total = result["total"]
return {
"users": [
UserModelResponse(
**user.model_dump(), is_active=Users.is_user_active(user.id)
)
for user in users
],
"total": total,
}
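The members endpoint above converts a 1-indexed `page` into a `skip`/`limit` pair, clamping out-of-range pages with `max(1, page)`. The arithmetic in isolation:

```python
PAGE_ITEM_COUNT = 30  # page size used by the members endpoint above


def page_to_slice(page: int, limit: int = PAGE_ITEM_COUNT):
    # 1-indexed pages; values below 1 are clamped to the first page,
    # mirroring the max(1, page) guard in the handler.
    page = max(1, page)
    skip = (page - 1) * limit
    return skip, limit


print(page_to_slice(1))  # (0, 30)
print(page_to_slice(3))  # (60, 30)
print(page_to_slice(0))  # (0, 30) — clamped
```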
#################################################
# UpdateIsActiveMemberByIdAndUserId
#################################################
class UpdateActiveMemberForm(BaseModel):
is_active: bool
@router.post("/{id}/members/active", response_model=bool)
async def update_is_active_member_by_id_and_user_id(
id: str,
form_data: UpdateActiveMemberForm,
user=Depends(get_verified_user),
):
channel = Channels.get_channel_by_id(id)
if not channel:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
)
if not Channels.is_user_channel_member(channel.id, user.id):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
)
Channels.update_member_active_status(channel.id, user.id, form_data.is_active)
return True
#################################################
# AddMembersById
#################################################
class UpdateMembersForm(BaseModel):
user_ids: list[str] = []
group_ids: list[str] = []
@router.post("/{id}/update/members/add")
async def add_members_by_id(
request: Request,
id: str,
form_data: UpdateMembersForm,
user=Depends(get_verified_user),
):
if user.role != "admin" and not has_permission(
user.id, "features.channels", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
channel = Channels.get_channel_by_id(id)
if not channel:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
)
if channel.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
)
try:
memberships = Channels.add_members_to_channel(
channel.id, user.id, form_data.user_ids, form_data.group_ids
)
return memberships
except Exception as e:
log.exception(e)
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
)
#################################################
#
#################################################
class RemoveMembersForm(BaseModel):
user_ids: list[str] = []
@router.post("/{id}/update/members/remove")
async def remove_members_by_id(
request: Request,
id: str,
form_data: RemoveMembersForm,
user=Depends(get_verified_user),
):
if user.role != "admin" and not has_permission(
user.id, "features.channels", request.app.state.config.USER_PERMISSIONS
):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=ERROR_MESSAGES.UNAUTHORIZED,
)
channel = Channels.get_channel_by_id(id)
if not channel:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
)
if channel.user_id != user.id and user.role != "admin":
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
)
try:
deleted = Channels.remove_members_from_channel(channel.id, form_data.user_ids)
return deleted
except Exception as e:
log.exception(e)
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
)
 ############################
 # UpdateChannelById
 ############################
@@ -120,14 +591,27 @@ async def get_channel_by_id(id: str, user=Depends(get_verified_user)):
 @router.post("/{id}/update", response_model=Optional[ChannelModel])
 async def update_channel_by_id(
-    id: str, form_data: ChannelForm, user=Depends(get_admin_user)
+    request: Request, id: str, form_data: ChannelForm, user=Depends(get_verified_user)
 ):
+    if user.role != "admin" and not has_permission(
+        user.id, "features.channels", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+
     channel = Channels.get_channel_by_id(id)
     if not channel:
         raise HTTPException(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )

+    if channel.user_id != user.id and user.role != "admin":
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+        )
+
     try:
         channel = Channels.update_channel_by_id(id, form_data)
         return ChannelModel(**channel.model_dump())
@@ -144,13 +628,28 @@ async def update_channel_by_id(

 @router.delete("/{id}/delete", response_model=bool)
-async def delete_channel_by_id(id: str, user=Depends(get_admin_user)):
+async def delete_channel_by_id(
+    request: Request, id: str, user=Depends(get_verified_user)
+):
+    if user.role != "admin" and not has_permission(
+        user.id, "features.channels", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+
     channel = Channels.get_channel_by_id(id)
     if not channel:
         raise HTTPException(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )

+    if channel.user_id != user.id and user.role != "admin":
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+        )
+
     try:
         Channels.delete_channel_by_id(id)
         return True
@@ -180,6 +679,12 @@ async def get_channel_messages(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )

+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
         if user.role != "admin" and not has_access(
             user.id, type="read", access_control=channel.access_control
         ):
@@ -187,6 +692,10 @@ async def get_channel_messages(
                 status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
             )

+    channel_member = Channels.join_channel(
+        id, user.id
+    )  # Ensure user is a member of the channel
+
     message_list = Messages.get_messages_by_channel_id(id, skip, limit)
     users = {}
@@ -216,6 +725,62 @@ async def get_channel_messages(
     return messages
############################
# GetPinnedChannelMessages
############################
PAGE_ITEM_COUNT_PINNED = 20
@router.get("/{id}/messages/pinned", response_model=list[MessageWithReactionsResponse])
async def get_pinned_channel_messages(
id: str, page: int = 1, user=Depends(get_verified_user)
):
channel = Channels.get_channel_by_id(id)
if not channel:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
)
if channel.type in ["group", "dm"]:
if not Channels.is_user_channel_member(channel.id, user.id):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
)
else:
if user.role != "admin" and not has_access(
user.id, type="read", access_control=channel.access_control
):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
)
page = max(1, page)
skip = (page - 1) * PAGE_ITEM_COUNT_PINNED
limit = PAGE_ITEM_COUNT_PINNED
message_list = Messages.get_pinned_messages_by_channel_id(id, skip, limit)
users = {}
messages = []
for message in message_list:
if message.user_id not in users:
user = Users.get_user_by_id(message.user_id)
users[message.user_id] = user
messages.append(
MessageWithReactionsResponse(
**{
**message.model_dump(),
"reactions": Messages.get_reactions_by_message_id(message.id),
"user": UserNameResponse(**users[message.user_id].model_dump()),
}
)
)
return messages
 ############################
 # PostNewMessage
 ############################
@@ -225,7 +790,9 @@ async def send_notification(name, webui_url, channel, message, active_user_ids):
     users = get_users_with_access("read", channel.access_control)

     for user in users:
-        if user.id not in active_user_ids:
+        if (user.id not in active_user_ids) and Channels.is_user_channel_member(
+            channel.id, user.id
+        ):
             if user.settings:
                 webhook_url = user.settings.ui.get("notifications", {}).get(
                     "webhook_url", None
@@ -429,6 +996,12 @@ async def new_message_handler(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )

+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
         if user.role != "admin" and not has_access(
             user.id, type="write", access_control=channel.access_control, strict=False
         ):
@@ -439,13 +1012,21 @@ async def new_message_handler(
     try:
         message = Messages.insert_new_message(form_data, channel.id, user.id)

         if message:
+            if channel.type in ["group", "dm"]:
+                members = Channels.get_members_by_channel_id(channel.id)
+                for member in members:
+                    if not member.is_active:
+                        Channels.update_member_active_status(
+                            channel.id, member.user_id, True
+                        )
+
             message = Messages.get_message_by_id(message.id)
             event_data = {
                 "channel_id": channel.id,
                 "message_id": message.id,
                 "data": {
                     "type": "message",
-                    "data": message.model_dump(),
+                    "data": {"temp_id": form_data.temp_id, **message.model_dump()},
                 },
                 "user": UserNameResponse(**user.model_dump()).model_dump(),
                 "channel": channel.model_dump(),
@@ -537,6 +1118,12 @@ async def get_channel_message(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )

+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
         if user.role != "admin" and not has_access(
             user.id, type="read", access_control=channel.access_control
         ):
@@ -565,6 +1152,69 @@ async def get_channel_message(
         )
 
 
+############################
+# PinChannelMessage
+############################
+
+
+class PinMessageForm(BaseModel):
+    is_pinned: bool
+
+
+@router.post(
+    "/{id}/messages/{message_id}/pin", response_model=Optional[MessageUserResponse]
+)
+async def pin_channel_message(
+    id: str, message_id: str, form_data: PinMessageForm, user=Depends(get_verified_user)
+):
+    channel = Channels.get_channel_by_id(id)
+    if not channel:
+        raise HTTPException(
+            status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
+        )
+
+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
+        if user.role != "admin" and not has_access(
+            user.id, type="read", access_control=channel.access_control
+        ):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+
+    message = Messages.get_message_by_id(message_id)
+    if not message:
+        raise HTTPException(
+            status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
+        )
+
+    if message.channel_id != id:
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
+        )
+
+    try:
+        Messages.update_is_pinned_by_id(message_id, form_data.is_pinned, user.id)
+        message = Messages.get_message_by_id(message_id)
+
+        return MessageUserResponse(
+            **{
+                **message.model_dump(),
+                "user": UserNameResponse(
+                    **Users.get_user_by_id(message.user_id).model_dump()
+                ),
+            }
+        )
+    except Exception as e:
+        log.exception(e)
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
+        )
+
+
 ############################
 # GetChannelThreadMessages
 ############################
@@ -586,6 +1236,12 @@ async def get_channel_thread_messages(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )
 
-    if user.role != "admin" and not has_access(
-        user.id, type="read", access_control=channel.access_control
-    ):
+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
+        if user.role != "admin" and not has_access(
+            user.id, type="read", access_control=channel.access_control
+        ):
@@ -645,10 +1301,18 @@ async def update_message_by_id(
             status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
         )
 
-    if (
-        user.role != "admin"
-        and message.user_id != user.id
-        and not has_access(user.id, type="read", access_control=channel.access_control)
-    ):
-        raise HTTPException(
-            status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
+        if (
+            user.role != "admin"
+            and message.user_id != user.id
+            and not has_access(
+                user.id, type="read", access_control=channel.access_control
+            )
+        ):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
@@ -701,6 +1365,12 @@ async def add_reaction_to_message(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )
 
-    if user.role != "admin" and not has_access(
-        user.id, type="write", access_control=channel.access_control, strict=False
-    ):
+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
+        if user.role != "admin" and not has_access(
+            user.id, type="write", access_control=channel.access_control, strict=False
+        ):
@@ -764,6 +1434,12 @@ async def remove_reaction_by_id_and_user_id_and_name(
             status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
         )
 
-    if user.role != "admin" and not has_access(
-        user.id, type="write", access_control=channel.access_control, strict=False
-    ):
+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
+        if user.role != "admin" and not has_access(
+            user.id, type="write", access_control=channel.access_control, strict=False
+        ):
@@ -841,11 +1517,20 @@ async def delete_message_by_id(
             status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
         )
 
-    if (
-        user.role != "admin"
-        and message.user_id != user.id
-        and not has_access(
-            user.id, type="write", access_control=channel.access_control, strict=False
-        )
-    ):
-        raise HTTPException(
+    if channel.type in ["group", "dm"]:
+        if not Channels.is_user_channel_member(channel.id, user.id):
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
+            )
+    else:
+        if (
+            user.role != "admin"
+            and message.user_id != user.id
+            and not has_access(
+                user.id,
+                type="write",
+                access_control=channel.access_control,
+                strict=False,
+            )
+        ):
+            raise HTTPException(


@@ -22,6 +22,7 @@ from fastapi import (
 )
 from fastapi.responses import FileResponse, StreamingResponse
 
 from open_webui.constants import ERROR_MESSAGES
 from open_webui.env import SRC_LOG_LEVELS
 from open_webui.retrieval.vector.factory import VECTOR_DB_CLIENT
@@ -34,12 +35,19 @@ from open_webui.models.files import (
     Files,
 )
 from open_webui.models.knowledge import Knowledges
+from open_webui.models.groups import Groups
 from open_webui.routers.knowledge import get_knowledge, get_knowledge_list
 from open_webui.routers.retrieval import ProcessFileForm, process_file
 from open_webui.routers.audio import transcribe
 from open_webui.storage.provider import Storage
 from open_webui.utils.auth import get_admin_user, get_verified_user
+from open_webui.utils.access_control import has_access
 from pydantic import BaseModel
 
 log = logging.getLogger(__name__)
@@ -53,31 +61,37 @@ router = APIRouter()
 ############################
 
 
+# TODO: Optimize this function to use the knowledge_file table for faster lookups.
 def has_access_to_file(
     file_id: Optional[str], access_type: str, user=Depends(get_verified_user)
 ) -> bool:
     file = Files.get_file_by_id(file_id)
     log.debug(f"Checking if user has {access_type} access to file")
 
     if not file:
         raise HTTPException(
             status_code=status.HTTP_404_NOT_FOUND,
             detail=ERROR_MESSAGES.NOT_FOUND,
         )
 
-    has_access = False
-    knowledge_base_id = file.meta.get("collection_name") if file.meta else None
+    knowledge_bases = Knowledges.get_knowledges_by_file_id(file_id)
+    user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user.id)}
+    for knowledge_base in knowledge_bases:
+        if knowledge_base.user_id == user.id or has_access(
+            user.id, access_type, knowledge_base.access_control, user_group_ids
+        ):
+            return True
 
+    knowledge_base_id = file.meta.get("collection_name") if file.meta else None
     if knowledge_base_id:
         knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(
             user.id, access_type
         )
         for knowledge_base in knowledge_bases:
             if knowledge_base.id == knowledge_base_id:
-                has_access = True
-                break
+                return True
 
-    return has_access
+    return False
 
 
 ############################
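The refactored `has_access_to_file` computes the user's group ids once as a set and reuses it for every knowledge base containing the file. A simplified model of that pattern, with plain dataclasses standing in for the real ORM objects (the `access_control` shape here is an assumption for illustration):

```python
from dataclasses import dataclass, field


@dataclass
class KB:
    user_id: str
    # e.g. {"read": {"group_ids": ["g1"]}}; illustrative shape only
    access_control: dict = field(default_factory=dict)


def has_access(access_type: str, access_control: dict, user_group_ids: set) -> bool:
    allowed = set(access_control.get(access_type, {}).get("group_ids", []))
    return bool(allowed & user_group_ids)  # set intersection, O(min(len))


def file_readable(kbs: list, user_id: str, user_group_ids: set) -> bool:
    # Owner of, or group grant on, ANY knowledge base containing the file.
    return any(
        kb.user_id == user_id or has_access("read", kb.access_control, user_group_ids)
        for kb in kbs
    )


kbs = [KB("owner", {"read": {"group_ids": ["g1"]}})]
print(file_readable(kbs, "someone", {"g1"}))  # True via group grant
```

Precomputing the set once avoids a per-knowledge-base group lookup, which is the point of the `user_group_ids` argument in the diff.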


@@ -46,7 +46,23 @@ router = APIRouter()
 
 @router.get("/", response_model=list[FolderNameIdResponse])
-async def get_folders(user=Depends(get_verified_user)):
+async def get_folders(request: Request, user=Depends(get_verified_user)):
+    if request.app.state.config.ENABLE_FOLDERS is False:
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN,
+            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
+        )
+
+    if user.role != "admin" and not has_permission(
+        user.id,
+        "features.folders",
+        request.app.state.config.USER_PERMISSIONS,
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN,
+            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
+        )
+
     folders = Folders.get_folders_by_user_id(user.id)
 
     # Verify folder data integrity
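The `"features.folders"` argument above is a dotted path into the nested `USER_PERMISSIONS` config. A sketch of how such a lookup can work; the real `has_permission` also consults the user's groups, so this shows only the path walk (names are illustrative):

```python
def resolve_permission(path: str, permissions: dict, default: bool = False) -> bool:
    # Walk one key per dotted segment; fall back to `default` on any miss.
    node = permissions
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return bool(node)


perms = {"features": {"folders": True, "notes": False}}
print(resolve_permission("features.folders", perms))  # True
print(resolve_permission("features.missing", perms))  # False (default)
```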


@@ -3,7 +3,7 @@ from pathlib import Path
 from typing import Optional
 import logging
 
-from open_webui.models.users import Users
+from open_webui.models.users import Users, UserInfoResponse
 from open_webui.models.groups import (
     Groups,
     GroupForm,
@@ -32,31 +32,17 @@ router = APIRouter()
 @router.get("/", response_model=list[GroupResponse])
 async def get_groups(share: Optional[bool] = None, user=Depends(get_verified_user)):
-    if user.role == "admin":
-        groups = Groups.get_groups()
-    else:
-        groups = Groups.get_groups_by_member_id(user.id)
+    filter = {}
+    if user.role != "admin":
+        filter["member_id"] = user.id
 
-    group_list = []
-    for group in groups:
-        if share is not None:
-            # Check if the group has data and a config with share key
-            if (
-                group.data
-                and "share" in group.data.get("config", {})
-                and group.data["config"]["share"] != share
-            ):
-                continue
-
-        group_list.append(
-            GroupResponse(
-                **group.model_dump(),
-                member_count=Groups.get_group_member_count_by_id(group.id),
-            )
-        )
+    if share is not None:
+        filter["share"] = share
 
-    return group_list
+    groups = Groups.get_groups(filter=filter)
+    return groups
 
 
 ############################
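`get_groups` now builds a filter dict in the route and pushes the actual filtering into the model layer instead of looping over results. The shape of that pattern, with an in-memory stand-in for the `Groups` table (not the real ORM method):

```python
GROUPS = [
    {"id": "g1", "member_ids": ["u1"], "share": True},
    {"id": "g2", "member_ids": ["u2"], "share": False},
]


def get_groups(filter=None):
    # Each filter key narrows the result set; absent keys are no-ops.
    filter = filter or {}
    results = GROUPS
    if "member_id" in filter:
        results = [g for g in results if filter["member_id"] in g["member_ids"]]
    if "share" in filter:
        results = [g for g in results if g["share"] == filter["share"]]
    return results


# The route only decides which keys go into the filter:
filter = {}
role = "user"
if role != "admin":        # non-admin: restrict to own memberships
    filter["member_id"] = "u1"
filter["share"] = True     # the `share` query parameter was provided
print([g["id"] for g in get_groups(filter=filter)])  # ['g1']
```

Centralizing the filtering lets the model layer translate the same dict into a SQL `WHERE` clause instead of materializing every group first.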
@@ -106,6 +92,50 @@ async def get_group_by_id(id: str, user=Depends(get_admin_user)):
         )
 
 
+############################
+# ExportGroupById
+############################
+
+
+class GroupExportResponse(GroupResponse):
+    user_ids: list[str] = []
+    pass
+
+
+@router.get("/id/{id}/export", response_model=Optional[GroupExportResponse])
+async def export_group_by_id(id: str, user=Depends(get_admin_user)):
+    group = Groups.get_group_by_id(id)
+    if group:
+        return GroupExportResponse(
+            **group.model_dump(),
+            member_count=Groups.get_group_member_count_by_id(group.id),
+            user_ids=Groups.get_group_user_ids_by_id(group.id),
+        )
+    else:
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.NOT_FOUND,
+        )
+
+
+############################
+# GetUsersInGroupById
+############################
+
+
+@router.post("/id/{id}/users", response_model=list[UserInfoResponse])
+async def get_users_in_group(id: str, user=Depends(get_admin_user)):
+    try:
+        users = Users.get_users_by_group_id(id)
+        return users
+    except Exception as e:
+        log.exception(f"Error adding users to group {id}: {e}")
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail=ERROR_MESSAGES.DEFAULT(e),
+        )
+
+
 ############################
 # UpdateGroupById
 ############################


@@ -549,9 +549,7 @@ async def image_generations(
         if ENABLE_FORWARD_USER_INFO_HEADERS:
             headers = include_user_info_headers(headers, user)
 
-        url = (
-            f"{request.app.state.config.IMAGES_OPENAI_API_BASE_URL}/images/generations",
-        )
+        url = f"{request.app.state.config.IMAGES_OPENAI_API_BASE_URL}/images/generations"
 
         if request.app.state.config.IMAGES_OPENAI_API_VERSION:
             url = f"{url}?api-version={request.app.state.config.IMAGES_OPENAI_API_VERSION}"


@@ -1,6 +1,7 @@
 from typing import List, Optional
 from pydantic import BaseModel
 from fastapi import APIRouter, Depends, HTTPException, status, Request, Query
+from fastapi.concurrency import run_in_threadpool
 import logging
 
 from open_webui.models.knowledge import (
@@ -41,97 +42,38 @@ router = APIRouter()
 @router.get("/", response_model=list[KnowledgeUserResponse])
 async def get_knowledge(user=Depends(get_verified_user)):
-    # Return knowledge bases with read access
     knowledge_bases = []
     if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         knowledge_bases = Knowledges.get_knowledge_bases()
     else:
         knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "read")
 
-    # Get files for each knowledge base
-    knowledge_with_files = []
-    for knowledge_base in knowledge_bases:
-        files = []
-        if knowledge_base.data:
-            files = Files.get_file_metadatas_by_ids(
-                knowledge_base.data.get("file_ids", [])
-            )
-
-            # Check if all files exist
-            if len(files) != len(knowledge_base.data.get("file_ids", [])):
-                missing_files = list(
-                    set(knowledge_base.data.get("file_ids", []))
-                    - set([file.id for file in files])
-                )
-                if missing_files:
-                    data = knowledge_base.data or {}
-                    file_ids = data.get("file_ids", [])
-
-                    for missing_file in missing_files:
-                        file_ids.remove(missing_file)
-
-                    data["file_ids"] = file_ids
-                    Knowledges.update_knowledge_data_by_id(
-                        id=knowledge_base.id, data=data
-                    )
-
-                    files = Files.get_file_metadatas_by_ids(file_ids)
-
-        knowledge_with_files.append(
-            KnowledgeUserResponse(
-                **knowledge_base.model_dump(),
-                files=files,
-            )
-        )
-
-    return knowledge_with_files
+    return [
+        KnowledgeUserResponse(
+            **knowledge_base.model_dump(),
+            files=Knowledges.get_file_metadatas_by_id(knowledge_base.id),
+        )
+        for knowledge_base in knowledge_bases
+    ]
 
 
 @router.get("/list", response_model=list[KnowledgeUserResponse])
 async def get_knowledge_list(user=Depends(get_verified_user)):
-    # Return knowledge bases with write access
     knowledge_bases = []
     if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         knowledge_bases = Knowledges.get_knowledge_bases()
     else:
         knowledge_bases = Knowledges.get_knowledge_bases_by_user_id(user.id, "write")
 
-    # Get files for each knowledge base
-    knowledge_with_files = []
-    for knowledge_base in knowledge_bases:
-        files = []
-        if knowledge_base.data:
-            files = Files.get_file_metadatas_by_ids(
-                knowledge_base.data.get("file_ids", [])
-            )
-
-            # Check if all files exist
-            if len(files) != len(knowledge_base.data.get("file_ids", [])):
-                missing_files = list(
-                    set(knowledge_base.data.get("file_ids", []))
-                    - set([file.id for file in files])
-                )
-                if missing_files:
-                    data = knowledge_base.data or {}
-                    file_ids = data.get("file_ids", [])
-
-                    for missing_file in missing_files:
-                        file_ids.remove(missing_file)
-
-                    data["file_ids"] = file_ids
-                    Knowledges.update_knowledge_data_by_id(
-                        id=knowledge_base.id, data=data
-                    )
-
-                    files = Files.get_file_metadatas_by_ids(file_ids)
-
-        knowledge_with_files.append(
-            KnowledgeUserResponse(
-                **knowledge_base.model_dump(),
-                files=files,
-            )
-        )
-
-    return knowledge_with_files
+    return [
+        KnowledgeUserResponse(
+            **knowledge_base.model_dump(),
+            files=Knowledges.get_file_metadatas_by_id(knowledge_base.id),
+        )
+        for knowledge_base in knowledge_bases
+    ]
 
 
 ############################
@@ -191,26 +133,9 @@ async def reindex_knowledge_files(request: Request, user=Depends(get_verified_us
     log.info(f"Starting reindexing for {len(knowledge_bases)} knowledge bases")
 
-    deleted_knowledge_bases = []
-
     for knowledge_base in knowledge_bases:
-        # -- Robust error handling for missing or invalid data
-        if not knowledge_base.data or not isinstance(knowledge_base.data, dict):
-            log.warning(
-                f"Knowledge base {knowledge_base.id} has no data or invalid data ({knowledge_base.data!r}). Deleting."
-            )
-            try:
-                Knowledges.delete_knowledge_by_id(id=knowledge_base.id)
-                deleted_knowledge_bases.append(knowledge_base.id)
-            except Exception as e:
-                log.error(
-                    f"Failed to delete invalid knowledge base {knowledge_base.id}: {e}"
-                )
-            continue
-
-        try:
-            file_ids = knowledge_base.data.get("file_ids", [])
-            files = Files.get_files_by_ids(file_ids)
+        try:
+            files = Knowledges.get_files_by_id(knowledge_base.id)
 
             try:
                 if VECTOR_DB_CLIENT.has_collection(collection_name=knowledge_base.id):
                     VECTOR_DB_CLIENT.delete_collection(
@@ -223,7 +148,8 @@ async def reindex_knowledge_files(request: Request, user=Depends(get_verified_us
             failed_files = []
             for file in files:
                 try:
-                    process_file(
+                    await run_in_threadpool(
+                        process_file,
                         request,
                         ProcessFileForm(
                             file_id=file.id, collection_name=knowledge_base.id
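Wrapping the synchronous `process_file` in `run_in_threadpool` keeps the event loop responsive while each file is reindexed. FastAPI's helper is a thin layer over running a blocking callable in a worker thread; the stdlib equivalent of the same move is `asyncio.to_thread`, sketched here with a stand-in blocking function:

```python
import asyncio
import time


def process_file_blocking(file_id: str) -> str:
    time.sleep(0.05)  # stands in for CPU/IO-heavy embedding work
    return f"processed {file_id}"


async def reindex(file_ids):
    results = []
    for file_id in file_ids:
        # Runs in a worker thread; other coroutines keep making progress
        # instead of being blocked for the duration of each file.
        results.append(await asyncio.to_thread(process_file_blocking, file_id))
    return results


print(asyncio.run(reindex(["f1", "f2"])))  # ['processed f1', 'processed f2']
```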
@@ -249,9 +175,7 @@ async def reindex_knowledge_files(request: Request, user=Depends(get_verified_us
         for failed in failed_files:
             log.warning(f"File ID: {failed['file_id']}, Error: {failed['error']}")
 
-    log.info(
-        f"Reindexing completed. Deleted {len(deleted_knowledge_bases)} invalid knowledge bases: {deleted_knowledge_bases}"
-    )
+    log.info(f"Reindexing completed.")
 
     return True
@@ -269,19 +193,15 @@ async def get_knowledge_by_id(id: str, user=Depends(get_verified_user)):
     knowledge = Knowledges.get_knowledge_by_id(id=id)
 
     if knowledge:
         if (
             user.role == "admin"
             or knowledge.user_id == user.id
             or has_access(user.id, "read", knowledge.access_control)
         ):
-            file_ids = knowledge.data.get("file_ids", []) if knowledge.data else []
-            files = Files.get_file_metadatas_by_ids(file_ids)
-
             return KnowledgeFilesResponse(
                 **knowledge.model_dump(),
-                files=files,
+                files=Knowledges.get_file_metadatas_by_id(knowledge.id),
             )
         else:
             raise HTTPException(
@@ -333,12 +253,9 @@ async def update_knowledge_by_id(
     knowledge = Knowledges.update_knowledge_by_id(id=id, form_data=form_data)
 
     if knowledge:
-        file_ids = knowledge.data.get("file_ids", []) if knowledge.data else []
-        files = Files.get_file_metadatas_by_ids(file_ids)
-
         return KnowledgeFilesResponse(
             **knowledge.model_dump(),
-            files=files,
+            files=Knowledges.get_file_metadatas_by_id(knowledge.id),
         )
     else:
         raise HTTPException(
@@ -364,7 +281,6 @@ def add_file_to_knowledge_by_id(
     user=Depends(get_verified_user),
 ):
     knowledge = Knowledges.get_knowledge_by_id(id=id)
-
     if not knowledge:
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,
@@ -393,6 +309,11 @@ def add_file_to_knowledge_by_id(
             detail=ERROR_MESSAGES.FILE_NOT_PROCESSED,
         )
 
+    # Add file to knowledge base
+    Knowledges.add_file_to_knowledge_by_id(
+        knowledge_id=id, file_id=form_data.file_id, user_id=user.id
+    )
+
     # Add content to the vector database
     try:
         process_file(
@@ -408,31 +329,9 @@ def add_file_to_knowledge_by_id(
         )
 
     if knowledge:
-        data = knowledge.data or {}
-        file_ids = data.get("file_ids", [])
-
-        if form_data.file_id not in file_ids:
-            file_ids.append(form_data.file_id)
-            data["file_ids"] = file_ids
-
-            knowledge = Knowledges.update_knowledge_data_by_id(id=id, data=data)
-
-            if knowledge:
-                files = Files.get_file_metadatas_by_ids(file_ids)
-
-                return KnowledgeFilesResponse(
-                    **knowledge.model_dump(),
-                    files=files,
-                )
-            else:
-                raise HTTPException(
-                    status_code=status.HTTP_400_BAD_REQUEST,
-                    detail=ERROR_MESSAGES.DEFAULT("knowledge"),
-                )
-        else:
-            raise HTTPException(
-                status_code=status.HTTP_400_BAD_REQUEST,
-                detail=ERROR_MESSAGES.DEFAULT("file_id"),
-            )
+        return KnowledgeFilesResponse(
+            **knowledge.model_dump(),
+            files=Knowledges.get_file_metadatas_by_id(knowledge.id),
+        )
     else:
         raise HTTPException(
@@ -492,14 +391,9 @@ def update_file_from_knowledge_by_id(
     )
 
     if knowledge:
-        data = knowledge.data or {}
-        file_ids = data.get("file_ids", [])
-
-        files = Files.get_file_metadatas_by_ids(file_ids)
-
         return KnowledgeFilesResponse(
             **knowledge.model_dump(),
-            files=files,
+            files=Knowledges.get_file_metadatas_by_id(knowledge.id),
         )
     else:
         raise HTTPException(
@@ -544,11 +438,19 @@ def remove_file_from_knowledge_by_id(
             detail=ERROR_MESSAGES.NOT_FOUND,
         )
 
+    Knowledges.remove_file_from_knowledge_by_id(
+        knowledge_id=id, file_id=form_data.file_id
+    )
+
     # Remove content from the vector database
     try:
         VECTOR_DB_CLIENT.delete(
             collection_name=knowledge.id, filter={"file_id": form_data.file_id}
-        )
+        )  # Remove by file_id first
+        VECTOR_DB_CLIENT.delete(
+            collection_name=knowledge.id, filter={"hash": file.hash}
+        )  # Remove by hash as well in case of duplicates
     except Exception as e:
         log.debug("This was most likely caused by bypassing embedding processing")
         log.debug(e)
@@ -569,31 +471,9 @@ def remove_file_from_knowledge_by_id(
         Files.delete_file_by_id(form_data.file_id)
 
     if knowledge:
-        data = knowledge.data or {}
-        file_ids = data.get("file_ids", [])
-
-        if form_data.file_id in file_ids:
-            file_ids.remove(form_data.file_id)
-            data["file_ids"] = file_ids
-
-            knowledge = Knowledges.update_knowledge_data_by_id(id=id, data=data)
-
-            if knowledge:
-                files = Files.get_file_metadatas_by_ids(file_ids)
-
-                return KnowledgeFilesResponse(
-                    **knowledge.model_dump(),
-                    files=files,
-                )
-            else:
-                raise HTTPException(
-                    status_code=status.HTTP_400_BAD_REQUEST,
-                    detail=ERROR_MESSAGES.DEFAULT("knowledge"),
-                )
-        else:
-            raise HTTPException(
-                status_code=status.HTTP_400_BAD_REQUEST,
-                detail=ERROR_MESSAGES.DEFAULT("file_id"),
-            )
+        return KnowledgeFilesResponse(
+            **knowledge.model_dump(),
+            files=Knowledges.get_file_metadatas_by_id(knowledge.id),
+        )
     else:
         raise HTTPException(
@@ -695,8 +575,7 @@ async def reset_knowledge_by_id(id: str, user=Depends(get_verified_user)):
         log.debug(e)
         pass
 
-    knowledge = Knowledges.update_knowledge_data_by_id(id=id, data={"file_ids": []})
-
+    knowledge = Knowledges.reset_knowledge_by_id(id=id)
     return knowledge
@@ -706,7 +585,7 @@ async def reset_knowledge_by_id(id: str, user=Depends(get_verified_user)):
 @router.post("/{id}/files/batch/add", response_model=Optional[KnowledgeFilesResponse])
-def add_files_to_knowledge_batch(
+async def add_files_to_knowledge_batch(
     request: Request,
     id: str,
     form_data: list[KnowledgeFileIdForm],
@@ -746,7 +625,7 @@ def add_files_to_knowledge_batch(
     # Process files
     try:
-        result = process_files_batch(
+        result = await process_files_batch(
             request=request,
             form_data=BatchProcessFilesForm(files=files, collection_name=id),
             user=user,
@@ -757,25 +636,19 @@
         )
         raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
 
-    # Add successful files to knowledge base
-    data = knowledge.data or {}
-    existing_file_ids = data.get("file_ids", [])
-
     # Only add files that were successfully processed
     successful_file_ids = [r.file_id for r in result.results if r.status == "completed"]
     for file_id in successful_file_ids:
-        if file_id not in existing_file_ids:
-            existing_file_ids.append(file_id)
-
-    data["file_ids"] = existing_file_ids
-    knowledge = Knowledges.update_knowledge_data_by_id(id=id, data=data)
+        Knowledges.add_file_to_knowledge_by_id(
+            knowledge_id=id, file_id=file_id, user_id=user.id
+        )
 
     # If there were any errors, include them in the response
     if result.errors:
         error_details = [f"{err.file_id}: {err.error}" for err in result.errors]
         return KnowledgeFilesResponse(
             **knowledge.model_dump(),
-            files=Files.get_file_metadatas_by_ids(existing_file_ids),
+            files=Knowledges.get_file_metadatas_by_id(knowledge.id),
             warnings={
                 "message": "Some files failed to process",
                 "errors": error_details,
@@ -784,5 +657,5 @@
     return KnowledgeFilesResponse(
         **knowledge.model_dump(),
-        files=Files.get_file_metadatas_by_ids(existing_file_ids),
+        files=Knowledges.get_file_metadatas_by_id(knowledge.id),
     )


@@ -5,6 +5,7 @@ import json
 import asyncio
 import logging
 
+from open_webui.models.groups import Groups
 from open_webui.models.models import (
     ModelForm,
     ModelModel,
@@ -78,6 +79,10 @@ async def get_models(
         filter["direction"] = direction
 
     if not user.role == "admin" or not BYPASS_ADMIN_ACCESS_CONTROL:
+        groups = Groups.get_groups_by_member_id(user.id)
+        if groups:
+            filter["group_ids"] = [group.id for group in groups]
+
         filter["user_id"] = user.id
 
     return Models.search_models(user.id, filter=filter, skip=skip, limit=limit)


@@ -879,6 +879,7 @@ async def delete_model(
     url = request.app.state.config.OLLAMA_BASE_URLS[url_idx]
     key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS)
 
+    r = None
     try:
         headers = {
             "Content-Type": "application/json",
@@ -892,7 +893,7 @@ async def delete_model(
             method="DELETE",
             url=f"{url}/api/delete",
             headers=headers,
-            data=form_data.model_dump_json(exclude_none=True).encode(),
+            json=form_data,
         )
         r.raise_for_status()
@@ -949,10 +950,7 @@ async def show_model_info(
             headers = include_user_info_headers(headers, user)
 
         r = requests.request(
-            method="POST",
-            url=f"{url}/api/show",
-            headers=headers,
-            data=form_data.model_dump_json(exclude_none=True).encode(),
+            method="POST", url=f"{url}/api/show", headers=headers, json=form_data
         )
         r.raise_for_status()
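The removed code hand-encoded the body with `model_dump_json(exclude_none=True)`; the `json=` keyword of `requests` serializes the payload with the stdlib `json` module and sets the `Content-Type: application/json` header itself (assuming the payload passed is JSON-serializable, e.g. a plain dict). One behavioral difference worth noting: `json=` keeps `None` values as `null`, whereas `exclude_none` dropped them. A network-free sketch of the two encodings:

```python
import json

payload = {"model": "llama3", "insecure": None}

# exclude_none-style encoding (what the removed code produced):
body_old = json.dumps({k: v for k, v in payload.items() if v is not None})

# json=-style encoding (what requests sends for json=payload):
body_new = json.dumps(payload)

print(body_old)  # {"model": "llama3"}
print(body_new)  # {"model": "llama3", "insecure": null}
```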


@@ -123,7 +123,7 @@ log.setLevel(SRC_LOG_LEVELS["RAG"])
 def get_ef(
     engine: str,
     embedding_model: str,
-    auto_update: bool = False,
+    auto_update: bool = RAG_EMBEDDING_MODEL_AUTO_UPDATE,
 ):
     ef = None
     if embedding_model and engine == "":
@@ -148,7 +148,7 @@ def get_rf(
     reranking_model: Optional[str] = None,
     external_reranker_url: str = "",
     external_reranker_api_key: str = "",
-    auto_update: bool = False,
+    auto_update: bool = RAG_RERANKING_MODEL_AUTO_UPDATE,
 ):
     rf = None
     if reranking_model:
@@ -241,13 +241,14 @@ class SearchForm(BaseModel):
 async def get_status(request: Request):
     return {
         "status": True,
-        "chunk_size": request.app.state.config.CHUNK_SIZE,
-        "chunk_overlap": request.app.state.config.CHUNK_OVERLAP,
-        "template": request.app.state.config.RAG_TEMPLATE,
-        "embedding_engine": request.app.state.config.RAG_EMBEDDING_ENGINE,
-        "embedding_model": request.app.state.config.RAG_EMBEDDING_MODEL,
-        "reranking_model": request.app.state.config.RAG_RERANKING_MODEL,
-        "embedding_batch_size": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
+        "CHUNK_SIZE": request.app.state.config.CHUNK_SIZE,
+        "CHUNK_OVERLAP": request.app.state.config.CHUNK_OVERLAP,
+        "RAG_TEMPLATE": request.app.state.config.RAG_TEMPLATE,
+        "RAG_EMBEDDING_ENGINE": request.app.state.config.RAG_EMBEDDING_ENGINE,
+        "RAG_EMBEDDING_MODEL": request.app.state.config.RAG_EMBEDDING_MODEL,
+        "RAG_RERANKING_MODEL": request.app.state.config.RAG_RERANKING_MODEL,
+        "RAG_EMBEDDING_BATCH_SIZE": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
+        "ENABLE_ASYNC_EMBEDDING": request.app.state.config.ENABLE_ASYNC_EMBEDDING,
     }
@@ -255,9 +256,10 @@ async def get_status(request: Request):
 async def get_embedding_config(request: Request, user=Depends(get_admin_user)):
     return {
         "status": True,
-        "embedding_engine": request.app.state.config.RAG_EMBEDDING_ENGINE,
-        "embedding_model": request.app.state.config.RAG_EMBEDDING_MODEL,
-        "embedding_batch_size": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
+        "RAG_EMBEDDING_ENGINE": request.app.state.config.RAG_EMBEDDING_ENGINE,
+        "RAG_EMBEDDING_MODEL": request.app.state.config.RAG_EMBEDDING_MODEL,
+        "RAG_EMBEDDING_BATCH_SIZE": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
+        "ENABLE_ASYNC_EMBEDDING": request.app.state.config.ENABLE_ASYNC_EMBEDDING,
         "openai_config": {
             "url": request.app.state.config.RAG_OPENAI_API_BASE_URL,
             "key": request.app.state.config.RAG_OPENAI_API_KEY,
@@ -294,18 +296,13 @@ class EmbeddingModelUpdateForm(BaseModel):
     openai_config: Optional[OpenAIConfigForm] = None
     ollama_config: Optional[OllamaConfigForm] = None
     azure_openai_config: Optional[AzureOpenAIConfigForm] = None
-    embedding_engine: str
-    embedding_model: str
-    embedding_batch_size: Optional[int] = 1
+    RAG_EMBEDDING_ENGINE: str
+    RAG_EMBEDDING_MODEL: str
+    RAG_EMBEDDING_BATCH_SIZE: Optional[int] = 1
+    ENABLE_ASYNC_EMBEDDING: Optional[bool] = True


-@router.post("/embedding/update")
-async def update_embedding_config(
-    request: Request, form_data: EmbeddingModelUpdateForm, user=Depends(get_admin_user)
-):
-    log.info(
-        f"Updating embedding model: {request.app.state.config.RAG_EMBEDDING_MODEL} to {form_data.embedding_model}"
-    )
+def unload_embedding_model(request: Request):
     if request.app.state.config.RAG_EMBEDDING_ENGINE == "":
         # unloads current internal embedding model and clears VRAM cache
         request.app.state.ef = None
@@ -318,9 +315,25 @@ async def update_embedding_config(
         if torch.cuda.is_available():
             torch.cuda.empty_cache()


+@router.post("/embedding/update")
+async def update_embedding_config(
+    request: Request, form_data: EmbeddingModelUpdateForm, user=Depends(get_admin_user)
+):
+    log.info(
+        f"Updating embedding model: {request.app.state.config.RAG_EMBEDDING_MODEL} to {form_data.RAG_EMBEDDING_MODEL}"
+    )
+    unload_embedding_model(request)
+
     try:
-        request.app.state.config.RAG_EMBEDDING_ENGINE = form_data.embedding_engine
-        request.app.state.config.RAG_EMBEDDING_MODEL = form_data.embedding_model
+        request.app.state.config.RAG_EMBEDDING_ENGINE = form_data.RAG_EMBEDDING_ENGINE
+        request.app.state.config.RAG_EMBEDDING_MODEL = form_data.RAG_EMBEDDING_MODEL
+        request.app.state.config.RAG_EMBEDDING_BATCH_SIZE = (
+            form_data.RAG_EMBEDDING_BATCH_SIZE
+        )
+        request.app.state.config.ENABLE_ASYNC_EMBEDDING = (
+            form_data.ENABLE_ASYNC_EMBEDDING
+        )

         if request.app.state.config.RAG_EMBEDDING_ENGINE in [
             "ollama",
@@ -354,10 +367,6 @@ async def update_embedding_config(
                     form_data.azure_openai_config.version
                 )

-        request.app.state.config.RAG_EMBEDDING_BATCH_SIZE = (
-            form_data.embedding_batch_size
-        )
-
         request.app.state.ef = get_ef(
             request.app.state.config.RAG_EMBEDDING_ENGINE,
             request.app.state.config.RAG_EMBEDDING_MODEL,
@@ -391,13 +400,15 @@ async def update_embedding_config(
                 if request.app.state.config.RAG_EMBEDDING_ENGINE == "azure_openai"
                 else None
             ),
+            enable_async=request.app.state.config.ENABLE_ASYNC_EMBEDDING,
         )

         return {
             "status": True,
-            "embedding_engine": request.app.state.config.RAG_EMBEDDING_ENGINE,
-            "embedding_model": request.app.state.config.RAG_EMBEDDING_MODEL,
-            "embedding_batch_size": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
+            "RAG_EMBEDDING_ENGINE": request.app.state.config.RAG_EMBEDDING_ENGINE,
+            "RAG_EMBEDDING_MODEL": request.app.state.config.RAG_EMBEDDING_MODEL,
+            "RAG_EMBEDDING_BATCH_SIZE": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
+            "ENABLE_ASYNC_EMBEDDING": request.app.state.config.ENABLE_ASYNC_EMBEDDING,
             "openai_config": {
                 "url": request.app.state.config.RAG_OPENAI_API_BASE_URL,
                 "key": request.app.state.config.RAG_OPENAI_API_KEY,
@@ -453,20 +464,11 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
         "EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
         "TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL,
         "DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL,
+        "DOCLING_API_KEY": request.app.state.config.DOCLING_API_KEY,
         "DOCLING_PARAMS": request.app.state.config.DOCLING_PARAMS,
-        "DOCLING_DO_OCR": request.app.state.config.DOCLING_DO_OCR,
-        "DOCLING_FORCE_OCR": request.app.state.config.DOCLING_FORCE_OCR,
-        "DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE,
-        "DOCLING_OCR_LANG": request.app.state.config.DOCLING_OCR_LANG,
-        "DOCLING_PDF_BACKEND": request.app.state.config.DOCLING_PDF_BACKEND,
-        "DOCLING_TABLE_MODE": request.app.state.config.DOCLING_TABLE_MODE,
-        "DOCLING_PIPELINE": request.app.state.config.DOCLING_PIPELINE,
-        "DOCLING_DO_PICTURE_DESCRIPTION": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
-        "DOCLING_PICTURE_DESCRIPTION_MODE": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
-        "DOCLING_PICTURE_DESCRIPTION_LOCAL": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
-        "DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
         "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
         "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
+        "DOCUMENT_INTELLIGENCE_MODEL": request.app.state.config.DOCUMENT_INTELLIGENCE_MODEL,
         "MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
         "MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
         # MinerU settings
@@ -642,20 +644,11 @@ class ConfigForm(BaseModel):
     TIKA_SERVER_URL: Optional[str] = None
     DOCLING_SERVER_URL: Optional[str] = None
+    DOCLING_API_KEY: Optional[str] = None
     DOCLING_PARAMS: Optional[dict] = None
-    DOCLING_DO_OCR: Optional[bool] = None
-    DOCLING_FORCE_OCR: Optional[bool] = None
-    DOCLING_OCR_ENGINE: Optional[str] = None
-    DOCLING_OCR_LANG: Optional[str] = None
-    DOCLING_PDF_BACKEND: Optional[str] = None
-    DOCLING_TABLE_MODE: Optional[str] = None
-    DOCLING_PIPELINE: Optional[str] = None
-    DOCLING_DO_PICTURE_DESCRIPTION: Optional[bool] = None
-    DOCLING_PICTURE_DESCRIPTION_MODE: Optional[str] = None
-    DOCLING_PICTURE_DESCRIPTION_LOCAL: Optional[dict] = None
-    DOCLING_PICTURE_DESCRIPTION_API: Optional[dict] = None
     DOCUMENT_INTELLIGENCE_ENDPOINT: Optional[str] = None
     DOCUMENT_INTELLIGENCE_KEY: Optional[str] = None
+    DOCUMENT_INTELLIGENCE_MODEL: Optional[str] = None
     MISTRAL_OCR_API_BASE_URL: Optional[str] = None
     MISTRAL_OCR_API_KEY: Optional[str] = None
@@ -831,68 +824,16 @@ async def update_rag_config(
         if form_data.DOCLING_SERVER_URL is not None
         else request.app.state.config.DOCLING_SERVER_URL
     )
+    request.app.state.config.DOCLING_API_KEY = (
+        form_data.DOCLING_API_KEY
+        if form_data.DOCLING_API_KEY is not None
+        else request.app.state.config.DOCLING_API_KEY
+    )
     request.app.state.config.DOCLING_PARAMS = (
         form_data.DOCLING_PARAMS
         if form_data.DOCLING_PARAMS is not None
         else request.app.state.config.DOCLING_PARAMS
     )
-    request.app.state.config.DOCLING_DO_OCR = (
-        form_data.DOCLING_DO_OCR
-        if form_data.DOCLING_DO_OCR is not None
-        else request.app.state.config.DOCLING_DO_OCR
-    )
-    request.app.state.config.DOCLING_FORCE_OCR = (
-        form_data.DOCLING_FORCE_OCR
-        if form_data.DOCLING_FORCE_OCR is not None
-        else request.app.state.config.DOCLING_FORCE_OCR
-    )
-    request.app.state.config.DOCLING_OCR_ENGINE = (
-        form_data.DOCLING_OCR_ENGINE
-        if form_data.DOCLING_OCR_ENGINE is not None
-        else request.app.state.config.DOCLING_OCR_ENGINE
-    )
-    request.app.state.config.DOCLING_OCR_LANG = (
-        form_data.DOCLING_OCR_LANG
-        if form_data.DOCLING_OCR_LANG is not None
-        else request.app.state.config.DOCLING_OCR_LANG
-    )
-    request.app.state.config.DOCLING_PDF_BACKEND = (
-        form_data.DOCLING_PDF_BACKEND
-        if form_data.DOCLING_PDF_BACKEND is not None
-        else request.app.state.config.DOCLING_PDF_BACKEND
-    )
-    request.app.state.config.DOCLING_TABLE_MODE = (
-        form_data.DOCLING_TABLE_MODE
-        if form_data.DOCLING_TABLE_MODE is not None
-        else request.app.state.config.DOCLING_TABLE_MODE
-    )
-    request.app.state.config.DOCLING_PIPELINE = (
-        form_data.DOCLING_PIPELINE
-        if form_data.DOCLING_PIPELINE is not None
-        else request.app.state.config.DOCLING_PIPELINE
-    )
-    request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION = (
-        form_data.DOCLING_DO_PICTURE_DESCRIPTION
-        if form_data.DOCLING_DO_PICTURE_DESCRIPTION is not None
-        else request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION
-    )
-    request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE = (
-        form_data.DOCLING_PICTURE_DESCRIPTION_MODE
-        if form_data.DOCLING_PICTURE_DESCRIPTION_MODE is not None
-        else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE
-    )
-    request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = (
-        form_data.DOCLING_PICTURE_DESCRIPTION_LOCAL
-        if form_data.DOCLING_PICTURE_DESCRIPTION_LOCAL is not None
-        else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL
-    )
-    request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API = (
-        form_data.DOCLING_PICTURE_DESCRIPTION_API
-        if form_data.DOCLING_PICTURE_DESCRIPTION_API is not None
-        else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API
-    )
     request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = (
         form_data.DOCUMENT_INTELLIGENCE_ENDPOINT
         if form_data.DOCUMENT_INTELLIGENCE_ENDPOINT is not None
@@ -903,6 +844,11 @@ async def update_rag_config(
         if form_data.DOCUMENT_INTELLIGENCE_KEY is not None
         else request.app.state.config.DOCUMENT_INTELLIGENCE_KEY
     )
+    request.app.state.config.DOCUMENT_INTELLIGENCE_MODEL = (
+        form_data.DOCUMENT_INTELLIGENCE_MODEL
+        if form_data.DOCUMENT_INTELLIGENCE_MODEL is not None
+        else request.app.state.config.DOCUMENT_INTELLIGENCE_MODEL
+    )

     request.app.state.config.MISTRAL_OCR_API_BASE_URL = (
         form_data.MISTRAL_OCR_API_BASE_URL
@@ -988,7 +934,6 @@ async def update_rag_config(
             request.app.state.config.RAG_RERANKING_MODEL,
             request.app.state.config.RAG_EXTERNAL_RERANKER_URL,
             request.app.state.config.RAG_EXTERNAL_RERANKER_API_KEY,
-            True,
         )

         request.app.state.RERANKING_FUNCTION = get_reranking_function(
@@ -1189,20 +1134,11 @@ async def update_rag_config(
         "EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
         "TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL,
         "DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL,
+        "DOCLING_API_KEY": request.app.state.config.DOCLING_API_KEY,
         "DOCLING_PARAMS": request.app.state.config.DOCLING_PARAMS,
-        "DOCLING_DO_OCR": request.app.state.config.DOCLING_DO_OCR,
-        "DOCLING_FORCE_OCR": request.app.state.config.DOCLING_FORCE_OCR,
-        "DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE,
-        "DOCLING_OCR_LANG": request.app.state.config.DOCLING_OCR_LANG,
-        "DOCLING_PDF_BACKEND": request.app.state.config.DOCLING_PDF_BACKEND,
-        "DOCLING_TABLE_MODE": request.app.state.config.DOCLING_TABLE_MODE,
-        "DOCLING_PIPELINE": request.app.state.config.DOCLING_PIPELINE,
-        "DOCLING_DO_PICTURE_DESCRIPTION": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
-        "DOCLING_PICTURE_DESCRIPTION_MODE": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
-        "DOCLING_PICTURE_DESCRIPTION_LOCAL": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
-        "DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
         "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
         "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
+        "DOCUMENT_INTELLIGENCE_MODEL": request.app.state.config.DOCUMENT_INTELLIGENCE_MODEL,
         "MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
         "MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
         # MinerU settings
@@ -1320,7 +1256,7 @@ def save_docs_to_vector_db(
         return ", ".join(docs_info)

-    log.info(
+    log.debug(
         f"save_docs_to_vector_db: document {_get_docs_info(docs)} {collection_name}"
     )
@@ -1512,6 +1448,9 @@ def process_file(
     form_data: ProcessFileForm,
     user=Depends(get_verified_user),
 ):
+    """
+    Process a file and save its content to the vector database.
+    """
     if user.role == "admin":
         file = Files.get_file_by_id(form_data.file_id)
     else:
@@ -1607,23 +1546,12 @@ def process_file(
             EXTERNAL_DOCUMENT_LOADER_API_KEY=request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
             TIKA_SERVER_URL=request.app.state.config.TIKA_SERVER_URL,
             DOCLING_SERVER_URL=request.app.state.config.DOCLING_SERVER_URL,
-            DOCLING_PARAMS={
-                "do_ocr": request.app.state.config.DOCLING_DO_OCR,
-                "force_ocr": request.app.state.config.DOCLING_FORCE_OCR,
-                "ocr_engine": request.app.state.config.DOCLING_OCR_ENGINE,
-                "ocr_lang": request.app.state.config.DOCLING_OCR_LANG,
-                "pdf_backend": request.app.state.config.DOCLING_PDF_BACKEND,
-                "table_mode": request.app.state.config.DOCLING_TABLE_MODE,
-                "pipeline": request.app.state.config.DOCLING_PIPELINE,
-                "do_picture_description": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
-                "picture_description_mode": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
-                "picture_description_local": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
-                "picture_description_api": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
-                **request.app.state.config.DOCLING_PARAMS,
-            },
+            DOCLING_API_KEY=request.app.state.config.DOCLING_API_KEY,
+            DOCLING_PARAMS=request.app.state.config.DOCLING_PARAMS,
             PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES,
             DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
             DOCUMENT_INTELLIGENCE_KEY=request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
+            DOCUMENT_INTELLIGENCE_MODEL=request.app.state.config.DOCUMENT_INTELLIGENCE_MODEL,
             MISTRAL_OCR_API_BASE_URL=request.app.state.config.MISTRAL_OCR_API_BASE_URL,
             MISTRAL_OCR_API_KEY=request.app.state.config.MISTRAL_OCR_API_KEY,
             MINERU_API_MODE=request.app.state.config.MINERU_API_MODE,
@@ -1750,7 +1678,7 @@ class ProcessTextForm(BaseModel):


 @router.post("/process/text")
-def process_text(
+async def process_text(
     request: Request,
     form_data: ProcessTextForm,
     user=Depends(get_verified_user),
@@ -1768,7 +1696,9 @@ def process_text(
     text_content = form_data.content
     log.debug(f"text_content: {text_content}")

-    result = save_docs_to_vector_db(request, docs, collection_name, user=user)
+    result = await run_in_threadpool(
+        save_docs_to_vector_db, request, docs, collection_name, user=user
+    )
     if result:
         return {
             "status": True,
@@ -1784,7 +1714,7 @@ def process_text(

 @router.post("/process/youtube")
 @router.post("/process/web")
-def process_web(
+async def process_web(
     request: Request, form_data: ProcessUrlForm, user=Depends(get_verified_user)
 ):
     try:
@@ -1792,11 +1722,14 @@ def process_web(
         if not collection_name:
             collection_name = calculate_sha256_string(form_data.url)[:63]

-        content, docs = get_content_from_url(request, form_data.url)
+        content, docs = await run_in_threadpool(
+            get_content_from_url, request, form_data.url
+        )
         log.debug(f"text_content: {content}")

         if not request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL:
-            save_docs_to_vector_db(
+            await run_in_threadpool(
+                save_docs_to_vector_db,
                 request,
                 docs,
                 collection_name,
@@ -2488,7 +2421,7 @@ class BatchProcessFilesResponse(BaseModel):

 @router.post("/process/files/batch")
-def process_files_batch(
+async def process_files_batch(
     request: Request,
     form_data: BatchProcessFilesForm,
     user=Depends(get_verified_user),
@@ -2543,10 +2476,11 @@ def process_files_batch(
     # Save all documents in one batch
     if all_docs:
         try:
-            save_docs_to_vector_db(
-                request=request,
-                docs=all_docs,
-                collection_name=collection_name,
+            await run_in_threadpool(
+                save_docs_to_vector_db,
+                request,
+                all_docs,
+                collection_name,
                 add=True,
                 user=user,
             )

View file
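The file above converts the `process/*` endpoints to `async def` and pushes the blocking calls (`save_docs_to_vector_db`, `get_content_from_url`) through `run_in_threadpool` so they no longer stall the event loop. A minimal sketch of that pattern, using the stdlib's `asyncio.to_thread` in place of Starlette's `run_in_threadpool` (which FastAPI apps would use); `save_docs_blocking` and `process_text_example` are illustrative stand-ins, not the actual Open WebUI functions:

```python
import asyncio
import time


def save_docs_blocking(docs: list[str]) -> int:
    # Stand-in for a heavy synchronous function like save_docs_to_vector_db.
    time.sleep(0.01)  # simulate blocking embedding / DB work
    return len(docs)


async def process_text_example(docs: list[str]) -> int:
    # Offload the sync call to a worker thread so the event loop can keep
    # serving other requests; starlette.concurrency.run_in_threadpool is
    # used the same way inside an async FastAPI handler.
    return await asyncio.to_thread(save_docs_blocking, docs)


result = asyncio.run(process_text_example(["a", "b", "c"]))
print(result)  # 3
```

The key point is that the function object is passed uncalled, with its arguments alongside, exactly as in `await run_in_threadpool(save_docs_to_vector_db, request, docs, collection_name, user=user)`.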

@@ -719,7 +719,7 @@ async def get_groups(
 ):
     """List SCIM Groups"""
     # Get all groups
-    groups_list = Groups.get_groups()
+    groups_list = Groups.get_all_groups()

     # Apply pagination
     total = len(groups_list)

View file

@@ -6,7 +6,7 @@ import io

 from fastapi import APIRouter, Depends, HTTPException, Request, status
 from fastapi.responses import Response, StreamingResponse, FileResponse
-from pydantic import BaseModel
+from pydantic import BaseModel, ConfigDict

 from open_webui.models.auths import Auths
@@ -17,21 +17,16 @@ from open_webui.models.chats import Chats
 from open_webui.models.users import (
     UserModel,
     UserGroupIdsModel,
-    UserListResponse,
+    UserGroupIdsListResponse,
     UserInfoListResponse,
-    UserIdNameListResponse,
     UserRoleUpdateForm,
+    UserStatus,
     Users,
     UserSettings,
     UserUpdateForm,
 )
-from open_webui.socket.main import (
-    get_active_status_by_user_id,
-    get_active_user_ids,
-    get_user_active_status,
-)

 from open_webui.constants import ERROR_MESSAGES
 from open_webui.env import SRC_LOG_LEVELS, STATIC_DIR
@@ -51,23 +46,6 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])

 router = APIRouter()

-############################
-# GetActiveUsers
-############################
-
-
-@router.get("/active")
-async def get_active_users(
-    user=Depends(get_verified_user),
-):
-    """
-    Get a list of active users.
-    """
-    return {
-        "user_ids": get_active_user_ids(),
-    }
-
-
 ############################
 # GetUsers
 ############################
@@ -76,7 +54,7 @@ async def get_active_users(
 PAGE_ITEM_COUNT = 30


-@router.get("/", response_model=UserListResponse)
+@router.get("/", response_model=UserGroupIdsListResponse)
 async def get_users(
     query: Optional[str] = None,
     order_by: Optional[str] = None,
@@ -125,20 +103,31 @@ async def get_all_users(
     return Users.get_users()


-@router.get("/search", response_model=UserIdNameListResponse)
+@router.get("/search", response_model=UserInfoListResponse)
 async def search_users(
     query: Optional[str] = None,
+    order_by: Optional[str] = None,
+    direction: Optional[str] = None,
+    page: Optional[int] = 1,
     user=Depends(get_verified_user),
 ):
     limit = PAGE_ITEM_COUNT
-    page = 1  # Always return the first page for search
+    page = max(1, page)
     skip = (page - 1) * limit

     filter = {}
     if query:
         filter["query"] = query
+    if order_by:
+        filter["order_by"] = order_by
+    if direction:
+        filter["direction"] = direction

     return Users.get_users(filter=filter, skip=skip, limit=limit)
@@ -219,11 +208,14 @@ class ChatPermissions(BaseModel):
 class FeaturesPermissions(BaseModel):
     api_keys: bool = False
+    notes: bool = True
+    channels: bool = True
+    folders: bool = True
     direct_tool_servers: bool = False
     web_search: bool = True
     image_generation: bool = True
     code_interpreter: bool = True
-    notes: bool = True


 class UserPermissions(BaseModel):
@@ -308,6 +300,43 @@ async def update_user_settings_by_session_user(
     )


+############################
+# GetUserStatusBySessionUser
+############################
+
+
+@router.get("/user/status")
+async def get_user_status_by_session_user(user=Depends(get_verified_user)):
+    user = Users.get_user_by_id(user.id)
+    if user:
+        return user
+    else:
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail=ERROR_MESSAGES.USER_NOT_FOUND,
+        )
+
+
+############################
+# UpdateUserStatusBySessionUser
+############################
+
+
+@router.post("/user/status/update")
+async def update_user_status_by_session_user(
+    form_data: UserStatus, user=Depends(get_verified_user)
+):
+    user = Users.get_user_by_id(user.id)
+    if user:
+        user = Users.update_user_status_by_id(user.id, form_data)
+        return user
+    else:
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail=ERROR_MESSAGES.USER_NOT_FOUND,
+        )
+
+
 ############################
 # GetUserInfoBySessionUser
 ############################
@@ -359,13 +388,15 @@ async def update_user_info_by_session_user(
 ############################


-class UserResponse(BaseModel):
+class UserActiveResponse(UserStatus):
     name: str
-    profile_image_url: str
-    active: Optional[bool] = None
+    profile_image_url: Optional[str] = None
+    is_active: bool
+
+    model_config = ConfigDict(extra="allow")


-@router.get("/{user_id}", response_model=UserResponse)
+@router.get("/{user_id}", response_model=UserActiveResponse)
 async def get_user_by_id(user_id: str, user=Depends(get_verified_user)):
     # Check if user_id is a shared chat
     # If it is, get the user_id from the chat
@@ -383,11 +414,10 @@ async def get_user_by_id(user_id: str, user=Depends(get_verified_user)):
         user = Users.get_user_by_id(user_id)

     if user:
-        return UserResponse(
+        return UserActiveResponse(
             **{
-                "name": user.name,
-                "profile_image_url": user.profile_image_url,
-                "active": get_active_status_by_user_id(user_id),
+                **user.model_dump(),
+                "is_active": Users.is_user_active(user_id),
             }
         )
     else:
@@ -454,7 +484,7 @@ async def get_user_profile_image_by_id(user_id: str, user=Depends(get_verified_u
 @router.get("/{user_id}/active", response_model=dict)
 async def get_user_active_status_by_id(user_id: str, user=Depends(get_verified_user)):
     return {
-        "active": get_user_active_status(user_id),
+        "active": Users.is_user_active(user_id),
     }

View file

@@ -118,14 +118,16 @@ if WEBSOCKET_MANAGER == "redis":
         redis_sentinels = get_sentinels_from_env(
             WEBSOCKET_SENTINEL_HOSTS, WEBSOCKET_SENTINEL_PORT
         )
-    SESSION_POOL = RedisDict(
-        f"{REDIS_KEY_PREFIX}:session_pool",
+
+    MODELS = RedisDict(
+        f"{REDIS_KEY_PREFIX}:models",
         redis_url=WEBSOCKET_REDIS_URL,
         redis_sentinels=redis_sentinels,
         redis_cluster=WEBSOCKET_REDIS_CLUSTER,
     )
-    USER_POOL = RedisDict(
-        f"{REDIS_KEY_PREFIX}:user_pool",
+
+    SESSION_POOL = RedisDict(
+        f"{REDIS_KEY_PREFIX}:session_pool",
         redis_url=WEBSOCKET_REDIS_URL,
         redis_sentinels=redis_sentinels,
         redis_cluster=WEBSOCKET_REDIS_CLUSTER,
@@ -148,8 +150,9 @@ if WEBSOCKET_MANAGER == "redis":
     renew_func = clean_up_lock.renew_lock
     release_func = clean_up_lock.release_lock
 else:
+    MODELS = {}
     SESSION_POOL = {}
-    USER_POOL = {}
     USAGE_POOL = {}

     aquire_func = release_func = renew_func = lambda: True
@@ -225,16 +228,6 @@ def get_models_in_use():
     return models_in_use


-def get_active_user_ids():
-    """Get the list of active user IDs."""
-    return list(USER_POOL.keys())
-
-
-def get_user_active_status(user_id):
-    """Check if a user is currently active."""
-    return user_id in USER_POOL
-
-
 def get_user_id_from_session_pool(sid):
     user = SESSION_POOL.get(sid)
     if user:
@@ -260,10 +253,36 @@ def get_user_ids_from_room(room):
     return active_user_ids


-def get_active_status_by_user_id(user_id):
-    if user_id in USER_POOL:
-        return True
-    return False
+async def emit_to_users(event: str, data: dict, user_ids: list[str]):
+    """
+    Send a message to specific users using their user:{id} rooms.
+
+    Args:
+        event (str): The event name to emit.
+        data (dict): The payload/data to send.
+        user_ids (list[str]): The target users' IDs.
+    """
+    try:
+        for user_id in user_ids:
+            await sio.emit(event, data, room=f"user:{user_id}")
+    except Exception as e:
+        log.debug(f"Failed to emit event {event} to users {user_ids}: {e}")
+
+
+async def enter_room_for_users(room: str, user_ids: list[str]):
+    """
+    Make all sessions of a user join a specific room.
+
+    Args:
+        room (str): The room to join.
+        user_ids (list[str]): The target user's IDs.
+    """
+    try:
+        for user_id in user_ids:
+            session_ids = get_session_ids_from_room(f"user:{user_id}")
+            for sid in session_ids:
+                await sio.enter_room(sid, room)
+    except Exception as e:
+        log.debug(f"Failed to make users {user_ids} join room {room}: {e}")


 @sio.on("usage")
@@ -293,11 +312,6 @@ async def connect(sid, environ, auth):
             SESSION_POOL[sid] = user.model_dump(
                 exclude=["date_of_birth", "bio", "gender"]
             )
-            if user.id in USER_POOL:
-                USER_POOL[user.id] = USER_POOL[user.id] + [sid]
-            else:
-                USER_POOL[user.id] = [sid]
-
             await sio.enter_room(sid, f"user:{user.id}")
@@ -316,21 +330,34 @@ async def user_join(sid, data):
     if not user:
         return

-    SESSION_POOL[sid] = user.model_dump(exclude=["date_of_birth", "bio", "gender"])
-    if user.id in USER_POOL:
-        USER_POOL[user.id] = USER_POOL[user.id] + [sid]
-    else:
-        USER_POOL[user.id] = [sid]
+    SESSION_POOL[sid] = user.model_dump(
+        exclude=[
+            "profile_image_url",
+            "profile_banner_image_url",
+            "date_of_birth",
+            "bio",
+            "gender",
+        ]
+    )

     await sio.enter_room(sid, f"user:{user.id}")

     # Join all the channels
     channels = Channels.get_channels_by_user_id(user.id)
     log.debug(f"{channels=}")
     for channel in channels:
         await sio.enter_room(sid, f"channel:{channel.id}")

     return {"id": user.id, "name": user.name}


+@sio.on("heartbeat")
+async def heartbeat(sid, data):
+    user = SESSION_POOL.get(sid)
+    if user:
+        Users.update_last_active_by_id(user["id"])
+
+
 @sio.on("join-channels")
 async def join_channel(sid, data):
     auth = data["auth"] if "auth" in data else None
@@ -398,6 +425,11 @@ async def channel_events(sid, data):
     event_data = data["data"]
     event_type = event_data["type"]

+    user = SESSION_POOL.get(sid)
+    if not user:
+        return
+
     if event_type == "typing":
         await sio.emit(
             "events:channel",
@ -405,10 +437,12 @@ async def channel_events(sid, data):
"channel_id": data["channel_id"], "channel_id": data["channel_id"],
"message_id": data.get("message_id", None), "message_id": data.get("message_id", None),
"data": event_data, "data": event_data,
"user": UserNameResponse(**SESSION_POOL[sid]).model_dump(), "user": UserNameResponse(**user).model_dump(),
}, },
room=room, room=room,
) )
elif event_type == "last_read_at":
Channels.update_member_last_read_at(data["channel_id"], user["id"])
 @sio.on("ydoc:document:join")
@@ -652,13 +686,6 @@ async def disconnect(sid):
     if sid in SESSION_POOL:
         user = SESSION_POOL[sid]
         del SESSION_POOL[sid]

-        user_id = user["id"]
-        USER_POOL[user_id] = [_sid for _sid in USER_POOL[user_id] if _sid != sid]
-
-        if len(USER_POOL[user_id]) == 0:
-            del USER_POOL[user_id]
-
         await YDOC_MANAGER.remove_user_from_all_documents(sid)
     else:
         pass


@@ -86,6 +86,15 @@ class RedisDict:
     def items(self):
         return [(k, json.loads(v)) for k, v in self.redis.hgetall(self.name).items()]

+    def set(self, mapping: dict):
+        pipe = self.redis.pipeline()
+        pipe.delete(self.name)
+        if mapping:
+            pipe.hset(
+                self.name, mapping={k: json.dumps(v) for k, v in mapping.items()}
+            )
+        pipe.execute()
+
     def get(self, key, default=None):
         try:
             return self[key]
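The new `set()` replaces the whole hash in one pipelined round trip (DELETE then HSET), so readers never observe a half-written mapping. A minimal in-memory analogue of that replace-then-populate semantics, assuming JSON-encoded values as in `RedisDict` (standalone sketch, not the actual `open_webui` class):

```python
import json


class InMemoryDict:
    """Sketch of RedisDict's storage model: values are kept JSON-encoded."""

    def __init__(self):
        self._store = {}

    def set(self, mapping: dict):
        # Mirror the pipeline: build the full replacement, then swap it in
        # with a single assignment (≈ atomic DELETE + HSET).
        self._store = {k: json.dumps(v) for k, v in mapping.items()}

    def __getitem__(self, key):
        return json.loads(self._store[key])

    def items(self):
        return [(k, json.loads(v)) for k, v in self._store.items()]


d = InMemoryDict()
d.set({"model-a": {"owned_by": "openai"}})
d.set({"model-b": {"owned_by": "ollama"}})  # full replace: model-a is gone
assert dict(d.items()) == {"model-b": {"owned_by": "ollama"}}
```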


@@ -105,6 +105,22 @@ def has_permission(
     return get_permission(default_permissions, permission_hierarchy)


+def get_permitted_group_and_user_ids(
+    type: str = "write", access_control: Optional[dict] = None
+) -> Union[Dict[str, List[str]], None]:
+    if access_control is None:
+        return None
+
+    permission_access = access_control.get(type, {})
+    permitted_group_ids = permission_access.get("group_ids", [])
+    permitted_user_ids = permission_access.get("user_ids", [])
+
+    return {
+        "group_ids": permitted_group_ids,
+        "user_ids": permitted_user_ids,
+    }
+
+
 def has_access(
     user_id: str,
     type: str = "write",
@@ -122,9 +138,12 @@ def has_access(
     user_groups = Groups.get_groups_by_member_id(user_id)
     user_group_ids = {group.id for group in user_groups}

-    permission_access = access_control.get(type, {})
-    permitted_group_ids = permission_access.get("group_ids", [])
-    permitted_user_ids = permission_access.get("user_ids", [])
+    permitted_ids = get_permitted_group_and_user_ids(type, access_control)
+    if permitted_ids is None:
+        return False
+
+    permitted_group_ids = permitted_ids.get("group_ids", [])
+    permitted_user_ids = permitted_ids.get("user_ids", [])

     return user_id in permitted_user_ids or any(
         group_id in permitted_group_ids for group_id in user_group_ids
@@ -136,18 +155,20 @@ def get_users_with_access(
     type: str = "write", access_control: Optional[dict] = None
 ) -> list[UserModel]:
     if access_control is None:
-        result = Users.get_users()
+        result = Users.get_users(filter={"roles": ["!pending"]})
         return result.get("users", [])

-    permission_access = access_control.get(type, {})
-    permitted_group_ids = permission_access.get("group_ids", [])
-    permitted_user_ids = permission_access.get("user_ids", [])
+    permitted_ids = get_permitted_group_and_user_ids(type, access_control)
+    if permitted_ids is None:
+        return []
+
+    permitted_group_ids = permitted_ids.get("group_ids", [])
+    permitted_user_ids = permitted_ids.get("user_ids", [])

     user_ids_with_access = set(permitted_user_ids)

-    for group_id in permitted_group_ids:
-        group_user_ids = Groups.get_group_user_ids_by_id(group_id)
-        if group_user_ids:
-            user_ids_with_access.update(group_user_ids)
+    group_user_ids_map = Groups.get_group_user_ids_by_ids(permitted_group_ids)
+    for user_ids in group_user_ids_map.values():
+        user_ids_with_access.update(user_ids)

     return Users.get_users_by_user_ids(list(user_ids_with_access))
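A standalone sketch of the check the refactored `has_access` performs: the shared helper extracts permitted group/user IDs for a given permission type, and access is granted on a direct user match or any group overlap. Names here are illustrative, and the group lookup is replaced by a plain set argument rather than the real `Groups` model:

```python
from typing import Optional


def get_permitted_ids(type: str, access_control: Optional[dict]):
    # None means "no ACL configured" and is handled by the caller
    if access_control is None:
        return None
    permission_access = access_control.get(type, {})
    return {
        "group_ids": permission_access.get("group_ids", []),
        "user_ids": permission_access.get("user_ids", []),
    }


def has_access(user_id, user_group_ids, type="write", access_control=None):
    permitted = get_permitted_ids(type, access_control)
    if permitted is None:
        # Mirrors the patched has_access: a missing ACL denies access here
        return False
    return user_id in permitted["user_ids"] or any(
        g in permitted["group_ids"] for g in user_group_ids
    )


acl = {"write": {"group_ids": ["editors"], "user_ids": ["u1"]}}
assert has_access("u1", set(), "write", acl) is True        # direct user grant
assert has_access("u2", {"editors"}, "write", acl) is True  # via group
assert has_access("u3", {"viewers"}, "write", acl) is False
```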


@@ -194,7 +194,7 @@ class AuditLoggingMiddleware:
         auth_header = request.headers.get("Authorization")
         try:
-            user = get_current_user(
+            user = await get_current_user(
                 request, None, None, get_http_authorization_cred(auth_header)
             )
             return user


@@ -235,7 +235,7 @@ async def invalidate_token(request, token):
         jti = decoded.get("jti")
         exp = decoded.get("exp")

-        if jti:
+        if jti and exp:
             ttl = exp - int(
                 datetime.now(UTC).timestamp()
             )  # Calculate time-to-live for the token
@@ -344,9 +344,7 @@ async def get_current_user(
             # Refresh the user's last active timestamp asynchronously
             # to prevent blocking the request
             if background_tasks:
-                background_tasks.add_task(
-                    Users.update_user_last_active_by_id, user.id
-                )
+                background_tasks.add_task(Users.update_last_active_by_id, user.id)
             return user
         else:
             raise HTTPException(
@@ -397,8 +395,7 @@ def get_current_user_by_api_key(request, api_key: str):
         current_span.set_attribute("client.user.role", user.role)
         current_span.set_attribute("client.auth.type", "api_key")

-    Users.update_user_last_active_by_id(user.id)
+    Users.update_last_active_by_id(user.id)
     return user


@@ -0,0 +1,24 @@
+import logging
+
+from open_webui.models.groups import Groups
+
+log = logging.getLogger(__name__)
+
+
+def apply_default_group_assignment(
+    default_group_id: str,
+    user_id: str,
+) -> None:
+    """
+    Apply default group assignment to a user if default_group_id is provided.
+
+    Args:
+        default_group_id: ID of the default group to add the user to
+        user_id: ID of the user to add to the default group
+    """
+    if default_group_id:
+        try:
+            Groups.add_users_to_group(default_group_id, [user_id])
+        except Exception as e:
+            log.error(
+                f"Failed to add user {user_id} to default group {default_group_id}: {e}"
+            )


@@ -24,6 +24,7 @@ from fastapi.responses import HTMLResponse
 from starlette.responses import Response, StreamingResponse, JSONResponse

+from open_webui.utils.misc import is_string_allowed
 from open_webui.models.oauth_sessions import OAuthSessions
 from open_webui.models.chats import Chats
 from open_webui.models.folders import Folders
@@ -31,7 +32,6 @@ from open_webui.models.users import Users
 from open_webui.socket.main import (
     get_event_call,
     get_event_emitter,
-    get_active_status_by_user_id,
 )
 from open_webui.routers.tasks import (
     generate_queries,
@@ -458,12 +458,6 @@ async def chat_completion_tools_handler(
                         }
                     )

-            print(
-                f"Tool {tool_function_name} result: {tool_result}",
-                tool_result_files,
-                tool_result_embeds,
-            )
-
             if tool_result:
                 tool = tools[tool_function_name]
                 tool_id = tool.get("tool_id", "")
@@ -491,12 +485,6 @@ async def chat_completion_tools_handler(
                         }
                     )

-            # Citation is not enabled for this tool
-            body["messages"] = add_or_update_user_message(
-                f"\nTool `{tool_name}` Output: {tool_result}",
-                body["messages"],
-            )
-
             if (
                 tools[tool_function_name]
                 .get("metadata", {})
@@ -772,9 +760,12 @@ async def chat_image_generation_handler(
     if not chat_id:
         return form_data

-    chat = Chats.get_chat_by_id_and_user_id(chat_id, user.id)
     __event_emitter__ = extra_params["__event_emitter__"]

+    if chat_id.startswith("local:"):
+        message_list = form_data.get("messages", [])
+    else:
+        chat = Chats.get_chat_by_id_and_user_id(chat_id, user.id)
+
     await __event_emitter__(
         {
             "type": "status",
@@ -785,6 +776,7 @@ async def chat_image_generation_handler(
         messages_map = chat.chat.get("history", {}).get("messages", {})
         message_id = chat.chat.get("history", {}).get("currentId")
         message_list = get_message_list(messages_map, message_id)

     user_message = get_last_user_message(message_list)
     prompt = user_message
@@ -844,7 +836,7 @@ async def chat_image_generation_handler(
                 }
             )

-            system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that an error occurred: {error_message}</context>"
+            system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that the following error occurred: {error_message}</context>"
         else:
             # Create image(s)
@@ -907,7 +899,7 @@ async def chat_image_generation_handler(
                 }
             )

-        system_message_content = "<context>The requested image has been created and is now being shown to the user. Let them know that it has been generated.</context>"
+        system_message_content = "<context>The requested image has been created by the system successfully and is now being shown to the user. Let the user know that the image they requested has been generated and is now shown in the chat.</context>"
     except Exception as e:
         log.debug(e)
@@ -928,7 +920,7 @@ async def chat_image_generation_handler(
             }
         )

-        system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that an error occurred: {error_message}</context>"
+        system_message_content = f"<context>Image generation was attempted but failed because of an error. The system is currently unable to generate the image. Tell the user that the following error occurred: {error_message}</context>"

     if system_message_content:
         form_data["messages"] = add_or_update_system_message(
@@ -1408,6 +1400,13 @@ async def process_chat_payload(request, form_data, user, metadata, model):
                         headers=headers if headers else None,
                     )

+                    function_name_filter_list = mcp_server_connection.get(
+                        "config", {}
+                    ).get("function_name_filter_list", "")
+                    if isinstance(function_name_filter_list, str):
+                        function_name_filter_list = function_name_filter_list.split(",")
+
                     tool_specs = await mcp_clients[server_id].list_tool_specs()
                     for tool_spec in tool_specs:
@@ -1420,6 +1419,13 @@ async def process_chat_payload(request, form_data, user, metadata, model):
                             return tool_function

+                        if function_name_filter_list:
+                            if not is_string_allowed(
+                                tool_spec["name"], function_name_filter_list
+                            ):
+                                # Skip this function
+                                continue
+
                         tool_function = make_tool_function(
                             mcp_clients[server_id], tool_spec["name"]
                         )
@@ -1460,6 +1466,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
                 "__files__": metadata.get("files", []),
             },
         )
+
         if mcp_tools_dict:
             tools_dict = {**tools_dict, **mcp_tools_dict}
@@ -1899,7 +1906,7 @@ async def process_chat_response(
                 )

                 # Send a webhook notification if the user is not active
-                if not get_active_status_by_user_id(user.id):
+                if not Users.is_user_active(user.id):
                     webhook_url = Users.get_user_webhook_url_by_id(user.id)
                     if webhook_url:
                         await post_webhook(
@@ -3194,7 +3201,7 @@ async def process_chat_response(
                 )

                 # Send a webhook notification if the user is not active
-                if not get_active_status_by_user_id(user.id):
+                if not Users.is_user_active(user.id):
                     webhook_url = Users.get_user_webhook_url_by_id(user.id)
                     if webhook_url:
                         await post_webhook(


@@ -6,7 +6,7 @@ import uuid
 import logging
 from datetime import timedelta
 from pathlib import Path
-from typing import Callable, Optional
+from typing import Callable, Optional, Sequence, Union
 import json
 import aiohttp
@@ -27,6 +27,49 @@ def deep_update(d, u):
     return d


+def get_allow_block_lists(filter_list):
+    allow_list = []
+    block_list = []
+
+    if filter_list:
+        for d in filter_list:
+            if d.startswith("!"):
+                # Domains starting with "!" → blocked
+                block_list.append(d[1:].strip())
+            else:
+                # Domains starting without "!" → allowed
+                allow_list.append(d.strip())
+
+    return allow_list, block_list
+
+
+def is_string_allowed(
+    string: Union[str, Sequence[str]], filter_list: Optional[list[str]] = None
+) -> bool:
+    """
+    Checks if a string is allowed based on the provided filter list.
+
+    :param string: The string or sequence of strings to check (e.g., domain or hostname).
+    :param filter_list: List of allowed/blocked strings. Strings starting with "!" are blocked.
+    :return: True if the string or sequence of strings is allowed, False otherwise.
+    """
+    if not filter_list:
+        return True
+
+    allow_list, block_list = get_allow_block_lists(filter_list)
+    strings = [string] if isinstance(string, str) else list(string)
+
+    # If allow list is non-empty, require domain to match one of them
+    if allow_list:
+        if not any(s.endswith(allowed) for s in strings for allowed in allow_list):
+            return False
+
+    # Block list always removes matches
+    if any(s.endswith(blocked) for s in strings for blocked in block_list):
+        return False
+
+    return True
+
+
 def get_message_list(messages_map, message_id):
     """
     Reconstructs a list of messages in order up to the specified message_id.
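The allow/block semantics above are easy to get backwards, so here is a self-contained restatement of the rules (suffix matching, `!` prefix blocks, block entries always win, an empty filter allows everything). This mirrors the new `is_string_allowed` but is a standalone sketch, not an import of the real module:

```python
def is_string_allowed(string, filter_list=None):
    """Sketch of the filter: "!" prefixes block, other entries allow."""
    if not filter_list:
        return True  # no filter configured → everything is allowed

    allow = [e.strip() for e in filter_list if not e.startswith("!")]
    block = [e[1:].strip() for e in filter_list if e.startswith("!")]
    strings = [string] if isinstance(string, str) else list(string)

    # A non-empty allow list requires a suffix match against at least one entry
    if allow and not any(s.endswith(a) for s in strings for a in allow):
        return False
    # Block entries always remove matches, even if the allow list matched
    if any(s.endswith(b) for s in strings for b in block):
        return False
    return True


assert is_string_allowed("api.example.com", ["example.com"]) is True
assert is_string_allowed("api.example.com", ["!example.com"]) is False
assert is_string_allowed("api.other.com", ["example.com"]) is False
assert is_string_allowed("anything", []) is True
```

Note that when a sequence of strings is passed, a single blocked member rejects the whole sequence, which is why the tool-name filtering in `middleware.py` and `tools.py` can pass one name at a time.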


@@ -6,6 +6,7 @@ import sys
 from aiocache import cached
 from fastapi import Request

+from open_webui.socket.utils import RedisDict
 from open_webui.routers import openai, ollama
 from open_webui.functions import get_function_models
@@ -190,6 +191,8 @@ async def get_all_models(request, refresh: bool = False, user: UserModel = None)
         ):
             # Custom model based on a base model
             owned_by = "openai"
+            connection_type = None
             pipe = None

             for m in models:
@@ -200,6 +203,8 @@ async def get_all_models(request, refresh: bool = False, user: UserModel = None)
                     owned_by = m.get("owned_by", "unknown")
                     if "pipe" in m:
                         pipe = m["pipe"]
+
+                    connection_type = m.get("connection_type", None)
                     break

             model = {
@@ -208,6 +213,7 @@ async def get_all_models(request, refresh: bool = False, user: UserModel = None)
                 "object": "model",
                 "created": custom_model.created_at,
                 "owned_by": owned_by,
+                "connection_type": connection_type,
                 "preset": True,
                 **({"pipe": pipe} if pipe is not None else {}),
             }
@@ -323,7 +329,12 @@ async def get_all_models(request, refresh: bool = False, user: UserModel = None)
     log.debug(f"get_all_models() returned {len(models)} models")

-    request.app.state.MODELS = {model["id"]: model for model in models}
+    models_dict = {model["id"]: model for model in models}
+    if isinstance(request.app.state.MODELS, RedisDict):
+        request.app.state.MODELS.set(models_dict)
+    else:
+        request.app.state.MODELS = models_dict

     return models


@@ -43,6 +43,7 @@ from open_webui.config import (
     ENABLE_OAUTH_GROUP_CREATION,
     OAUTH_BLOCKED_GROUPS,
     OAUTH_GROUPS_SEPARATOR,
+    OAUTH_ROLES_SEPARATOR,
     OAUTH_ROLES_CLAIM,
     OAUTH_SUB_CLAIM,
     OAUTH_GROUPS_CLAIM,
@@ -71,6 +72,7 @@ from open_webui.env import (
 from open_webui.utils.misc import parse_duration
 from open_webui.utils.auth import get_password_hash, create_token
 from open_webui.utils.webhook import post_webhook
+from open_webui.utils.groups import apply_default_group_assignment

 from mcp.shared.auth import (
     OAuthClientMetadata as MCPOAuthClientMetadata,
@@ -1032,7 +1034,13 @@ class OAuthManager:
             if isinstance(claim_data, list):
                 oauth_roles = claim_data
-            if isinstance(claim_data, str) or isinstance(claim_data, int):
+            elif isinstance(claim_data, str):
+                # Split by the configured separator if present
+                if OAUTH_ROLES_SEPARATOR and OAUTH_ROLES_SEPARATOR in claim_data:
+                    oauth_roles = claim_data.split(OAUTH_ROLES_SEPARATOR)
+                else:
+                    oauth_roles = [claim_data]
+            elif isinstance(claim_data, int):
                 oauth_roles = [str(claim_data)]

         log.debug(f"Oauth Roles claim: {oauth_claim}")
@@ -1095,7 +1103,7 @@ class OAuthManager:
         user_oauth_groups = []
         user_current_groups: list[GroupModel] = Groups.get_groups_by_member_id(user.id)
-        all_available_groups: list[GroupModel] = Groups.get_groups()
+        all_available_groups: list[GroupModel] = Groups.get_all_groups()

         # Create groups if they don't exist and creation is enabled
         if auth_manager_config.ENABLE_OAUTH_GROUP_CREATION:
@@ -1139,7 +1147,7 @@ class OAuthManager:
             # Refresh the list of all available groups if any were created
             if groups_created:
-                all_available_groups = Groups.get_groups()
+                all_available_groups = Groups.get_all_groups()
                 log.debug("Refreshed list of all available groups after creation.")

         log.debug(f"Oauth Groups claim: {oauth_claim}")
@@ -1160,7 +1168,6 @@ class OAuthManager:
                 log.debug(
                     f"Removing user from group {group_model.name} as it is no longer in their oauth groups"
                 )
                 Groups.remove_users_from_group(group_model.id, [user.id])

         # In case a group is created, but perms are never assigned to the group by hitting "save"
@@ -1322,7 +1329,10 @@ class OAuthManager:
             log.warning(f"OAuth callback failed, sub is missing: {user_data}")
             raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)

-        provider_sub = f"{provider}@{sub}"
+        oauth_data = {}
+        oauth_data[provider] = {
+            "sub": sub,
+        }

         # Email extraction
         email_claim = auth_manager_config.OAUTH_EMAIL_CLAIM
@@ -1369,12 +1379,12 @@ class OAuthManager:
                 log.warning(f"Error fetching GitHub email: {e}")
                 raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
         elif ENABLE_OAUTH_EMAIL_FALLBACK:
-            email = f"{provider_sub}.local"
+            email = f"{provider}@{sub}.local"
         else:
             log.warning(f"OAuth callback failed, email is missing: {user_data}")
             raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)

         email = email.lower()

         # If allowed domains are configured, check if the email domain is in the list
         if (
             "*" not in auth_manager_config.OAUTH_ALLOWED_DOMAINS
@@ -1387,7 +1397,7 @@ class OAuthManager:
             raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)

         # Check if the user exists
-        user = Users.get_user_by_oauth_sub(provider_sub)
+        user = Users.get_user_by_oauth_sub(provider, sub)
         if not user:
             # If the user does not exist, check if merging is enabled
             if auth_manager_config.OAUTH_MERGE_ACCOUNTS_BY_EMAIL:
@@ -1395,12 +1405,15 @@ class OAuthManager:
                 user = Users.get_user_by_email(email)
                 if user:
                     # Update the user with the new oauth sub
-                    Users.update_user_oauth_sub_by_id(user.id, provider_sub)
+                    Users.update_user_oauth_by_id(user.id, provider, sub)

         if user:
             determined_role = self.get_user_role(user, user_data)
             if user.role != determined_role:
                 Users.update_user_role_by_id(user.id, determined_role)
+                # Update the user object in memory as well,
+                # to avoid problems with the ENABLE_OAUTH_GROUP_MANAGEMENT check below
+                user.role = determined_role

             # Update profile picture if enabled and different from current
             if auth_manager_config.OAUTH_UPDATE_PICTURE_ON_LOGIN:
                 picture_claim = auth_manager_config.OAUTH_PICTURE_CLAIM
@@ -1451,7 +1464,7 @@ class OAuthManager:
                 name=name,
                 profile_image_url=picture_url,
                 role=self.get_user_role(None, user_data),
-                oauth_sub=provider_sub,
+                oauth=oauth_data,
             )

             if auth_manager_config.WEBHOOK_URL:
@@ -1465,6 +1478,12 @@ class OAuthManager:
                         "user": user.model_dump_json(exclude_none=True),
                     },
                 )
+
+            apply_default_group_assignment(
+                request.app.state.config.DEFAULT_GROUP_ID,
+                user.id,
+            )
+
         else:
             raise HTTPException(
                 status.HTTP_403_FORBIDDEN,
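The reworked roles-claim handling above normalizes three claim shapes into a list: an actual list, a delimited string (split only when a separator is configured and present), and a bare integer. A standalone sketch of that normalization, with the separator passed as a plain argument instead of the `OAUTH_ROLES_SEPARATOR` config value:

```python
def normalize_roles_claim(claim_data, separator=","):
    """Sketch of the patched claim handling: list, delimited string, or int."""
    if isinstance(claim_data, list):
        return claim_data
    elif isinstance(claim_data, str):
        # Split only when a separator is configured and actually present,
        # so a plain single-role string stays intact
        if separator and separator in claim_data:
            return claim_data.split(separator)
        return [claim_data]
    elif isinstance(claim_data, int):
        return [str(claim_data)]
    return []  # unrecognized claim shape → no roles


assert normalize_roles_claim(["admin", "user"]) == ["admin", "user"]
assert normalize_roles_claim("admin,user") == ["admin", "user"]
assert normalize_roles_claim("admin") == ["admin"]
assert normalize_roles_claim(7) == ["7"]
```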


@@ -0,0 +1,139 @@
+import time
+from typing import Optional, Dict
+
+from open_webui.env import REDIS_KEY_PREFIX
+
+
+class RateLimiter:
+    """
+    General-purpose rate limiter using Redis with a rolling window strategy.
+    Falls back to in-memory storage if Redis is not available.
+    """
+
+    # In-memory fallback storage
+    _memory_store: Dict[str, Dict[int, int]] = {}
+
+    def __init__(
+        self,
+        redis_client,
+        limit: int,
+        window: int,
+        bucket_size: int = 60,
+        enabled: bool = True,
+    ):
+        """
+        :param redis_client: Redis client instance or None
+        :param limit: Max allowed events in the window
+        :param window: Time window in seconds
+        :param bucket_size: Bucket resolution
+        :param enabled: Turn on/off rate limiting globally
+        """
+        self.r = redis_client
+        self.limit = limit
+        self.window = window
+        self.bucket_size = bucket_size
+        self.num_buckets = window // bucket_size
+        self.enabled = enabled
+
+    def _bucket_key(self, key: str, bucket_index: int) -> str:
+        return f"{REDIS_KEY_PREFIX}:ratelimit:{key.lower()}:{bucket_index}"
+
+    def _current_bucket(self) -> int:
+        return int(time.time()) // self.bucket_size
+
+    def _redis_available(self) -> bool:
+        return self.r is not None
+
+    def is_limited(self, key: str) -> bool:
+        """
+        Main rate-limit check.
+        Gracefully handles missing or failing Redis.
+        """
+        if not self.enabled:
+            return False
+
+        if self._redis_available():
+            try:
+                return self._is_limited_redis(key)
+            except Exception:
+                return self._is_limited_memory(key)
+        else:
+            return self._is_limited_memory(key)
+
+    def get_count(self, key: str) -> int:
+        if not self.enabled:
+            return 0
+
+        if self._redis_available():
+            try:
+                return self._get_count_redis(key)
+            except Exception:
+                return self._get_count_memory(key)
+        else:
+            return self._get_count_memory(key)
+
+    def remaining(self, key: str) -> int:
+        used = self.get_count(key)
+        return max(0, self.limit - used)
+
+    def _is_limited_redis(self, key: str) -> bool:
+        now_bucket = self._current_bucket()
+        bucket_key = self._bucket_key(key, now_bucket)
+
+        attempts = self.r.incr(bucket_key)
+        if attempts == 1:
+            self.r.expire(bucket_key, self.window + self.bucket_size)
+
+        # Collect buckets
+        buckets = [
+            self._bucket_key(key, now_bucket - i) for i in range(self.num_buckets + 1)
+        ]
+        counts = self.r.mget(buckets)
+        total = sum(int(c) for c in counts if c)
+
+        return total > self.limit
+
+    def _get_count_redis(self, key: str) -> int:
+        now_bucket = self._current_bucket()
+        buckets = [
+            self._bucket_key(key, now_bucket - i) for i in range(self.num_buckets + 1)
+        ]
+        counts = self.r.mget(buckets)
+        return sum(int(c) for c in counts if c)
+
+    def _is_limited_memory(self, key: str) -> bool:
+        now_bucket = self._current_bucket()
+
+        # Init storage
+        if key not in self._memory_store:
+            self._memory_store[key] = {}
+
+        store = self._memory_store[key]
+
+        # Increment bucket
+        store[now_bucket] = store.get(now_bucket, 0) + 1
+
+        # Drop expired buckets
+        min_bucket = now_bucket - self.num_buckets
+        expired = [b for b in store if b < min_bucket]
+        for b in expired:
+            del store[b]
+
+        # Count totals
+        total = sum(store.values())
+        return total > self.limit
+
+    def _get_count_memory(self, key: str) -> int:
+        now_bucket = self._current_bucket()
+
+        if key not in self._memory_store:
+            return 0
+
+        store = self._memory_store[key]
+        min_bucket = now_bucket - self.num_buckets
+
+        # Remove expired
+        expired = [b for b in store if b < min_bucket]
+        for b in expired:
+            del store[b]
+
+        return sum(store.values())
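The rolling-window strategy above counts events in fixed-size time buckets and sums the buckets covering the window. A standalone sketch of the in-memory fallback path, with an injectable clock so bucket expiry can be exercised without sleeping (this is an illustration of the technique, not the real `RateLimiter`):

```python
import time


class MemoryRateLimiter:
    """Sketch of the in-memory fallback: fixed-size time buckets, rolling sum."""

    def __init__(self, limit, window, bucket_size=60, clock=time.time):
        self.limit = limit
        self.bucket_size = bucket_size
        self.num_buckets = window // bucket_size
        self.clock = clock  # injectable clock, hypothetical addition for testing
        self._store = {}

    def _current_bucket(self):
        return int(self.clock()) // self.bucket_size

    def is_limited(self, key):
        now = self._current_bucket()
        buckets = self._store.setdefault(key, {})
        buckets[now] = buckets.get(now, 0) + 1  # count this event
        # Evict buckets that fell out of the rolling window
        min_bucket = now - self.num_buckets
        for b in [b for b in buckets if b < min_bucket]:
            del buckets[b]
        return sum(buckets.values()) > self.limit


fake_now = [0]
rl = MemoryRateLimiter(limit=3, window=120, bucket_size=60, clock=lambda: fake_now[0])
assert [rl.is_limited("ip") for _ in range(3)] == [False, False, False]
assert rl.is_limited("ip") is True   # 4th event in the window exceeds limit=3
fake_now[0] = 200                    # jump past the window; old buckets expire
assert rl.is_limited("ip") is False
```

Note the same over-count-then-check shape as `_is_limited_memory`: the event is recorded before the limit check, so a limited caller still consumes a slot, which keeps the Redis and memory paths consistent.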


@@ -5,7 +5,13 @@ import logging

 import redis

-from open_webui.env import REDIS_SENTINEL_MAX_RETRY_COUNT
+from open_webui.env import (
+    REDIS_CLUSTER,
+    REDIS_SENTINEL_HOSTS,
+    REDIS_SENTINEL_MAX_RETRY_COUNT,
+    REDIS_SENTINEL_PORT,
+    REDIS_URL,
+)

 log = logging.getLogger(__name__)
@@ -108,6 +114,21 @@ def parse_redis_service_url(redis_url):
     }


+def get_redis_client(async_mode=False):
+    try:
+        return get_redis_connection(
+            redis_url=REDIS_URL,
+            redis_sentinels=get_sentinels_from_env(
+                REDIS_SENTINEL_HOSTS, REDIS_SENTINEL_PORT
+            ),
+            redis_cluster=REDIS_CLUSTER,
+            async_mode=async_mode,
+        )
+    except Exception as e:
+        log.debug(f"Failed to get Redis client: {e}")
+        return None
+
+
 def get_redis_connection(
     redis_url,
     redis_sentinels,


@@ -45,7 +45,6 @@ from open_webui.env import (
     OTEL_METRICS_OTLP_SPAN_EXPORTER,
     OTEL_METRICS_EXPORTER_OTLP_INSECURE,
 )
-from open_webui.socket.main import get_active_user_ids
 from open_webui.models.users import Users

 _EXPORT_INTERVAL_MILLIS = 10_000  # 10 seconds
@@ -135,7 +134,7 @@ def setup_metrics(app: FastAPI, resource: Resource) -> None:
     ) -> Sequence[metrics.Observation]:
         return [
             metrics.Observation(
-                value=len(get_active_user_ids()),
+                value=Users.get_active_user_count(),
             )
         ]


@@ -34,6 +34,7 @@ from langchain_core.utils.function_calling import (
 )

+from open_webui.utils.misc import is_string_allowed
 from open_webui.models.tools import Tools
 from open_webui.models.users import UserModel
 from open_webui.utils.plugin import load_tool_module_by_id
@@ -149,8 +150,21 @@ async def get_tools(
                 )
                 specs = tool_server_data.get("specs", [])

+                function_name_filter_list = tool_server_connection.get(
+                    "config", {}
+                ).get("function_name_filter_list", "")
+                if isinstance(function_name_filter_list, str):
+                    function_name_filter_list = function_name_filter_list.split(",")
+
                 for spec in specs:
                     function_name = spec["name"]

+                    if function_name_filter_list:
+                        if not is_string_allowed(
+                            function_name, function_name_filter_list
+                        ):
+                            # Skip this function
+                            continue
+
                     auth_type = tool_server_connection.get("auth_type", "bearer")


@@ -1,13 +1,13 @@
 # Minimal requirements for backend to run
 # WIP: use this as a reference to build a minimal docker image

-fastapi==0.118.0
+fastapi==0.123.0
 uvicorn[standard]==0.37.0
-pydantic==2.11.9
+pydantic==2.12.5
 python-multipart==0.0.20
 itsdangerous==2.2.0

-python-socketio==5.13.0
+python-socketio==5.15.0
 python-jose==3.5.0
 cryptography
 bcrypt==5.0.0
@@ -20,14 +20,14 @@ aiohttp==3.12.15
 async-timeout
 aiocache
 aiofiles
-starlette-compress==1.6.0
+starlette-compress==1.6.1
 httpx[socks,http2,zstd,cli,brotli]==0.28.1
 starsessions[redis]==2.2.1

 sqlalchemy==2.0.38
-alembic==1.14.0
-peewee==3.18.1
-peewee-migrate==1.12.2
+alembic==1.17.2
+peewee==3.18.3
+peewee-migrate==1.14.3
 pycrdt==0.12.25

 redis
@@ -36,9 +36,9 @@ APScheduler==3.10.4
 RestrictedPython==8.0

 loguru==0.7.3
-asgiref==3.8.1
+asgiref==3.11.0

-mcp==1.21.2
+mcp==1.22.0
 openai

 langchain==0.3.27
@@ -46,5 +46,6 @@ langchain-community==0.3.29
 fake-useragent==2.2.0
 chromadb==1.1.0

-black==25.9.0
+black==25.11.0
 pydub
+chardet==5.2.0


@@ -1,10 +1,10 @@
-fastapi==0.118.0
+fastapi==0.123.0
 uvicorn[standard]==0.37.0
-pydantic==2.11.9
+pydantic==2.12.5
 python-multipart==0.0.20
 itsdangerous==2.2.0
-python-socketio==5.13.0
+python-socketio==5.15.0
 python-jose==3.5.0
 cryptography
 bcrypt==5.0.0
@@ -17,14 +17,14 @@ aiohttp==3.12.15
 async-timeout
 aiocache
 aiofiles
-starlette-compress==1.6.0
+starlette-compress==1.6.1
 httpx[socks,http2,zstd,cli,brotli]==0.28.1
 starsessions[redis]==2.2.1
 sqlalchemy==2.0.38
-alembic==1.14.0
-peewee==3.18.1
-peewee-migrate==1.12.2
+alembic==1.17.2
+peewee==3.18.3
+peewee-migrate==1.14.3
 pycrdt==0.12.25
 redis
@@ -33,11 +33,11 @@ APScheduler==3.10.4
 RestrictedPython==8.0
 loguru==0.7.3
-asgiref==3.8.1
+asgiref==3.11.0
 # AI libraries
 tiktoken
-mcp==1.21.2
+mcp==1.22.0
 openai
 anthropic
@@ -52,23 +52,24 @@ chromadb==1.1.0
 weaviate-client==4.17.0
 opensearch-py==2.8.0
-transformers
-sentence-transformers==5.1.1
+transformers==4.57.3
+sentence-transformers==5.1.2
 accelerate
 pyarrow==20.0.0 # fix: pin pyarrow version to 20 for rpi compatibility #15897
 einops==0.8.1
-ftfy==6.2.3
-pypdf==6.0.0
+ftfy==6.3.1
+chardet==5.2.0
+pypdf==6.4.0
 fpdf2==2.8.2
-pymdown-extensions==10.14.2
+pymdown-extensions==10.17.2
 docx2txt==0.8
 python-pptx==1.0.2
-unstructured==0.18.18
+unstructured==0.18.21
 msoffcrypto-tool==5.4.2
 nltk==3.9.1
-Markdown==3.9
-pypandoc==1.15
+Markdown==3.10
+pypandoc==1.16.2
 pandas==2.2.3
 openpyxl==3.1.5
 pyxlsb==1.0.10
@@ -86,12 +87,12 @@ rank-bm25==0.2.2
 onnxruntime==1.20.1
 faster-whisper==1.1.1
-black==25.9.0
+black==25.11.0
 youtube-transcript-api==1.2.2
 pytube==15.0.0
 pydub
-ddgs==9.0.0
+ddgs==9.9.2
 azure-ai-documentintelligence==1.0.2
 azure-identity==1.25.0
@@ -103,7 +104,7 @@ google-api-python-client
 google-auth-httplib2
 google-auth-oauthlib
-googleapis-common-protos==1.70.0
+googleapis-common-protos==1.72.0
 google-cloud-storage==2.19.0
 ## Databases
@@ -112,11 +113,11 @@ psycopg2-binary==2.9.10
 pgvector==0.4.1
 PyMySQL==1.1.1
-boto3==1.40.5
-pymilvus==2.6.2
+boto3==1.41.5
+pymilvus==2.6.4
 qdrant-client==1.14.3
-playwright==1.49.1 # Caution: version must match docker-compose.playwright.yaml
+playwright==1.56.0 # Caution: version must match docker-compose.playwright.yaml
 elasticsearch==9.1.0
 pinecone==6.0.2
 oracledb==3.2.0
@@ -129,23 +130,23 @@ colbert-ai==0.2.21
 ## Tests
 docker~=7.1.0
 pytest~=8.4.1
-pytest-docker~=3.1.1
+pytest-docker~=3.2.5
 ## LDAP
 ldap3==2.9.1
 ## Firecrawl
-firecrawl-py==4.5.0
+firecrawl-py==4.10.0
 ## Trace
-opentelemetry-api==1.37.0
-opentelemetry-sdk==1.37.0
-opentelemetry-exporter-otlp==1.37.0
-opentelemetry-instrumentation==0.58b0
-opentelemetry-instrumentation-fastapi==0.58b0
-opentelemetry-instrumentation-sqlalchemy==0.58b0
-opentelemetry-instrumentation-redis==0.58b0
-opentelemetry-instrumentation-requests==0.58b0
-opentelemetry-instrumentation-logging==0.58b0
-opentelemetry-instrumentation-httpx==0.58b0
-opentelemetry-instrumentation-aiohttp-client==0.58b0
+opentelemetry-api==1.38.0
+opentelemetry-sdk==1.38.0
+opentelemetry-exporter-otlp==1.38.0
+opentelemetry-instrumentation==0.59b0
+opentelemetry-instrumentation-fastapi==0.59b0
+opentelemetry-instrumentation-sqlalchemy==0.59b0
+opentelemetry-instrumentation-redis==0.59b0
+opentelemetry-instrumentation-requests==0.59b0
+opentelemetry-instrumentation-logging==0.59b0
+opentelemetry-instrumentation-httpx==0.59b0
+opentelemetry-instrumentation-aiohttp-client==0.59b0

package-lock.json (generated)

@ -1,12 +1,12 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.38", "version": "0.6.41",
"lockfileVersion": 3, "lockfileVersion": 3,
"requires": true, "requires": true,
"packages": { "packages": {
"": { "": {
"name": "open-webui", "name": "open-webui",
"version": "0.6.38", "version": "0.6.41",
"dependencies": { "dependencies": {
"@azure/msal-browser": "^4.5.0", "@azure/msal-browser": "^4.5.0",
"@codemirror/lang-javascript": "^6.2.2", "@codemirror/lang-javascript": "^6.2.2",


@ -1,6 +1,6 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.38", "version": "0.6.41",
"private": true, "private": true,
"scripts": { "scripts": {
"dev": "npm run pyodide:fetch && vite dev --host", "dev": "npm run pyodide:fetch && vite dev --host",


@@ -6,13 +6,13 @@ authors = [
 ]
 license = { file = "LICENSE" }
 dependencies = [
-    "fastapi==0.118.0",
+    "fastapi==0.123.0",
     "uvicorn[standard]==0.37.0",
-    "pydantic==2.11.9",
+    "pydantic==2.12.5",
     "python-multipart==0.0.20",
     "itsdangerous==2.2.0",
-    "python-socketio==5.13.0",
+    "python-socketio==5.15.0",
     "python-jose==3.5.0",
     "cryptography",
     "bcrypt==5.0.0",
@@ -25,14 +25,14 @@ dependencies = [
     "async-timeout",
     "aiocache",
     "aiofiles",
-    "starlette-compress==1.6.0",
+    "starlette-compress==1.6.1",
     "httpx[socks,http2,zstd,cli,brotli]==0.28.1",
     "starsessions[redis]==2.2.1",
     "sqlalchemy==2.0.38",
-    "alembic==1.14.0",
-    "peewee==3.18.1",
-    "peewee-migrate==1.12.2",
+    "alembic==1.17.2",
+    "peewee==3.18.3",
+    "peewee-migrate==1.14.3",
     "pycrdt==0.12.25",
     "redis",
@@ -41,10 +41,10 @@ dependencies = [
     "RestrictedPython==8.0",
     "loguru==0.7.3",
-    "asgiref==3.8.1",
+    "asgiref==3.11.0",
     "tiktoken",
-    "mcp==1.21.2",
+    "mcp==1.22.0",
     "openai",
     "anthropic",
@@ -58,25 +58,26 @@ dependencies = [
     "chromadb==1.0.20",
     "opensearch-py==2.8.0",
     "PyMySQL==1.1.1",
-    "boto3==1.40.5",
-    "transformers",
-    "sentence-transformers==5.1.1",
+    "boto3==1.41.5",
+    "transformers==4.57.3",
+    "sentence-transformers==5.1.2",
     "accelerate",
     "pyarrow==20.0.0",
     "einops==0.8.1",
-    "ftfy==6.2.3",
-    "pypdf==6.0.0",
+    "ftfy==6.3.1",
+    "chardet==5.2.0",
+    "pypdf==6.4.0",
     "fpdf2==2.8.2",
-    "pymdown-extensions==10.14.2",
+    "pymdown-extensions==10.17.2",
     "docx2txt==0.8",
    "python-pptx==1.0.2",
-    "unstructured==0.18.18",
+    "unstructured==0.18.21",
     "msoffcrypto-tool==5.4.2",
     "nltk==3.9.1",
-    "Markdown==3.9",
-    "pypandoc==1.15",
+    "Markdown==3.10",
+    "pypandoc==1.16.2",
     "pandas==2.2.3",
     "openpyxl==3.1.5",
     "pyxlsb==1.0.10",
@@ -95,18 +96,18 @@ dependencies = [
     "onnxruntime==1.20.1",
     "faster-whisper==1.1.1",
-    "black==25.9.0",
+    "black==25.11.0",
     "youtube-transcript-api==1.2.2",
     "pytube==15.0.0",
     "pydub",
-    "ddgs==9.0.0",
+    "ddgs==9.9.2",
     "google-api-python-client",
     "google-auth-httplib2",
     "google-auth-oauthlib",
-    "googleapis-common-protos==1.70.0",
+    "googleapis-common-protos==1.72.0",
     "google-cloud-storage==2.19.0",
     "azure-identity==1.25.0",
@@ -141,18 +142,18 @@ all = [
     "gcp-storage-emulator>=2024.8.3",
     "docker~=7.1.0",
     "pytest~=8.3.2",
-    "pytest-docker~=3.1.1",
-    "playwright==1.49.1",
+    "pytest-docker~=3.2.5",
+    "playwright==1.56.0",
     "elasticsearch==9.1.0",
     "qdrant-client==1.14.3",
     "weaviate-client==4.17.0",
-    "pymilvus==2.6.2",
+    "pymilvus==2.6.4",
     "pinecone==6.0.2",
     "oracledb==3.2.0",
     "colbert-ai==0.2.21",
-    "firecrawl-py==4.5.0",
+    "firecrawl-py==4.10.0",
     "azure-search-documents==11.6.0",
 ]


@@ -637,7 +637,7 @@ input[type='number'] {
 .tiptap th,
 .tiptap td {
-	@apply px-3 py-1.5 border border-gray-100 dark:border-gray-850;
+	@apply px-3 py-1.5 border border-gray-100/30 dark:border-gray-850/30;
 }
 .tiptap th {


@ -1,10 +1,13 @@
import { WEBUI_API_BASE_URL } from '$lib/constants'; import { WEBUI_API_BASE_URL } from '$lib/constants';
type ChannelForm = { type ChannelForm = {
type?: string;
name: string; name: string;
is_private?: boolean;
data?: object; data?: object;
meta?: object; meta?: object;
access_control?: object; access_control?: object;
user_ids?: string[];
}; };
export const createNewChannel = async (token: string = '', channel: ChannelForm) => { export const createNewChannel = async (token: string = '', channel: ChannelForm) => {
@ -101,6 +104,209 @@ export const getChannelById = async (token: string = '', channel_id: string) =>
return res; return res;
}; };
+export const getDMChannelByUserId = async (token: string = '', user_id: string) => {
+	let error = null;
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/channels/users/${user_id}`, {
+		method: 'GET',
+		headers: {
+			Accept: 'application/json',
+			'Content-Type': 'application/json',
+			authorization: `Bearer ${token}`
+		}
+	})
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.then((json) => {
+			return json;
+		})
+		.catch((err) => {
+			error = err.detail;
+			console.error(err);
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
+
+export const getChannelMembersById = async (
+	token: string,
+	channel_id: string,
+	query?: string,
+	orderBy?: string,
+	direction?: string,
+	page = 1
+) => {
+	let error = null;
+	let res = null;
+
+	const searchParams = new URLSearchParams();
+	searchParams.set('page', `${page}`);
+
+	if (query) {
+		searchParams.set('query', query);
+	}
+	if (orderBy) {
+		searchParams.set('order_by', orderBy);
+	}
+	if (direction) {
+		searchParams.set('direction', direction);
+	}
+
+	res = await fetch(
+		`${WEBUI_API_BASE_URL}/channels/${channel_id}/members?${searchParams.toString()}`,
+		{
+			method: 'GET',
+			headers: {
+				'Content-Type': 'application/json',
+				Authorization: `Bearer ${token}`
+			}
+		}
+	)
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.catch((err) => {
+			console.error(err);
+			error = err.detail;
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
+
+export const updateChannelMemberActiveStatusById = async (
+	token: string = '',
+	channel_id: string,
+	is_active: boolean
+) => {
+	let error = null;
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/channels/${channel_id}/members/active`, {
+		method: 'POST',
+		headers: {
+			Accept: 'application/json',
+			'Content-Type': 'application/json',
+			authorization: `Bearer ${token}`
+		},
+		body: JSON.stringify({ is_active })
+	})
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.then((json) => {
+			return json;
+		})
+		.catch((err) => {
+			error = err.detail;
+			console.error(err);
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
+
+type UpdateMembersForm = {
+	user_ids?: string[];
+	group_ids?: string[];
+};
+
+export const addMembersById = async (
+	token: string = '',
+	channel_id: string,
+	formData: UpdateMembersForm
+) => {
+	let error = null;
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/channels/${channel_id}/update/members/add`, {
+		method: 'POST',
+		headers: {
+			Accept: 'application/json',
+			'Content-Type': 'application/json',
+			authorization: `Bearer ${token}`
+		},
+		body: JSON.stringify({ ...formData })
+	})
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.then((json) => {
+			return json;
+		})
+		.catch((err) => {
+			error = err.detail;
+			console.error(err);
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
+
+type RemoveMembersForm = {
+	user_ids?: string[];
+	group_ids?: string[];
+};
+
+export const removeMembersById = async (
+	token: string = '',
+	channel_id: string,
+	formData: RemoveMembersForm
+) => {
+	let error = null;
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/channels/${channel_id}/update/members/remove`, {
+		method: 'POST',
+		headers: {
+			Accept: 'application/json',
+			'Content-Type': 'application/json',
+			authorization: `Bearer ${token}`
+		},
+		body: JSON.stringify({ ...formData })
+	})
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.then((json) => {
+			return json;
+		})
+		.catch((err) => {
+			error = err.detail;
+			console.error(err);
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
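The new listing endpoints (channel members, and the paginated user search later in this diff) all assemble the same `page`/`query`/`order_by`/`direction` query string. A minimal sketch of that pattern, using only the standard `URLSearchParams` API — the helper name below is invented for illustration and is not part of the codebase:

```typescript
// Sketch of the pagination/search query-string pattern used by the
// member and user listing helpers. Only URLSearchParams (a web/Node
// standard API) is assumed; nothing here is Open WebUI-specific.
const buildListQuery = (
	page = 1,
	query?: string,
	orderBy?: string,
	direction?: string
): string => {
	const searchParams = new URLSearchParams();
	searchParams.set('page', `${page}`);
	// Optional filters are only added when present, so the URL stays short.
	if (query) searchParams.set('query', query);
	if (orderBy) searchParams.set('order_by', orderBy);
	if (direction) searchParams.set('direction', direction);
	return searchParams.toString();
};

console.log(buildListQuery(2, 'alice', 'name', 'asc'));
// page=2&query=alice&order_by=name&direction=asc
```

`URLSearchParams` preserves insertion order and percent-encodes values, which is why the helpers above do not call `encodeURIComponent` themselves.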
 export const updateChannelById = async (
 	token: string = '',
 	channel_id: string,
@@ -207,6 +413,44 @@ export const getChannelMessages = async (
 	return res;
 };
+export const getChannelPinnedMessages = async (
+	token: string = '',
+	channel_id: string,
+	page: number = 1
+) => {
+	let error = null;
+
+	const res = await fetch(
+		`${WEBUI_API_BASE_URL}/channels/${channel_id}/messages/pinned?page=${page}`,
+		{
+			method: 'GET',
+			headers: {
+				Accept: 'application/json',
+				'Content-Type': 'application/json',
+				authorization: `Bearer ${token}`
+			}
+		}
+	)
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.then((json) => {
+			return json;
+		})
+		.catch((err) => {
+			error = err.detail;
+			console.error(err);
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
 export const getChannelThreadMessages = async (
 	token: string = '',
 	channel_id: string,
@@ -248,6 +492,7 @@ export const getChannelThreadMessages = async (
 };

 type MessageForm = {
+	temp_id?: string;
 	reply_to_id?: string;
 	parent_id?: string;
 	content: string;
@@ -287,6 +532,46 @@ export const sendMessage = async (token: string = '', channel_id: string, messag
 	return res;
 };
+export const pinMessage = async (
+	token: string = '',
+	channel_id: string,
+	message_id: string,
+	is_pinned: boolean
+) => {
+	let error = null;
+
+	const res = await fetch(
+		`${WEBUI_API_BASE_URL}/channels/${channel_id}/messages/${message_id}/pin`,
+		{
+			method: 'POST',
+			headers: {
+				Accept: 'application/json',
+				'Content-Type': 'application/json',
+				authorization: `Bearer ${token}`
+			},
+			body: JSON.stringify({ is_pinned })
+		}
+	)
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.then((json) => {
+			return json;
+		})
+		.catch((err) => {
+			error = err.detail;
+			console.error(err);
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
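Every helper in this file shares the same fetch/error shape: a non-OK response's JSON body is thrown inside the promise chain, its `detail` field is captured, and the error is re-thrown after the chain settles. A hedged distillation of that pattern, with the fetch implementation injected so the sketch runs without a server — `apiRequest` and `FetchLike` are illustrative names, not part of the codebase:

```typescript
// Illustrative distillation of the error-handling pattern used by the
// channel helpers above. `fetchImpl` is injected so the sketch is
// self-contained; the real helpers call the global fetch directly.
type FetchLike = (url: string) => Promise<{ ok: boolean; json: () => Promise<any> }>;

const apiRequest = async (fetchImpl: FetchLike, url: string) => {
	let error: unknown = null;

	const res = await fetchImpl(url)
		.then(async (r) => {
			if (!r.ok) throw await r.json(); // surface the error body, not the status
			return r.json();
		})
		.catch((err) => {
			error = (err as any)?.detail ?? err; // prefer the API's `detail` message
			return null;
		});

	if (error) {
		throw error;
	}
	return res;
};

// Example: a failing response's `detail` becomes the thrown error.
const badFetch: FetchLike = async () => ({
	ok: false,
	json: async () => ({ detail: 'Unauthorized' })
});
apiRequest(badFetch, '/api/v1/channels').catch((e) => console.log(e)); // Unauthorized
```

The callers (`toast.error` sites in the Svelte components) rely on the thrown value being the server-supplied `detail` string rather than a generic `Error` object.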
 export const updateMessage = async (
 	token: string = '',
 	channel_id: string,


@@ -35,6 +35,7 @@ type ChunkConfigForm = {
 type DocumentIntelligenceConfigForm = {
 	key: string;
 	endpoint: string;
+	model: string;
 };

 type ContentExtractConfigForm = {
@@ -295,42 +296,6 @@ export interface SearchDocument {
 	filenames: string[];
 }
-export const processFile = async (
-	token: string,
-	file_id: string,
-	collection_name: string | null = null
-) => {
-	let error = null;
-
-	const res = await fetch(`${RETRIEVAL_API_BASE_URL}/process/file`, {
-		method: 'POST',
-		headers: {
-			Accept: 'application/json',
-			'Content-Type': 'application/json',
-			authorization: `Bearer ${token}`
-		},
-		body: JSON.stringify({
-			file_id: file_id,
-			collection_name: collection_name ? collection_name : undefined
-		})
-	})
-		.then(async (res) => {
-			if (!res.ok) throw await res.json();
-			return res.json();
-		})
-		.catch((err) => {
-			error = err.detail;
-			console.error(err);
-			return null;
-		});
-
-	if (error) {
-		throw error;
-	}
-
-	return res;
-};
 export const processYoutubeVideo = async (token: string, url: string) => {
 	let error = null;


@@ -166,11 +166,33 @@ export const getUsers = async (
 	return res;
 };

-export const getAllUsers = async (token: string) => {
+export const searchUsers = async (
+	token: string,
+	query?: string,
+	orderBy?: string,
+	direction?: string,
+	page = 1
+) => {
 	let error = null;
 	let res = null;

-	res = await fetch(`${WEBUI_API_BASE_URL}/users/all`, {
+	const searchParams = new URLSearchParams();
+	searchParams.set('page', `${page}`);
+
+	if (query) {
+		searchParams.set('query', query);
+	}
+	if (orderBy) {
+		searchParams.set('order_by', orderBy);
+	}
+	if (direction) {
+		searchParams.set('direction', direction);
+	}
+
+	res = await fetch(`${WEBUI_API_BASE_URL}/users/search?${searchParams.toString()}`, {
 		method: 'GET',
 		headers: {
 			'Content-Type': 'application/json',
@@ -194,11 +216,11 @@ export const getAllUsers = async (token: string) => {
 	return res;
 };

-export const searchUsers = async (token: string, query: string) => {
+export const getAllUsers = async (token: string) => {
 	let error = null;
 	let res = null;

-	res = await fetch(`${WEBUI_API_BASE_URL}/users/search?query=${encodeURIComponent(query)}`, {
+	res = await fetch(`${WEBUI_API_BASE_URL}/users/all`, {
 		method: 'GET',
 		headers: {
 			'Content-Type': 'application/json',
@@ -305,6 +327,36 @@ export const getUserById = async (token: string, userId: string) => {
 	return res;
 };
+export const updateUserStatus = async (token: string, formData: object) => {
+	let error = null;
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/users/user/status/update`, {
+		method: 'POST',
+		headers: {
+			'Content-Type': 'application/json',
+			Authorization: `Bearer ${token}`
+		},
+		body: JSON.stringify({
+			...formData
+		})
+	})
+		.then(async (res) => {
+			if (!res.ok) throw await res.json();
+			return res.json();
+		})
+		.catch((err) => {
+			console.error(err);
+			error = err.detail;
+			return null;
+		});
+
+	if (error) {
+		throw error;
+	}
+
+	return res;
+};
 export const getUserInfo = async (token: string) => {
 	let error = null;

 	const res = await fetch(`${WEBUI_API_BASE_URL}/users/user/info`, {


@@ -358,7 +358,7 @@
 	<div class="flex-shrink-0 self-start">
 		<select
 			id="select-bearer-or-session"
-			class={`w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
+			class={`dark:bg-gray-900 w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
 			bind:value={auth_type}
 		>
 			<option value="none">{$i18n.t('None')}</option>


@@ -47,6 +47,7 @@
 	let key = '';
 	let headers = '';
+	let functionNameFilterList = '';

 	let accessControl = {};
 	let id = '';
@@ -303,7 +304,7 @@
 				key,
 				config: {
 					enable: enable,
+					function_name_filter_list: functionNameFilterList,
 					access_control: accessControl
 				},
 				info: {
@@ -333,9 +334,11 @@
 		id = '';
 		name = '';
 		description = '';
 		oauthClientInfo = null;
 		enable = true;
+		functionNameFilterList = '';
 		accessControl = null;
 	};
@@ -359,6 +362,7 @@
 			oauthClientInfo = connection.info?.oauth_client_info ?? null;
 			enable = connection.config?.enable ?? true;
+			functionNameFilterList = connection.config?.function_name_filter_list ?? '';
 			accessControl = connection.config?.access_control ?? null;
 		}
 	};
@@ -530,7 +534,7 @@
 	<div class="flex-shrink-0 self-start">
 		<select
 			id="select-bearer-or-session"
-			class={`w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
+			class={`dark:bg-gray-900 w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
 			bind:value={spec_type}
 		>
 			<option value="url">{$i18n.t('URL')}</option>
@@ -640,7 +644,7 @@
 	<div class="flex-shrink-0 self-start">
 		<select
 			id="select-bearer-or-session"
-			class={`w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
+			class={`dark:bg-gray-900 w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
 			bind:value={auth_type}
 		>
 			<option value="none">{$i18n.t('None')}</option>
@@ -793,13 +797,30 @@
 		</div>
 	</div>

+	<div class="flex flex-col w-full mt-2">
+		<label
+			for="function-name-filter-list"
+			class={`mb-1 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100 placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700 text-gray-500'}`}
+			>{$i18n.t('Function Name Filter List')}</label
+		>
+		<div class="flex-1">
+			<input
+				id="function-name-filter-list"
+				class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
+				type="text"
+				bind:value={functionNameFilterList}
+				placeholder={$i18n.t('Enter function name filter list (e.g. func1, !func2)')}
+				autocomplete="off"
+			/>
+		</div>
+	</div>

 	<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />

-	<div class="my-2 -mx-2">
+	<div class="my-2">
+		<div class="px-4 py-3 bg-gray-50 dark:bg-gray-950 rounded-3xl">
 			<AccessControl bind:accessControl />
 		</div>
+	</div>
 {/if}
 </div>


@@ -186,7 +186,7 @@
 	class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full"
 >
 	<thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
-		<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
+		<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850/30">
 			<th
 				scope="col"
 				class="px-2.5 py-2 cursor-pointer select-none w-3"


@@ -387,7 +387,7 @@
 		: ''}"
 >
 	<thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
-		<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
+		<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850/30">
 			<th
 				scope="col"
 				class="px-2.5 py-2 cursor-pointer select-none w-3"


@@ -343,7 +343,7 @@
 	</div>

 	<div
-		class="py-2 bg-white dark:bg-gray-900 rounded-3xl border border-gray-100 dark:border-gray-850"
+		class="py-2 bg-white dark:bg-gray-900 rounded-3xl border border-gray-100/30 dark:border-gray-850/30"
 	>
 		<div class="px-3.5 flex flex-1 items-center w-full space-x-2 py-0.5 pb-2">
 			<div class="flex flex-1">


@@ -63,7 +63,7 @@
 			</div>
 		</div>

-		<hr class="border-gray-50 dark:border-gray-850 my-1" />
+		<hr class="border-gray-50 dark:border-gray-850/30 my-1" />
 	{/if}

 	<DropdownMenu.Item
@@ -122,7 +122,7 @@
 		<div class="flex items-center">{$i18n.t('Export')}</div>
 	</DropdownMenu.Item>

-	<hr class="border-gray-50 dark:border-gray-850 my-1" />
+	<hr class="border-gray-50 dark:border-gray-850/30 my-1" />

 	<DropdownMenu.Item
 		class="flex gap-2 items-center px-3 py-1.5 text-sm font-medium cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 rounded-md"


@@ -212,7 +212,7 @@
 <div>
 	<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Speech-to-Text')}</div>
-	<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+	<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />

 	{#if STT_ENGINE !== 'web'}
 		<div class="mb-2">
@@ -263,7 +263,7 @@
 			</div>
 		</div>

-		<hr class="border-gray-100 dark:border-gray-850 my-2" />
+		<hr class="border-gray-100/30 dark:border-gray-850/30 my-2" />

 		<div>
 			<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>
@@ -289,7 +289,7 @@
 			</div>
 		</div>

-		<hr class="border-gray-100 dark:border-gray-850 my-2" />
+		<hr class="border-gray-100/30 dark:border-gray-850/30 my-2" />

 		<div>
 			<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>
@@ -323,7 +323,7 @@
 			/>
 		</div>

-		<hr class="border-gray-100 dark:border-gray-850 my-2" />
+		<hr class="border-gray-100/30 dark:border-gray-850/30 my-2" />

 		<div>
 			<div class=" mb-1.5 text-xs font-medium">{$i18n.t('Azure Region')}</div>
@@ -391,7 +391,7 @@
 			</div>
 		</div>

-		<hr class="border-gray-100 dark:border-gray-850 my-2" />
+		<hr class="border-gray-100/30 dark:border-gray-850/30 my-2" />

 		<div>
 			<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>
@@ -416,7 +416,7 @@
 			</div>
 		</div>

-		<hr class="border-gray-100 dark:border-gray-850 my-2" />
+		<hr class="border-gray-100/30 dark:border-gray-850/30 my-2" />

 		<div>
 			<div class="flex items-center justify-between mb-2">
@@ -500,7 +500,7 @@
 <div>
 	<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Text-to-Speech')}</div>
-	<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+	<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />

 	<div class="mb-2 py-0.5 flex w-full justify-between">
 		<div class=" self-center text-xs font-medium">{$i18n.t('Text-to-Speech Engine')}</div>
@@ -557,7 +557,7 @@
 		<SensitiveInput placeholder={$i18n.t('API Key')} bind:value={TTS_API_KEY} required />
 	</div>

-	<hr class="border-gray-100 dark:border-gray-850 my-2" />
+	<hr class="border-gray-100/30 dark:border-gray-850/30 my-2" />

 	<div>
 		<div class=" mb-1.5 text-xs font-medium">{$i18n.t('Azure Region')}</div>


@@ -43,7 +43,7 @@
 <div class="mb-3.5">
 	<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
-	<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+	<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />

 	<div class="mb-2.5">
 		<div class=" flex w-full justify-between">
@@ -166,7 +166,7 @@
 <div class="mb-3.5">
 	<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Code Interpreter')}</div>
-	<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+	<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />

 	<div class="mb-2.5">
 		<div class=" flex w-full justify-between">
@@ -288,7 +288,7 @@
 	</div>
 {/if}

-	<hr class="border-gray-100 dark:border-gray-850 my-2" />
+	<hr class="border-gray-100/30 dark:border-gray-850/30 my-2" />

 <div>
 	<div class="py-0.5 w-full">


@@ -221,7 +221,7 @@
 <div class="mb-3.5">
 	<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
-	<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+	<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />

 	<div class="my-2">
 		<div class="mt-2 space-y-2">
@@ -384,7 +384,7 @@
 		</div>
 	</div>

-	<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+	<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />

 	<div class="my-2">
 		<div class="flex justify-between items-center text-sm">


@@ -143,7 +143,7 @@
 		</div>
 	</button>

-	<hr class="border-gray-50 dark:border-gray-850 my-1" />
+	<hr class="border-gray-50 dark:border-gray-850/30 my-1" />

 	{#if $config?.features.enable_admin_export ?? true}
 		<div class=" flex w-full justify-between">


@ -38,9 +38,11 @@
let showResetUploadDirConfirm = false; let showResetUploadDirConfirm = false;
let showReindexConfirm = false; let showReindexConfirm = false;
let embeddingEngine = ''; let RAG_EMBEDDING_ENGINE = '';
let embeddingModel = ''; let RAG_EMBEDDING_MODEL = '';
let embeddingBatchSize = 1; let RAG_EMBEDDING_BATCH_SIZE = 1;
let ENABLE_ASYNC_EMBEDDING = true;
let rerankingModel = ''; let rerankingModel = '';
let OpenAIUrl = ''; let OpenAIUrl = '';
@ -64,7 +66,7 @@
let RAGConfig = null; let RAGConfig = null;
const embeddingModelUpdateHandler = async () => { const embeddingModelUpdateHandler = async () => {
if (embeddingEngine === '' && embeddingModel.split('/').length - 1 > 1) { if (RAG_EMBEDDING_ENGINE === '' && RAG_EMBEDDING_MODEL.split('/').length - 1 > 1) {
toast.error( toast.error(
$i18n.t( $i18n.t(
'Model filesystem path detected. Model shortname is required for update, cannot continue.' 'Model filesystem path detected. Model shortname is required for update, cannot continue.'
@ -72,7 +74,7 @@
); );
return; return;
} }
if (embeddingEngine === 'ollama' && embeddingModel === '') { if (RAG_EMBEDDING_ENGINE === 'ollama' && RAG_EMBEDDING_MODEL === '') {
toast.error( toast.error(
$i18n.t( $i18n.t(
'Model filesystem path detected. Model shortname is required for update, cannot continue.' 'Model filesystem path detected. Model shortname is required for update, cannot continue.'
@@ -81,7 +83,7 @@
 return;
 }
-if (embeddingEngine === 'openai' && embeddingModel === '') {
+if (RAG_EMBEDDING_ENGINE === 'openai' && RAG_EMBEDDING_MODEL === '') {
 toast.error(
 $i18n.t(
 'Model filesystem path detected. Model shortname is required for update, cannot continue.'
@@ -91,20 +93,26 @@
 }
 if (
-embeddingEngine === 'azure_openai' &&
+RAG_EMBEDDING_ENGINE === 'azure_openai' &&
 (AzureOpenAIKey === '' || AzureOpenAIUrl === '' || AzureOpenAIVersion === '')
 ) {
 toast.error($i18n.t('OpenAI URL/Key required.'));
 return;
 }
-console.debug('Update embedding model attempt:', embeddingModel);
+console.debug('Update embedding model attempt:', {
+RAG_EMBEDDING_ENGINE,
+RAG_EMBEDDING_MODEL,
+RAG_EMBEDDING_BATCH_SIZE,
+ENABLE_ASYNC_EMBEDDING
+});
 updateEmbeddingModelLoading = true;
 const res = await updateEmbeddingConfig(localStorage.token, {
-embedding_engine: embeddingEngine,
-embedding_model: embeddingModel,
-embedding_batch_size: embeddingBatchSize,
+RAG_EMBEDDING_ENGINE: RAG_EMBEDDING_ENGINE,
+RAG_EMBEDDING_MODEL: RAG_EMBEDDING_MODEL,
+RAG_EMBEDDING_BATCH_SIZE: RAG_EMBEDDING_BATCH_SIZE,
+ENABLE_ASYNC_EMBEDDING: ENABLE_ASYNC_EMBEDDING,
 ollama_config: {
 key: OllamaKey,
 url: OllamaUrl
@@ -127,11 +135,6 @@
 if (res) {
 console.debug('embeddingModelUpdateHandler:', res);
-if (res.status === true) {
-toast.success($i18n.t('Embedding model set to "{{embedding_model}}"', res), {
-duration: 1000 * 10
-});
-}
 }
 };
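The hunks above rename the embedding settings to the uppercase backend config keys and add the `ENABLE_ASYNC_EMBEDDING` flag to the payload sent to `updateEmbeddingConfig`. A minimal sketch of the new payload shape (the helper name and defaults here are hypothetical, not part of the Open WebUI codebase):

```javascript
// Hypothetical helper illustrating the renamed request body after this
// change: snake_case keys (embedding_engine, embedding_model,
// embedding_batch_size) become the uppercase backend config names, plus
// the new ENABLE_ASYNC_EMBEDDING flag.
function buildEmbeddingConfigPayload({
	engine = '',
	model = 'sentence-transformers/all-MiniLM-L6-v2',
	batchSize = 1,
	asyncEmbedding = true
} = {}) {
	return {
		RAG_EMBEDDING_ENGINE: engine,
		RAG_EMBEDDING_MODEL: model,
		RAG_EMBEDDING_BATCH_SIZE: batchSize,
		ENABLE_ASYNC_EMBEDDING: asyncEmbedding
	};
}

console.log(buildEmbeddingConfigPayload({ engine: 'openai', model: 'text-embedding-3-small' }));
```

Note that the same key names are read back in `getEmbeddingConfig`, so the rename is symmetric on load and save.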
@@ -151,26 +154,6 @@
 toast.error($i18n.t('Docling Server URL required.'));
 return;
 }
-if (
-RAGConfig.CONTENT_EXTRACTION_ENGINE === 'docling' &&
-RAGConfig.DOCLING_DO_OCR &&
-((RAGConfig.DOCLING_OCR_ENGINE === '' && RAGConfig.DOCLING_OCR_LANG !== '') ||
-(RAGConfig.DOCLING_OCR_ENGINE !== '' && RAGConfig.DOCLING_OCR_LANG === ''))
-) {
-toast.error(
-$i18n.t('Both Docling OCR Engine and Language(s) must be provided or both left empty.')
-);
-return;
-}
-if (
-RAGConfig.CONTENT_EXTRACTION_ENGINE === 'docling' &&
-RAGConfig.DOCLING_DO_OCR === false &&
-RAGConfig.DOCLING_FORCE_OCR === true
-) {
-toast.error($i18n.t('In order to force OCR, performing OCR must be enabled.'));
-return;
-}
 if (
 RAGConfig.CONTENT_EXTRACTION_ENGINE === 'datalab_marker' &&
 RAGConfig.DATALAB_MARKER_ADDITIONAL_CONFIG &&
@@ -238,12 +221,6 @@
 ALLOWED_FILE_EXTENSIONS: RAGConfig.ALLOWED_FILE_EXTENSIONS.split(',')
 .map((ext) => ext.trim())
 .filter((ext) => ext !== ''),
-DOCLING_PICTURE_DESCRIPTION_LOCAL: JSON.parse(
-RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL || '{}'
-),
-DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(
-RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}'
-),
 DOCLING_PARAMS:
 typeof RAGConfig.DOCLING_PARAMS === 'string' && RAGConfig.DOCLING_PARAMS.trim() !== ''
 ? JSON.parse(RAGConfig.DOCLING_PARAMS)
@@ -260,9 +237,10 @@
 const embeddingConfig = await getEmbeddingConfig(localStorage.token);
 if (embeddingConfig) {
-embeddingEngine = embeddingConfig.embedding_engine;
-embeddingModel = embeddingConfig.embedding_model;
-embeddingBatchSize = embeddingConfig.embedding_batch_size ?? 1;
+RAG_EMBEDDING_ENGINE = embeddingConfig.RAG_EMBEDDING_ENGINE;
+RAG_EMBEDDING_MODEL = embeddingConfig.RAG_EMBEDDING_MODEL;
+RAG_EMBEDDING_BATCH_SIZE = embeddingConfig.RAG_EMBEDDING_BATCH_SIZE ?? 1;
+ENABLE_ASYNC_EMBEDDING = embeddingConfig.ENABLE_ASYNC_EMBEDDING ?? true;
 OpenAIKey = embeddingConfig.openai_config.key;
 OpenAIUrl = embeddingConfig.openai_config.url;
@@ -281,16 +259,6 @@
 const config = await getRAGConfig(localStorage.token);
 config.ALLOWED_FILE_EXTENSIONS = (config?.ALLOWED_FILE_EXTENSIONS ?? []).join(', ');
-config.DOCLING_PICTURE_DESCRIPTION_LOCAL = JSON.stringify(
-config.DOCLING_PICTURE_DESCRIPTION_LOCAL ?? {},
-null,
-2
-);
-config.DOCLING_PICTURE_DESCRIPTION_API = JSON.stringify(
-config.DOCLING_PICTURE_DESCRIPTION_API ?? {},
-null,
-2
-);
 config.DOCLING_PARAMS =
 typeof config.DOCLING_PARAMS === 'object'
 ? JSON.stringify(config.DOCLING_PARAMS ?? {}, null, 2)
@@ -359,7 +327,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="mb-2.5 flex flex-col w-full justify-between">
 <div class="flex w-full justify-between mb-1">
@@ -589,173 +557,18 @@
 </div>
 </div>
 {:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'docling'}
-<div class="flex w-full mt-1">
+<div class="my-0.5 flex gap-2 pr-2">
 <input
 class="flex-1 w-full text-sm bg-transparent outline-hidden"
 placeholder={$i18n.t('Enter Docling Server URL')}
 bind:value={RAGConfig.DOCLING_SERVER_URL}
 />
-</div>
-<div class="flex w-full mt-2">
-<div class="flex-1 flex justify-between">
-<div class=" self-center text-xs font-medium">
-{$i18n.t('Perform OCR')}
-</div>
-<div class="flex items-center relative">
-<Switch bind:state={RAGConfig.DOCLING_DO_OCR} />
-</div>
-</div>
-</div>
-{#if RAGConfig.DOCLING_DO_OCR}
-<div class="flex w-full mt-2">
-<input
-class="flex-1 w-full text-sm bg-transparent outline-hidden"
-placeholder={$i18n.t('Enter Docling OCR Engine')}
-bind:value={RAGConfig.DOCLING_OCR_ENGINE}
-/>
-<input
-class="flex-1 w-full text-sm bg-transparent outline-hidden"
-placeholder={$i18n.t('Enter Docling OCR Language(s)')}
-bind:value={RAGConfig.DOCLING_OCR_LANG}
-/>
-</div>
-{/if}
-<div class="flex w-full mt-2">
-<div class="flex-1 flex justify-between">
-<div class=" self-center text-xs font-medium">
-{$i18n.t('Force OCR')}
-</div>
-<div class="flex items-center relative">
-<Switch bind:state={RAGConfig.DOCLING_FORCE_OCR} />
-</div>
-</div>
-</div>
-<div class="flex justify-between w-full mt-2">
-<div class="self-center text-xs font-medium">
-<Tooltip content={''} placement="top-start">
-{$i18n.t('PDF Backend')}
-</Tooltip>
-</div>
-<div class="">
-<select
-class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
-bind:value={RAGConfig.DOCLING_PDF_BACKEND}
->
-<option value="pypdfium2">{$i18n.t('pypdfium2')}</option>
-<option value="dlparse_v1">{$i18n.t('dlparse_v1')}</option>
-<option value="dlparse_v2">{$i18n.t('dlparse_v2')}</option>
-<option value="dlparse_v4">{$i18n.t('dlparse_v4')}</option>
-</select>
-</div>
-</div>
-<div class="flex justify-between w-full mt-2">
-<div class="self-center text-xs font-medium">
-<Tooltip content={''} placement="top-start">
-{$i18n.t('Table Mode')}
-</Tooltip>
-</div>
-<div class="">
-<select
-class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
-bind:value={RAGConfig.DOCLING_TABLE_MODE}
->
-<option value="fast">{$i18n.t('fast')}</option>
-<option value="accurate">{$i18n.t('accurate')}</option>
-</select>
-</div>
-</div>
-<div class="flex justify-between w-full mt-2">
-<div class="self-center text-xs font-medium">
-<Tooltip content={''} placement="top-start">
-{$i18n.t('Pipeline')}
-</Tooltip>
-</div>
-<div class="">
-<select
-class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
-bind:value={RAGConfig.DOCLING_PIPELINE}
->
-<option value="standard">{$i18n.t('standard')}</option>
-<option value="vlm">{$i18n.t('vlm')}</option>
-</select>
-</div>
-</div>
-<div class="flex w-full mt-2">
-<div class="flex-1 flex justify-between">
-<div class=" self-center text-xs font-medium">
-{$i18n.t('Describe Pictures in Documents')}
-</div>
-<div class="flex items-center relative">
-<Switch bind:state={RAGConfig.DOCLING_DO_PICTURE_DESCRIPTION} />
-</div>
-</div>
-</div>
-{#if RAGConfig.DOCLING_DO_PICTURE_DESCRIPTION}
-<div class="flex justify-between w-full mt-2">
-<div class="self-center text-xs font-medium">
-<Tooltip content={''} placement="top-start">
-{$i18n.t('Picture Description Mode')}
-</Tooltip>
-</div>
-<div class="">
-<select
-class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
-bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE}
->
-<option value="">{$i18n.t('Default')}</option>
-<option value="local">{$i18n.t('Local')}</option>
-<option value="api">{$i18n.t('API')}</option>
-</select>
-</div>
-</div>
-{#if RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE === 'local'}
-<div class="flex flex-col gap-2 mt-2">
-<div class=" flex flex-col w-full justify-between">
-<div class=" mb-1 text-xs font-medium">
-{$i18n.t('Picture Description Local Config')}
-</div>
-<div class="flex w-full items-center relative">
-<Tooltip
-content={$i18n.t(
-'Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.'
-)}
-placement="top-start"
-className="w-full"
->
-<Textarea
-bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL}
-placeholder={$i18n.t('Enter Config in JSON format')}
-/>
-</Tooltip>
-</div>
-</div>
-</div>
-{:else if RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE === 'api'}
-<div class="flex flex-col gap-2 mt-2">
-<div class=" flex flex-col w-full justify-between">
-<div class=" mb-1 text-xs font-medium">
-{$i18n.t('Picture Description API Config')}
-</div>
-<div class="flex w-full items-center relative">
-<Tooltip
-content={$i18n.t(
-'API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.'
-)}
-placement="top-start"
-className="w-full"
->
-<Textarea
-bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_API}
-placeholder={$i18n.t('Enter Config in JSON format')}
-/>
-</Tooltip>
-</div>
-</div>
-</div>
-{/if}
-{/if}
+<SensitiveInput
+placeholder={$i18n.t('Enter Docling API Key')}
+bind:value={RAGConfig.DOCLING_API_KEY}
+required={false}
+/>
+</div>
 <div class="flex flex-col gap-2 mt-2">
 <div class=" flex flex-col w-full justify-between">
@@ -784,6 +597,20 @@
 required={false}
 />
 </div>
+<div class="my-0.5 flex flex-col w-full">
+<div class=" mb-1 text-xs font-medium">
+{$i18n.t('Document Intelligence Model')}
+</div>
+<div class="flex w-full">
+<div class="flex-1 mr-2">
+<input
+class="flex-1 w-full text-sm bg-transparent outline-hidden"
+placeholder={$i18n.t('Enter Document Intelligence Model')}
+bind:value={RAGConfig.DOCUMENT_INTELLIGENCE_MODEL}
+/>
+</div>
+</div>
+</div>
 {:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'mistral_ocr'}
 <div class="my-0.5 flex gap-2 pr-2">
 <input
@@ -949,7 +776,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Embedding')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2.5 flex flex-col w-full justify-between">
 <div class="flex w-full justify-between">
@@ -959,17 +786,17 @@
 <div class="flex items-center relative">
 <select
 class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 p-1 text-xs bg-transparent outline-hidden text-right"
-bind:value={embeddingEngine}
+bind:value={RAG_EMBEDDING_ENGINE}
 placeholder={$i18n.t('Select an embedding model engine')}
 on:change={(e) => {
 if (e.target.value === 'ollama') {
-embeddingModel = '';
+RAG_EMBEDDING_MODEL = '';
 } else if (e.target.value === 'openai') {
-embeddingModel = 'text-embedding-3-small';
+RAG_EMBEDDING_MODEL = 'text-embedding-3-small';
 } else if (e.target.value === 'azure_openai') {
-embeddingModel = 'text-embedding-3-small';
+RAG_EMBEDDING_MODEL = 'text-embedding-3-small';
 } else if (e.target.value === '') {
-embeddingModel = 'sentence-transformers/all-MiniLM-L6-v2';
+RAG_EMBEDDING_MODEL = 'sentence-transformers/all-MiniLM-L6-v2';
 }
 }}
 >
@@ -981,7 +808,7 @@
 </div>
 </div>
-{#if embeddingEngine === 'openai'}
+{#if RAG_EMBEDDING_ENGINE === 'openai'}
 <div class="my-0.5 flex gap-2 pr-2">
 <input
 class="flex-1 w-full text-sm bg-transparent outline-hidden"
@@ -996,7 +823,7 @@
 required={false}
 />
 </div>
-{:else if embeddingEngine === 'ollama'}
+{:else if RAG_EMBEDDING_ENGINE === 'ollama'}
 <div class="my-0.5 flex gap-2 pr-2">
 <input
 class="flex-1 w-full text-sm bg-transparent outline-hidden"
@@ -1011,7 +838,7 @@
 required={false}
 />
 </div>
-{:else if embeddingEngine === 'azure_openai'}
+{:else if RAG_EMBEDDING_ENGINE === 'azure_openai'}
 <div class="my-0.5 flex flex-col gap-2 pr-2 w-full">
 <div class="flex gap-2">
 <input
@@ -1038,12 +865,12 @@
 <div class=" mb-1 text-xs font-medium">{$i18n.t('Embedding Model')}</div>
 <div class="">
-{#if embeddingEngine === 'ollama'}
+{#if RAG_EMBEDDING_ENGINE === 'ollama'}
 <div class="flex w-full">
 <div class="flex-1 mr-2">
 <input
 class="flex-1 w-full text-sm bg-transparent outline-hidden"
-bind:value={embeddingModel}
+bind:value={RAG_EMBEDDING_MODEL}
 placeholder={$i18n.t('Set embedding model')}
 required
 />
@@ -1055,13 +882,13 @@
 <input
 class="flex-1 w-full text-sm bg-transparent outline-hidden"
 placeholder={$i18n.t('Set embedding model (e.g. {{model}})', {
-model: embeddingModel.slice(-40)
+model: RAG_EMBEDDING_MODEL.slice(-40)
 })}
-bind:value={embeddingModel}
+bind:value={RAG_EMBEDDING_MODEL}
 />
 </div>
-{#if embeddingEngine === ''}
+{#if RAG_EMBEDDING_ENGINE === ''}
 <button
 class="px-2.5 bg-transparent text-gray-800 dark:bg-transparent dark:text-gray-100 rounded-lg transition"
 on:click={() => {
@@ -1101,7 +928,7 @@
 </div>
 </div>
-{#if embeddingEngine === 'ollama' || embeddingEngine === 'openai' || embeddingEngine === 'azure_openai'}
+{#if RAG_EMBEDDING_ENGINE === 'ollama' || RAG_EMBEDDING_ENGINE === 'openai' || RAG_EMBEDDING_ENGINE === 'azure_openai'}
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">
 {$i18n.t('Embedding Batch Size')}
@@ -1109,7 +936,7 @@
 <div class="">
 <input
-bind:value={embeddingBatchSize}
+bind:value={RAG_EMBEDDING_BATCH_SIZE}
 type="number"
 class=" bg-transparent text-center w-14 outline-none"
 min="-2"
@@ -1118,13 +945,29 @@
 />
 </div>
 </div>
+<div class=" mb-2.5 flex w-full justify-between">
+<div class="self-center text-xs font-medium">
+<Tooltip
+content={$i18n.t(
+'Runs embedding tasks concurrently to speed up processing. Turn off if rate limits become an issue.'
+)}
+placement="top-start"
+>
+{$i18n.t('Async Embedding Processing')}
+</Tooltip>
+</div>
+<div class="flex items-center relative">
+<Switch bind:state={ENABLE_ASYNC_EMBEDDING} />
+</div>
+</div>
 {/if}
 </div>
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Retrieval')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">{$i18n.t('Full Context Mode')}</div>
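The new "Async Embedding Processing" switch is described by its tooltip as running embedding tasks concurrently rather than one batch at a time. A minimal sketch of that trade-off (the functions below are illustrative stand-ins, not the Open WebUI backend implementation):

```javascript
// Stand-in for a real embedding API call; returns one fake 1-dim
// embedding (the text length) per input string.
async function embedBatch(batch) {
	return batch.map((text) => [text.length]);
}

// With asyncEmbedding=true all batches are in flight at once (faster,
// but more likely to hit provider rate limits); with false they run
// strictly one after another.
async function embedAll(batches, { asyncEmbedding = true } = {}) {
	if (asyncEmbedding) {
		return Promise.all(batches.map(embedBatch));
	}
	const results = [];
	for (const batch of batches) {
		results.push(await embedBatch(batch));
	}
	return results;
}

embedAll([['a'], ['bb', 'ccc']]).then((vectors) => console.log(vectors));
```

Both modes produce the same embeddings; only the request concurrency differs, which is why the tooltip suggests disabling the toggle when rate limits become an issue.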
@@ -1382,7 +1225,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Files')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">{$i18n.t('Allowed File Extensions')}</div>
@@ -1494,7 +1337,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Integration')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">{$i18n.t('Google Drive')}</div>
@@ -1514,13 +1357,14 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Danger Zone')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">{$i18n.t('Reset Upload Directory')}</div>
 <div class="flex items-center relative">
 <button
 class="text-xs"
+type="button"
 on:click={() => {
 showResetUploadDirConfirm = true;
 }}
@@ -1537,6 +1381,7 @@
 <div class="flex items-center relative">
 <button
 class="text-xs"
+type="button"
 on:click={() => {
 showResetConfirm = true;
 }}
@@ -1552,6 +1397,7 @@
 <div class="flex items-center relative">
 <button
 class="text-xs"
+type="button"
 on:click={() => {
 showReindexConfirm = true;
 }}

View file

@@ -106,7 +106,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="mb-2.5 flex w-full justify-between">
 <div class=" text-xs font-medium">{$i18n.t('Arena Models')}</div>
@@ -139,7 +139,7 @@
 </div>
 </div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="flex flex-col gap-2">
 {#if (evaluationConfig?.EVALUATION_ARENA_MODELS ?? []).length > 0}

View file

@@ -292,11 +292,9 @@
 <hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
-<div class="my-2 -mx-2">
-<div class="px-4 py-3 bg-gray-50 dark:bg-gray-950 rounded-3xl">
+<div class="my-2">
 <AccessControl bind:accessControl />
 </div>
-</div>
 <hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
@@ -352,7 +350,7 @@
 <div class="flex items-center">
 <select
-class="w-full py-1 text-sm rounded-lg bg-transparent {selectedModelId
+class="dark:bg-gray-900 w-full py-1 text-sm rounded-lg bg-transparent {selectedModelId
 ? ''
 : 'text-gray-500'} placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-hidden"
 bind:value={selectedModelId}

View file

@@ -129,7 +129,7 @@
 <div class="mb-3.5">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="mb-2.5">
 <div class=" mb-1 text-xs font-medium flex space-x-2 items-center">
@@ -287,7 +287,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Authentication')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">{$i18n.t('Default User Role')}</div>
@@ -660,7 +660,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Features')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="mb-2.5 flex w-full items-center justify-between pr-2">
 <div class=" self-center text-xs font-medium">
@@ -676,6 +676,14 @@
 <Switch bind:state={adminConfig.ENABLE_MESSAGE_RATING} />
 </div>
+<div class="mb-2.5 flex w-full items-center justify-between pr-2">
+<div class=" self-center text-xs font-medium">
+{$i18n.t('Folders')}
+</div>
+<Switch bind:state={adminConfig.ENABLE_FOLDERS} />
+</div>
 <div class="mb-2.5 flex w-full items-center justify-between pr-2">
 <div class=" self-center text-xs font-medium">
 {$i18n.t('Notes')} ({$i18n.t('Beta')})

View file

@@ -291,7 +291,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="mb-2.5">
 <div class="flex w-full justify-between items-center">
@@ -309,7 +309,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Create Image')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 {#if config.ENABLE_IMAGE_GENERATION}
 <div class="mb-2.5">
@@ -882,7 +882,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Edit Image')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="mb-2.5">
 <div class="flex w-full justify-between items-center">

View file

@@ -115,7 +115,7 @@
 <div class="mb-3.5">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Tasks')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2 font-medium flex items-center">
 <div class=" text-xs mr-1">{$i18n.t('Task Model')}</div>
@@ -423,7 +423,7 @@
 <div class="mb-3.5">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('UI')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="mb-2.5">
 <div class="flex w-full justify-between">

View file

@@ -811,9 +811,8 @@
 bind:value={deleteModelTag}
 placeholder={$i18n.t('Select a model')}
 >
-{#if !deleteModelTag}
 <option value="" disabled selected>{$i18n.t('Select a model')}</option>
-{/if}
 {#each ollamaModels as model}
 <option value={model.id} class="bg-gray-50 dark:bg-gray-700"
 >{model.name + ' (' + (model.size / 1024 ** 3).toFixed(1) + ' GB)'}</option

View file

@@ -19,7 +19,7 @@
 <div class="flex items-center -mr-1">
 <select
-class="w-full py-1 text-sm rounded-lg bg-transparent {selectedModelId
+class="dark:bg-gray-900 w-full py-1 text-sm rounded-lg bg-transparent {selectedModelId
 ? ''
 : 'text-gray-500'} placeholder:text-gray-300 dark:placeholder:text-gray-700 outline-hidden"
 bind:value={selectedModelId}

View file

@@ -418,7 +418,7 @@
 </div>
 </div>
-<hr class="border-gray-100 dark:border-gray-850 my-3 w-full" />
+<hr class="border-gray-100/30 dark:border-gray-850/30 my-3 w-full" />
 {#if pipelines !== null}
 {#if pipelines.length > 0}

View file

@@ -61,7 +61,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class="mb-2.5 flex flex-col w-full justify-between">
 <!-- {$i18n.t(`Failed to connect to {{URL}} OpenAPI tool server`, {

View file

@@ -97,7 +97,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">
@@ -746,7 +746,7 @@
 <div class="mb-3">
 <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Loader')}</div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2" />
 <div class=" mb-2.5 flex w-full justify-between">
 <div class=" self-center text-xs font-medium">

View file

@@ -100,6 +100,7 @@
 <EditGroupModal
 bind:show={showAddGroupModal}
 edit={false}
+tabs={['general', 'permissions']}
 permissions={defaultPermissions}
 onSubmit={addGroupHandler}
 />
@@ -175,7 +176,7 @@
 <div class="w-full basis-2/5 text-right">{$i18n.t('Users')}</div>
 </div>
-<hr class="mt-1.5 border-gray-100 dark:border-gray-850" />
+<hr class="mt-1.5 border-gray-100/30 dark:border-gray-850/30" />
 {#each filteredGroups as group}
 <div class="my-2">
@@ -185,7 +186,7 @@
 </div>
 {/if}
-<hr class="mb-2 border-gray-100 dark:border-gray-850" />
+<hr class="mb-2 border-gray-100/30 dark:border-gray-850/30" />
 <EditGroupModal
 bind:show={showDefaultPermissionsModal}

View file

@@ -84,11 +84,13 @@
 },
 features: {
 api_keys: false,
+notes: true,
+channels: true,
+folders: true,
 direct_tool_servers: false,
 web_search: true,
 image_generation: true,
-code_interpreter: true,
-notes: true
+code_interpreter: true
 }
 };

View file

@@ -65,7 +65,7 @@
 </div>
 </div>
-<hr class="border-gray-50 dark:border-gray-850 my-1" />
+<hr class="border-gray-50 dark:border-gray-850/30 my-1" />
 <div class="flex flex-col w-full mt-2">
 <div class=" mb-1 text-xs text-gray-500">{$i18n.t('Setting')}</div>


@@ -54,11 +54,13 @@
 },
 features: {
 api_keys: false,
+notes: true,
+channels: true,
+folders: true,
 direct_tool_servers: false,
 web_search: true,
 image_generation: true,
-code_interpreter: true,
-notes: true
+code_interpreter: true
 }
 };
@@ -214,7 +216,7 @@
 </div>
 </div>
-<hr class=" border-gray-100 dark:border-gray-850" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30" />
 <div>
 <div class=" mb-2 text-sm font-medium">{$i18n.t('Sharing Permissions')}</div>
@@ -390,7 +392,7 @@
 {/if}
 </div>
-<hr class=" border-gray-100 dark:border-gray-850" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30" />
 <div>
 <div class=" mb-2 text-sm font-medium">{$i18n.t('Chat Permissions')}</div>
@@ -704,7 +706,7 @@
 {/if}
 </div>
-<hr class=" border-gray-100 dark:border-gray-850" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30" />
 <div>
 <div class=" mb-2 text-sm font-medium">{$i18n.t('Features Permissions')}</div>
@@ -725,6 +727,54 @@
 {/if}
 </div>
+<div class="flex flex-col w-full">
+<div class="flex w-full justify-between my-1">
+<div class=" self-center text-xs font-medium">
+{$i18n.t('Notes')}
+</div>
+<Switch bind:state={permissions.features.notes} />
+</div>
+{#if defaultPermissions?.features?.notes && !permissions.features.notes}
+<div>
+<div class="text-xs text-gray-500">
+{$i18n.t('This is a default user permission and will remain enabled.')}
+</div>
+</div>
+{/if}
+</div>
+<div class="flex flex-col w-full">
+<div class="flex w-full justify-between my-1">
+<div class=" self-center text-xs font-medium">
+{$i18n.t('Channels')}
+</div>
+<Switch bind:state={permissions.features.channels} />
+</div>
+{#if defaultPermissions?.features?.channels && !permissions.features.channels}
+<div>
+<div class="text-xs text-gray-500">
+{$i18n.t('This is a default user permission and will remain enabled.')}
+</div>
+</div>
+{/if}
+</div>
+<div class="flex flex-col w-full">
+<div class="flex w-full justify-between my-1">
+<div class=" self-center text-xs font-medium">
+{$i18n.t('Folders')}
+</div>
+<Switch bind:state={permissions.features.folders} />
+</div>
+{#if defaultPermissions?.features?.folders && !permissions.features.folders}
+<div>
+<div class="text-xs text-gray-500">
+{$i18n.t('This is a default user permission and will remain enabled.')}
+</div>
+</div>
+{/if}
+</div>
 <div class="flex flex-col w-full">
 <div class="flex w-full justify-between my-1">
 <div class=" self-center text-xs font-medium">
@@ -788,21 +838,5 @@
 </div>
 {/if}
 </div>
-<div class="flex flex-col w-full">
-<div class="flex w-full justify-between my-1">
-<div class=" self-center text-xs font-medium">
-{$i18n.t('Notes')}
-</div>
-<Switch bind:state={permissions.features.notes} />
-</div>
-{#if defaultPermissions?.features?.notes && !permissions.features.notes}
-<div>
-<div class="text-xs text-gray-500">
-{$i18n.t('This is a default user permission and will remain enabled.')}
-</div>
-</div>
-{/if}
-</div>
 </div>
 </div>
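The hint text in the hunks above ("This is a default user permission and will remain enabled.") implies a simple resolution rule: a feature switched on in the default user permissions stays effective even when a group-level switch is off. A hypothetical sketch of that rule (the function name and shape are illustrative, not the actual Open WebUI implementation):

```typescript
// Hypothetical sketch: a feature is effective if it is enabled either in the
// default user permissions or in the group's own permissions.
function effectivePermission(
  defaultOn: boolean | undefined,
  groupOn: boolean
): boolean {
  // `defaultOn` may be undefined when the feature key is absent from defaults.
  return Boolean(defaultOn) || groupOn;
}
```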


@@ -11,21 +11,23 @@
 import { getUsers } from '$lib/apis/users';
 import { toast } from 'svelte-sonner';
+import { addUserToGroup, removeUserFromGroup } from '$lib/apis/groups';
+import { WEBUI_API_BASE_URL } from '$lib/constants';
 import Tooltip from '$lib/components/common/Tooltip.svelte';
 import Checkbox from '$lib/components/common/Checkbox.svelte';
 import Badge from '$lib/components/common/Badge.svelte';
 import Search from '$lib/components/icons/Search.svelte';
 import Pagination from '$lib/components/common/Pagination.svelte';
-import { addUserToGroup, removeUserFromGroup } from '$lib/apis/groups';
 import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
 import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
-import { WEBUI_API_BASE_URL } from '$lib/constants';
+import Spinner from '$lib/components/common/Spinner.svelte';
 export let groupId: string;
 export let userCount = 0;
-let users = [];
-let total = 0;
+let users = null;
+let total = null;
 let query = '';
 let orderBy = `group_id:${groupId}`; // default sort key
@@ -100,13 +102,18 @@
 </div>
 </div>
+{#if users === null || total === null}
+<div class="my-10">
+<Spinner className="size-5" />
+</div>
+{:else}
 {#if users.length > 0}
 <div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full">
 <table
 class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full"
 >
 <thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
-<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
+<tr class=" border-b-[1.5px] border-gray-50/50 dark:border-gray-800/10">
 <th
 scope="col"
 class="px-2.5 py-2 cursor-pointer text-left w-8"
@@ -204,7 +211,7 @@
 </tr>
 </thead>
 <tbody class="">
-{#each users as user, userIdx}
+{#each users as user, userIdx (user?.id ?? userIdx)}
 <tr class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs">
 <td class=" px-3 py-1 w-8">
 <div class="flex w-full justify-center">
@@ -259,4 +266,5 @@
 {#if total > 30}
 <Pagination bind:page count={total} perPage={30} />
 {/if}
+{/if}
 </div>


@@ -33,6 +33,7 @@
 import Banner from '$lib/components/common/Banner.svelte';
 import Markdown from '$lib/components/chat/Messages/Markdown.svelte';
 import Spinner from '$lib/components/common/Spinner.svelte';
+import ProfilePreview from '$lib/components/channel/Messages/Message/ProfilePreview.svelte';
 const i18n = getContext('i18n');
@@ -96,11 +97,7 @@
 }
 };
-$: if (page) {
-getUserList();
-}
-$: if (query !== null && orderBy && direction) {
+$: if (query !== null && page !== null && orderBy !== null && direction !== null) {
 getUserList();
 }
 </script>
@@ -221,7 +218,7 @@
 <div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full">
 <table class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full">
 <thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
-<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
+<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850/30">
 <th
 scope="col"
 class="px-2.5 py-2 cursor-pointer select-none"
@@ -359,14 +356,27 @@
 </button>
 </td>
 <td class="px-3 py-1 font-medium text-gray-900 dark:text-white max-w-48">
-<div class="flex items-center">
+<div class="flex items-center gap-2">
+<ProfilePreview {user} side="right" align="center" sideOffset={6}>
 <img
-class="rounded-full w-6 h-6 object-cover mr-2.5 flex-shrink-0"
+class="rounded-full w-6 h-6 object-cover mr-0.5 flex-shrink-0"
 src={`${WEBUI_API_BASE_URL}/users/${user.id}/profile/image`}
 alt="user"
 />
+</ProfilePreview>
 <div class="font-medium truncate">{user.name}</div>
+{#if user?.last_active_at && Date.now() / 1000 - user.last_active_at < 180}
+<div>
+<span class="relative flex size-1.5">
+<span
+class="absolute inline-flex h-full w-full animate-ping rounded-full bg-green-400 opacity-75"
+></span>
+<span class="relative inline-flex size-1.5 rounded-full bg-green-500"></span>
+</span>
+</div>
+{/if}
 </div>
 </td>
 <td class=" px-3 py-1"> {user.email} </td>
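The green presence dot added above renders when `Date.now() / 1000 - user.last_active_at < 180`, i.e. the user's `last_active_at` (epoch seconds) falls within the last three minutes. A hypothetical helper capturing that check (the function name is illustrative; the template inlines the expression directly):

```typescript
// Hypothetical sketch: a user counts as "recently active" when their
// last_active_at timestamp (epoch seconds) is within the last 180 seconds.
function isRecentlyActive(
  lastActiveAt: number | null | undefined,
  nowMs: number = Date.now()
): boolean {
  if (!lastActiveAt) return false;
  return nowMs / 1000 - lastActiveAt < 180;
}
```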


@@ -180,7 +180,7 @@
 <div class="flex-1">
 <select
-class="w-full capitalize rounded-lg text-sm bg-transparent dark:disabled:text-gray-500 outline-hidden"
+class="dark:bg-gray-900 w-full capitalize rounded-lg text-sm bg-transparent dark:disabled:text-gray-500 outline-hidden"
 bind:value={_user.role}
 placeholder={$i18n.t('Enter Your Role')}
 required
@@ -207,7 +207,7 @@
 </div>
 </div>
-<hr class=" border-gray-100 dark:border-gray-850 my-2.5 w-full" />
+<hr class=" border-gray-100/30 dark:border-gray-850/30 my-2.5 w-full" />
 <div class="flex flex-col w-full">
 <div class=" mb-1 text-xs text-gray-500">{$i18n.t('Email')}</div>


@@ -180,12 +180,17 @@
 </div>
 </div>
-{#if _user?.oauth_sub}
+{#if _user?.oauth}
 <div class="flex flex-col w-full">
 <div class=" mb-1 text-xs text-gray-500">{$i18n.t('OAuth ID')}</div>
-<div class="flex-1 text-sm break-all mb-1">
-{_user.oauth_sub ?? ''}
+<div class="flex-1 text-sm break-all mb-1 flex flex-col space-y-1">
+{#each Object.keys(_user.oauth) as key}
+<div>
+<span class="text-gray-500">{key}</span>
+<span class="">{_user.oauth[key]?.sub}</span>
+</div>
+{/each}
 </div>
 </div>
 {/if}


@@ -4,8 +4,16 @@
 import { onDestroy, onMount, tick } from 'svelte';
 import { goto } from '$app/navigation';
+import { v4 as uuidv4 } from 'uuid';
-import { chatId, showSidebar, socket, user } from '$lib/stores';
+import {
+chatId,
+channels,
+channelId as _channelId,
+showSidebar,
+socket,
+user
+} from '$lib/stores';
 import { getChannelById, getChannelMessages, sendMessage } from '$lib/apis/channels';
 import Messages from './Messages.svelte';
@@ -15,9 +23,12 @@
 import EllipsisVertical from '../icons/EllipsisVertical.svelte';
 import Thread from './Thread.svelte';
 import i18n from '$lib/i18n';
+import Spinner from '../common/Spinner.svelte';
 export let id = '';
+let currentId = null;
 let scrollEnd = true;
 let messagesContainerElement = null;
 let chatInputElement = null;
@@ -43,7 +54,37 @@
 }
 };
+const updateLastReadAt = async (channelId) => {
+$socket?.emit('events:channel', {
+channel_id: channelId,
+message_id: null,
+data: {
+type: 'last_read_at'
+}
+});
+channels.set(
+$channels.map((channel) => {
+if (channel.id === channelId) {
+return {
+...channel,
+unread_count: 0
+};
+}
+return channel;
+})
+);
+};
 const initHandler = async () => {
+if (currentId) {
+updateLastReadAt(currentId);
+}
+currentId = id;
+updateLastReadAt(id);
+_channelId.set(id);
 top = false;
 messages = null;
 channel = null;
@@ -78,7 +119,8 @@
 if (type === 'message') {
 if ((data?.parent_id ?? null) === null) {
-messages = [data, ...messages];
+const tempId = data?.temp_id ?? null;
+messages = [{ ...data, temp_id: null }, ...messages.filter((m) => m?.temp_id !== tempId)];
 if (typingUsers.find((user) => user.id === event.user.id)) {
 typingUsers = typingUsers.filter((user) => user.id !== event.user.id);
@@ -143,11 +185,30 @@
 return;
 }
-const res = await sendMessage(localStorage.token, id, {
+const tempId = uuidv4();
+const message = {
+temp_id: tempId,
 content: content,
 data: data,
 reply_to_id: replyToMessage?.id ?? null
-}).catch((error) => {
+};
+const ts = Date.now() * 1000000; // nanoseconds
+messages = [
+{
+...message,
+id: tempId,
+user_id: $user?.id,
+user: $user,
+reply_to_message: replyToMessage ?? null,
+created_at: ts,
+updated_at: ts
+},
+...messages
+];
+const res = await sendMessage(localStorage.token, id, message).catch((error) => {
 toast.error(`${error}`);
 return null;
 });
@@ -170,6 +231,8 @@
 }
 }
 });
+updateLastReadAt(id);
 };
 let mediaQuery;
@@ -197,12 +260,32 @@
 });
 onDestroy(() => {
+// last read at
+updateLastReadAt(id);
+_channelId.set(null);
 $socket?.off('events:channel', channelEventHandler);
 });
 </script>
 <svelte:head>
+{#if channel?.type === 'dm'}
+<title
+>{channel?.name.trim() ||
+channel?.users.reduce((a, e, i, arr) => {
+if (e.id === $user?.id) {
+return a;
+}
+if (a) {
+return `${a}, ${e.name}`;
+} else {
+return e.name;
+}
+}, '')} • Open WebUI</title
+>
+{:else}
 <title>#{channel?.name ?? 'Channel'} • Open WebUI</title>
+{/if}
 </svelte:head>
 <div
@@ -213,10 +296,28 @@
 >
 <PaneGroup direction="horizontal" class="w-full h-full">
 <Pane defaultSize={50} minSize={50} class="h-full flex flex-col w-full relative">
-<Navbar {channel} />
+<Navbar
+{channel}
+onPin={(messageId, pinned) => {
+messages = messages.map((message) => {
+if (message.id === messageId) {
+return {
+...message,
+is_pinned: pinned
+};
+}
+return message;
+});
+}}
+onUpdate={async () => {
+channel = await getChannelById(localStorage.token, id).catch((error) => {
+return null;
+});
+}}
+/>
+{#if channel && messages !== null}
 <div class="flex-1 overflow-y-auto">
-{#if channel}
 <div
 class=" pb-2.5 max-w-full z-10 scrollbar-hidden w-full h-full pt-6 flex-1 flex flex-col-reverse overflow-auto"
 id="messages-container"
@@ -256,7 +357,6 @@
 />
 {/key}
 </div>
-{/if}
 </div>
 <div class=" pb-[1rem] px-2.5">
@@ -277,6 +377,13 @@
 {scrollEnd}
 />
 </div>
+{:else}
+<div class=" flex items-center justify-center h-full w-full">
+<div class="m-auto">
+<Spinner className="size-5" />
+</div>
+</div>
+{/if}
 </Pane>
 {#if !largeScreen}
@@ -300,7 +407,7 @@
 {/if}
 {:else if threadId !== null}
 <PaneResizer
-class="relative flex items-center justify-center group border-l border-gray-50 dark:border-gray-850 hover:border-gray-200 dark:hover:border-gray-800 transition z-20"
+class="relative flex items-center justify-center group border-l border-gray-50 dark:border-gray-850/30 hover:border-gray-200 dark:hover:border-gray-800 transition z-20"
 id="controls-resizer"
 >
 <div
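The Channel.svelte hunks above implement optimistic message sending: the client prepends a placeholder carrying a `temp_id` (with `created_at` in nanoseconds via `Date.now() * 1000000`), and when the socket echoes the confirmed message back with the same `temp_id`, the placeholder is filtered out and replaced. A hypothetical standalone sketch of that reconcile logic (types and function names are illustrative, not the component's actual API):

```typescript
// Hypothetical sketch of the optimistic-send pattern in the diff above.
type ChannelMessage = {
  id: string;
  temp_id: string | null;
  content: string;
  created_at: number; // nanoseconds, matching Date.now() * 1e6 in the diff
};

// Prepend a local placeholder whose id is its temp_id.
function appendOptimistic(
  messages: ChannelMessage[],
  content: string,
  tempId: string
): ChannelMessage[] {
  const ts = Date.now() * 1_000_000;
  return [{ id: tempId, temp_id: tempId, content, created_at: ts }, ...messages];
}

// On a socket echo, drop the matching placeholder (if any) and prepend
// the confirmed message with temp_id cleared.
function reconcile(
  messages: ChannelMessage[],
  incoming: ChannelMessage
): ChannelMessage[] {
  const tempId = incoming.temp_id ?? null;
  return [
    { ...incoming, temp_id: null },
    ...messages.filter((m) => m.temp_id !== tempId)
  ];
}
```

Note that a message from another user arrives with `temp_id: null`, so the filter leaves local placeholders untouched, which is the behavior the `channelEventHandler` change above relies on.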


@@ -0,0 +1,119 @@
<script lang="ts">
import { toast } from 'svelte-sonner';
import { getContext, onMount } from 'svelte';
const i18n = getContext('i18n');
import { removeMembersById } from '$lib/apis/channels';
import Spinner from '$lib/components/common/Spinner.svelte';
import Modal from '$lib/components/common/Modal.svelte';
import XMark from '$lib/components/icons/XMark.svelte';
import Hashtag from '../icons/Hashtag.svelte';
import Lock from '../icons/Lock.svelte';
import UserList from './ChannelInfoModal/UserList.svelte';
import AddMembersModal from './ChannelInfoModal/AddMembersModal.svelte';
export let show = false;
export let channel = null;
export let onUpdate = () => {};
let showAddMembersModal = false;
const submitHandler = async () => {};
const removeMemberHandler = async (userId) => {
const res = await removeMembersById(localStorage.token, channel.id, {
user_ids: [userId]
}).catch((error) => {
toast.error(`${error}`);
return null;
});
if (res) {
toast.success($i18n.t('Member removed successfully'));
onUpdate();
} else {
toast.error($i18n.t('Failed to remove member'));
}
};
const init = () => {};
$: if (show) {
init();
}
onMount(() => {
init();
});
</script>
{#if channel}
<AddMembersModal bind:show={showAddMembersModal} {channel} {onUpdate} />
<Modal size="sm" bind:show>
<div>
<div class=" flex justify-between dark:text-gray-100 px-5 pt-4 mb-1.5">
<div class="self-center text-base">
<div class="flex items-center gap-0.5 shrink-0">
{#if channel?.type === 'dm'}
<div class=" text-left self-center overflow-hidden w-full line-clamp-1 flex-1">
{$i18n.t('Direct Message')}
</div>
{:else}
<div class=" size-4 justify-center flex items-center">
{#if channel?.type === 'group' ? !channel?.is_private : channel?.access_control === null}
<Hashtag className="size-3.5" strokeWidth="2.5" />
{:else}
<Lock className="size-5.5" strokeWidth="2" />
{/if}
</div>
<div class=" text-left self-center overflow-hidden w-full line-clamp-1 flex-1">
{channel.name}
</div>
{/if}
</div>
</div>
<button
class="self-center"
on:click={() => {
show = false;
}}
>
<XMark className={'size-5'} />
</button>
</div>
<div class="flex flex-col md:flex-row w-full px-3 pb-4 md:space-x-4 dark:text-gray-200">
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<form
class="flex flex-col w-full"
on:submit={(e) => {
e.preventDefault();
submitHandler();
}}
>
<div class="flex flex-col w-full h-full pb-2">
<UserList
{channel}
onAdd={channel?.type === 'group' && channel?.is_manager
? () => {
showAddMembersModal = true;
}
: null}
onRemove={channel?.type === 'group' && channel?.is_manager
? (userId) => {
removeMemberHandler(userId);
}
: null}
search={channel?.type !== 'dm'}
sort={channel?.type !== 'dm'}
/>
</div>
</form>
</div>
</div>
</div>
</Modal>
{/if}


@@ -0,0 +1,106 @@
<script lang="ts">
import { toast } from 'svelte-sonner';
import { getContext, onMount } from 'svelte';
const i18n = getContext('i18n');
import { addMembersById } from '$lib/apis/channels';
import Modal from '$lib/components/common/Modal.svelte';
import XMark from '$lib/components/icons/XMark.svelte';
import MemberSelector from '$lib/components/workspace/common/MemberSelector.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';
export let show = false;
export let channel = null;
export let onUpdate = () => {};
let groupIds = [];
let userIds = [];
let loading = false;
const submitHandler = async () => {
const res = await addMembersById(localStorage.token, channel.id, {
user_ids: userIds,
group_ids: groupIds
}).catch((error) => {
toast.error(`${error}`);
return null;
});
if (res) {
toast.success($i18n.t('Members added successfully'));
onUpdate();
show = false;
} else {
toast.error($i18n.t('Failed to add members'));
}
};
const reset = () => {
userIds = [];
groupIds = [];
loading = false;
};
$: if (!show) {
reset();
}
</script>
{#if channel}
<Modal size="sm" bind:show>
<div>
<div class=" flex justify-between dark:text-gray-100 px-5 pt-4 mb-1.5">
<div class="self-center text-base">
<div class="flex items-center gap-0.5 shrink-0">
{$i18n.t('Add Members')}
</div>
</div>
<button
class="self-center"
on:click={() => {
show = false;
}}
>
<XMark className={'size-5'} />
</button>
</div>
<div class="flex flex-col md:flex-row w-full px-3 pb-4 md:space-x-4 dark:text-gray-200">
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<form
class="flex flex-col w-full"
on:submit={(e) => {
e.preventDefault();
submitHandler();
}}
>
<div class="flex flex-col w-full h-full pb-2">
<MemberSelector bind:userIds bind:groupIds includeGroups={true} />
</div>
<div class="flex justify-end pt-3 text-sm font-medium gap-1.5">
<button
class="px-3.5 py-1.5 text-sm font-medium bg-black hover:bg-gray-950 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center {loading
? ' cursor-not-allowed'
: ''}"
type="submit"
disabled={loading}
>
{$i18n.t('Add')}
{#if loading}
<div class="ml-2 self-center">
<Spinner />
</div>
{/if}
</button>
</div>
</form>
</div>
</div>
</div>
</Modal>
{/if}


@@ -0,0 +1,278 @@
<script>
import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';
import { WEBUI_NAME, config, user as _user, showSidebar } from '$lib/stores';
import { goto } from '$app/navigation';
import { onMount, getContext } from 'svelte';
import dayjs from 'dayjs';
import relativeTime from 'dayjs/plugin/relativeTime';
import localizedFormat from 'dayjs/plugin/localizedFormat';
dayjs.extend(relativeTime);
dayjs.extend(localizedFormat);
import { toast } from 'svelte-sonner';
import { getChannelMembersById } from '$lib/apis/channels';
import Pagination from '$lib/components/common/Pagination.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Badge from '$lib/components/common/Badge.svelte';
import Plus from '$lib/components/icons/Plus.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';
import ProfilePreview from '../Messages/Message/ProfilePreview.svelte';
import XMark from '$lib/components/icons/XMark.svelte';
const i18n = getContext('i18n');
export let channel = null;
export let onAdd = null;
export let onRemove = null;
export let search = true;
export let sort = true;
let page = 1;
let users = null;
let total = null;
let query = '';
let orderBy = 'name'; // default sort key
let direction = 'asc'; // default sort order
const setSortKey = (key) => {
if (!sort) {
return;
}
if (orderBy === key) {
direction = direction === 'asc' ? 'desc' : 'asc';
} else {
orderBy = key;
direction = 'asc';
}
};
const getUserList = async () => {
try {
const res = await getChannelMembersById(
localStorage.token,
channel.id,
query,
orderBy,
direction,
page
).catch((error) => {
toast.error(`${error}`);
return null;
});
if (res) {
users = res.users;
total = res.total;
}
} catch (err) {
console.error(err);
}
};
$: if (
channel !== null &&
page !== null &&
query !== null &&
orderBy !== null &&
direction !== null
) {
getUserList();
}
</script>
<div class="flex flex-col justify-center">
{#if users === null || total === null}
<div class="my-10">
<Spinner className="size-5" />
</div>
{:else}
<div class="flex items-center justify-between px-2 mb-1">
<div class="flex gap-1 items-center">
<span class="text-sm">
{$i18n.t('Members')}
</span>
<span class="text-sm text-gray-500">{total}</span>
</div>
{#if onAdd}
<div class="">
<button
type="button"
class=" px-3 py-1.5 gap-1 rounded-xl bg-black dark:text-white dark:bg-gray-850/50 text-black transition font-medium text-xs flex items-center justify-center"
on:click={onAdd}
>
<Plus className="size-3.5 " />
<span>{$i18n.t('Add Member')}</span>
</button>
</div>
{/if}
</div>
<!-- <hr class="my-1 border-gray-100/5- dark:border-gray-850/50" /> -->
{#if search}
<div class="flex gap-1 px-1 mb-1">
<div class=" flex w-full space-x-2">
<div class="flex flex-1 items-center">
<div class=" self-center ml-1 mr-3">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
fill-rule="evenodd"
d="M9 3.5a5.5 5.5 0 100 11 5.5 5.5 0 000-11zM2 9a7 7 0 1112.452 4.391l3.328 3.329a.75.75 0 11-1.06 1.06l-3.329-3.328A7 7 0 012 9z"
clip-rule="evenodd"
/>
</svg>
</div>
<input
class=" w-full text-sm pr-4 py-1 rounded-r-xl outline-hidden bg-transparent"
bind:value={query}
placeholder={$i18n.t('Search')}
/>
</div>
</div>
</div>
{/if}
{#if users.length > 0}
<div class="scrollbar-hidden relative whitespace-nowrap w-full max-w-full">
<div class=" text-sm text-left text-gray-500 dark:text-gray-400 w-full max-w-full">
<!-- <div
class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200 w-full mb-0.5"
>
<div
class=" border-b-[1.5px] border-gray-50/50 dark:border-gray-800/10 flex items-center justify-between"
>
<button
type="button"
class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('name')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('Name')}
{#if orderBy === 'name'}
<span class="font-normal"
>{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</button>
<button
type="button"
class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('role')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('Role')}
{#if orderBy === 'role'}
<span class="font-normal"
>{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</button>
</div>
</div> -->
<div class="w-full">
{#each users as user, userIdx (user.id)}
<div class=" dark:border-gray-850 text-xs flex items-center justify-between">
<div class="px-2 py-1.5 font-medium text-gray-900 dark:text-white flex-1">
<div class="flex items-center gap-2">
<ProfilePreview {user} side="right" align="center" sideOffset={6}>
<img
class="rounded-2xl w-6 h-6 object-cover flex-shrink-0"
src={`${WEBUI_API_BASE_URL}/users/${user.id}/profile/image`}
alt="user"
/>
</ProfilePreview>
<Tooltip content={user.email} placement="top-start">
<div class="font-medium truncate">{user.name}</div>
</Tooltip>
{#if user?.is_active}
<div>
<span class="relative flex size-1.5">
<span
class="absolute inline-flex h-full w-full animate-ping rounded-full bg-green-400 opacity-75"
></span>
<span class="relative inline-flex size-1.5 rounded-full bg-green-500"
></span>
</span>
</div>
{/if}
</div>
</div>
<div class="px-2 py-1 flex items-center gap-1 translate-y-0.5">
<div class=" ">
<Badge
type={user.role === 'admin'
? 'info'
: user.role === 'user'
? 'success'
: 'muted'}
content={$i18n.t(user.role)}
/>
</div>
{#if onRemove}
<div>
<button
class=" rounded-full p-1 hover:bg-gray-100 dark:hover:bg-gray-850 transition disabled:opacity-50 disabled:cursor-not-allowed"
type="button"
disabled={user.id === $_user?.id}
on:click={() => {
onRemove(user.id);
}}
>
<XMark />
</button>
</div>
{/if}
</div>
</div>
{/each}
</div>
</div>
</div>
{#if total > 30}
<Pagination bind:page count={total} perPage={30} />
{/if}
{:else}
<div class="text-gray-500 text-xs text-center py-5 px-10">
{$i18n.t('No users were found.')}
</div>
{/if}
{/if}
</div>


@@ -775,7 +775,7 @@
 >
 <div
 id="message-input-container"
-class="flex-1 flex flex-col relative w-full shadow-lg rounded-3xl border border-gray-50 dark:border-gray-850 hover:border-gray-100 focus-within:border-gray-100 hover:dark:border-gray-800 focus-within:dark:border-gray-800 transition px-1 bg-white/90 dark:bg-gray-400/5 dark:text-gray-100"
+class="flex-1 flex flex-col relative w-full shadow-lg rounded-3xl border border-gray-50 dark:border-gray-850/30 hover:border-gray-100 focus-within:border-gray-100 hover:dark:border-gray-800 focus-within:dark:border-gray-800 transition px-1 bg-white/90 dark:bg-gray-400/5 dark:text-gray-100"
 dir={$settings?.chatDirection ?? 'auto'}
 >
 {#if replyToMessage !== null}
@@ -865,7 +865,7 @@
 <div
 class="scrollbar-hidden rtl:text-right ltr:text-left bg-transparent dark:text-gray-100 outline-hidden w-full pt-2.5 pb-[5px] px-1 resize-none h-fit max-h-96 overflow-auto"
 >
-{#key $settings?.richTextInput}
+{#key $settings?.richTextInput && $settings?.showFormattingToolbar}
 <RichTextInput
 id="chat-input"
 bind:this={chatInputElement}


@@ -111,7 +111,9 @@
 if (channelSuggestions) {
 // Add a dummy channel item
 _channels = [
-...$channels.map((c) => ({ type: 'channel', id: c.id, label: c.name, data: c }))
+...$channels
+.filter((c) => c?.type !== 'dm')
+.map((c) => ({ type: 'channel', id: c.id, label: c.name, data: c }))
 ];
 } else {
 if (userSuggestions) {


@ -16,7 +16,14 @@
import Message from './Messages/Message.svelte'; import Message from './Messages/Message.svelte';
import Loader from '../common/Loader.svelte'; import Loader from '../common/Loader.svelte';
import Spinner from '../common/Spinner.svelte'; import Spinner from '../common/Spinner.svelte';
import { addReaction, deleteMessage, removeReaction, updateMessage } from '$lib/apis/channels'; import {
addReaction,
deleteMessage,
pinMessage,
removeReaction,
updateMessage
} from '$lib/apis/channels';
import { WEBUI_API_BASE_URL } from '$lib/constants';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
@ -68,7 +75,31 @@
<div class="px-5 max-w-full mx-auto"> <div class="px-5 max-w-full mx-auto">
{#if channel} {#if channel}
<div class="flex flex-col gap-1.5 pb-5 pt-10"> <div class="flex flex-col gap-1.5 pb-5 pt-10">
<div class="text-2xl font-medium capitalize">{channel.name}</div> {#if channel?.type === 'dm'}
<div class="flex ml-[1px] mr-0.5">
{#each channel.users.filter((u) => u.id !== $user?.id).slice(0, 2) as u, index}
<img
src={`${WEBUI_API_BASE_URL}/users/${u.id}/profile/image`}
alt={u.name}
class=" size-7.5 rounded-full border-2 border-white dark:border-gray-900 {index ===
1
? '-ml-2.5'
: ''}"
/>
{/each}
</div>
{/if}
<div class="text-2xl font-medium capitalize">
{#if channel?.name}
{channel.name}
{:else}
{channel?.users
?.filter((u) => u.id !== $user?.id)
.map((u) => u.name)
.join(', ')}
{/if}
</div>
<div class=" text-gray-500"> <div class=" text-gray-500">
{$i18n.t( {$i18n.t(
@@ -97,11 +128,12 @@
 						{message}
 						{thread}
 						replyToMessage={replyToMessage?.id === message.id}
-						disabled={!channel?.write_access}
+						disabled={!channel?.write_access || message?.temp_id}
+						pending={!!message?.temp_id}
 						showUserProfile={messageIdx === 0 ||
 							messageList.at(messageIdx - 1)?.user_id !== message.user_id ||
 							messageList.at(messageIdx - 1)?.meta?.model_id !== message?.meta?.model_id ||
-							message?.reply_to_message}
+							message?.reply_to_message !== null}
 						onDelete={() => {
 							messages = messages.filter((m) => m.id !== message.id);
@@ -130,6 +162,26 @@
 						onReply={(message) => {
 							onReply(message);
 						}}
+						onPin={async (message) => {
+							messages = messages.map((m) => {
+								if (m.id === message.id) {
+									m.is_pinned = !m.is_pinned;
+									m.pinned_by = !m.is_pinned ? null : $user?.id;
+									m.pinned_at = !m.is_pinned ? null : Date.now() * 1000000;
+								}
+								return m;
+							});
+
+							const updatedMessage = await pinMessage(
+								localStorage.token,
+								message.channel_id,
+								message.id,
+								message.is_pinned
+							).catch((error) => {
+								toast.error(`${error}`);
+								return null;
+							});
+						}}
 						onThread={(id) => {
 							onThread(id);
 						}}
@@ -137,7 +189,7 @@
 							if (
 								(message?.reactions ?? [])
 									.find((reaction) => reaction.name === name)
-									?.user_ids?.includes($user?.id) ??
+									?.users?.some((u) => u.id === $user?.id) ??
 								false
 							) {
 								messages = messages.map((m) => {
@@ -145,8 +197,8 @@
 									const reaction = m.reactions.find((reaction) => reaction.name === name);
 									if (reaction) {
-										reaction.user_ids = reaction.user_ids.filter((id) => id !== $user?.id);
-										reaction.count = reaction.user_ids.length;
+										reaction.users = reaction.users.filter((u) => u.id !== $user?.id);
+										reaction.count = reaction.users.length;
 										if (reaction.count === 0) {
 											m.reactions = m.reactions.filter((r) => r.name !== name);
@@ -172,12 +224,12 @@
 									const reaction = m.reactions.find((reaction) => reaction.name === name);
 									if (reaction) {
-										reaction.user_ids.push($user?.id);
-										reaction.count = reaction.user_ids.length;
+										reaction.users.push({ id: $user?.id, name: $user?.name });
+										reaction.count = reaction.users.length;
 									} else {
 										m.reactions.push({
 											name: name,
-											user_ids: [$user?.id],
+											users: [{ id: $user?.id, name: $user?.name }],
 											count: 1
 										});
 									}
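The hunks above migrate reactions from a flat `user_ids: string[]` to a `users: { id, name }[]` shape. The optimistic add/remove logic spread across the two branches can be sketched as one pure function (this helper is an illustration for this note, not part of the PR):

```typescript
// Sketch only: toggle a user's reaction over the new `users` shape the diff
// introduces. Mirrors the component's logic: remove the user if present
// (dropping the reaction at count 0), otherwise add or create the reaction.
type ReactionUser = { id: string; name: string };
type Reaction = { name: string; users: ReactionUser[]; count: number };

function toggleReaction(reactions: Reaction[], name: string, user: ReactionUser): Reaction[] {
	const existing = reactions.find((r) => r.name === name);
	if (existing?.users.some((u) => u.id === user.id)) {
		// User already reacted: remove them; a reaction with no users disappears.
		return reactions
			.map((r) =>
				r.name === name
					? { ...r, users: r.users.filter((u) => u.id !== user.id), count: r.count - 1 }
					: r
			)
			.filter((r) => r.count > 0);
	}
	if (existing) {
		// Reaction exists but user hasn't joined it yet.
		return reactions.map((r) =>
			r.name === name ? { ...r, users: [...r.users, user], count: r.count + 1 } : r
		);
	}
	// First reaction of this kind on the message.
	return [...reactions, { name, users: [user], count: 1 }];
}
```

Carrying `{ id, name }` objects instead of bare ids is presumably what enables the new "X, Y and Z reacted with …" tooltip further down, at the cost of a slightly larger payload per reaction.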


@@ -36,6 +36,10 @@
 	import Emoji from '$lib/components/common/Emoji.svelte';
 	import Skeleton from '$lib/components/chat/Messages/Skeleton.svelte';
 	import ArrowUpLeftAlt from '$lib/components/icons/ArrowUpLeftAlt.svelte';
+	import PinSlash from '$lib/components/icons/PinSlash.svelte';
+	import Pin from '$lib/components/icons/Pin.svelte';

+	export let className = '';
 	export let message;
 	export let showUserProfile = true;
@@ -43,10 +47,12 @@
 	export let replyToMessage = false;
 	export let disabled = false;
+	export let pending = false;

 	export let onDelete: Function = () => {};
 	export let onEdit: Function = () => {};
 	export let onReply: Function = () => {};
+	export let onPin: Function = () => {};
 	export let onThread: Function = () => {};
 	export let onReaction: Function = () => {};
@@ -69,13 +75,17 @@
 {#if message}
 	<div
 		id="message-{message.id}"
-		class="flex flex-col justify-between px-5 {showUserProfile
-			? 'pt-1.5 pb-0.5'
-			: ''} w-full max-w-full mx-auto group hover:bg-gray-300/5 dark:hover:bg-gray-700/5 transition relative {replyToMessage
-			? 'border-l-4 border-blue-500 bg-blue-100/10 dark:bg-blue-100/5 pl-4'
-			: ''} {(message?.reply_to_message?.meta?.model_id ?? message?.reply_to_message?.user_id) ===
-		$user?.id
-			? 'border-l-4 border-orange-500 bg-orange-100/10 dark:bg-orange-100/5 pl-4'
-			: ''}"
+		class="flex flex-col justify-between w-full max-w-full mx-auto group hover:bg-gray-300/5 dark:hover:bg-gray-700/5 transition relative {className
+			? className
+			: `px-5 ${
+					replyToMessage ? 'border-l-4 border-blue-500 bg-blue-100/10 dark:bg-blue-100/5 pl-4' : ''
+				} ${
+					(message?.reply_to_message?.meta?.model_id ?? message?.reply_to_message?.user_id) ===
+					$user?.id
+						? 'border-l-4 border-orange-500 bg-orange-100/10 dark:bg-orange-100/5 pl-4'
+						: ''
+				} ${message?.is_pinned ? 'bg-yellow-100/20 dark:bg-yellow-100/5' : ''}`} {showUserProfile
+			? 'pt-1.5 pb-0.5'
+			: ''}"
 	>
 		{#if !edit && !disabled}
@@ -83,8 +93,9 @@
 				class=" absolute {showButtons ? '' : 'invisible group-hover:visible'} right-1 -top-2 z-10"
 			>
 				<div
-					class="flex gap-1 rounded-lg bg-white dark:bg-gray-850 shadow-md p-0.5 border border-gray-100 dark:border-gray-850"
+					class="flex gap-1 rounded-lg bg-white dark:bg-gray-850 shadow-md p-0.5 border border-gray-100/30 dark:border-gray-850/30"
 				>
+					{#if onReaction}
 					<EmojiPicker
 						onClose={() => (showButtons = false)}
 						onSubmit={(name) => {
@@ -103,7 +114,9 @@
 						</button>
 					</Tooltip>
 					</EmojiPicker>
+					{/if}
+					{#if onReply}
 					<Tooltip content={$i18n.t('Reply')}>
 						<button
 							class="hover:bg-gray-100 dark:hover:bg-gray-800 transition rounded-lg p-0.5"
@@ -114,8 +127,24 @@
 							<ArrowUpLeftAlt className="size-5" />
 						</button>
 					</Tooltip>
+					{/if}

-					{#if !thread}
+					<Tooltip content={message?.is_pinned ? $i18n.t('Unpin') : $i18n.t('Pin')}>
+						<button
+							class="hover:bg-gray-100 dark:hover:bg-gray-800 transition rounded-lg p-1"
+							on:click={() => {
+								onPin(message);
+							}}
+						>
+							{#if message?.is_pinned}
+								<PinSlash className="size-4" />
+							{:else}
+								<Pin className="size-4" />
+							{/if}
+						</button>
+					</Tooltip>
+
+					{#if !thread && onThread}
 					<Tooltip content={$i18n.t('Reply in Thread')}>
 						<button
 							class="hover:bg-gray-100 dark:hover:bg-gray-800 transition rounded-lg p-1"
@@ -129,6 +158,7 @@
 					{/if}

 					{#if message.user_id === $user?.id || $user?.role === 'admin'}
+						{#if onEdit}
 						<Tooltip content={$i18n.t('Edit')}>
 							<button
 								class="hover:bg-gray-100 dark:hover:bg-gray-800 transition rounded-lg p-1"
@@ -140,7 +170,9 @@
 								<Pencil />
 							</button>
 						</Tooltip>
+						{/if}
+						{#if onDelete}
 						<Tooltip content={$i18n.t('Delete')}>
 							<button
 								class="hover:bg-gray-100 dark:hover:bg-gray-800 transition rounded-lg p-1"
@@ -150,6 +182,16 @@
 							</button>
 						</Tooltip>
 					{/if}
+					{/if}
 				</div>
 			</div>
 		{/if}

+		{#if message?.is_pinned}
+			<div class="flex {showUserProfile ? 'mb-0.5' : 'mt-0.5'}">
+				<div class="ml-8.5 flex items-center gap-1 px-1 rounded-full text-xs">
+					<Pin className="size-3 text-yellow-500 dark:text-yellow-300" />
+					<span class="text-gray-500">{$i18n.t('Pinned')}</span>
+				</div>
+			</div>
+		{/if}
@@ -203,12 +245,13 @@
 					</button>
 				</div>
 			{/if}

 			<div
 				class=" flex w-full message-{message.id} "
 				id="message-{message.id}"
 				dir={$settings.chatDirection}
 			>
-				<div class={`shrink-0 mr-3 w-9`}>
+				<div class={`shrink-0 mr-1 w-9`}>
 					{#if showUserProfile}
 						{#if message?.meta?.model_id}
 							<img
@@ -239,7 +282,7 @@
 					{/if}
 				</div>

-				<div class="flex-auto w-0 pl-1">
+				<div class="flex-auto w-0 pl-2">
 					{#if showUserProfile}
 						<Name>
 							<div class=" self-end text-base shrink-0 font-medium truncate">
@@ -252,14 +295,18 @@
 							{#if message.created_at}
 								<div
-									class=" self-center text-xs invisible group-hover:visible text-gray-400 font-medium first-letter:capitalize ml-0.5 translate-y-[1px]"
+									class=" self-center text-xs text-gray-400 font-medium first-letter:capitalize ml-0.5 translate-y-[1px]"
 								>
 									<Tooltip content={dayjs(message.created_at / 1000000).format('LLLL')}>
 										<span class="line-clamp-1">
+											{#if dayjs(message.created_at / 1000000).isToday()}
+												{dayjs(message.created_at / 1000000).format('LT')}
+											{:else}
 											{$i18n.t(formatDate(message.created_at / 1000000), {
 												LOCALIZED_TIME: dayjs(message.created_at / 1000000).format('LT'),
 												LOCALIZED_DATE: dayjs(message.created_at / 1000000).format('L')
 											})}
+											{/if}
 										</span>
 									</Tooltip>
 								</div>
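The repeated `message.created_at / 1000000` above (and `Date.now() * 1000000` in the pin handler earlier) suggests Open WebUI stores message timestamps in nanoseconds while dayjs and `Date` work in milliseconds. A pair of tiny conversion helpers makes that intent explicit; these helpers are illustrative and do not exist in the codebase:

```typescript
// Sketch only: named conversions between the (assumed) nanosecond storage
// format and the millisecond timestamps dayjs/Date expect.
const NS_PER_MS = 1_000_000;

const nsToMs = (ns: number): number => Math.floor(ns / NS_PER_MS);
const msToNs = (ms: number): number => ms * NS_PER_MS;
```

With such helpers, `dayjs(message.created_at / 1000000)` would read as `dayjs(nsToMs(message.created_at))`, which is harder to get wrong when the conversion appears five times in one template.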
@@ -334,15 +381,16 @@
 						</div>
 					</div>
 				{:else}
-					<div class=" min-w-full markdown-prose">
+					<div class=" min-w-full markdown-prose {pending ? 'opacity-50' : ''}">
 						{#if (message?.content ?? '').trim() === '' && message?.meta?.model_id}
 							<Skeleton />
 						{:else}
 							<Markdown
 								id={message.id}
 								content={message.content}
-							/>{#if message.created_at !== message.updated_at && (message?.meta?.model_id ?? null) === null}<span
-								class="text-gray-500 text-[10px]">({$i18n.t('edited')})</span
+								paragraphTag="span"
+							/>{#if message.created_at !== message.updated_at && (message?.meta?.model_id ?? null) === null}<span
+								class="text-gray-500 text-[10px] pl-1 self-center">({$i18n.t('edited')})</span
 							>{/if}
 						{/if}
 					</div>
@@ -351,28 +399,64 @@
 					<div>
 						<div class="flex items-center flex-wrap gap-y-1.5 gap-1 mt-1 mb-2">
 							{#each message.reactions as reaction}
-								<Tooltip content={`:${reaction.name}:`}>
+								<Tooltip
+									content={$i18n.t('{{NAMES}} reacted with {{REACTION}}', {
+										NAMES: reaction.users
+											.reduce((acc, u, idx) => {
+												const name = u.id === $user?.id ? $i18n.t('You') : u.name;
+												const total = reaction.users.length;
+
+												// First three names always added normally
+												if (idx < 3) {
+													const separator =
+														idx === 0
+															? ''
+															: idx === Math.min(2, total - 1)
+																? ` ${$i18n.t('and')} `
+																: ', ';
+													return `${acc}${separator}${name}`;
+												}
+
+												// More than 4 → "and X others"
+												if (idx === 3 && total > 4) {
+													return (
+														acc +
+														` ${$i18n.t('and {{COUNT}} others', {
+															COUNT: total - 3
+														})}`
+													);
+												}
+
+												return acc;
+											}, '')
+											.trim(),
+										REACTION: `:${reaction.name}:`
+									})}
+								>
 									<button
-										class="flex items-center gap-1.5 transition rounded-xl px-2 py-1 cursor-pointer {reaction.user_ids.includes(
-											$user?.id
-										)
+										class="flex items-center gap-1.5 transition rounded-xl px-2 py-1 cursor-pointer {reaction.users
+											.map((u) => u.id)
+											.includes($user?.id)
 											? ' bg-blue-300/10 outline outline-blue-500/50 outline-1'
 											: 'bg-gray-300/10 dark:bg-gray-500/10 hover:outline hover:outline-gray-700/30 dark:hover:outline-gray-300/30 hover:outline-1'}"
 										on:click={() => {
+											if (onReaction) {
 												onReaction(reaction.name);
+											}
 										}}
 									>
 										<Emoji shortCode={reaction.name} />

-										{#if reaction.user_ids.length > 0}
+										{#if reaction.users.length > 0}
 											<div class="text-xs font-medium text-gray-500 dark:text-gray-400">
-												{reaction.user_ids?.length}
+												{reaction.users?.length}
 											</div>
 										{/if}
 									</button>
 								</Tooltip>
 							{/each}

+							{#if onReaction}
 							<EmojiPicker
 								onSubmit={(name) => {
 									onReaction(name);
@@ -386,6 +470,7 @@
 									</div>
 								</Tooltip>
 							</EmojiPicker>
+							{/if}
 						</div>
 					</div>
 				{/if}
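The tooltip's `reduce` above builds strings like "You, Alice and Bob" or "A, B and C and 2 others". Extracted as a pure function (an illustration only, with plain strings standing in for the `$i18n.t` calls), its behavior is easier to see and to test:

```typescript
// Sketch only: the same name-summary logic as the tooltip reducer, with
// i18n replaced by literal English strings.
function summarizeReactors(names: string[]): string {
	return names
		.reduce((acc, name, idx) => {
			const total = names.length;
			// First three names are joined normally: '', ', ', or ' and '
			// before the final listed name.
			if (idx < 3) {
				const separator = idx === 0 ? '' : idx === Math.min(2, total - 1) ? ' and ' : ', ';
				return `${acc}${separator}${name}`;
			}
			// More than 4 reactors collapse into "and X others".
			if (idx === 3 && total > 4) {
				return `${acc} and ${total - 3} others`;
			}
			return acc;
		}, '')
		.trim();
}
```

One edge worth noting: with exactly four reactors, `idx === 3 && total > 4` is false, so the fourth name is silently dropped ("A, B and C"); whether that is intentional or an off-by-one is not clear from the diff alone.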


@@ -7,12 +7,26 @@
 	import UserStatusLinkPreview from './UserStatusLinkPreview.svelte';

 	export let user = null;
+	export let align = 'center';
+	export let side = 'right';
+	export let sideOffset = 8;
+
+	let openPreview = false;
 </script>

-<LinkPreview.Root openDelay={0} closeDelay={0}>
-	<LinkPreview.Trigger class=" cursor-pointer no-underline! font-normal! ">
+<LinkPreview.Root openDelay={0} closeDelay={200} bind:open={openPreview}>
+	<LinkPreview.Trigger class="flex items-center">
+		<button
+			type="button"
+			class=" cursor-pointer no-underline! font-normal!"
+			on:click={() => {
+				openPreview = !openPreview;
+			}}
+		>
 			<slot />
+		</button>
 	</LinkPreview.Trigger>

-	<UserStatusLinkPreview id={user?.id} side="right" align="center" sideOffset={8} />
+	<UserStatusLinkPreview id={user?.id} {side} {align} {sideOffset} />
 </LinkPreview.Root>


@@ -2,17 +2,43 @@
 	import { getContext, onMount } from 'svelte';
 	const i18n = getContext('i18n');

+	import { user as _user, channels, socket } from '$lib/stores';
 	import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';
+	import { getChannels, getDMChannelByUserId } from '$lib/apis/channels';
+
+	import ChatBubbles from '$lib/components/icons/ChatBubbles.svelte';
+	import ChatBubble from '$lib/components/icons/ChatBubble.svelte';
+	import ChatBubbleOval from '$lib/components/icons/ChatBubbleOval.svelte';
+
+	import { goto } from '$app/navigation';
+	import Emoji from '$lib/components/common/Emoji.svelte';
+	import Tooltip from '$lib/components/common/Tooltip.svelte';

 	export let user = null;

+	const directMessageHandler = async () => {
+		if (!user) {
+			return;
+		}
+
+		const res = await getDMChannelByUserId(localStorage.token, user.id).catch((error) => {
+			console.error('Error fetching DM channel:', error);
+			return null;
+		});
+
+		if (res) {
+			goto(`/channels/${res.id}`);
+		}
+	};
 </script>
 {#if user}
-	<div class=" flex gap-3.5 w-full py-3 px-3 items-center">
+	<div class="py-3">
+		<div class=" flex gap-3.5 w-full px-3 items-center">
 			<div class=" items-center flex shrink-0">
 				<img
 					src={`${WEBUI_API_BASE_URL}/users/${user?.id}/profile/image`}
-					class=" size-12 object-cover rounded-xl"
+					class=" size-14 object-cover rounded-xl"
 					alt="profile"
 				/>
 			</div>
@@ -23,7 +49,7 @@
 			</div>

 			<div class=" flex items-center gap-2">
-				{#if user?.active}
+				{#if user?.is_active}
 					<div>
 						<span class="relative flex size-2">
 							<span
@@ -46,4 +72,56 @@
 				</div>
 			</div>
 		</div>
+
+		{#if user?.status_emoji || user?.status_message}
+			<div class="mx-2 mt-2">
+				<Tooltip content={user?.status_message}>
+					<div
+						class="w-full gap-2 px-2.5 py-1.5 rounded-xl bg-gray-50 dark:text-white dark:bg-gray-900/50 text-black transition text-xs flex items-center"
+					>
+						{#if user?.status_emoji}
+							<div class=" self-center shrink-0">
+								<Emoji className="size-4" shortCode={user?.status_emoji} />
+							</div>
+						{/if}
+
+						<div class=" self-center line-clamp-2 flex-1 text-left">
+							{user?.status_message}
+						</div>
+					</div>
+				</Tooltip>
+			</div>
+		{/if}
+
+		{#if user?.bio}
+			<div class="mx-3.5 mt-2">
+				<Tooltip content={user?.bio}>
+					<div class=" self-center line-clamp-3 flex-1 text-left text-xs">
+						{user?.bio}
+					</div>
+				</Tooltip>
+			</div>
+		{/if}
+
+		{#if $_user?.id !== user.id}
+			<hr class="border-gray-100/50 dark:border-gray-800/50 my-2.5" />
+
+			<div class=" flex flex-col w-full px-2.5 items-center">
+				<button
+					class="w-full text-left px-3 py-1.5 rounded-xl border border-gray-100/50 dark:border-gray-800/50 hover:bg-gray-50 dark:hover:bg-gray-850 transition flex items-center gap-2 text-sm"
+					type="button"
+					on:click={() => {
+						directMessageHandler();
+					}}
+				>
+					<div>
+						<ChatBubbleOval className="size-4" />
+					</div>
+					<div class="font-medium">
+						{$i18n.t('Message')}
+					</div>
+				</button>
+			</div>
+		{/if}
+	</div>
 {/if}


@@ -14,7 +14,6 @@
 	export let sideOffset = 6;

 	let user = null;
-
 	onMount(async () => {
 		if (id) {
 			user = await getUserById(localStorage.token, id).catch((error) => {
@@ -27,7 +26,7 @@
 {#if user}
 	<LinkPreview.Content
-		class="w-full max-w-[260px] rounded-2xl border border-gray-100 dark:border-gray-800 z-999 bg-white dark:bg-gray-850 dark:text-white shadow-lg transition"
+		class="w-full max-w-[260px] rounded-2xl border border-gray-100 dark:border-gray-800 z-[9999] bg-white dark:bg-gray-850 dark:text-white shadow-lg transition"
 		{side}
 		{align}
 		{sideOffset}

Some files were not shown because too many files have changed in this diff.