Mirror of https://github.com/open-webui/open-webui.git, synced 2025-12-23 09:45:24 +00:00

Compare commits

No commits in common. "main" and "v0.6.24" have entirely different histories.

602 changed files with 29,414 additions and 71,310 deletions
.github/ISSUE_TEMPLATE/bug_report.yaml (19 changes, vendored)
@@ -11,9 +11,7 @@ body:
       ## Important Notes

-      - **Before submitting a bug report**: Please check the [Issues](https://github.com/open-webui/open-webui/issues) and [Discussions](https://github.com/open-webui/open-webui/discussions) sections to see if a similar issue has already been reported. If unsure, start a discussion first, as this helps us efficiently focus on improving the project. Duplicates may be closed without notice. **Please search for existing issues AND discussions. No matter open or closed.**
-
-      - Check for opened, **but also for (recently) CLOSED issues** as the issue you are trying to report **might already have been fixed on the dev branch!**
+      - **Before submitting a bug report**: Please check the [Issues](https://github.com/open-webui/open-webui/issues) or [Discussions](https://github.com/open-webui/open-webui/discussions) sections to see if a similar issue has already been reported. If unsure, start a discussion first, as this helps us efficiently focus on improving the project.

       - **Respectful collaboration**: Open WebUI is a volunteer-driven project with a single maintainer and contributors who also have full-time jobs. Please be constructive and respectful in your communication.
@@ -21,19 +19,13 @@ body:
-      - **Bug Reproducibility**: If a bug cannot be reproduced using a `:main` or `:dev` Docker setup or with `pip install` on Python 3.11, community assistance may be required. In such cases, we will move it to the "[Issues](https://github.com/open-webui/open-webui/discussions/categories/issues)" Discussions section. Your help is appreciated!

-      - **Scope**: If you want to report a SECURITY VULNERABILITY, then do so through our [GitHub security page](https://github.com/open-webui/open-webui/security).

   - type: checkboxes
     id: issue-check
     attributes:
       label: Check Existing Issues
       description: Confirm that you’ve checked for existing reports before submitting a new one.
       options:
-        - label: I have searched for any existing and/or related issues.
-          required: true
-        - label: I have searched for any existing and/or related discussions.
-          required: true
-        - label: I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
+        - label: I have searched the existing issues and discussions.
           required: true
         - label: I am using the latest version of Open WebUI.
           required: true
@@ -55,7 +47,7 @@ body:
     id: open-webui-version
     attributes:
       label: Open WebUI Version
-      description: Specify the version (e.g., v0.6.26)
+      description: Specify the version (e.g., v0.3.11)
     validations:
       required: true
@@ -71,7 +63,7 @@ body:
     id: operating-system
     attributes:
       label: Operating System
-      description: Specify the OS (e.g., Windows 10, macOS Sonoma, Ubuntu 22.04, Debian 12)
+      description: Specify the OS (e.g., Windows 10, macOS Sonoma, Ubuntu 22.04)
     validations:
       required: true
@@ -134,7 +126,6 @@ body:
     description: |
       Please provide a **very detailed, step-by-step guide** to reproduce the issue. Your instructions should be so clear and precise that anyone can follow them without guesswork. Include every relevant detail—settings, configuration options, exact commands used, values entered, and any prerequisites or environment variables.
-      **If full reproduction steps and all relevant settings are not provided, your issue may not be addressed.**
+      **If your steps to reproduction are incomplete, lacking detail or not reproducible, your issue can not be addressed.**

     placeholder: |
       Example (include every detail):
@@ -172,5 +163,5 @@ body:
     attributes:
       value: |
         ## Note
-        **If the bug report is incomplete, does not follow instructions or is lacking details it may not be addressed.** Ensure that you've followed all the **README.md** and **troubleshooting.md** guidelines, and provide all necessary information for us to reproduce the issue.
+        If the bug report is incomplete or does not follow instructions, it may not be addressed. Ensure that you've followed all the **README.md** and **troubleshooting.md** guidelines, and provide all necessary information for us to reproduce the issue.
         Thank you for contributing to Open WebUI!
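Both issue templates in this diff use GitHub's issue-forms schema: each element under body: declares a type and an id, its display text lives under attributes, and individual checkbox options can be marked required. A minimal sketch of one such block, with an illustrative id and labels not taken from either branch:

    - type: checkboxes
      id: example-check                # illustrative id, not from either branch
      attributes:
        label: Example Checklist
        description: Rendered as a checkbox group on the new-issue form.
        options:
          - label: I have searched the existing issues and discussions.
            required: true             # the form cannot be submitted until this is ticked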
.github/ISSUE_TEMPLATE/feature_request.yaml (26 changes, vendored)
@@ -8,19 +8,8 @@ body:
     value: |
       ## Important Notes
-      ### Before submitting
-
-      Please check the **open AND closed** [Issues](https://github.com/open-webui/open-webui/issues) AND [Discussions](https://github.com/open-webui/open-webui/discussions) to see if a similar request has been posted.
+      Please check the [Issues](https://github.com/open-webui/open-webui/issues) or [Discussions](https://github.com/open-webui/open-webui/discussions) to see if a similar request has been posted.
       It's likely we're already tracking it! If you’re unsure, start a discussion post first.

-      #### Scope
-
-      If your feature request is likely to take more than a quick coding session to implement, test and verify, then open it in the **Ideas** section of the [Discussions](https://github.com/open-webui/open-webui/discussions) instead.
-      **We will close and force move your feature request to the Ideas section, if we believe your feature request is not trivial/quick to implement.**
-      This is to ensure the issues tab is used only for issues, quickly addressable feature requests and tracking tickets by the maintainers.
+      Other feature requests belong in the **Ideas** section of the [Discussions](https://github.com/open-webui/open-webui/discussions).

+      If your feature request might impact others in the community, definitely open a discussion instead and evaluate whether and how to implement it.

+      This will help us efficiently focus on improving the project.

       ### Collaborate respectfully
@@ -33,6 +22,7 @@ body:
       We appreciate your time and ask that you **respect ours**.

+
       ### Contributing
       If you encounter an issue, we highly encourage you to submit a pull request or fork the project. We actively work to prevent contributor burnout to maintain the quality and continuity of Open WebUI.
@@ -45,22 +35,14 @@ body:
       label: Check Existing Issues
       description: Please confirm that you've checked for existing similar requests
       options:
-        - label: I have searched for all existing **open AND closed** issues and discussions for similar requests. I have found none that is comparable to my request.
-          required: true
-  - type: checkboxes
-    id: feature-scope
-    attributes:
-      label: Verify Feature Scope
-      description: Please confirm the feature's scope is within the described scope
-      options:
-        - label: I have read through and understood the scope definition for feature requests in the Issues section. I believe my feature request meets the definition and belongs in the Issues section instead of the Discussions.
+        - label: I have searched the existing issues and discussions.
           required: true
   - type: textarea
     id: problem-description
     attributes:
       label: Problem Description
       description: Is your feature request related to a problem? Please provide a clear and concise description of what the problem is.
-      placeholder: "Ex. I'm always frustrated when... / Not related to a problem"
+      placeholder: "Ex. I'm always frustrated when..."
     validations:
       required: true
   - type: textarea
.github/dependabot.yml (6 changes, vendored)
@@ -12,6 +12,12 @@ updates:
       interval: monthly
     target-branch: 'dev'

+  - package-ecosystem: npm
+    directory: '/'
+    schedule:
+      interval: monthly
+    target-branch: 'dev'
+
   - package-ecosystem: 'github-actions'
     directory: '/'
     schedule:
.github/pull_request_template.md (20 changes, vendored)
@@ -1,20 +1,17 @@
 # Pull Request Checklist

-### Note to first-time contributors: Please open a discussion post in [Discussions](https://github.com/open-webui/open-webui/discussions) to discuss your idea/fix with the community before creating a pull request, and describe your changes before submitting a pull request.
-
-This is to ensure large feature PRs are discussed with the community first, before starting work on it. If the community does not want this feature or it is not relevant for Open WebUI as a project, it can be identified in the discussion before working on the feature and submitting the PR.
+### Note to first-time contributors: Please open a discussion post in [Discussions](https://github.com/open-webui/open-webui/discussions) and describe your changes before submitting a pull request.

 **Before submitting, make sure you've checked the following:**

-- [ ] **Target branch:** Verify that the pull request targets the `dev` branch. **Not targeting the `dev` branch will lead to immediate closure of the PR.**
-- [ ] **Description:** Provide a concise description of the changes made in this pull request down below.
+- [ ] **Target branch:** Please verify that the pull request targets the `dev` branch.
+- [ ] **Description:** Provide a concise description of the changes made in this pull request.
 - [ ] **Changelog:** Ensure a changelog entry following the format of [Keep a Changelog](https://keepachangelog.com/) is added at the bottom of the PR description.
-- [ ] **Documentation:** If necessary, update relevant documentation [Open WebUI Docs](https://github.com/open-webui/docs) like environment variables, the tutorials, or other documentation sources.
+- [ ] **Documentation:** Have you updated relevant documentation [Open WebUI Docs](https://github.com/open-webui/docs), or other documentation sources?
 - [ ] **Dependencies:** Are there any new dependencies? Have you updated the dependency versions in the documentation?
-- [ ] **Testing:** Perform manual tests to **verify the implemented fix/feature works as intended AND does not break any other functionality**. Take this as an opportunity to **make screenshots of the feature/fix and include it in the PR description**.
-- [ ] **Agentic AI Code:** Confirm this Pull Request is **not written by any AI Agent** or has at least **gone through additional human review AND manual testing**. If any AI Agent is the co-author of this PR, it may lead to immediate closure of the PR.
+- [ ] **Testing:** Have you written and run sufficient tests to validate the changes?
 - [ ] **Code review:** Have you performed a self-review of your code, addressing any coding standard issues and ensuring adherence to the project's coding standards?
-- [ ] **Title Prefix:** To clearly categorize this pull request, prefix the pull request title using one of the following:
+- [ ] **Prefix:** To clearly categorize this pull request, prefix the pull request title using one of the following:
   - **BREAKING CHANGE**: Significant changes that may affect compatibility
   - **build**: Changes that affect the build system or external dependencies
   - **ci**: Changes to our continuous integration processes or workflows
@@ -76,7 +73,4 @@ This is to ensure large feature PRs are discussed with the community first, befo
 ### Contributor License Agreement

-By submitting this pull request, I confirm that I have read and fully agree to the [Contributor License Agreement (CLA)](https://github.com/open-webui/open-webui/blob/main/CONTRIBUTOR_LICENSE_AGREEMENT), and I am providing my contributions under its terms.
-
-> [!NOTE]
-> Deleting the CLA section will lead to immediate closure of your PR and it will not be merged in.
+By submitting this pull request, I confirm that I have read and fully agree to the [Contributor License Agreement (CLA)](/CONTRIBUTOR_LICENSE_AGREEMENT), and I am providing my contributions under its terms.
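The Changelog checklist item in both versions refers to the [Keep a Changelog](https://keepachangelog.com/) layout: entries grouped under a version heading by change type (Added, Changed, Fixed, Removed, and so on). A minimal entry of that shape, with an illustrative version and text:

    ## [0.6.25] - 2025-01-15

    ### Fixed

    - Corrected the example version string in the bug report template.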
.github/workflows/build-release.yml (6 changes, vendored)
@@ -11,7 +11,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4

       - name: Check for changes in package.json
         run: |
@@ -36,7 +36,7 @@ jobs:
           echo "::set-output name=content::$CHANGELOG_ESCAPED"

       - name: Create GitHub release
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           github-token: ${{ secrets.GITHUB_TOKEN }}
           script: |
@@ -61,7 +61,7 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

       - name: Trigger Docker build workflow
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             github.rest.actions.createWorkflowDispatch({
.github/workflows/deploy-to-hf-spaces.yml (5 changes, vendored)
@@ -27,7 +27,7 @@ jobs:
       HF_TOKEN: ${{ secrets.HF_TOKEN }}
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4
         with:
           lfs: true
@@ -57,8 +57,7 @@ jobs:
           git lfs install
           git lfs track "*.ttf"
-          git lfs track "*.jpg"
           rm demo.png
           rm banner.png
           rm demo.gif
           git add .
           git commit -m "GitHub deploy: ${{ github.sha }}"
           git push --force https://open-webui:${HF_TOKEN}@huggingface.co/spaces/open-webui/open-webui main
.github/workflows/docker-build.yaml (180 changes, vendored)
@@ -43,7 +43,7 @@ jobs:
           echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4

       - name: Set up QEMU
         uses: docker/setup-qemu-action@v3
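In the Prepare steps of this workflow, `${platform//\//-}` is bash pattern substitution that replaces every `/` with `-`, so a matrix platform such as linux/amd64 becomes linux-amd64 and can be used safely in artifact names. An illustrative step showing the same expansion (not part of the actual workflow):

    - name: Show PLATFORM_PAIR expansion    # illustrative only
      run: |
        platform=linux/amd64
        echo "${platform//\//-}"            # prints: linux-amd64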
@@ -141,11 +141,8 @@ jobs:
           platform=${{ matrix.platform }}
           echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

-      - name: Delete huge unnecessary tools folder
-        run: rm -rf /opt/hostedtoolcache
-
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4

       - name: Set up QEMU
         uses: docker/setup-qemu-action@v3
@@ -246,11 +243,8 @@ jobs:
           platform=${{ matrix.platform }}
           echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

-      - name: Delete huge unnecessary tools folder
-        run: rm -rf /opt/hostedtoolcache
-
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4

       - name: Set up QEMU
         uses: docker/setup-qemu-action@v3
@@ -353,7 +347,7 @@ jobs:
           echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4

       - name: Set up QEMU
         uses: docker/setup-qemu-action@v3
@@ -425,108 +419,6 @@ jobs:
           if-no-files-found: error
           retention-days: 1

-  build-slim-image:
-    runs-on: ${{ matrix.runner }}
-    permissions:
-      contents: read
-      packages: write
-    strategy:
-      fail-fast: false
-      matrix:
-        include:
-          - platform: linux/amd64
-            runner: ubuntu-latest
-          - platform: linux/arm64
-            runner: ubuntu-24.04-arm
-
-    steps:
-      # GitHub Packages requires the entire repository name to be in lowercase
-      # although the repository owner has a lowercase username, this prevents some people from running actions after forking
-      - name: Set repository and image name to lowercase
-        run: |
-          echo "IMAGE_NAME=${IMAGE_NAME,,}" >>${GITHUB_ENV}
-          echo "FULL_IMAGE_NAME=ghcr.io/${IMAGE_NAME,,}" >>${GITHUB_ENV}
-        env:
-          IMAGE_NAME: '${{ github.repository }}'
-
-      - name: Prepare
-        run: |
-          platform=${{ matrix.platform }}
-          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
-      - name: Checkout repository
-        uses: actions/checkout@v5
-
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
-      - name: Log in to the Container registry
-        uses: docker/login-action@v3
-        with:
-          registry: ${{ env.REGISTRY }}
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Extract metadata for Docker images (slim tag)
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.FULL_IMAGE_NAME }}
-          tags: |
-            type=ref,event=branch
-            type=ref,event=tag
-            type=sha,prefix=git-
-            type=semver,pattern={{version}}
-            type=semver,pattern={{major}}.{{minor}}
-            type=raw,enable=${{ github.ref == 'refs/heads/main' }},prefix=,suffix=,value=slim
-          flavor: |
-            latest=${{ github.ref == 'refs/heads/main' }}
-            suffix=-slim,onlatest=true
-
-      - name: Extract metadata for Docker cache
-        id: cache-meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.FULL_IMAGE_NAME }}
-          tags: |
-            type=ref,event=branch
-            ${{ github.ref_type == 'tag' && 'type=raw,value=main' || '' }}
-          flavor: |
-            prefix=cache-slim-${{ matrix.platform }}-
-            latest=false
-
-      - name: Build Docker image (slim)
-        uses: docker/build-push-action@v5
-        id: build
-        with:
-          context: .
-          push: true
-          platforms: ${{ matrix.platform }}
-          labels: ${{ steps.meta.outputs.labels }}
-          outputs: type=image,name=${{ env.FULL_IMAGE_NAME }},push-by-digest=true,name-canonical=true,push=true
-          cache-from: type=registry,ref=${{ steps.cache-meta.outputs.tags }}
-          cache-to: type=registry,ref=${{ steps.cache-meta.outputs.tags }},mode=max
-          build-args: |
-            BUILD_HASH=${{ github.sha }}
-            USE_SLIM=true
-
-      - name: Export digest
-        run: |
-          mkdir -p /tmp/digests
-          digest="${{ steps.build.outputs.digest }}"
-          touch "/tmp/digests/${digest#sha256:}"
-
-      - name: Upload digest
-        uses: actions/upload-artifact@v4
-        with:
-          name: digests-slim-${{ env.PLATFORM_PAIR }}
-          path: /tmp/digests/*
-          if-no-files-found: error
-          retention-days: 1
-
   merge-main-images:
     runs-on: ubuntu-latest
     needs: [build-main-image]
@@ -541,7 +433,7 @@ jobs:
           IMAGE_NAME: '${{ github.repository }}'

       - name: Download digests
-        uses: actions/download-artifact@v5
+        uses: actions/download-artifact@v4
         with:
           pattern: digests-main-*
           path: /tmp/digests
@@ -595,7 +487,7 @@ jobs:
           IMAGE_NAME: '${{ github.repository }}'

       - name: Download digests
-        uses: actions/download-artifact@v5
+        uses: actions/download-artifact@v4
         with:
           pattern: digests-cuda-*
           path: /tmp/digests
@@ -651,7 +543,7 @@ jobs:
           IMAGE_NAME: '${{ github.repository }}'

       - name: Download digests
-        uses: actions/download-artifact@v5
+        uses: actions/download-artifact@v4
         with:
           pattern: digests-cuda126-*
           path: /tmp/digests
@@ -707,7 +599,7 @@ jobs:
           IMAGE_NAME: '${{ github.repository }}'

       - name: Download digests
-        uses: actions/download-artifact@v5
+        uses: actions/download-artifact@v4
         with:
           pattern: digests-ollama-*
           path: /tmp/digests
@@ -748,59 +640,3 @@ jobs:
       - name: Inspect image
         run: |
           docker buildx imagetools inspect ${{ env.FULL_IMAGE_NAME }}:${{ steps.meta.outputs.version }}
-
-  merge-slim-images:
-    runs-on: ubuntu-latest
-    needs: [build-slim-image]
-    steps:
-      # GitHub Packages requires the entire repository name to be in lowercase
-      # although the repository owner has a lowercase username, this prevents some people from running actions after forking
-      - name: Set repository and image name to lowercase
-        run: |
-          echo "IMAGE_NAME=${IMAGE_NAME,,}" >>${GITHUB_ENV}
-          echo "FULL_IMAGE_NAME=ghcr.io/${IMAGE_NAME,,}" >>${GITHUB_ENV}
-        env:
-          IMAGE_NAME: '${{ github.repository }}'
-
-      - name: Download digests
-        uses: actions/download-artifact@v5
-        with:
-          pattern: digests-slim-*
-          path: /tmp/digests
-          merge-multiple: true
-
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
-      - name: Log in to the Container registry
-        uses: docker/login-action@v3
-        with:
-          registry: ${{ env.REGISTRY }}
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Extract metadata for Docker images (default slim tag)
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.FULL_IMAGE_NAME }}
-          tags: |
-            type=ref,event=branch
-            type=ref,event=tag
-            type=sha,prefix=git-
-            type=semver,pattern={{version}}
-            type=semver,pattern={{major}}.{{minor}}
-            type=raw,enable=${{ github.ref == 'refs/heads/main' }},prefix=,suffix=,value=slim
-          flavor: |
-            latest=${{ github.ref == 'refs/heads/main' }}
-            suffix=-slim,onlatest=true
-
-      - name: Create manifest list and push
-        working-directory: /tmp/digests
-        run: |
-          docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
-            $(printf '${{ env.FULL_IMAGE_NAME }}@sha256:%s ' *)
-
-      - name: Inspect image
-        run: |
-          docker buildx imagetools inspect ${{ env.FULL_IMAGE_NAME }}:${{ steps.meta.outputs.version }}
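The tag set produced by the removed slim jobs is driven entirely by docker/metadata-action inputs: each type=... line under tags contributes candidate tags, and flavor rewrites all of them globally. A commented sketch of just that logic, with an illustrative image name:

    - uses: docker/metadata-action@v5
      with:
        images: ghcr.io/open-webui/open-webui    # illustrative image name
        tags: |
          type=ref,event=branch                  # e.g. "dev" on branch builds
          type=semver,pattern={{version}}        # e.g. "0.6.24" on tag builds
          # bare "slim" tag on main only; the empty prefix=/suffix= opt this tag out of the global flavor suffix
          type=raw,enable=${{ github.ref == 'refs/heads/main' }},prefix=,suffix=,value=slim
        flavor: |
          latest=${{ github.ref == 'refs/heads/main' }}    # also emit "latest" on main
          suffix=-slim,onlatest=true             # append "-slim" to every tag, including "latest"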
.github/workflows/format-backend.yaml (4 changes, vendored)
@@ -30,10 +30,10 @@ jobs:
           - 3.12.x

     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v4

       - name: Set up Python
-        uses: actions/setup-python@v6
+        uses: actions/setup-python@v5
         with:
           python-version: '${{ matrix.python-version }}'
.github/workflows/format-build-frontend.yaml (8 changes, vendored)
@@ -24,10 +24,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout Repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4

       - name: Setup Node.js
-        uses: actions/setup-node@v5
+        uses: actions/setup-node@v4
         with:
           node-version: '22'
@@ -51,10 +51,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout Repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4

       - name: Setup Node.js
-        uses: actions/setup-node@v5
+        uses: actions/setup-node@v4
         with:
           node-version: '22'
.github/workflows/release-pypi.yml (6 changes, vendored)
@@ -16,15 +16,15 @@ jobs:
       id-token: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - name: Install Git
         run: sudo apt-get update && sudo apt-get install -y git
-      - uses: actions/setup-node@v5
+      - uses: actions/setup-node@v4
         with:
           node-version: 22
-      - uses: actions/setup-python@v6
+      - uses: actions/setup-python@v5
         with:
           python-version: 3.11
       - name: Build
@@ -3,6 +3,8 @@ pnpm-lock.yaml
 package-lock.json
 yarn.lock

+kubernetes/
+
 # Copy of .gitignore
 .DS_Store
 node_modules
CHANGELOG.md (790 changes)
@@ -5,796 +5,6 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.43] - 2025-12-22

### Fixed
- 🐍 **Python dependency installation issues** were resolved by correcting pip dependency handling, preventing installation failures in certain environments and improving setup reliability. [Commit](https://github.com/open-webui/open-webui/commit/5c5f87a)
- 🎙️ **Speech-to-Text default content type handling** was fixed and refactored to ensure correct MIME type usage, improving compatibility across STT providers and preventing transcription errors caused by incorrect defaults. [Commit](https://github.com/open-webui/open-webui/commit/4ab917c)
- 🖼️ **Temporary chat image handling** was fixed and refactored, ensuring images generated or edited in temporary chats are correctly processed, stored, and displayed without inconsistencies or missing references. [Commit](https://github.com/open-webui/open-webui/commit/423983f)
- 🎨 **Image action button fixed**, restoring the ability to trigger image generation, editing, and related image actions from the chat UI. [Commit](https://github.com/open-webui/open-webui/commit/def8a00)
## [0.6.42] - 2025-12-21

### Added
- 📚 Knowledge base file management was overhauled with server-side pagination loading 30 files at a time instead of loading entire collections at once, dramatically improving performance and responsiveness for large knowledge bases with hundreds or thousands of files, reducing initial load times and memory usage while adding server-side search and filtering, view options for files added by the user versus shared files, customizable sorting by name or date, and file authorship tracking with upload timestamps. [Commit](https://github.com/open-webui/open-webui/commit/94a8439105f30203ea9d729787c9c5978f5c22a2)
- ✨ Knowledge base file management was enhanced with automatic list refresh after file operations ensuring immediate UI updates, improved permission validation at the model layer, and automatic channel-file association for files uploaded with channel metadata. [Commit](https://github.com/open-webui/open-webui/commit/c15201620d03a9b60b800a34d8dc3426722c5b8b)
- 🔎 Knowledge command in chat input now uses server-side search for massive performance increases when selecting knowledge bases and files. [Commit](https://github.com/open-webui/open-webui/commit/0addc1ea461d7b4eee8fe0ca2fedd615b3988b0e)
- 🗂️ Knowledge workspace listing now uses server-side pagination loading 30 collections at a time with new search endpoints supporting query filtering and view options for created versus shared collections. [Commit](https://github.com/open-webui/open-webui/commit/ceae3d48e603f53313d5483abe94099e20e914e8)
- 📖 Knowledge workspace now displays all collections with read access including shared read-only collections, enabling users to discover and explore knowledge bases they don't own while maintaining proper access controls through visual "Read Only" badges and automatically disabled editing controls for name, description, file uploads, content editing, and deletion operations. [Commit](https://github.com/open-webui/open-webui/commit/693636d971d0e8398fa0c9ec3897686750007af5)
- 📁 Bulk website and YouTube video attachment now supports adding multiple URLs at once (newline-separated) with automatic YouTube detection and transcript retrieval, processed sequentially to prevent resource strain, and both websites and videos can now be added directly to knowledge bases through the workspace UI. [Commit](https://github.com/open-webui/open-webui/commit/7746e9f4b831f09953ad2b659b96e0fd52911031), [#6202](https://github.com/open-webui/open-webui/issues/6202), [#19587](https://github.com/open-webui/open-webui/pull/19587), [#8231](https://github.com/open-webui/open-webui/pull/8231)
- 🪟 Sidebar width is now resizable on desktop devices with persistent storage in localStorage, enforcing minimum and maximum width constraints (220px to 480px) while all layout components now reference the dynamic sidebar width via CSS variables for consistent responsive behavior. [Commit](https://github.com/open-webui/open-webui/commit/b364cf43d3e8fd3557f65f17bc285bfaca5ed368)
- 📝 Notes feature now supports server-side search and filtering with view options for notes created by the user versus notes shared with them, customizable sorting by name or date in both list and grid view modes within a redesigned interface featuring consolidated note management controls in a unified header, group-based permission sharing with read, write, and read-only access control displaying note authorship and sharing status for better collaboration, and paginated infinite scroll for improved performance with large note collections. [Commit](https://github.com/open-webui/open-webui/commit/9b24cddef6c4862bd899eb8d6332cafff54e871d)
- 👁️ Notes now support read-only access permissions, allowing users to share notes for viewing without granting edit rights, with the editor automatically becoming non-editable and appropriate UI indicators when read-only access is detected. [Commit](https://github.com/open-webui/open-webui/commit/4363df175d50e0f9729381ac2ba9b37a3c3a966d)
- 📄 Notes can now be created directly from the chat input field, allowing users to save drafted messages or content as notes without navigation or retyping. [Commit](https://github.com/open-webui/open-webui/commit/00c2b6ca405d617e3d7520953a00a36c19c790ec)
- 🪟 Sidebar folders, channels, and pinned models sections now automatically expand when creating new items or pinning models, providing immediate visual feedback for user actions. [Commit](https://github.com/open-webui/open-webui/commit/f826d3ed75213a0a1b31b50d030bfb1d5e91d199), [#19929](https://github.com/open-webui/open-webui/pull/19929)
- 📋 Chat file associations are now properly tracked in the database through a new "chat_file" table, enabling accurate file management across chats and ensuring proper cleanup of files when chats are deleted, while improving database consistency in multi-node deployments. [Commit](https://github.com/open-webui/open-webui/commit/f1bf4f20c53e6493f0eb6fa2f12cb84c2d22da52)
- 🖼️ User-uploaded images are now automatically converted from base64 to actual file storage on the server, eliminating large inline base64 strings from being stored in chat history and reducing message payload sizes while enabling better image management and sharing across multiple chats. [Commit](https://github.com/open-webui/open-webui/commit/f1bf4f20c53e6493f0eb6fa2f12cb84c2d22da52)
- 📸 Shared chats with generated or edited images now correctly display images when accessed by other users by properly linking generated images to their chat and message through the chat_file table, ensuring images remain accessible in shared chat links. [Commit](https://github.com/open-webui/open-webui/commit/446cc0ac6063402a743e949f50612376ed5a8437), [#19393](https://github.com/open-webui/open-webui/issues/19393)
- 📊 File viewer modal was significantly enhanced with native-like viewers for Excel/CSV spreadsheets rendering as interactive scrollable tables with multi-sheet navigation support, Markdown documents displaying with full typography including headers, lists, links, and tables, and source code files showing syntax highlighting, all accessible through a tabbed interface defaulting to raw text view. [#20035](https://github.com/open-webui/open-webui/pull/20035), [#2867](https://github.com/open-webui/open-webui/issues/2867)
- 📏 Chat input now displays an expand button in the top-right corner when messages exceed two lines, providing optional access to a full-screen editor for composing longer messages with enhanced workspace and visibility while temporarily disabling the main input to prevent editing conflicts. [Commit](https://github.com/open-webui/open-webui/commit/205c7111200c22da42e9b5fe1e676aec9cca6daa)
- 💬 Channel message data lazy loading was implemented, deferring attachment and file metadata retrieval until needed to improve initial message list load performance. [Commit](https://github.com/open-webui/open-webui/commit/54b7ec56d6bcd2d79addc1694b757dab18cf18c5)
- 🖼️ Channel image upload handling was optimized to process and store compressed images directly as files rather than inline data, improving memory efficiency and message load times. [Commit](https://github.com/open-webui/open-webui/commit/22f1b764a7ea1add0a896906a9ef00b4b6743adc)
- 🎥 Video file playback support was added to channel messages, enabling inline video viewing with native player controls. [Commit](https://github.com/open-webui/open-webui/commit/7b126b23d50a0bd36a350fe09dc1dbe3df105318)
- 🔐 LDAP authentication now supports user entries with multiple username attributes, correctly handling cases where the username field contains a list of values. [Commit](https://github.com/open-webui/open-webui/commit/379f888c9dc6dce21c3ef7a1fc455258aff993dc), [#19878](https://github.com/open-webui/open-webui/issues/19878)
- 👨👩👧👦 The "ENABLE_PUBLIC_ACTIVE_USERS_COUNT" environment variable now allows restricting active user count visibility to administrators, reducing backend load and addressing privacy concerns in large deployments. [#20027](https://github.com/open-webui/open-webui/pull/20027), [#13026](https://github.com/open-webui/open-webui/issues/13026)
- 🚀 Models page search input performance was optimized with a 300ms debounce to reduce server load and improve responsiveness. [#19832](https://github.com/open-webui/open-webui/pull/19832)
- 💨 Frontend performance was optimized by preventing unnecessary API calls for API Keys and Channels features when they are disabled in admin settings, reducing backend noise and improving overall system efficiency. [#20043](https://github.com/open-webui/open-webui/pull/20043), [#19967](https://github.com/open-webui/open-webui/issues/19967)
- 📎 Channel file association tracking was implemented, automatically linking uploaded files to their respective channels with a dedicated association table enabling better organization and future file management features within channels. [Commit](https://github.com/open-webui/open-webui/commit/2bccf8350d0915f69b8020934bb179c52e81b7b5)
- 👥 User profile previews now display group membership information for easier identification of user roles and permissions. [Commit](https://github.com/open-webui/open-webui/commit/2b1a29d44bde9fbc20ff9f0a5ded1ce8ded9d90d)
- 🌍 The "SEARXNG_LANGUAGE" environment variable now allows configuring search language for SearXNG queries, replacing the hardcoded "en-US" default with a configurable setting that defaults to "all". [#19909](https://github.com/open-webui/open-webui/pull/19909)
- ⏳ The "MINERU_API_TIMEOUT" environment variable now allows configuring request timeouts for MinerU document processing operations. [#20016](https://github.com/open-webui/open-webui/pull/20016), [#18495](https://github.com/open-webui/open-webui/issues/18495)
- 🔧 The "RAG_EXTERNAL_RERANKER_TIMEOUT" environment variable now allows configuring request timeouts for external reranker operations. [#20049](https://github.com/open-webui/open-webui/pull/20049), [#19900](https://github.com/open-webui/open-webui/issues/19900)
- 🎨 OpenAI GPT-IMAGE 1.5 model support was added for image generation and editing with automatic image size capabilities. [Commit](https://github.com/open-webui/open-webui/commit/4c2e5c93e9287479f56f780708656136849ccaee)
- 🔑 The "OAUTH_AUDIENCE" environment variable now allows OAuth providers to specify audience parameters for JWT access token generation. [#19768](https://github.com/open-webui/open-webui/pull/19768)
- ⏰ The "REDIS_SOCKET_CONNECT_TIMEOUT" environment variable now allows configuring socket connection timeouts for Redis and Sentinel connections, addressing potential failover and responsiveness issues in distributed deployments. [#19799](https://github.com/open-webui/open-webui/pull/19799), [Docs:#882](https://github.com/open-webui/docs/pull/882)
- ⏱️ The "WEB_LOADER_TIMEOUT" environment variable now allows configuring request timeouts for SafeWebBaseLoader operations. [#19804](https://github.com/open-webui/open-webui/pull/19804), [#19734](https://github.com/open-webui/open-webui/issues/19734)
- 🚀 Models API endpoint performance was optimized through batched model loading, eliminating N+1 queries and significantly reducing response times when filtering models by user permissions. [Commit](https://github.com/open-webui/open-webui/commit/0dd2cfe1f273fbacdbe90300a97c021f2e678656)
- 🔀 Custom model fallback handling was added, allowing workspace-created custom models to automatically fall back to the default chat model when their configured base model is not found; set "ENABLE_CUSTOM_MODEL_FALLBACK" to true to enable, preventing workflow disruption when base models are removed or renamed, while ensuring other requests remain unaffected. [Commit](https://github.com/open-webui/open-webui/commit/b35aeb8f46e0e278c6f4538382c2b6838e24cc5a), [#19985](https://github.com/open-webui/open-webui/pull/19985)
- 📡 A new /feedbacks/all/ids API endpoint was added to return only feedback IDs without metadata, significantly improving performance for external integrations working with large feedback collections. [Commit](https://github.com/open-webui/open-webui/commit/53c1ca64b7205d85f6de06bd69e3e265d15546b8)
- 📈 An experimental chat usage statistics endpoint (GET /api/v1/chats/stats/usage) was added with pagination support (50 chats per page) and comprehensive per-chat analytics including model usage counts, user and assistant message breakdowns, average response times calculated from message timestamps, average content lengths, and last activity timestamps; this endpoint remains experimental and not suitable for production use as it performs intensive calculations by processing entire message histories for each chat without caching. [Commit](https://github.com/open-webui/open-webui/commit/a7993f6f4e4591cd2aaa4718ece9e5623557d019)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for German, Danish, Finnish, Korean, Portuguese (Brazil), Simplified Chinese, Traditional Chinese, Catalan, and Spanish were enhanced and expanded.
### Fixed
- ⚡ External reranker operations were optimized to prevent event loop blocking by offloading synchronous HTTP requests to a thread pool using asyncio.to_thread(), eliminating application freezes during RAG reranking queries. [#20049](https://github.com/open-webui/open-webui/pull/20049), [#19900](https://github.com/open-webui/open-webui/issues/19900)
- 💭 Text loss in the explanation feature when using the "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE" environment variable was resolved by correcting newline handling in streaming responses. [#19829](https://github.com/open-webui/open-webui/pull/19829)
- 📚 Knowledge base batch file addition failures caused by Pydantic validation errors are now prevented by making the meta field optional in file metadata responses, allowing files without metadata to be processed correctly. [#20022](https://github.com/open-webui/open-webui/pull/20022), [#14220](https://github.com/open-webui/open-webui/issues/14220)
- 🗄️ PostgreSQL null byte insertion failures when attaching web pages or processing embedded content are now prevented by consolidating text sanitization logic across chat messages, web search results, and knowledge base documents, removing null bytes and invalid UTF-8 surrogates before database insertion. [#20072](https://github.com/open-webui/open-webui/pull/20072), [#19867](https://github.com/open-webui/open-webui/issues/19867), [#18201](https://github.com/open-webui/open-webui/issues/18201), [#15616](https://github.com/open-webui/open-webui/issues/15616)
- 🎫 MCP OAuth 2.1 token exchange failures are now fixed by removing duplicate credential passing that caused "ID1,ID1" concatenation and 401 errors from the token endpoint. [#20076](https://github.com/open-webui/open-webui/pull/20076), [#19823](https://github.com/open-webui/open-webui/issues/19823)
- 📝 Notes "Improve" action now works correctly after the streaming API change in v0.6.41 by ensuring uploaded files are fully retrieved with complete metadata before processing, restoring note improvement and summarization functionality. [Commit](https://github.com/open-webui/open-webui/commit/a3458f492c53a3b00405f59fbe1ea953fe364f18), [#20078](https://github.com/open-webui/open-webui/discussions/20078)
- 🔑 MCP OAuth 2.1 tool servers now work correctly in multi-node deployments through lazy-loading of OAuth clients from Redis-synced configuration, eliminating 404 errors when load balancers route requests to nodes that didn't process the original config update. [#20076](https://github.com/open-webui/open-webui/pull/20076), [#19902](https://github.com/open-webui/open-webui/pull/19902), [#19901](https://github.com/open-webui/open-webui/issues/19901)
- 🧩 Chat loading failures when channels permissions were disabled are now prevented through graceful error handling. [Commit](https://github.com/open-webui/open-webui/commit/5c2df97f04cce5cb7087d288f816f91a739688c1)
- 🔍 Search bar freezing and crashing issues in Models, Chat, and Archived Chat pages caused by excessively long queries exceeding server URL limits were resolved by truncating queries to 500 characters, and knowledge base layout shifting with long names was fixed by adjusting flex container properties. [#19832](https://github.com/open-webui/open-webui/pull/19832)
- 🎛️ Rate limiting errors (HTTP 429) with Brave Search free tier when generating multiple queries are now prevented through asyncio.Semaphore-based concurrency control applied globally to all search engines. [#20070](https://github.com/open-webui/open-webui/pull/20070), [#20003](https://github.com/open-webui/open-webui/issues/20003), [#14107](https://github.com/open-webui/open-webui/issues/14107), [#15134](https://github.com/open-webui/open-webui/issues/15134)
- 💥 UI crashes and white screen errors caused by null chat lists during loading or network failures were prevented by adding null safety checks to chat iteration in folder placeholders and archived chat modals. [#19898](https://github.com/open-webui/open-webui/pull/19898)
- 🧩 Chat overview tab crashes caused by undefined model references were resolved by adding proper null checks when accessing deleted or ejected models. [#19935](https://github.com/open-webui/open-webui/pull/19935)
- 🔄 MultiResponseMessages component crashes when navigating chat history after removing or changing selected models are now prevented through proper component re-initialization. [Commit](https://github.com/open-webui/open-webui/commit/870e29e3738da968c396b70532f365a3c2f71995), [#18599](https://github.com/open-webui/open-webui/issues/18599)
- 🚫 Channel API endpoint access is now correctly blocked when channels are globally disabled, preventing users with channel permissions from accessing channel data via API requests when the feature is turned off in admin settings. [#19957](https://github.com/open-webui/open-webui/pull/19957), [#19914](https://github.com/open-webui/open-webui/issues/19914)
- 👤 User list popup display in the admin panel was fixed to correctly track user identity when sorting or filtering changes the list order, preventing popups from showing incorrect user information. [Commit](https://github.com/open-webui/open-webui/commit/ae47101dc6aef2c7d8ae0d843985341fff820057), [#20046](https://github.com/open-webui/open-webui/issues/20046)
- 👥 User selection in the "Edit User Group" modal now preserves pagination position, allowing administrators to select multiple users across pages without resetting to page 1. [#19959](https://github.com/open-webui/open-webui/pull/19959)
- 📸 Model avatar images now update immediately in the admin models list through proper Cache-Control headers, eliminating the need for manual cache clearing. [#19959](https://github.com/open-webui/open-webui/pull/19959)
- 🔒 Temporary chat permission enforcement now correctly prevents users from enabling the feature through personal settings when disabled in default or group permissions. [#19785](https://github.com/open-webui/open-webui/issues/19785)
- 🎨 Image editing with reference images now correctly uses both previously generated images and newly uploaded reference images. [Commit](https://github.com/open-webui/open-webui/commit/bcd50ed8f1b7387fd700538ae0d74fc72f3c53d0)
- 🧠 Image generation and editing operations are now explicitly injected into system context, improving LLM comprehension even for weaker models so they reliably acknowledge operations instead of incorrectly claiming they cannot generate images. [Commit](https://github.com/open-webui/open-webui/commit/28b2fcab0cd036dbe646a66fe81890f288c77121)
- 📑 Source citation rendering errors when citation syntax appeared in user messages or contexts without source data were resolved. [Commit](https://github.com/open-webui/open-webui/commit/3c8f1cf8e58d52e86375634b0381374298b1b4f3)
- 📄 DOCX file parsing now works correctly in temporary chats through client-side text extraction, preventing raw data from being displayed. [Commit](https://github.com/open-webui/open-webui/commit/6993b0b40b10af8cdbe6626702cc94080fff9e22)
- 🔧 Pipeline settings save failures when valve properties contain null values are now handled correctly. [#19791](https://github.com/open-webui/open-webui/pull/19791)
- ⚙️ Model usage settings are now correctly preserved when switching between models instead of being unexpectedly cleared or reset. [#19868](https://github.com/open-webui/open-webui/pull/19868), [#19549](https://github.com/open-webui/open-webui/issues/19549)
- 🛡️ Invalid PASSWORD_VALIDATION_REGEX_PATTERN configurations no longer cause startup warnings, with automatic fallback to the default pattern when regex compilation fails. [#20058](https://github.com/open-webui/open-webui/pull/20058)
- 🎯 The DefaultFiltersSelector component in model settings now correctly displays when only global toggleable filters are present, enabling per-model default configuration. [#20066](https://github.com/open-webui/open-webui/pull/20066)
- 🎤 Audio file upload failures caused by MIME type matching issues with spacing variations and codec parameters were resolved by implementing proper MIME type parsing. [#17771](https://github.com/open-webui/open-webui/pull/17771), [#17761](https://github.com/open-webui/open-webui/issues/17761)
- ⌨️ Regenerate response keyboard shortcut now only activates when chat input is selected, preventing unintended regeneration when modals are open or other UI elements are focused. [#19875](https://github.com/open-webui/open-webui/pull/19875)
- 📋 Log truncation issues in Docker deployments during application crashes were resolved by disabling Python stdio buffering, ensuring complete diagnostic output is captured. [#19844](https://github.com/open-webui/open-webui/issues/19844)
- 🔴 Redis cluster compatibility issues with disabled KEYS command were resolved by replacing blocking KEYS operations with production-safe SCAN iterations. [#19871](https://github.com/open-webui/open-webui/pull/19871), [#15834](https://github.com/open-webui/open-webui/issues/15834)
- 🔤 File attachment container layout issues when using RTL languages were resolved by applying chat direction settings to file containers across all message types. [#19891](https://github.com/open-webui/open-webui/pull/19891), [#19742](https://github.com/open-webui/open-webui/issues/19742)
- 🔃 Ollama model list now automatically refreshes after model deletion, preventing deleted models from persisting in the UI and being inadvertently re-downloaded during subsequent pull operations. [#19912](https://github.com/open-webui/open-webui/pull/19912)
- 🌐 Ollama Cloud web search now correctly applies domain filtering to search results. [Commit](https://github.com/open-webui/open-webui/commit/d4bd938a77c22409a1643c058b937a06e07baca9)
- 📜 Tool specification serialization now preserves non-ASCII characters including Chinese text, improving LLM comprehension and tool selection accuracy by avoiding Unicode escape sequences. [#19942](https://github.com/open-webui/open-webui/pull/19942)
- 🛟 Model editor stability was improved with null safety checks for tools, functions, and file input operations, preventing crashes when stores are undefined or file objects are invalid. [#19939](https://github.com/open-webui/open-webui/pull/19939)
- 🗣️ MoA completion handling stability was improved with null safety checks for response objects, boolean casting for settings, and proper timeout type definitions. [#19921](https://github.com/open-webui/open-webui/pull/19921)
- 🎛️ Chat functionality failures caused by empty logit_bias parameter values are now prevented by properly handling empty strings in the parameter parsing middleware. [#19982](https://github.com/open-webui/open-webui/issues/19982)
- 🔏 Administrators can now delete read-only knowledge bases from deleted users, resolving permission issues that previously prevented cleanup of orphaned read-only content. [Commit](https://github.com/open-webui/open-webui/commit/59d6eb2badf46f9c2b1e879484ac33432915b575)
- 💾 Cloned prompts and tools now correctly preserve their access control settings instead of being reset to null, preventing unintended visibility changes when duplicating private or restricted items. [#19960](https://github.com/open-webui/open-webui/pull/19960), [#19360](https://github.com/open-webui/open-webui/issues/19360)
- 🎚️ Text scale adjustment buttons in Interface Settings were fixed to correctly increment and decrement the scale value. [#19699](https://github.com/open-webui/open-webui/pull/19699)
- 🎭 Group channel invite button text visibility in light theme was corrected to display properly against dark backgrounds. [#19828](https://github.com/open-webui/open-webui/issues/19828)
- 📁 The move button is now hidden when no folders exist, preventing display of non-functional controls. [#19705](https://github.com/open-webui/open-webui/pull/19705)
- 📦 Qdrant client dependency was updated to resolve startup version incompatibility warnings. [#19757](https://github.com/open-webui/open-webui/pull/19757)
- 🧮 The "ENABLE_ASYNC_EMBEDDING" environment variable is now correctly applied to embedding operations when configured exclusively via environment variables. [#19748](https://github.com/open-webui/open-webui/pull/19748)
- 🌄 The "COMFYUI_WORKFLOW_NODES" and "IMAGES_EDIT_COMFYUI_WORKFLOW_NODES" environment variables are now correctly loaded and parsed as JSON lists, and the configuration key name was corrected from "COMFYUI_WORKFLOW" to "COMFYUI_WORKFLOW_NODES". [#19918](https://github.com/open-webui/open-webui/pull/19918), [#19886](https://github.com/open-webui/open-webui/issues/19886)
- 💫 Channel name length is now limited to 128 characters with validation to prevent display issues caused by excessively long names. [Commit](https://github.com/open-webui/open-webui/commit/f509f5542dde384d34402f6df763f49a06bea109)
- 🔐 Invalid PASSWORD_VALIDATION_REGEX_PATTERN configurations no longer cause startup warnings, with automatic fallback to the default pattern when regex compilation fails. [#20058](https://github.com/open-webui/open-webui/pull/20058)
- 🔎 Bocha search with filter list functionality now works correctly by returning results as a list instead of a dictionary wrapper, ensuring compatibility with result filtering operations. [Commit](https://github.com/open-webui/open-webui/commit/b5bd8704fe1672da839bb3be6210d7cb494797ce), [#19733](https://github.com/open-webui/open-webui/issues/19733)
### Changed
- ⚠️ This release includes database schema changes; multi-worker, multi-server, or load-balanced deployments must update all instances simultaneously rather than performing rolling updates, as running mixed versions will cause application failures due to schema incompatibility between old and new instances.
- 📡 WEB_SEARCH_CONCURRENT_REQUESTS default changed from 10 to 0 (unlimited) — This setting now applies to all search engines instead of only DuckDuckGo; previously users were implicitly limited to 10 concurrent queries, but now have unlimited parallel requests by default; set to 1 for sequential execution if using rate-limited APIs like Brave free tier. [#20070](https://github.com/open-webui/open-webui/pull/20070)
- 💾 SQLCipher absolute path handling was fixed to properly support absolute database paths (e.g., "/app/data.db") instead of incorrectly stripping leading slashes and converting them to relative paths; this restores functionality for Docker volume mounts and explicit absolute path configurations while maintaining backward compatibility with relative paths. [#20074](https://github.com/open-webui/open-webui/pull/20074)
- 🔌 Knowledge base file listing API was redesigned with paginated responses and new filtering parameters; the GET /knowledge/{id}/files endpoint now returns paginated results with user attribution instead of embedding all files in the knowledge object, which may require updates to custom integrations or scripts accessing knowledge base data programmatically. [Commit](https://github.com/open-webui/open-webui/commit/94a8439105f30203ea9d729787c9c5978f5c22a2)
- 🗑️ Legacy knowledge base support for deprecated document collections and tag-based collections was removed; users with pre-knowledge base documents must migrate to the current knowledge base system as legacy items will no longer appear in selectors or command menus. [Commit](https://github.com/open-webui/open-webui/commit/a934dc997ed67a036dd7975e380f8036c447d3ed)
- 🔨 Source-level log environment variables (AUDIO_LOG_LEVEL, CONFIG_LOG_LEVEL, MODELS_LOG_LEVEL, etc.) were removed as they provided limited configuration options and added significant complexity across 100+ files; the GLOBAL_LOG_LEVEL environment variable, which already took precedence over source-level settings, now serves as the exclusive logging configuration method. [#20045](https://github.com/open-webui/open-webui/pull/20045)
- 🐍 LangChain was upgraded to version 1.2.0, representing a major dependency update and significant progress toward Python 3.13 compatibility while improving RAG pipeline functionality for document loading and retrieval operations. [#19991](https://github.com/open-webui/open-webui/pull/19991)
## [0.6.41] - 2025-12-02

### Added
- 🚦 Sign-in rate limiting was implemented to protect against brute force attacks, limiting login attempts to 15 per 3-minute window per email address using Redis with automatic fallback to in-memory storage when Redis is unavailable. [Commit](https://github.com/open-webui/open-webui/commit/7b166370432414ce8f186747fb098e0c70fb2d6b)
- 📂 Administrators can now globally disable the folders feature and control user-level folder permissions through the admin panel, enabling minimalist interface configurations for deployments that don't require workspace organization features. [#19529](https://github.com/open-webui/open-webui/pull/19529), [#19210](https://github.com/open-webui/open-webui/discussions/19210), [#18459](https://github.com/open-webui/open-webui/discussions/18459), [#18299](https://github.com/open-webui/open-webui/discussions/18299)
- 👥 Group channels were introduced as a new channel type enabling membership-based collaboration spaces where users explicitly join as members rather than accessing through permissions, with support for public or private visibility, automatic member inclusion from specified user groups, member role tracking with invitation metadata, and post-creation member management allowing channel managers to add or remove members through the channel info modal. [Commit](https://github.com/open-webui/open-webui/commit/f589b7c1895a6a77166c047891acfa21bc0936c4), [Commit](https://github.com/open-webui/open-webui/commit/3f1d9ccbf8443a2fa5278f36202bad930a216680)
- 💬 Direct Message channels were introduced with a dedicated channel type selector and multi-user member selection interface, enabling private conversations between specific users without requiring full channel visibility. [Commit](https://github.com/open-webui/open-webui/commit/64b4d5d9c280b926746584aaf92b447d09deb386)
- 📨 Direct Message channels now support a complete user-to-user messaging system with member-based access control, automatic deduplication for one-on-one conversations, optional channel naming, and distinct visual presentation using participant avatars instead of channel icons. [Commit](https://github.com/open-webui/open-webui/commit/acccb9afdd557274d6296c70258bb897bbb6652f)
- 🙈 Users can now hide Direct Message channels from their sidebar while preserving message history, with automatic reactivation when new messages arrive from other participants, providing a cleaner interface for managing active conversations. [Commit](https://github.com/open-webui/open-webui/commit/acccb9afdd557274d6296c70258bb897bbb6652f)
- ☑️ A comprehensive user selection component was added to the channel creation modal, featuring search functionality, sortable user lists, pagination support, and multi-select checkboxes for building Direct Message participant lists. [Commit](https://github.com/open-webui/open-webui/commit/acccb9afdd557274d6296c70258bb897bbb6652f)
- 🔴 Channel unread message count tracking was implemented with visual badge indicators in the sidebar, automatically updating counts in real-time and marking messages as read when users view channels, with join/leave functionality to manage membership status. [Commit](https://github.com/open-webui/open-webui/commit/64b4d5d9c280b926746584aaf92b447d09deb386)
- 📌 Message pinning functionality was added to channels, allowing users to pin important messages for easy reference with visual highlighting, a dedicated pinned messages modal accessible from the navbar, and complete backend support for tracking pinned status, pin timestamp, and the user who pinned each message. [Commit](https://github.com/open-webui/open-webui/commit/64b4d5d9c280b926746584aaf92b447d09deb386), [Commit](https://github.com/open-webui/open-webui/commit/aae2fce17355419d9c29f8100409108037895201)
- 🟢 Direct Message channels now display an active status indicator for one-on-one conversations, showing a green dot when the other participant is currently online or a gray dot when offline. [Commit](https://github.com/open-webui/open-webui/commit/4b6773885cd7527c5a56b963781dac5e95105eec), [Commit](https://github.com/open-webui/open-webui/commit/39645102d14f34e71b34e5ddce0625790be33f6f)
- 🆔 Users can now start Direct Message conversations directly from user profile previews by clicking the "Message" button, enabling quick access to private messaging without navigating away from the current channel. [Commit](https://github.com/open-webui/open-webui/commit/a0826ec9fedb56320532616d568fa59dda831d4e)
- ⚡ Channel messages now appear instantly when sent using optimistic UI rendering, displaying with a pending state while the server confirms delivery, providing a more responsive messaging experience. [Commit](https://github.com/open-webui/open-webui/commit/25994dd3da90600401f53596d4e4fb067c1b8eaa)
- 👍 Channel message reactions now display the names of users who reacted when hovering over the emoji, showing up to three names with a count for additional reactors. [Commit](https://github.com/open-webui/open-webui/commit/05e79bdd0c7af70b631e958924e3656db1013b80)
- 🛠️ Channel creators can now edit and delete their own group and DM channels without requiring administrator privileges, enabling users to manage the channels they create independently. [Commit](https://github.com/open-webui/open-webui/commit/f589b7c1895a6a77166c047891acfa21bc0936c4)
- 🔌 A new API endpoint was added to directly get or create a Direct Message channel with a specific user by their ID, streamlining programmatic DM channel creation for integrations and frontend workflows. [Commit](https://github.com/open-webui/open-webui/commit/f589b7c1895a6a77166c047891acfa21bc0936c4)
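
  A sketch of calling such an endpoint from Python; the route shown is a hypothetical placeholder (check the linked commit for the real path), and the base URL and token are assumptions.

  ```python
  import requests

  BASE_URL = "http://localhost:3000/api/v1"
  HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

  def get_or_create_dm_channel(user_id: str) -> dict:
      # Hypothetical route; consult the linked commit for the actual path.
      response = requests.post(f"{BASE_URL}/channels/dm/{user_id}", headers=HEADERS)
      response.raise_for_status()
      return response.json()

  channel = get_or_create_dm_channel("target-user-id")
  ```
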
- 💭 Users can now set a custom status with an emoji and message that displays in profile previews, the sidebar user menu, and Direct Message channel items in the sidebar, with the ability to clear status at any time, providing visibility into availability or current focus similar to team communication platforms. [Commit](https://github.com/open-webui/open-webui/commit/51621ba91a982e52da168ce823abffd11ad3e4fa), [Commit](https://github.com/open-webui/open-webui/commit/f5e8d4d5a004115489c35725408b057e24dfe318)
- 📤 A group export API endpoint was added, enabling administrators to export complete group data including member lists for backup and migration purposes. [Commit](https://github.com/open-webui/open-webui/commit/09b6ea38c579659f8ca43ae5ea3746df3ac561ad)
- 📡 A new API endpoint was added to retrieve all users belonging to a specific group, enabling programmatic access to group membership information for administrative workflows. [Commit](https://github.com/open-webui/open-webui/commit/01868e856a10f474f74fbd1b4425dafdf949222f)
- 👁️ The admin user list now displays an active status indicator next to each user, showing a visual green dot for users who have been active within the last three minutes. [Commit](https://github.com/open-webui/open-webui/commit/1b095d12ff2465b83afa94af89ded9593f8a8655)
- 🔑 The admin user edit modal now displays OAuth identity information with a per-provider breakdown, showing each linked identity provider and its associated subject identifier separately. [#19573](https://github.com/open-webui/open-webui/pull/19573)
- 🧩 OAuth role claim parsing now respects the "OAUTH_ROLES_SEPARATOR" configuration, enabling proper parsing of roles returned as comma-separated strings and providing consistent behavior with group claim handling. [#19514](https://github.com/open-webui/open-webui/pull/19514)
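
  Conceptually, the parsing now looks something like this sketch (illustrative only, not the project's code):

  ```python
  import os

  def parse_role_claim(claim_value) -> list[str]:
      separator = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")
      if isinstance(claim_value, str):
          # "admin;editor" -> ["admin", "editor"] when the separator is ";"
          return [role.strip() for role in claim_value.split(separator) if role.strip()]
      return list(claim_value)  # provider already returned a list of roles

  os.environ["OAUTH_ROLES_SEPARATOR"] = ";"
  print(parse_role_claim("admin;editor"))  # ['admin', 'editor']
  ```
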
- 🎛️ Channel feature access can now be controlled through both the "USER_PERMISSIONS_FEATURES_CHANNELS" environment variable and group permission toggles in the admin panel, allowing administrators to restrict channel functionality for specific users or groups while defaulting to enabled for all users. [Commit](https://github.com/open-webui/open-webui/commit/f589b7c1895a6a77166c047891acfa21bc0936c4)
- 🎨 The model editor interface was refined with access control settings moved to a dedicated modal, group member counts now displayed when configuring permissions, reorganized layout with improved visual hierarchy, and redesigned prompt suggestions cards with tooltips for field guidance. [Commit](https://github.com/open-webui/open-webui/commit/e65d92fc6f49da5ca059e1c65a729e7973354b99), [Commit](https://github.com/open-webui/open-webui/commit/9d39b9b42c653ee2acf2674b2df343ecbceb4954)
- 🏗️ Knowledge base file management was rebuilt with a dedicated database table replacing the previous JSON array storage, enabling pagination support for large knowledge bases, significantly faster file listing performance, and more reliable file-knowledge base relationship tracking. [Commit](https://github.com/open-webui/open-webui/commit/d19023288e2ca40f86e2dc3fd9f230540f3e70d7)
- ☁️ Azure Document Intelligence model selection was added, allowing administrators to specify which model to use for document processing via the "DOCUMENT_INTELLIGENCE_MODEL" environment variable or admin UI setting, with "prebuilt-layout" as the default. [#19692](https://github.com/open-webui/open-webui/pull/19692), [Docs:#872](https://github.com/open-webui/docs/pull/872)
- 🚀 Milvus multitenancy vector database performance was improved by removing manual flush calls after upsert operations, eliminating rate limit errors and reducing load on etcd and MinIO/S3 storage by allowing Milvus to manage segment persistence automatically via its WAL and auto-flush policies. [#19680](https://github.com/open-webui/open-webui/pull/19680)
- ✨ Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌍 Translations for German, French, Portuguese (Brazil), Catalan, Simplified Chinese, and Traditional Chinese were enhanced and expanded.
### Fixed
- 🔄 Tool call response token duplication was fixed by removing redundant message history additions in non-native function calling mode, resolving an issue where tool results were included twice in the context, doubling token consumption. [#19656](https://github.com/open-webui/open-webui/issues/19656), [Commit](https://github.com/open-webui/open-webui/commit/52ccab8)
- 🛡️ Web search domain filtering was corrected to properly block results when any resolved hostname or IP address matches a blocked domain, preventing blocked sites from appearing in search results due to permissive hostname resolution logic that previously allowed results through if any single resolved address passed the filter. [#19670](https://github.com/open-webui/open-webui/pull/19670), [#19669](https://github.com/open-webui/open-webui/issues/19669)
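
  The corrected logic amounts to blocking when any resolved name or address matches, roughly like this sketch (illustrative Python; the real filter list is configurable):

  ```python
  import socket

  BLOCKED_DOMAINS = {"blocked.example.com"}  # illustrative blocklist

  def resolved_identities(hostname: str) -> set[str]:
      # The hostname itself plus every address it resolves to.
      try:
          infos = socket.getaddrinfo(hostname, None)
      except socket.gaierror:
          return {hostname}
      return {hostname} | {info[4][0] for info in infos}

  def is_blocked(hostname: str) -> bool:
      # Block if ANY resolved identity matches; the old behavior effectively
      # let a result through if any single resolved address passed the filter.
      return any(identity in BLOCKED_DOMAINS for identity in resolved_identities(hostname))
  ```
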
- 🧠 Custom models based on Ollama or OpenAI now properly inherit the connection type from their base model, ensuring they appear correctly in the "Local" or "External" model selection tabs instead of only appearing under "All". [#19183](https://github.com/open-webui/open-webui/issues/19183), [Commit](https://github.com/open-webui/open-webui/commit/39f7575)
- 🐍 SentenceTransformers embedding initialization was fixed by updating the transformers dependency to version 4.57.3, resolving a regression in v0.6.40 where document ingestion failed with "'NoneType' object has no attribute 'encode'" errors due to a bug in transformers 4.57.2. [#19512](https://github.com/open-webui/open-webui/issues/19512), [#19513](https://github.com/open-webui/open-webui/pull/19513)
- 📈 Active user count accuracy was significantly improved by replacing the socket-based USER_POOL tracking with a database-backed heartbeat mechanism, resolving long-standing issues where Redis deployments displayed inflated user counts due to stale sessions never being cleaned up on disconnect. [#16074](https://github.com/open-webui/open-webui/discussions/16074), [Commit](https://github.com/open-webui/open-webui/commit/70948f8803e417459d5203839f8077fdbfbbb213)
- 👥 Default group assignment now applies consistently across all user registration methods including OAuth/SSO, LDAP, and admin-created users, fixing an issue where the "DEFAULT_GROUP_ID" setting was only being applied to users who signed up via the email/password signup form. [#19685](https://github.com/open-webui/open-webui/pull/19685)
- 🔦 Model list filtering in workspaces was corrected to properly include models shared with user groups, ensuring members can view models they have write access to through group permissions. [#19461](https://github.com/open-webui/open-webui/issues/19461), [Commit](https://github.com/open-webui/open-webui/commit/69722ba973768a5f689f2e2351bf583a8db9bba8)
- 🖼️ User profile image display in preview contexts was fixed by resolving a Pydantic validation error that prevented proper rendering. [Commit](https://github.com/open-webui/open-webui/commit/c7eb7136893b0ddfdc5d55ffc7a05bd84a00f5d6)
- 🔒 Redis TLS connection failures were resolved by updating the python-socketio dependency to version 5.15.0, restoring support for the "rediss://" URL schema. [#19480](https://github.com/open-webui/open-webui/issues/19480), [#19488](https://github.com/open-webui/open-webui/pull/19488)
- 📝 MCP tool server configuration was corrected to properly handle the "Function Name Filter List" as both string and list types, preventing AttributeError when the field is empty and ensuring backward compatibility. [#19486](https://github.com/open-webui/open-webui/issues/19486), [Commit](https://github.com/open-webui/open-webui/commit/c5b73d71843edc024325d4a6e625ec939a747279), [Commit](https://github.com/open-webui/open-webui/commit/477097c2e42985c14892301d0127314629d07df1)
- 📎 Web page attachment failures causing TypeError on metadata checks were resolved by correcting async threadpool parameter passing in vector database operations. [#19493](https://github.com/open-webui/open-webui/issues/19493), [Commit](https://github.com/open-webui/open-webui/commit/4370dee79e19d77062c03fba81780cb3b779fca3)
- 💾 Model allowlist persistence in multi-worker deployments was fixed by implementing Redis-based shared state for the internal models dictionary, ensuring configuration changes are consistently visible across all worker processes. [#19395](https://github.com/open-webui/open-webui/issues/19395), [Commit](https://github.com/open-webui/open-webui/commit/b5e5617d7f7ad3e4eec9f15f4cc7f07cb5afc2fa)
- ⏳ Chat history infinite loading was prevented by enhancing message data structure to properly track parent message relationships, resolving issues where missing parentId fields caused perpetual loading states. [#19225](https://github.com/open-webui/open-webui/issues/19225), [Commit](https://github.com/open-webui/open-webui/commit/ff4b1b9862d15adfa15eac17d2ce066c3d8ae38f)
- 🩹 Database migration robustness was improved by automatically detecting and correcting missing primary key constraints on the user table, ensuring successful schema upgrades for databases with non-standard configurations. [#19487](https://github.com/open-webui/open-webui/discussions/19487), [Commit](https://github.com/open-webui/open-webui/commit/453ea9b9a167c0b03d86c46e6efd086bf10056ce)
- 🏷️ OAuth group assignment now updates correctly on first login when users transition from admin to user role, ensuring group memberships reflect immediately when group management is enabled. [#19475](https://github.com/open-webui/open-webui/issues/19475), [#19476](https://github.com/open-webui/open-webui/pull/19476)
- 💡 Knowledge base file tooltips now properly display the parent collection name when referencing files with the hash symbol, preventing confusion between identically-named files in different collections. [#19491](https://github.com/open-webui/open-webui/issues/19491), [Commit](https://github.com/open-webui/open-webui/commit/3fe5a47b0ff84ac97f8e4ff56a19fa2ec065bf66)
- 🔐 Knowledge base file access inconsistencies were resolved where authorized non-admin users received "Not found" or permission errors for certain files due to race conditions during upload causing mismatched collection_name values, with file access validation now properly checking against knowledge base file associations. [#18689](https://github.com/open-webui/open-webui/issues/18689), [#19523](https://github.com/open-webui/open-webui/pull/19523), [Commit](https://github.com/open-webui/open-webui/commit/e301d1962e45900ababd3eabb7e9a2ad275a5761)
- 📦 Knowledge API batch file addition endpoint was corrected to properly handle async operations, resolving 500 Internal Server Error responses when adding multiple files simultaneously. [#19538](https://github.com/open-webui/open-webui/issues/19538), [Commit](https://github.com/open-webui/open-webui/commit/28659f60d94feb4f6a99bb1a5b54d7f45e5ea10f)
- 🤖 Embedding model auto-update functionality was fixed to properly respect the "RAG_EMBEDDING_MODEL_AUTO_UPDATE" setting by correctly passing the flag to the model path resolver, ensuring models update as expected when the auto-update option is enabled. [#19687](https://github.com/open-webui/open-webui/pull/19687)
- 📉 API response payload sizes were dramatically reduced by removing base64-encoded profile images from most endpoints, eliminating multi-megabyte responses caused by high-resolution avatars and enabling better browser caching. [#19519](https://github.com/open-webui/open-webui/issues/19519), [Commit](https://github.com/open-webui/open-webui/commit/384753c4c17f62a68d38af4bbcf55a21ee08e0f2)
- 📞 Redundant API calls on the admin user overview page were eliminated by consolidating reactive statements, reducing four duplicate requests to a single efficient call and significantly improving page load performance. [#19509](https://github.com/open-webui/open-webui/issues/19509), [Commit](https://github.com/open-webui/open-webui/commit/9f89cc5e9f7e1c6c9e2bc91177e08df7c79f66f9)
- 🧹 Duplicate API calls on the workspace models page were eliminated by removing redundant model list fetching, reducing two identical requests to a single call and improving page responsiveness. [#19517](https://github.com/open-webui/open-webui/issues/19517), [Commit](https://github.com/open-webui/open-webui/commit/d1bbf6be7a4d1d53fa8ad46ca4f62fc4b2e6a8cb)
- 🔘 The model valves button was corrected to prevent unintended form submission by adding an explicit button type attribute, ensuring it no longer triggers message sending when the input area contains text. [#19534](https://github.com/open-webui/open-webui/pull/19534)
- 🗑️ Ollama model deletion was fixed by correcting the request payload format and ensuring the model selector properly displays the placeholder option. [Commit](https://github.com/open-webui/open-webui/commit/0f3156651c64bc5af188a65fc2908bdcecf30c74)
- 🎨 Image generation in temporary chats was fixed by correctly handling local chat sessions that are not persisted to the database. [Commit](https://github.com/open-webui/open-webui/commit/a7c7993bbf3a21cb7ba416525b89233cf2ad877f)
- 🕵️‍♂️ Audit logging was fixed by correctly awaiting the async user authentication call, resolving failures where coroutine objects were passed instead of user data. [#19658](https://github.com/open-webui/open-webui/pull/19658), [Commit](https://github.com/open-webui/open-webui/commit/dba86bc)
- 🌙 Dark mode select dropdown styling was corrected to use proper background colors, fixing an issue where dropdown borders and hover states appeared white instead of matching the dark theme. [#19693](https://github.com/open-webui/open-webui/pull/19693), [#19442](https://github.com/open-webui/open-webui/issues/19442)
- 🔍 Milvus vector database query filtering was fixed by correcting string quote handling in filter expressions and using the proper parameter name for queries, resolving false "duplicate content detected" errors that prevented uploading multiple files to knowledge bases. [#19602](https://github.com/open-webui/open-webui/pull/19602), [#18119](https://github.com/open-webui/open-webui/issues/18119), [#16345](https://github.com/open-webui/open-webui/issues/16345), [#17088](https://github.com/open-webui/open-webui/issues/17088), [#18485](https://github.com/open-webui/open-webui/issues/18485)
- 🆙 Milvus multitenancy vector database was updated to use query_iterator() for improved robustness and consistency with the standard Milvus implementation, fixing the same false duplicate detection errors and improving handling of large result sets in multi-tenant deployments. [#19695](https://github.com/open-webui/open-webui/pull/19695)
### Changed
- ⚠️ **IMPORTANT for Multi-Instance Deployments** — This release includes database schema changes; multi-worker, multi-server, or load-balanced deployments must update all instances simultaneously rather than performing rolling updates, as running mixed versions will cause application failures due to schema incompatibility between old and new instances.
- 👮 Channel creation is now restricted to administrators only, with the channel add button hidden for regular users to maintain organizational control over communication channels. [Commit](https://github.com/open-webui/open-webui/commit/421aba7cd7cd708168b1f2565026c74525a67905)
- ➖ The active user count indicator was removed from the bottom-left user menu in the sidebar to streamline the interface. [Commit](https://github.com/open-webui/open-webui/commit/848f3fd4d86ca66656e0ff0335773945af8d7d8d)
- 🗂️ The user table was restructured with API keys migrated to a dedicated table supporting future multi-key functionality, OAuth data storage converted to a JSON structure enabling multiple identity providers per user account, and internal column types optimized from TEXT to JSON for the "info" and "settings" fields, with automatic migration preserving all existing data and associations. [#19573](https://github.com/open-webui/open-webui/pull/19573)
- 🔄 The knowledge base API was restructured to support the new file relationship model.
## [0.6.40] - 2025-11-25
### Fixed
- 🗄️ A critical PostgreSQL user listing performance issue was resolved by removing a redundant count operation that caused severe database slowdowns and potential timeouts when viewing user lists in admin panels.
## [0.6.39] - 2025-11-25
### Added
- 💬 A user list modal was added to channels, displaying all users with access and featuring search, sorting, and pagination capabilities. [Commit](https://github.com/open-webui/open-webui/commit/c0e120353824be00a2ef63cbde8be5d625bd6fd0)
- 💬 Channel navigation now displays the total number of users with access to the channel. [Commit](https://github.com/open-webui/open-webui/commit/3b5710d0cd445cf86423187f5ee7c40472a0df0b)
- 🔌 Tool servers and MCP connections now support function name filtering, allowing administrators to selectively enable or block specific functions using allow/block lists. [Commit](https://github.com/open-webui/open-webui/commit/743199f2d097ae1458381bce450d9025a0ab3f3d)
- ⚡ A toggle to disable parallel embedding processing was added via "ENABLE_ASYNC_EMBEDDING", allowing sequential processing for rate-limited or resource-constrained local embedding setups. [#19444](https://github.com/open-webui/open-webui/pull/19444)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Localization improvements were made for German (de-DE) and Portuguese (Brazil) translations.
### Fixed
- 📝 Inline citations now render correctly within markdown lists and nested elements instead of displaying as "undefined" values. [#19452](https://github.com/open-webui/open-webui/issues/19452)
- 👥 Group member selection now works correctly without randomly selecting other users or causing the user list to jump around. [#19426](https://github.com/open-webui/open-webui/issues/19426)
- 👥 Admin panel user list now displays the correct total user count and properly paginates 30 items per page after fixing database query issues with group member joins. [#19429](https://github.com/open-webui/open-webui/issues/19429)
- 🔍 Knowledge base reindexing now works correctly after resolving async execution chain issues by implementing threadpool workers for embedding operations. [#19434](https://github.com/open-webui/open-webui/pull/19434)
- 🖼️ OpenAI image generation now works correctly after fixing a connection adapter error caused by incorrect URL formatting. [#19435](https://github.com/open-webui/open-webui/pull/19435)
### Changed
- 🔧 BREAKING: Docling configuration has been consolidated from individual environment variables into a single "DOCLING_PARAMS" JSON configuration and now supports API key authentication via "DOCLING_API_KEY", requiring users to migrate existing Docling settings to the new format. [#16841](https://github.com/open-webui/open-webui/issues/16841), [#19427](https://github.com/open-webui/open-webui/pull/19427)
- 🔧 The environment variable "REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE" has been renamed to "ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION" for naming consistency.
## [0.6.38] - 2025-11-24
### Fixed
- 🔍 Hybrid search now works reliably after recent changes.
- 🛠️ Tool server saving now handles errors gracefully, preventing failed saves from impacting the UI.
- 🔐 SSO/OIDC code was fixed to improve login reliability and better handle edge cases.
## [0.6.37] - 2025-11-24
### Added
- 🔐 Granular sharing permissions are now available with two-tiered control separating group sharing from public sharing, allowing administrators to independently configure whether users can share workspace items with groups or make them publicly accessible, with separate permission toggles for models, knowledge bases, prompts, tools, and notes, configurable via "USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_SHARING", "USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING", and corresponding environment variables for other workspace item types, while groups can now be configured to opt out of sharing via the "Allow Group Sharing" setting. [Commit](https://github.com/open-webui/open-webui/commit/7be750bcbb40da91912a0a66b7ab791effdcc3b6), [Commit](https://github.com/open-webui/open-webui/commit/f69e37a8507d6d57382d6670641b367f3127f90a)
- 🔐 Password policy enforcement is now available with configurable validation rules, allowing administrators to require specific password complexity requirements via "ENABLE_PASSWORD_VALIDATION" and "PASSWORD_VALIDATION_REGEX_PATTERN" environment variables, with default pattern requiring minimum 8 characters including uppercase, lowercase, digit, and special character. [#17794](https://github.com/open-webui/open-webui/pull/17794)
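
  A sketch of what such a policy check looks like; the regex below is an assumed rendering of the described default (8+ characters with uppercase, lowercase, digit, and special character), not necessarily the shipped pattern.

  ```python
  import re

  ASSUMED_DEFAULT_PATTERN = r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z\d]).{8,}$"

  def is_valid_password(password: str, pattern: str = ASSUMED_DEFAULT_PATTERN) -> bool:
      return re.fullmatch(pattern, password) is not None

  assert is_valid_password("Str0ng!pass")       # meets all four requirements
  assert not is_valid_password("weakpassword")  # no uppercase, digit, or special char
  ```
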
- 🔐 Granular import and export permissions are now available for workspace items, introducing six separate permission toggles for models, prompts, and tools that are disabled by default for enhanced security. [#19242](https://github.com/open-webui/open-webui/pull/19242)
- 👥 Default group assignment is now available for new users, allowing administrators to automatically assign newly registered users to a specified group for streamlined access control to models, prompts, and tools, particularly useful for organizations with group-based model access policies. [#19325](https://github.com/open-webui/open-webui/pull/19325), [#17842](https://github.com/open-webui/open-webui/issues/17842)
- 🔒 Password-based authentication can now be fully disabled via "ENABLE_PASSWORD_AUTH" environment variable, enforcing SSO-only authentication and preventing password login fallback when SSO is configured. [#19113](https://github.com/open-webui/open-webui/pull/19113)
- 🖼️ Large stream chunk handling was implemented to support models that generate images directly in their output responses, with configurable buffer size via "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE" environment variable, resolving compatibility issues with models like Gemini 2.5 Flash Image. [#18884](https://github.com/open-webui/open-webui/pull/18884), [#17626](https://github.com/open-webui/open-webui/issues/17626)
- 🖼️ Streaming response middleware now handles images in delta updates with automatic base64 conversion, enabling proper display of images from models using the "choices[0].delta.images.image_url" format such as Gemini 2.5 Flash Image Preview on OpenRouter. [#19073](https://github.com/open-webui/open-webui/pull/19073), [#19019](https://github.com/open-webui/open-webui/issues/19019)
- 📈 Model list API performance was optimized by pre-fetching user group memberships and removing profile image URLs from response payloads, significantly reducing both database queries and payload size for instances with large model lists, with profile images now served dynamically via dedicated endpoints. [#19097](https://github.com/open-webui/open-webui/pull/19097), [#18950](https://github.com/open-webui/open-webui/issues/18950)
- ⏩ Batch file processing performance was improved by reducing database queries by 67% while ensuring data consistency between vector and relational databases. [#18953](https://github.com/open-webui/open-webui/pull/18953)
- 🚀 Chat import performance was dramatically improved by replacing individual per-chat API requests with a bulk import endpoint, reducing import time by up to 95% for large chat collections and providing user feedback via toast notifications displaying the number of successfully imported chats. [#17861](https://github.com/open-webui/open-webui/pull/17861)
- ⚡ Socket event broadcasting performance was optimized by implementing user-specific rooms, significantly reducing server overhead particularly for users with multiple concurrent sessions. [#18996](https://github.com/open-webui/open-webui/pull/18996)
- 🗄️ Weaviate is now supported as a vector database option, providing an additional choice for RAG document storage alongside existing ChromaDB, Milvus, Qdrant, and OpenSearch integrations. [#14747](https://github.com/open-webui/open-webui/pull/14747)
- 🗄️ PostgreSQL pgvector now supports HNSW index types and large dimensional embeddings exceeding 2000 dimensions through automatic halfvec type selection, with configurable index methods via "PGVECTOR_INDEX_METHOD", "PGVECTOR_HNSW_M", "PGVECTOR_HNSW_EF_CONSTRUCTION", and "PGVECTOR_IVFFLAT_LISTS" environment variables. [#19158](https://github.com/open-webui/open-webui/pull/19158), [#16890](https://github.com/open-webui/open-webui/issues/16890)
- 🔍 Azure AI Search is now supported as a web search provider, enabling integration with Azure's cognitive search services via "AZURE_AI_SEARCH_API_KEY", "AZURE_AI_SEARCH_ENDPOINT", and "AZURE_AI_SEARCH_INDEX_NAME" configuration. [#19104](https://github.com/open-webui/open-webui/pull/19104)
- ⚡ External embedding generation now processes API requests in parallel instead of sequential batches, reducing document processing time by 10-50x when using OpenAI, Azure OpenAI, or Ollama embedding providers, with large PDFs now processing in seconds instead of minutes. [#19296](https://github.com/open-webui/open-webui/pull/19296)
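
  The gain comes from issuing the batch requests concurrently instead of one after another, as in this simplified sketch (the `embed` coroutine stands in for a real provider call):

  ```python
  import asyncio

  async def embed(batch: list[str]) -> list[list[float]]:
      await asyncio.sleep(0.2)  # stand-in for one embedding API round trip
      return [[0.0, 0.0, 0.0] for _ in batch]

  async def embed_all(batches: list[list[str]]) -> list[list[float]]:
      # All batches are in flight at once, so total latency approaches the
      # slowest single request rather than the sum of all requests.
      results = await asyncio.gather(*(embed(batch) for batch in batches))
      return [vector for batch_vectors in results for vector in batch_vectors]

  vectors = asyncio.run(embed_all([["chunk 1", "chunk 2"], ["chunk 3"]]))
  ```
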
- 💨 Base64 image conversion is now available for markdown content in chat responses, automatically uploading embedded images exceeding 1KB and replacing them with file URLs to reduce payload size and resource consumption, configurable via "REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE" environment variable. [#19076](https://github.com/open-webui/open-webui/pull/19076)
- 🎨 OpenAI image generation now supports additional API parameters including quality settings for GPT Image 1, configurable via "IMAGES_OPENAI_API_PARAMS" environment variable or through the admin interface, enabling cost-effective image generation with low, medium, or high quality options. [#19228](https://github.com/open-webui/open-webui/issues/19228)
- 🖼️ Image editing can now be independently enabled or disabled via admin settings, allowing administrators to control whether sequential image prompts trigger image editing or new image generation, configurable via "ENABLE_IMAGE_EDIT" environment variable. [#19284](https://github.com/open-webui/open-webui/issues/19284)
- 🔐 SSRF protection was implemented with a configurable URL blocklist that prevents access to cloud metadata endpoints and private networks, with default protections for AWS, Google Cloud, Azure, and Alibaba Cloud metadata services, customizable via "WEB_FETCH_FILTER_LIST" environment variable. [#19201](https://github.com/open-webui/open-webui/pull/19201)
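
  Conceptually, the guard rejects URLs whose host, or any address it resolves to, is on the blocklist or in a private range, as in this sketch (the entries shown are illustrative; the real defaults come from "WEB_FETCH_FILTER_LIST"):

  ```python
  import ipaddress
  import socket
  from urllib.parse import urlparse

  BLOCKLIST = {"169.254.169.254", "metadata.google.internal"}  # illustrative entries

  def is_fetch_allowed(url: str) -> bool:
      host = urlparse(url).hostname or ""
      if host in BLOCKLIST:
          return False
      try:
          addresses = {info[4][0] for info in socket.getaddrinfo(host, None)}
      except socket.gaierror:
          return False  # unresolvable hosts are refused outright
      return not any(
          addr in BLOCKLIST
          or ipaddress.ip_address(addr).is_private
          or ipaddress.ip_address(addr).is_link_local
          for addr in addresses
      )
  ```
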
- ⚡ Workspace models page now supports server-side pagination, dramatically improving load times and usability for instances with large numbers of workspace models.
- 🔍 Hybrid search now indexes file metadata including filenames, titles, headings, sources, and snippets alongside document content, enabling keyword queries to surface documents where search terms appear only in metadata, configurable via "ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS" environment variable. [#19095](https://github.com/open-webui/open-webui/pull/19095)
- 📂 Knowledge base upload page now supports folder drag-and-drop with recursive directory handling, enabling batch uploads of entire directory structures instead of requiring individual file selection. [#19320](https://github.com/open-webui/open-webui/pull/19320)
- 🤖 Model cloning is now available in admin settings, allowing administrators to quickly create workspace models based on existing base models through a "Clone" option in the model dropdown menu. [#17937](https://github.com/open-webui/open-webui/pull/17937)
- 🎨 UI scale adjustment is now available in interface settings, allowing users to increase the size of the entire interface from 1.0x to 1.5x for improved accessibility and readability, particularly beneficial for users with visual impairments. [#19186](https://github.com/open-webui/open-webui/pull/19186)
- 📌 Default pinned models can now be configured by administrators for all new users, mirroring the behavior of default models where admin-configured defaults apply only to users who haven't customized their pinned models, configurable via "DEFAULT_PINNED_MODELS" environment variable. [#19273](https://github.com/open-webui/open-webui/pull/19273)
- 🎙️ Text-to-Speech and Speech-to-Text services now receive user information headers when "ENABLE_FORWARD_USER_INFO_HEADERS" is enabled, allowing external TTS and STT providers to implement user-specific personalization, rate limiting, and usage tracking. [#19323](https://github.com/open-webui/open-webui/pull/19323), [#19312](https://github.com/open-webui/open-webui/issues/19312)
- 🎙️ Voice mode now supports custom system prompts via "VOICE_MODE_PROMPT_TEMPLATE" configuration, allowing administrators to control response style and behavior for voice interactions. [#18607](https://github.com/open-webui/open-webui/pull/18607)
- 🔧 WebSocket and Redis configuration options are now available including debug logging controls, custom ping timeout and interval settings, and arbitrary Redis connection options via "WEBSOCKET_SERVER_LOGGING", "WEBSOCKET_SERVER_ENGINEIO_LOGGING", "WEBSOCKET_SERVER_PING_TIMEOUT", "WEBSOCKET_SERVER_PING_INTERVAL", and "WEBSOCKET_REDIS_OPTIONS" environment variables. [#19091](https://github.com/open-webui/open-webui/pull/19091)
- 🔧 MCP OAuth dynamic client registration now automatically detects and uses the appropriate token endpoint authentication method from server-supported options, enabling compatibility with OAuth servers that only support "client_secret_basic" instead of "client_secret_post". [#19193](https://github.com/open-webui/open-webui/issues/19193)
- 🔧 Custom headers can now be configured for remote MCP and OpenAPI tool server connections, enabling integration with services that require additional authentication headers. [#18918](https://github.com/open-webui/open-webui/issues/18918)
- 🔍 Perplexity Search now supports custom API endpoints via "PERPLEXITY_SEARCH_API_URL" configuration and automatically forwards user information headers to enable personalized search experiences. [#19147](https://github.com/open-webui/open-webui/pull/19147)
- 🔍 User information headers can now be optionally forwarded to external web search engines when "ENABLE_FORWARD_USER_INFO_HEADERS" is enabled. [#19043](https://github.com/open-webui/open-webui/pull/19043)
- 📊 Daily active user metric is now available for monitoring, tracking unique users active since midnight UTC via the "webui.users.active.today" Prometheus gauge. [#19236](https://github.com/open-webui/open-webui/pull/19236), [#19234](https://github.com/open-webui/open-webui/issues/19234)
- 📊 Audit log file path is now configurable via "AUDIT_LOGS_FILE_PATH" environment variable, enabling storage in separate volumes or custom locations. [#19173](https://github.com/open-webui/open-webui/pull/19173)
- 🎨 Sidebar collapse states for model lists and group information are now persistent across page refreshes, remembering user preferences through browser-based storage. [#19159](https://github.com/open-webui/open-webui/issues/19159)
- 🎨 Background image display was enhanced with semi-transparent overlays for navbar and sidebar, creating a seamless and visually cohesive design across the entire interface. [#19157](https://github.com/open-webui/open-webui/issues/19157)
- 📋 Tables in chat messages now include a copy button that appears on hover, enabling quick copying of table content alongside the existing CSV export functionality. [#19162](https://github.com/open-webui/open-webui/issues/19162)
- 📝 Notes can now be created directly via the "/notes/new" URL endpoint with optional title and content query parameters, enabling faster note creation through bookmarks and shortcuts. [#19195](https://github.com/open-webui/open-webui/issues/19195)
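
  For example, a bookmarkable URL can be built by URL-encoding the query parameters (the instance URL below is a placeholder):

  ```python
  from urllib.parse import urlencode

  base = "http://localhost:3000/notes/new"  # adjust to your instance
  params = {"title": "Meeting notes", "content": "Agenda:\n- item 1\n- item 2"}
  print(f"{base}?{urlencode(params)}")
  ```
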
- 🏷️ Tag suggestions are now context-aware, displaying only relevant tags when creating or editing models versus chat conversations, preventing confusion between model and chat tags. [#19135](https://github.com/open-webui/open-webui/issues/19135)
- ✍️ Prompt autocompletion is now available independently of the rich text input setting, improving accessibility to the feature. [#19150](https://github.com/open-webui/open-webui/issues/19150)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Simplified Chinese, Traditional Chinese, Portuguese (Brazil), Catalan, Spanish (Spain), Finnish, Irish, Farsi, Swedish, Danish, German, Korean, and Thai were improved and expanded.
### Fixed
- 🤖 Model update functionality now works correctly, resolving a database parameter binding error that prevented saving changes to model configurations via the Save & Update button. [#19335](https://github.com/open-webui/open-webui/issues/19335)
- 🖼️ Multiple input images for image editing and generation are now correctly passed as an array using the "image[]" parameter syntax, enabling proper multi-image reference functionality with models like GPT Image 1. [#19339](https://github.com/open-webui/open-webui/issues/19339)
- 📱 PWA installations on iOS now properly refresh after server container restarts, resolving freezing issues by automatically unregistering service workers when version or deployment changes are detected. [#19316](https://github.com/open-webui/open-webui/pull/19316)
- 🗄️ S3 Vectors collection detection now correctly handles buckets with more than 2000 indexes by using direct index lookup instead of paginated list scanning, improving performance by approximately 8x and enabling RAG queries to work reliably at scale. [#19238](https://github.com/open-webui/open-webui/pull/19238), [#19233](https://github.com/open-webui/open-webui/issues/19233)
- 📈 Feedback retrieval performance was optimized by eliminating N+1 query patterns through database joins, adding server-side pagination and sorting, significantly reducing database load for instances with large feedback datasets. [#17976](https://github.com/open-webui/open-webui/pull/17976)
- 🔍 Chat search now works correctly with PostgreSQL when chat data contains null bytes, with comprehensive sanitization preventing null bytes during data writes, cleaning existing data on read, and stripping null bytes during search queries to ensure reliable search functionality. [#15616](https://github.com/open-webui/open-webui/issues/15616)
- 🔍 Hybrid search with reranking now correctly handles attribute validation, preventing errors when collection results lack expected structure. [#19025](https://github.com/open-webui/open-webui/pull/19025), [#17046](https://github.com/open-webui/open-webui/issues/17046)
- 🔎 Reranking functionality now works correctly after recent refactoring, resolving crashes caused by incorrect function argument handling. [#19270](https://github.com/open-webui/open-webui/pull/19270)
- 🤖 Azure OpenAI models now support the "reasoning_effort" parameter, enabling proper configuration of reasoning capabilities for models like GPT-5.1 which default to no reasoning without this setting. [#19290](https://github.com/open-webui/open-webui/issues/19290)
- 🤖 Models with very long IDs can now be deleted correctly, resolving URL length limitations that previously prevented management operations on such models. [#18230](https://github.com/open-webui/open-webui/pull/18230)
- 🤖 Model-level streaming settings now correctly apply to API requests, ensuring "Stream Chat Response" toggle properly controls the streaming parameter. [#19154](https://github.com/open-webui/open-webui/issues/19154)
- 🖼️ Image editing configuration now correctly preserves independent OpenAI API endpoints and keys, preventing them from being overwritten by image generation settings. [#19003](https://github.com/open-webui/open-webui/issues/19003)
- 🎨 Gemini image edit settings now display correctly in the admin panel, fixing an incorrect configuration key reference that prevented proper rendering of edit options. [#19200](https://github.com/open-webui/open-webui/pull/19200)
- 🖌️ Image generation settings menu now loads correctly, resolving validation errors with AUTOMATIC1111 API authentication parameters. [#19187](https://github.com/open-webui/open-webui/issues/19187), [#19246](https://github.com/open-webui/open-webui/issues/19246)
- 📅 Date formatting in chat search and admin user chat search now correctly respects the "DEFAULT_LOCALE" environment variable, displaying dates according to the configured locale instead of always using MM/DD/YYYY format. [#19305](https://github.com/open-webui/open-webui/pull/19305), [#19020](https://github.com/open-webui/open-webui/issues/19020)
- 📝 RAG template query placeholder escaping logic was corrected to prevent unintended replacements of context values when query placeholders appear in retrieved content. [#19102](https://github.com/open-webui/open-webui/pull/19102), [#19101](https://github.com/open-webui/open-webui/issues/19101)
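
  One way to make such substitution safe is a single-pass replacement in which inserted values are never rescanned; the placeholder names below are illustrative, not necessarily the template's real ones:

  ```python
  import re

  def render_template(template: str, values: dict[str, str]) -> str:
      # Every placeholder is substituted in one pass over the original string,
      # so a literal "[query]" inside retrieved context is never replaced again.
      pattern = re.compile("|".join(re.escape(key) for key in values))
      return pattern.sub(lambda match: values[match.group(0)], template)

  print(render_template(
      "### Context:\n[context]\n\n### Query:\n[query]",
      {"[context]": "this document even contains [query] literally", "[query]": "hello"},
  ))
  ```
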
- 📄 RAG template prompt duplication was eliminated by removing redundant user query section from the default template. [#19099](https://github.com/open-webui/open-webui/pull/19099), [#19098](https://github.com/open-webui/open-webui/issues/19098)
- 📋 MinerU local mode configuration no longer incorrectly requires an API key, allowing proper use of local content extraction without external API credentials. [#19258](https://github.com/open-webui/open-webui/issues/19258)
- 📊 Excel file uploads now work correctly with the addition of the missing msoffcrypto-tool dependency, resolving import errors introduced by the unstructured package upgrade. [#19153](https://github.com/open-webui/open-webui/issues/19153)
- 📑 Docling parameters now properly handle JSON serialization, preventing exceptions and ensuring configuration changes are saved correctly. [#19072](https://github.com/open-webui/open-webui/pull/19072)
- 🛠️ UserValves configuration now correctly isolates settings per tool, preventing configuration contamination when multiple tools with UserValves are used simultaneously. [#19185](https://github.com/open-webui/open-webui/pull/19185), [#15569](https://github.com/open-webui/open-webui/issues/15569)
- 🔧 Tool selection prompt now correctly handles user messages without duplication, removing redundant query prefixes and improving prompt clarity. [#19122](https://github.com/open-webui/open-webui/pull/19122), [#19121](https://github.com/open-webui/open-webui/issues/19121)
- 📝 Notes chat feature now correctly submits messages to the completions endpoint, resolving errors that prevented AI model interactions. [#19079](https://github.com/open-webui/open-webui/pull/19079)
- 📝 Note PDF downloads now sanitize HTML content using DOMPurify before rendering, preventing potential DOM-based XSS attacks from malicious content in notes. [Commit](https://github.com/open-webui/open-webui/commit/03cc6ce8eb5c055115406e2304fbf7e3338b8dce)
- 📁 Archived chats now have their folder associations automatically removed to prevent unintended deletion when their previous folder is deleted. [#14578](https://github.com/open-webui/open-webui/issues/14578)
- 🔐 ElevenLabs API key is now properly obfuscated in the admin settings page, preventing plain text exposure of sensitive credentials. [#19262](https://github.com/open-webui/open-webui/pull/19262), [#19260](https://github.com/open-webui/open-webui/issues/19260)
- 🔧 MCP OAuth server metadata discovery now follows the correct specification order, ensuring proper authentication flow compliance. [#19244](https://github.com/open-webui/open-webui/pull/19244)
- 🔒 API key endpoint restrictions now properly enforce access controls for all endpoints including SCIM, preventing unintended access when "API_KEY_ALLOWED_ENDPOINTS" is configured. [#19168](https://github.com/open-webui/open-webui/issues/19168)
- 🔓 OAuth role claim parsing now supports both flat and nested claim structures, enabling compatibility with OAuth providers that deliver claims as direct properties on the user object rather than nested structures. [#19286](https://github.com/open-webui/open-webui/pull/19286)
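
  A sketch of claim lookup that accepts both shapes (a flat key that literally contains dots, or a nested object path):

  ```python
  def get_claim(user_data: dict, claim_path: str):
      if claim_path in user_data:  # flat: {"realm_access.roles": [...]}
          return user_data[claim_path]
      value = user_data            # nested: {"realm_access": {"roles": [...]}}
      for part in claim_path.split("."):
          if not isinstance(value, dict) or part not in value:
              return None
          value = value[part]
      return value

  print(get_claim({"realm_access": {"roles": ["admin"]}}, "realm_access.roles"))
  print(get_claim({"realm_access.roles": ["admin"]}, "realm_access.roles"))
  ```
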
- 🔑 OAuth MCP server verification now correctly extracts the access token value for authorization headers instead of sending the entire token dictionary. [#19149](https://github.com/open-webui/open-webui/pull/19149), [#19148](https://github.com/open-webui/open-webui/issues/19148)
- ⚙️ OAuth dynamic client registration now correctly converts empty strings to None for optional fields, preventing validation failures in MCP package integration. [#19144](https://github.com/open-webui/open-webui/pull/19144), [#19129](https://github.com/open-webui/open-webui/issues/19129)
- 🔐 OIDC authentication now correctly passes client credentials in access token requests, ensuring compatibility with providers that require these parameters per RFC 6749. [#19132](https://github.com/open-webui/open-webui/pull/19132), [#19131](https://github.com/open-webui/open-webui/issues/19131)
- 🔗 OAuth client creation now respects configured token endpoint authentication methods instead of defaulting to basic authentication, preventing failures with servers that don't support basic auth. [#19165](https://github.com/open-webui/open-webui/pull/19165)
- 📋 Text copied from chat responses in Chrome now pastes without background formatting, improving readability when pasting into word processors. [#19083](https://github.com/open-webui/open-webui/issues/19083)
### Changed
- 🗄️ Group membership data storage was refactored from JSON arrays to a dedicated relational database table, significantly improving query performance and scalability for instances with large numbers of users and groups, while API responses now return member counts instead of full user ID arrays. [#19239](https://github.com/open-webui/open-webui/pull/19239)
- 📄 MinerU parameter handling was refactored to pass parameters directly to the API, improving flexibility and fixing VLM backend configuration. [#19105](https://github.com/open-webui/open-webui/pull/19105), [#18446](https://github.com/open-webui/open-webui/discussions/18446)
- 🔐 API key creation is now controlled by granular user and group permissions, with the "ENABLE_API_KEY" environment variable renamed to "ENABLE_API_KEYS" and disabled by default, requiring explicit configuration at both the global and user permission levels, while related environment variables "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS" and "API_KEY_ALLOWED_ENDPOINTS" were renamed to "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS" and "API_KEYS_ALLOWED_ENDPOINTS" respectively. [#18336](https://github.com/open-webui/open-webui/pull/18336)
## [0.6.36] - 2025-11-07
### Added
- 🔐 OAuth group parsing now supports configurable separators via the "OAUTH_GROUPS_SEPARATOR" environment variable, enabling proper handling of semicolon-separated group claims from providers like CILogon. [#18987](https://github.com/open-webui/open-webui/pull/18987), [#18979](https://github.com/open-webui/open-webui/issues/18979)
### Fixed
- 🛠️ Tool calling functionality is restored by correcting asynchronous function handling in tool parameter updates. [#18981](https://github.com/open-webui/open-webui/issues/18981)
- 🖼️ The ComfyUI image edit workflow editor modal now opens correctly when clicking the Edit button. [#18978](https://github.com/open-webui/open-webui/issues/18978)
- 🔥 Firecrawl import errors are resolved by implementing lazy loading and using the correct class name. [#18973](https://github.com/open-webui/open-webui/issues/18973)
- 🔌 Socket.IO CORS warning is resolved by properly configuring CORS origins for Socket.IO connections. [Commit](https://github.com/open-webui/open-webui/commit/639d26252e528c9c37a5f553b11eb94376d8792d)
## [0.6.35] - 2025-11-06
### Added
- 🖼️ The image generation system received a comprehensive overhaul with major new capabilities: full image editing support that lets users modify existing images with text prompts using the OpenAI, Gemini, or ComfyUI engines; Gemini 2.5 Flash Image (Nano Banana) support; Qwen Image Edit integration; a fix for base64-encoded image display issues; streamlined AUTOMATIC1111 configuration that consolidates parameters into a flexible JSON parameters field; and an enhanced UI with a code editor modal for ComfyUI workflow management. [#17434](https://github.com/open-webui/open-webui/pull/17434), [#16976](https://github.com/open-webui/open-webui/issues/16976), [Commit](https://github.com/open-webui/open-webui/commit/8e5690aab4f632a57027e2acf880b8f89a8717c0), [Commit](https://github.com/open-webui/open-webui/commit/72f8539fd2e679fec0762945f22f4b8a6920afa0), [Commit](https://github.com/open-webui/open-webui/commit/8d34fcb586eeee1fac6da2f991518b8a68b00b72), [Commit](https://github.com/open-webui/open-webui/commit/72900cd686de1fa6be84b5a8a2fc857cff7b91b8)
- 🔒 CORS origin validation was added to WebSocket connections as a defense-in-depth security measure against cross-site WebSocket hijacking attacks. [#18411](https://github.com/open-webui/open-webui/pull/18411), [#18410](https://github.com/open-webui/open-webui/issues/18410)
- 🔄 Automatic page refresh now occurs when a version update is detected via WebSocket connection, ensuring users always run the latest version without cache issues. [Commit](https://github.com/open-webui/open-webui/commit/989f192c92d2fe55daa31336e7971e21798b96ae)
- 🐍 Experimental initial preparations were made for Python 3.13 compatibility by updating dependencies with security enhancements and cryptographic improvements. [#18430](https://github.com/open-webui/open-webui/pull/18430), [#18424](https://github.com/open-webui/open-webui/pull/18424)
- ⚡ Image compression now preserves the original image format instead of converting to PNG, significantly reducing file sizes and improving chat loading performance. [#18506](https://github.com/open-webui/open-webui/pull/18506)
- 🎤 Mistral Voxtral model support was added for speech-to-text, including the voxtral-small and voxtral-mini models with both transcription and chat completion API support. [#18934](https://github.com/open-webui/open-webui/pull/18934)
- 🔊 Text-to-speech now uses a global audio queue system to prevent overlapping playback, ensuring only one TTS instance plays at a time with proper stop/start controls and automatic cleanup when switching between messages. [#16152](https://github.com/open-webui/open-webui/pull/16152), [#18744](https://github.com/open-webui/open-webui/pull/18744), [#16150](https://github.com/open-webui/open-webui/issues/16150)
- 🔊 ELEVENLABS_API_BASE_URL environment variable now allows configuration of custom ElevenLabs API endpoints, enabling support for EU residency API requirements. [#18402](https://github.com/open-webui/open-webui/issues/18402)
- 🔐 OAUTH_ROLES_SEPARATOR environment variable now allows custom role separators for OAuth roles that contain commas, useful for roles specified in LDAP syntax. [#18572](https://github.com/open-webui/open-webui/pull/18572)
- 📄 External document loaders can now optionally forward user information headers when ENABLE_FORWARD_USER_INFO_HEADERS is enabled, enabling cost tracking, audit logs, and usage analytics for external services. [#18731](https://github.com/open-webui/open-webui/pull/18731)
- 📄 MISTRAL_OCR_API_BASE_URL environment variable now allows configuration of custom Mistral OCR API endpoints for flexible deployment options. [Commit](https://github.com/open-webui/open-webui/commit/415b93c7c35c2e2db4425e6da1b88b3750f496b0)
- ⌨️ Keyboard shortcut hints are now displayed on sidebar buttons with a refactored shortcuts modal that accurately reflects all available hotkeys across different keyboard layouts. [#18473](https://github.com/open-webui/open-webui/pull/18473)
- 🛠️ Tooltips now display tool descriptions when hovering over tool names on the model edit page, improving usability and providing immediate context. [#18707](https://github.com/open-webui/open-webui/pull/18707)
- 📝 "Create a new note" from the search modal now immediately creates a new private note and opens it in the editor instead of navigating to the generic notes page. [#18255](https://github.com/open-webui/open-webui/pull/18255)
- 🖨️ Code block output now preserves whitespace formatting with monospace font to accurately reflect terminal behavior. [#18352](https://github.com/open-webui/open-webui/pull/18352)
- ✏️ Edit button is now available in the three-dot menu of models in the workspace section for quick access to model editing, with the menu reorganized for better user experience and Edit, Clone, Copy Link, and Share options logically grouped. [#18574](https://github.com/open-webui/open-webui/pull/18574)
- 📌 Sidebar models section is now collapsible, allowing users to expand and collapse the pinned models list for better sidebar organization. [Commit](https://github.com/open-webui/open-webui/commit/82c08a3b5d189f81c96b6548cc872198771015b0)
- 🌙 Dark mode styles for select elements were added using Tailwind CSS classes, improving consistency across the interface. [#18636](https://github.com/open-webui/open-webui/pull/18636)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Greek, German, Traditional Chinese, Simplified Chinese, Spanish, Georgian, Danish, and Estonian were enhanced and expanded.
### Fixed
- 🔒 Server-Sent Event (SSE) code injection vulnerability in Direct Connections is resolved by blocking event emission from untrusted external model servers; event emitters from direct connected model servers are no longer supported, preventing arbitrary JavaScript execution in user browsers. [Commit](https://github.com/open-webui/open-webui/commit/8af6a4cf21b756a66cd58378a01c60f74c39b7ca)
- 🛡️ DOM XSS vulnerability in "Insert Prompt as Rich Text" is resolved by sanitizing HTML content with DOMPurify before rendering. [Commit](https://github.com/open-webui/open-webui/commit/eb9c4c0e358c274aea35f21c2856c0a20051e5f1)
- ⚙️ MCP server cancellation scope corruption is prevented by reversing disconnection order to follow LIFO and properly handling exceptions, resolving 100% CPU usage when resuming chats with expired tokens or using multiple streamable MCP servers. [#18537](https://github.com/open-webui/open-webui/pull/18537)
- 🔧 UI freeze when querying models with knowledge bases containing inconsistent distance metrics is resolved by properly initializing the distances array in citations. [#18585](https://github.com/open-webui/open-webui/pull/18585)
- 🤖 Duplicate model IDs from multiple OpenAI endpoints are now automatically deduplicated server-side, preventing frontend crashes for users with unified gateway proxies that aggregate multiple providers. [Commit](https://github.com/open-webui/open-webui/commit/fdf7ca11d4f3cc8fe63e81c98dc0d1e48e52ba36)
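
  The server-side deduplication amounts to keeping the first occurrence of each model ID, roughly like this sketch:

  ```python
  def dedupe_models(models: list[dict]) -> list[dict]:
      """Keep only the first model seen for each ID across all endpoints."""
      seen: set[str] = set()
      deduped = []
      for model in models:
          if model["id"] not in seen:
              seen.add(model["id"])
              deduped.append(model)
      return deduped


  models = [{"id": "gpt-4o"}, {"id": "gpt-4o"}, {"id": "mistral-large"}]
  assert [m["id"] for m in dedupe_models(models)] == ["gpt-4o", "mistral-large"]
  ```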
- 🔐 Login failures with passwords longer than 72 bytes are resolved by safely truncating oversized passwords for bcrypt compatibility. [#18157](https://github.com/open-webui/open-webui/issues/18157)
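
  The fix boils down to truncating the password at the byte level before both hashing and verification, as in this sketch using the `bcrypt` package:

  ```python
  import bcrypt

  BCRYPT_MAX_BYTES = 72  # bcrypt only considers the first 72 bytes of input


  def hash_password(password: str) -> bytes:
      # Truncation may split a multi-byte UTF-8 character, but hashing and
      # verification truncate identically, so comparisons stay consistent.
      return bcrypt.hashpw(password.encode("utf-8")[:BCRYPT_MAX_BYTES], bcrypt.gensalt())


  def verify_password(password: str, hashed: bytes) -> bool:
      return bcrypt.checkpw(password.encode("utf-8")[:BCRYPT_MAX_BYTES], hashed)
  ```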
- 🔐 OAuth 2.1 MCP tool connections now automatically re-register clients when stored client IDs become stale, preventing unauthorized_client errors after editing tool endpoints and providing detailed error messages for callback failures. [#18415](https://github.com/open-webui/open-webui/pull/18415), [#18309](https://github.com/open-webui/open-webui/issues/18309)
- 🔓 OAuth 2.1 discovery, metadata fetching, and dynamic client registration now correctly use HTTP proxy environment variables when trust_env is enabled. [Commit](https://github.com/open-webui/open-webui/commit/bafeb76c411483bd6b135f0edbcdce048120f264)
- 🔌 MCP server connection failures now display clear error messages in the chat interface instead of silently failing. [#18892](https://github.com/open-webui/open-webui/pull/18892), [#18889](https://github.com/open-webui/open-webui/issues/18889)
- 💬 Chat titles are now properly generated even when title auto-generation is disabled in interface settings, fixing an issue where chats would remain labeled as "New chat". [#18761](https://github.com/open-webui/open-webui/pull/18761), [#18717](https://github.com/open-webui/open-webui/issues/18717), [#6478](https://github.com/open-webui/open-webui/issues/6478)
- 🔍 Chat query errors are prevented by properly validating and handling the "order_by" parameter to ensure requested columns exist. [#18400](https://github.com/open-webui/open-webui/pull/18400), [#18452](https://github.com/open-webui/open-webui/pull/18452)
- 🔧 Root-level max_tokens parameter is no longer dropped when proxying to Ollama, properly converting to num_predict to limit output token length as intended. [#18618](https://github.com/open-webui/open-webui/issues/18618)
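
  Conceptually, the proxy now maps the OpenAI-style parameter onto Ollama's options, roughly like this sketch (not the exact converter used in the codebase):

  ```python
  def to_ollama_payload(payload: dict) -> dict:
      """Map a root-level OpenAI-style max_tokens onto Ollama's num_predict."""
      options = dict(payload.get("options") or {})
      max_tokens = payload.pop("max_tokens", None)
      if max_tokens is not None and "num_predict" not in options:
          options["num_predict"] = max_tokens
      if options:
          payload["options"] = options
      return payload


  body = to_ollama_payload({"model": "llama3.2", "max_tokens": 256})
  assert body["options"]["num_predict"] == 256
  ```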
- 🔑 Self-hosted Marker instances can now be used without requiring an API key, while keeping it optional for datalab Marker service users. [#18617](https://github.com/open-webui/open-webui/issues/18617)
- 🔧 OpenAPI specification endpoint conflict between "/api/v1/models" and "/api/v1/models/" is resolved by changing the models router endpoint to "/list", preventing duplicate operationId errors when generating TypeScript API clients. [#18758](https://github.com/open-webui/open-webui/issues/18758)
- 🏷️ Model tags are now de-duplicated case-insensitively in both the model selector and workspace models page, preventing duplicate entries with different capitalization from appearing in filter dropdowns. [#18716](https://github.com/open-webui/open-webui/pull/18716), [#18711](https://github.com/open-webui/open-webui/issues/18711)
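
  Case-insensitive deduplication keeps the first-seen spelling of each tag, along these lines:

  ```python
  def dedupe_tags(tags: list[str]) -> list[str]:
      """Collapse tags that differ only in case, keeping the first spelling."""
      seen: set[str] = set()
      result = []
      for tag in tags:
          key = tag.casefold()
          if key not in seen:
              seen.add(key)
              result.append(tag)
      return result


  assert dedupe_tags(["Coding", "coding", "Vision", "CODING"]) == ["Coding", "Vision"]
  ```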
- 📄 Docling RAG parameter configuration is now correctly saved in the admin UI by fixing the typo in the "DOCLING_PARAMS" parameter name. [#18390](https://github.com/open-webui/open-webui/pull/18390)
- 📃 Tika document processing now automatically detects content types instead of relying on potentially incorrect browser-provided mime-types, improving file handling accuracy for formats like RTF. [#18765](https://github.com/open-webui/open-webui/pull/18765), [#18683](https://github.com/open-webui/open-webui/issues/18683)
- 🖼️ Image and video uploads to knowledge bases now display proper error messages instead of showing an infinite spinner when the content extraction engine does not support these file types. [#18514](https://github.com/open-webui/open-webui/issues/18514)
- 📝 Notes PDF export now properly detects and applies dark mode styling consistently across both the notes list and individual note pages, with a shared utility function to eliminate code duplication. [#18526](https://github.com/open-webui/open-webui/issues/18526)
- 💭 Details tags for reasoning content are now correctly identified and rendered even when the same tag is present in user messages. [#18840](https://github.com/open-webui/open-webui/pull/18840), [#18294](https://github.com/open-webui/open-webui/issues/18294)
- 📊 Mermaid and Vega rendering errors now display inline with the code instead of showing repetitive toast notifications, improving user experience when models generate invalid diagram syntax. [Commit](https://github.com/open-webui/open-webui/commit/fdc0f04a8b7dd0bc9f9dc0e7e30854f7a0eea3e9)
- 📈 Mermaid diagram rendering errors no longer cause UI unavailability or display error messages below the input box. [#18493](https://github.com/open-webui/open-webui/pull/18493), [#18340](https://github.com/open-webui/open-webui/issues/18340)
- 🔗 Web search SSL verification is now asynchronous, preventing the interface from hanging during web search operations. [#18714](https://github.com/open-webui/open-webui/pull/18714), [#18699](https://github.com/open-webui/open-webui/issues/18699)
- 🌍 Web search results now correctly use HTTP proxy environment variables when WEB_SEARCH_TRUST_ENV is enabled. [#18667](https://github.com/open-webui/open-webui/pull/18667), [#7008](https://github.com/open-webui/open-webui/discussions/7008)
- 🔍 Google Programmable Search Engine now properly includes referer headers, enabling API keys with HTTP referrer restrictions configured in Google Cloud Console. [#18871](https://github.com/open-webui/open-webui/pull/18871), [#18870](https://github.com/open-webui/open-webui/issues/18870)
- ⚡ YouTube video transcript fetching now works correctly when using a proxy connection. [#18419](https://github.com/open-webui/open-webui/pull/18419)
- 🎙️ Speech-to-text transcription no longer deletes or replaces existing text in the prompt input field, properly preserving any previously entered content. [#18540](https://github.com/open-webui/open-webui/issues/18540)
- 🎙️ The "Instant Auto-Send After Voice Transcription" setting now functions correctly and automatically sends transcribed text when enabled. [#18466](https://github.com/open-webui/open-webui/issues/18466)
- ⚙️ Chat settings now load properly when reopening a tab or starting a new session by initializing defaults when sessionStorage is empty. [#18438](https://github.com/open-webui/open-webui/pull/18438)
- 🔎 Folder tag search in the sidebar now correctly handles folder names with multiple spaces by replacing all spaces with underscores. [Commit](https://github.com/open-webui/open-webui/commit/a8fe979af68e47e4e4bb3eb76e48d93d60cd2a45)
- 🛠️ Functions page now updates immediately after deleting a function, removing the need for a manual page reload. [#18912](https://github.com/open-webui/open-webui/pull/18912), [#18908](https://github.com/open-webui/open-webui/issues/18908)
- 🛠️ Native tool calling now properly supports sequential tool calls with shared context, allowing tools to access images and data from previous tool executions in the same conversation. [#18664](https://github.com/open-webui/open-webui/pull/18664)
- 🎯 Globally enabled actions in the model editor now correctly apply as global instead of being treated as disabled. [#18577](https://github.com/open-webui/open-webui/pull/18577)
- 📋 Clipboard images pasted via the "{{CLIPBOARD}}" prompt variable are now correctly converted to base64 format before being sent to the backend, resolving base64 encoding errors. [#18432](https://github.com/open-webui/open-webui/pull/18432), [#18425](https://github.com/open-webui/open-webui/issues/18425)
- 📋 File list is now cleared when switching to models that do not support file uploads, preventing files from being sent to incompatible models. [#18496](https://github.com/open-webui/open-webui/pull/18496)
- 📂 Move menu no longer displays when folders are empty. [#18484](https://github.com/open-webui/open-webui/pull/18484)
- 📁 Folder and channel creation now validates that names are not empty, preventing creation of folders or channels with no name and showing an error toast if attempted. [#18564](https://github.com/open-webui/open-webui/pull/18564)
- 🖊️ Rich text input no longer removes text between equals signs when pasting code with comparison operators. [#18551](https://github.com/open-webui/open-webui/issues/18551)
- ⌨️ Keyboard shortcuts now display the correct keys for international and non-QWERTY keyboard layouts by detecting the user's layout using the Keyboard API. [#18533](https://github.com/open-webui/open-webui/pull/18533)
- 🌐 "Attach Webpage" button now displays with correct disabled styling when a model does not support file uploads. [#18483](https://github.com/open-webui/open-webui/pull/18483)
- 🎚️ Divider no longer displays in the integrations menu when no integrations are enabled. [#18487](https://github.com/open-webui/open-webui/pull/18487)
- 📱 Chat controls button is now properly hidden on mobile for users without admin or explicit chat control permissions. [#18641](https://github.com/open-webui/open-webui/pull/18641)
- 📍 User menu, download submenu, and move submenu are now repositioned to prevent overlap with the Chat Controls sidebar when it is open. [Commit](https://github.com/open-webui/open-webui/commit/414ab51cb6df1ab0d6c85ac6c1f2c5c9a5f8e2aa)
- 🎯 Artifacts button no longer appears in the chat menu when there are no artifacts to display. [Commit](https://github.com/open-webui/open-webui/commit/ed6449d35f84f68dc75ee5c6b3f4748a3fda0096)
- 🎨 Artifacts view now automatically displays when opening an existing conversation containing artifacts, improving user experience. [#18215](https://github.com/open-webui/open-webui/pull/18215)
- 🖌️ Formatting toolbar is no longer hidden under images or code blocks in chat and now displays correctly above all message content.
- 🎨 Layout shift near system instructions is prevented by properly rendering the chat component when system prompts are empty. [#18594](https://github.com/open-webui/open-webui/pull/18594)
- 📐 Modal layout shift caused by scrollbar appearance is prevented by adding a stable scrollbar gutter. [#18591](https://github.com/open-webui/open-webui/pull/18591)
- ✨ Spacing between icon and label in the user menu dropdown items is now consistent. [#18595](https://github.com/open-webui/open-webui/pull/18595)
- 💬 Duplicate prompt suggestions no longer cause the webpage to freeze or throw JavaScript errors by implementing proper key management with composite keys. [#18841](https://github.com/open-webui/open-webui/pull/18841), [#18566](https://github.com/open-webui/open-webui/issues/18566)
- 🔍 Chat preview loading in the search modal now works correctly for all search results by fixing an index boundary check that previously caused out-of-bounds errors. [#18911](https://github.com/open-webui/open-webui/pull/18911)
- ♿ Screen reader support was enhanced by wrapping messages in semantic elements with descriptive aria-labels, adding "Assistant is typing" and "Response complete" announcements for improved accessibility. [#18735](https://github.com/open-webui/open-webui/pull/18735)
- 🔒 Incorrect await call in the OAuth 2.1 flow is removed, eliminating a logged exception during authentication. [#18236](https://github.com/open-webui/open-webui/pull/18236)
- 🛡️ Duplicate crossorigin attribute in the manifest file was removed. [#18413](https://github.com/open-webui/open-webui/pull/18413)

### Changed

- 🔄 Firecrawl integration was refactored to use the official Firecrawl SDK instead of direct HTTP requests and langchain_community FireCrawlLoader, improving reliability and performance with batch scraping support and enhanced error handling. [#18635](https://github.com/open-webui/open-webui/pull/18635)
- 📄 MinerU content extraction engine now only supports PDF files following the upstream removal of LibreOffice document conversion in version 2.0.0; users needing to process office documents should convert them to PDF format first. [#18448](https://github.com/open-webui/open-webui/issues/18448)

## [0.6.34] - 2025-10-16

### Added

- 📄 MinerU is now supported as a document parser backend, with support for both local and managed API deployments. [#18306](https://github.com/open-webui/open-webui/pull/18306)
- 🔒 JWT token expiration default is now set to 4 weeks instead of never expiring, with security warnings displayed in backend logs and admin UI when set to unlimited. [#18261](https://github.com/open-webui/open-webui/pull/18261), [#18262](https://github.com/open-webui/open-webui/pull/18262)
- ⚡ Page loading performance is improved by preventing unnecessary API requests when sidebar folders are not expanded. [#18179](https://github.com/open-webui/open-webui/pull/18179), [#17476](https://github.com/open-webui/open-webui/issues/17476)
- 📁 File hash values are now included in the knowledge endpoint response, enabling efficient file synchronization through hash comparison. [#18284](https://github.com/open-webui/open-webui/pull/18284), [#18283](https://github.com/open-webui/open-webui/issues/18283)
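
  A client can use the returned hashes to skip unchanged files; a minimal sketch, assuming SHA-256 hashes and a `hash` field on each file entry in the response:

  ```python
  import hashlib
  from pathlib import Path


  def sha256_of(path: Path) -> str:
      digest = hashlib.sha256()
      with path.open("rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              digest.update(chunk)
      return digest.hexdigest()


  def files_needing_upload(local_dir: Path, remote_files: list[dict]) -> list[Path]:
      # remote_files is the knowledge endpoint response; only files whose
      # content hash is unknown to the server need to be uploaded again.
      remote_hashes = {entry.get("hash") for entry in remote_files}
      return [
          p for p in local_dir.iterdir()
          if p.is_file() and sha256_of(p) not in remote_hashes
      ]
  ```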
- 🎨 Chat dialog scrollbar visibility is improved by increasing its width, making it easier to use for navigation. [#18369](https://github.com/open-webui/open-webui/pull/18369), [#11782](https://github.com/open-webui/open-webui/issues/11782)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Catalan, Chinese, Czech, Finnish, German, Kabyle, Korean, Portuguese (Brazil), Spanish, Thai, and Turkish were enhanced and expanded.

### Fixed

- 📚 Focused retrieval mode now works correctly, preventing the system from forcing full context mode and loading all documents in a knowledge base regardless of settings. [#18133](https://github.com/open-webui/open-webui/issues/18133)
- 🔧 Filter inlet functions now correctly execute on tool call continuations, ensuring parameter persistence throughout tool interactions. [#18222](https://github.com/open-webui/open-webui/issues/18222)
- 🛠️ External tool servers now properly support DELETE requests with body data. [#18289](https://github.com/open-webui/open-webui/pull/18289), [#18287](https://github.com/open-webui/open-webui/issues/18287)
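
  In Python's `requests`, a DELETE body is passed the same way as for POST; a sketch of such a tool call (URL, payload, and auth scheme are illustrative):

  ```python
  import requests


  def call_tool_delete(url: str, arguments: dict, token: str) -> dict:
      # requests sends the json= payload as the DELETE request body.
      response = requests.delete(
          url,
          json=arguments,
          headers={"Authorization": f"Bearer {token}"},
          timeout=30,
      )
      response.raise_for_status()
      return response.json()
  ```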
- 🗄️ Oracle23ai vector database client now correctly handles variable initialization, resolving UnboundLocalError when retrieving items from collections. [#18356](https://github.com/open-webui/open-webui/issues/18356)
- 🔧 Model auto-pull functionality now works correctly even when user settings remain unmodified. [#18324](https://github.com/open-webui/open-webui/pull/18324)
- 🎨 Duplicate HTML content in artifacts is now prevented by improving code block detection logic. [#18195](https://github.com/open-webui/open-webui/pull/18195), [#6154](https://github.com/open-webui/open-webui/issues/6154)
- 💬 Pinned chats now appear in the Reference Chats list and can be referenced in conversations. [#18288](https://github.com/open-webui/open-webui/issues/18288)
- 📝 Misleading knowledge base warning text in documents settings is clarified to correctly instruct users about reindexing vectors. [#18263](https://github.com/open-webui/open-webui/pull/18263)
- 🔔 Toast notifications can now be dismissed even when a modal is open. [#18260](https://github.com/open-webui/open-webui/pull/18260)
- 🔘 The "Chats" button in the sidebar now correctly toggles chat list visibility without navigating away from the current page. [#18232](https://github.com/open-webui/open-webui/pull/18232)
- 🎯 The Integrations menu no longer closes prematurely when clicking outside the Valves modal. [#18310](https://github.com/open-webui/open-webui/pull/18310)
- 🛠️ Tool ID display issues where "undefined" was incorrectly shown in the interface are now resolved. [#18178](https://github.com/open-webui/open-webui/pull/18178)
- 🛠️ Model management issues caused by excessively long model IDs are now prevented through validation that limits model IDs to 256 characters. [#18125](https://github.com/open-webui/open-webui/issues/18125)

## [0.6.33] - 2025-10-08

### Added

- 🎨 Workspace interface received a comprehensive redesign across Models, Knowledge, Prompts, and Tools sections, featuring reorganized controls, view filters for created vs shared items, tag selectors, improved visual hierarchy, and streamlined import/export functionality. [Commit](https://github.com/open-webui/open-webui/commit/2c59a288603d8c5f004f223ee00fef37cc763a8e), [Commit](https://github.com/open-webui/open-webui/commit/6050c86ab6ef6b8c96dd3f99c62a6867011b67a4), [Commit](https://github.com/open-webui/open-webui/commit/96ecb47bc71c072aa34ef2be10781b042bef4e8c), [Commit](https://github.com/open-webui/open-webui/commit/2250d102b28075a9611696e911536547abb8b38a), [Commit](https://github.com/open-webui/open-webui/commit/23c8f6d507bfee75ab0015a3e2972d5c26f7e9bf), [Commit](https://github.com/open-webui/open-webui/commit/a743b16728c6ae24b8befbc2d7f24eb9e20c4ad5)
- 🛠️ Functions admin interface received a comprehensive redesign with creator attribution display, ownership filters for created vs shared items, improved organization, and refined styling. [Commit](https://github.com/open-webui/open-webui/commit/f5e1a42f51acc0b9d5b63a33c1ca2e42470239c1)
- ⚡ Page initialization performance is significantly improved through parallel data loading and optimized folder API calls, reducing initial page load time. [#17559](https://github.com/open-webui/open-webui/pull/17559), [#17889](https://github.com/open-webui/open-webui/pull/17889)
- ⚡ Chat overview component is now dynamically loaded on demand, reducing initial page bundle size by approximately 470KB and improving first-screen loading speed. [#17595](https://github.com/open-webui/open-webui/pull/17595)
- 📁 Folders can now be attached to chats using the "#" command, automatically expanding to include all files within the folder for streamlined knowledge base integration. [Commit](https://github.com/open-webui/open-webui/commit/d2cb78179d66dc85188172a08622d4c97a2ea1ee)
- 📱 Progressive Web App now supports Android share target functionality, allowing users to share web pages, YouTube videos, and text directly to Open WebUI from the system share menu. [#17633](https://github.com/open-webui/open-webui/pull/17633), [#17125](https://github.com/open-webui/open-webui/issues/17125)
- 🗄️ Redis session storage is now available as an experimental option for OAuth authentication flows via the ENABLE_STAR_SESSIONS_MIDDLEWARE environment variable, providing shared session state across multi-replica deployments to address CSRF errors, though currently only basic Redis setups are supported. [#17223](https://github.com/open-webui/open-webui/pull/17223), [#15373](https://github.com/open-webui/open-webui/issues/15373), [Docs:Commit](https://github.com/open-webui/docs/commit/14052347f165d1b597615370373d7289ce44c7f9)
- 📊 Vega and Vega-Lite chart visualization renderers are now supported in code blocks, enabling inline rendering of data visualizations with automatic compilation of Vega-Lite specifications. [#18033](https://github.com/open-webui/open-webui/pull/18033), [#18040](https://github.com/open-webui/open-webui/pull/18040), [#18022](https://github.com/open-webui/open-webui/issues/18022)
- 🔗 OpenAI connections now support custom HTTP headers, enabling users to configure authentication and routing headers for specific deployment requirements. [#18021](https://github.com/open-webui/open-webui/pull/18021), [#9732](https://github.com/open-webui/open-webui/discussions/9732)
- 🔐 OpenID Connect authentication now supports OIDC providers without email scope via the ENABLE_OAUTH_WITHOUT_EMAIL environment variable, enabling compatibility with identity providers that don't expose email addresses. [#18047](https://github.com/open-webui/open-webui/pull/18047), [#18045](https://github.com/open-webui/open-webui/issues/18045)
- 🤖 Ollama model management modal now features individual model update cancellation, comprehensive tooltips for all buttons, and streamlined notification behavior to reduce toast spam. [#16863](https://github.com/open-webui/open-webui/pull/16863)
- ☁️ OneDrive file picker now includes search functionality and "My Organization" pivot for business accounts, enabling easier file discovery across organizational content. [#17930](https://github.com/open-webui/open-webui/pull/17930), [#17929](https://github.com/open-webui/open-webui/issues/17929)
- 📊 Chat overview flow diagram now supports toggling between vertical and horizontal layout orientations for improved visualization flexibility. [#17941](https://github.com/open-webui/open-webui/pull/17941)
- 🔊 OpenAI Text-to-Speech engine now supports additional parameters, allowing users to customize TTS behavior with provider-specific options via JSON configuration. [#17985](https://github.com/open-webui/open-webui/issues/17985), [#17188](https://github.com/open-webui/open-webui/pull/17188)
- 🛠️ Tool server list now displays server name, URL, and type (OpenAPI or MCP) for easier identification and management. [#18062](https://github.com/open-webui/open-webui/issues/18062)
- 📁 Folders now remember the last selected model, automatically applying it when starting new chats within that folder. [#17836](https://github.com/open-webui/open-webui/issues/17836)
- 🔢 Ollama embedding endpoint now supports the optional dimensions parameter for controlling embedding output size, compatible with Ollama v0.11.11 and later. [#17942](https://github.com/open-webui/open-webui/pull/17942)
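
  Against a local Ollama instance the parameter can be exercised like this sketch (the model name is illustrative; requires Ollama v0.11.11 or later):

  ```python
  import requests

  response = requests.post(
      "http://localhost:11434/api/embed",
      json={
          "model": "embeddinggemma",
          "input": ["first passage", "second passage"],
          "dimensions": 256,  # requested embedding output size
      },
      timeout=60,
  )
  response.raise_for_status()
  embeddings = response.json()["embeddings"]
  ```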
- ⚡ Workspace knowledge page load time is improved by removing redundant API calls, enhancing overall responsiveness. [#18057](https://github.com/open-webui/open-webui/pull/18057)
- ⚡ File metadata query performance is enhanced by selecting only relevant columns instead of retrieving entire records, reducing database overhead. [#18013](https://github.com/open-webui/open-webui/pull/18013)
- 📄 Note PDF exports now include titles and properly render in dark mode with appropriate background colors. [Commit](https://github.com/open-webui/open-webui/commit/216fb5c3db1a223ffe6e72d97aa9551fe0e2d028)
- 📄 Docling document extraction now supports additional parameters for VLM pipeline configuration, enabling customized vision model settings. [#17363](https://github.com/open-webui/open-webui/pull/17363)
- ⚙️ Server startup script now supports passing arbitrary arguments to uvicorn, enabling custom server configuration options. [#17919](https://github.com/open-webui/open-webui/pull/17919), [#17918](https://github.com/open-webui/open-webui/issues/17918)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for German, Danish, Spanish, Korean, Portuguese (Brazil), Simplified Chinese, and Traditional Chinese were enhanced and expanded.

### Fixed

- 💬 System prompts are no longer duplicated in chat requests, eliminating confusion and excessive token usage caused by repeated instructions being sent to models. [#17198](https://github.com/open-webui/open-webui/issues/17198), [#16855](https://github.com/open-webui/open-webui/issues/16855)
- 🔐 MCP OAuth 2.1 authentication now complies with the standard by implementing PKCE with S256 code challenge method and explicitly passing client credentials during token authorization, resolving "code_challenge: Field required" and "client_id: Field required" errors when connecting to OAuth-secured MCP servers. [Commit](https://github.com/open-webui/open-webui/commit/911a114ad459f5deebd97543c13c2b90196efb54), [#18010](https://github.com/open-webui/open-webui/issues/18010), [#18087](https://github.com/open-webui/open-webui/pull/18087)
- 🔐 OAuth signup flow now handles password hashing correctly by migrating from passlib to native bcrypt, preventing failures when passwords exceed 72 bytes. [#17917](https://github.com/open-webui/open-webui/issues/17917)
- 🔐 OAuth token refresh errors are resolved by properly registering and storing OAuth clients, fixing "Constructor parameter should be str" exceptions for Google, Microsoft, and OIDC providers. [#17829](https://github.com/open-webui/open-webui/issues/17829)
- 🔐 OAuth server metadata URL is now correctly accessed via the proper attribute, fixing automatic token refresh and logout functionality for Microsoft OAuth provider when OPENID_PROVIDER_URL is not set. [#18065](https://github.com/open-webui/open-webui/pull/18065)
- 🔐 OAuth credential decryption failures now allow the application to start gracefully with clear error messages instead of crashing, preventing complete service outages when WEBUI_SECRET_KEY mismatches occur during database migrations or environment changes. [#18094](https://github.com/open-webui/open-webui/pull/18094), [#18092](https://github.com/open-webui/open-webui/issues/18092)
- 🔐 OAuth 2.1 server discovery now correctly attempts all configured discovery URLs in sequence instead of only trying the first URL. [#17906](https://github.com/open-webui/open-webui/pull/17906), [#17904](https://github.com/open-webui/open-webui/issues/17904), [#18026](https://github.com/open-webui/open-webui/pull/18026)
- 🔐 Login redirect now correctly honors the redirect query parameter after authentication, ensuring users are returned to their intended destination with query parameters intact instead of defaulting to the homepage. [#18071](https://github.com/open-webui/open-webui/issues/18071)
- ☁️ OneDrive Business integration authentication regression is resolved, ensuring the popup now properly triggers when connecting to OneDrive accounts. [#17902](https://github.com/open-webui/open-webui/pull/17902), [#17825](https://github.com/open-webui/open-webui/discussions/17825), [#17816](https://github.com/open-webui/open-webui/issues/17816)
- 👥 Default group settings now persist correctly after page navigation, ensuring configuration changes are properly saved and retained. [#17899](https://github.com/open-webui/open-webui/issues/17899), [#18003](https://github.com/open-webui/open-webui/issues/18003)
- 📁 Folder data integrity is now verified on retrieval, automatically fixing orphaned folders with invalid parent references and ensuring proper cascading deletion of nested folder structures. [Commit](https://github.com/open-webui/open-webui/commit/5448618dd5ea181b9635b77040cef60926a902ff)
- 🗄️ Redis Sentinel and Redis Cluster configurations with the experimental ENABLE_STAR_SESSIONS_MIDDLEWARE feature are now properly isolated by making the feature opt-in only, preventing ReadOnlyError failures when connecting to read replicas in multi-node Redis deployments. [#18073](https://github.com/open-webui/open-webui/issues/18073)
- 📊 Mermaid and Vega diagram rendering now displays error toast notifications when syntax errors are detected, helping users identify and fix diagram issues instead of silently failing. [#18068](https://github.com/open-webui/open-webui/pull/18068)
- 🤖 Reasoning models that return reasoning_content instead of content no longer cause NoneType errors during chat title generation, follow-up suggestions, and tag generation. [#18080](https://github.com/open-webui/open-webui/pull/18080)
- 📚 Citation rendering now correctly handles multiple source references in a single bracket, parsing formats like [1,2] and [1, 2] into separate clickable citation links. [#18120](https://github.com/open-webui/open-webui/pull/18120)
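
  The parsing step behaves roughly like this sketch, expanding comma-separated references into individual ones:

  ```python
  import re

  BRACKET_REF = re.compile(r"\[(\d+(?:\s*,\s*\d+)+)\]")


  def split_citations(text: str) -> str:
      """Expand [1,2] and [1, 2] into separate [1][2] references."""
      def expand(match: re.Match) -> str:
          ids = [part.strip() for part in match.group(1).split(",")]
          return "".join(f"[{i}]" for i in ids)
      return BRACKET_REF.sub(expand, text)


  assert split_citations("See [1,2] and [3, 4].") == "See [1][2] and [3][4]."
  ```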
- 🔍 Web search now handles individual source failures gracefully, continuing to process remaining sources instead of failing entirely when a single URL is unreachable or returns an error. [Commit](https://github.com/open-webui/open-webui/commit/e000494e488090c5f66989a2b3f89d3eaeb7946b), [Commit](https://github.com/open-webui/open-webui/commit/53e98620bff38ab9280aee5165af0a704bdd99b9)
- 🔍 Hybrid search with reranking now handles empty result sets gracefully instead of crashing with ValueError when all results are filtered out due to relevance thresholds. [#18096](https://github.com/open-webui/open-webui/issues/18096)
- 🔍 Reranking models without defined padding tokens now work correctly by automatically falling back to eos_token_id as pad_token_id, fixing "Cannot handle batch sizes > 1" errors for models like Qwen3-Reranker. [#18108](https://github.com/open-webui/open-webui/pull/18108), [#16027](https://github.com/open-webui/open-webui/discussions/16027)
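
  With Hugging Face `transformers`, the fallback is a small guard before batching (the checkpoint name is illustrative):

  ```python
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-0.6B")

  # Some reranker checkpoints ship without a padding token, which breaks
  # batched inference; fall back to the end-of-sequence token.
  if tokenizer.pad_token is None:
      tokenizer.pad_token = tokenizer.eos_token
  ```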
- 🔍 Model selector search now correctly returns results for non-admin users by dynamically updating the search index when the model list changes, fixing a race condition that caused empty search results. [#17996](https://github.com/open-webui/open-webui/pull/17996), [#17960](https://github.com/open-webui/open-webui/pull/17960)
- ⚡ Task model function calling performance is improved by excluding base64 image data from payloads, significantly reducing token count and memory usage when images are present in conversations. [#17897](https://github.com/open-webui/open-webui/pull/17897)
- 🤖 Text selection "Ask" action now correctly recognizes and uses local models configured via direct connections instead of only showing external provider models. [#17896](https://github.com/open-webui/open-webui/issues/17896)
- 🛑 Task cancellation API now returns accurate response status, correctly reporting successful cancellations instead of incorrectly indicating failures. [#17920](https://github.com/open-webui/open-webui/issues/17920)
- 💬 Follow-up query suggestions are now generated and displayed in temporary chats, matching the behavior of saved chats. [#14987](https://github.com/open-webui/open-webui/issues/14987)
- 🔊 Azure Text-to-Speech now properly escapes special characters like ampersands in SSML, preventing HTTP 400 errors and ensuring audio generation succeeds for all text content. [#17962](https://github.com/open-webui/open-webui/issues/17962)
- 🛠️ OpenAPI tool server calls with optional parameters now execute successfully even when no arguments are provided, removing the incorrect requirement for a request body. [#18036](https://github.com/open-webui/open-webui/issues/18036)
- 🛠️ MCP mode tool server connections no longer incorrectly validate the OpenAPI path field, allowing seamless switching between OpenAPI and MCP connection types. [#17989](https://github.com/open-webui/open-webui/pull/17989), [#17988](https://github.com/open-webui/open-webui/issues/17988)
- 🛠️ Third-party tool responses containing non-UTF8 or invalid byte sequences are now handled gracefully without causing request failures. [#17882](https://github.com/open-webui/open-webui/pull/17882)
- 🎨 Workspace filter dropdown now correctly renders model tags as strings instead of displaying individual characters, fixing broken filtering interface when models have multiple tags. [#18034](https://github.com/open-webui/open-webui/issues/18034)
- ⌨️ Ctrl+Enter keyboard shortcut now correctly sends messages in mobile and narrow browser views on Chrome instead of inserting newlines. [#17975](https://github.com/open-webui/open-webui/issues/17975)
- ⌨️ Tab characters are now preserved when pasting code or formatted text into the chat input box in plain text mode. [#17958](https://github.com/open-webui/open-webui/issues/17958)
- 📋 Text selection copying from the chat input box now correctly copies only the selected text instead of the entire textbox content. [#17911](https://github.com/open-webui/open-webui/issues/17911)
- 🔍 Web search query logging now uses debug level instead of info level, preventing user search queries from appearing in production logs. [#17888](https://github.com/open-webui/open-webui/pull/17888)
- 📝 Debug print statements in middleware were removed to prevent excessive log pollution and respect configured logging levels. [#17943](https://github.com/open-webui/open-webui/issues/17943)

### Changed

- 🗄️ Milvus vector database dependency is updated from pymilvus 2.5.0 to 2.6.2, ensuring compatibility with newer Milvus versions but requiring users on older Milvus instances to either upgrade their database or manually downgrade the pymilvus package. [#18066](https://github.com/open-webui/open-webui/pull/18066)

## [0.6.32] - 2025-09-29

### Added

- ⚡ JSON model import moved to backend processing for significant performance improvements when importing large model files. [#17871](https://github.com/open-webui/open-webui/pull/17871)
- ⚠️ Visual warnings for group permissions that display when a permission is disabled in a group but remains enabled in the default user role, clarifying inheritance behavior for administrators. [#17848](https://github.com/open-webui/open-webui/pull/17848)
- 🗄️ Milvus multi-tenancy mode using shared collections with resource ID filtering for improved scalability, mirroring the existing Qdrant implementation and configurable via ENABLE_MILVUS_MULTITENANCY_MODE environment variable. [#17837](https://github.com/open-webui/open-webui/pull/17837)
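
  In this mode every query against the shared collection is scoped by a resource ID filter rather than a per-knowledge-base collection; a sketch with `pymilvus` (collection, field, and ID names are illustrative):

  ```python
  from pymilvus import MilvusClient

  client = MilvusClient(uri="http://localhost:19530")
  query_embedding = [0.1] * 768  # stand-in for a real query vector

  results = client.search(
      collection_name="open_webui_shared",
      data=[query_embedding],
      filter='resource_id == "kb-1234"',  # tenant scoping happens here
      limit=5,
      output_fields=["text"],
  )
  ```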
- 🛠️ Enhanced tool result processing with improved error handling, better MCP tool result handling, and performance improvements for embedded UI components. [Commit](https://github.com/open-webui/open-webui/commit/4f06f29348b2c9d71c87d1bbe5b748a368f5101f)
- 👥 New user groups now automatically inherit default group permissions, streamlining the admin setup process by eliminating manual permission configuration. [#17843](https://github.com/open-webui/open-webui/pull/17843)
- 🗂️ Bulk unarchive functionality for all chats, providing a single backend endpoint to efficiently restore all archived chats at once. [#17857](https://github.com/open-webui/open-webui/pull/17857)
- 🏷️ Browser tab title toggle setting allows users to control whether chat titles appear in the browser tab or display only "Open WebUI". [#17851](https://github.com/open-webui/open-webui/pull/17851)
- 💬 Reply-to-message functionality in channels, allowing users to reply directly to specific messages with visual threading and context display. [Commit](https://github.com/open-webui/open-webui/commit/1a18928c94903ad1f1f0391b8ade042c3e60205b)
- 🔧 Tool server import and export functionality, allowing direct upload of openapi.json and openapi.yaml files as an alternative to URL-based configuration. [#14446](https://github.com/open-webui/open-webui/issues/14446)
- 🔧 User valve configuration for Functions is now available in the integration menu, providing consistent management alongside Tools. [#17784](https://github.com/open-webui/open-webui/issues/17784)
- 🔐 Admin permission toggle for controlling public sharing of notes, configurable via USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING environment variable. [#17801](https://github.com/open-webui/open-webui/pull/17801), [Docs:#715](https://github.com/open-webui/docs/pull/715)
- 🗄️ DISKANN index type support for Milvus vector database with configurable maximum degree and search list size parameters. [#17770](https://github.com/open-webui/open-webui/pull/17770), [Docs:Commit](https://github.com/open-webui/docs/commit/cec50ab4d4b659558ca1ccd4b5e6fc024f05fb83)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Chinese (Simplified & Traditional) and Bosnian (Latin) were enhanced and expanded.

### Fixed

- 🛠️ MCP tool calls are now correctly routed to the appropriate server when multiple streamable-http MCP servers are enabled, preventing "Tool not found" errors. [#17817](https://github.com/open-webui/open-webui/issues/17817)
- 🛠️ External tool servers (OpenAPI/MCP) now properly process and return tool results to the model, restoring functionality that was broken in v0.6.31. [#17764](https://github.com/open-webui/open-webui/issues/17764)
- 🔧 User valve detection now correctly identifies valves in imported tool code, ensuring gear icons appear in the integrations menu for all tools with user valves. [#17765](https://github.com/open-webui/open-webui/issues/17765)
- 🔐 MCP OAuth discovery now correctly handles multi-tenant configurations by including subpaths in metadata URL discovery. [#17768](https://github.com/open-webui/open-webui/issues/17768)
- 🗄️ Milvus query operations now correctly use -1 instead of None for unlimited queries, preventing TypeError exceptions. [#17769](https://github.com/open-webui/open-webui/pull/17769), [#17088](https://github.com/open-webui/open-webui/issues/17088)
- 📁 File upload error messages are now displayed when files are modified during upload, preventing user confusion on Android and Windows devices. [#17777](https://github.com/open-webui/open-webui/pull/17777)
- 🎨 MessageInput Integrations button hover effect now displays correctly with proper visual feedback. [#17767](https://github.com/open-webui/open-webui/pull/17767)
- 🎯 "Set as default" label positioning is fixed to ensure it remains clickable in all scenarios, including multi-model configurations. [#17779](https://github.com/open-webui/open-webui/pull/17779)
- 🎛️ Floating buttons now correctly retrieve message context by using the proper messageId parameter in createMessagesList calls. [#17823](https://github.com/open-webui/open-webui/pull/17823)
- 📌 Pinned chats are now properly cleared from the sidebar after archiving all chats, ensuring UI consistency without requiring a page refresh. [#17832](https://github.com/open-webui/open-webui/pull/17832)
- 🗑️ Delete confirmation modals now properly truncate long names for Notes, Prompts, Tools, and Functions to prevent modal overflow. [#17812](https://github.com/open-webui/open-webui/pull/17812)
- 🌐 Internationalization function calls now use proper Svelte store subscription syntax, preventing "i18n.t is not a function" errors on the model creation page. [#17819](https://github.com/open-webui/open-webui/pull/17819)
- 🎨 Playground chat interface button layout is corrected to prevent vertical text rendering for Assistant/User role buttons. [#17819](https://github.com/open-webui/open-webui/pull/17819)
- 🏷️ UI text truncation is improved across multiple components including usernames in admin panels, arena model names, model tags, and filter tags to prevent layout overflow issues. [#17805](https://github.com/open-webui/open-webui/pull/17805), [#17803](https://github.com/open-webui/open-webui/pull/17803), [#17791](https://github.com/open-webui/open-webui/pull/17791), [#17796](https://github.com/open-webui/open-webui/pull/17796)

## [0.6.31] - 2025-09-25

### Added

- 🔌 MCP (streamable HTTP) server support was added alongside existing OpenAPI server integration, allowing users to connect both server types through an improved server configuration interface. [#15932](https://github.com/open-webui/open-webui/issues/15932), [#16651](https://github.com/open-webui/open-webui/pull/16651), [Commit](https://github.com/open-webui/open-webui/commit/fd7385c3921eb59af76a26f4c475aedb38ce2406), [Commit](https://github.com/open-webui/open-webui/commit/777e81f7a8aca957a359d51df8388e5af4721a68), [Commit](https://github.com/open-webui/open-webui/commit/de7f7b3d855641450f8e5aac34fbae0665e0b80e), [Commit](https://github.com/open-webui/open-webui/commit/f1bbf3a91e4713039364b790e886e59b401572d0), [Commit](https://github.com/open-webui/open-webui/commit/c55afc42559c32a6f0c8beb0f1bb18e9360ab8af), [Commit](https://github.com/open-webui/open-webui/commit/61f20acf61f4fe30c0e5b0180949f6e1a8cf6524)
- 🔐 To enable MCP server authentication, OAuth 2.1 dynamic client registration was implemented with secure automatic client registration, encrypted session management, and seamless authentication flows. [Commit](https://github.com/open-webui/open-webui/commit/972be4eda5a394c111e849075f94099c9c0dd9aa), [Commit](https://github.com/open-webui/open-webui/commit/77e971dd9fbeee806e2864e686df5ec75e82104b), [Commit](https://github.com/open-webui/open-webui/commit/879abd7feea3692a2f157da4a458d30f27217508), [Commit](https://github.com/open-webui/open-webui/commit/422d38fd114b1ebd8a7dbb114d64e14791e67d7a), [Docs:#709](https://github.com/open-webui/docs/pull/709)
- 🛠️ External & Built-In Tools can now support rich UI element embedding ([Docs](https://docs.openwebui.com/features/plugin/tools/development)), allowing tools to return HTML content and interactive iframes that display directly within chat conversations with configurable security settings. [Commit](https://github.com/open-webui/open-webui/commit/07c5b25bc8b63173f406feb3ba183d375fedee6a), [Commit](https://github.com/open-webui/open-webui/commit/a5d8882bba7933a2c2c31c0a1405aba507c370bb), [Commit](https://github.com/open-webui/open-webui/commit/7be5b7f50f498de97359003609fc5993a172f084), [Commit](https://github.com/open-webui/open-webui/commit/a89ffccd7e96705a4a40e845289f4fcf9c4ae596)
- 📝 Note editor now supports drag-and-drop reordering of list items with visual drag handles, making list organization more intuitive and efficient. [Commit](https://github.com/open-webui/open-webui/commit/e4e97e727e9b4971f1c363b1280ca3a101599d88), [Commit](https://github.com/open-webui/open-webui/commit/aeb5288a3c7a6e9e0a47b807cc52f870c1b7dbe6)
- 🔍 Search modal was enhanced with quick action buttons for starting new conversations and creating notes, with intelligent content pre-population from search queries. [Commit](https://github.com/open-webui/open-webui/commit/aa6f63a335e172fec1dc94b2056541f52c1167a6), [Commit](https://github.com/open-webui/open-webui/commit/612a52d7bb7dbe9fa0bbbc8ac0a552d2b9801146), [Commit](https://github.com/open-webui/open-webui/commit/b03529b006f3148e895b1094584e1ab129ecac5b)
- 🛠️ Tool user valve configuration interface was added to the integrations menu, displaying clickable gear icon buttons with tooltips for tools that support user-specific settings, making personal tool configurations easily accessible. [Commit](https://github.com/open-webui/open-webui/commit/27d61307cdce97ed11a05ec13fc300249d6022cd)
- 👥 Channel access control was enhanced to require write permissions for posting, editing, and deleting messages, while read-only users can view content but cannot contribute. [#17543](https://github.com/open-webui/open-webui/pull/17543)
- 💬 Channel models now support image processing, allowing AI assistants to view and analyze images shared in conversation threads. [Commit](https://github.com/open-webui/open-webui/commit/9f0010e234a6f40782a66021435d3c02b9c23639)
- 🌐 Attach Webpage button was added to the message input menu, providing a user-friendly modal interface for attaching web content and YouTube videos as an alternative to the existing URL syntax. [#17534](https://github.com/open-webui/open-webui/pull/17534)
- 🔐 Redis session storage support was added for OAuth redirects, providing better state handling in multi-pod Kubernetes deployments and resolving CSRF mismatch errors. [#17223](https://github.com/open-webui/open-webui/pull/17223), [#15373](https://github.com/open-webui/open-webui/issues/15373)
- 🔍 Ollama Cloud web search integration was added as a new search engine option, providing access to web search functionality through Ollama's cloud infrastructure. [Commit](https://github.com/open-webui/open-webui/commit/e06489d92baca095b8f376fbef223298c7772579), [Commit](https://github.com/open-webui/open-webui/commit/4b6d34438bcfc45463dc7a9cb984794b32c1f0a1), [Commit](https://github.com/open-webui/open-webui/commit/05c46008da85357dc6890b846789dfaa59f4a520), [Commit](https://github.com/open-webui/open-webui/commit/fe65fe0b97ec5a8fff71592ff04a25c8e123d108), [Docs:#708](https://github.com/open-webui/docs/pull/708)
- 🔍 Perplexity Websearch API integration was added as a new search engine option, providing access to Perplexity's new web search functionality. [#17756](https://github.com/open-webui/open-webui/issues/17756), [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/7f411dd5cc1c29733216f79e99eeeed0406a2afe)
- ☁️ OneDrive integration was improved to support separate client IDs for personal and business authentication, enabling both integrations to work simultaneously. [#17619](https://github.com/open-webui/open-webui/pull/17619), [Docs](https://docs.openwebui.com/tutorials/integrations/onedrive-sharepoint), [Docs](https://docs.openwebui.com/getting-started/env-configuration/#onedrive)
- 📝 Pending user overlay content now supports markdown formatting, enabling rich text display for custom messages similar to banner functionality. [#17681](https://github.com/open-webui/open-webui/pull/17681)
- 🎨 Image generation model selection was centralized to enable dynamic model override in function calls, allowing pipes and tools to specify different models than the global default while maintaining backward compatibility. [#17689](https://github.com/open-webui/open-webui/pull/17689)
- 🎨 Interface design was modernized with updated visual styling, improved spacing, and refined component layouts across modals, sidebar, settings, and navigation elements. [Commit](https://github.com/open-webui/open-webui/commit/27a91cc80a24bda0a3a188bc3120a8ab57b00881), [Commit](https://github.com/open-webui/open-webui/commit/4ad743098615f9c58daa9df392f31109aeceeb16), [Commit](https://github.com/open-webui/open-webui/commit/fd7385c3921eb59af76a26f4c475aedb38ce2406)
- 📊 Notes query performance was optimized through database-level filtering and separated access control logic, reducing memory usage and eliminating N+1 query problems for better scalability. [#17607](https://github.com/open-webui/open-webui/pull/17607), [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/da661756fa7eec754270e6dd8c67cbf74a28a17f)
- ⚡ Page loading performance was optimized by deferring API requests until components are actually opened, including ChangelogModal, ModelSelector, RecursiveFolder, ArchivedChatsModal, and SearchModal. [#17542](https://github.com/open-webui/open-webui/pull/17542), [#17555](https://github.com/open-webui/open-webui/pull/17555), [#17557](https://github.com/open-webui/open-webui/pull/17557), [#17541](https://github.com/open-webui/open-webui/pull/17541), [#17640](https://github.com/open-webui/open-webui/pull/17640)
- ⚡ Bundle size was reduced by 1.58MB through optimized highlight.js language support, improving page loading speed and reducing bandwidth usage. [#17645](https://github.com/open-webui/open-webui/pull/17645)
- ⚡ Editor collaboration functionality was refactored to reduce package size by 390KB and minimize compilation errors, improving build performance and reliability. [#17593](https://github.com/open-webui/open-webui/pull/17593)
- ♿ Enhanced user interface accessibility through the addition of unique element IDs, improving targeting for testing, styling, and assistive technologies while providing better semantic markup for screen readers and accessibility tools. [#17746](https://github.com/open-webui/open-webui/pull/17746)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Chinese (Simplified and Traditional), Korean, Irish, Spanish, Finnish, French, Kabyle, Russian, and Catalan were enhanced and improved.

### Fixed

- 🛡️ SVG content security was enhanced by implementing DOMPurify sanitization to prevent XSS attacks through malicious SVG elements, ensuring safe rendering of user-generated SVG content. [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/750a659a9fee7687e667d9d755e17b8a0c77d557)
- ☁️ OneDrive attachment menu rendering issues were resolved by restructuring the submenu interface from dropdown to tabbed navigation, preventing menu items from being hidden or clipped due to overflow constraints. [#17554](https://github.com/open-webui/open-webui/issues/17554), [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/90e4b49b881b644465831cc3028bb44f0f7a2196)
- 💬 Attached conversation references now persist throughout the entire chat session, ensuring models can continue querying referenced conversations after multiple conversation turns. [#17750](https://github.com/open-webui/open-webui/issues/17750)
- 🔍 Search modal text box focus issues after pinning or unpinning chats were resolved, allowing users to properly exit the search interface by clicking outside the text box. [#17743](https://github.com/open-webui/open-webui/issues/17743)
- 🔍 Search function chat list is now properly updated in real-time when chats are created or deleted, eliminating stale search results and preview loading failures. [#17741](https://github.com/open-webui/open-webui/issues/17741)
- 💬 Chat jitter and delayed code block expansion in multi-model sessions were resolved by reverting dynamic CodeEditor loading, restoring stable rendering behavior. [#17715](https://github.com/open-webui/open-webui/pull/17715), [#17684](https://github.com/open-webui/open-webui/issues/17684)
- 📎 File upload handling was improved to properly recognize uploaded files even when no accompanying text message is provided, resolving issues where attachments were ignored in custom prompts. [#17492](https://github.com/open-webui/open-webui/issues/17492)
- 💬 Chat conversation referencing within projects was restored by including foldered chats in the reference menu, allowing users to properly quote conversations from within their project scope. [#17530](https://github.com/open-webui/open-webui/issues/17530)
- 🔍 RAG query generation is now skipped when all attached files are set to full context mode, preventing unnecessary retrieval operations and improving system efficiency. [#17744](https://github.com/open-webui/open-webui/pull/17744)
- 💾 Memory leaks in file handling and HTTP connections are prevented through proper resource cleanup, ensuring stable memory usage during large file downloads and processing operations. [#17608](https://github.com/open-webui/open-webui/pull/17608)
- 🔐 OAuth access token refresh errors are resolved by properly implementing async/await patterns, preventing "coroutine object has no attribute get" failures during token expiry. [#17585](https://github.com/open-webui/open-webui/issues/17585), [#17678](https://github.com/open-webui/open-webui/issues/17678)
- ⚙️ Valve behavior was improved to properly handle default values and array types, ensuring only explicitly set values are persisted while maintaining consistent distinction between custom and default valve states. [#17664](https://github.com/open-webui/open-webui/pull/17664)
- 🔍 Hybrid search functionality was enhanced to handle inconsistent parameter types and prevent failures when collection results are None, empty, or in unexpected formats. [#17617](https://github.com/open-webui/open-webui/pull/17617)
- 📁 Empty folder deletion is now allowed regardless of chat deletion permission restrictions, resolving cases where users couldn't remove folders after deleting all contained chats. [#17683](https://github.com/open-webui/open-webui/pull/17683)
- 📝 Rich text editor console errors were resolved by adding proper error handling when the TipTap editor view is not available or not yet mounted. [#17697](https://github.com/open-webui/open-webui/issues/17697)
- 🗒️ Hidden models are now properly excluded from the notes section dropdown and default model selection, preventing users from accessing models they shouldn't see. [#17722](https://github.com/open-webui/open-webui/pull/17722)
- 🖼️ AI-generated image download filenames now use a clean, translatable "Generated Image" format instead of potentially problematic response text, improving file management and compatibility. [#17721](https://github.com/open-webui/open-webui/pull/17721)
- 🎨 Toggle switch display issues in the Integrations interface are fixed, preventing background highlighting and obscuring on hover. [#17564](https://github.com/open-webui/open-webui/issues/17564)

### Changed

- 👥 Channel permissions now require write access for message posting, editing, and deletion; existing user groups default to read-only access and require manual admin migration to write permissions for full participation.
- ☁️ OneDrive environment variable configuration was updated to use separate ONEDRIVE_CLIENT_ID_PERSONAL and ONEDRIVE_CLIENT_ID_BUSINESS variables for better client ID separation, while maintaining backward compatibility with the legacy ONEDRIVE_CLIENT_ID variable. [Docs](https://docs.openwebui.com/tutorials/integrations/onedrive-sharepoint), [Docs](https://docs.openwebui.com/getting-started/env-configuration/#onedrive)

## [0.6.30] - 2025-09-17

### Added

- 🔑 Microsoft Entra ID authentication type support was added for Azure OpenAI connections, enabling enhanced security and streamlined authentication workflows.

### Fixed

- ☁️ OneDrive integration was fixed after recent breakage, restoring reliable account connectivity and file access.

## [0.6.29] - 2025-09-17

### Added

- 🎨 The chat input menu has been completely overhauled with a revolutionary new design, consolidating attachments under a unified '+' button, organizing integrations into a streamlined options menu, and introducing powerful, interactive selectors for attaching chats, notes, and knowledge base items. [Commit](https://github.com/open-webui/open-webui/commit/a68342d5a887e36695e21f8c2aec593b159654ff), [Commit](https://github.com/open-webui/open-webui/commit/96b8aaf83ff341fef432649366bc5155bac6cf20), [Commit](https://github.com/open-webui/open-webui/commit/4977e6d50f7b931372c96dd5979ca635d58aeb78), [Commit](https://github.com/open-webui/open-webui/commit/d973db829f7ec98b8f8fe7d3b2822d588e79f94e), [Commit](https://github.com/open-webui/open-webui/commit/d4c628de09654df76653ad9bce9cb3263e2f27c8), [Commit](https://github.com/open-webui/open-webui/commit/cd740f436db4ea308dbede14ef7ff56e8126f51b), [Commit](https://github.com/open-webui/open-webui/commit/5c2db102d06b5c18beb248d795682ff422e9b6d1), [Commit](https://github.com/open-webui/open-webui/commit/031cf38655a1a2973194d2eaa0fbbd17aca8ee92), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/3ed0a6d11fea1a054e0bc8aa8dfbe417c7c53e51), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/eadec9e86e01bc8f9fb90dfe7a7ae4fc3bfa6420), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c03ca7270e64e3a002d321237160c0ddaf2bb129), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b53ddfbd19aa94e9cbf7210acb31c3cfafafa5fe), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c923461882fcde30ae297a95e91176c95b9b72e1)
- 🤖 AI models can now be mentioned in channels to automatically generate responses, enabling multi-model conversations where mentioned models participate directly in threaded discussions with full context awareness. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/4fe97d8794ee18e087790caab9e5d82886006145)
- 💬 The Channels feature now utilizes the modern rich text editor, including support for '/', '@', and '#' command suggestions. [Commit](https://github.com/open-webui/open-webui/commit/06c1426e14ac0dfaf723485dbbc9723a4d89aba9), [Commit](https://github.com/open-webui/open-webui/commit/02f7c3258b62970ce79716f75d15467a96565054)
- 📎 Channel message input now supports direct paste functionality for images and files from the clipboard, streamlining content sharing workflows. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/6549fc839f86c40c26c2ef4dedcaf763a9304418)
- ⚙️ Models can now be configured with default features (Web Search, Image Generation) and filters that automatically activate when a user selects the model. [Commit](https://github.com/open-webui/open-webui/commit/9a555478273355a5177bfc7f7211c64778e4c8de), [Commit](https://github.com/open-webui/open-webui/commit/384a53b339820068e92f7eaea0d9f3e0536c19c2), [Commit](https://github.com/open-webui/open-webui/commit/d7f43bfc1a30c065def8c50d77c2579c1a3c5c67), [Commit](https://github.com/open-webui/open-webui/commit/6a67a2217cc5946ad771e479e3a37ac213210748)
- 💬 The ability to reference other chats as context within a conversation was added via the attachment menu. [Commit](https://github.com/open-webui/open-webui/commit/e097bbdf11ae4975c622e086df00d054291cdeb3), [Commit](https://github.com/open-webui/open-webui/commit/f3cd2ffb18e7dedbe88430f9ae7caa6b3cfd79d0), [Commit](https://github.com/open-webui/open-webui/commit/74263c872c5d574a9bb0944d7984f748dc772dba), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/aa8ab349ed2fcb46d1cf994b9c0de2ec2ea35d0d), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/025eef754f0d46789981defd473d001e3b1d0ca2)
- 🎨 The command suggestion UI for prompts ('/'), models ('@'), and knowledge ('#') was completely overhauled with a more responsive and keyboard-navigable interface. [Commit](https://github.com/open-webui/open-webui/commit/6b69c4da0fb9329ccf7024483960e070cf52ccab), [Commit](https://github.com/open-webui/open-webui/commit/06a6855f844456eceaa4d410c93379460e208202), [Commit](https://github.com/open-webui/open-webui/commit/c55f5578280b936cf581a743df3703e3db1afd54), [Commit](https://github.com/open-webui/open-webui/commit/f68d1ba394d4423d369f827894cde99d760b2402)
- 👥 User and channel suggestions were added to the mention system, enabling '@' mentions for users and models, and '#' mentions for channels with searchable user lookup and clickable navigation. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/bbd1d2b58c89b35daea234f1fc9208f2af840899), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/aef1e06f0bb72065a25579c982dd49157e320268), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/779db74d7e9b7b00d099b7d65cfbc8a831e74690)
- 📁 Folder functionality was enhanced with custom background image support, improved drag-and-drop capabilities for moving folders to root level, and better menu interactions. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2a234829f5dfdfde27fdfd30591caa908340efb4), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2b1ee8b0dc5f7c0caaafdd218f20705059fa72e2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b1e5bc8e490745f701909c19b6a444b67c04660e), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/3e584132686372dfeef187596a7c557aa5f48308)
- ☁️ OneDrive integration configuration now supports selecting between personal and work/school account types via ENABLE_ONEDRIVE_PERSONAL and ENABLE_ONEDRIVE_BUSINESS environment variables. [#17354](https://github.com/open-webui/open-webui/pull/17354), [Commit](https://github.com/open-webui/open-webui/commit/e1e3009a30f9808ce06582d81a60e391f5ca09ec), [Docs:#697](https://github.com/open-webui/docs/pull/697)
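  For example, to expose only work/school accounts (an illustrative sketch):

  ```bash
  export ENABLE_ONEDRIVE_BUSINESS="true"
  export ENABLE_ONEDRIVE_PERSONAL="false"
  ```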
- ⚡ Mermaid.js is now dynamically loaded on demand, significantly reducing first-screen loading time and improving initial page performance. [#17476](https://github.com/open-webui/open-webui/issues/17476), [#17477](https://github.com/open-webui/open-webui/pull/17477)
- ⚡ Azure MSAL browser library is now dynamically loaded on demand, reducing initial bundle size by 730KB and improving first-screen loading speed. [#17479](https://github.com/open-webui/open-webui/pull/17479)
- ⚡ CodeEditor component is now dynamically loaded on demand, reducing initial bundle size by 1MB and improving first-screen loading speed. [#17498](https://github.com/open-webui/open-webui/pull/17498)
- ⚡ Hugging Face Transformers library is now dynamically loaded on demand, reducing initial bundle size by 1.9MB and improving first-screen loading speed. [#17499](https://github.com/open-webui/open-webui/pull/17499)
- ⚡ jsPDF and html2canvas-pro libraries are now dynamically loaded on demand, reducing initial bundle size by 980KB and improving first-screen loading speed. [#17502](https://github.com/open-webui/open-webui/pull/17502)
- ⚡ Leaflet mapping library is now dynamically loaded on demand, reducing initial bundle size by 454KB and improving first-screen loading speed. [#17503](https://github.com/open-webui/open-webui/pull/17503)
- 📊 OpenTelemetry metrics collection was enhanced to properly handle HTTP 500 errors and ensure metrics are recorded even during exceptions. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b14617a653c6bdcfd3102c12f971924fd1faf572)
- 🔒 OAuth token retrieval logic was refactored, improving the reliability and consistency of authentication handling across the backend. [Commit](https://github.com/open-webui/open-webui/commit/6c0a5fa91cdbf6ffb74667ee61ca96bebfdfbc50)
- 💻 Code block output processing was improved to handle Python execution results more reliably, along with refined visual styling and button layouts. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/0e5320c39e308ff97f2ca9e289618af12479eb6e)
- ⚡ Message input processing was optimized to skip unnecessary text variable handling when input is empty, improving performance. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/e1386fe80b77126a12dabc4ad058abe9b024b275)
- 📄 Individual chat PDF export was added to the sidebar chat menu, allowing users to export single conversations as PDF documents with both stylized and plain text options. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/d041d58bb619689cd04a391b4f8191b23941ca62)
- 🛠️ Function validation was enhanced with improved valve validation and better error handling during function loading and synchronization. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/e66e0526ed6a116323285f79f44237538b6c75e6), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/8edfd29102e0a61777b23d3575eaa30be37b59a5)
- 🔔 Notification toast interaction was enhanced with drag detection to prevent accidental clicks and added keyboard support for accessibility. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/621e7679c427b6f0efa85f95235319238bf171ad)
- 🗓️ Improved date and time formatting dynamically adapts to the selected language, ensuring consistent localization across the UI. [#17409](https://github.com/open-webui/open-webui/pull/17409), [Commit](https://github.com/open-webui/open-webui/commit/2227f24bd6d861b1fad8d2cabacf7d62ce137d0c)
- 🔒 Feishu SSO integration was added, allowing users to authenticate via Feishu. [#17284](https://github.com/open-webui/open-webui/pull/17284), [Docs:#685](https://github.com/open-webui/docs/pull/685)
- 🔠 Toggle filters in the chat input options menu are now sorted alphabetically for easier navigation. [Commit](https://github.com/open-webui/open-webui/commit/ca853ca4656180487afcd84230d214f91db52533)
- 🎨 Long chat titles in the sidebar are now truncated to prevent text overflow and maintain a clean layout. [#17356](https://github.com/open-webui/open-webui/pull/17356)
- 🎨 Temporary chat interface design was refined with improved layout and visual consistency. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/67549dcadd670285d491bd41daf3d081a70fd094), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2ca34217e68f3b439899c75881dfb050f49c9eb2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/fb02ec52a5df3f58b53db4ab3a995c15f83503cd)
- 🎨 Download icon consistency was improved across the entire interface by standardizing the icon component used in menus, functions, tools, and export features. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/596be451ece7e11b5cd25465d49670c27a1cb33f)
- 🎨 Settings interface was enhanced with improved iconography and reorganized the 'Chats' section into 'Data Controls' for better clarity. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/8bf0b40fdd978b5af6548a6e1fb3aabd90bcd5cd)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Finnish, German, Kabyle, Portuguese (Brazil), Simplified Chinese, Spanish (Spain), and Traditional Chinese (Taiwan) were enhanced and expanded.
### Fixed
- 📚 Knowledge base permission logic was corrected to ensure private collection owners can access their own content when embedding bypass is enabled. [#17432](https://github.com/open-webui/open-webui/issues/17432), [Commit](https://github.com/open-webui/open-webui/commit/a51f0c30ec1472d71487eab3e15d0351a2716b12)
- ⚙️ Connection URL editing in Admin Settings now properly saves changes instead of reverting to original values, fixing issues with both Ollama and OpenAI-compatible endpoints. [#17435](https://github.com/open-webui/open-webui/issues/17435), [Commit](https://github.com/open-webui/open-webui/commit/e4c864de7eb0d577843a80688677ce3659d1f81f)
- 📊 Usage information collection from Google models was corrected to handle providers that send usage data alongside content chunks instead of separately. [#17421](https://github.com/open-webui/open-webui/pull/17421), [Commit](https://github.com/open-webui/open-webui/commit/c2f98a4cd29ed738f395fef09c42ab8e73cd46a0)
- ⚙️ Settings modal scrolling issue was resolved by moving image compression controls to a dedicated modal, preventing the main settings from becoming scrollable out of view. [#17474](https://github.com/open-webui/open-webui/issues/17474), [Commit](https://github.com/open-webui/open-webui/commit/fed5615c19b0045a55b0be426b468a57bfda4b66)
- 📁 Folder click behavior was improved to prevent accidental actions by implementing proper double-click detection and timing delays for folder expansion and selection. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/19e3214997170eea6ee92452e8c778e04a28e396)
- 🔐 Access control component reliability was improved with better null checking and error handling for group permissions and private access scenarios. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c8780a7f934c5e49a21b438f2f30232f83cf75d2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/32015c392dbc6b7367a6a91d9e173e675ea3402c)
- 🔗 The citation modal now correctly displays and links to external web page sources in addition to internal documents. [Commit](https://github.com/open-webui/open-webui/commit/9208a84185a7e59524f00a7576667d493c3ac7d4)
- 🔗 Web and YouTube attachment handling was fixed, ensuring their content is now reliably processed and included in the chat context for retrieval. [Commit](https://github.com/open-webui/open-webui/commit/210197fd438b52080cda5d6ce3d47b92cdc264c8)
- 📂 Large file upload failures are resolved by correcting the processing logic for scenarios where document embedding is bypassed. [Commit](https://github.com/open-webui/open-webui/commit/051b6daa8299fd332503bd584563556e2ae6adab)
- 🌐 Rich text input placeholder text now correctly updates when the interface language is switched, ensuring proper localization. [#17473](https://github.com/open-webui/open-webui/pull/17473), [Commit](https://github.com/open-webui/open-webui/commit/77358031f5077e6efe5cc08d8d4e5831c7cd1cd9)
- 📊 Llama.cpp server timing metrics are now correctly parsed and displayed by fixing a typo in the response handling. [#17350](https://github.com/open-webui/open-webui/issues/17350), [Commit](https://github.com/open-webui/open-webui/commit/cf72f5503f39834b9da44ebbb426a3674dad0caa)
- 🛠️ Filter functions with file_handler configuration now properly handle messages without file attachments, preventing runtime errors. [#17423](https://github.com/open-webui/open-webui/pull/17423)
- 🔔 Channel notification delivery was fixed to properly handle background task execution and user access checking. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/1077b2ac8b96e49c2ad2620e76eb65bbb2a3a1f3)
### Changed
- 📝 Prompt template variables are now optional by default instead of being forced as required, allowing flexible workflows with optional metadata fields. [#17447](https://github.com/open-webui/open-webui/issues/17447), [Commit](https://github.com/open-webui/open-webui/commit/d5824b1b495fcf86e57171769bcec2a0f698b070), [Docs:#696](https://github.com/open-webui/docs/pull/696)
- 🛠️ Direct external tool servers now require explicit user selection from the input interface instead of being automatically included in conversations, providing better control over tool usage. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/0f04227c34ca32746c43a9323e2df32299fcb6af), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/99bba12de279dd55c55ded35b2e4f819af1c9ab5)
- 📺 Widescreen mode option was removed from Channels interface, with all channel layouts now using full-width display. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/d46b7b8f1b99a8054b55031fe935c8a16d5ec956)
- 🎛️ The plain textarea input option was deprecated, and the custom text editor is now the standard for all chat inputs. [Commit](https://github.com/open-webui/open-webui/commit/153afd832ccd12a1e5fd99b085008d080872c161)
## [0.6.28] - 2025-09-10
### Added
- 🔍 The "@" command for model selection now supports real-time search and filtering, improving usability and aligning its behavior with other input commands. [#17307](https://github.com/open-webui/open-webui/issues/17307), [Commit](https://github.com/open-webui/open-webui/commit/f2a09c71499489ee71599af4a179e7518aaf658b)
- 🛠️ External tool server data handling is now more robust, automatically attempting to parse specifications as JSON before falling back to YAML, regardless of the URL extension. [Commit](https://github.com/open-webui/open-webui/commit/774c0056bde88ed4831422efa81506488e3d6641)
- 🎯 The "Title" field is now automatically focused when creating a new chat folder, streamlining the folder creation process. [#17315](https://github.com/open-webui/open-webui/issues/17315), [Commit](https://github.com/open-webui/open-webui/commit/c51a651a2d5e2a27546416666812e9b92205562d)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Brazilian Portuguese and Simplified Chinese translations were expanded and refined.
### Fixed
- 🔊 A regression affecting Text-to-Speech for local providers using the OpenAI engine was fixed by reverting a URL joining change. [#17316](https://github.com/open-webui/open-webui/issues/17316), [Commit](https://github.com/open-webui/open-webui/commit/8339f59cdfc63f2d58c8e26933d1bf1438479d75)
- 🪧 A regression was fixed where the input modal for prompts with placeholders would not open, causing the raw prompt text to be pasted into the chat input field instead. [#17325](https://github.com/open-webui/open-webui/issues/17325), [Commit](https://github.com/open-webui/open-webui/commit/d5cb65527eaa4831459a4c7dbf187daa9c0525ae)
- 🔑 An issue was resolved where modified connection keys in the OpenAIConnection component did not take effect. [#17324](https://github.com/open-webui/open-webui/pull/17324)
## [0.6.27] - 2025-09-09
### Added
- 📁 Emoji folder icons were added, allowing users to personalize workspace organization with visual cues, including improved chevron display. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/1588f42fe777ad5d807e3f2fc8dbbc47a8db87c0), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/b70c0f36c0f5bbfc2a767429984d6fba1a7bb26c), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/11dea8795bfce42aa5d8d58ef316ded05173bd87), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/c0a47169fa059154d5f5a9ea6b94f9a66d82f255)
- 📁 The 'Search Collection' input field now dynamically displays the total number of files within the knowledge base. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/fbbe1117ae4c9c8fec6499d790eee275818eccc5)
- ☁️ A provider toggle in connection settings now allows users to manually specify Azure OpenAI deployments. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/5bdd334b74fbd154085f2d590f4afdba32469c8a)
- ⚡ Model list caching performance was optimized by fixing cache key generation to reduce redundant API calls. [#17158](https://github.com/open-webui/open-webui/pull/17158)
- 🎨 Azure OpenAI image generation is now supported, with configurations for IMAGES_OPENAI_API_VERSION via environment variable and admin UI. [#17147](https://github.com/open-webui/open-webui/pull/17147), [#16274](https://github.com/open-webui/open-webui/discussions/16274), [Docs:#679](https://github.com/open-webui/docs/pull/679)
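  A configuration sketch; the resource name, deployment, key, and API version below are placeholders, and the base URL/key variables are assumed to be the existing OpenAI image-generation settings:

  ```bash
  export IMAGES_OPENAI_API_BASE_URL="https://<your-resource>.openai.azure.com/openai/deployments/<image-deployment>"
  export IMAGES_OPENAI_API_KEY="<azure-api-key>"
  export IMAGES_OPENAI_API_VERSION="2024-10-21"  # placeholder version string
  ```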
- ⚡ N+1 query patterns were eliminated across major listing endpoints, reducing database round trips from 1+N to 1+1. [#17165](https://github.com/open-webui/open-webui/pull/17165), [#17160](https://github.com/open-webui/open-webui/pull/17160), [#17161](https://github.com/open-webui/open-webui/pull/17161), [#17162](https://github.com/open-webui/open-webui/pull/17162), [#17159](https://github.com/open-webui/open-webui/pull/17159), [#17166](https://github.com/open-webui/open-webui/pull/17166)
- ⚡ The PDF.js library is now dynamically loaded, significantly reducing initial page load size and improving responsiveness. [#17222](https://github.com/open-webui/open-webui/pull/17222)
- ⚡ The heic2any library is now dynamically loaded across various message input components, including channels, for faster page loads. [#17225](https://github.com/open-webui/open-webui/pull/17225), [#17229](https://github.com/open-webui/open-webui/pull/17229)
- 📚 The knowledge API now supports a "delete_file" query parameter, allowing configurable file deletion behavior. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/22c4ef4fb096498066b73befe993ae3a82f7a8e7)
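  A sketch of the new parameter (host and IDs are placeholders; the removal endpoint path is assumed from the existing knowledge API):

  ```bash
  # Remove a file from a collection while keeping the underlying file record.
  curl -X POST "http://localhost:3000/api/v1/knowledge/<knowledge_id>/file/remove?delete_file=false" \
    -H "Authorization: Bearer $OPEN_WEBUI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"file_id": "<file_id>"}'
  ```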
- 📊 Llama.cpp timing statistics are now integrated into the usage field for comprehensive model performance metrics. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/e830b4959ecd4b2795e29e53026984a58a7696a9)
- 🗄️ The PGVECTOR_CREATE_EXTENSION environment variable now allows control over automatic pgvector extension creation. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/c2b4976c82d335ed524bd80dc914b5e2f5bfbd9e), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/b45219c8b15b48d5ee3d42983e1107bbcefbab01), [Docs:#672](https://github.com/open-webui/docs/pull/672)
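  Useful for locked-down databases where the app user may not create extensions; a sketch, assuming `DATABASE_URL` points at your Postgres instance:

  ```bash
  export PGVECTOR_CREATE_EXTENSION="false"
  # Provision the extension once yourself, as a superuser:
  psql "$DATABASE_URL" -c "CREATE EXTENSION IF NOT EXISTS vector;"
  ```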
- 🔒 Comprehensive server-side OAuth token management was implemented: tokens are now stored encrypted in a new database table with an automatic refresh mechanism, enabling seamless and secure forwarding of valid user-specific OAuth tokens to downstream services (including OpenAI-compatible endpoints and external tool servers) via the new "system_oauth" authentication type. This resolves long-standing issues such as large token size limitations, stale or expired tokens, and unreliable token propagation, and improves security by minimizing client-side token exposure. Configurable via the "ENABLE_OAUTH_ID_TOKEN_COOKIE" and "OAUTH_SESSION_TOKEN_ENCRYPTION_KEY" environment variables. [Docs:#683](https://github.com/open-webui/docs/pull/683), [#17210](https://github.com/open-webui/open-webui/pull/17210), [#8957](https://github.com/open-webui/open-webui/discussions/8957), [#11029](https://github.com/open-webui/open-webui/discussions/11029), [#17178](https://github.com/open-webui/open-webui/issues/17178), [#17183](https://github.com/open-webui/open-webui/issues/17183), [Commit](https://github.com/open-webui/open-webui/commit/217f4daef09b36d3d4cc4681e11d3ebd9984a1a5), [Commit](https://github.com/open-webui/open-webui/commit/fc11e4384fe98fac659e10596f67c23483578867), [Commit](https://github.com/open-webui/open-webui/commit/f11bdc6ab5dd5682bb3e27166e77581f5b8af3e0), [Commit](https://github.com/open-webui/open-webui/commit/f71834720e623761d972d4d740e9bbd90a3a86c6), [Commit](https://github.com/open-webui/open-webui/commit/b5bb6ae177dcdc4e8274d7e5ffa50bc8099fd466), [Commit](https://github.com/open-webui/open-webui/commit/b786d1e3f3308ef4f0f95d7130ddbcaaca4fc927), [Commit](https://github.com/open-webui/open-webui/commit/8a9f8627017bd0a74cbd647891552b26e56aabb7), [Commit](https://github.com/open-webui/open-webui/commit/30d1dc2c60e303756120fe1c5538968c4e6139f4), [Commit](https://github.com/open-webui/open-webui/commit/2b2d123531eb3f42c0e940593832a64e2806240d), [Commit](https://github.com/open-webui/open-webui/commit/6f6412dd16c63c2bb4df79a96b814bf69cb3f880)
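  A configuration sketch; the encryption key is generated locally and must remain stable across restarts, or previously stored tokens can no longer be decrypted:

  ```bash
  # Generate once, persist, and reuse; rotating the key invalidates stored tokens.
  export OAUTH_SESSION_TOKEN_ENCRYPTION_KEY="$(openssl rand -base64 32)"
  export ENABLE_OAUTH_ID_TOKEN_COOKIE="false"  # example: rely on server-side tokens instead of the cookie
  ```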
- 🔒 Conditional Permission Hardening for OpenShift Deployments: Added a build argument to enable optional permission hardening for OpenShift and container environments. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/0ebe4f8f8490451ac8e85a4846f010854d9b54e5)
- 👥 Regex pattern support is added for OAuth blocked groups, allowing more flexible group filtering rules. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/df66e21472646648d008ebb22b0e8d5424d491df)
- 💬 Web search result display was enhanced to include titles and favicons, providing a clearer overview of search sources. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/33f04a771455e3fabf8f0e8ebb994ae7f41b8ed4), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/0a85dd4bca23022729eafdbc82c8c139fa365af2), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/16090bc2721fde492afa2c4af5927e2b668527e1), [#17197](https://github.com/open-webui/open-webui/pull/17197), [#14179](https://github.com/open-webui/open-webui/issues/14179), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/1cdb7aed1ee9bf81f2fd0404be52dcfa64f8ed4f), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/f2525ebc447c008cf7269ef20ce04fa456f302c4), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/7f523de408ede4075349d8de71ae0214b7e1a62e), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/3d37e4a42d344051ae715ab59bd7b5718e46c343), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/cd5e2be27b613314aadda6107089331783987985), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/6dc0df247347aede2762fe2065cf30275fd137ae)
- 💬 A new setting was added to control whether clicking a suggested prompt automatically sends the message or only inserts the text. [#17192](https://github.com/open-webui/open-webui/issues/17192), [Commit](https://github.com/open-webui/open-webui/commit/e023a98f11fc52feb21e4065ec707cc98e50c7d3)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Simplified Chinese, Catalan, and Spanish were enhanced and expanded.
### Fixed
- 🔍 Hybrid search functionality now correctly handles lexical-semantic weight labels and avoids errors when BM25 weight is zero. [#17049](https://github.com/open-webui/open-webui/pull/17049), [#17046](https://github.com/open-webui/open-webui/issues/17046)
- 🛑 Task stopping errors are prevented by gracefully handling multiple stop requests for the same task. [#17195](https://github.com/open-webui/open-webui/pull/17195)
- 🐍 Code execution package detection precision is improved in Pyodide to prevent unnecessary package inclusions. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/bbe116795860a81a647d9567e0d9cb1950650095)
- 🛠️ Tool message format API compliance is fixed by ensuring content fields in tool call responses contain valid string values instead of null. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/37bf0087e5b8a324009c9d06b304027df351ea6b)
- 📱 Mobile app config API authentication now supports Authorization header token verification with cookie fallback for iOS and Android requests. [#17175](https://github.com/open-webui/open-webui/pull/17175)
- 💾 Knowledge file save race conditions are prevented by serializing API calls and adding an "isSaving" guard. [#17137](https://github.com/open-webui/open-webui/pull/17137), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/4ca936f0bf9813bee11ec8aea41d7e34fb6b16a9)
- 🔐 The SSO login button visibility is restored for OIDC PKCE authentication without a client secret. [#17012](https://github.com/open-webui/open-webui/pull/17012)
- 🔊 Text-to-Speech (TTS) API requests now use proper URL joining methods, ensuring reliable functionality regardless of trailing slashes in the base URL. [#17061](https://github.com/open-webui/open-webui/pull/17061)
- 🛡️ Admin account creation on Hugging Face Spaces now correctly detects the configured port, resolving issues with custom port deployments. [#17064](https://github.com/open-webui/open-webui/pull/17064)
- 📁 Unicode filename support is improved for external document loaders by properly URL-encoding filenames in HTTP headers. [#17013](https://github.com/open-webui/open-webui/pull/17013), [#17000](https://github.com/open-webui/open-webui/issues/17000)
- 🔗 Web page and YouTube attachments are now correctly processed by setting their type as "text" and using collection names for accurate content retrieval. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/487979859a6ffcfd60468f523822cdf838fbef5b)
- ✍️ Message input composition event handling is fixed to properly manage text input for multilingual users using Input Method Editors (IME). [#17085](https://github.com/open-webui/open-webui/pull/17085)
- 💬 Follow-up tooltip duplication is removed, streamlining the user interface and preventing visual clutter. [#17186](https://github.com/open-webui/open-webui/pull/17186)
- 🎨 Chat button text display is corrected by preventing clipping of descending characters and removing unnecessary capitalization. [#17191](https://github.com/open-webui/open-webui/pull/17191)
- 🧠 RAG Loop/Error with Gemma 3.1 2B Instruct is fixed by correctly unwrapping unexpected single-item list responses from models. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/1bc9711afd2b72cd07c4e539a83783868733767c), [#17213](https://github.com/open-webui/open-webui/issues/17213)
- 🖼️ HEIC conversion failures are resolved, improving robustness of image handling. [#17225](https://github.com/open-webui/open-webui/pull/17225)
- 📦 The slim Docker image size regression has been fixed by refining the build process to correctly exclude components when USE_SLIM=true. [#16997](https://github.com/open-webui/open-webui/issues/16997), [Commit](https://github.com/open-webui/open-webui/commit/be373e9fd42ac73b0302bdb487e16dbeae178b4e), [Commit](https://github.com/open-webui/open-webui/commit/0ebe4f8f8490451ac8e85a4846f010854d9b54e5)
- 📁 Knowledge base update validation errors are resolved, ensuring seamless management via UI or API. [#17244](https://github.com/open-webui/open-webui/issues/17244), [Commit](https://github.com/open-webui/open-webui/commit/9aac1489080a5c9441e89b1a56de0d3a672bc5fb)
- 🔐 Resolved a security issue where a global web search setting overrode model-specific restrictions, ensuring model-level settings are now correctly prioritized. [#17151](https://github.com/open-webui/open-webui/issues/17151), [Commit](https://github.com/open-webui/open-webui/commit/9368d0ac751ec3072d5a96712b80a9b20a642ce6)
- 🔐 OAuth redirect reliability is improved by robustly preserving the intended redirect path using session storage. [#17235](https://github.com/open-webui/open-webui/issues/17235), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/4f2b821088367da18374027919594365c7a3f459), [#15575](https://github.com/open-webui/open-webui/pull/15575), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/d9f97c832c556fae4b116759da0177bf4fe619de)
- 🔐 Fixed a security vulnerability where knowledge base access within chat folders persisted after permissions were revoked. [#17182](https://github.com/open-webui/open-webui/issues/17182), [Commit](https://github.com/open-webui/open-webui/commit/40e40d1dddf9ca937e99af41c8ca038dbc93a7e6)
- 🔒 OIDC access denied errors are now displayed as user-friendly toast notifications instead of raw JSON. [#17208](https://github.com/open-webui/open-webui/issues/17208), [Commit](https://github.com/open-webui/open-webui/commit/3d6d050ad82d360adc42d6e9f42e8faf8d13c9f4)
- 💬 Chat exception handling is enhanced to prevent system instability during message generation and ensure graceful error recovery. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/f56889c5c7f0cf1a501c05d35dfa614e4f8b6958)
- 🔒 Static asset authentication is improved by adding crossorigin="use-credentials" attributes to all link elements, enabling proper cookie forwarding for proxy environments and authenticated requests to favicon, manifest, and stylesheet resources. [#17280](https://github.com/open-webui/open-webui/pull/17280), [Commit](https://github.com/open-webui/open-webui/commit/f17d8b5d19e1a05df7d63f53e939c99772a59c1e)
### Changed
- 🛠️ Renamed "Tools" to "External Tools" across the UI for clearer distinction between built-in and external functionalities. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/0bca4e230ef276bec468889e3be036242ad11086f)
- 🛡️ Default permission validation for message regeneration and deletion actions is enhanced to provide more restrictive access controls, improving chat security and user data protection. [#17285](https://github.com/open-webui/open-webui/pull/17285)
## [0.6.26] - 2025-08-28
### Added
- 🛂 **Granular Chat Interaction Permissions**: Added fine-grained permission controls for individual chat actions including "Continue Response", "Regenerate Response", "Rate Response", and "Delete Messages". Administrators can now configure these permissions per user group or set system defaults via environment variables, providing enhanced security and governance by preventing potential system prompt leakage through response continuation and enabling precise control over user interactions with AI responses.
- 🧠 **Custom Reasoning Tags Configuration**: Added configurable reasoning tag detection for AI model responses, allowing administrators and users to customize how the system identifies and processes reasoning content. Users can now define custom reasoning tag pairs, use default tags like "think" and "reasoning", or disable reasoning detection entirely through the Advanced Parameters interface, providing enhanced control over AI thought process visibility.
- 📱 **Pull-to-Refresh Support**: Added pull-to-refresh functionality allowing users to easily refresh the interface by pulling down on the navbar area. This resolves timeout issues that occurred when temporarily switching away from the app during long AI response generations, eliminating the need to close and relaunch the PWA.
- 📁 **Configurable File Upload Processing Mode**: Added "process_in_background" query parameter to the file upload API endpoint, allowing clients to choose between asynchronous (default) and synchronous file processing. Setting "process_in_background=false" forces the upload request to wait until extraction and embedding complete, returning immediately usable files and simplifying integration for backend API consumers that prefer blocking calls over polling workflows.
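  A sketch of a blocking upload (host, token, and file name are placeholders):

  ```bash
  # Returns only after extraction and embedding finish, so the file is immediately usable.
  curl -X POST "http://localhost:3000/api/v1/files/?process_in_background=false" \
    -H "Authorization: Bearer $OPEN_WEBUI_API_KEY" \
    -F "file=@./report.pdf"
  ```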
- 🔐 **Azure Document Intelligence DefaultAzureCredential Support**: Added support for authenticating with Azure Document Intelligence using DefaultAzureCredential in addition to API key authentication, enabling seamless integration with Azure Entra ID and managed identity authentication for enterprise Azure environments.
- 🔐 **Authentication Bootstrapping Enhancements**: Added "ENABLE_INITIAL_ADMIN_SIGNUP" environment variable and "?form=true" URL parameter to enable initial admin user creation and forced login form display in SSO-only deployments. This resolves bootstrap issues where administrators couldn't create the first user when login forms were disabled, allowing proper initialization of SSO-configured deployments without requiring temporary configuration changes.
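  A bootstrap sketch for an SSO-only deployment (the `/auth` route is assumed to be the default login page):

  ```bash
  export ENABLE_LOGIN_FORM="false"           # SSO-only UI
  export ENABLE_INITIAL_ADMIN_SIGNUP="true"  # permit creating the first admin account
  # Then open the login form explicitly to create the initial admin:
  #   http://localhost:3000/auth?form=true
  ```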
- ⚡ **Query Generation Caching**: Added "ENABLE_QUERIES_CACHE" environment variable to enable request-scoped caching of generated search queries. When both web search and file retrieval are active, queries generated for web search are automatically reused for file retrieval, eliminating duplicate LLM API calls and reducing token usage and costs while maintaining search quality.
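  A sketch (a plain environment variable):

  ```bash
  export ENABLE_QUERIES_CACHE="true"  # reuse web-search queries for file retrieval within the same request
  ```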
- 🔧 **Configurable Tool Call Retry Limit**: Added "CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES" environment variable to control the maximum number of sequential tool calls allowed before safety stopping a chat session. This replaces the previous hardcoded limit of 10, enabling administrators to configure higher limits for complex workflows requiring extensive tool interactions.
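  An illustrative value (the appropriate limit is deployment-specific):

  ```bash
  export CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES="25"  # previously a hardcoded 10
  ```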
- 📦 **Slim Docker Image Variant**: Added new slim Docker image option via "USE_SLIM" build argument that excludes embedded AI models and Ollama installation, reducing image size by approximately 1GB. This variant enables faster image pulls and deployments for environments where AI models are managed externally, particularly beneficial for auto-scaling clusters and distributed deployments.
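  A build sketch, assuming you are building from the repository root:

  ```bash
  # Slim image: skips the bundled embedding/Whisper models and the Ollama install.
  docker build --build-arg USE_SLIM=true -t open-webui:slim .
  ```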
- 🗂️ **Shift-to-Delete Functionality for Workspace Prompts**: Added keyboard shortcut support for quick prompt deletion on the Workspace Prompts page. Hold Shift and hover over any prompt to reveal a trash icon for instant deletion, bringing consistent interaction patterns across all workspace sections (Models, Tools, Functions, and now Prompts) and streamlining prompt management workflows.
- ♿ **Accessibility Enhancements**: Enhanced user interface accessibility with improved keyboard navigation, ARIA labels, and screen reader compatibility across key platform components.
- 📄 **Optimized PDF Export for Smaller File Size**: PDF exports are now significantly optimized, producing much smaller files for faster downloads and easier sharing or archiving of your chats and documents.
- 📦 **Slimmed Default Install with Optional Full Dependencies**: Installing Open WebUI via pip now defaults to a slimmer package; PostgreSQL support is no longer included by default—simply use 'pip install open-webui[all]' to include all optional dependencies for full feature compatibility.
- 🔄 **General Backend Refactoring**: Implemented various backend improvements to enhance performance, stability, and security, ensuring a more resilient and reliable platform for all users.
- 🌐 **Localization & Internationalization Improvements**: Enhanced and expanded translations for Finnish, Spanish, Japanese, Polish, Portuguese (Brazil), and Chinese, including missing translations and typo corrections, providing a more natural and professional user experience for speakers of these languages across the entire interface.
### Fixed
- ⚠️ **Chat Error Feedback Restored**: Fixed an issue where various backend errors (tool server failures, API connection issues, malformed responses) would cause chats to load indefinitely without providing user feedback. The system now properly displays error messages when failures occur during chat generation, allowing users to understand issues and retry as needed instead of waiting indefinitely.
- 🖼️ **Image Generation Steps Setting Visibility Fixed**: Fixed a UI issue where the "Set Steps" configuration option was incorrectly displayed for OpenAI and Gemini image generation engines that don't support this parameter. The setting is now only visible for compatible engines like ComfyUI and Automatic1111, eliminating user confusion about non-functional configuration options.
- 📄 **Datalab Marker API Document Loader Fixed**: Fixed broken Datalab Marker API document loader functionality by correcting URL path handling for both hosted Datalab services and self-hosted Marker servers. Removed hardcoded "/marker" paths from the loader code and restored proper default URL structure, ensuring PDF and document processing works correctly with both deployment types.
- 📋 **Citation Error Handling Improved**: Fixed an issue where malformed citation or source objects from external tools, pipes, or filters would cause JavaScript errors and make the chat interface completely unresponsive. The system now gracefully handles missing or undefined citation properties, allowing conversations to load properly even when tools generate defective citation events.
- 👥 **Group User Add API Endpoint Fixed**: Fixed an issue where the "/api/v1/groups/id/{group_id}/users/add" API endpoint would accept requests without errors but fail to actually add users to groups. The system now properly initializes and deduplicates user ID lists, ensuring users are correctly added to and removed from groups via API calls.
- 🛠️ **External Tool Server Error Handling Improved**: Fixed an issue where unreachable or misconfigured external tool servers would cause JavaScript errors and prevent the interface from loading properly. The system now gracefully handles connection failures, displays appropriate error messages, and filters out inaccessible servers while maintaining full functionality for working connections.
- 📋 **Code Block Copy Button Content Fixed**: Fixed an issue where the copy button in code blocks would copy the original AI-generated code instead of any user-edited content, ensuring the copy function always captures the currently displayed code as modified by users.
- 📄 **PDF Export Content Mismatch Fixed**: Resolved an issue where exporting a PDF of one chat while viewing another chat would incorrectly generate the PDF using the currently viewed chat's content instead of the intended chat's content. Additionally optimized the PDF generation algorithm with improved canvas slicing, better memory management, and enhanced image quality, while removing the problematic PDF export option from individual chat menus to prevent further confusion.
- 🖱️ **Windows Sidebar Cursor Icon Corrected**: Fixed confusing cursor icons on Windows systems where sidebar toggle buttons displayed resize cursors (ew-resize) instead of appropriate pointer cursors. The sidebar buttons now show standard pointer cursors on Windows, eliminating user confusion about whether the buttons expand/collapse the sidebar or resize it.
- 📝 **Safari IME Composition Bug Fixed**: Resolved an issue where pressing Enter while composing Chinese text using Input Method Editors (IMEs) on Safari would prematurely send messages instead of completing text composition. The system now properly detects composition states and ignores keydown events that occur immediately after composition ends, ensuring smooth multilingual text input across all browsers.
- 🔍 **Hybrid Search Parameter Handling Fixed**: Fixed an issue where the "hybrid" parameter in collection query requests was not being properly evaluated, causing the system to ignore user-specified hybrid search preferences and only check global configuration. Additionally resolved a division by zero error that occurred in hybrid search when BM25Retriever was called with empty document lists, ensuring robust search functionality across all collection states.
- 💬 **RTL Text Orientation in Messages Fixed**: Fixed text alignment issues in user messages and AI responses for Right-to-Left languages, ensuring proper text direction based on user language settings. Code blocks now consistently use Left-to-Right orientation regardless of the user's language preference, maintaining code readability across all supported languages.
- 📁 **File Content Preview in Modal Restored**: Fixed an issue where clicking on uploaded files would display an empty preview modal, even when the files were successfully processed and available for AI context. File content now displays correctly in the preview modal, ensuring users can verify and review their uploaded documents as intended.
- 🌐 **Playwright Timeout Configuration Corrected**: Fixed an issue where Playwright timeout values were incorrectly converted from milliseconds to seconds with an additional 1000x multiplier, causing excessively long web loading timeouts. The timeout parameter now correctly uses the configured millisecond values as intended, ensuring responsive web search and document loading operations.
### Changed
- 🔄 **Follow-Up Question Language Constraint Removed**: Follow-up question suggestions no longer strictly adhere to the chat's primary language setting, allowing for more flexible and diverse suggestion generation that may include questions in different languages based on conversation context and relevance rather than enforced language matching.
## [0.6.25] - 2025-08-22
### Fixed
- 🖼️ **Image Generation Reliability Restored**: Fixed a key issue causing image generation failures.
- 🏆 **Reranking Functionality Restored**: Resolved errors with the rerank feature.
## [0.6.24] - 2025-08-21
### Added
---

### Dockerfile (22 changes)

@@ -3,8 +3,6 @@
# use build args in the docker build command with --build-arg="BUILDARG=true"
ARG USE_CUDA=false
ARG USE_OLLAMA=false
ARG USE_SLIM=false
ARG USE_PERMISSION_HARDENING=false
# Tested with cu117 for CUDA 11 and cu121 for CUDA 12 (default)
ARG USE_CUDA_VER=cu128
# any sentence transformer model; models to use can be found at https://huggingface.co/models?library=sentence-transformers

@@ -26,9 +24,6 @@ ARG GID=0
FROM --platform=$BUILDPLATFORM node:22-alpine3.20 AS build
ARG BUILD_HASH

# Set Node.js options (heap limit Allocation failed - JavaScript heap out of memory)
# ENV NODE_OPTIONS="--max-old-space-size=4096"

WORKDIR /app

# to store git revision in build

@@ -48,23 +43,17 @@ FROM python:3.11-slim-bookworm AS base
ARG USE_CUDA
ARG USE_OLLAMA
ARG USE_CUDA_VER
ARG USE_SLIM
ARG USE_PERMISSION_HARDENING
ARG USE_EMBEDDING_MODEL
ARG USE_RERANKING_MODEL
ARG UID
ARG GID

# Python settings
ENV PYTHONUNBUFFERED=1

## Basis ##
ENV ENV=prod \
    PORT=8080 \
    # pass build args to the build
    USE_OLLAMA_DOCKER=${USE_OLLAMA} \
    USE_CUDA_DOCKER=${USE_CUDA} \
    USE_SLIM_DOCKER=${USE_SLIM} \
    USE_CUDA_DOCKER_VER=${USE_CUDA_VER} \
    USE_EMBEDDING_MODEL_DOCKER=${USE_EMBEDDING_MODEL} \
    USE_RERANKING_MODEL_DOCKER=${USE_RERANKING_MODEL}

@@ -141,14 +130,11 @@ RUN pip3 install --no-cache-dir uv && \
    else \
    pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu --no-cache-dir && \
    uv pip install --system -r requirements.txt --no-cache-dir && \
    if [ "$USE_SLIM" != "true" ]; then \
    python -c "import os; from sentence_transformers import SentenceTransformer; SentenceTransformer(os.environ['RAG_EMBEDDING_MODEL'], device='cpu')" && \
    python -c "import os; from faster_whisper import WhisperModel; WhisperModel(os.environ['WHISPER_MODEL'], device='cpu', compute_type='int8', download_root=os.environ['WHISPER_MODEL_DIR'])"; \
    python -c "import os; import tiktoken; tiktoken.get_encoding(os.environ['TIKTOKEN_ENCODING_NAME'])"; \
    fi; \
    fi; \
    mkdir -p /app/backend/data && chown -R $UID:$GID /app/backend/data/ && \
    rm -rf /var/lib/apt/lists/*;
    chown -R $UID:$GID /app/backend/data/

# Install Ollama if requested
RUN if [ "$USE_OLLAMA" = "true" ]; then \

@@ -177,13 +163,11 @@ HEALTHCHECK CMD curl --silent --fail http://localhost:${PORT:-8080}/health | jq
# Minimal, atomic permission hardening for OpenShift (arbitrary UID):
# - Group 0 owns /app and /root
# - Directories are group-writable and have SGID so new files inherit GID 0
RUN if [ "$USE_PERMISSION_HARDENING" = "true" ]; then \
    set -eux; \
RUN set -eux; \
    chgrp -R 0 /app /root || true; \
    chmod -R g+rwX /app /root || true; \
    find /app -type d -exec chmod g+s {} + || true; \
    find /root -type d -exec chmod g+s {} + || true; \
    fi
    find /root -type d -exec chmod g+s {} + || true

USER $UID:$GID
### INSTALLATION.md (35 changes, new file)

@@ -0,0 +1,35 @@
### Installing Both Ollama and Open WebUI Using Kustomize

For a CPU-only pod:

```bash
kubectl apply -f ./kubernetes/manifest/base
```

For a GPU-enabled pod:

```bash
kubectl apply -k ./kubernetes/manifest
```

### Installing Both Ollama and Open WebUI Using Helm

Package the Helm chart first:

```bash
helm package ./kubernetes/helm/
```

For a CPU-only pod:

```bash
helm install ollama-webui ./ollama-webui-*.tgz
```

For a GPU-enabled pod:

```bash
helm install ollama-webui ./ollama-webui-*.tgz --set ollama.resources.limits.nvidia.com/gpu="1"
```

Check the `kubernetes/helm/values.yaml` file to see which parameters are available for customization.
### LICENSE (2 changes)

@@ -1,4 +1,4 @@
Copyright (c) 2023- Open WebUI Inc. [Created by Timothy Jaeryang Baek]
Copyright (c) 2023-2025 Timothy Jaeryang Baek (Open WebUI)
All rights reserved.

Redistribution and use in source and binary forms, with or without

### Deleted file (11 lines)

@@ -1,11 +0,0 @@
# Open WebUI Multi-License Notice

This repository contains code governed by multiple licenses based on the date and origin of contribution:

1. All code committed prior to commit a76068d69cd59568b920dfab85dc573dbbb8f131 is licensed under the MIT License (see LICENSE_HISTORY).

2. All code committed from commit a76068d69cd59568b920dfab85dc573dbbb8f131 up to and including commit 60d84a3aae9802339705826e9095e272e3c83623 is licensed under the BSD 3-Clause License (see LICENSE_HISTORY).

3. All code contributed or modified after commit 60d84a3aae9802339705826e9095e272e3c83623 is licensed under the Open WebUI License (see LICENSE).

For details on which commits are covered by which license, refer to LICENSE_HISTORY.
77
README.md
77
README.md
|
|
@ -10,16 +10,14 @@
|
|||
[](https://discord.gg/5rJgQTnV4s)
|
||||
[](https://github.com/sponsors/tjbck)
|
||||
|
||||

|
||||
|
||||
**Open WebUI is an [extensible](https://docs.openwebui.com/features/plugin/), feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline.** It supports various LLM runners like **Ollama** and **OpenAI-compatible APIs**, with **built-in inference engine** for RAG, making it a **powerful AI deployment solution**.
|
||||
|
||||
Passionate about open-source AI? [Join our team →](https://careers.openwebui.com/)
|
||||
|
||||

|
||||

|
||||
|
||||
> [!TIP]
|
||||
> **Looking for an [Enterprise Plan](https://docs.openwebui.com/enterprise)?** – **[Speak with Our Sales Team Today!](https://docs.openwebui.com/enterprise)**
|
||||
> **Looking for an [Enterprise Plan](https://docs.openwebui.com/enterprise)?** – **[Speak with Our Sales Team Today!](mailto:sales@openwebui.com)**
|
||||
>
|
||||
> Get **enhanced capabilities**, including **custom theming and branding**, **Service Level Agreement (SLA) support**, **Long-Term Support (LTS) versions**, and **more!**
|
||||
|
||||
|
|
@ -33,44 +31,32 @@ For more information, be sure to check out our [Open WebUI Documentation](https:
|
|||
|
||||
- 🛡️ **Granular Permissions and User Groups**: By allowing administrators to create detailed user roles and permissions, we ensure a secure user environment. This granularity not only enhances security but also allows for customized user experiences, fostering a sense of ownership and responsibility amongst users.
|
||||
|
||||
- 🔄 **SCIM 2.0 Support**: Enterprise-grade user and group provisioning through SCIM 2.0 protocol, enabling seamless integration with identity providers like Okta, Azure AD, and Google Workspace for automated user lifecycle management.
|
||||
|
||||
- 📱 **Responsive Design**: Enjoy a seamless experience across Desktop PC, Laptop, and Mobile devices.
|
||||
|
||||
- 📱 **Progressive Web App (PWA) for Mobile**: Enjoy a native app-like experience on your mobile device with our PWA, providing offline access on localhost and a seamless user interface.
|
||||
|
||||
- ✒️🔢 **Full Markdown and LaTeX Support**: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.
|
||||
|
||||
- 🎤📹 **Hands-Free Voice/Video Call**: Experience seamless communication with integrated hands-free voice and video call features using multiple Speech-to-Text providers (Local Whisper, OpenAI, Deepgram, Azure) and Text-to-Speech engines (Azure, ElevenLabs, OpenAI, Transformers, WebAPI), allowing for dynamic and interactive chat environments.
|
||||
- 🎤📹 **Hands-Free Voice/Video Call**: Experience seamless communication with integrated hands-free voice and video call features, allowing for a more dynamic and interactive chat environment.
|
||||
|
||||
- 🛠️ **Model Builder**: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through [Open WebUI Community](https://openwebui.com/) integration.
|
||||
|
||||
- 🐍 **Native Python Function Calling Tool**: Enhance your LLMs with built-in code editor support in the tools workspace. Bring Your Own Function (BYOF) by simply adding your pure Python functions, enabling seamless integration with LLMs.
|
||||
|
||||
- 💾 **Persistent Artifact Storage**: Built-in key-value storage API for artifacts, enabling features like journals, trackers, leaderboards, and collaborative tools with both personal and shared data scopes across sessions.
|
||||
- 📚 **Local RAG Integration**: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using the `#` command before a query.
|
||||
|
||||
- 📚 **Local RAG Integration**: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support using your choice of 9 vector databases and multiple content extraction engines (Tika, Docling, Document Intelligence, Mistral OCR, External loaders). Load documents directly into chat or add files to your document library, effortlessly accessing them using the `#` command before a query.
|
||||
|
||||
- 🔍 **Web Search for RAG**: Perform web searches using 15+ providers including `SearXNG`, `Google PSE`, `Brave Search`, `Kagi`, `Mojeek`, `Tavily`, `Perplexity`, `serpstack`, `serper`, `Serply`, `DuckDuckGo`, `SearchApi`, `SerpApi`, `Bing`, `Jina`, `Exa`, `Sougou`, `Azure AI Search`, and `Ollama Cloud`, injecting results directly into your chat experience.
|
||||
- 🔍 **Web Search for RAG**: Perform web searches using providers like `SearXNG`, `Google PSE`, `Brave Search`, `serpstack`, `serper`, `Serply`, `DuckDuckGo`, `TavilySearch`, `SearchApi` and `Bing` and inject the results directly into your chat experience.
|
||||
|
||||
- 🌐 **Web Browsing Capability**: Seamlessly integrate websites into your chat experience using the `#` command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.
|
||||
|
||||
- 🎨 **Image Generation & Editing Integration**: Create and edit images using multiple engines including OpenAI's DALL-E, Gemini, ComfyUI (local), and AUTOMATIC1111 (local), with support for both generation and prompt-based editing workflows.
|
||||
- 🎨 **Image Generation Integration**: Seamlessly incorporate image generation capabilities using options such as AUTOMATIC1111 API or ComfyUI (local), and OpenAI's DALL-E (external), enriching your chat experience with dynamic visual content.
|
||||
|
||||
- ⚙️ **Many Models Conversations**: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.
|
||||
|
||||
- 🔐 **Role-Based Access Control (RBAC)**: Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.
|
||||
|
||||
- 🗄️ **Flexible Database & Storage Options**: Choose from SQLite (with optional encryption), PostgreSQL, or configure cloud storage backends (S3, Google Cloud Storage, Azure Blob Storage) for scalable deployments.
|
||||
|
||||
- 🔍 **Advanced Vector Database Support**: Select from 9 vector database options including ChromaDB, PGVector, Qdrant, Milvus, Elasticsearch, OpenSearch, Pinecone, S3Vector, and Oracle 23ai for optimal RAG performance.
|
||||
|
||||
- 🔐 **Enterprise Authentication**: Full support for LDAP/Active Directory integration, SCIM 2.0 automated provisioning, and SSO via trusted headers alongside OAuth providers. Enterprise-grade user and group provisioning through SCIM 2.0 protocol, enabling seamless integration with identity providers like Okta, Azure AD, and Google Workspace for automated user lifecycle management.
|
||||
|
||||
- ☁️ **Cloud-Native Integration**: Native support for Google Drive and OneDrive/SharePoint file picking, enabling seamless document import from enterprise cloud storage.
|
||||
|
||||
- 📊 **Production Observability**: Built-in OpenTelemetry support for traces, metrics, and logs, enabling comprehensive monitoring with your existing observability stack.
|
||||
|
||||
- ⚖️ **Horizontal Scalability**: Redis-backed session management and WebSocket support for multi-worker and multi-node deployments behind load balancers.
|
||||
|
||||
- 🌐🌍 **Multilingual Support**: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!
|
||||
|
||||
- 🧩 **Pipelines, Open WebUI Plugin Support**: Seamlessly integrate custom logic and Python libraries into Open WebUI using [Pipelines Plugin Framework](https://github.com/open-webui/pipelines). Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. [Examples](https://github.com/open-webui/pipelines/tree/main/examples) include **Function Calling**, User **Rate Limiting** to control access, **Usage Monitoring** with tools like Langfuse, **Live Translation with LibreTranslate** for multilingual support, **Toxic Message Filtering** and much more.

@@ -79,6 +65,43 @@ For more information, be sure to check out our [Open WebUI Documentation](https:

Want to learn more about Open WebUI's features? Check out our [Open WebUI documentation](https://docs.openwebui.com/features) for a comprehensive overview!

## Sponsors 🙌

#### Emerald

<table>
  <!-- <tr>
    <td>
      <a href="https://n8n.io/" target="_blank">
        <img src="https://docs.openwebui.com/sponsors/logos/n8n.png" alt="n8n" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
      </a>
    </td>
    <td>
      <a href="https://n8n.io/">n8n</a> • Does your interface have a backend yet?<br>Try <a href="https://n8n.io/">n8n</a>
    </td>
  </tr> -->
  <tr>
    <td>
      <a href="https://tailscale.com/blog/self-host-a-local-ai-stack/?utm_source=OpenWebUI&utm_medium=paid-ad-placement&utm_campaign=OpenWebUI-Docs" target="_blank">
        <img src="https://docs.openwebui.com/sponsors/logos/tailscale.png" alt="Tailscale" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
      </a>
    </td>
    <td>
      <a href="https://tailscale.com/blog/self-host-a-local-ai-stack/?utm_source=OpenWebUI&utm_medium=paid-ad-placement&utm_campaign=OpenWebUI-Docs">Tailscale</a> • Connect self-hosted AI to any device with Tailscale
    </td>
  </tr>
  <tr>
    <td>
      <a href="https://warp.dev/open-webui" target="_blank">
        <img src="https://docs.openwebui.com/sponsors/logos/warp.png" alt="Warp" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
      </a>
    </td>
    <td>
      <a href="https://warp.dev/open-webui">Warp</a> • The intelligent terminal for developers
    </td>
  </tr>
</table>

---

We are incredibly grateful for the generous support of our sponsors. Their contributions help us to maintain and improve our project, ensuring we can continue to deliver quality work to our community. Thank you!
@@ -190,6 +213,14 @@ docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=

### Keeping Your Docker Installation Up-to-Date

To update your local Docker installation to the latest version, you can use [Watchtower](https://containrrr.dev/watchtower/):

```bash
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
```

In the last part of the command, replace `open-webui` with your container name if it is different.

See the Updating Guide in our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/updating).
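
If you prefer not to use Watchtower, a manual pull-and-recreate achieves the same result. This is a sketch under the assumption that your container is named `open-webui` and was started from the `ghcr.io/open-webui/open-webui:main` image with the flags shown; reuse whatever flags you originally ran with (the named volume preserves your data):

```bash
docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui && docker rm open-webui
docker run -d -p 3000:8080 -v open-webui:/app/backend/data \
  --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```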

### Using the Dev Branch 🌙

@@ -217,7 +248,7 @@ Discover upcoming features on our roadmap in the [Open WebUI Documentation](http

## License 📜

This project contains code under multiple licenses. The current codebase includes components licensed under the Open WebUI License, which adds a requirement to preserve the "Open WebUI" branding, as well as prior contributions under their respective original licenses. For a detailed record of license changes and the terms applicable to each section of the code, please refer to [LICENSE_HISTORY](./LICENSE_HISTORY); for the complete, current terms, see [LICENSE](./LICENSE).

This project is licensed under the [Open WebUI License](LICENSE), a revised BSD-3-Clause license. You receive all the same rights as the classic BSD-3 license: you can use, modify, and distribute the software, including in proprietary and commercial products, with minimal restrictions. The only additional requirement is to preserve the "Open WebUI" branding, as detailed in the LICENSE file. For full terms, see the [LICENSE](LICENSE) document. 📄

## Support 💬
@@ -1,3 +1,3 @@
export CORS_ALLOW_ORIGIN="http://localhost:5173;http://localhost:8080"
export CORS_ALLOW_ORIGIN="http://localhost:5173"
PORT="${PORT:-8080}"
uvicorn open_webui.main:app --port $PORT --host 0.0.0.0 --forwarded-allow-ips '*' --reload
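
A small usage note on this dev script: because it falls back to 8080 only when `PORT` is unset, the port can be overridden from the environment (the `dev.sh` filename is assumed here for illustration):

```bash
PORT=3001 sh dev.sh   # serve the backend on port 3001 instead of 8080
```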

File diff suppressed because it is too large
@@ -38,14 +38,13 @@ class ERROR_MESSAGES(str, Enum):
    ID_TAKEN = "Uh-oh! This id is already registered. Please choose another id string."
    MODEL_ID_TAKEN = "Uh-oh! This model id is already registered. Please choose another model id string."
    NAME_TAG_TAKEN = "Uh-oh! This name tag is already registered. Please choose another name tag string."
    MODEL_ID_TOO_LONG = "The model id is too long. Please make sure your model id is less than 256 characters long."

    INVALID_TOKEN = (
        "Your session has expired or the token is invalid. Please sign in again."
    )
    INVALID_CRED = "The email or password provided is incorrect. Please check for typos and try logging in again."
    INVALID_EMAIL_FORMAT = "The email format you entered is invalid. Please double-check and make sure you're using a valid email address (e.g., yourname@example.com)."
    INCORRECT_PASSWORD = (
    INVALID_PASSWORD = (
        "The password provided is incorrect. Please check for typos and try again."
    )
    INVALID_TRUSTED_HEADER = "Your provider has not provided a trusted header. Please contact your administrator for assistance."

@@ -105,10 +104,6 @@ class ERROR_MESSAGES(str, Enum):
    )
    FILE_NOT_PROCESSED = "Extracted content is not available for this file. Please ensure that the file is processed before proceeding."

    INVALID_PASSWORD = lambda err="": (
        err if err else "The password does not meet the required validation criteria."
    )


class TASKS(str, Enum):
    def __str__(self) -> str:
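
For orientation, a hedged sketch of how these enum members are typically consumed (usage assumed from the definitions above, not shown in this diff): plain members are string-valued constants, while `INVALID_PASSWORD` is a lambda attribute that is called to build a message.

```python
from open_webui.constants import ERROR_MESSAGES

detail = ERROR_MESSAGES.INVALID_CRED.value              # fixed message string
custom = ERROR_MESSAGES.INVALID_PASSWORD("Too short")   # callable attribute; returns the given text
default = ERROR_MESSAGES.INVALID_PASSWORD()             # falls back to the generic criteria message
```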

@@ -8,8 +8,6 @@ import shutil
from uuid import uuid4
from pathlib import Path
from cryptography.hazmat.primitives import serialization
import re


import markdown
from bs4 import BeautifulSoup

@@ -85,7 +83,32 @@ if "cuda_error" in locals():
    log.exception(cuda_error)
    del cuda_error

SRC_LOG_LEVELS = {}  # Legacy variable, do not remove
log_sources = [
    "AUDIO",
    "COMFYUI",
    "CONFIG",
    "DB",
    "IMAGES",
    "MAIN",
    "MODELS",
    "OLLAMA",
    "OPENAI",
    "RAG",
    "WEBHOOK",
    "SOCKET",
    "OAUTH",
]

SRC_LOG_LEVELS = {}

for source in log_sources:
    log_env_var = source + "_LOG_LEVEL"
    SRC_LOG_LEVELS[source] = os.environ.get(log_env_var, "").upper()
    if SRC_LOG_LEVELS[source] not in logging.getLevelNamesMapping():
        SRC_LOG_LEVELS[source] = GLOBAL_LOG_LEVEL
    log.info(f"{log_env_var}: {SRC_LOG_LEVELS[source]}")

log.setLevel(SRC_LOG_LEVELS["CONFIG"])
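
A hedged usage example: each entry in `log_sources` maps to an environment variable named `<SOURCE>_LOG_LEVEL`, falling back to `GLOBAL_LOG_LEVEL` when unset or not a recognized level name. For instance:

```bash
export GLOBAL_LOG_LEVEL=INFO
export RAG_LOG_LEVEL=DEBUG    # verbose logging for the RAG subsystem only
export DB_LOG_LEVEL=WARNING   # quieter database logging
```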

WEBUI_NAME = os.environ.get("WEBUI_NAME", "Open WebUI")
if WEBUI_NAME != "Open WebUI":

@@ -112,9 +135,6 @@ else:
    PACKAGE_DATA = {"version": "0.0.0"}

VERSION = PACKAGE_DATA["version"]


DEPLOYMENT_ID = os.environ.get("DEPLOYMENT_ID", "")
INSTANCE_ID = os.environ.get("INSTANCE_ID", str(uuid4()))


@@ -192,11 +212,6 @@ ENABLE_FORWARD_USER_INFO_HEADERS = (
    os.environ.get("ENABLE_FORWARD_USER_INFO_HEADERS", "False").lower() == "true"
)

# Experimental feature, may be removed in future
ENABLE_STAR_SESSIONS_MIDDLEWARE = (
    os.environ.get("ENABLE_STAR_SESSIONS_MIDDLEWARE", "False").lower() == "true"
)

####################################
# WEBUI_BUILD_HASH
####################################

@@ -339,11 +354,6 @@ if DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL is not None:
    except Exception:
        DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL = 0.0

# Enable public visibility of active user count (when disabled, only admins can see it)
ENABLE_PUBLIC_ACTIVE_USERS_COUNT = (
    os.environ.get("ENABLE_PUBLIC_ACTIVE_USERS_COUNT", "True").lower() == "true"
)

RESET_CONFIG_ON_START = (
    os.environ.get("RESET_CONFIG_ON_START", "False").lower() == "true"
)

@@ -352,8 +362,6 @@ ENABLE_REALTIME_CHAT_SAVE = (
    os.environ.get("ENABLE_REALTIME_CHAT_SAVE", "False").lower() == "true"
)

ENABLE_QUERIES_CACHE = os.environ.get("ENABLE_QUERIES_CACHE", "False").lower() == "true"

####################################
# REDIS
####################################

@@ -375,13 +383,6 @@ try:
except ValueError:
    REDIS_SENTINEL_MAX_RETRY_COUNT = 2


REDIS_SOCKET_CONNECT_TIMEOUT = os.environ.get("REDIS_SOCKET_CONNECT_TIMEOUT", "")
try:
    REDIS_SOCKET_CONNECT_TIMEOUT = float(REDIS_SOCKET_CONNECT_TIMEOUT)
except ValueError:
    REDIS_SOCKET_CONNECT_TIMEOUT = None

####################################
# UVICORN WORKERS
####################################

@@ -401,10 +402,6 @@ except ValueError:
####################################

WEBUI_AUTH = os.environ.get("WEBUI_AUTH", "True").lower() == "true"

ENABLE_INITIAL_ADMIN_SIGNUP = (
    os.environ.get("ENABLE_INITIAL_ADMIN_SIGNUP", "False").lower() == "true"
)
ENABLE_SIGNUP_PASSWORD_CONFIRMATION = (
    os.environ.get("ENABLE_SIGNUP_PASSWORD_CONFIRMATION", "False").lower() == "true"
)

@@ -418,23 +415,6 @@ WEBUI_AUTH_TRUSTED_GROUPS_HEADER = os.environ.get(
)


ENABLE_PASSWORD_VALIDATION = (
    os.environ.get("ENABLE_PASSWORD_VALIDATION", "False").lower() == "true"
)
PASSWORD_VALIDATION_REGEX_PATTERN = os.environ.get(
    "PASSWORD_VALIDATION_REGEX_PATTERN",
    "^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^\w\s]).{8,}$",
)

try:
    PASSWORD_VALIDATION_REGEX_PATTERN = re.compile(PASSWORD_VALIDATION_REGEX_PATTERN)
except Exception as e:
    log.error(f"Invalid PASSWORD_VALIDATION_REGEX_PATTERN: {e}")
    PASSWORD_VALIDATION_REGEX_PATTERN = re.compile(
        "^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^\w\s]).{8,}$"
    )
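
For illustration, a minimal sketch of how the compiled default pattern gates a password (it requires a lower-case letter, an upper-case letter, a digit, a non-word symbol, and at least eight characters):

```python
import re

# Same default pattern as above, written as a raw string.
PATTERN = re.compile(r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^\w\s]).{8,}$")

assert PATTERN.match("Str0ng!pass")        # all four character classes, length >= 8
assert PATTERN.match("weakpass") is None   # missing upper case, digit, and symbol
```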

BYPASS_MODEL_ACCESS_CONTROL = (
    os.environ.get("BYPASS_MODEL_ACCESS_CONTROL", "False").lower() == "true"
)

@@ -479,33 +459,12 @@ ENABLE_COMPRESSION_MIDDLEWARE = (
    os.environ.get("ENABLE_COMPRESSION_MIDDLEWARE", "True").lower() == "true"
)

####################################
# OAUTH Configuration
####################################
ENABLE_OAUTH_EMAIL_FALLBACK = (
    os.environ.get("ENABLE_OAUTH_EMAIL_FALLBACK", "False").lower() == "true"
)

ENABLE_OAUTH_ID_TOKEN_COOKIE = (
    os.environ.get("ENABLE_OAUTH_ID_TOKEN_COOKIE", "True").lower() == "true"
)

OAUTH_CLIENT_INFO_ENCRYPTION_KEY = os.environ.get(
    "OAUTH_CLIENT_INFO_ENCRYPTION_KEY", WEBUI_SECRET_KEY
)

OAUTH_SESSION_TOKEN_ENCRYPTION_KEY = os.environ.get(
    "OAUTH_SESSION_TOKEN_ENCRYPTION_KEY", WEBUI_SECRET_KEY
)

####################################
# SCIM Configuration
####################################

ENABLE_SCIM = (
    os.environ.get("ENABLE_SCIM", os.environ.get("SCIM_ENABLED", "False")).lower()
    == "true"
)
SCIM_ENABLED = os.environ.get("SCIM_ENABLED", "False").lower() == "true"
SCIM_TOKEN = os.environ.get("SCIM_TOKEN", "")
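
A hedged configuration sketch enabling the SCIM 2.0 endpoint, using the variable names from the block above (the token value is whatever bearer token you share with your identity provider):

```bash
export ENABLE_SCIM=true
export SCIM_TOKEN="change-me-long-random-token"
```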

####################################

@@ -539,10 +498,6 @@ if LICENSE_PUBLIC_KEY:
# MODELS
####################################

ENABLE_CUSTOM_MODEL_FALLBACK = (
    os.environ.get("ENABLE_CUSTOM_MODEL_FALLBACK", "False").lower() == "true"
)

MODELS_CACHE_TTL = os.environ.get("MODELS_CACHE_TTL", "1")
if MODELS_CACHE_TTL == "":
    MODELS_CACHE_TTL = None

@@ -557,11 +512,6 @@ else:
# CHAT
####################################

ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION = (
    os.environ.get("ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION", "False").lower()
    == "true"
)

CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE = os.environ.get(
    "CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE", "1"
)

@@ -577,34 +527,6 @@ else:
    CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE = 1


CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = os.environ.get(
    "CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES", "30"
)

if CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES == "":
    CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30
else:
    try:
        CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = int(CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES)
    except Exception:
        CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30


CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = os.environ.get(
    "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE", ""
)

if CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE == "":
    CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None
else:
    try:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = int(
            CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
        )
    except Exception:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None


####################################
# WEBSOCKET SUPPORT
####################################

@@ -616,24 +538,6 @@ ENABLE_WEBSOCKET_SUPPORT = (

WEBSOCKET_MANAGER = os.environ.get("WEBSOCKET_MANAGER", "")

WEBSOCKET_REDIS_OPTIONS = os.environ.get("WEBSOCKET_REDIS_OPTIONS", "")


if WEBSOCKET_REDIS_OPTIONS == "":
    if REDIS_SOCKET_CONNECT_TIMEOUT:
        WEBSOCKET_REDIS_OPTIONS = {
            "socket_connect_timeout": REDIS_SOCKET_CONNECT_TIMEOUT
        }
    else:
        log.debug("No WEBSOCKET_REDIS_OPTIONS provided, defaulting to None")
        WEBSOCKET_REDIS_OPTIONS = None
else:
    try:
        WEBSOCKET_REDIS_OPTIONS = json.loads(WEBSOCKET_REDIS_OPTIONS)
    except Exception:
        log.warning("Invalid WEBSOCKET_REDIS_OPTIONS, defaulting to None")
        WEBSOCKET_REDIS_OPTIONS = None
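
A sketch of supplying Redis client options as JSON through this variable; the parsed keys are handed to the Redis client (`socket_connect_timeout` appears in the code above, `socket_keepalive` is shown as an illustrative extra):

```bash
export WEBSOCKET_REDIS_OPTIONS='{"socket_connect_timeout": 5, "socket_keepalive": true}'
```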

WEBSOCKET_REDIS_URL = os.environ.get("WEBSOCKET_REDIS_URL", REDIS_URL)
WEBSOCKET_REDIS_CLUSTER = (
    os.environ.get("WEBSOCKET_REDIS_CLUSTER", str(REDIS_CLUSTER)).lower() == "true"

@@ -648,23 +552,6 @@ except ValueError:

WEBSOCKET_SENTINEL_HOSTS = os.environ.get("WEBSOCKET_SENTINEL_HOSTS", "")
WEBSOCKET_SENTINEL_PORT = os.environ.get("WEBSOCKET_SENTINEL_PORT", "26379")
WEBSOCKET_SERVER_LOGGING = (
    os.environ.get("WEBSOCKET_SERVER_LOGGING", "False").lower() == "true"
)
WEBSOCKET_SERVER_ENGINEIO_LOGGING = (
    os.environ.get("WEBSOCKET_SERVER_LOGGING", "False").lower() == "true"
)
WEBSOCKET_SERVER_PING_TIMEOUT = os.environ.get("WEBSOCKET_SERVER_PING_TIMEOUT", "20")
try:
    WEBSOCKET_SERVER_PING_TIMEOUT = int(WEBSOCKET_SERVER_PING_TIMEOUT)
except ValueError:
    WEBSOCKET_SERVER_PING_TIMEOUT = 20

WEBSOCKET_SERVER_PING_INTERVAL = os.environ.get("WEBSOCKET_SERVER_PING_INTERVAL", "25")
try:
    WEBSOCKET_SERVER_PING_INTERVAL = int(WEBSOCKET_SERVER_PING_INTERVAL)
except ValueError:
    WEBSOCKET_SERVER_PING_INTERVAL = 25


AIOHTTP_CLIENT_TIMEOUT = os.environ.get("AIOHTTP_CLIENT_TIMEOUT", "")

@@ -777,9 +664,7 @@ if OFFLINE_MODE:
# AUDIT LOGGING
####################################
# Where to store log file
# Defaults to the DATA_DIR/audit.log. To set AUDIT_LOGS_FILE_PATH you need to
# provide the whole path, like: /app/audit.log
AUDIT_LOGS_FILE_PATH = os.getenv("AUDIT_LOGS_FILE_PATH", f"{DATA_DIR}/audit.log")
AUDIT_LOGS_FILE_PATH = f"{DATA_DIR}/audit.log"
# Maximum size of a file before rotating into a new log file
AUDIT_LOG_FILE_ROTATION_SIZE = os.getenv("AUDIT_LOG_FILE_ROTATION_SIZE", "10MB")
@@ -19,7 +19,6 @@ from fastapi import (
from starlette.responses import Response, StreamingResponse


from open_webui.constants import ERROR_MESSAGES
from open_webui.socket.main import (
    get_event_call,
    get_event_emitter,

@@ -37,7 +36,7 @@ from open_webui.utils.plugin import (
from open_webui.utils.tools import get_tools
from open_webui.utils.access_control import has_access

from open_webui.env import GLOBAL_LOG_LEVEL
from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL

from open_webui.utils.misc import (
    add_or_update_system_message,

@@ -54,26 +53,15 @@ from open_webui.utils.payload import (

logging.basicConfig(stream=sys.stdout, level=GLOBAL_LOG_LEVEL)
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MAIN"])


def get_function_module_by_id(request: Request, pipe_id: str):
    function_module, _, _ = get_function_module_from_cache(request, pipe_id)

    if hasattr(function_module, "valves") and hasattr(function_module, "Valves"):
        Valves = function_module.Valves
        valves = Functions.get_function_valves_by_id(pipe_id)

        if valves:
            try:
                function_module.valves = Valves(
                    **{k: v for k, v in valves.items() if v is not None}
                )
            except Exception as e:
                log.exception(f"Error loading valves for function {pipe_id}: {e}")
                raise e
        else:
            function_module.valves = Valves()

        function_module.valves = function_module.Valves(**(valves if valves else {}))
    return function_module


@@ -82,13 +70,8 @@ async def get_function_models(request):
    pipe_models = []

    for pipe in pipes:
        try:
            function_module = get_function_module_by_id(request, pipe.id)

            has_user_valves = False
            if hasattr(function_module, "UserValves"):
                has_user_valves = True

            # Check if function is a manifold
            if hasattr(function_module, "pipes"):
                sub_pipes = []

@@ -127,7 +110,6 @@ async def get_function_models(request):
                        "created": pipe.created_at,
                        "owned_by": "openai",
                        "pipe": pipe_flag,
                        "has_user_valves": has_user_valves,
                    }
                )
            else:

@@ -145,12 +127,8 @@ async def get_function_models(request):
                    "created": pipe.created_at,
                    "owned_by": "openai",
                    "pipe": pipe_flag,
                    "has_user_valves": has_user_valves,
                }
            )
        except Exception as e:
            log.exception(e)
            continue

    return pipe_models


@@ -241,16 +219,6 @@ async def generate_function_chat_completion(
    __task__ = metadata.get("task", None)
    __task_body__ = metadata.get("task_body", None)

    oauth_token = None
    try:
        if request.cookies.get("oauth_session_id", None):
            oauth_token = await request.app.state.oauth_manager.get_oauth_token(
                user.id,
                request.cookies.get("oauth_session_id", None),
            )
    except Exception as e:
        log.error(f"Error getting OAuth token: {e}")

    extra_params = {
        "__event_emitter__": __event_emitter__,
        "__event_call__": __event_call__,

@@ -262,10 +230,9 @@ async def generate_function_chat_completion(
        "__files__": files,
        "__user__": user.model_dump() if isinstance(user, UserModel) else {},
        "__metadata__": metadata,
        "__oauth_token__": oauth_token,
        "__request__": request,
    }
    extra_params["__tools__"] = await get_tools(
    extra_params["__tools__"] = get_tools(
        request,
        tool_ids,
        user,
@@ -9,6 +9,7 @@ from open_webui.env import (
    OPEN_WEBUI_DIR,
    DATABASE_URL,
    DATABASE_SCHEMA,
    SRC_LOG_LEVELS,
    DATABASE_POOL_MAX_OVERFLOW,
    DATABASE_POOL_RECYCLE,
    DATABASE_POOL_SIZE,

@@ -24,6 +25,7 @@ from sqlalchemy.sql.type_api import _T
from typing_extensions import Self

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["DB"])


class JSONField(types.TypeDecorator):

@@ -90,6 +92,8 @@ if SQLALCHEMY_DATABASE_URL.startswith("sqlite+sqlcipher://"):

    # Extract database path from SQLCipher URL
    db_path = SQLALCHEMY_DATABASE_URL.replace("sqlite+sqlcipher://", "")
    if db_path.startswith("/"):
        db_path = db_path[1:]  # Remove leading slash for relative paths

    # Create a custom creator function that uses sqlcipher3
    def create_sqlcipher_connection():
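
The diff truncates the creator function here. A minimal sketch of what such a creator typically looks like with `sqlcipher3` (illustrative only, not necessarily the project's exact implementation; it assumes the `db_path` extracted above and a passphrase variable like the `database_password` used in the peewee wrapper below):

```python
import sqlcipher3  # provided by the sqlcipher3 / sqlcipher3-binary package

def create_sqlcipher_connection():
    # Open the database file and apply the encryption key before any other statement.
    conn = sqlcipher3.connect(db_path)
    conn.execute(f"PRAGMA key = '{database_password}'")
    return conn
```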

@@ -2,6 +2,7 @@ import logging
import os
from contextvars import ContextVar

from open_webui.env import SRC_LOG_LEVELS
from peewee import *
from peewee import InterfaceError as PeeWeeInterfaceError
from peewee import PostgresqlDatabase

@@ -9,6 +10,7 @@ from playhouse.db_url import connect, parse
from playhouse.shortcuts import ReconnectMixin

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["DB"])

db_state_default = {"closed": None, "conn": None, "ctx": None, "transactions": None}
db_state = ContextVar("db_state", default=db_state_default.copy())

@@ -54,6 +56,8 @@ def register_connection(db_url):
        # Parse the database path from SQLCipher URL
        # Convert sqlite+sqlcipher:///path/to/db.sqlite to /path/to/db.sqlite
        db_path = db_url.replace("sqlite+sqlcipher://", "")
        if db_path.startswith("/"):
            db_path = db_path[1:]  # Remove leading slash for relative paths

        # Use Peewee's native SqlCipherDatabase with encryption
        db = SqlCipherDatabase(db_path, passphrase=database_password)

File diff suppressed because it is too large

@@ -1,103 +0,0 @@
"""Update messages and channel member table

Revision ID: 2f1211949ecc
Revises: 37f288994c47
Create Date: 2025-11-27 03:07:56.200231

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
import open_webui.internal.db


# revision identifiers, used by Alembic.
revision: str = "2f1211949ecc"
down_revision: Union[str, None] = "37f288994c47"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # New columns to be added to channel_member table
    op.add_column("channel_member", sa.Column("status", sa.Text(), nullable=True))
    op.add_column(
        "channel_member",
        sa.Column(
            "is_active",
            sa.Boolean(),
            nullable=False,
            default=True,
            server_default=sa.sql.expression.true(),
        ),
    )

    op.add_column(
        "channel_member",
        sa.Column(
            "is_channel_muted",
            sa.Boolean(),
            nullable=False,
            default=False,
            server_default=sa.sql.expression.false(),
        ),
    )
    op.add_column(
        "channel_member",
        sa.Column(
            "is_channel_pinned",
            sa.Boolean(),
            nullable=False,
            default=False,
            server_default=sa.sql.expression.false(),
        ),
    )

    op.add_column("channel_member", sa.Column("data", sa.JSON(), nullable=True))
    op.add_column("channel_member", sa.Column("meta", sa.JSON(), nullable=True))

    op.add_column(
        "channel_member", sa.Column("joined_at", sa.BigInteger(), nullable=False)
    )
    op.add_column(
        "channel_member", sa.Column("left_at", sa.BigInteger(), nullable=True)
    )

    op.add_column(
        "channel_member", sa.Column("last_read_at", sa.BigInteger(), nullable=True)
    )

    op.add_column(
        "channel_member", sa.Column("updated_at", sa.BigInteger(), nullable=True)
    )

    # New columns to be added to message table
    op.add_column(
        "message",
        sa.Column(
            "is_pinned",
            sa.Boolean(),
            nullable=False,
            default=False,
            server_default=sa.sql.expression.false(),
        ),
    )
    op.add_column("message", sa.Column("pinned_at", sa.BigInteger(), nullable=True))
    op.add_column("message", sa.Column("pinned_by", sa.Text(), nullable=True))


def downgrade() -> None:
    op.drop_column("channel_member", "updated_at")
    op.drop_column("channel_member", "last_read_at")

    op.drop_column("channel_member", "meta")
    op.drop_column("channel_member", "data")

    op.drop_column("channel_member", "is_channel_pinned")
    op.drop_column("channel_member", "is_channel_muted")

    op.drop_column("message", "pinned_by")
    op.drop_column("message", "pinned_at")
    op.drop_column("message", "is_pinned")

@@ -1,146 +0,0 @@
"""add_group_member_table

Revision ID: 37f288994c47
Revises: a5c220713937
Create Date: 2025-11-17 03:45:25.123939

"""

import uuid
import time
import json
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "37f288994c47"
down_revision: Union[str, None] = "a5c220713937"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # 1. Create new table
    op.create_table(
        "group_member",
        sa.Column("id", sa.Text(), primary_key=True, unique=True, nullable=False),
        sa.Column(
            "group_id",
            sa.Text(),
            sa.ForeignKey("group.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column(
            "user_id",
            sa.Text(),
            sa.ForeignKey("user.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("created_at", sa.BigInteger(), nullable=True),
        sa.Column("updated_at", sa.BigInteger(), nullable=True),
        sa.UniqueConstraint("group_id", "user_id", name="uq_group_member_group_user"),
    )

    connection = op.get_bind()

    # 2. Read existing group with user_ids JSON column
    group_table = sa.Table(
        "group",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("user_ids", sa.JSON()),  # JSON stored as text in SQLite + PG
    )

    results = connection.execute(
        sa.select(group_table.c.id, group_table.c.user_ids)
    ).fetchall()

    print(results)

    # 3. Insert members into group_member table
    gm_table = sa.Table(
        "group_member",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("group_id", sa.Text()),
        sa.Column("user_id", sa.Text()),
        sa.Column("created_at", sa.BigInteger()),
        sa.Column("updated_at", sa.BigInteger()),
    )

    now = int(time.time())
    for group_id, user_ids in results:
        if not user_ids:
            continue

        if isinstance(user_ids, str):
            try:
                user_ids = json.loads(user_ids)
            except Exception:
                continue  # skip invalid JSON

        if not isinstance(user_ids, list):
            continue

        rows = [
            {
                "id": str(uuid.uuid4()),
                "group_id": group_id,
                "user_id": uid,
                "created_at": now,
                "updated_at": now,
            }
            for uid in user_ids
        ]

        if rows:
            connection.execute(gm_table.insert(), rows)

    # 4. Optionally drop the old column
    with op.batch_alter_table("group") as batch:
        batch.drop_column("user_ids")


def downgrade():
    # Reverse: restore user_ids column
    with op.batch_alter_table("group") as batch:
        batch.add_column(sa.Column("user_ids", sa.JSON()))

    connection = op.get_bind()
    gm_table = sa.Table(
        "group_member",
        sa.MetaData(),
        sa.Column("group_id", sa.Text()),
        sa.Column("user_id", sa.Text()),
        sa.Column("created_at", sa.BigInteger()),
        sa.Column("updated_at", sa.BigInteger()),
    )

    group_table = sa.Table(
        "group",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("user_ids", sa.JSON()),
    )

    # Build JSON arrays again
    results = connection.execute(sa.select(group_table.c.id)).fetchall()

    for (group_id,) in results:
        members = connection.execute(
            sa.select(gm_table.c.user_id).where(gm_table.c.group_id == group_id)
        ).fetchall()

        member_ids = [m[0] for m in members]

        connection.execute(
            group_table.update()
            .where(group_table.c.id == group_id)
            .values(user_ids=member_ids)
        )

    # Drop the new table
    op.drop_table("group_member")
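
In effect, the upgrade expands each group's JSON `user_ids` array into one `group_member` row per user; an illustrative (hypothetical) before/after:

```python
# Before: "group" row with an embedded JSON array
# {"id": "g1", "user_ids": ["u1", "u2"]}
#
# After: two "group_member" rows
# {"group_id": "g1", "user_id": "u1", "created_at": now, "updated_at": now}
# {"group_id": "g1", "user_id": "u2", "created_at": now, "updated_at": now}
```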

@@ -1,80 +0,0 @@
"""Add oauth_session table

Revision ID: 38d63c18f30f
Revises: 3af16a1c9fb6
Create Date: 2025-09-08 14:19:59.583921

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "38d63c18f30f"
down_revision: Union[str, None] = "3af16a1c9fb6"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # Ensure 'id' column in 'user' table is unique and primary key (ForeignKey constraint)
    inspector = sa.inspect(op.get_bind())
    columns = inspector.get_columns("user")

    pk_columns = inspector.get_pk_constraint("user")["constrained_columns"]
    id_column = next((col for col in columns if col["name"] == "id"), None)

    if id_column and not id_column.get("unique", False):
        unique_constraints = inspector.get_unique_constraints("user")
        unique_columns = {tuple(u["column_names"]) for u in unique_constraints}

        with op.batch_alter_table("user") as batch_op:
            # If primary key is wrong, drop it
            if pk_columns and pk_columns != ["id"]:
                batch_op.drop_constraint(
                    inspector.get_pk_constraint("user")["name"], type_="primary"
                )

            # Add unique constraint if missing
            if ("id",) not in unique_columns:
                batch_op.create_unique_constraint("uq_user_id", ["id"])

            # Re-create correct primary key
            batch_op.create_primary_key("pk_user_id", ["id"])

    # Create oauth_session table
    op.create_table(
        "oauth_session",
        sa.Column("id", sa.Text(), primary_key=True, nullable=False, unique=True),
        sa.Column(
            "user_id",
            sa.Text(),
            sa.ForeignKey("user.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("provider", sa.Text(), nullable=False),
        sa.Column("token", sa.Text(), nullable=False),
        sa.Column("expires_at", sa.BigInteger(), nullable=False),
        sa.Column("created_at", sa.BigInteger(), nullable=False),
        sa.Column("updated_at", sa.BigInteger(), nullable=False),
    )

    # Create indexes for better performance
    op.create_index("idx_oauth_session_user_id", "oauth_session", ["user_id"])
    op.create_index("idx_oauth_session_expires_at", "oauth_session", ["expires_at"])
    op.create_index(
        "idx_oauth_session_user_provider", "oauth_session", ["user_id", "provider"]
    )


def downgrade() -> None:
    # Drop indexes first
    op.drop_index("idx_oauth_session_user_provider", table_name="oauth_session")
    op.drop_index("idx_oauth_session_expires_at", table_name="oauth_session")
    op.drop_index("idx_oauth_session_user_id", table_name="oauth_session")

    # Drop the table
    op.drop_table("oauth_session")

@@ -1,169 +0,0 @@
"""Add knowledge_file table

Revision ID: 3e0e00844bb0
Revises: 90ef40d4714e
Create Date: 2025-12-02 06:54:19.401334

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
from sqlalchemy import inspect
import open_webui.internal.db

import time
import json
import uuid

# revision identifiers, used by Alembic.
revision: str = "3e0e00844bb0"
down_revision: Union[str, None] = "90ef40d4714e"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    op.create_table(
        "knowledge_file",
        sa.Column("id", sa.Text(), primary_key=True),
        sa.Column("user_id", sa.Text(), nullable=False),
        sa.Column(
            "knowledge_id",
            sa.Text(),
            sa.ForeignKey("knowledge.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column(
            "file_id",
            sa.Text(),
            sa.ForeignKey("file.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("created_at", sa.BigInteger(), nullable=False),
        sa.Column("updated_at", sa.BigInteger(), nullable=False),
        # indexes
        sa.Index("ix_knowledge_file_knowledge_id", "knowledge_id"),
        sa.Index("ix_knowledge_file_file_id", "file_id"),
        sa.Index("ix_knowledge_file_user_id", "user_id"),
        # unique constraints
        sa.UniqueConstraint(
            "knowledge_id", "file_id", name="uq_knowledge_file_knowledge_file"
        ),  # prevent duplicate entries
    )

    connection = op.get_bind()

    # 2. Read existing group with user_ids JSON column
    knowledge_table = sa.Table(
        "knowledge",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("user_id", sa.Text()),
        sa.Column("data", sa.JSON()),  # JSON stored as text in SQLite + PG
    )

    results = connection.execute(
        sa.select(
            knowledge_table.c.id, knowledge_table.c.user_id, knowledge_table.c.data
        )
    ).fetchall()

    # 3. Insert members into group_member table
    kf_table = sa.Table(
        "knowledge_file",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("user_id", sa.Text()),
        sa.Column("knowledge_id", sa.Text()),
        sa.Column("file_id", sa.Text()),
        sa.Column("created_at", sa.BigInteger()),
        sa.Column("updated_at", sa.BigInteger()),
    )

    file_table = sa.Table(
        "file",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
    )

    now = int(time.time())
    for knowledge_id, user_id, data in results:
        if not data:
            continue

        if isinstance(data, str):
            try:
                data = json.loads(data)
            except Exception:
                continue  # skip invalid JSON

        if not isinstance(data, dict):
            continue

        file_ids = data.get("file_ids", [])

        for file_id in file_ids:
            file_exists = connection.execute(
                sa.select(file_table.c.id).where(file_table.c.id == file_id)
            ).fetchone()

            if not file_exists:
                continue  # skip non-existing files

            row = {
                "id": str(uuid.uuid4()),
                "user_id": user_id,
                "knowledge_id": knowledge_id,
                "file_id": file_id,
                "created_at": now,
                "updated_at": now,
            }
            connection.execute(kf_table.insert().values(**row))

    with op.batch_alter_table("knowledge") as batch:
        batch.drop_column("data")


def downgrade() -> None:
    # 1. Add back the old data column
    op.add_column("knowledge", sa.Column("data", sa.JSON(), nullable=True))

    connection = op.get_bind()

    # 2. Read knowledge_file entries and reconstruct data JSON
    knowledge_table = sa.Table(
        "knowledge",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("data", sa.JSON()),
    )

    kf_table = sa.Table(
        "knowledge_file",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("knowledge_id", sa.Text()),
        sa.Column("file_id", sa.Text()),
    )

    results = connection.execute(sa.select(knowledge_table.c.id)).fetchall()

    for (knowledge_id,) in results:
        file_ids = connection.execute(
            sa.select(kf_table.c.file_id).where(kf_table.c.knowledge_id == knowledge_id)
        ).fetchall()

        file_ids_list = [fid for (fid,) in file_ids]

        data_json = {"file_ids": file_ids_list}

        connection.execute(
            knowledge_table.update()
            .where(knowledge_table.c.id == knowledge_id)
            .values(data=data_json)
        )

    # 3. Drop the knowledge_file table
    op.drop_table("knowledge_file")

@@ -1,54 +0,0 @@
"""Add channel file table

Revision ID: 6283dc0e4d8d
Revises: 3e0e00844bb0
Create Date: 2025-12-10 15:11:39.424601

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
import open_webui.internal.db


# revision identifiers, used by Alembic.
revision: str = "6283dc0e4d8d"
down_revision: Union[str, None] = "3e0e00844bb0"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    op.create_table(
        "channel_file",
        sa.Column("id", sa.Text(), primary_key=True),
        sa.Column("user_id", sa.Text(), nullable=False),
        sa.Column(
            "channel_id",
            sa.Text(),
            sa.ForeignKey("channel.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column(
            "file_id",
            sa.Text(),
            sa.ForeignKey("file.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("created_at", sa.BigInteger(), nullable=False),
        sa.Column("updated_at", sa.BigInteger(), nullable=False),
        # indexes
        sa.Index("ix_channel_file_channel_id", "channel_id"),
        sa.Index("ix_channel_file_file_id", "file_id"),
        sa.Index("ix_channel_file_user_id", "user_id"),
        # unique constraints
        sa.UniqueConstraint(
            "channel_id", "file_id", name="uq_channel_file_channel_file"
        ),  # prevent duplicate entries
    )


def downgrade() -> None:
    op.drop_table("channel_file")

@@ -1,49 +0,0 @@
"""Update channel file and knowledge table

Revision ID: 81cc2ce44d79
Revises: 6283dc0e4d8d
Create Date: 2025-12-10 16:07:58.001282

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
import open_webui.internal.db


# revision identifiers, used by Alembic.
revision: str = "81cc2ce44d79"
down_revision: Union[str, None] = "6283dc0e4d8d"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # Add message_id column to channel_file table
    with op.batch_alter_table("channel_file", schema=None) as batch_op:
        batch_op.add_column(
            sa.Column(
                "message_id",
                sa.Text(),
                sa.ForeignKey(
                    "message.id", ondelete="CASCADE", name="fk_channel_file_message_id"
                ),
                nullable=True,
            )
        )

    # Add data column to knowledge table
    with op.batch_alter_table("knowledge", schema=None) as batch_op:
        batch_op.add_column(sa.Column("data", sa.JSON(), nullable=True))


def downgrade() -> None:
    # Remove message_id column from channel_file table
    with op.batch_alter_table("channel_file", schema=None) as batch_op:
        batch_op.drop_column("message_id")

    # Remove data column from knowledge table
    with op.batch_alter_table("knowledge", schema=None) as batch_op:
        batch_op.drop_column("data")

@@ -1,81 +0,0 @@
"""Update channel and channel members table

Revision ID: 90ef40d4714e
Revises: b10670c03dd5
Create Date: 2025-11-30 06:33:38.790341

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
import open_webui.internal.db


# revision identifiers, used by Alembic.
revision: str = "90ef40d4714e"
down_revision: Union[str, None] = "b10670c03dd5"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # Update 'channel' table
    op.add_column("channel", sa.Column("is_private", sa.Boolean(), nullable=True))

    op.add_column("channel", sa.Column("archived_at", sa.BigInteger(), nullable=True))
    op.add_column("channel", sa.Column("archived_by", sa.Text(), nullable=True))

    op.add_column("channel", sa.Column("deleted_at", sa.BigInteger(), nullable=True))
    op.add_column("channel", sa.Column("deleted_by", sa.Text(), nullable=True))

    op.add_column("channel", sa.Column("updated_by", sa.Text(), nullable=True))

    # Update 'channel_member' table
    op.add_column("channel_member", sa.Column("role", sa.Text(), nullable=True))
    op.add_column("channel_member", sa.Column("invited_by", sa.Text(), nullable=True))
    op.add_column(
        "channel_member", sa.Column("invited_at", sa.BigInteger(), nullable=True)
    )

    # Create 'channel_webhook' table
    op.create_table(
        "channel_webhook",
        sa.Column("id", sa.Text(), primary_key=True, unique=True, nullable=False),
        sa.Column("user_id", sa.Text(), nullable=False),
        sa.Column(
            "channel_id",
            sa.Text(),
            sa.ForeignKey("channel.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("name", sa.Text(), nullable=False),
        sa.Column("profile_image_url", sa.Text(), nullable=True),
        sa.Column("token", sa.Text(), nullable=False),
        sa.Column("last_used_at", sa.BigInteger(), nullable=True),
        sa.Column("created_at", sa.BigInteger(), nullable=False),
        sa.Column("updated_at", sa.BigInteger(), nullable=False),
    )

    pass


def downgrade() -> None:
    # Downgrade 'channel' table
    op.drop_column("channel", "is_private")
    op.drop_column("channel", "archived_at")
    op.drop_column("channel", "archived_by")
    op.drop_column("channel", "deleted_at")
    op.drop_column("channel", "deleted_by")
    op.drop_column("channel", "updated_by")

    # Downgrade 'channel_member' table
    op.drop_column("channel_member", "role")
    op.drop_column("channel_member", "invited_by")
    op.drop_column("channel_member", "invited_at")

    # Drop 'channel_webhook' table
    op.drop_table("channel_webhook")

    pass

@@ -1,34 +0,0 @@
"""Add reply_to_id column to message

Revision ID: a5c220713937
Revises: 38d63c18f30f
Create Date: 2025-09-27 02:24:18.058455

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision: str = "a5c220713937"
down_revision: Union[str, None] = "38d63c18f30f"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # Add 'reply_to_id' column to the 'message' table for replying to messages
    op.add_column(
        "message",
        sa.Column("reply_to_id", sa.Text(), nullable=True),
    )
    pass


def downgrade() -> None:
    # Remove 'reply_to_id' column from the 'message' table
    op.drop_column("message", "reply_to_id")

    pass

@@ -1,251 +0,0 @@
"""Update user table

Revision ID: b10670c03dd5
Revises: 2f1211949ecc
Create Date: 2025-11-28 04:55:31.737538

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


import open_webui.internal.db
import json
import time

# revision identifiers, used by Alembic.
revision: str = "b10670c03dd5"
down_revision: Union[str, None] = "2f1211949ecc"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def _drop_sqlite_indexes_for_column(table_name, column_name, conn):
    """
    SQLite requires manual removal of any indexes referencing a column
    before ALTER TABLE ... DROP COLUMN can succeed.
    """
    indexes = conn.execute(sa.text(f"PRAGMA index_list('{table_name}')")).fetchall()

    for idx in indexes:
        index_name = idx[1]  # index name
        # Get indexed columns
        idx_info = conn.execute(
            sa.text(f"PRAGMA index_info('{index_name}')")
        ).fetchall()

        indexed_cols = [row[2] for row in idx_info]  # col names
        if column_name in indexed_cols:
            conn.execute(sa.text(f"DROP INDEX IF EXISTS {index_name}"))


def _convert_column_to_json(table: str, column: str):
    conn = op.get_bind()
    dialect = conn.dialect.name

    # SQLite cannot ALTER COLUMN → must recreate column
    if dialect == "sqlite":
        # 1. Add temporary column
        op.add_column(table, sa.Column(f"{column}_json", sa.JSON(), nullable=True))

        # 2. Load old data
        rows = conn.execute(sa.text(f'SELECT id, {column} FROM "{table}"')).fetchall()

        for row in rows:
            uid, raw = row
            if raw is None:
                parsed = None
            else:
                try:
                    parsed = json.loads(raw)
                except Exception:
                    parsed = None  # fallback safe behavior

            conn.execute(
                sa.text(f'UPDATE "{table}" SET {column}_json = :val WHERE id = :id'),
                {"val": json.dumps(parsed) if parsed else None, "id": uid},
            )

        # 3. Drop old TEXT column
        op.drop_column(table, column)

        # 4. Rename new JSON column → original name
        op.alter_column(table, f"{column}_json", new_column_name=column)

    else:
        # PostgreSQL supports direct CAST
        op.alter_column(
            table,
            column,
            type_=sa.JSON(),
            postgresql_using=f"{column}::json",
        )


def _convert_column_to_text(table: str, column: str):
    conn = op.get_bind()
    dialect = conn.dialect.name

    if dialect == "sqlite":
        op.add_column(table, sa.Column(f"{column}_text", sa.Text(), nullable=True))

        rows = conn.execute(sa.text(f'SELECT id, {column} FROM "{table}"')).fetchall()

        for uid, raw in rows:
            conn.execute(
                sa.text(f'UPDATE "{table}" SET {column}_text = :val WHERE id = :id'),
                {"val": json.dumps(raw) if raw else None, "id": uid},
            )

        op.drop_column(table, column)
        op.alter_column(table, f"{column}_text", new_column_name=column)

    else:
        op.alter_column(
            table,
            column,
            type_=sa.Text(),
            postgresql_using=f"to_json({column})::text",
        )


def upgrade() -> None:
    op.add_column(
        "user", sa.Column("profile_banner_image_url", sa.Text(), nullable=True)
    )
    op.add_column("user", sa.Column("timezone", sa.String(), nullable=True))

    op.add_column("user", sa.Column("presence_state", sa.String(), nullable=True))
    op.add_column("user", sa.Column("status_emoji", sa.String(), nullable=True))
    op.add_column("user", sa.Column("status_message", sa.Text(), nullable=True))
    op.add_column(
        "user", sa.Column("status_expires_at", sa.BigInteger(), nullable=True)
    )

    op.add_column("user", sa.Column("oauth", sa.JSON(), nullable=True))

    # Convert info (TEXT/JSONField) → JSON
    _convert_column_to_json("user", "info")
    # Convert settings (TEXT/JSONField) → JSON
    _convert_column_to_json("user", "settings")

    op.create_table(
        "api_key",
        sa.Column("id", sa.Text(), primary_key=True, unique=True),
        sa.Column("user_id", sa.Text(), sa.ForeignKey("user.id", ondelete="CASCADE")),
        sa.Column("key", sa.Text(), unique=True, nullable=False),
        sa.Column("data", sa.JSON(), nullable=True),
        sa.Column("expires_at", sa.BigInteger(), nullable=True),
        sa.Column("last_used_at", sa.BigInteger(), nullable=True),
        sa.Column("created_at", sa.BigInteger(), nullable=False),
        sa.Column("updated_at", sa.BigInteger(), nullable=False),
    )

    conn = op.get_bind()
    users = conn.execute(
        sa.text('SELECT id, oauth_sub FROM "user" WHERE oauth_sub IS NOT NULL')
    ).fetchall()

    for uid, oauth_sub in users:
        if oauth_sub:
            # Example formats supported:
            # provider@sub
            # plain sub (stored as {"oidc": {"sub": sub}})
            if "@" in oauth_sub:
                provider, sub = oauth_sub.split("@", 1)
            else:
                provider, sub = "oidc", oauth_sub

            oauth_json = json.dumps({provider: {"sub": sub}})
            conn.execute(
                sa.text('UPDATE "user" SET oauth = :oauth WHERE id = :id'),
                {"oauth": oauth_json, "id": uid},
            )
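
The loop above accepts two legacy `oauth_sub` formats; a hedged sketch of the mapping (values hypothetical):

```python
# "google@12345"  ->  {"google": {"sub": "12345"}}   # provider-qualified form
# "12345"         ->  {"oidc": {"sub": "12345"}}     # bare subject, assumed OIDC
```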

    users_with_keys = conn.execute(
        sa.text('SELECT id, api_key FROM "user" WHERE api_key IS NOT NULL')
    ).fetchall()
    now = int(time.time())

    for uid, api_key in users_with_keys:
        if api_key:
            conn.execute(
                sa.text(
                    """
                    INSERT INTO api_key (id, user_id, key, created_at, updated_at)
                    VALUES (:id, :user_id, :key, :created_at, :updated_at)
                    """
                ),
                {
                    "id": f"key_{uid}",
                    "user_id": uid,
                    "key": api_key,
                    "created_at": now,
                    "updated_at": now,
                },
            )

    if conn.dialect.name == "sqlite":
        _drop_sqlite_indexes_for_column("user", "api_key", conn)
        _drop_sqlite_indexes_for_column("user", "oauth_sub", conn)

    with op.batch_alter_table("user") as batch_op:
        batch_op.drop_column("api_key")
        batch_op.drop_column("oauth_sub")


def downgrade() -> None:
    # --- 1. Restore old oauth_sub column ---
    op.add_column("user", sa.Column("oauth_sub", sa.Text(), nullable=True))

    conn = op.get_bind()
    users = conn.execute(
        sa.text('SELECT id, oauth FROM "user" WHERE oauth IS NOT NULL')
    ).fetchall()

    for uid, oauth in users:
        try:
            data = json.loads(oauth)
            provider = list(data.keys())[0]
            sub = data[provider].get("sub")
            oauth_sub = f"{provider}@{sub}"
        except Exception:
            oauth_sub = None

        conn.execute(
            sa.text('UPDATE "user" SET oauth_sub = :oauth_sub WHERE id = :id'),
            {"oauth_sub": oauth_sub, "id": uid},
        )

    op.drop_column("user", "oauth")

    # --- 2. Restore api_key field ---
    op.add_column("user", sa.Column("api_key", sa.String(), nullable=True))

    # Restore values from api_key
    keys = conn.execute(sa.text("SELECT user_id, key FROM api_key")).fetchall()
    for uid, key in keys:
        conn.execute(
            sa.text('UPDATE "user" SET api_key = :key WHERE id = :id'),
            {"key": key, "id": uid},
        )

    # Drop new table
    op.drop_table("api_key")

    with op.batch_alter_table("user") as batch_op:
        batch_op.drop_column("profile_banner_image_url")
        batch_op.drop_column("timezone")

        batch_op.drop_column("presence_state")
        batch_op.drop_column("status_emoji")
        batch_op.drop_column("status_message")
        batch_op.drop_column("status_expires_at")

    # Convert info (JSON) → TEXT
    _convert_column_to_text("user", "info")
    # Convert settings (JSON) → TEXT
    _convert_column_to_text("user", "settings")

@@ -1,57 +0,0 @@
"""Add chat_file table

Revision ID: c440947495f3
Revises: 81cc2ce44d79
Create Date: 2025-12-21 20:27:41.694897

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "c440947495f3"
down_revision: Union[str, None] = "81cc2ce44d79"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    op.create_table(
        "chat_file",
        sa.Column("id", sa.Text(), primary_key=True),
        sa.Column("user_id", sa.Text(), nullable=False),
        sa.Column(
            "chat_id",
            sa.Text(),
            sa.ForeignKey("chat.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column(
            "file_id",
            sa.Text(),
            sa.ForeignKey("file.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("message_id", sa.Text(), nullable=True),
        sa.Column("created_at", sa.BigInteger(), nullable=False),
        sa.Column("updated_at", sa.BigInteger(), nullable=False),
        # indexes
        sa.Index("ix_chat_file_chat_id", "chat_id"),
        sa.Index("ix_chat_file_file_id", "file_id"),
        sa.Index("ix_chat_file_message_id", "message_id"),
        sa.Index("ix_chat_file_user_id", "user_id"),
        # unique constraints
        sa.UniqueConstraint(
            "chat_id", "file_id", name="uq_chat_file_chat_file"
        ),  # prevent duplicate entries
    )
    pass


def downgrade() -> None:
    op.drop_table("chat_file")
    pass

@@ -3,11 +3,14 @@ import uuid
from typing import Optional

from open_webui.internal.db import Base, get_db
from open_webui.models.users import UserModel, UserProfileImageResponse, Users
from open_webui.models.users import UserModel, Users
from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel
from sqlalchemy import Boolean, Column, String, Text
from open_webui.utils.auth import verify_password

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])

####################
# DB MODEL

@@ -17,7 +20,7 @@ log = logging.getLogger(__name__)
class Auth(Base):
    __tablename__ = "auth"

    id = Column(String, primary_key=True, unique=True)
    id = Column(String, primary_key=True)
    email = Column(String)
    password = Column(Text)
    active = Column(Boolean)

@@ -44,7 +47,15 @@ class ApiKey(BaseModel):
    api_key: Optional[str] = None


class SigninResponse(Token, UserProfileImageResponse):
class UserResponse(BaseModel):
    id: str
    email: str
    name: str
    role: str
    profile_image_url: str


class SigninResponse(Token, UserResponse):
    pass


@@ -86,7 +97,7 @@ class AuthsTable:
        name: str,
        profile_image_url: str = "/user.png",
        role: str = "pending",
        oauth: Optional[dict] = None,
        oauth_sub: Optional[str] = None,
    ) -> Optional[UserModel]:
        with get_db() as db:
            log.info("insert_new_auth")

@@ -100,7 +111,7 @@ class AuthsTable:
            db.add(result)

            user = Users.insert_new_user(
                id, name, email, profile_image_url, role, oauth=oauth
                id, name, email, profile_image_url, role, oauth_sub
            )

            db.commit()

@@ -111,9 +122,7 @@ class AuthsTable:
        else:
            return None

    def authenticate_user(
        self, email: str, verify_password: callable
    ) -> Optional[UserModel]:
    def authenticate_user(self, email: str, password: str) -> Optional[UserModel]:
        log.info(f"authenticate_user: {email}")

        user = Users.get_user_by_email(email)

@@ -124,7 +133,7 @@ class AuthsTable:
        with get_db() as db:
            auth = db.query(Auth).filter_by(id=user.id, active=True).first()
            if auth:
                if verify_password(auth.password):
                if verify_password(password, auth.password):
                    return user
                else:
                    return None
|
||||
|
|
|
|||
|
|
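
# --- Illustrative usage (not part of the diff) ---------------------------
# How the callable-injection variant of authenticate_user (the first
# signature in the hunk above) would be invoked: the caller closes over
# the plaintext password, so only the stored hash crosses the interface.
# Assumes a module-level `Auths = AuthsTable()` singleton paralleling the
# `Chats = ChatTable()` seen later in this diff; email/password are made up.
user = Auths.authenticate_user(
    "jane@example.com",
    lambda hashed: verify_password("plain-text-password", hashed),
)
if user is None:
    ...  # unknown email, inactive account, or bad credentials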
@@ -4,24 +4,10 @@ import uuid
from typing import Optional

from open_webui.internal.db import Base, get_db
from open_webui.models.groups import Groups
from open_webui.utils.access_control import has_access

from pydantic import BaseModel, ConfigDict
from sqlalchemy.dialects.postgresql import JSONB


from sqlalchemy import (
    BigInteger,
    Boolean,
    Column,
    ForeignKey,
    String,
    Text,
    JSON,
    UniqueConstraint,
    case,
    cast,
)
from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON
from sqlalchemy import or_, func, select, and_, text
from sqlalchemy.sql import exists

@@ -33,30 +19,19 @@ from sqlalchemy.sql import exists
class Channel(Base):
    __tablename__ = "channel"

    id = Column(Text, primary_key=True, unique=True)
    id = Column(Text, primary_key=True)
    user_id = Column(Text)
    type = Column(Text, nullable=True)

    name = Column(Text)
    description = Column(Text, nullable=True)

    # Used to indicate if the channel is private (for 'group' type channels)
    is_private = Column(Boolean, nullable=True)

    data = Column(JSON, nullable=True)
    meta = Column(JSON, nullable=True)
    access_control = Column(JSON, nullable=True)

    created_at = Column(BigInteger)

    updated_at = Column(BigInteger)
    updated_by = Column(Text, nullable=True)

    archived_at = Column(BigInteger, nullable=True)
    archived_by = Column(Text, nullable=True)

    deleted_at = Column(BigInteger, nullable=True)
    deleted_by = Column(Text, nullable=True)


class ChannelModel(BaseModel):

@@ -64,157 +39,17 @@ class ChannelModel(BaseModel):

    id: str
    user_id: str

    type: Optional[str] = None

    name: str
    description: Optional[str] = None

    is_private: Optional[bool] = None

    data: Optional[dict] = None
    meta: Optional[dict] = None
    access_control: Optional[dict] = None

    created_at: int  # timestamp in epoch (time_ns)

    updated_at: int  # timestamp in epoch (time_ns)
    updated_by: Optional[str] = None

    archived_at: Optional[int] = None  # timestamp in epoch (time_ns)
    archived_by: Optional[str] = None

    deleted_at: Optional[int] = None  # timestamp in epoch (time_ns)
    deleted_by: Optional[str] = None


class ChannelMember(Base):
    __tablename__ = "channel_member"

    id = Column(Text, primary_key=True, unique=True)
    channel_id = Column(Text, nullable=False)
    user_id = Column(Text, nullable=False)

    role = Column(Text, nullable=True)
    status = Column(Text, nullable=True)

    is_active = Column(Boolean, nullable=False, default=True)

    is_channel_muted = Column(Boolean, nullable=False, default=False)
    is_channel_pinned = Column(Boolean, nullable=False, default=False)

    data = Column(JSON, nullable=True)
    meta = Column(JSON, nullable=True)

    invited_at = Column(BigInteger, nullable=True)
    invited_by = Column(Text, nullable=True)

    joined_at = Column(BigInteger)
    left_at = Column(BigInteger, nullable=True)

    last_read_at = Column(BigInteger, nullable=True)

    created_at = Column(BigInteger)
    updated_at = Column(BigInteger)


class ChannelMemberModel(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    id: str
    channel_id: str
    user_id: str

    role: Optional[str] = None
    status: Optional[str] = None

    is_active: bool = True

    is_channel_muted: bool = False
    is_channel_pinned: bool = False

    data: Optional[dict] = None
    meta: Optional[dict] = None

    invited_at: Optional[int] = None  # timestamp in epoch (time_ns)
    invited_by: Optional[str] = None

    joined_at: Optional[int] = None  # timestamp in epoch (time_ns)
    left_at: Optional[int] = None  # timestamp in epoch (time_ns)

    last_read_at: Optional[int] = None  # timestamp in epoch (time_ns)

    created_at: Optional[int] = None  # timestamp in epoch (time_ns)
    updated_at: Optional[int] = None  # timestamp in epoch (time_ns)


class ChannelFile(Base):
    __tablename__ = "channel_file"

    id = Column(Text, unique=True, primary_key=True)
    user_id = Column(Text, nullable=False)

    channel_id = Column(
        Text, ForeignKey("channel.id", ondelete="CASCADE"), nullable=False
    )
    message_id = Column(
        Text, ForeignKey("message.id", ondelete="CASCADE"), nullable=True
    )
    file_id = Column(Text, ForeignKey("file.id", ondelete="CASCADE"), nullable=False)

    created_at = Column(BigInteger, nullable=False)
    updated_at = Column(BigInteger, nullable=False)

    __table_args__ = (
        UniqueConstraint("channel_id", "file_id", name="uq_channel_file_channel_file"),
    )


class ChannelFileModel(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    id: str

    channel_id: str
    file_id: str
    user_id: str

    created_at: int  # timestamp in epoch (time_ns)
    updated_at: int  # timestamp in epoch (time_ns)


class ChannelWebhook(Base):
    __tablename__ = "channel_webhook"

    id = Column(Text, primary_key=True, unique=True)
    channel_id = Column(Text, nullable=False)
    user_id = Column(Text, nullable=False)

    name = Column(Text, nullable=False)
    profile_image_url = Column(Text, nullable=True)

    token = Column(Text, nullable=False)
    last_used_at = Column(BigInteger, nullable=True)

    created_at = Column(BigInteger, nullable=False)
    updated_at = Column(BigInteger, nullable=False)


class ChannelWebhookModel(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    id: str
    channel_id: str
    user_id: str

    name: str
    profile_image_url: Optional[str] = None

    token: str
    last_used_at: Optional[int] = None  # timestamp in epoch (time_ns)

    created_at: int  # timestamp in epoch (time_ns)
    updated_at: int  # timestamp in epoch (time_ns)
    created_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch


####################

@@ -222,95 +57,23 @@ class ChannelWebhookModel(BaseModel):
####################


class ChannelResponse(ChannelModel):
    is_manager: bool = False
    write_access: bool = False

    user_count: Optional[int] = None


class ChannelForm(BaseModel):
    name: str = ""
    name: str
    description: Optional[str] = None
    is_private: Optional[bool] = None
    data: Optional[dict] = None
    meta: Optional[dict] = None
    access_control: Optional[dict] = None
    group_ids: Optional[list[str]] = None
    user_ids: Optional[list[str]] = None


class CreateChannelForm(ChannelForm):
    type: Optional[str] = None


class ChannelTable:

    def _collect_unique_user_ids(
        self,
        invited_by: str,
        user_ids: Optional[list[str]] = None,
        group_ids: Optional[list[str]] = None,
    ) -> set[str]:
        """
        Collect unique user ids from:
        - invited_by
        - user_ids
        - each group in group_ids
        Returns a set for efficient SQL diffing.
        """
        users = set(user_ids or [])
        users.add(invited_by)

        for group_id in group_ids or []:
            users.update(Groups.get_group_user_ids_by_id(group_id))

        return users

    def _create_membership_models(
        self,
        channel_id: str,
        invited_by: str,
        user_ids: set[str],
    ) -> list[ChannelMember]:
        """
        Takes a set of NEW user IDs (already filtered to exclude existing members).
        Returns ORM ChannelMember objects to be added.
        """
        now = int(time.time_ns())
        memberships = []

        for uid in user_ids:
            model = ChannelMemberModel(
                **{
                    "id": str(uuid.uuid4()),
                    "channel_id": channel_id,
                    "user_id": uid,
                    "status": "joined",
                    "is_active": True,
                    "is_channel_muted": False,
                    "is_channel_pinned": False,
                    "invited_at": now,
                    "invited_by": invited_by,
                    "joined_at": now,
                    "left_at": None,
                    "last_read_at": now,
                    "created_at": now,
                    "updated_at": now,
                }
            )
            memberships.append(ChannelMember(**model.model_dump()))

        return memberships

    def insert_new_channel(
        self, form_data: CreateChannelForm, user_id: str
        self, type: Optional[str], form_data: ChannelForm, user_id: str
    ) -> Optional[ChannelModel]:
        with get_db() as db:
            channel = ChannelModel(
                **{
                    **form_data.model_dump(),
                    "type": form_data.type if form_data.type else None,
                    "type": type,
                    "name": form_data.name.lower(),
                    "id": str(uuid.uuid4()),
                    "user_id": user_id,

@@ -318,21 +81,9 @@ class ChannelTable:
                    "updated_at": int(time.time_ns()),
                }
            )

            new_channel = Channel(**channel.model_dump())

            if form_data.type in ["group", "dm"]:
                users = self._collect_unique_user_ids(
                    invited_by=user_id,
                    user_ids=form_data.user_ids,
                    group_ids=form_data.group_ids,
                )
                memberships = self._create_membership_models(
                    channel_id=new_channel.id,
                    invited_by=user_id,
                    user_ids=users,
                )

                db.add_all(memberships)
            db.add(new_channel)
            db.commit()
            return channel

@@ -342,481 +93,22 @@ class ChannelTable:
            channels = db.query(Channel).all()
            return [ChannelModel.model_validate(channel) for channel in channels]

    def _has_permission(self, db, query, filter: dict, permission: str = "read"):
        group_ids = filter.get("group_ids", [])
        user_id = filter.get("user_id")

        dialect_name = db.bind.dialect.name

        # Public access
        conditions = []
        if group_ids or user_id:
            conditions.extend(
                [
                    Channel.access_control.is_(None),
                    cast(Channel.access_control, String) == "null",
                ]
            )

        # User-level permission
        if user_id:
            conditions.append(Channel.user_id == user_id)

        # Group-level permission
        if group_ids:
            group_conditions = []
            for gid in group_ids:
                if dialect_name == "sqlite":
                    group_conditions.append(
                        Channel.access_control[permission]["group_ids"].contains([gid])
                    )
                elif dialect_name == "postgresql":
                    group_conditions.append(
                        cast(
                            Channel.access_control[permission]["group_ids"],
                            JSONB,
                        ).contains([gid])
                    )
            conditions.append(or_(*group_conditions))

        if conditions:
            query = query.filter(or_(*conditions))

        return query

    def get_channels_by_user_id(self, user_id: str) -> list[ChannelModel]:
        with get_db() as db:
            user_group_ids = [
                group.id for group in Groups.get_groups_by_member_id(user_id)
            ]

            membership_channels = (
                db.query(Channel)
                .join(ChannelMember, Channel.id == ChannelMember.channel_id)
                .filter(
                    Channel.deleted_at.is_(None),
                    Channel.archived_at.is_(None),
                    Channel.type.in_(["group", "dm"]),
                    ChannelMember.user_id == user_id,
                    ChannelMember.is_active.is_(True),
                )
                .all()
            )

            query = db.query(Channel).filter(
                Channel.deleted_at.is_(None),
                Channel.archived_at.is_(None),
                or_(
                    Channel.type.is_(None),  # True NULL/None
                    Channel.type == "",  # Empty string
                    and_(Channel.type != "group", Channel.type != "dm"),
                ),
            )
            query = self._has_permission(
                db, query, {"user_id": user_id, "group_ids": user_group_ids}
            )

            standard_channels = query.all()

            all_channels = membership_channels + standard_channels
            return [ChannelModel.model_validate(c) for c in all_channels]

    def get_dm_channel_by_user_ids(self, user_ids: list[str]) -> Optional[ChannelModel]:
        with get_db() as db:
            # Ensure uniqueness in case a list with duplicates is passed
            unique_user_ids = list(set(user_ids))

            match_count = func.sum(
                case(
                    (ChannelMember.user_id.in_(unique_user_ids), 1),
                    else_=0,
                )
            )

            subquery = (
                db.query(ChannelMember.channel_id)
                .group_by(ChannelMember.channel_id)
                # 1. Channel must have exactly len(user_ids) members
                .having(func.count(ChannelMember.user_id) == len(unique_user_ids))
                # 2. All those members must be in unique_user_ids
                .having(match_count == len(unique_user_ids))
                .subquery()
            )

            channel = (
                db.query(Channel)
                .filter(
                    Channel.id.in_(subquery),
                    Channel.type == "dm",
                )
                .first()
            )

            return ChannelModel.model_validate(channel) if channel else None
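
# --- Illustrative note (not part of the diff) ----------------------------
# Why get_dm_channel_by_user_ids needs BOTH having() clauses: a channel
# must have exactly as many members as requested AND every member must be
# in the requested set. A pure-Python sketch over made-up data:
memberships = {
    "ch1": {"alice", "bob"},           # the exact DM being looked for
    "ch2": {"alice", "bob", "carol"},  # superset: member-count check rejects it
    "ch3": {"alice"},                  # subset: match-count check rejects it
}

def find_dm(user_ids: list[str]) -> str | None:
    wanted = set(user_ids)  # mirrors the list(set(user_ids)) de-duplication
    for channel_id, members in memberships.items():
        if len(members) == len(wanted) and members == wanted:
            return channel_id
    return None

assert find_dm(["alice", "bob"]) == "ch1"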
    def add_members_to_channel(
        self,
        channel_id: str,
        invited_by: str,
        user_ids: Optional[list[str]] = None,
        group_ids: Optional[list[str]] = None,
    ) -> list[ChannelMemberModel]:
        with get_db() as db:
            # 1. Collect all user_ids including groups + inviter
            requested_users = self._collect_unique_user_ids(
                invited_by, user_ids, group_ids
            )

            existing_users = {
                row.user_id
                for row in db.query(ChannelMember.user_id)
                .filter(ChannelMember.channel_id == channel_id)
                .all()
            }

            new_user_ids = requested_users - existing_users
            if not new_user_ids:
                return []  # Nothing to add

            new_memberships = self._create_membership_models(
                channel_id, invited_by, new_user_ids
            )

            db.add_all(new_memberships)
            db.commit()

    def get_channels_by_user_id(
        self, user_id: str, permission: str = "read"
    ) -> list[ChannelModel]:
        channels = self.get_channels()
        return [
            ChannelMemberModel.model_validate(membership)
            for membership in new_memberships
            channel
            for channel in channels
            if channel.user_id == user_id
            or has_access(user_id, permission, channel.access_control)
        ]

    def remove_members_from_channel(
        self,
        channel_id: str,
        user_ids: list[str],
    ) -> int:
        with get_db() as db:
            result = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id.in_(user_ids),
                )
                .delete(synchronize_session=False)
            )
            db.commit()
            return result  # number of rows deleted

    def is_user_channel_manager(self, channel_id: str, user_id: str) -> bool:
        with get_db() as db:
            # Check if the user is the creator of the channel
            # or has a 'manager' role in ChannelMember
            channel = db.query(Channel).filter(Channel.id == channel_id).first()
            if channel and channel.user_id == user_id:
                return True

            membership = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id == user_id,
                    ChannelMember.role == "manager",
                )
                .first()
            )
            return membership is not None

    def join_channel(
        self, channel_id: str, user_id: str
    ) -> Optional[ChannelMemberModel]:
        with get_db() as db:
            # Check if the membership already exists
            existing_membership = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id == user_id,
                )
                .first()
            )
            if existing_membership:
                return ChannelMemberModel.model_validate(existing_membership)

            # Create new membership
            channel_member = ChannelMemberModel(
                **{
                    "id": str(uuid.uuid4()),
                    "channel_id": channel_id,
                    "user_id": user_id,
                    "status": "joined",
                    "is_active": True,
                    "is_channel_muted": False,
                    "is_channel_pinned": False,
                    "joined_at": int(time.time_ns()),
                    "left_at": None,
                    "last_read_at": int(time.time_ns()),
                    "created_at": int(time.time_ns()),
                    "updated_at": int(time.time_ns()),
                }
            )
            new_membership = ChannelMember(**channel_member.model_dump())

            db.add(new_membership)
            db.commit()
            return channel_member

    def leave_channel(self, channel_id: str, user_id: str) -> bool:
        with get_db() as db:
            membership = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id == user_id,
                )
                .first()
            )
            if not membership:
                return False

            membership.status = "left"
            membership.is_active = False
            membership.left_at = int(time.time_ns())
            membership.updated_at = int(time.time_ns())

            db.commit()
            return True

    def get_member_by_channel_and_user_id(
        self, channel_id: str, user_id: str
    ) -> Optional[ChannelMemberModel]:
        with get_db() as db:
            membership = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id == user_id,
                )
                .first()
            )
            return ChannelMemberModel.model_validate(membership) if membership else None

    def get_members_by_channel_id(self, channel_id: str) -> list[ChannelMemberModel]:
        with get_db() as db:
            memberships = (
                db.query(ChannelMember)
                .filter(ChannelMember.channel_id == channel_id)
                .all()
            )
            return [
                ChannelMemberModel.model_validate(membership)
                for membership in memberships
            ]

    def pin_channel(self, channel_id: str, user_id: str, is_pinned: bool) -> bool:
        with get_db() as db:
            membership = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id == user_id,
                )
                .first()
            )
            if not membership:
                return False

            membership.is_channel_pinned = is_pinned
            membership.updated_at = int(time.time_ns())

            db.commit()
            return True

    def update_member_last_read_at(self, channel_id: str, user_id: str) -> bool:
        with get_db() as db:
            membership = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id == user_id,
                )
                .first()
            )
            if not membership:
                return False

            membership.last_read_at = int(time.time_ns())
            membership.updated_at = int(time.time_ns())

            db.commit()
            return True

    def update_member_active_status(
        self, channel_id: str, user_id: str, is_active: bool
    ) -> bool:
        with get_db() as db:
            membership = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id == user_id,
                )
                .first()
            )
            if not membership:
                return False

            membership.is_active = is_active
            membership.updated_at = int(time.time_ns())

            db.commit()
            return True

    def is_user_channel_member(self, channel_id: str, user_id: str) -> bool:
        with get_db() as db:
            membership = (
                db.query(ChannelMember)
                .filter(
                    ChannelMember.channel_id == channel_id,
                    ChannelMember.user_id == user_id,
                )
                .first()
            )
            return membership is not None

    def get_channel_by_id(self, id: str) -> Optional[ChannelModel]:
        with get_db() as db:
            channel = db.query(Channel).filter(Channel.id == id).first()
            return ChannelModel.model_validate(channel) if channel else None

    def get_channels_by_file_id(self, file_id: str) -> list[ChannelModel]:
        with get_db() as db:
            channel_files = (
                db.query(ChannelFile).filter(ChannelFile.file_id == file_id).all()
            )
            channel_ids = [cf.channel_id for cf in channel_files]
            channels = db.query(Channel).filter(Channel.id.in_(channel_ids)).all()
            return [ChannelModel.model_validate(channel) for channel in channels]

    def get_channels_by_file_id_and_user_id(
        self, file_id: str, user_id: str
    ) -> list[ChannelModel]:
        with get_db() as db:
            # 1. Determine which channels have this file
            channel_file_rows = (
                db.query(ChannelFile).filter(ChannelFile.file_id == file_id).all()
            )
            channel_ids = [row.channel_id for row in channel_file_rows]

            if not channel_ids:
                return []

            # 2. Load all channel rows that still exist
            channels = (
                db.query(Channel)
                .filter(
                    Channel.id.in_(channel_ids),
                    Channel.deleted_at.is_(None),
                    Channel.archived_at.is_(None),
                )
                .all()
            )
            if not channels:
                return []

            # Preload user's group membership
            user_group_ids = [g.id for g in Groups.get_groups_by_member_id(user_id)]

            allowed_channels = []

            for channel in channels:
                # --- Case A: group or dm => user must be an active member ---
                if channel.type in ["group", "dm"]:
                    membership = (
                        db.query(ChannelMember)
                        .filter(
                            ChannelMember.channel_id == channel.id,
                            ChannelMember.user_id == user_id,
                            ChannelMember.is_active.is_(True),
                        )
                        .first()
                    )
                    if membership:
                        allowed_channels.append(ChannelModel.model_validate(channel))
                    continue

                # --- Case B: standard channel => rely on ACL permissions ---
                query = db.query(Channel).filter(Channel.id == channel.id)

                query = self._has_permission(
                    db,
                    query,
                    {"user_id": user_id, "group_ids": user_group_ids},
                    permission="read",
                )

                allowed = query.first()
                if allowed:
                    allowed_channels.append(ChannelModel.model_validate(allowed))

            return allowed_channels

    def get_channel_by_id_and_user_id(
        self, id: str, user_id: str
    ) -> Optional[ChannelModel]:
        with get_db() as db:
            # Fetch the channel
            channel: Channel = (
                db.query(Channel)
                .filter(
                    Channel.id == id,
                    Channel.deleted_at.is_(None),
                    Channel.archived_at.is_(None),
                )
                .first()
            )

            if not channel:
                return None

            # If the channel is a group or dm, read access requires membership (active)
            if channel.type in ["group", "dm"]:
                membership = (
                    db.query(ChannelMember)
                    .filter(
                        ChannelMember.channel_id == id,
                        ChannelMember.user_id == user_id,
                        ChannelMember.is_active.is_(True),
                    )
                    .first()
                )
                if membership:
                    return ChannelModel.model_validate(channel)
                else:
                    return None

            # For channels that are NOT group/dm, fall back to ACL-based read access
            query = db.query(Channel).filter(Channel.id == id)

            # Determine user groups
            user_group_ids = [
                group.id for group in Groups.get_groups_by_member_id(user_id)
            ]

            # Apply ACL rules
            query = self._has_permission(
                db,
                query,
                {"user_id": user_id, "group_ids": user_group_ids},
                permission="read",
            )

            channel_allowed = query.first()
            return (
                ChannelModel.model_validate(channel_allowed)
                if channel_allowed
                else None
            )

    def update_channel_by_id(
        self, id: str, form_data: ChannelForm
    ) -> Optional[ChannelModel]:

@@ -826,77 +118,14 @@ class ChannelTable:
                return None

            channel.name = form_data.name
            channel.description = form_data.description
            channel.is_private = form_data.is_private

            channel.data = form_data.data
            channel.meta = form_data.meta

            channel.access_control = form_data.access_control
            channel.updated_at = int(time.time_ns())

            db.commit()
            return ChannelModel.model_validate(channel) if channel else None

    def add_file_to_channel_by_id(
        self, channel_id: str, file_id: str, user_id: str
    ) -> Optional[ChannelFileModel]:
        with get_db() as db:
            channel_file = ChannelFileModel(
                **{
                    "id": str(uuid.uuid4()),
                    "channel_id": channel_id,
                    "file_id": file_id,
                    "user_id": user_id,
                    "created_at": int(time.time()),
                    "updated_at": int(time.time()),
                }
            )

            try:
                result = ChannelFile(**channel_file.model_dump())
                db.add(result)
                db.commit()
                db.refresh(result)
                if result:
                    return ChannelFileModel.model_validate(result)
                else:
                    return None
            except Exception:
                return None

    def set_file_message_id_in_channel_by_id(
        self, channel_id: str, file_id: str, message_id: str
    ) -> bool:
        try:
            with get_db() as db:
                channel_file = (
                    db.query(ChannelFile)
                    .filter_by(channel_id=channel_id, file_id=file_id)
                    .first()
                )
                if not channel_file:
                    return False

                channel_file.message_id = message_id
                channel_file.updated_at = int(time.time())

                db.commit()
                return True
        except Exception:
            return False

    def remove_file_from_channel_by_id(self, channel_id: str, file_id: str) -> bool:
        try:
            with get_db() as db:
                db.query(ChannelFile).filter_by(
                    channel_id=channel_id, file_id=file_id
                ).delete()
                db.commit()
                return True
        except Exception:
            return False

    def delete_channel_by_id(self, id: str):
        with get_db() as db:
            db.query(Channel).filter(Channel.id == id).delete()
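
# --- Illustrative note (not part of the diff) ----------------------------
# The shape of the access_control JSON implied by _has_permission above,
# which reads access_control[permission]["group_ids"]. A Python None (or a
# JSON "null") means the channel is public. Ids below are made up, and any
# keys beyond "group_ids" are assumptions, not confirmed by this diff.
public_channel_acl = None
restricted_channel_acl = {
    "read": {"group_ids": ["group-123"]},
    "write": {"group_ids": []},
}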
@@ -7,20 +7,10 @@ from typing import Optional
from open_webui.internal.db import Base, get_db
from open_webui.models.tags import TagModel, Tag, Tags
from open_webui.models.folders import Folders
from open_webui.utils.misc import sanitize_data_for_db, sanitize_text_for_db
from open_webui.env import SRC_LOG_LEVELS

from pydantic import BaseModel, ConfigDict
from sqlalchemy import (
    BigInteger,
    Boolean,
    Column,
    ForeignKey,
    String,
    Text,
    JSON,
    Index,
    UniqueConstraint,
)
from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON, Index
from sqlalchemy import or_, func, select, and_, text
from sqlalchemy.sql import exists
from sqlalchemy.sql.expression import bindparam

@@ -30,12 +20,13 @@ from sqlalchemy.sql.expression import bindparam
####################

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])


class Chat(Base):
    __tablename__ = "chat"

    id = Column(String, primary_key=True, unique=True)
    id = Column(String, primary_key=True)
    user_id = Column(String)
    title = Column(Text)
    chat = Column(JSON)

@@ -84,38 +75,6 @@ class ChatModel(BaseModel):
    folder_id: Optional[str] = None


class ChatFile(Base):
    __tablename__ = "chat_file"

    id = Column(Text, unique=True, primary_key=True)
    user_id = Column(Text, nullable=False)

    chat_id = Column(Text, ForeignKey("chat.id", ondelete="CASCADE"), nullable=False)
    message_id = Column(Text, nullable=True)
    file_id = Column(Text, ForeignKey("file.id", ondelete="CASCADE"), nullable=False)

    created_at = Column(BigInteger, nullable=False)
    updated_at = Column(BigInteger, nullable=False)

    __table_args__ = (
        UniqueConstraint("chat_id", "file_id", name="uq_chat_file_chat_file"),
    )


class ChatFileModel(BaseModel):
    id: str
    user_id: str

    chat_id: str
    message_id: Optional[str] = None
    file_id: str

    created_at: int
    updated_at: int

    model_config = ConfigDict(from_attributes=True)


####################
# Forms
####################

@@ -133,10 +92,6 @@ class ChatImportForm(ChatForm):
    updated_at: Optional[int] = None


class ChatsImportForm(BaseModel):
    chats: list[ChatImportForm]


class ChatTitleMessagesForm(BaseModel):
    title: str
    messages: list[dict]

@@ -167,77 +122,7 @@ class ChatTitleIdResponse(BaseModel):
    created_at: int


class ChatListResponse(BaseModel):
    items: list[ChatModel]
    total: int


class ChatUsageStatsResponse(BaseModel):
    id: str  # chat id

    models: dict = {}  # models used in the chat with their usage counts
    message_count: int  # number of messages in the chat

    history_models: dict = {}  # models used in the chat history with their usage counts
    history_message_count: int  # number of messages in the chat history
    history_user_message_count: int  # number of user messages in the chat history
    history_assistant_message_count: (
        int  # number of assistant messages in the chat history
    )

    average_response_time: (
        float  # average response time of assistant messages in seconds
    )
    average_user_message_content_length: (
        float  # average length of user message contents
    )
    average_assistant_message_content_length: (
        float  # average length of assistant message contents
    )

    tags: list[str] = []  # tags associated with the chat

    last_message_at: int  # timestamp of the last message
    updated_at: int
    created_at: int

    model_config = ConfigDict(extra="allow")


class ChatUsageStatsListResponse(BaseModel):
    items: list[ChatUsageStatsResponse]
    total: int
    model_config = ConfigDict(extra="allow")


class ChatTable:
    def _clean_null_bytes(self, obj):
        """Recursively remove null bytes from strings in dict/list structures."""
        return sanitize_data_for_db(obj)

    def _sanitize_chat_row(self, chat_item):
        """
        Clean a Chat SQLAlchemy model's title + chat JSON,
        and return True if anything changed.
        """
        changed = False

        # Clean title
        if chat_item.title:
            cleaned = self._clean_null_bytes(chat_item.title)
            if cleaned != chat_item.title:
                chat_item.title = cleaned
                changed = True

        # Clean JSON
        if chat_item.chat:
            cleaned = self._clean_null_bytes(chat_item.chat)
            if cleaned != chat_item.chat:
                chat_item.chat = cleaned
                changed = True

        return changed

    def insert_new_chat(self, user_id: str, form_data: ChatForm) -> Optional[ChatModel]:
        with get_db() as db:
            id = str(uuid.uuid4())

@@ -245,76 +130,68 @@ class ChatTable:
                **{
                    "id": id,
                    "user_id": user_id,
                    "title": self._clean_null_bytes(
                    "title": (
                        form_data.chat["title"]
                        if "title" in form_data.chat
                        else "New Chat"
                    ),
                    "chat": self._clean_null_bytes(form_data.chat),
                    "chat": form_data.chat,
                    "folder_id": form_data.folder_id,
                    "created_at": int(time.time()),
                    "updated_at": int(time.time()),
                }
            )

            chat_item = Chat(**chat.model_dump())
            db.add(chat_item)
            result = Chat(**chat.model_dump())
            db.add(result)
            db.commit()
            db.refresh(chat_item)
            return ChatModel.model_validate(chat_item) if chat_item else None
            db.refresh(result)
            return ChatModel.model_validate(result) if result else None

    def _chat_import_form_to_chat_model(
    def import_chat(
        self, user_id: str, form_data: ChatImportForm
    ) -> ChatModel:
    ) -> Optional[ChatModel]:
        with get_db() as db:
            id = str(uuid.uuid4())
            chat = ChatModel(
                **{
                    "id": id,
                    "user_id": user_id,
                    "title": self._clean_null_bytes(
                        form_data.chat["title"] if "title" in form_data.chat else "New Chat"
                    "title": (
                        form_data.chat["title"]
                        if "title" in form_data.chat
                        else "New Chat"
                    ),
                    "chat": self._clean_null_bytes(form_data.chat),
                    "chat": form_data.chat,
                    "meta": form_data.meta,
                    "pinned": form_data.pinned,
                    "folder_id": form_data.folder_id,
                    "created_at": (
                        form_data.created_at if form_data.created_at else int(time.time())
                        form_data.created_at
                        if form_data.created_at
                        else int(time.time())
                    ),
                    "updated_at": (
                        form_data.updated_at if form_data.updated_at else int(time.time())
                        form_data.updated_at
                        if form_data.updated_at
                        else int(time.time())
                    ),
                }
            )
            return chat

    def import_chats(
        self, user_id: str, chat_import_forms: list[ChatImportForm]
    ) -> list[ChatModel]:
        with get_db() as db:
            chats = []

            for form_data in chat_import_forms:
                chat = self._chat_import_form_to_chat_model(user_id, form_data)
                chats.append(Chat(**chat.model_dump()))

            db.add_all(chats)
            result = Chat(**chat.model_dump())
            db.add(result)
            db.commit()
            return [ChatModel.model_validate(chat) for chat in chats]
            db.refresh(result)
            return ChatModel.model_validate(result) if result else None

    def update_chat_by_id(self, id: str, chat: dict) -> Optional[ChatModel]:
        try:
            with get_db() as db:
                chat_item = db.get(Chat, id)
                chat_item.chat = self._clean_null_bytes(chat)
                chat_item.title = (
                    self._clean_null_bytes(chat["title"])
                    if "title" in chat
                    else "New Chat"
                )

                chat_item.chat = chat
                chat_item.title = chat["title"] if "title" in chat else "New Chat"
                chat_item.updated_at = int(time.time())

                db.commit()
                db.refresh(chat_item)

@@ -359,7 +236,7 @@ class ChatTable:

        return chat.chat.get("title", "New Chat")

    def get_messages_map_by_chat_id(self, id: str) -> Optional[dict]:
    def get_messages_by_chat_id(self, id: str) -> Optional[dict]:
        chat = self.get_chat_by_id(id)
        if chat is None:
            return None

@@ -384,7 +261,7 @@ class ChatTable:

        # Sanitize message content for null characters before upserting
        if isinstance(message.get("content"), str):
            message["content"] = sanitize_text_for_db(message["content"])
            message["content"] = message["content"].replace("\x00", "")

        chat = chat.chat
        history = chat.get("history", {})

@@ -420,27 +297,6 @@ class ChatTable:
        chat["history"] = history
        return self.update_chat_by_id(id, chat)

    def add_message_files_by_id_and_message_id(
        self, id: str, message_id: str, files: list[dict]
    ) -> list[dict]:
        chat = self.get_chat_by_id(id)
        if chat is None:
            return None

        chat = chat.chat
        history = chat.get("history", {})

        message_files = []

        if message_id in history.get("messages", {}):
            message_files = history["messages"][message_id].get("files", [])
            message_files = message_files + files
            history["messages"][message_id]["files"] = message_files

        chat["history"] = history
        self.update_chat_by_id(id, chat)
        return message_files

    def insert_shared_chat_by_chat_id(self, chat_id: str) -> Optional[ChatModel]:
        with get_db() as db:
            # Get the existing chat to share

@@ -510,15 +366,6 @@ class ChatTable:
        except Exception:
            return False

    def unarchive_all_chats_by_user_id(self, user_id: str) -> bool:
        try:
            with get_db() as db:
                db.query(Chat).filter_by(user_id=user_id).update({"archived": False})
                db.commit()
                return True
        except Exception:
            return False

    def update_chat_share_id_by_id(
        self, id: str, share_id: Optional[str]
    ) -> Optional[ChatModel]:

@@ -549,7 +396,6 @@ class ChatTable:
        with get_db() as db:
            chat = db.get(Chat, id)
            chat.archived = not chat.archived
            chat.folder_id = None
            chat.updated_at = int(time.time())
            db.commit()
            db.refresh(chat)

@@ -585,10 +431,7 @@ class ChatTable:
            order_by = filter.get("order_by")
            direction = filter.get("direction")

            if order_by and direction:
                if not getattr(Chat, order_by, None):
                    raise ValueError("Invalid order_by field")

            if order_by and direction and getattr(Chat, order_by):
                if direction.lower() == "asc":
                    query = query.order_by(getattr(Chat, order_by).asc())
                elif direction.lower() == "desc":

@@ -649,18 +492,11 @@ class ChatTable:
        self,
        user_id: str,
        include_archived: bool = False,
        include_folders: bool = False,
        include_pinned: bool = False,
        skip: Optional[int] = None,
        limit: Optional[int] = None,
    ) -> list[ChatTitleIdResponse]:
        with get_db() as db:
            query = db.query(Chat).filter_by(user_id=user_id)

            if not include_folders:
                query = query.filter_by(folder_id=None)

            if not include_pinned:
            query = db.query(Chat).filter_by(user_id=user_id).filter_by(folder_id=None)
                query = query.filter(or_(Chat.pinned == False, Chat.pinned == None))

            if not include_archived:

@@ -706,15 +542,8 @@ class ChatTable:
    def get_chat_by_id(self, id: str) -> Optional[ChatModel]:
        try:
            with get_db() as db:
                chat_item = db.get(Chat, id)
                if chat_item is None:
                    return None

                if self._sanitize_chat_row(chat_item):
                    db.commit()
                    db.refresh(chat_item)

                return ChatModel.model_validate(chat_item)
                chat = db.get(Chat, id)
                return ChatModel.model_validate(chat)
        except Exception:
            return None

@@ -749,31 +578,14 @@ class ChatTable:
            )
            return [ChatModel.model_validate(chat) for chat in all_chats]

    def get_chats_by_user_id(
        self, user_id: str, skip: Optional[int] = None, limit: Optional[int] = None
    ) -> ChatListResponse:
    def get_chats_by_user_id(self, user_id: str) -> list[ChatModel]:
        with get_db() as db:
            query = (
            all_chats = (
                db.query(Chat)
                .filter_by(user_id=user_id)
                .order_by(Chat.updated_at.desc())
            )

            total = query.count()

            if skip is not None:
                query = query.offset(skip)
            if limit is not None:
                query = query.limit(limit)

            all_chats = query.all()

            return ChatListResponse(
                **{
                    "items": [ChatModel.model_validate(chat) for chat in all_chats],
                    "total": total,
                }
            )
            return [ChatModel.model_validate(chat) for chat in all_chats]

    def get_pinned_chats_by_user_id(self, user_id: str) -> list[ChatModel]:
        with get_db() as db:

@@ -804,7 +616,7 @@ class ChatTable:
        """
        Filters chats based on a search query using Python, allowing pagination using skip and limit.
        """
        search_text = sanitize_text_for_db(search_text).lower().strip()
        search_text = search_text.replace("\u0000", "").lower().strip()

        if not search_text:
            return self.get_chat_list_by_user_id(

@@ -934,32 +746,21 @@ class ChatTable:
            )

        elif dialect_name == "postgresql":
            # PostgreSQL doesn't allow null bytes in text. We filter those out by checking
            # the JSON representation for \u0000 before attempting text extraction

            # Safety filter: JSON field must not contain \u0000
            query = query.filter(text("Chat.chat::text NOT LIKE '%\\\\u0000%'"))

            # Safety filter: title must not contain actual null bytes
            query = query.filter(text("Chat.title::text NOT LIKE '%\\x00%'"))

            postgres_content_sql = """
                EXISTS (
                    SELECT 1
                    FROM json_array_elements(Chat.chat->'messages') AS message
                    WHERE json_typeof(message->'content') = 'string'
                    AND LOWER(message->>'content') LIKE '%' || :content_key || '%'
            # PostgreSQL relies on proper JSON query for search
            postgres_content_sql = (
                "EXISTS ("
                " SELECT 1 "
                " FROM json_array_elements(Chat.chat->'messages') AS message "
                " WHERE LOWER(message->>'content') LIKE '%' || :content_key || '%'"
                ")"
            )
            """

            postgres_content_clause = text(postgres_content_sql)

            query = query.filter(
                or_(
                    Chat.title.ilike(bindparam("title_key")),
                    postgres_content_clause,
                ).params(title_key=f"%{search_text}%", content_key=search_text)
            )
                ).params(title_key=f"%{search_text}%", content_key=search_text.lower())

        # Check if there are any tags to filter, it should have all the tags
        if "none" in tag_ids:

@@ -1004,7 +805,7 @@ class ChatTable:
        return [ChatModel.model_validate(chat) for chat in all_chats]

    def get_chats_by_folder_id_and_user_id(
        self, folder_id: str, user_id: str, skip: int = 0, limit: int = 60
        self, folder_id: str, user_id: str
    ) -> list[ChatModel]:
        with get_db() as db:
            query = db.query(Chat).filter_by(folder_id=folder_id, user_id=user_id)

@@ -1013,11 +814,6 @@ class ChatTable:

            query = query.order_by(Chat.updated_at.desc())

            if skip:
                query = query.offset(skip)
            if limit:
                query = query.limit(limit)

            all_chats = query.all()
            return [ChatModel.model_validate(chat) for chat in all_chats]

@@ -1147,16 +943,6 @@ class ChatTable:

        return count

    def count_chats_by_folder_id_and_user_id(self, folder_id: str, user_id: str) -> int:
        with get_db() as db:
            query = db.query(Chat).filter_by(user_id=user_id)

            query = query.filter_by(folder_id=folder_id)
            count = query.count()

            log.info(f"Count of chats for folder '{folder_id}': {count}")
            return count

    def delete_tag_by_id_and_user_id_and_tag_name(
        self, id: str, user_id: str, tag_name: str
    ) -> bool:

@@ -1234,20 +1020,6 @@ class ChatTable:
        except Exception:
            return False

    def move_chats_by_user_id_and_folder_id(
        self, user_id: str, folder_id: str, new_folder_id: Optional[str]
    ) -> bool:
        try:
            with get_db() as db:
                db.query(Chat).filter_by(user_id=user_id, folder_id=folder_id).update(
                    {"folder_id": new_folder_id}
                )
                db.commit()

                return True
        except Exception:
            return False

    def delete_shared_chats_by_user_id(self, user_id: str) -> bool:
        try:
            with get_db() as db:

@@ -1261,93 +1033,5 @@ class ChatTable:
        except Exception:
            return False

    def insert_chat_files(
        self, chat_id: str, message_id: str, file_ids: list[str], user_id: str
    ) -> Optional[list[ChatFileModel]]:
        if not file_ids:
            return None

        chat_message_file_ids = [
            item.file_id
            for item in self.get_chat_files_by_chat_id_and_message_id(
                chat_id, message_id
            )
        ]
        # Remove duplicates and existing file_ids
        file_ids = list(
            set(
                [
                    file_id
                    for file_id in file_ids
                    if file_id and file_id not in chat_message_file_ids
                ]
            )
        )
        if not file_ids:
            return None

        try:
            with get_db() as db:
                now = int(time.time())

                chat_files = [
                    ChatFileModel(
                        id=str(uuid.uuid4()),
                        user_id=user_id,
                        chat_id=chat_id,
                        message_id=message_id,
                        file_id=file_id,
                        created_at=now,
                        updated_at=now,
                    )
                    for file_id in file_ids
                ]

                results = [
                    ChatFile(**chat_file.model_dump()) for chat_file in chat_files
                ]

                db.add_all(results)
                db.commit()

                return chat_files
        except Exception:
            return None

    def get_chat_files_by_chat_id_and_message_id(
        self, chat_id: str, message_id: str
    ) -> list[ChatFileModel]:
        with get_db() as db:
            all_chat_files = (
                db.query(ChatFile)
                .filter_by(chat_id=chat_id, message_id=message_id)
                .order_by(ChatFile.created_at.asc())
                .all()
            )
            return [
                ChatFileModel.model_validate(chat_file) for chat_file in all_chat_files
            ]

    def delete_chat_file(self, chat_id: str, file_id: str) -> bool:
        try:
            with get_db() as db:
                db.query(ChatFile).filter_by(chat_id=chat_id, file_id=file_id).delete()
                db.commit()
                return True
        except Exception:
            return False

    def get_shared_chats_by_file_id(self, file_id: str) -> list[ChatModel]:
        with get_db() as db:
            # Join Chat and ChatFile tables to get shared chats associated with the file_id
            all_chats = (
                db.query(Chat)
                .join(ChatFile, Chat.id == ChatFile.chat_id)
                .filter(ChatFile.file_id == file_id, Chat.share_id.isnot(None))
                .all()
            )

            return [ChatModel.model_validate(chat) for chat in all_chats]


Chats = ChatTable()
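
# --- Illustrative note (not part of the diff) ----------------------------
# A minimal sketch of the null-byte sanitisation this file relies on. The
# real helpers live in open_webui.utils.misc; this re-implementation is
# illustrative only (PostgreSQL rejects \x00 in text columns, so the chat
# title and JSON payload are scrubbed before writes).
def sanitize_text(value: str) -> str:
    # strip literal NUL characters outright
    return value.replace("\x00", "")

def sanitize_data(obj):
    # recurse through strings, lists, and dicts; leave other types alone
    if isinstance(obj, str):
        return sanitize_text(obj)
    if isinstance(obj, list):
        return [sanitize_data(v) for v in obj]
    if isinstance(obj, dict):
        return {k: sanitize_data(v) for k, v in obj.items()}
    return obj

assert sanitize_data({"title": "hi\x00there"}) == {"title": "hithere"}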
@@ -4,12 +4,14 @@ import uuid
from typing import Optional

from open_webui.internal.db import Base, get_db
from open_webui.models.users import User
from open_webui.models.chats import Chats

from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, Text, JSON, Boolean

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])


####################

@@ -19,7 +21,7 @@ log = logging.getLogger(__name__)

class Feedback(Base):
    __tablename__ = "feedback"
    id = Column(Text, primary_key=True, unique=True)
    id = Column(Text, primary_key=True)
    user_id = Column(Text)
    version = Column(BigInteger, default=0)
    type = Column(Text)

@@ -60,13 +62,6 @@ class FeedbackResponse(BaseModel):
    updated_at: int


class FeedbackIdResponse(BaseModel):
    id: str
    user_id: str
    created_at: int
    updated_at: int


class RatingData(BaseModel):
    rating: Optional[str | int] = None
    model_id: Optional[str] = None

@@ -97,28 +92,6 @@ class FeedbackForm(BaseModel):
    model_config = ConfigDict(extra="allow")


class UserResponse(BaseModel):
    id: str
    name: str
    email: str
    role: str = "pending"

    last_active_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch
    created_at: int  # timestamp in epoch

    model_config = ConfigDict(from_attributes=True)


class FeedbackUserResponse(FeedbackResponse):
    user: Optional[UserResponse] = None


class FeedbackListResponse(BaseModel):
    items: list[FeedbackUserResponse]
    total: int


class FeedbackTable:
    def insert_new_feedback(
        self, user_id: str, form_data: FeedbackForm

@@ -170,70 +143,6 @@ class FeedbackTable:
        except Exception:
            return None

    def get_feedback_items(
        self, filter: dict = {}, skip: int = 0, limit: int = 30
    ) -> FeedbackListResponse:
        with get_db() as db:
            query = db.query(Feedback, User).join(User, Feedback.user_id == User.id)

            if filter:
                order_by = filter.get("order_by")
                direction = filter.get("direction")

                if order_by == "username":
                    if direction == "asc":
                        query = query.order_by(User.name.asc())
                    else:
                        query = query.order_by(User.name.desc())
                elif order_by == "model_id":
                    # it's stored in feedback.data['model_id']
                    if direction == "asc":
                        query = query.order_by(
                            Feedback.data["model_id"].as_string().asc()
                        )
                    else:
                        query = query.order_by(
                            Feedback.data["model_id"].as_string().desc()
                        )
                elif order_by == "rating":
                    # it's stored in feedback.data['rating']
                    if direction == "asc":
                        query = query.order_by(
                            Feedback.data["rating"].as_string().asc()
                        )
                    else:
                        query = query.order_by(
                            Feedback.data["rating"].as_string().desc()
                        )
                elif order_by == "updated_at":
                    if direction == "asc":
                        query = query.order_by(Feedback.updated_at.asc())
                    else:
                        query = query.order_by(Feedback.updated_at.desc())

            else:
                query = query.order_by(Feedback.created_at.desc())

            # Count BEFORE pagination
            total = query.count()

            if skip:
                query = query.offset(skip)
            if limit:
                query = query.limit(limit)

            items = query.all()

            feedbacks = []
            for feedback, user in items:
                feedback_model = FeedbackModel.model_validate(feedback)
                user_model = UserResponse.model_validate(user)
                feedbacks.append(
                    FeedbackUserResponse(**feedback_model.model_dump(), user=user_model)
                )

            return FeedbackListResponse(items=feedbacks, total=total)
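
# --- Illustrative usage (not part of the diff) ---------------------------
# A sketch of calling the pagination/ordering path above. The filter keys
# ("order_by", "direction") are the ones get_feedback_items itself reads;
# the module-level `Feedbacks = FeedbackTable()` singleton is an assumption
# paralleling `Chats = ChatTable()` earlier in this diff.
page = Feedbacks.get_feedback_items(
    filter={"order_by": "rating", "direction": "desc"},
    skip=0,
    limit=30,
)
print(page.total, [item.user.name for item in page.items if item.user])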
    def get_all_feedbacks(self) -> list[FeedbackModel]:
        with get_db() as db:
            return [

@@ -3,10 +3,12 @@ import time
from typing import Optional

from open_webui.internal.db import Base, JSONField, get_db
from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text, JSON

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])

####################
# Files DB Schema

@@ -15,7 +17,7 @@ log = logging.getLogger(__name__)

class File(Base):
    __tablename__ = "file"
    id = Column(String, primary_key=True, unique=True)
    id = Column(String, primary_key=True)
    user_id = Column(String)
    hash = Column(Text, nullable=True)

@@ -80,8 +82,7 @@ class FileModelResponse(BaseModel):

class FileMetadataResponse(BaseModel):
    id: str
    hash: Optional[str] = None
    meta: Optional[dict] = None
    meta: dict
    created_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch

@@ -96,17 +97,6 @@ class FileForm(BaseModel):
    access_control: Optional[dict] = None


class FileUpdateForm(BaseModel):
    hash: Optional[str] = None
    data: Optional[dict] = None
    meta: Optional[dict] = None


class FileListResponse(BaseModel):
    items: list[FileModel]
    total: int


class FilesTable:
    def insert_new_file(self, user_id: str, form_data: FileForm) -> Optional[FileModel]:
        with get_db() as db:

@@ -140,24 +130,12 @@ class FilesTable:
        except Exception:
            return None

    def get_file_by_id_and_user_id(self, id: str, user_id: str) -> Optional[FileModel]:
        with get_db() as db:
            try:
                file = db.query(File).filter_by(id=id, user_id=user_id).first()
                if file:
                    return FileModel.model_validate(file)
                else:
                    return None
            except Exception:
                return None

    def get_file_metadata_by_id(self, id: str) -> Optional[FileMetadataResponse]:
        with get_db() as db:
            try:
                file = db.get(File, id)
                return FileMetadataResponse(
                    id=file.id,
                    hash=file.hash,
                    meta=file.meta,
                    created_at=file.created_at,
                    updated_at=file.updated_at,

@@ -169,15 +147,6 @@ class FilesTable:
        with get_db() as db:
            return [FileModel.model_validate(file) for file in db.query(File).all()]

    def check_access_by_user_id(self, id, user_id, permission="write") -> bool:
        file = self.get_file_by_id(id)
        if not file:
            return False
        if file.user_id == user_id:
            return True
        # Implement additional access control logic here as needed
        return False

    def get_files_by_ids(self, ids: list[str]) -> list[FileModel]:
        with get_db() as db:
            return [

@@ -193,14 +162,11 @@ class FilesTable:
            return [
                FileMetadataResponse(
                    id=file.id,
                    hash=file.hash,
                    meta=file.meta,
                    created_at=file.created_at,
                    updated_at=file.updated_at,
                )
                for file in db.query(
                    File.id, File.hash, File.meta, File.created_at, File.updated_at
                )
                for file in db.query(File)
                .filter(File.id.in_(ids))
                .order_by(File.updated_at.desc())
                .all()

@@ -213,35 +179,11 @@ class FilesTable:
                for file in db.query(File).filter_by(user_id=user_id).all()
            ]

    def update_file_by_id(
        self, id: str, form_data: FileUpdateForm
    ) -> Optional[FileModel]:
        with get_db() as db:
            try:
                file = db.query(File).filter_by(id=id).first()

                if form_data.hash is not None:
                    file.hash = form_data.hash

                if form_data.data is not None:
                    file.data = {**(file.data if file.data else {}), **form_data.data}

                if form_data.meta is not None:
                    file.meta = {**(file.meta if file.meta else {}), **form_data.meta}

                file.updated_at = int(time.time())
                db.commit()
                return FileModel.model_validate(file)
            except Exception as e:
                log.exception(f"Error updating file completely by id: {e}")
                return None
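
# --- Illustrative note (not part of the diff) ----------------------------
# The merge semantics update_file_by_id uses for data/meta: a dict-unpack
# merge where new keys win and existing keys survive. Toy values only.
existing_meta = {"name": "report.pdf", "size": 1024}
patch = {"size": 2048, "content_type": "application/pdf"}
merged = {**existing_meta, **patch}
assert merged == {
    "name": "report.pdf",
    "size": 2048,
    "content_type": "application/pdf",
}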
|
||||
def update_file_hash_by_id(self, id: str, hash: str) -> Optional[FileModel]:
|
||||
with get_db() as db:
|
||||
try:
|
||||
file = db.query(File).filter_by(id=id).first()
|
||||
file.hash = hash
|
||||
file.updated_at = int(time.time())
|
||||
db.commit()
|
||||
|
||||
return FileModel.model_validate(file)
|
||||
|
|
@@ -253,7 +195,6 @@ class FilesTable:
             try:
                 file = db.query(File).filter_by(id=id).first()
                 file.data = {**(file.data if file.data else {}), **data}
-                file.updated_at = int(time.time())
                 db.commit()
                 return FileModel.model_validate(file)
             except Exception as e:
@@ -265,7 +206,6 @@ class FilesTable:
             try:
                 file = db.query(File).filter_by(id=id).first()
                 file.meta = {**(file.meta if file.meta else {}), **meta}
-                file.updated_at = int(time.time())
                 db.commit()
                 return FileModel.model_validate(file)
             except Exception:

@@ -9,9 +9,11 @@ from pydantic import BaseModel, ConfigDict
 from sqlalchemy import BigInteger, Column, Text, JSON, Boolean, func
 
 from open_webui.internal.db import Base, get_db
 from open_webui.env import SRC_LOG_LEVELS
 
+
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])
 
+
 ####################
@@ -21,7 +23,7 @@ log = logging.getLogger(__name__)
 
 class Folder(Base):
     __tablename__ = "folder"
-    id = Column(Text, primary_key=True, unique=True)
+    id = Column(Text, primary_key=True)
     parent_id = Column(Text, nullable=True)
     user_id = Column(Text)
     name = Column(Text)
@@ -48,20 +50,6 @@ class FolderModel(BaseModel):
     model_config = ConfigDict(from_attributes=True)
-
-
-class FolderMetadataResponse(BaseModel):
-    icon: Optional[str] = None
-
-
-class FolderNameIdResponse(BaseModel):
-    id: str
-    name: str
-    meta: Optional[FolderMetadataResponse] = None
-    parent_id: Optional[str] = None
-    is_expanded: bool = False
-    created_at: int
-    updated_at: int
 
 
 ####################
 # Forms
 ####################
@@ -70,14 +58,6 @@ class FolderNameIdResponse(BaseModel):
 class FolderForm(BaseModel):
     name: str
-    data: Optional[dict] = None
-    meta: Optional[dict] = None
     model_config = ConfigDict(extra="allow")
 
 
-class FolderUpdateForm(BaseModel):
-    name: Optional[str] = None
-    data: Optional[dict] = None
-    meta: Optional[dict] = None
-    model_config = ConfigDict(extra="allow")
-
 
@@ -211,7 +191,7 @@ class FolderTable:
                 return
 
     def update_folder_by_id_and_user_id(
-        self, id: str, user_id: str, form_data: FolderUpdateForm
+        self, id: str, user_id: str, form_data: FolderForm
     ) -> Optional[FolderModel]:
         try:
             with get_db() as db:
@@ -242,13 +222,8 @@ class FolderTable:
                         **form_data["data"],
                     }
-
-                if "meta" in form_data:
-                    folder.meta = {
-                        **(folder.meta or {}),
-                        **form_data["meta"],
-                    }
 
                 folder.updated_at = int(time.time())
 
                 db.commit()
 
                 return FolderModel.model_validate(folder)

@@ -3,11 +3,13 @@ import time
 from typing import Optional
 
 from open_webui.internal.db import Base, JSONField, get_db
-from open_webui.models.users import Users, UserModel
+from open_webui.models.users import Users
 from open_webui.env import SRC_LOG_LEVELS
 from pydantic import BaseModel, ConfigDict
 from sqlalchemy import BigInteger, Boolean, Column, String, Text, Index
+
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])
+
 ####################
 # Functions DB Schema
@@ -17,7 +19,7 @@ log = logging.getLogger(__name__)
 class Function(Base):
     __tablename__ = "function"
 
-    id = Column(String, primary_key=True, unique=True)
+    id = Column(String, primary_key=True)
     user_id = Column(String)
     name = Column(Text)
     type = Column(Text)
@@ -35,7 +37,6 @@ class Function(Base):
 class FunctionMeta(BaseModel):
     description: Optional[str] = None
     manifest: Optional[dict] = {}
-    model_config = ConfigDict(extra="allow")
 
 
 class FunctionModel(BaseModel):
@@ -53,31 +54,11 @@ class FunctionModel(BaseModel):
     model_config = ConfigDict(from_attributes=True)
-
-
-class FunctionWithValvesModel(BaseModel):
-    id: str
-    user_id: str
-    name: str
-    type: str
-    content: str
-    meta: FunctionMeta
-    valves: Optional[dict] = None
-    is_active: bool = False
-    is_global: bool = False
-    updated_at: int  # timestamp in epoch
-    created_at: int  # timestamp in epoch
-
-    model_config = ConfigDict(from_attributes=True)
 
 
 ####################
 # Forms
 ####################
-
-
-class FunctionUserResponse(FunctionModel):
-    user: Optional[UserModel] = None
 
 
 class FunctionResponse(BaseModel):
     id: str
     user_id: str
@@ -130,8 +111,8 @@ class FunctionsTable:
             return None
 
     def sync_functions(
-        self, user_id: str, functions: list[FunctionWithValvesModel]
-    ) -> list[FunctionWithValvesModel]:
+        self, user_id: str, functions: list[FunctionModel]
+    ) -> list[FunctionModel]:
         # Synchronize functions for a user by updating existing ones, inserting new ones, and removing those that are no longer present.
         try:
             with get_db() as db:
@@ -185,46 +166,17 @@ class FunctionsTable:
         except Exception:
             return None
 
-    def get_functions(
-        self, active_only=False, include_valves=False
-    ) -> list[FunctionModel | FunctionWithValvesModel]:
+    def get_functions(self, active_only=False) -> list[FunctionModel]:
         with get_db() as db:
             if active_only:
-                functions = db.query(Function).filter_by(is_active=True).all()
-
-            else:
-                functions = db.query(Function).all()
-
-            if include_valves:
                 return [
-                    FunctionWithValvesModel.model_validate(function)
-                    for function in functions
+                    FunctionModel.model_validate(function)
+                    for function in db.query(Function).filter_by(is_active=True).all()
                 ]
             else:
                 return [
-                    FunctionModel.model_validate(function) for function in functions
-                ]
-
-    def get_function_list(self) -> list[FunctionUserResponse]:
-        with get_db() as db:
-            functions = db.query(Function).order_by(Function.updated_at.desc()).all()
-            user_ids = list(set(func.user_id for func in functions))
-
-            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
-            users_dict = {user.id: user for user in users}
-
-            return [
-                FunctionUserResponse.model_validate(
-                    {
-                        **FunctionModel.model_validate(func).model_dump(),
-                        "user": (
-                            users_dict.get(func.user_id).model_dump()
-                            if func.user_id in users_dict
-                            else None
-                        ),
-                    }
-                )
-                for func in functions
+                    FunctionModel.model_validate(function)
+                    for function in db.query(Function).all()
                 ]
 
     def get_functions_by_type(
@@ -285,29 +237,6 @@ class FunctionsTable:
         except Exception:
             return None
-
-    def update_function_metadata_by_id(
-        self, id: str, metadata: dict
-    ) -> Optional[FunctionModel]:
-        with get_db() as db:
-            try:
-                function = db.get(Function, id)
-
-                if function:
-                    if function.meta:
-                        function.meta = {**function.meta, **metadata}
-                    else:
-                        function.meta = metadata
-
-                    function.updated_at = int(time.time())
-                    db.commit()
-                    db.refresh(function)
-                    return self.get_function_by_id(id)
-                else:
-                    return None
-            except Exception as e:
-                log.exception(f"Error updating function metadata by id {id}: {e}")
-                return None
 
     def get_user_valves_by_id_and_user_id(
         self, id: str, user_id: str
     ) -> Optional[dict]:

@@ -5,26 +5,17 @@ from typing import Optional
 import uuid
 
 from open_webui.internal.db import Base, get_db
 from open_webui.env import SRC_LOG_LEVELS
 
 from open_webui.models.files import FileMetadataResponse
 
+
 from pydantic import BaseModel, ConfigDict
-from sqlalchemy import (
-    BigInteger,
-    Column,
-    String,
-    Text,
-    JSON,
-    and_,
-    func,
-    ForeignKey,
-    cast,
-    or_,
-)
+from sqlalchemy import BigInteger, Column, String, Text, JSON, func
 
+
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])
 
 ####################
 # UserGroup DB Schema
@@ -44,12 +35,14 @@ class Group(Base):
     meta = Column(JSON, nullable=True)
 
     permissions = Column(JSON, nullable=True)
+    user_ids = Column(JSON, nullable=True)
 
     created_at = Column(BigInteger)
     updated_at = Column(BigInteger)
 
 
 class GroupModel(BaseModel):
+    model_config = ConfigDict(from_attributes=True)
     id: str
     user_id: str
 
@@ -60,64 +53,44 @@ class GroupModel(BaseModel):
     meta: Optional[dict] = None
 
     permissions: Optional[dict] = None
+    user_ids: list[str] = []
 
     created_at: int  # timestamp in epoch
     updated_at: int  # timestamp in epoch
 
-    model_config = ConfigDict(from_attributes=True)
-
-
-class GroupMember(Base):
-    __tablename__ = "group_member"
-
-    id = Column(Text, unique=True, primary_key=True)
-    group_id = Column(
-        Text,
-        ForeignKey("group.id", ondelete="CASCADE"),
-        nullable=False,
-    )
-    user_id = Column(Text, nullable=False)
-    created_at = Column(BigInteger, nullable=True)
-    updated_at = Column(BigInteger, nullable=True)
-
-
-class GroupMemberModel(BaseModel):
-    id: str
-    group_id: str
-    user_id: str
-    created_at: Optional[int] = None  # timestamp in epoch
-    updated_at: Optional[int] = None  # timestamp in epoch
-
 
 ####################
 # Forms
 ####################
 
 
-class GroupResponse(GroupModel):
-    member_count: Optional[int] = None
+class GroupResponse(BaseModel):
+    id: str
+    user_id: str
+    name: str
+    description: str
+    permissions: Optional[dict] = None
+    data: Optional[dict] = None
+    meta: Optional[dict] = None
+    user_ids: list[str] = []
+    created_at: int  # timestamp in epoch
+    updated_at: int  # timestamp in epoch
 
 
 class GroupForm(BaseModel):
     name: str
     description: str
     permissions: Optional[dict] = None
     data: Optional[dict] = None
 
 
 class UserIdsForm(BaseModel):
     user_ids: Optional[list[str]] = None
 
 
-class GroupUpdateForm(GroupForm):
+class GroupUpdateForm(GroupForm, UserIdsForm):
     pass
 
 
-class GroupListResponse(BaseModel):
-    items: list[GroupResponse] = []
-    total: int = 0
-
-
 class GroupTable:
     def insert_new_group(
         self, user_id: str, form_data: GroupForm
@@ -146,94 +119,24 @@ class GroupTable:
         except Exception:
             return None
 
-    def get_all_groups(self) -> list[GroupModel]:
+    def get_groups(self) -> list[GroupModel]:
         with get_db() as db:
-            groups = db.query(Group).order_by(Group.updated_at.desc()).all()
-            return [GroupModel.model_validate(group) for group in groups]
-
-    def get_groups(self, filter) -> list[GroupResponse]:
-        with get_db() as db:
-            query = db.query(Group)
-
-            if filter:
-                if "query" in filter:
-                    query = query.filter(Group.name.ilike(f"%{filter['query']}%"))
-                if "member_id" in filter:
-                    query = query.join(
-                        GroupMember, GroupMember.group_id == Group.id
-                    ).filter(GroupMember.user_id == filter["member_id"])
-
-                if "share" in filter:
-                    share_value = filter["share"]
-                    json_share = Group.data["config"]["share"].as_boolean()
-
-                    if share_value:
-                        query = query.filter(
-                            or_(
-                                Group.data.is_(None),
-                                json_share.is_(None),
-                                json_share == True,
-                            )
-                        )
-                    else:
-                        query = query.filter(
-                            and_(Group.data.isnot(None), json_share == False)
-                        )
-            groups = query.order_by(Group.updated_at.desc()).all()
             return [
-                GroupResponse.model_validate(
-                    {
-                        **GroupModel.model_validate(group).model_dump(),
-                        "member_count": self.get_group_member_count_by_id(group.id),
-                    }
-                )
-                for group in groups
+                GroupModel.model_validate(group)
+                for group in db.query(Group).order_by(Group.updated_at.desc()).all()
             ]
 
-    def search_groups(
-        self, filter: Optional[dict] = None, skip: int = 0, limit: int = 30
-    ) -> GroupListResponse:
-        with get_db() as db:
-            query = db.query(Group)
-
-            if filter:
-                if "query" in filter:
-                    query = query.filter(Group.name.ilike(f"%{filter['query']}%"))
-                if "member_id" in filter:
-                    query = query.join(
-                        GroupMember, GroupMember.group_id == Group.id
-                    ).filter(GroupMember.user_id == filter["member_id"])
-
-                if "share" in filter:
-                    # 'share' is stored in data JSON, support both sqlite and postgres
-                    share_value = filter["share"]
-                    print("Filtering by share:", share_value)
-                    query = query.filter(
-                        Group.data.op("->>")("share") == str(share_value)
-                    )
-
-            total = query.count()
-            query = query.order_by(Group.updated_at.desc())
-            groups = query.offset(skip).limit(limit).all()
-
-            return {
-                "items": [
-                    GroupResponse.model_validate(
-                        **GroupModel.model_validate(group).model_dump(),
-                        member_count=self.get_group_member_count_by_id(group.id),
-                    )
-                    for group in groups
-                ],
-                "total": total,
-            }
-
     def get_groups_by_member_id(self, user_id: str) -> list[GroupModel]:
         with get_db() as db:
             return [
                 GroupModel.model_validate(group)
                 for group in db.query(Group)
-                .join(GroupMember, GroupMember.group_id == Group.id)
-                .filter(GroupMember.user_id == user_id)
+                .filter(
+                    func.json_array_length(Group.user_ids) > 0
+                )  # Ensure array exists
+                .filter(
+                    Group.user_ids.cast(String).like(f'%"{user_id}"%')
+                )  # String-based check
                 .order_by(Group.updated_at.desc())
                 .all()
             ]
@@ -246,64 +149,13 @@ class GroupTable:
         except Exception:
             return None
 
-    def get_group_user_ids_by_id(self, id: str) -> Optional[list[str]]:
-        with get_db() as db:
-            members = (
-                db.query(GroupMember.user_id).filter(GroupMember.group_id == id).all()
-            )
-
-            if not members:
+    def get_group_user_ids_by_id(self, id: str) -> Optional[str]:
+        group = self.get_group_by_id(id)
+        if group:
+            return group.user_ids
+        else:
             return None
 
-            return [m[0] for m in members]
-
-    def get_group_user_ids_by_ids(self, group_ids: list[str]) -> dict[str, list[str]]:
-        with get_db() as db:
-            members = (
-                db.query(GroupMember.group_id, GroupMember.user_id)
-                .filter(GroupMember.group_id.in_(group_ids))
-                .all()
-            )
-
-            group_user_ids: dict[str, list[str]] = {
-                group_id: [] for group_id in group_ids
-            }
-
-            for group_id, user_id in members:
-                group_user_ids[group_id].append(user_id)
-
-            return group_user_ids
-
-    def set_group_user_ids_by_id(self, group_id: str, user_ids: list[str]) -> None:
-        with get_db() as db:
-            # Delete existing members
-            db.query(GroupMember).filter(GroupMember.group_id == group_id).delete()
-
-            # Insert new members
-            now = int(time.time())
-            new_members = [
-                GroupMember(
-                    id=str(uuid.uuid4()),
-                    group_id=group_id,
-                    user_id=user_id,
-                    created_at=now,
-                    updated_at=now,
-                )
-                for user_id in user_ids
-            ]
-
-            db.add_all(new_members)
-            db.commit()
-
-    def get_group_member_count_by_id(self, id: str) -> int:
-        with get_db() as db:
-            count = (
-                db.query(func.count(GroupMember.user_id))
-                .filter(GroupMember.group_id == id)
-                .scalar()
-            )
-            return count if count else 0
-
     def update_group_by_id(
         self, id: str, form_data: GroupUpdateForm, overwrite: bool = False
     ) -> Optional[GroupModel]:
@@ -343,29 +195,20 @@ class GroupTable:
     def remove_user_from_all_groups(self, user_id: str) -> bool:
         with get_db() as db:
             try:
-                # Find all groups the user belongs to
-                groups = (
-                    db.query(Group)
-                    .join(GroupMember, GroupMember.group_id == Group.id)
-                    .filter(GroupMember.user_id == user_id)
-                    .all()
-                )
+                groups = self.get_groups_by_member_id(user_id)
 
                 # Remove the user from each group
                 for group in groups:
-                    db.query(GroupMember).filter(
-                        GroupMember.group_id == group.id, GroupMember.user_id == user_id
-                    ).delete()
-
+                    group.user_ids.remove(user_id)
                     db.query(Group).filter_by(id=group.id).update(
-                        {"updated_at": int(time.time())}
+                        {
+                            "user_ids": group.user_ids,
+                            "updated_at": int(time.time()),
+                        }
                     )
 
                 db.commit()
-                return True
+
+                return True
             except Exception:
                 db.rollback()
                 return False
 
     def create_groups_by_group_names(
@@ -373,7 +216,7 @@ class GroupTable:
     ) -> list[GroupModel]:
 
         # check for existing groups
-        existing_groups = self.get_all_groups()
+        existing_groups = self.get_groups()
        existing_group_names = {group.name for group in existing_groups}
 
         new_groups = []
@@ -403,61 +246,37 @@ class GroupTable:
     def sync_groups_by_group_names(self, user_id: str, group_names: list[str]) -> bool:
         with get_db() as db:
             try:
-                now = int(time.time())
+                groups = db.query(Group).filter(Group.name.in_(group_names)).all()
+                group_ids = [group.id for group in groups]
 
-                # 1. Groups that SHOULD contain the user
-                target_groups = (
-                    db.query(Group).filter(Group.name.in_(group_names)).all()
-                )
-                target_group_ids = {g.id for g in target_groups}
+                # Remove user from groups not in the new list
+                existing_groups = self.get_groups_by_member_id(user_id)
 
-                # 2. Groups the user is CURRENTLY in
-                existing_group_ids = {
-                    g.id
-                    for g in db.query(Group)
-                    .join(GroupMember, GroupMember.group_id == Group.id)
-                    .filter(GroupMember.user_id == user_id)
-                    .all()
-                }
+                for group in existing_groups:
+                    if group.id not in group_ids:
+                        group.user_ids.remove(user_id)
+                        db.query(Group).filter_by(id=group.id).update(
+                            {
+                                "user_ids": group.user_ids,
+                                "updated_at": int(time.time()),
+                            }
+                        )
 
-                # 3. Determine adds + removals
-                groups_to_add = target_group_ids - existing_group_ids
-                groups_to_remove = existing_group_ids - target_group_ids
-
-                # 4. Remove in one bulk delete
-                if groups_to_remove:
-                    db.query(GroupMember).filter(
-                        GroupMember.user_id == user_id,
-                        GroupMember.group_id.in_(groups_to_remove),
-                    ).delete(synchronize_session=False)
-
-                    db.query(Group).filter(Group.id.in_(groups_to_remove)).update(
-                        {"updated_at": now}, synchronize_session=False
-                    )
-
-                # 5. Bulk insert missing memberships
-                for group_id in groups_to_add:
-                    db.add(
-                        GroupMember(
-                            id=str(uuid.uuid4()),
-                            group_id=group_id,
-                            user_id=user_id,
-                            created_at=now,
-                            updated_at=now,
-                        )
-                    )
-
-                if groups_to_add:
-                    db.query(Group).filter(Group.id.in_(groups_to_add)).update(
-                        {"updated_at": now}, synchronize_session=False
-                    )
+                # Add user to new groups
+                for group in groups:
+                    if user_id not in group.user_ids:
+                        group.user_ids.append(user_id)
+                        db.query(Group).filter_by(id=group.id).update(
+                            {
+                                "user_ids": group.user_ids,
+                                "updated_at": int(time.time()),
+                            }
+                        )
 
                 db.commit()
                 return True
 
             except Exception as e:
                 log.exception(e)
                 db.rollback()
                 return False
 
     def add_users_to_group(
@@ -469,31 +288,17 @@ class GroupTable:
                 if not group:
                     return None
 
-                now = int(time.time())
+                if not group.user_ids:
+                    group.user_ids = []
 
-                for user_id in user_ids or []:
-                    try:
-                        db.add(
-                            GroupMember(
-                                id=str(uuid.uuid4()),
-                                group_id=id,
-                                user_id=user_id,
-                                created_at=now,
-                                updated_at=now,
-                            )
-                        )
-                        db.flush()  # Detect unique constraint violation early
-                    except Exception:
-                        db.rollback()  # Clear failed INSERT
-                        db.begin()  # Start a new transaction
-                        continue  # Duplicate → ignore
+                for user_id in user_ids:
+                    if user_id not in group.user_ids:
+                        group.user_ids.append(user_id)
 
-                group.updated_at = now
+                group.updated_at = int(time.time())
                 db.commit()
                 db.refresh(group)
 
                 return GroupModel.model_validate(group)
 
             except Exception as e:
                 log.exception(e)
                 return None
@@ -507,22 +312,17 @@ class GroupTable:
                 if not group:
                     return None
 
-                if not user_ids:
+                if not group.user_ids:
                     return GroupModel.model_validate(group)
 
-                # Remove each user from group_member
                 for user_id in user_ids:
-                    db.query(GroupMember).filter(
-                        GroupMember.group_id == id, GroupMember.user_id == user_id
-                    ).delete()
+                    if user_id in group.user_ids:
+                        group.user_ids.remove(user_id)
 
-                # Update group timestamp
                 group.updated_at = int(time.time())
-
                 db.commit()
                 db.refresh(group)
                 return GroupModel.model_validate(group)
-
             except Exception as e:
                 log.exception(e)
                 return None

@@ -5,34 +5,19 @@ from typing import Optional
 import uuid
 
 from open_webui.internal.db import Base, get_db
 from open_webui.env import SRC_LOG_LEVELS
 
-from open_webui.models.files import (
-    File,
-    FileModel,
-    FileMetadataResponse,
-    FileModelResponse,
-)
-from open_webui.models.groups import Groups
-from open_webui.models.users import User, UserModel, Users, UserResponse
+from open_webui.models.files import FileMetadataResponse
+from open_webui.models.users import Users, UserResponse
 
+
 from pydantic import BaseModel, ConfigDict
-from sqlalchemy import (
-    BigInteger,
-    Column,
-    ForeignKey,
-    String,
-    Text,
-    JSON,
-    UniqueConstraint,
-    or_,
-)
+from sqlalchemy import BigInteger, Column, String, Text, JSON
 
 from open_webui.utils.access_control import has_access
-from open_webui.utils.db.access_control import has_permission
 
 
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])
 
 ####################
 # Knowledge DB Schema
@@ -48,7 +33,9 @@ class Knowledge(Base):
     name = Column(Text)
     description = Column(Text)
 
+    data = Column(JSON, nullable=True)
+    meta = Column(JSON, nullable=True)
 
     access_control = Column(JSON, nullable=True)  # Controls data access levels.
     # Defines access control rules for this entry.
     # - `None`: Public access, available to all users with the "user" role.
@@ -79,6 +66,7 @@ class KnowledgeModel(BaseModel):
     name: str
     description: str
 
+    data: Optional[dict] = None
     meta: Optional[dict] = None
 
     access_control: Optional[dict] = None
@@ -87,42 +75,11 @@ class KnowledgeModel(BaseModel):
     updated_at: int  # timestamp in epoch
 
 
-class KnowledgeFile(Base):
-    __tablename__ = "knowledge_file"
-
-    id = Column(Text, unique=True, primary_key=True)
-
-    knowledge_id = Column(
-        Text, ForeignKey("knowledge.id", ondelete="CASCADE"), nullable=False
-    )
-    file_id = Column(Text, ForeignKey("file.id", ondelete="CASCADE"), nullable=False)
-    user_id = Column(Text, nullable=False)
-
-    created_at = Column(BigInteger, nullable=False)
-    updated_at = Column(BigInteger, nullable=False)
-
-    __table_args__ = (
-        UniqueConstraint(
-            "knowledge_id", "file_id", name="uq_knowledge_file_knowledge_file"
-        ),
-    )
-
-
-class KnowledgeFileModel(BaseModel):
-    id: str
-    knowledge_id: str
-    file_id: str
-    user_id: str
-
-    created_at: int  # timestamp in epoch
-    updated_at: int  # timestamp in epoch
-
-    model_config = ConfigDict(from_attributes=True)
-
-
 ####################
 # Forms
 ####################
 
 
 class KnowledgeUserModel(KnowledgeModel):
     user: Optional[UserResponse] = None
 
@@ -132,29 +89,16 @@ class KnowledgeResponse(KnowledgeModel):
 
 
 class KnowledgeUserResponse(KnowledgeUserModel):
-    pass
+    files: Optional[list[FileMetadataResponse | dict]] = None
 
 
 class KnowledgeForm(BaseModel):
     name: str
     description: str
     data: Optional[dict] = None
     access_control: Optional[dict] = None
 
 
-class FileUserResponse(FileModelResponse):
-    user: Optional[UserResponse] = None
-
-
-class KnowledgeListResponse(BaseModel):
-    items: list[KnowledgeUserModel]
-    total: int
-
-
-class KnowledgeFileListResponse(BaseModel):
-    items: list[FileUserResponse]
-    total: int
-
-
 class KnowledgeTable:
     def insert_new_knowledge(
         self, user_id: str, form_data: KnowledgeForm
@@ -182,21 +126,13 @@ class KnowledgeTable:
         except Exception:
             return None
 
-    def get_knowledge_bases(
-        self, skip: int = 0, limit: int = 30
-    ) -> list[KnowledgeUserModel]:
+    def get_knowledge_bases(self) -> list[KnowledgeUserModel]:
         with get_db() as db:
-            all_knowledge = (
-                db.query(Knowledge).order_by(Knowledge.updated_at.desc()).all()
-            )
-            user_ids = list(set(knowledge.user_id for knowledge in all_knowledge))
-
-            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
-            users_dict = {user.id: user for user in users}
-
             knowledge_bases = []
-            for knowledge in all_knowledge:
-                user = users_dict.get(knowledge.user_id)
+            for knowledge in (
+                db.query(Knowledge).order_by(Knowledge.updated_at.desc()).all()
+            ):
+                user = Users.get_user_by_id(knowledge.user_id)
                 knowledge_bases.append(
                     KnowledgeUserModel.model_validate(
                         {
@@ -207,147 +143,15 @@ class KnowledgeTable:
                 )
             return knowledge_bases
 
-    def search_knowledge_bases(
-        self, user_id: str, filter: dict, skip: int = 0, limit: int = 30
-    ) -> KnowledgeListResponse:
-        try:
-            with get_db() as db:
-                query = db.query(Knowledge, User).outerjoin(
-                    User, User.id == Knowledge.user_id
-                )
-
-                if filter:
-                    query_key = filter.get("query")
-                    if query_key:
-                        query = query.filter(
-                            or_(
-                                Knowledge.name.ilike(f"%{query_key}%"),
-                                Knowledge.description.ilike(f"%{query_key}%"),
-                            )
-                        )
-
-                    view_option = filter.get("view_option")
-                    if view_option == "created":
-                        query = query.filter(Knowledge.user_id == user_id)
-                    elif view_option == "shared":
-                        query = query.filter(Knowledge.user_id != user_id)
-
-                    query = has_permission(db, Knowledge, query, filter)
-
-                query = query.order_by(Knowledge.updated_at.desc())
-
-                total = query.count()
-                if skip:
-                    query = query.offset(skip)
-                if limit:
-                    query = query.limit(limit)
-
-                items = query.all()
-
-                knowledge_bases = []
-                for knowledge_base, user in items:
-                    knowledge_bases.append(
-                        KnowledgeUserModel.model_validate(
-                            {
-                                **KnowledgeModel.model_validate(
-                                    knowledge_base
-                                ).model_dump(),
-                                "user": (
-                                    UserModel.model_validate(user).model_dump()
-                                    if user
-                                    else None
-                                ),
-                            }
-                        )
-                    )
-
-                return KnowledgeListResponse(items=knowledge_bases, total=total)
-        except Exception as e:
-            print(e)
-            return KnowledgeListResponse(items=[], total=0)
-
-    def search_knowledge_files(
-        self, filter: dict, skip: int = 0, limit: int = 30
-    ) -> KnowledgeFileListResponse:
-        """
-        Scalable version: search files across all knowledge bases the user has
-        READ access to, without loading all KBs or using large IN() lists.
-        """
-        try:
-            with get_db() as db:
-                # Base query: join Knowledge → KnowledgeFile → File
-                query = (
-                    db.query(File, User)
-                    .join(KnowledgeFile, File.id == KnowledgeFile.file_id)
-                    .join(Knowledge, KnowledgeFile.knowledge_id == Knowledge.id)
-                    .outerjoin(User, User.id == KnowledgeFile.user_id)
-                )
-
-                # Apply access-control directly to the joined query
-                # This makes the database handle filtering, even with 10k+ KBs
-                query = has_permission(db, Knowledge, query, filter)
-
-                # Apply filename search
-                if filter:
-                    q = filter.get("query")
-                    if q:
-                        query = query.filter(File.filename.ilike(f"%{q}%"))
-
-                # Order by file changes
-                query = query.order_by(File.updated_at.desc())
-
-                # Count before pagination
-                total = query.count()
-
-                if skip:
-                    query = query.offset(skip)
-                if limit:
-                    query = query.limit(limit)
-
-                rows = query.all()
-
-                items = []
-                for file, user in rows:
-                    items.append(
-                        FileUserResponse(
-                            **FileModel.model_validate(file).model_dump(),
-                            user=(
-                                UserResponse(
-                                    **UserModel.model_validate(user).model_dump()
-                                )
-                                if user
-                                else None
-                            ),
-                        )
-                    )
-
-                return KnowledgeFileListResponse(items=items, total=total)
-
-        except Exception as e:
-            print("search_knowledge_files error:", e)
-            return KnowledgeFileListResponse(items=[], total=0)
-
-    def check_access_by_user_id(self, id, user_id, permission="write") -> bool:
-        knowledge = self.get_knowledge_by_id(id)
-        if not knowledge:
-            return False
-        if knowledge.user_id == user_id:
-            return True
-        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
-        return has_access(user_id, permission, knowledge.access_control, user_group_ids)
-
     def get_knowledge_bases_by_user_id(
         self, user_id: str, permission: str = "write"
     ) -> list[KnowledgeUserModel]:
         knowledge_bases = self.get_knowledge_bases()
-        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
         return [
             knowledge_base
             for knowledge_base in knowledge_bases
             if knowledge_base.user_id == user_id
-            or has_access(
-                user_id, permission, knowledge_base.access_control, user_group_ids
-            )
+            or has_access(user_id, permission, knowledge_base.access_control)
         ]
 
     def get_knowledge_by_id(self, id: str) -> Optional[KnowledgeModel]:
@@ -358,197 +162,6 @@ class KnowledgeTable:
         except Exception:
             return None
 
-    def get_knowledge_by_id_and_user_id(
-        self, id: str, user_id: str
-    ) -> Optional[KnowledgeModel]:
-        knowledge = self.get_knowledge_by_id(id)
-        if not knowledge:
-            return None
-
-        if knowledge.user_id == user_id:
-            return knowledge
-
-        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
-        if has_access(user_id, "write", knowledge.access_control, user_group_ids):
-            return knowledge
-        return None
-
-    def get_knowledges_by_file_id(self, file_id: str) -> list[KnowledgeModel]:
-        try:
-            with get_db() as db:
-                knowledges = (
-                    db.query(Knowledge)
-                    .join(KnowledgeFile, Knowledge.id == KnowledgeFile.knowledge_id)
-                    .filter(KnowledgeFile.file_id == file_id)
-                    .all()
-                )
-                return [
-                    KnowledgeModel.model_validate(knowledge) for knowledge in knowledges
-                ]
-        except Exception:
-            return []
-
-    def search_files_by_id(
-        self,
-        knowledge_id: str,
-        user_id: str,
-        filter: dict,
-        skip: int = 0,
-        limit: int = 30,
-    ) -> KnowledgeFileListResponse:
-        try:
-            with get_db() as db:
-                query = (
-                    db.query(File, User)
-                    .join(KnowledgeFile, File.id == KnowledgeFile.file_id)
-                    .outerjoin(User, User.id == KnowledgeFile.user_id)
-                    .filter(KnowledgeFile.knowledge_id == knowledge_id)
-                )
-
-                if filter:
-                    query_key = filter.get("query")
-                    if query_key:
-                        query = query.filter(or_(File.filename.ilike(f"%{query_key}%")))
-
-                    view_option = filter.get("view_option")
-                    if view_option == "created":
-                        query = query.filter(KnowledgeFile.user_id == user_id)
-                    elif view_option == "shared":
-                        query = query.filter(KnowledgeFile.user_id != user_id)
-
-                    order_by = filter.get("order_by")
-                    direction = filter.get("direction")
-
-                    if order_by == "name":
-                        if direction == "asc":
-                            query = query.order_by(File.filename.asc())
-                        else:
-                            query = query.order_by(File.filename.desc())
-                    elif order_by == "created_at":
-                        if direction == "asc":
-                            query = query.order_by(File.created_at.asc())
-                        else:
-                            query = query.order_by(File.created_at.desc())
-                    elif order_by == "updated_at":
-                        if direction == "asc":
-                            query = query.order_by(File.updated_at.asc())
-                        else:
-                            query = query.order_by(File.updated_at.desc())
-                    else:
-                        query = query.order_by(File.updated_at.desc())
-
-                else:
-                    query = query.order_by(File.updated_at.desc())
-
-                # Count BEFORE pagination
-                total = query.count()
-
-                if skip:
-                    query = query.offset(skip)
-                if limit:
-                    query = query.limit(limit)
-
-                items = query.all()
-
-                files = []
-                for file, user in items:
-                    files.append(
-                        FileUserResponse(
-                            **FileModel.model_validate(file).model_dump(),
-                            user=(
-                                UserResponse(
-                                    **UserModel.model_validate(user).model_dump()
-                                )
-                                if user
-                                else None
-                            ),
-                        )
-                    )
-
-                return KnowledgeFileListResponse(items=files, total=total)
-        except Exception as e:
-            print(e)
-            return KnowledgeFileListResponse(items=[], total=0)
-
-    def get_files_by_id(self, knowledge_id: str) -> list[FileModel]:
-        try:
-            with get_db() as db:
-                files = (
-                    db.query(File)
-                    .join(KnowledgeFile, File.id == KnowledgeFile.file_id)
-                    .filter(KnowledgeFile.knowledge_id == knowledge_id)
-                    .all()
-                )
-                return [FileModel.model_validate(file) for file in files]
-        except Exception:
-            return []
-
-    def get_file_metadatas_by_id(self, knowledge_id: str) -> list[FileMetadataResponse]:
-        try:
-            with get_db() as db:
-                files = self.get_files_by_id(knowledge_id)
-                return [FileMetadataResponse(**file.model_dump()) for file in files]
-        except Exception:
-            return []
-
-    def add_file_to_knowledge_by_id(
-        self, knowledge_id: str, file_id: str, user_id: str
-    ) -> Optional[KnowledgeFileModel]:
-        with get_db() as db:
-            knowledge_file = KnowledgeFileModel(
-                **{
-                    "id": str(uuid.uuid4()),
-                    "knowledge_id": knowledge_id,
-                    "file_id": file_id,
-                    "user_id": user_id,
-                    "created_at": int(time.time()),
-                    "updated_at": int(time.time()),
-                }
-            )
-
-            try:
-                result = KnowledgeFile(**knowledge_file.model_dump())
-                db.add(result)
-                db.commit()
-                db.refresh(result)
-                if result:
-                    return KnowledgeFileModel.model_validate(result)
-                else:
-                    return None
-            except Exception:
-                return None
-
-    def remove_file_from_knowledge_by_id(self, knowledge_id: str, file_id: str) -> bool:
-        try:
-            with get_db() as db:
-                db.query(KnowledgeFile).filter_by(
-                    knowledge_id=knowledge_id, file_id=file_id
-                ).delete()
-                db.commit()
-                return True
-        except Exception:
-            return False
-
-    def reset_knowledge_by_id(self, id: str) -> Optional[KnowledgeModel]:
-        try:
-            with get_db() as db:
-                # Delete all knowledge_file entries for this knowledge_id
-                db.query(KnowledgeFile).filter_by(knowledge_id=id).delete()
-                db.commit()
-
-                # Update the knowledge entry's updated_at timestamp
-                db.query(Knowledge).filter_by(id=id).update(
-                    {
-                        "updated_at": int(time.time()),
-                    }
-                )
-                db.commit()
-
-                return self.get_knowledge_by_id(id=id)
-        except Exception as e:
-            log.exception(e)
-            return None
-
     def update_knowledge_by_id(
         self, id: str, form_data: KnowledgeForm, overwrite: bool = False
     ) -> Optional[KnowledgeModel]:

@@ -14,7 +14,7 @@ from sqlalchemy import BigInteger, Column, String, Text
 class Memory(Base):
     __tablename__ = "memory"
 
-    id = Column(String, primary_key=True, unique=True)
+    id = Column(String, primary_key=True)
     user_id = Column(String)
     content = Column(Text)
     updated_at = Column(BigInteger)

@@ -5,11 +5,9 @@ from typing import Optional
 
 from open_webui.internal.db import Base, get_db
 from open_webui.models.tags import TagModel, Tag, Tags
 from open_webui.models.users import Users, User, UserNameResponse
-from open_webui.models.channels import Channels, ChannelMember
 
-
-from pydantic import BaseModel, ConfigDict, field_validator
+from pydantic import BaseModel, ConfigDict
 from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON
 from sqlalchemy import or_, func, select, and_, text
 from sqlalchemy.sql import exists
@@ -21,7 +19,7 @@ from sqlalchemy.sql import exists
 
 class MessageReaction(Base):
     __tablename__ = "message_reaction"
-    id = Column(Text, primary_key=True, unique=True)
+    id = Column(Text, primary_key=True)
     user_id = Column(Text)
     message_id = Column(Text)
     name = Column(Text)
@@ -40,19 +38,13 @@ class MessageReactionModel(BaseModel):
 
 class Message(Base):
     __tablename__ = "message"
-    id = Column(Text, primary_key=True, unique=True)
+    id = Column(Text, primary_key=True)
 
     user_id = Column(Text)
     channel_id = Column(Text, nullable=True)
 
-    reply_to_id = Column(Text, nullable=True)
     parent_id = Column(Text, nullable=True)
-
-    # Pins
-    is_pinned = Column(Boolean, nullable=False, default=False)
-    pinned_at = Column(BigInteger, nullable=True)
-    pinned_by = Column(Text, nullable=True)
 
     content = Column(Text)
     data = Column(JSON, nullable=True)
     meta = Column(JSON, nullable=True)
@@ -68,20 +60,14 @@ class MessageModel(BaseModel):
     user_id: str
     channel_id: Optional[str] = None
 
-    reply_to_id: Optional[str] = None
     parent_id: Optional[str] = None
-
-    # Pins
-    is_pinned: bool = False
-    pinned_by: Optional[str] = None
-    pinned_at: Optional[int] = None  # timestamp in epoch (time_ns)
 
     content: str
     data: Optional[dict] = None
     meta: Optional[dict] = None
 
-    created_at: int  # timestamp in epoch (time_ns)
-    updated_at: int  # timestamp in epoch (time_ns)
+    created_at: int  # timestamp in epoch
+    updated_at: int  # timestamp in epoch
 
 
 ####################
@@ -90,9 +76,7 @@ class MessageModel(BaseModel):
 
 
 class MessageForm(BaseModel):
-    temp_id: Optional[str] = None
     content: str
-    reply_to_id: Optional[str] = None
     parent_id: Optional[str] = None
     data: Optional[dict] = None
     meta: Optional[dict] = None
@@ -100,36 +84,11 @@ class MessageForm(BaseModel):
 
 class Reactions(BaseModel):
     name: str
-    users: list[dict]
     user_ids: list[str]
     count: int
 
 
-class MessageUserResponse(MessageModel):
-    user: Optional[UserNameResponse] = None
-
-
-class MessageUserSlimResponse(MessageUserResponse):
-    data: bool | None = None
-
-    @field_validator("data", mode="before")
-    def convert_data_to_bool(cls, v):
-        # No data or not a dict → False
-        if not isinstance(v, dict):
-            return False
-
-        # True if ANY value in the dict is non-empty
-        return any(bool(val) for val in v.values())
-
-
-class MessageReplyToResponse(MessageUserResponse):
-    reply_to_message: Optional[MessageUserSlimResponse] = None
-
-
-class MessageWithReactionsResponse(MessageUserSlimResponse):
-    reactions: list[Reactions]
-
-
-class MessageResponse(MessageReplyToResponse):
+class MessageResponse(MessageModel):
     latest_reply_at: Optional[int]
     reply_count: int
     reactions: list[Reactions]
@@ -140,21 +99,15 @@ class MessageTable:
         self, form_data: MessageForm, channel_id: str, user_id: str
     ) -> Optional[MessageModel]:
         with get_db() as db:
-            channel_member = Channels.join_channel(channel_id, user_id)
-
             id = str(uuid.uuid4())
-            ts = int(time.time_ns())
 
+            ts = int(time.time_ns())
             message = MessageModel(
                 **{
                     "id": id,
                     "user_id": user_id,
                     "channel_id": channel_id,
-                    "reply_to_id": form_data.reply_to_id,
                     "parent_id": form_data.parent_id,
-                    "is_pinned": False,
-                    "pinned_at": None,
-                    "pinned_by": None,
                     "content": form_data.content,
                     "data": form_data.data,
                     "meta": form_data.meta,
@@ -162,8 +115,8 @@ class MessageTable:
                     "updated_at": ts,
                 }
             )
-            result = Message(**message.model_dump())
 
+            result = Message(**message.model_dump())
             db.add(result)
             db.commit()
             db.refresh(result)
@@ -175,32 +128,19 @@ class MessageTable:
         if not message:
             return None
 
-        reply_to_message = (
-            self.get_message_by_id(message.reply_to_id)
-            if message.reply_to_id
-            else None
-        )
-
         reactions = self.get_reactions_by_message_id(id)
-        thread_replies = self.get_thread_replies_by_message_id(id)
+        replies = self.get_replies_by_message_id(id)
 
         user = Users.get_user_by_id(message.user_id)
-        return MessageResponse.model_validate(
-            {
+        return MessageResponse(
+            **{
                 **MessageModel.model_validate(message).model_dump(),
                 "user": user.model_dump() if user else None,
-                "reply_to_message": (
-                    reply_to_message.model_dump() if reply_to_message else None
-                ),
-                "latest_reply_at": (
-                    thread_replies[0].created_at if thread_replies else None
-                ),
-                "reply_count": len(thread_replies),
+                "latest_reply_at": replies[0].created_at if replies else None,
+                "reply_count": len(replies),
                 "reactions": reactions,
             }
         )
 
-    def get_thread_replies_by_message_id(self, id: str) -> list[MessageReplyToResponse]:
+    def get_replies_by_message_id(self, id: str) -> list[MessageModel]:
         with get_db() as db:
             all_messages = (
                 db.query(Message)
@@ -208,27 +148,7 @@ class MessageTable:
                 .order_by(Message.created_at.desc())
                 .all()
             )
-
-            messages = []
-            for message in all_messages:
-                reply_to_message = (
-                    self.get_message_by_id(message.reply_to_id)
-                    if message.reply_to_id
-                    else None
-                )
-                messages.append(
-                    MessageReplyToResponse.model_validate(
-                        {
-                            **MessageModel.model_validate(message).model_dump(),
-                            "reply_to_message": (
-                                reply_to_message.model_dump()
-                                if reply_to_message
-                                else None
-                            ),
-                        }
-                    )
-                )
-            return messages
+            return [MessageModel.model_validate(message) for message in all_messages]
 
     def get_reply_user_ids_by_message_id(self, id: str) -> list[str]:
         with get_db() as db:
@@ -239,7 +159,7 @@ class MessageTable:
 
     def get_messages_by_channel_id(
         self, channel_id: str, skip: int = 0, limit: int = 50
-    ) -> list[MessageReplyToResponse]:
+    ) -> list[MessageModel]:
         with get_db() as db:
             all_messages = (
                 db.query(Message)
@@ -249,31 +169,11 @@ class MessageTable:
                 .limit(limit)
                 .all()
             )
-
-            messages = []
-            for message in all_messages:
-                reply_to_message = (
-                    self.get_message_by_id(message.reply_to_id)
-                    if message.reply_to_id
-                    else None
-                )
-                messages.append(
-                    MessageReplyToResponse.model_validate(
-                        {
-                            **MessageModel.model_validate(message).model_dump(),
-                            "reply_to_message": (
-                                reply_to_message.model_dump()
-                                if reply_to_message
-                                else None
-                            ),
-                        }
-                    )
-                )
-            return messages
+            return [MessageModel.model_validate(message) for message in all_messages]
 
     def get_messages_by_parent_id(
         self, channel_id: str, parent_id: str, skip: int = 0, limit: int = 50
-    ) -> list[MessageReplyToResponse]:
+    ) -> list[MessageModel]:
         with get_db() as db:
             message = db.get(Message, parent_id)
 
@@ -293,49 +193,6 @@ class MessageTable:
             if len(all_messages) < limit:
                 all_messages.append(message)
 
-            messages = []
-            for message in all_messages:
-                reply_to_message = (
-                    self.get_message_by_id(message.reply_to_id)
-                    if message.reply_to_id
-                    else None
-                )
-                messages.append(
-                    MessageReplyToResponse.model_validate(
-                        {
-                            **MessageModel.model_validate(message).model_dump(),
-                            "reply_to_message": (
-                                reply_to_message.model_dump()
-                                if reply_to_message
-                                else None
-                            ),
-                        }
-                    )
-                )
-            return messages
-
-    def get_last_message_by_channel_id(self, channel_id: str) -> Optional[MessageModel]:
-        with get_db() as db:
-            message = (
-                db.query(Message)
-                .filter_by(channel_id=channel_id)
-                .order_by(Message.created_at.desc())
-                .first()
-            )
-            return MessageModel.model_validate(message) if message else None
-
-    def get_pinned_messages_by_channel_id(
-        self, channel_id: str, skip: int = 0, limit: int = 50
-    ) -> list[MessageModel]:
-        with get_db() as db:
-            all_messages = (
-                db.query(Message)
-                .filter_by(channel_id=channel_id, is_pinned=True)
-                .order_by(Message.pinned_at.desc())
-                .offset(skip)
-                .limit(limit)
-                .all()
-            )
-            return [MessageModel.model_validate(message) for message in all_messages]
-
     def update_message_by_id(
@@ -344,57 +201,17 @@ class MessageTable:
         with get_db() as db:
             message = db.get(Message, id)
             message.content = form_data.content
-            message.data = {
-                **(message.data if message.data else {}),
-                **(form_data.data if form_data.data else {}),
-            }
-            message.meta = {
-                **(message.meta if message.meta else {}),
-                **(form_data.meta if form_data.meta else {}),
-            }
+            message.data = form_data.data
+            message.meta = form_data.meta
             message.updated_at = int(time.time_ns())
             db.commit()
             db.refresh(message)
             return MessageModel.model_validate(message) if message else None
 
-    def update_is_pinned_by_id(
-        self, id: str, is_pinned: bool, pinned_by: Optional[str] = None
-    ) -> Optional[MessageModel]:
-        with get_db() as db:
-            message = db.get(Message, id)
-            message.is_pinned = is_pinned
-            message.pinned_at = int(time.time_ns()) if is_pinned else None
-            message.pinned_by = pinned_by if is_pinned else None
-            db.commit()
-            db.refresh(message)
-            return MessageModel.model_validate(message) if message else None
-
-    def get_unread_message_count(
-        self, channel_id: str, user_id: str, last_read_at: Optional[int] = None
-    ) -> int:
-        with get_db() as db:
-            query = db.query(Message).filter(
-                Message.channel_id == channel_id,
-                Message.parent_id == None,  # only count top-level messages
-                Message.created_at > (last_read_at if last_read_at else 0),
-            )
-            if user_id:
-                query = query.filter(Message.user_id != user_id)
-            return query.count()
-
     def add_reaction_to_message(
         self, id: str, user_id: str, name: str
     ) -> Optional[MessageReactionModel]:
         with get_db() as db:
-            # check for existing reaction
-            existing_reaction = (
-                db.query(MessageReaction)
-                .filter_by(message_id=id, user_id=user_id, name=name)
-                .first()
-            )
-            if existing_reaction:
-                return MessageReactionModel.model_validate(existing_reaction)
-
             reaction_id = str(uuid.uuid4())
             reaction = MessageReactionModel(
                 id=reaction_id,
@@ -411,30 +228,17 @@ class MessageTable:
 
     def get_reactions_by_message_id(self, id: str) -> list[Reactions]:
         with get_db() as db:
-            # JOIN User so all user info is fetched in one query
-            results = (
-                db.query(MessageReaction, User)
-                .join(User, MessageReaction.user_id == User.id)
-                .filter(MessageReaction.message_id == id)
-                .all()
-            )
+            all_reactions = db.query(MessageReaction).filter_by(message_id=id).all()
 
             reactions = {}
-
-            for reaction, user in results:
+            for reaction in all_reactions:
                 if reaction.name not in reactions:
                     reactions[reaction.name] = {
                         "name": reaction.name,
-                        "users": [],
                         "user_ids": [],
                         "count": 0,
                     }
-
-                reactions[reaction.name]["users"].append(
-                    {
-                        "id": user.id,
-                        "name": user.name,
-                    }
-                )
                 reactions[reaction.name]["user_ids"].append(reaction.user_id)
                 reactions[reaction.name]["count"] += 1
 
             return [Reactions(**reaction) for reaction in reactions.values()]

@@ -3,17 +3,15 @@ import time
 from typing import Optional
 
 from open_webui.internal.db import Base, JSONField, get_db
 from open_webui.env import SRC_LOG_LEVELS
 
 from open_webui.models.groups import Groups
-from open_webui.models.users import User, UserModel, Users, UserResponse
+from open_webui.models.users import Users, UserResponse
 
 
 from pydantic import BaseModel, ConfigDict
 
-from sqlalchemy import String, cast, or_, and_, func
+from sqlalchemy import or_, and_, func
 from sqlalchemy.dialects import postgresql, sqlite
-
-from sqlalchemy.dialects.postgresql import JSONB
 from sqlalchemy import BigInteger, Column, Text, JSON, Boolean
 
@@ -21,6 +19,7 @@ from open_webui.utils.access_control import has_access
 
 
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])
 
+
 ####################
@@ -53,7 +52,7 @@ class ModelMeta(BaseModel):
 class Model(Base):
     __tablename__ = "model"
 
-    id = Column(Text, primary_key=True, unique=True)
+    id = Column(Text, primary_key=True)
     """
     The model's id as used in the API. If set to an existing model, it will override the model.
     """
@@ -133,11 +132,6 @@ class ModelResponse(ModelModel):
     pass
 
 
-class ModelListResponse(BaseModel):
-    items: list[ModelUserResponse]
-    total: int
-
-
 class ModelForm(BaseModel):
     id: str
     base_model_id: Optional[str] = None
@@ -181,16 +175,9 @@ class ModelsTable:
 
     def get_models(self) -> list[ModelUserResponse]:
         with get_db() as db:
-            all_models = db.query(Model).filter(Model.base_model_id != None).all()
-
-            user_ids = list(set(model.user_id for model in all_models))
-
-            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
-            users_dict = {user.id: user for user in users}
-
             models = []
-            for model in all_models:
-                user = users_dict.get(model.user_id)
+            for model in db.query(Model).filter(Model.base_model_id != None).all():
+                user = Users.get_user_by_id(model.user_id)
                 models.append(
                     ModelUserResponse.model_validate(
                         {
@ -212,143 +199,13 @@ class ModelsTable:
|
|||
self, user_id: str, permission: str = "write"
|
||||
) -> list[ModelUserResponse]:
|
||||
models = self.get_models()
|
||||
user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
|
||||
return [
|
||||
model
|
||||
for model in models
|
||||
if model.user_id == user_id
|
||||
or has_access(user_id, permission, model.access_control, user_group_ids)
|
||||
or has_access(user_id, permission, model.access_control)
|
||||
]
|
||||
|
||||
def _has_permission(self, db, query, filter: dict, permission: str = "read"):
|
||||
group_ids = filter.get("group_ids", [])
|
||||
user_id = filter.get("user_id")
|
||||
|
||||
dialect_name = db.bind.dialect.name
|
||||
|
||||
# Public access
|
||||
conditions = []
|
||||
if group_ids or user_id:
|
||||
conditions.extend(
|
||||
[
|
||||
Model.access_control.is_(None),
|
||||
cast(Model.access_control, String) == "null",
|
||||
]
|
||||
)
|
||||
|
||||
# User-level permission
|
||||
if user_id:
|
||||
conditions.append(Model.user_id == user_id)
|
||||
|
||||
# Group-level permission
|
||||
if group_ids:
|
||||
group_conditions = []
|
||||
for gid in group_ids:
|
||||
if dialect_name == "sqlite":
|
||||
group_conditions.append(
|
||||
Model.access_control[permission]["group_ids"].contains([gid])
|
||||
)
|
||||
elif dialect_name == "postgresql":
|
||||
group_conditions.append(
|
||||
cast(
|
||||
Model.access_control[permission]["group_ids"],
|
||||
JSONB,
|
||||
).contains([gid])
|
||||
)
|
||||
conditions.append(or_(*group_conditions))
|
||||
|
||||
if conditions:
|
||||
query = query.filter(or_(*conditions))
|
||||
|
||||
return query

    def search_models(
        self, user_id: str, filter: dict = {}, skip: int = 0, limit: int = 30
    ) -> ModelListResponse:
        with get_db() as db:
            # Join GroupMember so we can order by group_id when requested
            query = db.query(Model, User).outerjoin(User, User.id == Model.user_id)
            query = query.filter(Model.base_model_id != None)

            if filter:
                query_key = filter.get("query")
                if query_key:
                    query = query.filter(
                        or_(
                            Model.name.ilike(f"%{query_key}%"),
                            Model.base_model_id.ilike(f"%{query_key}%"),
                        )
                    )

                view_option = filter.get("view_option")
                if view_option == "created":
                    query = query.filter(Model.user_id == user_id)
                elif view_option == "shared":
                    query = query.filter(Model.user_id != user_id)

                # Apply access control filtering
                query = self._has_permission(
                    db,
                    query,
                    filter,
                    permission="write",
                )

                tag = filter.get("tag")
                if tag:
                    # TODO: This is a simple implementation and should be improved for performance
                    like_pattern = f'%"{tag.lower()}"%'  # `"tag"` inside JSON array
                    meta_text = func.lower(cast(Model.meta, String))

                    query = query.filter(meta_text.like(like_pattern))

                order_by = filter.get("order_by")
                direction = filter.get("direction")

                if order_by == "name":
                    if direction == "asc":
                        query = query.order_by(Model.name.asc())
                    else:
                        query = query.order_by(Model.name.desc())
                elif order_by == "created_at":
                    if direction == "asc":
                        query = query.order_by(Model.created_at.asc())
                    else:
                        query = query.order_by(Model.created_at.desc())
                elif order_by == "updated_at":
                    if direction == "asc":
                        query = query.order_by(Model.updated_at.asc())
                    else:
                        query = query.order_by(Model.updated_at.desc())

            else:
                query = query.order_by(Model.created_at.desc())

            # Count BEFORE pagination
            total = query.count()

            if skip:
                query = query.offset(skip)
            if limit:
                query = query.limit(limit)

            items = query.all()

            models = []
            for model, user in items:
                models.append(
                    ModelUserResponse(
                        **ModelModel.model_validate(model).model_dump(),
                        user=(
                            UserResponse(**UserModel.model_validate(user).model_dump())
                            if user
                            else None
                        ),
                    )
                )

            return ModelListResponse(items=models, total=total)
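
A minimal usage sketch for search_models, assuming a Models singleton is constructed at module bottom as in this codebase's other model tables and that the caller has already resolved the user's group ids; the literal ids are placeholders:

page = Models.search_models(
    user_id="u42",
    filter={
        "query": "llama",          # matched against name and base_model_id
        "view_option": "created",  # only this user's models
        "user_id": "u42",          # consumed by _has_permission
        "group_ids": ["g1", "g2"],
        "order_by": "updated_at",
        "direction": "desc",
    },
    skip=0,
    limit=30,
)
print(page.total, [m.name for m in page.items])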

    def get_model_by_id(self, id: str) -> Optional[ModelModel]:
        try:
            with get_db() as db:

@@ -357,14 +214,6 @@ class ModelsTable:
        except Exception:
            return None

    def get_models_by_ids(self, ids: list[str]) -> list[ModelModel]:
        try:
            with get_db() as db:
                models = db.query(Model).filter(Model.id.in_(ids)).all()
                return [ModelModel.model_validate(model) for model in models]
        except Exception:
            return []

    def toggle_model_by_id(self, id: str) -> Optional[ModelModel]:
        with get_db() as db:
            try:

@@ -386,9 +235,11 @@ class ModelsTable:
        try:
            with get_db() as db:
                # update only the fields that are present in the model
                data = model.model_dump(exclude={"id"})
                result = db.query(Model).filter_by(id=id).update(data)

                result = (
                    db.query(Model)
                    .filter_by(id=id)
                    .update(model.model_dump(exclude={"id"}))
                )
                db.commit()

                model = db.get(Model, id)

@@ -2,20 +2,15 @@ import json
import time
import uuid
from typing import Optional
from functools import lru_cache

from open_webui.internal.db import Base, get_db
from open_webui.models.groups import Groups
from open_webui.utils.access_control import has_access
from open_webui.models.users import User, UserModel, Users, UserResponse
from open_webui.models.users import Users, UserResponse


from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Boolean, Column, String, Text, JSON
from sqlalchemy.dialects.postgresql import JSONB


from sqlalchemy import or_, func, select, and_, text, cast, or_, and_, func
from sqlalchemy import or_, func, select, and_, text
from sqlalchemy.sql import exists

####################

@@ -26,7 +21,7 @@ from sqlalchemy.sql import exists
class Note(Base):
    __tablename__ = "note"

    id = Column(Text, primary_key=True, unique=True)
    id = Column(Text, primary_key=True)
    user_id = Column(Text)

    title = Column(Text)

@@ -78,138 +73,7 @@ class NoteUserResponse(NoteModel):
    user: Optional[UserResponse] = None


class NoteItemResponse(BaseModel):
    id: str
    title: str
    data: Optional[dict]
    updated_at: int
    created_at: int
    user: Optional[UserResponse] = None


class NoteListResponse(BaseModel):
    items: list[NoteUserResponse]
    total: int


class NoteTable:
    def _has_permission(self, db, query, filter: dict, permission: str = "read"):
        group_ids = filter.get("group_ids", [])
        user_id = filter.get("user_id")
        dialect_name = db.bind.dialect.name

        conditions = []

        # Handle read_only permission separately
        if permission == "read_only":
            # For read_only, we want items where:
            # 1. User has explicit read permission (via groups or user-level)
            # 2. BUT does NOT have write permission
            # 3. Public items are NOT considered read_only

            read_conditions = []

            # Group-level read permission
            if group_ids:
                group_read_conditions = []
                for gid in group_ids:
                    if dialect_name == "sqlite":
                        group_read_conditions.append(
                            Note.access_control["read"]["group_ids"].contains([gid])
                        )
                    elif dialect_name == "postgresql":
                        group_read_conditions.append(
                            cast(
                                Note.access_control["read"]["group_ids"],
                                JSONB,
                            ).contains([gid])
                        )

                if group_read_conditions:
                    read_conditions.append(or_(*group_read_conditions))

            # Combine read conditions
            if read_conditions:
                has_read = or_(*read_conditions)
            else:
                # If no read conditions, return empty result
                return query.filter(False)

            # Now exclude items where user has write permission
            write_exclusions = []

            # Exclude items owned by user (they have implicit write)
            if user_id:
                write_exclusions.append(Note.user_id != user_id)

            # Exclude items where user has explicit write permission via groups
            if group_ids:
                group_write_conditions = []
                for gid in group_ids:
                    if dialect_name == "sqlite":
                        group_write_conditions.append(
                            Note.access_control["write"]["group_ids"].contains([gid])
                        )
                    elif dialect_name == "postgresql":
                        group_write_conditions.append(
                            cast(
                                Note.access_control["write"]["group_ids"],
                                JSONB,
                            ).contains([gid])
                        )

                if group_write_conditions:
                    # User should NOT have write permission
                    write_exclusions.append(~or_(*group_write_conditions))

            # Exclude public items (items without access_control)
            write_exclusions.append(Note.access_control.isnot(None))
            write_exclusions.append(cast(Note.access_control, String) != "null")

            # Combine: has read AND does not have write AND not public
            if write_exclusions:
                query = query.filter(and_(has_read, *write_exclusions))
            else:
                query = query.filter(has_read)

            return query

        # Original logic for other permissions (read, write, etc.)
        # Public access conditions
        if group_ids or user_id:
            conditions.extend(
                [
                    Note.access_control.is_(None),
                    cast(Note.access_control, String) == "null",
                ]
            )

        # User-level permission (owner has all permissions)
        if user_id:
            conditions.append(Note.user_id == user_id)

        # Group-level permission
        if group_ids:
            group_conditions = []
            for gid in group_ids:
                if dialect_name == "sqlite":
                    group_conditions.append(
                        Note.access_control[permission]["group_ids"].contains([gid])
                    )
                elif dialect_name == "postgresql":
                    group_conditions.append(
                        cast(
                            Note.access_control[permission]["group_ids"],
                            JSONB,
                        ).contains([gid])
                    )
            conditions.append(or_(*group_conditions))

        if conditions:
            query = query.filter(or_(*conditions))

        return query

    def insert_new_note(
        self,
        form_data: NoteForm,

@@ -232,128 +96,22 @@ class NoteTable:
            db.commit()
            return note

    def get_notes(
        self, skip: Optional[int] = None, limit: Optional[int] = None
    ) -> list[NoteModel]:
    def get_notes(self) -> list[NoteModel]:
        with get_db() as db:
            query = db.query(Note).order_by(Note.updated_at.desc())
            if skip is not None:
                query = query.offset(skip)
            if limit is not None:
                query = query.limit(limit)
            notes = query.all()
            notes = db.query(Note).order_by(Note.updated_at.desc()).all()
            return [NoteModel.model_validate(note) for note in notes]

    def search_notes(
        self, user_id: str, filter: dict = {}, skip: int = 0, limit: int = 30
    ) -> NoteListResponse:
        with get_db() as db:
            query = db.query(Note, User).outerjoin(User, User.id == Note.user_id)
            if filter:
                query_key = filter.get("query")
                if query_key:
                    query = query.filter(
                        or_(
                            Note.title.ilike(f"%{query_key}%"),
                            cast(Note.data["content"]["md"], Text).ilike(
                                f"%{query_key}%"
                            ),
                        )
                    )

                view_option = filter.get("view_option")
                if view_option == "created":
                    query = query.filter(Note.user_id == user_id)
                elif view_option == "shared":
                    query = query.filter(Note.user_id != user_id)

                # Apply access control filtering
                if "permission" in filter:
                    permission = filter["permission"]
                else:
                    permission = "write"

                query = self._has_permission(
                    db,
                    query,
                    filter,
                    permission=permission,
                )

                order_by = filter.get("order_by")
                direction = filter.get("direction")

                if order_by == "name":
                    if direction == "asc":
                        query = query.order_by(Note.title.asc())
                    else:
                        query = query.order_by(Note.title.desc())
                elif order_by == "created_at":
                    if direction == "asc":
                        query = query.order_by(Note.created_at.asc())
                    else:
                        query = query.order_by(Note.created_at.desc())
                elif order_by == "updated_at":
                    if direction == "asc":
                        query = query.order_by(Note.updated_at.asc())
                    else:
                        query = query.order_by(Note.updated_at.desc())
                else:
                    query = query.order_by(Note.updated_at.desc())

            else:
                query = query.order_by(Note.updated_at.desc())

            # Count BEFORE pagination
            total = query.count()

            if skip:
                query = query.offset(skip)
            if limit:
                query = query.limit(limit)

            items = query.all()

            notes = []
            for note, user in items:
                notes.append(
                    NoteUserResponse(
                        **NoteModel.model_validate(note).model_dump(),
                        user=(
                            UserResponse(**UserModel.model_validate(user).model_dump())
                            if user
                            else None
                        ),
                    )
                )

            return NoteListResponse(items=notes, total=total)
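
A sketch of fetching notes the caller may view but not edit, using the read_only branch implemented above (assumes a Notes singleton constructed at module bottom, as in the other model tables; the ids are placeholders):

read_only_page = Notes.search_notes(
    user_id="u42",
    filter={
        "user_id": "u42",
        "group_ids": ["g1"],
        # read via a group, no write permission, and not public:
        "permission": "read_only",
    },
    skip=0,
    limit=30,
)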

    def get_notes_by_user_id(
        self,
        user_id: str,
        permission: str = "read",
        skip: Optional[int] = None,
        limit: Optional[int] = None,
        self, user_id: str, permission: str = "write"
    ) -> list[NoteModel]:
        with get_db() as db:
            user_group_ids = [
                group.id for group in Groups.get_groups_by_member_id(user_id)
            ]
        notes = self.get_notes()
        return [
            note
            for note in notes
            if note.user_id == user_id
            or has_access(user_id, permission, note.access_control)
        ]

            query = db.query(Note).order_by(Note.updated_at.desc())
            query = self._has_permission(
                db, query, {"user_id": user_id, "group_ids": user_group_ids}, permission
            )

            if skip is not None:
                query = query.offset(skip)
            if limit is not None:
                query = query.limit(limit)

            notes = query.all()
            return [NoteModel.model_validate(note) for note in notes]

    def get_note_by_id(self, id: str) -> Optional[NoteModel]:
        with get_db() as db:
            note = db.query(Note).filter(Note.id == id).first()

@@ -1,276 +0,0 @@
import time
import logging
import uuid
from typing import Optional, List
import base64
import hashlib
import json

from cryptography.fernet import Fernet

from open_webui.internal.db import Base, get_db
from open_webui.env import OAUTH_SESSION_TOKEN_ENCRYPTION_KEY

from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text, Index

log = logging.getLogger(__name__)

####################
# DB MODEL
####################


class OAuthSession(Base):
    __tablename__ = "oauth_session"

    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text, nullable=False)
    provider = Column(Text, nullable=False)
    token = Column(
        Text, nullable=False
    )  # JSON with access_token, id_token, refresh_token
    expires_at = Column(BigInteger, nullable=False)
    created_at = Column(BigInteger, nullable=False)
    updated_at = Column(BigInteger, nullable=False)

    # Add indexes for better performance
    __table_args__ = (
        Index("idx_oauth_session_user_id", "user_id"),
        Index("idx_oauth_session_expires_at", "expires_at"),
        Index("idx_oauth_session_user_provider", "user_id", "provider"),
    )


class OAuthSessionModel(BaseModel):
    id: str
    user_id: str
    provider: str
    token: dict
    expires_at: int  # timestamp in epoch
    created_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch

    model_config = ConfigDict(from_attributes=True)


####################
# Forms
####################


class OAuthSessionResponse(BaseModel):
    id: str
    user_id: str
    provider: str
    expires_at: int


class OAuthSessionTable:
    def __init__(self):
        self.encryption_key = OAUTH_SESSION_TOKEN_ENCRYPTION_KEY
        if not self.encryption_key:
            raise Exception("OAUTH_SESSION_TOKEN_ENCRYPTION_KEY is not set")

        # check if encryption key is in the right format for Fernet (32 url-safe base64-encoded bytes)
        if len(self.encryption_key) != 44:
            key_bytes = hashlib.sha256(self.encryption_key.encode()).digest()
            self.encryption_key = base64.urlsafe_b64encode(key_bytes)
        else:
            self.encryption_key = self.encryption_key.encode()

        try:
            self.fernet = Fernet(self.encryption_key)
        except Exception as e:
            log.error(f"Error initializing Fernet with provided key: {e}")
            raise

    def _encrypt_token(self, token) -> str:
        """Encrypt OAuth tokens for storage"""
        try:
            token_json = json.dumps(token)
            encrypted = self.fernet.encrypt(token_json.encode()).decode()
            return encrypted
        except Exception as e:
            log.error(f"Error encrypting tokens: {e}")
            raise

    def _decrypt_token(self, token: str):
        """Decrypt OAuth tokens from storage"""
        try:
            decrypted = self.fernet.decrypt(token.encode()).decode()
            return json.loads(decrypted)
        except Exception as e:
            log.error(f"Error decrypting tokens: {e}")
            raise

    def create_session(
        self,
        user_id: str,
        provider: str,
        token: dict,
    ) -> Optional[OAuthSessionModel]:
        """Create a new OAuth session"""
        try:
            with get_db() as db:
                current_time = int(time.time())
                id = str(uuid.uuid4())

                result = OAuthSession(
                    **{
                        "id": id,
                        "user_id": user_id,
                        "provider": provider,
                        "token": self._encrypt_token(token),
                        "expires_at": token.get("expires_at"),
                        "created_at": current_time,
                        "updated_at": current_time,
                    }
                )

                db.add(result)
                db.commit()
                db.refresh(result)

                if result:
                    result.token = token  # Return decrypted token
                    return OAuthSessionModel.model_validate(result)
                else:
                    return None
        except Exception as e:
            log.error(f"Error creating OAuth session: {e}")
            return None

    def get_session_by_id(self, session_id: str) -> Optional[OAuthSessionModel]:
        """Get OAuth session by ID"""
        try:
            with get_db() as db:
                session = db.query(OAuthSession).filter_by(id=session_id).first()
                if session:
                    session.token = self._decrypt_token(session.token)
                    return OAuthSessionModel.model_validate(session)

                return None
        except Exception as e:
            log.error(f"Error getting OAuth session by ID: {e}")
            return None

    def get_session_by_id_and_user_id(
        self, session_id: str, user_id: str
    ) -> Optional[OAuthSessionModel]:
        """Get OAuth session by ID and user ID"""
        try:
            with get_db() as db:
                session = (
                    db.query(OAuthSession)
                    .filter_by(id=session_id, user_id=user_id)
                    .first()
                )
                if session:
                    session.token = self._decrypt_token(session.token)
                    return OAuthSessionModel.model_validate(session)

                return None
        except Exception as e:
            log.error(f"Error getting OAuth session by ID: {e}")
            return None

    def get_session_by_provider_and_user_id(
        self, provider: str, user_id: str
    ) -> Optional[OAuthSessionModel]:
        """Get OAuth session by provider and user ID"""
        try:
            with get_db() as db:
                session = (
                    db.query(OAuthSession)
                    .filter_by(provider=provider, user_id=user_id)
                    .first()
                )
                if session:
                    session.token = self._decrypt_token(session.token)
                    return OAuthSessionModel.model_validate(session)

                return None
        except Exception as e:
            log.error(f"Error getting OAuth session by provider and user ID: {e}")
            return None

    def get_sessions_by_user_id(self, user_id: str) -> List[OAuthSessionModel]:
        """Get all OAuth sessions for a user"""
        try:
            with get_db() as db:
                sessions = db.query(OAuthSession).filter_by(user_id=user_id).all()

                results = []
                for session in sessions:
                    session.token = self._decrypt_token(session.token)
                    results.append(OAuthSessionModel.model_validate(session))

                return results

        except Exception as e:
            log.error(f"Error getting OAuth sessions by user ID: {e}")
            return []

    def update_session_by_id(
        self, session_id: str, token: dict
    ) -> Optional[OAuthSessionModel]:
        """Update OAuth session tokens"""
        try:
            with get_db() as db:
                current_time = int(time.time())

                db.query(OAuthSession).filter_by(id=session_id).update(
                    {
                        "token": self._encrypt_token(token),
                        "expires_at": token.get("expires_at"),
                        "updated_at": current_time,
                    }
                )
                db.commit()
                session = db.query(OAuthSession).filter_by(id=session_id).first()

                if session:
                    session.token = self._decrypt_token(session.token)
                    return OAuthSessionModel.model_validate(session)

                return None
        except Exception as e:
            log.error(f"Error updating OAuth session tokens: {e}")
            return None

    def delete_session_by_id(self, session_id: str) -> bool:
        """Delete an OAuth session"""
        try:
            with get_db() as db:
                result = db.query(OAuthSession).filter_by(id=session_id).delete()
                db.commit()
                return result > 0
        except Exception as e:
            log.error(f"Error deleting OAuth session: {e}")
            return False

    def delete_sessions_by_user_id(self, user_id: str) -> bool:
        """Delete all OAuth sessions for a user"""
        try:
            with get_db() as db:
                result = db.query(OAuthSession).filter_by(user_id=user_id).delete()
                db.commit()
                return True
        except Exception as e:
            log.error(f"Error deleting OAuth sessions by user ID: {e}")
            return False

    def delete_sessions_by_provider(self, provider: str) -> bool:
        """Delete all OAuth sessions for a provider"""
        try:
            with get_db() as db:
                db.query(OAuthSession).filter_by(provider=provider).delete()
                db.commit()
                return True
        except Exception as e:
            log.error(f"Error deleting OAuth sessions by provider {provider}: {e}")
            return False


OAuthSessions = OAuthSessionTable()
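
A self-contained sketch of the key normalization and token round trip performed above: a 44-character value is treated as an already url-safe-base64 Fernet key, anything else is hashed down to 32 bytes and re-encoded (the sample key and token are placeholders):

import base64
import hashlib
import json

from cryptography.fernet import Fernet

raw_key = "not-a-44-char-fernet-key"
if len(raw_key) != 44:
    # Derive a valid Fernet key (32 url-safe base64-encoded bytes).
    key = base64.urlsafe_b64encode(hashlib.sha256(raw_key.encode()).digest())
else:
    key = raw_key.encode()

fernet = Fernet(key)
token = {"access_token": "abc", "expires_at": 1735689600}
blob = fernet.encrypt(json.dumps(token).encode()).decode()
assert json.loads(fernet.decrypt(blob.encode()).decode()) == token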

@@ -2,7 +2,6 @@ import time
from typing import Optional

from open_webui.internal.db import Base, get_db
from open_webui.models.groups import Groups
from open_webui.models.users import Users, UserResponse

from pydantic import BaseModel, ConfigDict

@@ -104,16 +103,10 @@ class PromptsTable:

    def get_prompts(self) -> list[PromptUserResponse]:
        with get_db() as db:
            all_prompts = db.query(Prompt).order_by(Prompt.timestamp.desc()).all()

            user_ids = list(set(prompt.user_id for prompt in all_prompts))

            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
            users_dict = {user.id: user for user in users}

            prompts = []
            for prompt in all_prompts:
                user = users_dict.get(prompt.user_id)

            for prompt in db.query(Prompt).order_by(Prompt.timestamp.desc()).all():
                user = Users.get_user_by_id(prompt.user_id)
                prompts.append(
                    PromptUserResponse.model_validate(
                        {

@@ -129,13 +122,12 @@ class PromptsTable:
        self, user_id: str, permission: str = "write"
    ) -> list[PromptUserResponse]:
        prompts = self.get_prompts()
        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}

        return [
            prompt
            for prompt in prompts
            if prompt.user_id == user_id
            or has_access(user_id, permission, prompt.access_control, user_group_ids)
            or has_access(user_id, permission, prompt.access_control)
        ]
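
The get_prompts hunk above (like the matching hunks for models and tools) trades a single batched owner lookup for a per-row Users.get_user_by_id call, i.e. an N+1 query pattern. A self-contained sketch of the batched variant, with stubbed rows and a stand-in for Users.get_users_by_user_ids:

from dataclasses import dataclass

@dataclass
class Row:
    user_id: str

rows = [Row("u1"), Row("u2"), Row("u1")]

def fetch_users(ids):  # stand-in for Users.get_users_by_user_ids
    return [{"id": i, "name": f"user-{i}"} for i in ids]

owner_ids = list({row.user_id for row in rows})
users_by_id = {u["id"]: u for u in fetch_users(owner_ids)}  # one query
owners = [users_by_id.get(row.user_id) for row in rows]     # no extra queries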

    def update_prompt_by_command(

@@ -6,10 +6,12 @@ from typing import Optional
from open_webui.internal.db import Base, get_db


from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, JSON, PrimaryKeyConstraint, Index

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])


####################

@@ -4,8 +4,7 @@ from typing import Optional

from open_webui.internal.db import Base, JSONField, get_db
from open_webui.models.users import Users, UserResponse
from open_webui.models.groups import Groups

from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text, JSON

@@ -13,6 +12,7 @@ from open_webui.utils.access_control import has_access


log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])

####################
# Tools DB Schema

@@ -22,7 +22,7 @@ log = logging.getLogger(__name__)
class Tool(Base):
    __tablename__ = "tool"

    id = Column(String, primary_key=True, unique=True)
    id = Column(String, primary_key=True)
    user_id = Column(String)
    name = Column(Text)
    content = Column(Text)

@@ -93,8 +93,6 @@ class ToolResponse(BaseModel):
class ToolUserResponse(ToolResponse):
    user: Optional[UserResponse] = None

    model_config = ConfigDict(extra="allow")


class ToolForm(BaseModel):
    id: str

@@ -146,16 +144,9 @@ class ToolsTable:

    def get_tools(self) -> list[ToolUserModel]:
        with get_db() as db:
            all_tools = db.query(Tool).order_by(Tool.updated_at.desc()).all()

            user_ids = list(set(tool.user_id for tool in all_tools))

            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
            users_dict = {user.id: user for user in users}

            tools = []
            for tool in all_tools:
                user = users_dict.get(tool.user_id)
            for tool in db.query(Tool).order_by(Tool.updated_at.desc()).all():
                user = Users.get_user_by_id(tool.user_id)
                tools.append(
                    ToolUserModel.model_validate(
                        {

@@ -170,13 +161,12 @@ class ToolsTable:
        self, user_id: str, permission: str = "write"
    ) -> list[ToolUserModel]:
        tools = self.get_tools()
        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}

        return [
            tool
            for tool in tools
            if tool.user_id == user_id
            or has_access(user_id, permission, tool.access_control, user_group_ids)
            or has_access(user_id, permission, tool.access_control)
        ]

    def get_tool_valves_by_id(self, id: str) -> Optional[dict]:

@@ -5,29 +5,14 @@ from open_webui.internal.db import Base, JSONField, get_db


from open_webui.env import DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL

from open_webui.models.chats import Chats
from open_webui.models.groups import Groups, GroupMember
from open_webui.models.channels import ChannelMember

from open_webui.models.groups import Groups
from open_webui.utils.misc import throttle


from pydantic import BaseModel, ConfigDict
from sqlalchemy import (
    BigInteger,
    JSON,
    Column,
    String,
    Boolean,
    Text,
    Date,
    exists,
    select,
    cast,
)
from sqlalchemy import or_, case
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy import BigInteger, Column, String, Text, Date
from sqlalchemy import or_

import datetime

@@ -36,71 +21,59 @@ import datetime
####################


class User(Base):
    __tablename__ = "user"

    id = Column(String, primary_key=True)
    name = Column(String)

    email = Column(String)
    username = Column(String(50), nullable=True)

    role = Column(String)
    profile_image_url = Column(Text)

    bio = Column(Text, nullable=True)
    gender = Column(Text, nullable=True)
    date_of_birth = Column(Date, nullable=True)

    info = Column(JSONField, nullable=True)
    settings = Column(JSONField, nullable=True)

    api_key = Column(String, nullable=True, unique=True)
    oauth_sub = Column(Text, unique=True)

    last_active_at = Column(BigInteger)

    updated_at = Column(BigInteger)
    created_at = Column(BigInteger)


class UserSettings(BaseModel):
    ui: Optional[dict] = {}
    model_config = ConfigDict(extra="allow")
    pass


class User(Base):
    __tablename__ = "user"

    id = Column(String, primary_key=True, unique=True)
    email = Column(String)
    username = Column(String(50), nullable=True)
    role = Column(String)

    name = Column(String)

    profile_image_url = Column(Text)
    profile_banner_image_url = Column(Text, nullable=True)

    bio = Column(Text, nullable=True)
    gender = Column(Text, nullable=True)
    date_of_birth = Column(Date, nullable=True)
    timezone = Column(String, nullable=True)

    presence_state = Column(String, nullable=True)
    status_emoji = Column(String, nullable=True)
    status_message = Column(Text, nullable=True)
    status_expires_at = Column(BigInteger, nullable=True)

    info = Column(JSON, nullable=True)
    settings = Column(JSON, nullable=True)

    oauth = Column(JSON, nullable=True)

    last_active_at = Column(BigInteger)
    updated_at = Column(BigInteger)
    created_at = Column(BigInteger)


class UserModel(BaseModel):
    id: str
    name: str

    email: str
    username: Optional[str] = None

    role: str = "pending"

    name: str

    profile_image_url: str
    profile_banner_image_url: Optional[str] = None

    bio: Optional[str] = None
    gender: Optional[str] = None
    date_of_birth: Optional[datetime.date] = None
    timezone: Optional[str] = None

    presence_state: Optional[str] = None
    status_emoji: Optional[str] = None
    status_message: Optional[str] = None
    status_expires_at: Optional[int] = None

    info: Optional[dict] = None
    settings: Optional[UserSettings] = None

    oauth: Optional[dict] = None
    api_key: Optional[str] = None
    oauth_sub: Optional[str] = None

    last_active_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch

@@ -109,38 +82,6 @@ class UserModel(BaseModel):
    model_config = ConfigDict(from_attributes=True)


class UserStatusModel(UserModel):
    is_active: bool = False

    model_config = ConfigDict(from_attributes=True)


class ApiKey(Base):
    __tablename__ = "api_key"

    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text, nullable=False)
    key = Column(Text, unique=True, nullable=False)
    data = Column(JSON, nullable=True)
    expires_at = Column(BigInteger, nullable=True)
    last_used_at = Column(BigInteger, nullable=True)
    created_at = Column(BigInteger, nullable=False)
    updated_at = Column(BigInteger, nullable=False)


class ApiKeyModel(BaseModel):
    id: str
    user_id: str
    key: str
    data: Optional[dict] = None
    expires_at: Optional[int] = None
    last_used_at: Optional[int] = None
    created_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch

    model_config = ConfigDict(from_attributes=True)


####################
# Forms
####################

@@ -154,70 +95,35 @@ class UpdateProfileForm(BaseModel):
    date_of_birth: Optional[datetime.date] = None


class UserGroupIdsModel(UserModel):
    group_ids: list[str] = []


class UserModelResponse(UserModel):
    model_config = ConfigDict(extra="allow")


class UserListResponse(BaseModel):
    users: list[UserModelResponse]
    users: list[UserModel]
    total: int


class UserGroupIdsListResponse(BaseModel):
    users: list[UserGroupIdsModel]
    total: int


class UserStatus(BaseModel):
    status_emoji: Optional[str] = None
    status_message: Optional[str] = None
    status_expires_at: Optional[int] = None


class UserInfoResponse(UserStatus):
class UserInfoResponse(BaseModel):
    id: str
    name: str
    email: str
    role: str


class UserIdNameResponse(BaseModel):
    id: str
    name: str


class UserIdNameStatusResponse(UserStatus):
    id: str
    name: str
    is_active: Optional[bool] = None


class UserInfoListResponse(BaseModel):
    users: list[UserInfoResponse]
    total: int


class UserIdNameListResponse(BaseModel):
    users: list[UserIdNameResponse]
    total: int
class UserResponse(BaseModel):
    id: str
    name: str
    email: str
    role: str
    profile_image_url: str


class UserNameResponse(BaseModel):
    id: str
    name: str
    role: str


class UserResponse(UserNameResponse):
    email: str


class UserProfileImageResponse(UserNameResponse):
    email: str
    profile_image_url: str


@@ -242,20 +148,20 @@ class UsersTable:
        email: str,
        profile_image_url: str = "/user.png",
        role: str = "pending",
        oauth: Optional[dict] = None,
        oauth_sub: Optional[str] = None,
    ) -> Optional[UserModel]:
        with get_db() as db:
            user = UserModel(
                **{
                    "id": id,
                    "email": email,
                    "name": name,
                    "email": email,
                    "role": role,
                    "profile_image_url": profile_image_url,
                    "last_active_at": int(time.time()),
                    "created_at": int(time.time()),
                    "updated_at": int(time.time()),
                    "oauth": oauth,
                    "oauth_sub": oauth_sub,
                }
            )
            result = User(**user.model_dump())

@@ -278,13 +184,8 @@ class UsersTable:
    def get_user_by_api_key(self, api_key: str) -> Optional[UserModel]:
        try:
            with get_db() as db:
                user = (
                    db.query(User)
                    .join(ApiKey, User.id == ApiKey.user_id)
                    .filter(ApiKey.key == api_key)
                    .first()
                )
                return UserModel.model_validate(user) if user else None
                user = db.query(User).filter_by(api_key=api_key).first()
                return UserModel.model_validate(user)
        except Exception:
            return None

@@ -296,23 +197,12 @@ class UsersTable:
        except Exception:
            return None

    def get_user_by_oauth_sub(self, provider: str, sub: str) -> Optional[UserModel]:
    def get_user_by_oauth_sub(self, sub: str) -> Optional[UserModel]:
        try:
            with get_db() as db:  # type: Session
                dialect_name = db.bind.dialect.name

                query = db.query(User)
                if dialect_name == "sqlite":
                    query = query.filter(User.oauth.contains({provider: {"sub": sub}}))
                elif dialect_name == "postgresql":
                    query = query.filter(
                        User.oauth[provider].cast(JSONB)["sub"].astext == sub
                    )

                user = query.first()
                return UserModel.model_validate(user) if user else None
        except Exception as e:
            # You may want to log the exception here
            with get_db() as db:
                user = db.query(User).filter_by(oauth_sub=sub).first()
                return UserModel.model_validate(user)
        except Exception:
            return None

    def get_users(

@@ -320,9 +210,8 @@ class UsersTable:
        filter: Optional[dict] = None,
        skip: Optional[int] = None,
        limit: Optional[int] = None,
    ) -> dict:
    ) -> UserListResponse:
        with get_db() as db:
            # Join GroupMember so we can order by group_id when requested
            query = db.query(User)

            if filter:

@@ -335,76 +224,14 @@ class UsersTable:
                        )
                    )

                channel_id = filter.get("channel_id")
                if channel_id:
                    query = query.filter(
                        exists(
                            select(ChannelMember.id).where(
                                ChannelMember.user_id == User.id,
                                ChannelMember.channel_id == channel_id,
                            )
                        )
                    )

                user_ids = filter.get("user_ids")
                group_ids = filter.get("group_ids")

                if isinstance(user_ids, list) and isinstance(group_ids, list):
                    # If both are empty lists, return no users
                    if not user_ids and not group_ids:
                        return {"users": [], "total": 0}

                if user_ids:
                    query = query.filter(User.id.in_(user_ids))

                if group_ids:
                    query = query.filter(
                        exists(
                            select(GroupMember.id).where(
                                GroupMember.user_id == User.id,
                                GroupMember.group_id.in_(group_ids),
                            )
                        )
                    )

                roles = filter.get("roles")
                if roles:
                    include_roles = [role for role in roles if not role.startswith("!")]
                    exclude_roles = [role[1:] for role in roles if role.startswith("!")]

                    if include_roles:
                        query = query.filter(User.role.in_(include_roles))
                    if exclude_roles:
                        query = query.filter(~User.role.in_(exclude_roles))

                order_by = filter.get("order_by")
                direction = filter.get("direction")

                if order_by and order_by.startswith("group_id:"):
                    group_id = order_by.split(":", 1)[1]

                    # Subquery that checks if the user belongs to the group
                    membership_exists = exists(
                        select(GroupMember.id).where(
                            GroupMember.user_id == User.id,
                            GroupMember.group_id == group_id,
                        )
                    )

                    # CASE: user in group → 1, user not in group → 0
                    group_sort = case((membership_exists, 1), else_=0)

                    if direction == "asc":
                        query = query.order_by(group_sort.asc(), User.name.asc())
                    else:
                        query = query.order_by(group_sort.desc(), User.name.asc())

                elif order_by == "name":
                if order_by == "name":
                    if direction == "asc":
                        query = query.order_by(User.name.asc())
                    else:
                        query = query.order_by(User.name.desc())

                elif order_by == "email":
                    if direction == "asc":
                        query = query.order_by(User.email.asc())

@@ -437,32 +264,18 @@ class UsersTable:
            else:
                query = query.order_by(User.created_at.desc())

            # Count BEFORE pagination
            total = query.count()

            # correct pagination logic
            if skip is not None:
            if skip:
                query = query.offset(skip)
            if limit is not None:
            if limit:
                query = query.limit(limit)

            users = query.all()
            return {
                "users": [UserModel.model_validate(user) for user in users],
                "total": total,
                "total": db.query(User).count(),
            }

    def get_users_by_group_id(self, group_id: str) -> list[UserModel]:
        with get_db() as db:
            users = (
                db.query(User)
                .join(GroupMember, User.id == GroupMember.user_id)
                .filter(GroupMember.group_id == group_id)
                .all()
            )
            return [UserModel.model_validate(user) for user in users]

    def get_users_by_user_ids(self, user_ids: list[str]) -> list[UserStatusModel]:
    def get_users_by_user_ids(self, user_ids: list[str]) -> list[UserModel]:
        with get_db() as db:
            users = db.query(User).filter(User.id.in_(user_ids)).all()
            return [UserModel.model_validate(user) for user in users]

@@ -499,15 +312,6 @@ class UsersTable:
        except Exception:
            return None

    def get_num_users_active_today(self) -> Optional[int]:
        with get_db() as db:
            current_timestamp = int(datetime.datetime.now().timestamp())
            today_midnight_timestamp = current_timestamp - (current_timestamp % 86400)
            query = db.query(User).filter(
                User.last_active_at > today_midnight_timestamp
            )
            return query.count()
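
The midnight calculation above works in epoch seconds, so subtracting ts % 86400 snaps to 00:00 UTC rather than local midnight. A worked example:

ts = 1_700_000_000            # 2023-11-14 22:13:20 UTC
midnight = ts - (ts % 86400)  # 1_699_920_000 -> 2023-11-14 00:00:00 UTC
assert midnight % 86400 == 0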

    def update_user_role_by_id(self, id: str, role: str) -> Optional[UserModel]:
        try:
            with get_db() as db:

@@ -518,21 +322,6 @@ class UsersTable:
        except Exception:
            return None

    def update_user_status_by_id(
        self, id: str, form_data: UserStatus
    ) -> Optional[UserModel]:
        try:
            with get_db() as db:
                db.query(User).filter_by(id=id).update(
                    {**form_data.model_dump(exclude_none=True)}
                )
                db.commit()

                user = db.query(User).filter_by(id=id).first()
                return UserModel.model_validate(user)
        except Exception:
            return None

    def update_user_profile_image_url_by_id(
        self, id: str, profile_image_url: str
    ) -> Optional[UserModel]:

@@ -549,7 +338,7 @@ class UsersTable:
            return None

    @throttle(DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL)
    def update_last_active_by_id(self, id: str) -> Optional[UserModel]:
    def update_user_last_active_by_id(self, id: str) -> Optional[UserModel]:
        try:
            with get_db() as db:
                db.query(User).filter_by(id=id).update(

@@ -562,35 +351,16 @@ class UsersTable:
        except Exception:
            return None

    def update_user_oauth_by_id(
        self, id: str, provider: str, sub: str
    def update_user_oauth_sub_by_id(
        self, id: str, oauth_sub: str
    ) -> Optional[UserModel]:
        """
        Update or insert an OAuth provider/sub pair into the user's oauth JSON field.
        Example resulting structure:
        {
            "google": { "sub": "123" },
            "github": { "sub": "abc" }
        }
        """
        try:
            with get_db() as db:
                user = db.query(User).filter_by(id=id).first()
                if not user:
                    return None

                # Load existing oauth JSON or create empty
                oauth = user.oauth or {}

                # Update or insert provider entry
                oauth[provider] = {"sub": sub}

                # Persist updated JSON
                db.query(User).filter_by(id=id).update({"oauth": oauth})
                db.query(User).filter_by(id=id).update({"oauth_sub": oauth_sub})
                db.commit()

                user = db.query(User).filter_by(id=id).first()
                return UserModel.model_validate(user)

        except Exception:
            return None


@@ -644,44 +414,22 @@ class UsersTable:
        except Exception:
            return False

    def get_user_api_key_by_id(self, id: str) -> Optional[str]:
        try:
            with get_db() as db:
                api_key = db.query(ApiKey).filter_by(user_id=id).first()
                return api_key.key if api_key else None
        except Exception:
            return None

    def update_user_api_key_by_id(self, id: str, api_key: str) -> bool:
        try:
            with get_db() as db:
                db.query(ApiKey).filter_by(user_id=id).delete()
                result = db.query(User).filter_by(id=id).update({"api_key": api_key})
                db.commit()

                now = int(time.time())
                new_api_key = ApiKey(
                    id=f"key_{id}",
                    user_id=id,
                    key=api_key,
                    created_at=now,
                    updated_at=now,
                )
                db.add(new_api_key)
                db.commit()

                return True

                return True if result == 1 else False
        except Exception:
            return False

    def delete_user_api_key_by_id(self, id: str) -> bool:
    def get_user_api_key_by_id(self, id: str) -> Optional[str]:
        try:
            with get_db() as db:
                db.query(ApiKey).filter_by(user_id=id).delete()
                db.commit()
                return True
                user = db.query(User).filter_by(id=id).first()
                return user.api_key
        except Exception:
            return False
            return None
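
The hunk above is where the two branches diverge on API key storage: main rotates keys through a dedicated api_key table (delete the old row, insert a fresh one), while v0.6.24 keeps a single api_key column on the user row. A sketch of the table-based rotation, assuming the ApiKey model defined earlier and a live SQLAlchemy session from get_db():

import time

def rotate_api_key(db, user_id: str, new_key: str) -> None:
    # Drop any previous key row for this user, then insert the new one.
    db.query(ApiKey).filter_by(user_id=user_id).delete()
    now = int(time.time())
    db.add(
        ApiKey(
            id=f"key_{user_id}",
            user_id=user_id,
            key=new_key,
            created_at=now,
            updated_at=now,
        )
    )
    db.commit()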

    def get_valid_user_ids(self, user_ids: list[str]) -> list[str]:
        with get_db() as db:

@@ -696,23 +444,5 @@ class UsersTable:
            else:
                return None

    def get_active_user_count(self) -> int:
        with get_db() as db:
            # Consider user active if last_active_at within the last 3 minutes
            three_minutes_ago = int(time.time()) - 180
            count = (
                db.query(User).filter(User.last_active_at >= three_minutes_ago).count()
            )
            return count

    def is_user_active(self, user_id: str) -> bool:
        with get_db() as db:
            user = db.query(User).filter_by(id=user_id).first()
            if user and user.last_active_at:
                # Consider user active if last_active_at within the last 3 minutes
                three_minutes_ago = int(time.time()) - 180
                return user.last_active_at >= three_minutes_ago
            return False


Users = UsersTable()
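
A usage sketch for the filter dict get_users accepts above; role entries prefixed with "!" are exclusions, and a "group_id:<id>" order_by sorts members of that group first (the ids are placeholders):

result = Users.get_users(
    filter={
        "roles": ["admin", "!pending"],  # include admins, drop pending users
        "order_by": "group_id:g1",       # g1 members sort ahead of the rest
        "direction": "desc",
    },
    skip=0,
    limit=50,
)
print(result["total"], [user.name for user in result["users"]])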

@@ -64,7 +64,7 @@ class DatalabMarkerLoader:
        return mime_map.get(ext, "application/octet-stream")

    def check_marker_request_status(self, request_id: str) -> dict:
        url = f"{self.api_base_url}/{request_id}"
        url = f"{self.api_base_url}/marker/{request_id}"
        headers = {"X-Api-Key": self.api_key}
        try:
            response = requests.get(url, headers=headers)

@@ -111,7 +111,7 @@ class DatalabMarkerLoader:
        with open(self.file_path, "rb") as f:
            files = {"file": (filename, f, mime_type)}
            response = requests.post(
                f"{self.api_base_url}",
                f"{self.api_base_url}/marker",
                data=form_data,
                files=files,
                headers=headers,

@@ -1,13 +1,13 @@
import requests
import logging, os
from typing import Iterator, List, Union
from urllib.parse import quote

from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document
from open_webui.utils.headers import include_user_info_headers
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class ExternalDocumentLoader(BaseLoader):

@@ -17,7 +17,6 @@ class ExternalDocumentLoader(BaseLoader):
        url: str,
        api_key: str,
        mime_type=None,
        user=None,
        **kwargs,
    ) -> None:
        self.url = url

@@ -26,8 +25,6 @@ class ExternalDocumentLoader(BaseLoader):
        self.file_path = file_path
        self.mime_type = mime_type

        self.user = user

    def load(self) -> List[Document]:
        with open(self.file_path, "rb") as f:
            data = f.read()

@@ -40,13 +37,10 @@ class ExternalDocumentLoader(BaseLoader):
            headers["Authorization"] = f"Bearer {self.api_key}"

        try:
            headers["X-Filename"] = quote(os.path.basename(self.file_path))
            headers["X-Filename"] = os.path.basename(self.file_path)
        except:
            pass

        if self.user is not None:
            headers = include_user_info_headers(headers, self.user)

        url = self.url
        if url.endswith("/"):
            url = url[:-1]
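
Why the quote() call above matters: HTTP header values are expected to be latin-1 encodable in requests, so a raw UTF-8 filename can raise or arrive garbled, while percent-encoding keeps the header ASCII-safe and the receiver can unquote it. A quick round trip:

from urllib.parse import quote, unquote

name = "bericht \u00fcber 2024.pdf"
header = quote(name)  # pure ASCII, e.g. 'bericht%20%C3%BCber%202024.pdf'
assert unquote(header) == name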
|
||||
|
|
|
|||
|
|
@ -4,8 +4,10 @@ from typing import Iterator, List, Union
|
|||
|
||||
from langchain_core.document_loaders import BaseLoader
|
||||
from langchain_core.documents import Document
|
||||
from open_webui.env import SRC_LOG_LEVELS
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
log.setLevel(SRC_LOG_LEVELS["RAG"])
|
||||
|
||||
|
||||
class ExternalWebLoader(BaseLoader):
|
||||
|
|
|
|||
|
|
@ -4,7 +4,6 @@ import ftfy
|
|||
import sys
|
||||
import json
|
||||
|
||||
from azure.identity import DefaultAzureCredential
|
||||
from langchain_community.document_loaders import (
|
||||
AzureAIDocumentIntelligenceLoader,
|
||||
BSHTMLLoader,
|
||||
|
|
@ -27,13 +26,13 @@ from open_webui.retrieval.loaders.external_document import ExternalDocumentLoade
|
|||
|
||||
from open_webui.retrieval.loaders.mistral import MistralLoader
|
||||
from open_webui.retrieval.loaders.datalab_marker import DatalabMarkerLoader
|
||||
from open_webui.retrieval.loaders.mineru import MinerULoader
|
||||
|
||||
|
||||
from open_webui.env import GLOBAL_LOG_LEVEL
|
||||
from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL
|
||||
|
||||
logging.basicConfig(stream=sys.stdout, level=GLOBAL_LOG_LEVEL)
|
||||
log = logging.getLogger(__name__)
|
||||
log.setLevel(SRC_LOG_LEVELS["RAG"])
|
||||
|
||||
known_source_ext = [
|
||||
"go",
|
||||
|
|
@ -131,9 +130,8 @@ class TikaLoader:
|
|||
|
||||
|
||||
class DoclingLoader:
|
||||
def __init__(self, url, api_key=None, file_path=None, mime_type=None, params=None):
|
||||
def __init__(self, url, file_path=None, mime_type=None, params=None):
|
||||
self.url = url.rstrip("/")
|
||||
self.api_key = api_key
|
||||
self.file_path = file_path
|
||||
self.mime_type = mime_type
|
||||
|
||||
|
|
@ -141,25 +139,51 @@ class DoclingLoader:
|
|||
|
||||
def load(self) -> list[Document]:
|
||||
with open(self.file_path, "rb") as f:
|
||||
headers = {}
|
||||
if self.api_key:
|
||||
headers["X-Api-Key"] = f"Bearer {self.api_key}"
|
||||
|
||||
r = requests.post(
|
||||
f"{self.url}/v1/convert/file",
|
||||
files={
|
||||
files = {
|
||||
"files": (
|
||||
self.file_path,
|
||||
f,
|
||||
self.mime_type or "application/octet-stream",
|
||||
)
|
||||
},
|
||||
data={
|
||||
"image_export_mode": "placeholder",
|
||||
**self.params,
|
||||
},
|
||||
headers=headers,
|
||||
}
|
||||
|
||||
params = {"image_export_mode": "placeholder", "table_mode": "accurate"}
|
||||
|
||||
if self.params:
|
||||
if self.params.get("do_picture_description"):
|
||||
params["do_picture_description"] = self.params.get(
|
||||
"do_picture_description"
|
||||
)
|
||||
|
||||
picture_description_mode = self.params.get(
|
||||
"picture_description_mode", ""
|
||||
).lower()
|
||||
|
||||
if picture_description_mode == "local" and self.params.get(
|
||||
"picture_description_local", {}
|
||||
):
|
||||
params["picture_description_local"] = json.dumps(
|
||||
self.params.get("picture_description_local", {})
|
||||
)
|
||||
|
||||
elif picture_description_mode == "api" and self.params.get(
|
||||
"picture_description_api", {}
|
||||
):
|
||||
params["picture_description_api"] = json.dumps(
|
||||
self.params.get("picture_description_api", {})
|
||||
)
|
||||
|
||||
if self.params.get("ocr_engine") and self.params.get("ocr_lang"):
|
||||
params["ocr_engine"] = self.params.get("ocr_engine")
|
||||
params["ocr_lang"] = [
|
||||
lang.strip()
|
||||
for lang in self.params.get("ocr_lang").split(",")
|
||||
if lang.strip()
|
||||
]
|
||||
|
||||
endpoint = f"{self.url}/v1/convert/file"
|
||||
r = requests.post(endpoint, files=files, data=params)
|
||||
|
||||
if r.ok:
|
||||
result = r.json()
|
||||
document_data = result.get("document", {})
|
||||
|
|
@ -168,6 +192,7 @@ class DoclingLoader:
|
|||
metadata = {"Content-Type": self.mime_type} if self.mime_type else {}
|
||||
|
||||
log.debug("Docling extracted text: %s", text)
|
||||
|
||||
return [Document(page_content=text, metadata=metadata)]
|
||||
else:
|
||||
error_msg = f"Error calling Docling API: {r.reason}"
|
||||
|
|
@ -184,7 +209,6 @@ class DoclingLoader:
|
|||
class Loader:
|
||||
def __init__(self, engine: str = "", **kwargs):
|
||||
self.engine = engine
|
||||
self.user = kwargs.get("user", None)
|
||||
self.kwargs = kwargs
|
||||
|
||||
def load(
|
||||
|
|
@ -221,7 +245,6 @@ class Loader:
|
|||
url=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_URL"),
|
||||
api_key=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_API_KEY"),
|
||||
mime_type=file_content_type,
|
||||
user=self.user,
|
||||
)
|
||||
elif self.engine == "tika" and self.kwargs.get("TIKA_SERVER_URL"):
|
||||
if self._is_text_file(file_ext, file_content_type):
|
||||
|
|
@ -230,6 +253,7 @@ class Loader:
|
|||
loader = TikaLoader(
|
||||
url=self.kwargs.get("TIKA_SERVER_URL"),
|
||||
file_path=file_path,
|
||||
mime_type=file_content_type,
|
||||
extract_images=self.kwargs.get("PDF_EXTRACT_IMAGES"),
|
||||
)
|
||||
elif (
|
||||
|
|
@ -259,7 +283,7 @@ class Loader:
|
|||
):
|
||||
api_base_url = self.kwargs.get("DATALAB_MARKER_API_BASE_URL", "")
|
||||
if not api_base_url or api_base_url.strip() == "":
|
||||
api_base_url = "https://www.datalab.to/api/v1/marker" # https://github.com/open-webui/open-webui/pull/16867#issuecomment-3218424349
|
||||
api_base_url = "https://www.datalab.to/api/v1"
|
||||
|
||||
loader = DatalabMarkerLoader(
|
||||
file_path=file_path,
|
||||
|
|
@ -296,7 +320,6 @@ class Loader:
|
|||
|
||||
loader = DoclingLoader(
|
||||
url=self.kwargs.get("DOCLING_SERVER_URL"),
|
||||
api_key=self.kwargs.get("DOCLING_API_KEY", None),
|
||||
file_path=file_path,
|
||||
mime_type=file_content_type,
|
||||
params=params,
|
||||
|
|
@ -304,48 +327,23 @@ class Loader:
|
|||
elif (
|
||||
self.engine == "document_intelligence"
|
||||
and self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT") != ""
|
||||
and self.kwargs.get("DOCUMENT_INTELLIGENCE_KEY") != ""
|
||||
and (
|
||||
file_ext in ["pdf", "docx", "ppt", "pptx"]
|
||||
file_ext in ["pdf", "xls", "xlsx", "docx", "ppt", "pptx"]
|
||||
or file_content_type
|
||||
in [
|
||||
"application/vnd.ms-excel",
|
||||
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
|
||||
"application/vnd.openxmlformats-officedocument.wordprocessingml.document",
|
||||
"application/vnd.ms-powerpoint",
|
||||
"application/vnd.openxmlformats-officedocument.presentationml.presentation",
|
||||
]
|
||||
)
|
||||
):
|
||||
if self.kwargs.get("DOCUMENT_INTELLIGENCE_KEY") != "":
|
||||
loader = AzureAIDocumentIntelligenceLoader(
|
||||
file_path=file_path,
|
||||
api_endpoint=self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT"),
|
||||
api_key=self.kwargs.get("DOCUMENT_INTELLIGENCE_KEY"),
|
||||
api_model=self.kwargs.get("DOCUMENT_INTELLIGENCE_MODEL"),
|
||||
)
|
||||
else:
|
||||
loader = AzureAIDocumentIntelligenceLoader(
|
||||
file_path=file_path,
|
||||
api_endpoint=self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT"),
|
||||
azure_credential=DefaultAzureCredential(),
|
||||
api_model=self.kwargs.get("DOCUMENT_INTELLIGENCE_MODEL"),
|
||||
)
|
||||
elif self.engine == "mineru" and file_ext in [
|
||||
"pdf"
|
||||
]: # MinerU currently only supports PDF
|
||||
|
||||
mineru_timeout = self.kwargs.get("MINERU_API_TIMEOUT", 300)
|
||||
if mineru_timeout:
|
||||
try:
|
||||
mineru_timeout = int(mineru_timeout)
|
||||
except ValueError:
|
||||
mineru_timeout = 300
|
||||
|
||||
loader = MinerULoader(
|
||||
file_path=file_path,
|
||||
api_mode=self.kwargs.get("MINERU_API_MODE", "local"),
|
||||
api_url=self.kwargs.get("MINERU_API_URL", "http://localhost:8000"),
|
||||
api_key=self.kwargs.get("MINERU_API_KEY", ""),
|
||||
params=self.kwargs.get("MINERU_PARAMS", {}),
|
||||
timeout=mineru_timeout,
|
||||
)
|
||||
elif (
|
||||
self.engine == "mistral_ocr"
|
||||
|
|
@ -354,9 +352,16 @@ class Loader:
|
|||
in ["pdf"] # Mistral OCR currently only supports PDF and images
|
||||
):
|
||||
loader = MistralLoader(
|
||||
base_url=self.kwargs.get("MISTRAL_OCR_API_BASE_URL"),
|
||||
api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"),
|
||||
file_path=file_path,
|
||||
api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
|
||||
)
|
||||
elif (
|
||||
self.engine == "external"
|
||||
and self.kwargs.get("MISTRAL_OCR_API_KEY") != ""
|
||||
and file_ext
|
||||
in ["pdf"] # Mistral OCR currently only supports PDF and images
|
||||
):
|
||||
loader = MistralLoader(
|
||||
api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
|
||||
)
|
||||
else:
|
||||
if file_ext == "pdf":
|
||||
|
|
|
|||
|
|
@@ -1,524 +0,0 @@
import os
import time
import requests
import logging
import tempfile
import zipfile
from typing import List, Optional
from langchain_core.documents import Document
from fastapi import HTTPException, status

log = logging.getLogger(__name__)


class MinerULoader:
    """
    MinerU document parser loader supporting both Cloud API and Local API modes.

    Cloud API: Uses MinerU managed service with async task-based processing
    Local API: Uses self-hosted MinerU API with synchronous processing
    """

    def __init__(
        self,
        file_path: str,
        api_mode: str = "local",
        api_url: str = "http://localhost:8000",
        api_key: str = "",
        params: Optional[dict] = None,
        timeout: Optional[int] = 300,
    ):
        self.file_path = file_path
        self.api_mode = api_mode.lower()
        self.api_url = api_url.rstrip("/")
        self.api_key = api_key
        self.timeout = timeout

        # Parse params dict with defaults (read via self.params so a None
        # params argument cannot raise an AttributeError)
        self.params = params or {}
        self.enable_ocr = self.params.get("enable_ocr", False)
        self.enable_formula = self.params.get("enable_formula", True)
        self.enable_table = self.params.get("enable_table", True)
        self.language = self.params.get("language", "en")
        self.model_version = self.params.get("model_version", "pipeline")

        self.page_ranges = self.params.pop("page_ranges", "")

        # Validate API mode
        if self.api_mode not in ["local", "cloud"]:
            raise ValueError(
                f"Invalid API mode: {self.api_mode}. Must be 'local' or 'cloud'"
            )

        # Validate Cloud API requirements
        if self.api_mode == "cloud" and not self.api_key:
            raise ValueError("API key is required for Cloud API mode")

    def load(self) -> List[Document]:
        """
        Main entry point for loading and parsing the document.
        Routes to Cloud or Local API based on api_mode.
        """
        try:
            if self.api_mode == "cloud":
                return self._load_cloud_api()
            else:
                return self._load_local_api()
        except Exception as e:
            log.error(f"Error loading document with MinerU: {e}")
            raise

    def _load_local_api(self) -> List[Document]:
        """
        Load document using Local API (synchronous).
        Posts file to /file_parse endpoint and gets immediate response.
        """
        log.info(f"Using MinerU Local API at {self.api_url}")

        filename = os.path.basename(self.file_path)

        # Build form data for Local API
        form_data = {
            **self.params,
            "return_md": "true",
        }

        # Page ranges (Local API uses start_page_id and end_page_id)
        if self.page_ranges:
            # For simplicity, if page_ranges is specified, log a warning
            # Full page range parsing would require parsing the string
            log.warning(
                f"Page ranges '{self.page_ranges}' specified but Local API uses different format. "
                "Consider using start_page_id/end_page_id parameters if needed."
            )

        try:
            with open(self.file_path, "rb") as f:
                files = {"files": (filename, f, "application/octet-stream")}

                log.info(f"Sending file to MinerU Local API: {filename}")
                log.debug(f"Local API parameters: {form_data}")

                response = requests.post(
                    f"{self.api_url}/file_parse",
                    data=form_data,
                    files=files,
                    timeout=self.timeout,
                )
                response.raise_for_status()

        except FileNotFoundError:
            raise HTTPException(
                status.HTTP_404_NOT_FOUND, detail=f"File not found: {self.file_path}"
            )
        except requests.Timeout:
            raise HTTPException(
                status.HTTP_504_GATEWAY_TIMEOUT,
                detail="MinerU Local API request timed out",
            )
        except requests.HTTPError as e:
            error_detail = f"MinerU Local API request failed: {e}"
            if e.response is not None:
                try:
                    error_data = e.response.json()
                    error_detail += f" - {error_data}"
                except:
                    error_detail += f" - {e.response.text}"
            raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error calling MinerU Local API: {str(e)}",
            )

        # Parse response
        try:
            result = response.json()
        except ValueError as e:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail=f"Invalid JSON response from MinerU Local API: {e}",
            )

        # Extract markdown content from response
        if "results" not in result:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail="MinerU Local API response missing 'results' field",
            )

        results = result["results"]
        if not results:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail="MinerU returned empty results",
            )

        # Get the first (and typically only) result
        file_result = list(results.values())[0]
        markdown_content = file_result.get("md_content", "")

        if not markdown_content:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail="MinerU returned empty markdown content",
            )

        log.info(f"Successfully parsed document with MinerU Local API: {filename}")

        # Create metadata
        metadata = {
            "source": filename,
            "api_mode": "local",
            "backend": result.get("backend", "unknown"),
            "version": result.get("version", "unknown"),
        }

        return [Document(page_content=markdown_content, metadata=metadata)]

    def _load_cloud_api(self) -> List[Document]:
        """
        Load document using Cloud API (asynchronous).
        Uses batch upload endpoint to avoid need for public file URLs.
        """
        log.info(f"Using MinerU Cloud API at {self.api_url}")

        filename = os.path.basename(self.file_path)

        # Step 1: Request presigned upload URL
        batch_id, upload_url = self._request_upload_url(filename)

        # Step 2: Upload file to presigned URL
        self._upload_to_presigned_url(upload_url)

        # Step 3: Poll for results
        result = self._poll_batch_status(batch_id, filename)

        # Step 4: Download and extract markdown from ZIP
        markdown_content = self._download_and_extract_zip(
            result["full_zip_url"], filename
        )

        log.info(f"Successfully parsed document with MinerU Cloud API: {filename}")

        # Create metadata
        metadata = {
            "source": filename,
            "api_mode": "cloud",
            "batch_id": batch_id,
        }

        return [Document(page_content=markdown_content, metadata=metadata)]

    def _request_upload_url(self, filename: str) -> tuple:
        """
        Request presigned upload URL from Cloud API.
        Returns (batch_id, upload_url).
        """
        headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json",
        }

        # Build request body
        request_body = {
            **self.params,
            "files": [
                {
                    "name": filename,
                    "is_ocr": self.enable_ocr,
                }
            ],
        }

        # Add page ranges if specified
        if self.page_ranges:
            request_body["files"][0]["page_ranges"] = self.page_ranges

        log.info(f"Requesting upload URL for: {filename}")
        log.debug(f"Cloud API request body: {request_body}")

        try:
            response = requests.post(
                f"{self.api_url}/file-urls/batch",
                headers=headers,
                json=request_body,
                timeout=30,
            )
            response.raise_for_status()
        except requests.HTTPError as e:
            error_detail = f"Failed to request upload URL: {e}"
            if e.response is not None:
                try:
                    error_data = e.response.json()
                    error_detail += f" - {error_data.get('msg', error_data)}"
                except:
                    error_detail += f" - {e.response.text}"
            raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error requesting upload URL: {str(e)}",
            )

        try:
            result = response.json()
        except ValueError as e:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail=f"Invalid JSON response: {e}",
            )

        # Check for API error response
        if result.get("code") != 0:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail=f"MinerU Cloud API error: {result.get('msg', 'Unknown error')}",
            )

        data = result.get("data", {})
        batch_id = data.get("batch_id")
        file_urls = data.get("file_urls", [])

        if not batch_id or not file_urls:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail="MinerU Cloud API response missing batch_id or file_urls",
            )

        upload_url = file_urls[0]
        log.info(f"Received upload URL for batch: {batch_id}")

        return batch_id, upload_url

    def _upload_to_presigned_url(self, upload_url: str) -> None:
        """
        Upload file to presigned URL (no authentication needed).
        """
        log.info("Uploading file to presigned URL")

        try:
            with open(self.file_path, "rb") as f:
                response = requests.put(
                    upload_url,
                    data=f,
                    timeout=self.timeout,
                )
                response.raise_for_status()
        except FileNotFoundError:
            raise HTTPException(
                status.HTTP_404_NOT_FOUND, detail=f"File not found: {self.file_path}"
            )
        except requests.Timeout:
            raise HTTPException(
                status.HTTP_504_GATEWAY_TIMEOUT,
                detail="File upload to presigned URL timed out",
            )
        except requests.HTTPError as e:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail=f"Failed to upload file to presigned URL: {e}",
            )
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error uploading file: {str(e)}",
            )

        log.info("File uploaded successfully")

    def _poll_batch_status(self, batch_id: str, filename: str) -> dict:
        """
        Poll batch status until completion.
        Returns the result dict for the file.
        """
        headers = {
            "Authorization": f"Bearer {self.api_key}",
        }

        max_iterations = 300  # 10 minutes max (2 seconds per iteration)
        poll_interval = 2  # seconds

        log.info(f"Polling batch status: {batch_id}")

        for iteration in range(max_iterations):
            try:
                response = requests.get(
                    f"{self.api_url}/extract-results/batch/{batch_id}",
                    headers=headers,
                    timeout=30,
                )
                response.raise_for_status()
            except requests.HTTPError as e:
                error_detail = f"Failed to poll batch status: {e}"
                if e.response is not None:
                    try:
                        error_data = e.response.json()
                        error_detail += f" - {error_data.get('msg', error_data)}"
                    except:
                        error_detail += f" - {e.response.text}"
                raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
            except Exception as e:
                raise HTTPException(
                    status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=f"Error polling batch status: {str(e)}",
                )

            try:
                result = response.json()
            except ValueError as e:
                raise HTTPException(
                    status.HTTP_502_BAD_GATEWAY,
                    detail=f"Invalid JSON response while polling: {e}",
                )

            # Check for API error response
            if result.get("code") != 0:
                raise HTTPException(
                    status.HTTP_400_BAD_REQUEST,
                    detail=f"MinerU Cloud API error: {result.get('msg', 'Unknown error')}",
                )

            data = result.get("data", {})
            extract_result = data.get("extract_result", [])

            # Find our file in the batch results
            file_result = None
            for item in extract_result:
                if item.get("file_name") == filename:
                    file_result = item
                    break

            if not file_result:
                raise HTTPException(
                    status.HTTP_502_BAD_GATEWAY,
                    detail=f"File {filename} not found in batch results",
                )

            state = file_result.get("state")

            if state == "done":
                log.info(f"Processing complete for {filename}")
                return file_result
            elif state == "failed":
                error_msg = file_result.get("err_msg", "Unknown error")
                raise HTTPException(
                    status.HTTP_400_BAD_REQUEST,
                    detail=f"MinerU processing failed: {error_msg}",
                )
            elif state in ["waiting-file", "pending", "running", "converting"]:
                # Still processing
                if iteration % 10 == 0:  # Log every 20 seconds
                    log.info(
                        f"Processing status: {state} (iteration {iteration + 1}/{max_iterations})"
                    )
                time.sleep(poll_interval)
            else:
                log.warning(f"Unknown state: {state}")
                time.sleep(poll_interval)

        # Timeout
        raise HTTPException(
            status.HTTP_504_GATEWAY_TIMEOUT,
            detail="MinerU processing timed out after 10 minutes",
        )

    def _download_and_extract_zip(self, zip_url: str, filename: str) -> str:
        """
        Download ZIP file from CDN and extract markdown content.
        Returns the markdown content as a string.
        """
        log.info(f"Downloading results from: {zip_url}")

        try:
            response = requests.get(zip_url, timeout=60)
            response.raise_for_status()
        except requests.HTTPError as e:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail=f"Failed to download results ZIP: {e}",
            )
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error downloading results: {str(e)}",
            )

        # Save ZIP to temporary file and extract
        try:
            with tempfile.NamedTemporaryFile(delete=False, suffix=".zip") as tmp_zip:
                tmp_zip.write(response.content)
                tmp_zip_path = tmp_zip.name

            with tempfile.TemporaryDirectory() as tmp_dir:
                # Extract ZIP
                with zipfile.ZipFile(tmp_zip_path, "r") as zip_ref:
                    zip_ref.extractall(tmp_dir)

                # Find markdown file - search recursively for any .md file
                markdown_content = None
                found_md_path = None

                # First, list all files in the ZIP for debugging
                all_files = []
                for root, dirs, files in os.walk(tmp_dir):
                    for file in files:
                        full_path = os.path.join(root, file)
                        all_files.append(full_path)
                        # Look for any .md file
                        if file.endswith(".md"):
                            found_md_path = full_path
                            log.info(f"Found markdown file at: {full_path}")
                            try:
                                with open(full_path, "r", encoding="utf-8") as f:
                                    markdown_content = f.read()
                                    if (
                                        markdown_content
                                    ):  # Use the first non-empty markdown file
                                        break
                            except Exception as e:
                                log.warning(f"Failed to read {full_path}: {e}")
                    if markdown_content:
                        break

                if markdown_content is None:
                    log.error(f"Available files in ZIP: {all_files}")
                    # Try to provide more helpful error message
                    md_files = [f for f in all_files if f.endswith(".md")]
                    if md_files:
                        error_msg = (
                            f"Found .md files but couldn't read them: {md_files}"
                        )
                    else:
                        error_msg = (
                            f"No .md files found in ZIP. Available files: {all_files}"
                        )
                    raise HTTPException(
                        status.HTTP_502_BAD_GATEWAY,
                        detail=error_msg,
                    )

            # Clean up temporary ZIP file
            os.unlink(tmp_zip_path)

        except zipfile.BadZipFile as e:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail=f"Invalid ZIP file received: {e}",
            )
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error extracting ZIP: {str(e)}",
            )

        if not markdown_content:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail="Extracted markdown content is empty",
            )

        log.info(
            f"Successfully extracted markdown content ({len(markdown_content)} characters)"
        )
        return markdown_content
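Note: a minimal usage sketch for the loader above, assuming a self-hosted MinerU server listening on localhost:8000 (the constructor defaults shown earlier); the file path is a placeholder.

loader = MinerULoader(
    file_path="/tmp/example.pdf",
    api_mode="local",
    api_url="http://localhost:8000",
    params={"enable_ocr": False, "language": "en"},
    timeout=300,
)
docs = loader.load()     # [Document(page_content=<markdown>, metadata={...})]
print(docs[0].metadata)  # {"source": "example.pdf", "api_mode": "local", ...}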
@@ -9,10 +9,11 @@ from typing import List, Dict, Any
from contextlib import asynccontextmanager

from langchain_core.documents import Document
from open_webui.env import GLOBAL_LOG_LEVEL
from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL

logging.basicConfig(stream=sys.stdout, level=GLOBAL_LOG_LEVEL)
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class MistralLoader:

@@ -29,9 +30,10 @@ class MistralLoader:
    - Enhanced error handling with retryable error classification
    """

    BASE_API_URL = "https://api.mistral.ai/v1"

    def __init__(
        self,
        base_url: str,
        api_key: str,
        file_path: str,
        timeout: int = 300,  # 5 minutes default

@@ -53,9 +55,6 @@ class MistralLoader:
        if not os.path.exists(file_path):
            raise FileNotFoundError(f"File not found at {file_path}")

        self.base_url = (
            base_url.rstrip("/") if base_url else "https://api.mistral.ai/v1"
        )
        self.api_key = api_key
        self.file_path = file_path
        self.timeout = timeout

@@ -241,7 +240,7 @@ class MistralLoader:
        in a context manager to minimize memory usage duration.
        """
        log.info("Uploading file to Mistral API")
        url = f"{self.base_url}/files"
        url = f"{self.BASE_API_URL}/files"

        def upload_request():
            # MEMORY OPTIMIZATION: Use context manager to minimize file handle lifetime

@@ -276,7 +275,7 @@ class MistralLoader:

    async def _upload_file_async(self, session: aiohttp.ClientSession) -> str:
        """Async file upload with streaming for better memory efficiency."""
        url = f"{self.base_url}/files"
        url = f"{self.BASE_API_URL}/files"

        async def upload_request():
            # Create multipart writer for streaming upload

@@ -322,7 +321,7 @@ class MistralLoader:
    def _get_signed_url(self, file_id: str) -> str:
        """Retrieves a temporary signed URL for the uploaded file (sync version)."""
        log.info(f"Getting signed URL for file ID: {file_id}")
        url = f"{self.base_url}/files/{file_id}/url"
        url = f"{self.BASE_API_URL}/files/{file_id}/url"
        params = {"expiry": 1}
        signed_url_headers = {**self.headers, "Accept": "application/json"}

@@ -347,7 +346,7 @@ class MistralLoader:
        self, session: aiohttp.ClientSession, file_id: str
    ) -> str:
        """Async signed URL retrieval."""
        url = f"{self.base_url}/files/{file_id}/url"
        url = f"{self.BASE_API_URL}/files/{file_id}/url"
        params = {"expiry": 1}

        headers = {**self.headers, "Accept": "application/json"}

@@ -374,7 +373,7 @@ class MistralLoader:
    def _process_ocr(self, signed_url: str) -> Dict[str, Any]:
        """Sends the signed URL to the OCR endpoint for processing (sync version)."""
        log.info("Processing OCR via Mistral API")
        url = f"{self.base_url}/ocr"
        url = f"{self.BASE_API_URL}/ocr"
        ocr_headers = {
            **self.headers,
            "Content-Type": "application/json",

@@ -408,7 +407,7 @@ class MistralLoader:
        self, session: aiohttp.ClientSession, signed_url: str
    ) -> Dict[str, Any]:
        """Async OCR processing with timing metrics."""
        url = f"{self.base_url}/ocr"
        url = f"{self.BASE_API_URL}/ocr"

        headers = {
            **self.headers,

@@ -447,7 +446,7 @@ class MistralLoader:
    def _delete_file(self, file_id: str) -> None:
        """Deletes the file from Mistral storage (sync version)."""
        log.info(f"Deleting uploaded file ID: {file_id}")
        url = f"{self.base_url}/files/{file_id}"
        url = f"{self.BASE_API_URL}/files/{file_id}"

        try:
            response = requests.delete(

@@ -468,7 +467,7 @@ class MistralLoader:
        async def delete_request():
            self._debug_log(f"Deleting file ID: {file_id}")
            async with session.delete(
                url=f"{self.base_url}/files/{file_id}",
                url=f"{self.BASE_API_URL}/files/{file_id}",
                headers=self.headers,
                timeout=aiohttp.ClientTimeout(
                    total=self.cleanup_timeout
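Note: the hunks above replace the hardcoded BASE_API_URL class constant with a configurable base_url that falls back to the public endpoint when empty (see __init__). A brief instantiation sketch under that change; the key and file path are placeholders.

loader = MistralLoader(
    base_url="https://api.mistral.ai/v1",  # or an internal OCR gateway URL
    api_key="sk-...",                      # placeholder
    file_path="/tmp/example.pdf",
)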
@@ -4,8 +4,10 @@ from typing import Iterator, List, Literal, Union

from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class TavilyLoader(BaseLoader):
@@ -4,8 +4,10 @@ from xml.etree.ElementTree import ParseError
from typing import Any, Dict, Generator, List, Optional, Sequence, Union
from urllib.parse import parse_qs, urlparse
from langchain_core.documents import Document
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])

ALLOWED_SCHEMES = {"http", "https"}
ALLOWED_NETLOCS = {

@@ -81,7 +83,6 @@ class YoutubeLoader:
                TranscriptsDisabled,
                YouTubeTranscriptApi,
            )
            from youtube_transcript_api.proxies import GenericProxyConfig
        except ImportError:
            raise ImportError(
                'Could not import "youtube_transcript_api" Python package. '

@@ -89,16 +90,18 @@ class YoutubeLoader:
            )

        if self.proxy_url:
            youtube_proxies = GenericProxyConfig(
                http_url=self.proxy_url, https_url=self.proxy_url
            )
            youtube_proxies = {
                "http": self.proxy_url,
                "https": self.proxy_url,
            }
            log.debug(f"Using proxy URL: {self.proxy_url[:14]}...")
        else:
            youtube_proxies = None

        transcript_api = YouTubeTranscriptApi(proxy_config=youtube_proxies)
        try:
            transcript_list = transcript_api.list(self.video_id)
            transcript_list = YouTubeTranscriptApi.list_transcripts(
                self.video_id, proxies=youtube_proxies
            )
        except Exception as e:
            log.exception("Loading YouTube transcript failed")
            return []

@@ -155,10 +158,3 @@ class YoutubeLoader:
                f"No transcript found for any of the specified languages: {languages_tried}. Verify if the video has transcripts, add more languages if needed."
            )
        raise NoTranscriptFound(self.video_id, self.language, list(transcript_list))

    async def aload(self) -> Generator[Document, None, None]:
        """Asynchronously load YouTube transcripts into `Document` objects."""
        import asyncio

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, self.load)
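Note: the hunk above migrates from the class-level `YouTubeTranscriptApi.list_transcripts(..., proxies=dict)` call to the instance-based API with `GenericProxyConfig`, as introduced in newer youtube-transcript-api releases (1.x). A short sketch of the new call shape; the proxy URL and video ID are placeholders.

from youtube_transcript_api import YouTubeTranscriptApi
from youtube_transcript_api.proxies import GenericProxyConfig

proxy = GenericProxyConfig(
    http_url="http://user:pass@proxy.example:8080",
    https_url="http://user:pass@proxy.example:8080",
)
api = YouTubeTranscriptApi(proxy_config=proxy)
transcripts = api.list("dQw4w9WgXcQ")  # replaces list_transcripts(...)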
@@ -5,10 +5,12 @@ import numpy as np
from colbert.infra import ColBERTConfig
from colbert.modeling.checkpoint import Checkpoint

from open_webui.env import SRC_LOG_LEVELS

from open_webui.retrieval.models.base_reranker import BaseReranker

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class ColBERT(BaseReranker):
@@ -4,12 +4,12 @@ from typing import Optional, List, Tuple
from urllib.parse import quote


from open_webui.env import ENABLE_FORWARD_USER_INFO_HEADERS
from open_webui.env import ENABLE_FORWARD_USER_INFO_HEADERS, SRC_LOG_LEVELS
from open_webui.retrieval.models.base_reranker import BaseReranker
from open_webui.utils.headers import include_user_info_headers


log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class ExternalReranker(BaseReranker):

@@ -18,12 +18,10 @@ class ExternalReranker(BaseReranker):
        api_key: str,
        url: str = "http://localhost:8080/v1/rerank",
        model: str = "reranker",
        timeout: Optional[int] = None,
    ):
        self.api_key = api_key
        self.url = url
        self.model = model
        self.timeout = timeout

    def predict(
        self, sentences: List[Tuple[str, str]], user=None

@@ -42,19 +40,23 @@ class ExternalReranker(BaseReranker):
        log.info(f"ExternalReranker:predict:model {self.model}")
        log.info(f"ExternalReranker:predict:query {query}")

        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {self.api_key}",
        }

        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
            headers = include_user_info_headers(headers, user)

        r = requests.post(
            f"{self.url}",
            headers=headers,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {self.api_key}",
                **(
                    {
                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
                        "X-OpenWebUI-User-Id": user.id,
                        "X-OpenWebUI-User-Email": user.email,
                        "X-OpenWebUI-User-Role": user.role,
                    }
                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
                    else {}
                ),
            },
            json=payload,
            timeout=self.timeout,
        )

        r.raise_for_status()
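Note: the hunk above replaces the inline user-header dict with the shared include_user_info_headers helper. A minimal sketch of the consolidated pattern; `api_key` and the shape of `user` are assumptions standing in for the surrounding instance state.

from open_webui.env import ENABLE_FORWARD_USER_INFO_HEADERS
from open_webui.utils.headers import include_user_info_headers

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",  # api_key: placeholder
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
    # adds the X-OpenWebUI-User-* headers previously built inline
    headers = include_user_info_headers(headers, user)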
File diff suppressed because it is too large
@@ -11,7 +11,7 @@ from open_webui.retrieval.vector.main import (
    SearchResult,
    GetResult,
)
from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.utils import stringify_metadata

from open_webui.config import (
    CHROMA_DATA_PATH,

@@ -24,8 +24,10 @@ from open_webui.config import (
    CHROMA_CLIENT_AUTH_PROVIDER,
    CHROMA_CLIENT_AUTH_CREDENTIALS,
)
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class ChromaClient(VectorDBBase):

@@ -144,7 +146,7 @@ class ChromaClient(VectorDBBase):
        ids = [item["id"] for item in items]
        documents = [item["text"] for item in items]
        embeddings = [item["vector"] for item in items]
        metadatas = [process_metadata(item["metadata"]) for item in items]
        metadatas = [stringify_metadata(item["metadata"]) for item in items]

        for batch in create_batches(
            api=self.client,

@@ -164,7 +166,7 @@ class ChromaClient(VectorDBBase):
        ids = [item["id"] for item in items]
        documents = [item["text"] for item in items]
        embeddings = [item["vector"] for item in items]
        metadatas = [process_metadata(item["metadata"]) for item in items]
        metadatas = [stringify_metadata(item["metadata"]) for item in items]

        collection.upsert(
            ids=ids, documents=documents, embeddings=embeddings, metadatas=metadatas
@@ -3,7 +3,7 @@ from typing import Optional
import ssl
from elasticsearch.helpers import bulk, scan

from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.utils import stringify_metadata
from open_webui.retrieval.vector.main import (
    VectorDBBase,
    VectorItem,

@@ -245,7 +245,7 @@ class ElasticsearchClient(VectorDBBase):
                    "collection": collection_name,
                    "vector": item["vector"],
                    "text": item["text"],
                    "metadata": process_metadata(item["metadata"]),
                    "metadata": stringify_metadata(item["metadata"]),
                },
            }
            for item in batch

@@ -266,7 +266,7 @@ class ElasticsearchClient(VectorDBBase):
                    "collection": collection_name,
                    "vector": item["vector"],
                    "text": item["text"],
                    "metadata": process_metadata(item["metadata"]),
                    "metadata": stringify_metadata(item["metadata"]),
                },
                "doc_as_upsert": True,
            }
@@ -6,7 +6,7 @@ import json
import logging
from typing import Optional

from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.utils import stringify_metadata
from open_webui.retrieval.vector.main import (
    VectorDBBase,
    VectorItem,

@@ -22,11 +22,11 @@ from open_webui.config import (
    MILVUS_HNSW_M,
    MILVUS_HNSW_EFCONSTRUCTION,
    MILVUS_IVF_FLAT_NLIST,
    MILVUS_DISKANN_MAX_DEGREE,
    MILVUS_DISKANN_SEARCH_LIST_SIZE,
)
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class MilvusClient(VectorDBBase):

@@ -131,18 +131,12 @@ class MilvusClient(VectorDBBase):
        elif index_type == "IVF_FLAT":
            index_creation_params = {"nlist": MILVUS_IVF_FLAT_NLIST}
            log.info(f"IVF_FLAT params: {index_creation_params}")
        elif index_type == "DISKANN":
            index_creation_params = {
                "max_degree": MILVUS_DISKANN_MAX_DEGREE,
                "search_list_size": MILVUS_DISKANN_SEARCH_LIST_SIZE,
            }
            log.info(f"DISKANN params: {index_creation_params}")
        elif index_type in ["FLAT", "AUTOINDEX"]:
            log.info(f"Using {index_type} index with no specific build-time params.")
        else:
            log.warning(
                f"Unsupported MILVUS_INDEX_TYPE: '{index_type}'. "
                f"Supported types: HNSW, IVF_FLAT, DISKANN, FLAT, AUTOINDEX. "
                f"Supported types: HNSW, IVF_FLAT, FLAT, AUTOINDEX. "
                f"Milvus will use its default for the collection if this type is not directly supported for index creation."
            )
            # For unsupported types, pass the type directly to Milvus; it might handle it or use a default.

@@ -195,27 +189,26 @@ class MilvusClient(VectorDBBase):
        )
        return self._result_to_search_result(result)

    def query(self, collection_name: str, filter: dict, limit: int = -1):
    def query(self, collection_name: str, filter: dict, limit: Optional[int] = None):
        connections.connect(uri=MILVUS_URI, token=MILVUS_TOKEN, db_name=MILVUS_DB)

        # Construct the filter string for querying
        collection_name = collection_name.replace("-", "_")
        if not self.has_collection(collection_name):
            log.warning(
                f"Query attempted on non-existent collection: {self.collection_prefix}_{collection_name}"
            )
            return None

        filter_expressions = []
        for key, value in filter.items():
            if isinstance(value, str):
                filter_expressions.append(f'metadata["{key}"] == "{value}"')
            else:
                filter_expressions.append(f'metadata["{key}"] == {value}')

        filter_string = " && ".join(filter_expressions)
        filter_string = " && ".join(
            [
                f'metadata["{key}"] == {json.dumps(value)}'
                for key, value in filter.items()
            ]
        )

        collection = Collection(f"{self.collection_prefix}_{collection_name}")
        collection.load()
        all_results = []

        try:
            log.info(

@@ -223,25 +216,24 @@ class MilvusClient(VectorDBBase):
            )

            iterator = collection.query_iterator(
                expr=filter_string,
                filter=filter_string,
                output_fields=[
                    "id",
                    "data",
                    "metadata",
                ],
                limit=limit if limit > 0 else -1,
                limit=limit,  # Pass the limit directly; None means no limit.
            )

            all_results = []
            while True:
                batch = iterator.next()
                if not batch:
                result = iterator.next()
                if not result:
                    iterator.close()
                    break
                all_results.extend(batch)
                all_results += result

            log.debug(f"Total results from query: {len(all_results)}")
            return self._result_to_get_result([all_results] if all_results else [[]])
            log.info(f"Total results from query: {len(all_results)}")
            return self._result_to_get_result([all_results])

        except Exception as e:
            log.exception(

@@ -257,7 +249,7 @@ class MilvusClient(VectorDBBase):
        )
        # Using query with a trivial filter to get all items.
        # This will use the paginated query logic.
        return self.query(collection_name=collection_name, filter={}, limit=-1)
        return self.query(collection_name=collection_name, filter={}, limit=None)

    def insert(self, collection_name: str, items: list[VectorItem]):
        # Insert the items into the collection, if the collection does not exist, it will be created.

@@ -289,7 +281,7 @@ class MilvusClient(VectorDBBase):
                "id": item["id"],
                "vector": item["vector"],
                "data": {"text": item["text"]},
                "metadata": process_metadata(item["metadata"]),
                "metadata": stringify_metadata(item["metadata"]),
            }
            for item in items
        ],

@@ -325,7 +317,7 @@ class MilvusClient(VectorDBBase):
                "id": item["id"],
                "vector": item["vector"],
                "data": {"text": item["text"]},
                "metadata": process_metadata(item["metadata"]),
                "metadata": stringify_metadata(item["metadata"]),
            }
            for item in items
        ],
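Note: the rewritten filter construction above uses json.dumps so that string values are quoted (with embedded quotes escaped) while numbers and booleans are rendered literally, replacing the manual isinstance branching. A quick illustration derived directly from that expression:

import json

filter = {"file_id": "abc-123", "page": 4}
filter_string = " && ".join(
    f'metadata["{key}"] == {json.dumps(value)}' for key, value in filter.items()
)
# -> metadata["file_id"] == "abc-123" && metadata["page"] == 4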
@@ -1,287 +0,0 @@
import logging
from typing import Optional, Tuple, List, Dict, Any

from open_webui.config import (
    MILVUS_URI,
    MILVUS_TOKEN,
    MILVUS_DB,
    MILVUS_COLLECTION_PREFIX,
    MILVUS_INDEX_TYPE,
    MILVUS_METRIC_TYPE,
    MILVUS_HNSW_M,
    MILVUS_HNSW_EFCONSTRUCTION,
    MILVUS_IVF_FLAT_NLIST,
)
from open_webui.retrieval.vector.main import (
    GetResult,
    SearchResult,
    VectorDBBase,
    VectorItem,
)
from pymilvus import (
    connections,
    utility,
    Collection,
    CollectionSchema,
    FieldSchema,
    DataType,
)

log = logging.getLogger(__name__)

RESOURCE_ID_FIELD = "resource_id"


class MilvusClient(VectorDBBase):
    def __init__(self):
        # Milvus collection names can only contain numbers, letters, and underscores.
        self.collection_prefix = MILVUS_COLLECTION_PREFIX.replace("-", "_")
        connections.connect(
            alias="default",
            uri=MILVUS_URI,
            token=MILVUS_TOKEN,
            db_name=MILVUS_DB,
        )

        # Main collection types for multi-tenancy
        self.MEMORY_COLLECTION = f"{self.collection_prefix}_memories"
        self.KNOWLEDGE_COLLECTION = f"{self.collection_prefix}_knowledge"
        self.FILE_COLLECTION = f"{self.collection_prefix}_files"
        self.WEB_SEARCH_COLLECTION = f"{self.collection_prefix}_web_search"
        self.HASH_BASED_COLLECTION = f"{self.collection_prefix}_hash_based"
        self.shared_collections = [
            self.MEMORY_COLLECTION,
            self.KNOWLEDGE_COLLECTION,
            self.FILE_COLLECTION,
            self.WEB_SEARCH_COLLECTION,
            self.HASH_BASED_COLLECTION,
        ]

    def _get_collection_and_resource_id(self, collection_name: str) -> Tuple[str, str]:
        """
        Maps the traditional collection name to multi-tenant collection and resource ID.

        WARNING: This mapping relies on current Open WebUI naming conventions for
        collection names. If Open WebUI changes how it generates collection names
        (e.g., "user-memory-" prefix, "file-" prefix, web search patterns, or hash
        formats), this mapping will break and route data to incorrect collections.
        POTENTIALLY CAUSING HUGE DATA CORRUPTION, DATA CONSISTENCY ISSUES AND INCORRECT
        DATA MAPPING INSIDE THE DATABASE.
        """
        resource_id = collection_name

        if collection_name.startswith("user-memory-"):
            return self.MEMORY_COLLECTION, resource_id
        elif collection_name.startswith("file-"):
            return self.FILE_COLLECTION, resource_id
        elif collection_name.startswith("web-search-"):
            return self.WEB_SEARCH_COLLECTION, resource_id
        elif len(collection_name) == 63 and all(
            c in "0123456789abcdef" for c in collection_name
        ):
            return self.HASH_BASED_COLLECTION, resource_id
        else:
            return self.KNOWLEDGE_COLLECTION, resource_id

    def _create_shared_collection(self, mt_collection_name: str, dimension: int):
        fields = [
            FieldSchema(
                name="id",
                dtype=DataType.VARCHAR,
                is_primary=True,
                auto_id=False,
                max_length=36,
            ),
            FieldSchema(name="vector", dtype=DataType.FLOAT_VECTOR, dim=dimension),
            FieldSchema(name="text", dtype=DataType.VARCHAR, max_length=65535),
            FieldSchema(name="metadata", dtype=DataType.JSON),
            FieldSchema(name=RESOURCE_ID_FIELD, dtype=DataType.VARCHAR, max_length=255),
        ]
        schema = CollectionSchema(fields, "Shared collection for multi-tenancy")
        collection = Collection(mt_collection_name, schema)

        index_params = {
            "metric_type": MILVUS_METRIC_TYPE,
            "index_type": MILVUS_INDEX_TYPE,
            "params": {},
        }
        if MILVUS_INDEX_TYPE == "HNSW":
            index_params["params"] = {
                "M": MILVUS_HNSW_M,
                "efConstruction": MILVUS_HNSW_EFCONSTRUCTION,
            }
        elif MILVUS_INDEX_TYPE == "IVF_FLAT":
            index_params["params"] = {"nlist": MILVUS_IVF_FLAT_NLIST}

        collection.create_index("vector", index_params)
        collection.create_index(RESOURCE_ID_FIELD)
        log.info(f"Created shared collection: {mt_collection_name}")
        return collection

    def _ensure_collection(self, mt_collection_name: str, dimension: int):
        if not utility.has_collection(mt_collection_name):
            self._create_shared_collection(mt_collection_name, dimension)

    def has_collection(self, collection_name: str) -> bool:
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return False

        collection = Collection(mt_collection)
        collection.load()
        res = collection.query(expr=f"{RESOURCE_ID_FIELD} == '{resource_id}'", limit=1)
        return len(res) > 0

    def upsert(self, collection_name: str, items: List[VectorItem]):
        if not items:
            return
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        dimension = len(items[0]["vector"])
        self._ensure_collection(mt_collection, dimension)
        collection = Collection(mt_collection)

        entities = [
            {
                "id": item["id"],
                "vector": item["vector"],
                "text": item["text"],
                "metadata": item["metadata"],
                RESOURCE_ID_FIELD: resource_id,
            }
            for item in items
        ]
        collection.insert(entities)

    def search(
        self, collection_name: str, vectors: List[List[float]], limit: int
    ) -> Optional[SearchResult]:
        if not vectors:
            return None

        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return None

        collection = Collection(mt_collection)
        collection.load()

        search_params = {"metric_type": MILVUS_METRIC_TYPE, "params": {}}
        results = collection.search(
            data=vectors,
            anns_field="vector",
            param=search_params,
            limit=limit,
            expr=f"{RESOURCE_ID_FIELD} == '{resource_id}'",
            output_fields=["id", "text", "metadata"],
        )

        ids, documents, metadatas, distances = [], [], [], []
        for hits in results:
            batch_ids, batch_docs, batch_metadatas, batch_dists = [], [], [], []
            for hit in hits:
                batch_ids.append(hit.entity.get("id"))
                batch_docs.append(hit.entity.get("text"))
                batch_metadatas.append(hit.entity.get("metadata"))
                batch_dists.append(hit.distance)
            ids.append(batch_ids)
            documents.append(batch_docs)
            metadatas.append(batch_metadatas)
            distances.append(batch_dists)

        return SearchResult(
            ids=ids, documents=documents, metadatas=metadatas, distances=distances
        )

    def delete(
        self,
        collection_name: str,
        ids: Optional[List[str]] = None,
        filter: Optional[Dict[str, Any]] = None,
    ):
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return

        collection = Collection(mt_collection)

        # Build expression
        expr = [f"{RESOURCE_ID_FIELD} == '{resource_id}'"]
        if ids:
            # Milvus expects a string list for 'in' operator
            id_list_str = ", ".join([f"'{id_val}'" for id_val in ids])
            expr.append(f"id in [{id_list_str}]")

        if filter:
            for key, value in filter.items():
                expr.append(f"metadata['{key}'] == '{value}'")

        collection.delete(" and ".join(expr))

    def reset(self):
        for collection_name in self.shared_collections:
            if utility.has_collection(collection_name):
                utility.drop_collection(collection_name)

    def delete_collection(self, collection_name: str):
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return

        collection = Collection(mt_collection)
        collection.delete(f"{RESOURCE_ID_FIELD} == '{resource_id}'")

    def query(
        self, collection_name: str, filter: Dict[str, Any], limit: Optional[int] = None
    ) -> Optional[GetResult]:
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return None

        collection = Collection(mt_collection)
        collection.load()

        expr = [f"{RESOURCE_ID_FIELD} == '{resource_id}'"]
        if filter:
            for key, value in filter.items():
                if isinstance(value, str):
                    expr.append(f"metadata['{key}'] == '{value}'")
                else:
                    expr.append(f"metadata['{key}'] == {value}")

        iterator = collection.query_iterator(
            expr=" and ".join(expr),
            output_fields=["id", "text", "metadata"],
            limit=limit if limit else -1,
        )

        all_results = []
        while True:
            batch = iterator.next()
            if not batch:
                iterator.close()
                break
            all_results.extend(batch)

        ids = [res["id"] for res in all_results]
        documents = [res["text"] for res in all_results]
        metadatas = [res["metadata"] for res in all_results]

        return GetResult(ids=[ids], documents=[documents], metadatas=[metadatas])

    def get(self, collection_name: str) -> Optional[GetResult]:
        return self.query(collection_name, filter={}, limit=None)

    def insert(self, collection_name: str, items: List[VectorItem]):
        return self.upsert(collection_name, items)
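Note: to make the routing in _get_collection_and_resource_id above concrete, a small illustration, assuming MILVUS_COLLECTION_PREFIX resolves to "open_webui"; the collection names are hypothetical.

client = MilvusClient()
client._get_collection_and_resource_id("user-memory-42")
#   -> ("open_webui_memories", "user-memory-42")
client._get_collection_and_resource_id("file-9f8e")
#   -> ("open_webui_files", "file-9f8e")
client._get_collection_and_resource_id("a" * 63)  # 63 lowercase hex chars
#   -> ("open_webui_hash_based", "aaa...a")
client._get_collection_and_resource_id("my-knowledge-base")
#   -> ("open_webui_knowledge", "my-knowledge-base")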
@@ -2,7 +2,7 @@ from opensearchpy import OpenSearch
from opensearchpy.helpers import bulk
from typing import Optional

from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.utils import stringify_metadata
from open_webui.retrieval.vector.main import (
    VectorDBBase,
    VectorItem,

@@ -201,7 +201,7 @@ class OpenSearchClient(VectorDBBase):
                "_source": {
                    "vector": item["vector"],
                    "text": item["text"],
                    "metadata": process_metadata(item["metadata"]),
                    "metadata": stringify_metadata(item["metadata"]),
                },
            }
            for item in batch

@@ -223,7 +223,7 @@ class OpenSearchClient(VectorDBBase):
                "doc": {
                    "vector": item["vector"],
                    "text": item["text"],
                    "metadata": process_metadata(item["metadata"]),
                    "metadata": stringify_metadata(item["metadata"]),
                },
                "doc_as_upsert": True,
            }
@@ -55,8 +55,10 @@ from open_webui.config import (
    ORACLE_DB_POOL_MAX,
    ORACLE_DB_POOL_INCREMENT,
)
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class Oracle23aiClient(VectorDBBase):

@@ -715,7 +717,7 @@ class Oracle23aiClient(VectorDBBase):
        )

        try:
            limit = 1000  # Hardcoded limit for get operation
            limit = limit or 1000

            with self.get_connection() as connection:
                with connection.cursor() as cursor:
@@ -1,4 +1,4 @@
from typing import Optional, List, Dict, Any, Tuple
from typing import Optional, List, Dict, Any
import logging
import json
from sqlalchemy import (

@@ -22,12 +22,12 @@ from sqlalchemy.pool import NullPool, QueuePool

from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker
from sqlalchemy.dialects.postgresql import JSONB, array
from pgvector.sqlalchemy import Vector, HALFVEC
from pgvector.sqlalchemy import Vector
from sqlalchemy.ext.mutable import MutableDict
from sqlalchemy.exc import NoSuchTableError


from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.utils import stringify_metadata
from open_webui.retrieval.vector.main import (
    VectorDBBase,
    VectorItem,

@@ -37,29 +37,21 @@ from open_webui.retrieval.vector.main import (
from open_webui.config import (
    PGVECTOR_DB_URL,
    PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH,
    PGVECTOR_CREATE_EXTENSION,
    PGVECTOR_PGCRYPTO,
    PGVECTOR_PGCRYPTO_KEY,
    PGVECTOR_POOL_SIZE,
    PGVECTOR_POOL_MAX_OVERFLOW,
    PGVECTOR_POOL_TIMEOUT,
    PGVECTOR_POOL_RECYCLE,
    PGVECTOR_INDEX_METHOD,
    PGVECTOR_HNSW_M,
    PGVECTOR_HNSW_EF_CONSTRUCTION,
    PGVECTOR_IVFFLAT_LISTS,
    PGVECTOR_USE_HALFVEC,
)

from open_webui.env import SRC_LOG_LEVELS

VECTOR_LENGTH = PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH
USE_HALFVEC = PGVECTOR_USE_HALFVEC

VECTOR_TYPE_FACTORY = HALFVEC if USE_HALFVEC else Vector
VECTOR_OPCLASS = "halfvec_cosine_ops" if USE_HALFVEC else "vector_cosine_ops"
Base = declarative_base()

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def pgcrypto_encrypt(val, key):

@@ -74,7 +66,7 @@ class DocumentChunk(Base):
    __tablename__ = "document_chunk"

    id = Column(Text, primary_key=True)
    vector = Column(VECTOR_TYPE_FACTORY(dim=VECTOR_LENGTH), nullable=True)
    vector = Column(Vector(dim=VECTOR_LENGTH), nullable=True)
    collection_name = Column(Text, nullable=False)

    if PGVECTOR_PGCRYPTO:

@@ -120,7 +112,6 @@ class PgvectorClient(VectorDBBase):
        try:
            # Ensure the pgvector extension is available
            # Use a conditional check to avoid permission issues on Azure PostgreSQL
            if PGVECTOR_CREATE_EXTENSION:
                self.session.execute(
                    text(
                        """

@@ -164,9 +155,13 @@ class PgvectorClient(VectorDBBase):
            connection = self.session.connection()
            Base.metadata.create_all(bind=connection)

            index_method, index_options = self._vector_index_configuration()
            self._ensure_vector_index(index_method, index_options)

            # Create an index on the vector column if it doesn't exist
            self.session.execute(
                text(
                    "CREATE INDEX IF NOT EXISTS idx_document_chunk_vector "
                    "ON document_chunk USING ivfflat (vector vector_cosine_ops) WITH (lists = 100);"
                )
            )
            self.session.execute(
                text(
                    "CREATE INDEX IF NOT EXISTS idx_document_chunk_collection_name "

@@ -180,78 +175,6 @@ class PgvectorClient(VectorDBBase):
            log.exception(f"Error during initialization: {e}")
            raise

    @staticmethod
    def _extract_index_method(index_def: Optional[str]) -> Optional[str]:
        if not index_def:
            return None
        try:
            after_using = index_def.lower().split("using ", 1)[1]
            return after_using.split()[0]
        except (IndexError, AttributeError):
            return None

    def _vector_index_configuration(self) -> Tuple[str, str]:
        if PGVECTOR_INDEX_METHOD:
            index_method = PGVECTOR_INDEX_METHOD
            log.info(
                "Using vector index method '%s' from PGVECTOR_INDEX_METHOD.",
                index_method,
            )
        elif USE_HALFVEC:
            index_method = "hnsw"
            log.info(
                "VECTOR_LENGTH=%s exceeds 2000; using halfvec column type with hnsw index.",
                VECTOR_LENGTH,
            )
        else:
            index_method = "ivfflat"

        if index_method == "hnsw":
            index_options = f"WITH (m = {PGVECTOR_HNSW_M}, ef_construction = {PGVECTOR_HNSW_EF_CONSTRUCTION})"
        else:
            index_options = f"WITH (lists = {PGVECTOR_IVFFLAT_LISTS})"

        return index_method, index_options

    def _ensure_vector_index(self, index_method: str, index_options: str) -> None:
        index_name = "idx_document_chunk_vector"
        existing_index_def = self.session.execute(
            text(
                """
                SELECT indexdef
                FROM pg_indexes
                WHERE schemaname = current_schema()
                  AND tablename = 'document_chunk'
                  AND indexname = :index_name
                """
            ),
            {"index_name": index_name},
        ).scalar()

        existing_method = self._extract_index_method(existing_index_def)
        if existing_method and existing_method != index_method:
            raise RuntimeError(
                f"Existing pgvector index '{index_name}' uses method '{existing_method}' but configuration now "
                f"requires '{index_method}'. Automatic rebuild is disabled to prevent long-running maintenance. "
                "Drop the index manually (optionally after tuning maintenance_work_mem/max_parallel_maintenance_workers) "
                "and recreate it with the new method before restarting Open WebUI."
            )

        if not existing_index_def:
            index_sql = (
                f"CREATE INDEX IF NOT EXISTS {index_name} "
                f"ON document_chunk USING {index_method} (vector {VECTOR_OPCLASS})"
            )
            if index_options:
                index_sql = f"{index_sql} {index_options}"
            self.session.execute(text(index_sql))
            log.info(
                "Ensured vector index '%s' using %s%s.",
                index_name,
                index_method,
                f" {index_options}" if index_options else "",
            )

    def check_vector_length(self) -> None:
        """
        Check if the VECTOR_LENGTH matches the existing vector column dimension in the database.

@@ -271,20 +194,17 @@ class PgvectorClient(VectorDBBase):
        if "vector" in document_chunk_table.columns:
            vector_column = document_chunk_table.columns["vector"]
            vector_type = vector_column.type
            expected_type = HALFVEC if USE_HALFVEC else Vector

            if not isinstance(vector_type, expected_type):
                raise Exception(
                    "The 'vector' column type does not match the expected type "
                    f"('{expected_type.__name__}') for VECTOR_LENGTH {VECTOR_LENGTH}."
                )

            db_vector_length = getattr(vector_type, "dim", None)
            if db_vector_length is not None and db_vector_length != VECTOR_LENGTH:
            if isinstance(vector_type, Vector):
                db_vector_length = vector_type.dim
                if db_vector_length != VECTOR_LENGTH:
                    raise Exception(
                        f"VECTOR_LENGTH {VECTOR_LENGTH} does not match existing vector column dimension {db_vector_length}. "
                        "Cannot change vector size after initialization without migrating the data."
                    )
            else:
                raise Exception(
                    "The 'vector' column exists but is not of type 'Vector'."
                )
        else:
            raise Exception(
                "The 'vector' column does not exist in the 'document_chunk' table."

@@ -343,7 +263,7 @@ class PgvectorClient(VectorDBBase):
                    vector=vector,
                    collection_name=collection_name,
                    text=item["text"],
                    vmetadata=process_metadata(item["metadata"]),
                    vmetadata=stringify_metadata(item["metadata"]),
                )
                new_items.append(new_chunk)
            self.session.bulk_save_objects(new_items)

@@ -401,7 +321,7 @@ class PgvectorClient(VectorDBBase):
                if existing:
                    existing.vector = vector
                    existing.text = item["text"]
                    existing.vmetadata = process_metadata(item["metadata"])
                    existing.vmetadata = stringify_metadata(item["metadata"])
                    existing.collection_name = (
                        collection_name  # Update collection_name if necessary
                    )

@@ -411,7 +331,7 @@ class PgvectorClient(VectorDBBase):
                        vector=vector,
                        collection_name=collection_name,
                        text=item["text"],
                        vmetadata=process_metadata(item["metadata"]),
                        vmetadata=stringify_metadata(item["metadata"]),
                    )
                    self.session.add(new_chunk)
            self.session.commit()

@@ -438,11 +358,11 @@ class PgvectorClient(VectorDBBase):
        num_queries = len(vectors)

        def vector_expr(vector):
            return cast(array(vector), VECTOR_TYPE_FACTORY(VECTOR_LENGTH))
            return cast(array(vector), Vector(VECTOR_LENGTH))

        # Create the values for query vectors
        qid_col = column("qid", Integer)
        q_vector_col = column("q_vector", VECTOR_TYPE_FACTORY(VECTOR_LENGTH))
        q_vector_col = column("q_vector", Vector(VECTOR_LENGTH))
        query_vectors = (
            values(qid_col, q_vector_col)
            .data(
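Note: to show what _ensure_vector_index above actually emits, a sketch of the composed statement when USE_HALFVEC is true; the m / ef_construction values stand in for PGVECTOR_HNSW_M and PGVECTOR_HNSW_EF_CONSTRUCTION and are placeholders, not confirmed defaults.

# Composed by _ensure_vector_index with index_method="hnsw",
# VECTOR_OPCLASS="halfvec_cosine_ops":
index_sql = (
    "CREATE INDEX IF NOT EXISTS idx_document_chunk_vector "
    "ON document_chunk USING hnsw (vector halfvec_cosine_ops) "
    "WITH (m = 16, ef_construction = 64)"
)
# With USE_HALFVEC false and no PGVECTOR_INDEX_METHOD override, the method
# falls back to ivfflat with "WITH (lists = <PGVECTOR_IVFFLAT_LISTS>)".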
@@ -31,13 +31,15 @@ from open_webui.config import (
    PINECONE_METRIC,
    PINECONE_CLOUD,
)
from open_webui.retrieval.vector.utils import process_metadata
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.vector.utils import stringify_metadata


NO_LIMIT = 10000  # Reasonable limit to avoid overwhelming the system
BATCH_SIZE = 100  # Recommended batch size for Pinecone operations

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class PineconeClient(VectorDBBase):

@@ -183,7 +185,7 @@ class PineconeClient(VectorDBBase):
            point = {
                "id": item["id"],
                "values": item["vector"],
                "metadata": process_metadata(metadata),
                "metadata": stringify_metadata(metadata),
            }
            points.append(point)
        return points
@@ -22,10 +22,12 @@ from open_webui.config import (
    QDRANT_TIMEOUT,
    QDRANT_HNSW_M,
)
from open_webui.env import SRC_LOG_LEVELS

NO_LIMIT = 999999999

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class QdrantClient(VectorDBBase):
@@ -13,6 +13,7 @@ from open_webui.config import (
    QDRANT_TIMEOUT,
    QDRANT_HNSW_M,
)
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.vector.main import (
    GetResult,
    SearchResult,

@@ -29,6 +30,7 @@ TENANT_ID_FIELD = "tenant_id"
DEFAULT_DIMENSION = 384

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def _tenant_filter(tenant_id: str) -> models.FieldCondition:

@@ -103,13 +105,6 @@ class QdrantClient(VectorDBBase):

        Returns:
            tuple: (collection_name, tenant_id)

        WARNING: This mapping relies on current Open WebUI naming conventions for
        collection names. If Open WebUI changes how it generates collection names
        (e.g., "user-memory-" prefix, "file-" prefix, web search patterns, or hash
        formats), this mapping will break and route data to incorrect collections.
        POTENTIALLY CAUSING HUGE DATA CORRUPTION, DATA CONSISTENCY ISSUES AND INCORRECT
        DATA MAPPING INSIDE THE DATABASE.
        """
        # Check for user memory collections
        tenant_id = collection_name
@@ -1,4 +1,4 @@
from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.utils import stringify_metadata
from open_webui.retrieval.vector.main import (
    VectorDBBase,
    VectorItem,

@@ -6,11 +6,13 @@ from open_webui.retrieval.vector.main import (
    SearchResult,
)
from open_webui.config import S3_VECTOR_BUCKET_NAME, S3_VECTOR_REGION
from open_webui.env import SRC_LOG_LEVELS
from typing import List, Optional, Dict, Any, Union
import logging
import boto3

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


class S3VectorClient(VectorDBBase):

@@ -115,16 +117,15 @@ class S3VectorClient(VectorDBBase):

    def has_collection(self, collection_name: str) -> bool:
        """
        Check if a vector index exists using direct lookup.
        This avoids pagination issues with list_indexes() and is significantly faster.
        Check if a vector index (collection) exists in the S3 vector bucket.
        """

        try:
            self.client.get_index(
                vectorBucketName=self.bucket_name, indexName=collection_name
            )
            return True
            response = self.client.list_indexes(vectorBucketName=self.bucket_name)
            indexes = response.get("indexes", [])
            return any(idx.get("indexName") == collection_name for idx in indexes)
        except Exception as e:
            log.error(f"Error checking if index '{collection_name}' exists: {e}")
            log.error(f"Error listing indexes: {e}")
            return False

    def delete_collection(self, collection_name: str) -> None:

@@ -184,7 +185,7 @@ class S3VectorClient(VectorDBBase):
            metadata["text"] = item["text"]

            # Convert metadata to string format for consistency
            metadata = process_metadata(metadata)
            metadata = stringify_metadata(metadata)

            # Filter metadata to comply with S3 Vector API limit of 10 keys
            metadata = self._filter_metadata(metadata, item["id"])

@@ -255,7 +256,7 @@ class S3VectorClient(VectorDBBase):
            metadata["text"] = item["text"]

            # Convert metadata to string format for consistency
            metadata = process_metadata(metadata)
            metadata = stringify_metadata(metadata)

            # Filter metadata to comply with S3 Vector API limit of 10 keys
            metadata = self._filter_metadata(metadata, item["id"])
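
The main-side docstring gives the rationale for the direct get_index lookup: list_indexes pages its results, so the single-call scan on the v0.6.24 side can miss an index that lands beyond the first page. A pagination-safe version of the listing approach would have to follow the continuation token, roughly as sketched below (treat nextToken as an assumption about the S3 Vectors ListIndexes API):

def has_collection_via_listing(client, bucket_name: str, collection_name: str) -> bool:
    # Walk every page of list_indexes instead of trusting the first one.
    token = None
    while True:
        kwargs = {"vectorBucketName": bucket_name}
        if token:
            kwargs["nextToken"] = token  # assumed continuation-token field
        response = client.list_indexes(**kwargs)
        if any(
            idx.get("indexName") == collection_name
            for idx in response.get("indexes", [])
        ):
            return True
        token = response.get("nextToken")
        if not token:
            return False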
@@ -1,340 +0,0 @@
import weaviate
import re
import uuid
from typing import Any, Dict, List, Optional, Union

from open_webui.retrieval.vector.main import (
    VectorDBBase,
    VectorItem,
    SearchResult,
    GetResult,
)
from open_webui.retrieval.vector.utils import process_metadata
from open_webui.config import (
    WEAVIATE_HTTP_HOST,
    WEAVIATE_HTTP_PORT,
    WEAVIATE_GRPC_PORT,
    WEAVIATE_API_KEY,
)


def _convert_uuids_to_strings(obj: Any) -> Any:
    """
    Recursively convert UUID objects to strings in nested data structures.

    This function handles:
    - UUID objects -> string
    - Dictionaries with UUID values
    - Lists/Tuples with UUID values
    - Nested combinations of the above

    Args:
        obj: Any object that might contain UUIDs

    Returns:
        The same object structure with UUIDs converted to strings
    """
    if isinstance(obj, uuid.UUID):
        return str(obj)
    elif isinstance(obj, dict):
        return {key: _convert_uuids_to_strings(value) for key, value in obj.items()}
    elif isinstance(obj, (list, tuple)):
        return type(obj)(_convert_uuids_to_strings(item) for item in obj)
    elif isinstance(obj, (str, int, float, bool, type(None))):
        return obj
    else:
        return obj


class WeaviateClient(VectorDBBase):
    def __init__(self):
        self.url = WEAVIATE_HTTP_HOST
        try:
            # Build connection parameters
            connection_params = {
                "host": WEAVIATE_HTTP_HOST,
                "port": WEAVIATE_HTTP_PORT,
                "grpc_port": WEAVIATE_GRPC_PORT,
            }

            # Only add auth_credentials if WEAVIATE_API_KEY exists and is not empty
            if WEAVIATE_API_KEY:
                connection_params["auth_credentials"] = (
                    weaviate.classes.init.Auth.api_key(WEAVIATE_API_KEY)
                )

            self.client = weaviate.connect_to_local(**connection_params)
            self.client.connect()
        except Exception as e:
            raise ConnectionError(f"Failed to connect to Weaviate: {e}") from e

    def _sanitize_collection_name(self, collection_name: str) -> str:
        """Sanitize collection name to be a valid Weaviate class name."""
        if not isinstance(collection_name, str) or not collection_name.strip():
            raise ValueError("Collection name must be a non-empty string")

        # Requirements for a valid Weaviate class name:
        # The collection name must begin with a capital letter.
        # The name can only contain letters, numbers, and the underscore (_) character. Spaces are not allowed.

        # Replace hyphens with underscores and keep only alphanumeric characters
        name = re.sub(r"[^a-zA-Z0-9_]", "", collection_name.replace("-", "_"))
        name = name.strip("_")

        if not name:
            raise ValueError(
                "Could not sanitize collection name to be a valid Weaviate class name"
            )

        # Ensure it starts with a letter and is capitalized
        if not name[0].isalpha():
            name = "C" + name

        return name[0].upper() + name[1:]

    def has_collection(self, collection_name: str) -> bool:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        return self.client.collections.exists(sane_collection_name)

    def delete_collection(self, collection_name: str) -> None:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if self.client.collections.exists(sane_collection_name):
            self.client.collections.delete(sane_collection_name)

    def _create_collection(self, collection_name: str) -> None:
        self.client.collections.create(
            name=collection_name,
            vector_config=weaviate.classes.config.Configure.Vectors.self_provided(),
            properties=[
                weaviate.classes.config.Property(
                    name="text", data_type=weaviate.classes.config.DataType.TEXT
                ),
            ],
        )

    def insert(self, collection_name: str, items: List[VectorItem]) -> None:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            self._create_collection(sane_collection_name)

        collection = self.client.collections.get(sane_collection_name)

        with collection.batch.fixed_size(batch_size=100) as batch:
            for item in items:
                item_uuid = str(uuid.uuid4()) if not item["id"] else str(item["id"])

                properties = {"text": item["text"]}
                if item["metadata"]:
                    clean_metadata = _convert_uuids_to_strings(
                        process_metadata(item["metadata"])
                    )
                    clean_metadata.pop("text", None)
                    properties.update(clean_metadata)

                batch.add_object(
                    properties=properties, uuid=item_uuid, vector=item["vector"]
                )

    def upsert(self, collection_name: str, items: List[VectorItem]) -> None:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            self._create_collection(sane_collection_name)

        collection = self.client.collections.get(sane_collection_name)

        with collection.batch.fixed_size(batch_size=100) as batch:
            for item in items:
                item_uuid = str(item["id"]) if item["id"] else None

                properties = {"text": item["text"]}
                if item["metadata"]:
                    clean_metadata = _convert_uuids_to_strings(
                        process_metadata(item["metadata"])
                    )
                    clean_metadata.pop("text", None)
                    properties.update(clean_metadata)

                batch.add_object(
                    properties=properties, uuid=item_uuid, vector=item["vector"]
                )

    def search(
        self, collection_name: str, vectors: List[List[Union[float, int]]], limit: int
    ) -> Optional[SearchResult]:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            return None

        collection = self.client.collections.get(sane_collection_name)

        result_ids, result_documents, result_metadatas, result_distances = (
            [],
            [],
            [],
            [],
        )

        for vector_embedding in vectors:
            try:
                response = collection.query.near_vector(
                    near_vector=vector_embedding,
                    limit=limit,
                    return_metadata=weaviate.classes.query.MetadataQuery(distance=True),
                )

                ids = [str(obj.uuid) for obj in response.objects]
                documents = []
                metadatas = []
                distances = []

                for obj in response.objects:
                    properties = dict(obj.properties) if obj.properties else {}
                    documents.append(properties.pop("text", ""))
                    metadatas.append(_convert_uuids_to_strings(properties))

                # Weaviate has cosine distance, 2 (worst) -> 0 (best). Re-ordering to 0 -> 1
                raw_distances = [
                    (
                        obj.metadata.distance
                        if obj.metadata and obj.metadata.distance
                        else 2.0
                    )
                    for obj in response.objects
                ]
                distances = [(2 - dist) / 2 for dist in raw_distances]

                result_ids.append(ids)
                result_documents.append(documents)
                result_metadatas.append(metadatas)
                result_distances.append(distances)
            except Exception:
                result_ids.append([])
                result_documents.append([])
                result_metadatas.append([])
                result_distances.append([])

        return SearchResult(
            **{
                "ids": result_ids,
                "documents": result_documents,
                "metadatas": result_metadatas,
                "distances": result_distances,
            }
        )

    def query(
        self, collection_name: str, filter: Dict, limit: Optional[int] = None
    ) -> Optional[GetResult]:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            return None

        collection = self.client.collections.get(sane_collection_name)

        weaviate_filter = None
        if filter:
            for key, value in filter.items():
                prop_filter = weaviate.classes.query.Filter.by_property(name=key).equal(
                    value
                )
                weaviate_filter = (
                    prop_filter
                    if weaviate_filter is None
                    else weaviate.classes.query.Filter.all_of(
                        [weaviate_filter, prop_filter]
                    )
                )

        try:
            response = collection.query.fetch_objects(
                filters=weaviate_filter, limit=limit
            )

            ids = [str(obj.uuid) for obj in response.objects]
            documents = []
            metadatas = []

            for obj in response.objects:
                properties = dict(obj.properties) if obj.properties else {}
                documents.append(properties.pop("text", ""))
                metadatas.append(_convert_uuids_to_strings(properties))

            return GetResult(
                **{
                    "ids": [ids],
                    "documents": [documents],
                    "metadatas": [metadatas],
                }
            )
        except Exception:
            return None

    def get(self, collection_name: str) -> Optional[GetResult]:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            return None

        collection = self.client.collections.get(sane_collection_name)
        ids, documents, metadatas = [], [], []

        try:
            for item in collection.iterator():
                ids.append(str(item.uuid))
                properties = dict(item.properties) if item.properties else {}
                documents.append(properties.pop("text", ""))
                metadatas.append(_convert_uuids_to_strings(properties))

            if not ids:
                return None

            return GetResult(
                **{
                    "ids": [ids],
                    "documents": [documents],
                    "metadatas": [metadatas],
                }
            )
        except Exception:
            return None

    def delete(
        self,
        collection_name: str,
        ids: Optional[List[str]] = None,
        filter: Optional[Dict] = None,
    ) -> None:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            return

        collection = self.client.collections.get(sane_collection_name)

        try:
            if ids:
                for item_id in ids:
                    collection.data.delete_by_id(uuid=item_id)
            elif filter:
                weaviate_filter = None
                for key, value in filter.items():
                    prop_filter = weaviate.classes.query.Filter.by_property(
                        name=key
                    ).equal(value)
                    weaviate_filter = (
                        prop_filter
                        if weaviate_filter is None
                        else weaviate.classes.query.Filter.all_of(
                            [weaviate_filter, prop_filter]
                        )
                    )

                if weaviate_filter:
                    collection.data.delete_many(where=weaviate_filter)
        except Exception:
            pass

    def reset(self) -> None:
        try:
            for collection_name in self.client.collections.list_all().keys():
                self.client.collections.delete(collection_name)
        except Exception:
            pass
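
The distance comment inside WeaviateClient.search is terse: Weaviate reports cosine distance in [0, 2] (0 = identical, 2 = opposite), while the caller expects a similarity-style score where higher is better. The (2 - dist) / 2 transform maps the endpoints accordingly; a quick check:

# Weaviate cosine distance -> similarity in [0, 1]
for dist in (0.0, 1.0, 2.0):
    print(dist, "->", (2 - dist) / 2)
# 0.0 -> 1.0 (identical vectors)
# 1.0 -> 0.5 (orthogonal)
# 2.0 -> 0.0 (opposite)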
@@ -1,10 +1,6 @@
from open_webui.retrieval.vector.main import VectorDBBase
from open_webui.retrieval.vector.type import VectorType
from open_webui.config import (
    VECTOR_DB,
    ENABLE_QDRANT_MULTITENANCY_MODE,
    ENABLE_MILVUS_MULTITENANCY_MODE,
)
from open_webui.config import VECTOR_DB, ENABLE_QDRANT_MULTITENANCY_MODE


class Vector:

@@ -16,13 +12,6 @@ class Vector:
        """
        match vector_type:
            case VectorType.MILVUS:
                if ENABLE_MILVUS_MULTITENANCY_MODE:
                    from open_webui.retrieval.vector.dbs.milvus_multitenancy import (
                        MilvusClient,
                    )

                    return MilvusClient()
                else:
                    from open_webui.retrieval.vector.dbs.milvus import MilvusClient

                    return MilvusClient()

@@ -67,10 +56,6 @@ class Vector:
                from open_webui.retrieval.vector.dbs.oracle23ai import Oracle23aiClient

                return Oracle23aiClient()
            case VectorType.WEAVIATE:
                from open_webui.retrieval.vector.dbs.weaviate import WeaviateClient

                return WeaviateClient()
            case _:
                raise ValueError(f"Unsupported vector type: {vector_type}")
@@ -11,4 +11,3 @@ class VectorType(StrEnum):
    PGVECTOR = "pgvector"
    ORACLE23AI = "oracle23ai"
    S3VECTOR = "s3vector"
    WEAVIATE = "weaviate"
@@ -1,24 +1,10 @@
from datetime import datetime

KEYS_TO_EXCLUDE = ["content", "pages", "tables", "paragraphs", "sections", "figures"]


def filter_metadata(metadata: dict[str, any]) -> dict[str, any]:
    metadata = {
        key: value for key, value in metadata.items() if key not in KEYS_TO_EXCLUDE
    }
    return metadata


def process_metadata(
def stringify_metadata(
    metadata: dict[str, any],
) -> dict[str, any]:
    for key, value in metadata.items():
        # Remove large fields
        if key in KEYS_TO_EXCLUDE:
            del metadata[key]

        # Convert non-serializable fields to strings
        if (
            isinstance(value, datetime)
            or isinstance(value, list)
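
The hunk ends mid-condition, so the tail of stringify_metadata is not shown here. A self-contained sketch of the same idea follows; the dict branch and the list(...) snapshot (which avoids deleting keys from a dict while iterating over it) are assumptions made for the sketch, not necessarily the repository's exact code:

from datetime import datetime

KEYS_TO_EXCLUDE = ["content", "pages", "tables", "paragraphs", "sections", "figures"]


def stringify_metadata_sketch(metadata: dict) -> dict:
    # Iterate over a snapshot so deletion during the loop is safe.
    for key, value in list(metadata.items()):
        if key in KEYS_TO_EXCLUDE:
            del metadata[key]  # drop large extraction artifacts
        elif isinstance(value, (datetime, list, dict)):
            metadata[key] = str(value)  # keep values vector-store friendly
    return metadata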
@@ -1,126 +0,0 @@
import logging
from typing import Optional
from open_webui.retrieval.web.main import SearchResult, get_filtered_results

log = logging.getLogger(__name__)

"""
Azure AI Search integration for Open WebUI.
Documentation: https://learn.microsoft.com/en-us/python/api/overview/azure/search-documents-readme?view=azure-python

Required package: azure-search-documents
Install: pip install azure-search-documents
"""


def search_azure(
    api_key: str,
    endpoint: str,
    index_name: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
    """
    Search using Azure AI Search.

    Args:
        api_key: Azure Search API key (query key or admin key)
        endpoint: Azure Search service endpoint (e.g., https://myservice.search.windows.net)
        index_name: Name of the search index to query
        query: Search query string
        count: Number of results to return
        filter_list: Optional list of domains to filter results

    Returns:
        List of SearchResult objects with link, title, and snippet
    """
    try:
        from azure.core.credentials import AzureKeyCredential
        from azure.search.documents import SearchClient
    except ImportError:
        log.error(
            "azure-search-documents package is not installed. "
            "Install it with: pip install azure-search-documents"
        )
        raise ImportError(
            "azure-search-documents is required for Azure AI Search. "
            "Install it with: pip install azure-search-documents"
        )

    try:
        # Create search client with API key authentication
        credential = AzureKeyCredential(api_key)
        search_client = SearchClient(
            endpoint=endpoint, index_name=index_name, credential=credential
        )

        # Perform the search
        results = search_client.search(search_text=query, top=count)

        # Convert results to list and extract fields
        search_results = []
        for result in results:
            # Azure AI Search returns documents with custom schemas
            # We need to extract common fields that might represent URL, title, and content
            # Common field names to look for:
            result_dict = dict(result)

            # Try to find URL field (common names)
            link = (
                result_dict.get("url")
                or result_dict.get("link")
                or result_dict.get("uri")
                or result_dict.get("metadata_storage_path")
                or ""
            )

            # Try to find title field (common names)
            title = (
                result_dict.get("title")
                or result_dict.get("name")
                or result_dict.get("metadata_title")
                or result_dict.get("metadata_storage_name")
                or None
            )

            # Try to find content/snippet field (common names)
            snippet = (
                result_dict.get("content")
                or result_dict.get("snippet")
                or result_dict.get("description")
                or result_dict.get("summary")
                or result_dict.get("text")
                or None
            )

            # Truncate snippet if too long
            if snippet and len(snippet) > 500:
                snippet = snippet[:497] + "..."

            if link:  # Only add if we found a valid link
                search_results.append(
                    {
                        "link": link,
                        "title": title,
                        "snippet": snippet,
                    }
                )

        # Apply domain filtering if specified
        if filter_list:
            search_results = get_filtered_results(search_results, filter_list)

        # Convert to SearchResult objects
        return [
            SearchResult(
                link=result["link"],
                title=result.get("title"),
                snippet=result.get("snippet"),
            )
            for result in search_results
        ]

    except Exception as ex:
        log.error(f"Azure AI Search error: {ex}")
        raise ex
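
The field-extraction heuristic in the deleted Azure module leans on Python's or-chaining: the first truthy candidate wins, and the trailing "" or None supplies the default. For example, with a document that only carries metadata_storage_path (the URL here is invented for illustration):

doc = {"metadata_storage_path": "https://example.blob.core.windows.net/doc.pdf"}
link = (
    doc.get("url")
    or doc.get("link")
    or doc.get("uri")
    or doc.get("metadata_storage_path")
    or ""
)
print(link)  # https://example.blob.core.windows.net/doc.pdf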
@@ -4,9 +4,11 @@ from pprint import pprint
from typing import Optional
import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS
import argparse

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])
"""
Documentation: https://docs.microsoft.com/en-us/bing/search-apis/bing-web-search/overview
"""
@@ -4,18 +4,20 @@ from typing import Optional
import requests
import json
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def _parse_response(response):
    results = []
    result = {}
    if "data" in response:
        data = response["data"]
        if "webPages" in data:
            webPages = data["webPages"]
            if "value" in webPages:
                results = [
                result["webpage"] = [
                    {
                        "id": item.get("id", ""),
                        "name": item.get("name", ""),

@@ -29,7 +31,7 @@ def _parse_response(response):
                    }
                    for item in webPages["value"]
                ]
    return results
    return result


def search_bocha(

@@ -51,7 +53,7 @@ def search_bocha(
    response = requests.post(url, headers=headers, data=payload, timeout=5)
    response.raise_for_status()
    results = _parse_response(response.json())

    print(results)
    if filter_list:
        results = get_filtered_results(results, filter_list)

@@ -59,5 +61,5 @@ def search_bocha(
        SearchResult(
            link=result["url"], title=result.get("name"), snippet=result.get("summary")
        )
        for result in results[:count]
        for result in results.get("webpage", [])[:count]
    ]
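
The two sides of this diff disagree on the shape _parse_response returns: a flat list on main versus a {"webpage": [...]} wrapper on v0.6.24, which is why the final comprehension switches between results[:count] and results.get("webpage", [])[:count]. A toy illustration of consuming both shapes:

items = [{"url": "https://example.com", "name": "Example", "summary": "..."}]
parsed_flat = items  # main-side shape
parsed_wrapped = {"webpage": items}  # v0.6.24-side shape

print(parsed_flat[:10])
print(parsed_wrapped.get("webpage", [])[:10])  # same items, extra wrapper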
@@ -3,8 +3,10 @@ from typing import Optional

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_brave(
@@ -4,8 +4,10 @@ from typing import Optional
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from ddgs import DDGS
from ddgs.exceptions import RatelimitException
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_duckduckgo(
@@ -3,9 +3,11 @@ from dataclasses import dataclass
from typing import Optional

import requests
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.web.main import SearchResult

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])

EXA_API_BASE = "https://api.exa.ai"
@@ -2,40 +2,27 @@ import logging
from typing import Optional, List

import requests

from fastapi import Request


from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.utils.headers import include_user_info_headers

from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_external(
    request: Request,
    external_url: str,
    external_api_key: str,
    query: str,
    count: int,
    filter_list: Optional[List[str]] = None,
    user=None,
) -> List[SearchResult]:
    try:
        headers = {
            "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
            "Authorization": f"Bearer {external_api_key}",
        }
        headers = include_user_info_headers(headers, user)

        chat_id = getattr(request.state, "chat_id", None)
        if chat_id:
            headers["X-OpenWebUI-Chat-Id"] = str(chat_id)

        response = requests.post(
            external_url,
            headers=headers,
            headers={
                "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
                "Authorization": f"Bearer {external_api_key}",
            },
            json={
                "query": query,
                "count": count,
@@ -1,10 +1,13 @@
import logging
from typing import Optional, List
from urllib.parse import urljoin

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results

from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_firecrawl(

@@ -15,20 +18,27 @@ def search_firecrawl(
    filter_list: Optional[List[str]] = None,
) -> List[SearchResult]:
    try:
        from firecrawl import FirecrawlApp

        firecrawl = FirecrawlApp(api_key=firecrawl_api_key, api_url=firecrawl_url)
        response = firecrawl.search(
            query=query, limit=count, ignore_invalid_urls=True, timeout=count * 3
        firecrawl_search_url = urljoin(firecrawl_url, "/v1/search")
        response = requests.post(
            firecrawl_search_url,
            headers={
                "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
                "Authorization": f"Bearer {firecrawl_api_key}",
            },
            json={
                "query": query,
                "limit": count,
            },
        )
        results = response.web
        response.raise_for_status()
        results = response.json().get("data", [])
        if filter_list:
            results = get_filtered_results(results, filter_list)
        results = [
            SearchResult(
                link=result.url,
                title=result.title,
                snippet=result.description,
                link=result.get("url"),
                title=result.get("title"),
                snippet=result.get("description"),
            )
            for result in results[:count]
        ]
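
One subtlety on the v0.6.24 side: because the second argument starts with "/", urljoin(firecrawl_url, "/v1/search") replaces any path component on the base URL rather than appending to it. A quick check:

from urllib.parse import urljoin

print(urljoin("https://api.firecrawl.dev", "/v1/search"))
# https://api.firecrawl.dev/v1/search
print(urljoin("https://example.com/proxy/", "/v1/search"))
# https://example.com/v1/search  (the /proxy/ path is replaced)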
@@ -3,8 +3,10 @@ from typing import Optional

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_google_pse(

@@ -13,7 +15,6 @@ def search_google_pse(
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
    referer: Optional[str] = None,
) -> list[SearchResult]:
    """Search using Google's Programmable Search Engine API and return the results as a list of SearchResult objects.
    Handles pagination for counts greater than 10.

@@ -29,11 +30,7 @@ def search_google_pse(
        list[SearchResult]: A list of SearchResult objects.
    """
    url = "https://www.googleapis.com/customsearch/v1"

    headers = {"Content-Type": "application/json"}
    if referer:
        headers["Referer"] = referer

    all_results = []
    start_index = 1  # Google PSE start parameter is 1-based
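
The comment on start_index points at how the paging works: Google PSE returns at most 10 results per request and pages with a 1-based start parameter, so a request for 25 results needs starts 1, 11, and 21. The offset arithmetic in isolation:

def page_starts(count: int, page_size: int = 10) -> list[int]:
    # 1-based start offsets covering `count` results: 1, 11, 21, ...
    return [1 + i * page_size for i in range((count + page_size - 1) // page_size)]

print(page_starts(25))  # [1, 11, 21]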
@@ -2,9 +2,11 @@ import logging

import requests
from open_webui.retrieval.web.main import SearchResult
from open_webui.env import SRC_LOG_LEVELS
from yarl import URL

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_jina(api_key: str, query: str, count: int) -> list[SearchResult]:
@@ -3,8 +3,10 @@ from typing import Optional

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_kagi(
@@ -5,38 +5,18 @@ from urllib.parse import urlparse

from pydantic import BaseModel

from open_webui.retrieval.web.utils import resolve_hostname
from open_webui.utils.misc import is_string_allowed


def get_filtered_results(results, filter_list):
    if not filter_list:
        return results

    filtered_results = []

    for result in results:
        url = result.get("url") or result.get("link", "") or result.get("href", "")
        if not validators.url(url):
            continue

        domain = urlparse(url).netloc
        if not domain:
            continue

        hostnames = [domain]

        try:
            ipv4_addresses, ipv6_addresses = resolve_hostname(domain)
            hostnames.extend(ipv4_addresses)
            hostnames.extend(ipv6_addresses)
        except Exception:
            pass

        if is_string_allowed(hostnames, filter_list):
        if any(domain.endswith(filtered_domain) for filtered_domain in filter_list):
            filtered_results.append(result)
            continue

    return filtered_results
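
The two filter strategies above differ in strictness: the v0.6.24 endswith check matches a result purely on its domain suffix, while the main side also resolves the hostname and passes the resolved IPv4/IPv6 addresses to is_string_allowed. The suffix behaviour alone, without any DNS, looks like this:

from urllib.parse import urlparse

results = [
    {"link": "https://docs.example.com/a"},
    {"link": "https://example.org/b"},
]
filter_list = ["example.com"]

kept = [
    r for r in results
    if any(urlparse(r["link"]).netloc.endswith(d) for d in filter_list)
]
print(kept)  # keeps docs.example.com, drops example.org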
@@ -3,8 +3,10 @@ from typing import Optional

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_mojeek(
@@ -1,52 +0,0 @@
import logging
from dataclasses import dataclass
from typing import Optional

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results

log = logging.getLogger(__name__)


def search_ollama_cloud(
    url: str,
    api_key: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
    """Search using Ollama Search API and return the results as a list of SearchResult objects.

    Args:
        api_key (str): A Ollama Search API key
        query (str): The query to search for
        count (int): Number of results to return
        filter_list (Optional[list[str]]): List of domains to filter results by
    """
    log.info(f"Searching with Ollama for query: {query}")

    headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    payload = {"query": query, "max_results": count}

    try:
        response = requests.post(f"{url}/api/web_search", headers=headers, json=payload)
        response.raise_for_status()
        data = response.json()

        results = data.get("results", [])
        log.info(f"Found {len(results)} results")

        if filter_list:
            results = get_filtered_results(results, filter_list)

        return [
            SearchResult(
                link=result.get("url", ""),
                title=result.get("title", ""),
                snippet=result.get("content", ""),
            )
            for result in results
        ]
    except Exception as e:
        log.error(f"Error searching Ollama: {e}")
        return []
@@ -3,6 +3,7 @@ from typing import Optional, Literal
import requests

from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

MODELS = Literal[
    "sonar",

@@ -15,6 +16,7 @@ SEARCH_CONTEXT_USAGE_LEVELS = Literal["low", "medium", "high"]


log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_perplexity(
@@ -1,74 +0,0 @@
import logging
from typing import Optional, Literal
import requests

from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.utils.headers import include_user_info_headers


log = logging.getLogger(__name__)


def search_perplexity_search(
    api_key: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
    api_url: str = "https://api.perplexity.ai/search",
    user=None,
) -> list[SearchResult]:
    """Search using Perplexity API and return the results as a list of SearchResult objects.

    Args:
        api_key (str): A Perplexity API key
        query (str): The query to search for
        count (int): Maximum number of results to return
        filter_list (Optional[list[str]]): List of domains to filter results
        api_url (str): Custom API URL (defaults to https://api.perplexity.ai/search)
        user: Optional user object for forwarding user info headers

    """

    # Handle PersistentConfig object
    if hasattr(api_key, "__str__"):
        api_key = str(api_key)

    if hasattr(api_url, "__str__"):
        api_url = str(api_url)

    try:
        url = api_url

        # Create payload for the API call
        payload = {
            "query": query,
            "max_results": count,
        }

        headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }

        # Forward user info headers if user is provided
        if user is not None:
            headers = include_user_info_headers(headers, user)

        # Make the API request
        response = requests.request("POST", url, json=payload, headers=headers)
        # Parse the JSON response
        json_response = response.json()

        # Extract citations from the response
        results = json_response.get("results", [])

        return [
            SearchResult(
                link=result["url"], title=result["title"], snippet=result["snippet"]
            )
            for result in results
        ]

    except Exception as e:
        log.error(f"Error searching with Perplexity Search API: {e}")
        return []
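
A note on the PersistentConfig handling in the deleted Perplexity search module: every Python object defines __str__, so hasattr(api_key, "__str__") is always True and the guard amounts to an unconditional str() coercion. Quick check:

print(hasattr(object(), "__str__"))  # True
print(hasattr(42, "__str__"))        # True
print(hasattr(None, "__str__"))      # True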
@@ -4,8 +4,10 @@ from urllib.parse import urlencode

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_searchapi(
@@ -3,8 +3,10 @@ from typing import Optional

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_searxng(

@@ -25,7 +27,7 @@ def search_searxng(
        count (int): The maximum number of results to retrieve from the search.

    Keyword Args:
        language (str): Language filter for the search results; e.g., "all", "en-US", "es". Defaults to "all".
        language (str): Language filter for the search results; e.g., "en-US". Defaults to an empty string.
        safesearch (int): Safe search filter for safer web results; 0 = off, 1 = moderate, 2 = strict. Defaults to 1 (moderate).
        time_range (str): Time range for filtering results by date; e.g., "2023-04-05..today" or "all-time". Defaults to ''.
        categories: (Optional[list[str]]): Specific categories within which the search should be performed, defaulting to an empty string if not provided.

@@ -38,7 +40,7 @@ def search_searxng(
    """

    # Default values for optional parameters are provided as empty strings or None when not specified.
    language = kwargs.get("language", "all")
    language = kwargs.get("language", "en-US")
    safesearch = kwargs.get("safesearch", "1")
    time_range = kwargs.get("time_range", "")
    categories = "".join(kwargs.get("categories", []))
@@ -4,8 +4,10 @@ from urllib.parse import urlencode

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_serpapi(
@@ -4,8 +4,10 @@ from typing import Optional

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_serper(
@@ -4,8 +4,10 @@ from urllib.parse import urlencode

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_serply(
Some files were not shown because too many files have changed in this diff.