updated to version 0.6.36 hopefully for the last time that I need to do this
.github/ISSUE_TEMPLATE/bug_report.yaml

@@ -11,7 +9,9 @@ body:
       ## Important Notes

-      - **Before submitting a bug report**: Please check the [Issues](https://github.com/open-webui/open-webui/issues) or [Discussions](https://github.com/open-webui/open-webui/discussions) sections to see if a similar issue has already been reported. If unsure, start a discussion first, as this helps us efficiently focus on improving the project.
+      - **Before submitting a bug report**: Please check the [Issues](https://github.com/open-webui/open-webui/issues) and [Discussions](https://github.com/open-webui/open-webui/discussions) sections to see if a similar issue has already been reported. If unsure, start a discussion first, as this helps us efficiently focus on improving the project. Duplicates may be closed without notice. **Please search existing issues AND discussions, no matter whether open or closed.**
+
+      - Check for open, **but also for (recently) CLOSED issues**, as the issue you are trying to report **might already have been fixed on the dev branch!**

       - **Respectful collaboration**: Open WebUI is a volunteer-driven project with a single maintainer and contributors who also have full-time jobs. Please be constructive and respectful in your communication.

@@ -19,13 +21,19 @@ body:
       - **Bug Reproducibility**: If a bug cannot be reproduced using a `:main` or `:dev` Docker setup or with `pip install` on Python 3.11, community assistance may be required. In such cases, we will move it to the "[Issues](https://github.com/open-webui/open-webui/discussions/categories/issues)" Discussions section. Your help is appreciated!

       - **Scope**: If you want to report a SECURITY VULNERABILITY, then do so through our [GitHub security page](https://github.com/open-webui/open-webui/security).

   - type: checkboxes
     id: issue-check
     attributes:
       label: Check Existing Issues
       description: Confirm that you’ve checked for existing reports before submitting a new one.
       options:
-        - label: I have searched the existing issues and discussions.
+        - label: I have searched for any existing and/or related issues.
           required: true
+        - label: I have searched for any existing and/or related discussions.
+          required: true
+        - label: I have also searched the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
+          required: true
         - label: I am using the latest version of Open WebUI.
           required: true

@@ -47,7 +55,7 @@ body:
     id: open-webui-version
     attributes:
       label: Open WebUI Version
-      description: Specify the version (e.g., v0.3.11)
+      description: Specify the version (e.g., v0.6.26)
     validations:
       required: true

@@ -63,7 +71,7 @@ body:
     id: operating-system
     attributes:
       label: Operating System
-      description: Specify the OS (e.g., Windows 10, macOS Sonoma, Ubuntu 22.04)
+      description: Specify the OS (e.g., Windows 10, macOS Sonoma, Ubuntu 22.04, Debian 12)
     validations:
       required: true

@@ -126,6 +134,7 @@ body:
       description: |
         Please provide a **very detailed, step-by-step guide** to reproduce the issue. Your instructions should be so clear and precise that anyone can follow them without guesswork. Include every relevant detail—settings, configuration options, exact commands used, values entered, and any prerequisites or environment variables.
         **If full reproduction steps and all relevant settings are not provided, your issue may not be addressed.**
+        **If your steps to reproduce are incomplete, lacking detail, or not reproducible, your issue cannot be addressed.**
       placeholder: |
         Example (include every detail):

@@ -163,5 +172,5 @@ body:
     attributes:
       value: |
         ## Note
-        If the bug report is incomplete or does not follow instructions, it may not be addressed. Ensure that you've followed all the **README.md** and **troubleshooting.md** guidelines, and provide all necessary information for us to reproduce the issue.
+        **If the bug report is incomplete, does not follow instructions, or is lacking details, it may not be addressed.** Ensure that you've followed all the **README.md** and **troubleshooting.md** guidelines, and provide all necessary information for us to reproduce the issue.
         Thank you for contributing to Open WebUI!
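For context, the `:dev` Docker setup the template asks reporters to reproduce against might look like the following minimal compose file. This is a sketch only, not part of the diff; the image tag, port mapping, and volume mirror the project's commonly documented defaults and should be verified against the README.

```yaml
# Minimal sketch of a bug-reproduction environment, assuming the defaults
# documented in the project README (verify image tag, port, and volume).
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:dev   # or :main, per the template
    ports:
      - "3000:8080"                            # UI served at http://localhost:3000
    volumes:
      - open-webui:/app/backend/data           # persistent app data
    restart: unless-stopped
volumes:
  open-webui:
```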
.github/ISSUE_TEMPLATE/feature_request.yaml

@@ -8,10 +8,21 @@ body:
       value: |
-        ## Important Notes
-        Please check the [Issues](https://github.com/open-webui/open-webui/issues) or [Discussions](https://github.com/open-webui/open-webui/discussions) to see if a similar request has been posted.
+        ### Before submitting
+
+        Please check the **open AND closed** [Issues](https://github.com/open-webui/open-webui/issues) AND [Discussions](https://github.com/open-webui/open-webui/discussions) to see if a similar request has been posted.
         It's likely we're already tracking it! If you’re unsure, start a discussion post first.
         This will help us efficiently focus on improving the project.

+        #### Scope
+
+        If your feature request is likely to take more than a quick coding session to implement, test, and verify, then open it in the **Ideas** section of the [Discussions](https://github.com/open-webui/open-webui/discussions) instead.
+        **We will close and force-move your feature request to the Ideas section if we believe it is not trivial/quick to implement.**
+        This is to ensure the issues tab is used only for issues, quickly addressable feature requests, and tracking tickets by the maintainers.
+        Other feature requests belong in the **Ideas** section of the [Discussions](https://github.com/open-webui/open-webui/discussions).
+
+        If your feature request might impact others in the community, definitely open a discussion instead and evaluate whether and how to implement it.
+
+        This will help us efficiently focus on improving the project.

         ### Collaborate respectfully
         We value a **constructive attitude**, so please be mindful of your communication. If negativity is part of your approach, our capacity to engage may be limited. We're here to help if you're **open to learning** and **communicating positively**.

@@ -22,7 +33,6 @@ body:
         We appreciate your time and ask that you **respect ours**.

         ### Contributing
         If you encounter an issue, we highly encourage you to submit a pull request or fork the project. We actively work to prevent contributor burnout to maintain the quality and continuity of Open WebUI.

@@ -35,14 +45,22 @@ body:
       label: Check Existing Issues
       description: Please confirm that you've checked for existing similar requests
       options:
-        - label: I have searched the existing issues and discussions.
+        - label: I have searched all existing **open AND closed** issues and discussions for similar requests, and found none comparable to my request.
           required: true
+  - type: checkboxes
+    id: feature-scope
+    attributes:
+      label: Verify Feature Scope
+      description: Please confirm the feature's scope falls within the definition described above
+      options:
+        - label: I have read through and understood the scope definition for feature requests in the Issues section. I believe my feature request meets the definition and belongs in the Issues section instead of the Discussions.
+          required: true
   - type: textarea
     id: problem-description
     attributes:
       label: Problem Description
       description: Is your feature request related to a problem? Please provide a clear and concise description of what the problem is.
-      placeholder: "Ex. I'm always frustrated when..."
+      placeholder: "Ex. I'm always frustrated when... / Not related to a problem"
     validations:
       required: true
   - type: textarea
.github/dependabot.yml

@@ -12,12 +12,6 @@ updates:
       interval: monthly
     target-branch: 'dev'

-  - package-ecosystem: npm
-    directory: '/'
-    schedule:
-      interval: monthly
-    target-branch: 'dev'
-
   - package-ecosystem: 'github-actions'
     directory: '/'
     schedule:
.github/pull_request_template.md

@@ -1,17 +1,20 @@
 # Pull Request Checklist

-### Note to first-time contributors: Please open a discussion post in [Discussions](https://github.com/open-webui/open-webui/discussions) and describe your changes before submitting a pull request.
+### Note to first-time contributors: Please open a discussion post in [Discussions](https://github.com/open-webui/open-webui/discussions) to discuss your idea/fix with the community and describe your changes before submitting a pull request.
+
+This is to ensure large feature PRs are discussed with the community before work on them begins. If the community does not want a feature, or it is not relevant to Open WebUI as a project, this can be identified in the discussion before time is spent implementing it and submitting a PR.

 **Before submitting, make sure you've checked the following:**

-- [ ] **Target branch:** Please verify that the pull request targets the `dev` branch.
-- [ ] **Description:** Provide a concise description of the changes made in this pull request.
+- [ ] **Target branch:** Verify that the pull request targets the `dev` branch. **Not targeting the `dev` branch will lead to immediate closure of the PR.**
+- [ ] **Description:** Provide a concise description of the changes made in this pull request below.
 - [ ] **Changelog:** Ensure a changelog entry following the format of [Keep a Changelog](https://keepachangelog.com/) is added at the bottom of the PR description.
-- [ ] **Documentation:** Have you updated relevant documentation [Open WebUI Docs](https://github.com/open-webui/docs), or other documentation sources?
+- [ ] **Documentation:** If necessary, update the relevant documentation in the [Open WebUI Docs](https://github.com/open-webui/docs) (e.g., environment variables, tutorials) or other documentation sources.
 - [ ] **Dependencies:** Are there any new dependencies? Have you updated the dependency versions in the documentation?
-- [ ] **Testing:** Have you written and run sufficient tests to validate the changes?
+- [ ] **Testing:** Perform manual tests to **verify the implemented fix/feature works as intended AND does not break any other functionality**. Take this as an opportunity to **make screenshots of the feature/fix and include them in the PR description**.
+- [ ] **Agentic AI Code:** Confirm this pull request is **not written by any AI agent** or has at least **gone through additional human review AND manual testing**. If any AI agent is a co-author of this PR, it may lead to immediate closure of the PR.
 - [ ] **Code review:** Have you performed a self-review of your code, addressing any coding standard issues and ensuring adherence to the project's coding standards?
-- [ ] **Prefix:** To clearly categorize this pull request, prefix the pull request title using one of the following:
+- [ ] **Title Prefix:** To clearly categorize this pull request, prefix the pull request title using one of the following:
   - **BREAKING CHANGE**: Significant changes that may affect compatibility
   - **build**: Changes that affect the build system or external dependencies
   - **ci**: Changes to our continuous integration processes or workflows

@@ -73,4 +76,7 @@
 ### Contributor License Agreement

-By submitting this pull request, I confirm that I have read and fully agree to the [Contributor License Agreement (CLA)](/CONTRIBUTOR_LICENSE_AGREEMENT), and I am providing my contributions under its terms.
+By submitting this pull request, I confirm that I have read and fully agree to the [Contributor License Agreement (CLA)](https://github.com/open-webui/open-webui/blob/main/CONTRIBUTOR_LICENSE_AGREEMENT), and I am providing my contributions under its terms.
+
+> [!NOTE]
+> Deleting the CLA section will lead to immediate closure of your PR and it will not be merged in.
.github/workflows/build-release.yml

@@ -11,7 +11,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

       - name: Check for changes in package.json
        run: |

@@ -36,7 +36,7 @@ jobs:
          echo "::set-output name=content::$CHANGELOG_ESCAPED"

      - name: Create GitHub release
-        uses: actions/github-script@v7
+        uses: actions/github-script@v8
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |

@@ -61,7 +61,7 @@ jobs:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Trigger Docker build workflow
-        uses: actions/github-script@v7
+        uses: actions/github-script@v8
        with:
          script: |
            github.rest.actions.createWorkflowDispatch({
.github/workflows/deploy-to-hf-spaces.yml

@@ -27,7 +27,7 @@ jobs:
       HF_TOKEN: ${{ secrets.HF_TOKEN }}
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
        with:
          lfs: true
.github/workflows/docker-build.yaml

@@ -43,7 +43,7 @@ jobs:
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

      - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

@@ -142,7 +142,7 @@ jobs:
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

      - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

@@ -244,7 +244,7 @@ jobs:
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

      - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

@@ -347,7 +347,7 @@ jobs:
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

      - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

@@ -449,7 +449,7 @@ jobs:
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

      - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

@@ -535,7 +535,7 @@ jobs:
      IMAGE_NAME: '${{ github.repository }}'

      - name: Download digests
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v5
        with:
          pattern: digests-main-*
          path: /tmp/digests

@@ -589,7 +589,7 @@ jobs:
      IMAGE_NAME: '${{ github.repository }}'

      - name: Download digests
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v5
        with:
          pattern: digests-cuda-*
          path: /tmp/digests

@@ -645,7 +645,7 @@ jobs:
      IMAGE_NAME: '${{ github.repository }}'

      - name: Download digests
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v5
        with:
          pattern: digests-cuda126-*
          path: /tmp/digests

@@ -701,7 +701,7 @@ jobs:
      IMAGE_NAME: '${{ github.repository }}'

      - name: Download digests
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v5
        with:
          pattern: digests-ollama-*
          path: /tmp/digests

@@ -757,7 +757,7 @@ jobs:
      IMAGE_NAME: '${{ github.repository }}'

      - name: Download digests
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v5
        with:
          pattern: digests-slim-*
          path: /tmp/digests
.github/workflows/format-backend.yaml

@@ -30,10 +30,10 @@ jobs:
         - 3.12.x

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v5

      - name: Set up Python
-        uses: actions/setup-python@v5
+        uses: actions/setup-python@v6
        with:
          python-version: '${{ matrix.python-version }}'
.github/workflows/format-build-frontend.yaml

@@ -24,10 +24,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout Repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

      - name: Setup Node.js
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@v5
        with:
          node-version: '22'

@@ -51,10 +51,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout Repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

      - name: Setup Node.js
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@v5
        with:
          node-version: '22'
.github/workflows/release-pypi.yml

@@ -16,15 +16,15 @@ jobs:
       id-token: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
        with:
          fetch-depth: 0
      - name: Install Git
        run: sudo apt-get update && sudo apt-get install -y git
-      - uses: actions/setup-node@v4
+      - uses: actions/setup-node@v5
        with:
          node-version: 22
-      - uses: actions/setup-python@v5
+      - uses: actions/setup-python@v6
        with:
          python-version: 3.11
      - name: Build
CHANGELOG.md

@@ -5,13 +5,441 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.6.36] - 2025-11-07

### Added

- 🔐 OAuth group parsing now supports configurable separators via the "OAUTH_GROUPS_SEPARATOR" environment variable, enabling proper handling of semicolon-separated group claims from providers like CILogon. [#18987](https://github.com/open-webui/open-webui/pull/18987), [#18979](https://github.com/open-webui/open-webui/issues/18979)
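As a rough illustration of the new variable (the compose layout and every value here are assumptions, not taken from the diff), a deployment consuming semicolon-separated group claims might set:

```yaml
# Sketch only: OAUTH_GROUPS_SEPARATOR comes from the entry above;
# the companion flag and values are illustrative, not documented defaults.
services:
  open-webui:
    environment:
      ENABLE_OAUTH_GROUP_MANAGEMENT: "true"  # assumed companion flag for group syncing
      OAUTH_GROUPS_SEPARATOR: ";"            # splits "staff;faculty;admin" into three groups
```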
### Fixed

- 🛠️ Tool calling functionality is restored by correcting asynchronous function handling in tool parameter updates. [#18981](https://github.com/open-webui/open-webui/issues/18981)
- 🖼️ The ComfyUI image edit workflow editor modal now opens correctly when clicking the Edit button. [#18978](https://github.com/open-webui/open-webui/issues/18978)
- 🔥 Firecrawl import errors are resolved by implementing lazy loading and using the correct class name. [#18973](https://github.com/open-webui/open-webui/issues/18973)
- 🔌 Socket.IO CORS warning is resolved by properly configuring CORS origins for Socket.IO connections. [Commit](https://github.com/open-webui/open-webui/commit/639d26252e528c9c37a5f553b11eb94376d8792d)
## [0.6.35] - 2025-11-06

### Added

- 🖼️ The image generation system received a comprehensive overhaul: full image editing support that lets users modify existing images with text prompts using the OpenAI, Gemini, or ComfyUI engines; Gemini 2.5 Flash Image (Nano Banana) support; Qwen Image Edit integration; resolution of base64-encoded image display issues; streamlined AUTOMATIC1111 configuration with parameters consolidated into a flexible JSON parameters field; and an enhanced UI with a code editor modal for ComfyUI workflow management. [#17434](https://github.com/open-webui/open-webui/pull/17434), [#16976](https://github.com/open-webui/open-webui/issues/16976), [Commit](https://github.com/open-webui/open-webui/commit/8e5690aab4f632a57027e2acf880b8f89a8717c0), [Commit](https://github.com/open-webui/open-webui/commit/72f8539fd2e679fec0762945f22f4b8a6920afa0), [Commit](https://github.com/open-webui/open-webui/commit/8d34fcb586eeee1fac6da2f991518b8a68b00b72), [Commit](https://github.com/open-webui/open-webui/commit/72900cd686de1fa6be84b5a8a2fc857cff7b91b8)
- 🔒 CORS origin validation was added to WebSocket connections as a defense-in-depth security measure against cross-site WebSocket hijacking attacks. [#18411](https://github.com/open-webui/open-webui/pull/18411), [#18410](https://github.com/open-webui/open-webui/issues/18410)
- 🔄 Automatic page refresh now occurs when a version update is detected via the WebSocket connection, ensuring users always run the latest version without cache issues. [Commit](https://github.com/open-webui/open-webui/commit/989f192c92d2fe55daa31336e7971e21798b96ae)
- 🐍 Initial experimental preparations for Python 3.13 compatibility were made by updating dependencies with security enhancements and cryptographic improvements. [#18430](https://github.com/open-webui/open-webui/pull/18430), [#18424](https://github.com/open-webui/open-webui/pull/18424)
- ⚡ Image compression now preserves the original image format instead of converting to PNG, significantly reducing file sizes and improving chat loading performance. [#18506](https://github.com/open-webui/open-webui/pull/18506)
- 🎤 Mistral Voxtral model support was added for speech-to-text, including the voxtral-small and voxtral-mini models with both transcription and chat completion API support. [#18934](https://github.com/open-webui/open-webui/pull/18934)
- 🔊 Text-to-speech now uses a global audio queue system to prevent overlapping playback, ensuring only one TTS instance plays at a time with proper stop/start controls and automatic cleanup when switching between messages. [#16152](https://github.com/open-webui/open-webui/pull/16152), [#18744](https://github.com/open-webui/open-webui/pull/18744), [#16150](https://github.com/open-webui/open-webui/issues/16150)
- 🔊 The ELEVENLABS_API_BASE_URL environment variable now allows configuration of custom ElevenLabs API endpoints, enabling support for EU residency API requirements (see the configuration sketch after this list). [#18402](https://github.com/open-webui/open-webui/issues/18402)
- 🔐 The OAUTH_ROLES_SEPARATOR environment variable now allows custom role separators for OAuth roles that contain commas, useful for roles specified in LDAP syntax. [#18572](https://github.com/open-webui/open-webui/pull/18572)
- 📄 External document loaders can now optionally forward user information headers when ENABLE_FORWARD_USER_INFO_HEADERS is enabled, enabling cost tracking, audit logs, and usage analytics for external services. [#18731](https://github.com/open-webui/open-webui/pull/18731)
- 📄 The MISTRAL_OCR_API_BASE_URL environment variable now allows configuration of custom Mistral OCR API endpoints for flexible deployment options. [Commit](https://github.com/open-webui/open-webui/commit/415b93c7c35c2e2db4425e6da1b88b3750f496b0)
- ⌨️ Keyboard shortcut hints are now displayed on sidebar buttons, with a refactored shortcuts modal that accurately reflects all available hotkeys across different keyboard layouts. [#18473](https://github.com/open-webui/open-webui/pull/18473)
- 🛠️ Tooltips now display tool descriptions when hovering over tool names on the model edit page, improving usability and providing immediate context. [#18707](https://github.com/open-webui/open-webui/pull/18707)
- 📝 "Create a new note" from the search modal now immediately creates a new private note and opens it in the editor instead of navigating to the generic notes page. [#18255](https://github.com/open-webui/open-webui/pull/18255)
- 🖨️ Code block output now preserves whitespace formatting with a monospace font to accurately reflect terminal behavior. [#18352](https://github.com/open-webui/open-webui/pull/18352)
- ✏️ An Edit button is now available in the three-dot menu of models in the workspace section for quick access to model editing; the menu was also reorganized for a better user experience, with the Edit, Clone, Copy Link, and Share options logically grouped. [#18574](https://github.com/open-webui/open-webui/pull/18574)
- 📌 The sidebar models section is now collapsible, allowing users to expand and collapse the pinned models list for better sidebar organization. [Commit](https://github.com/open-webui/open-webui/commit/82c08a3b5d189f81c96b6548cc872198771015b0)
- 🌙 Dark mode styles for select elements were added using Tailwind CSS classes, improving consistency across the interface. [#18636](https://github.com/open-webui/open-webui/pull/18636)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Greek, German, Traditional Chinese, Simplified Chinese, Spanish, Georgian, Danish, and Estonian were enhanced and expanded.
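A hedged sketch of the endpoint and separator variables introduced above. The variable names come from the 0.6.35 entries; every value is a placeholder, not a documented default:

```yaml
# Sketch only: values are illustrative placeholders.
services:
  open-webui:
    environment:
      ELEVENLABS_API_BASE_URL: "https://api.eu.elevenlabs.example"   # hypothetical EU-residency endpoint
      OAUTH_ROLES_SEPARATOR: ";"         # keeps LDAP-style roles like "cn=admins,ou=groups" intact
      MISTRAL_OCR_API_BASE_URL: "https://ocr.internal.example/v1"    # hypothetical self-hosted gateway
      ENABLE_FORWARD_USER_INFO_HEADERS: "true"  # lets external document loaders receive user headers
```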
### Fixed

- 🔒 Server-Sent Event (SSE) code injection vulnerability in Direct Connections is resolved by blocking event emission from untrusted external model servers; event emitters from direct connected model servers are no longer supported, preventing arbitrary JavaScript execution in user browsers. [Commit](https://github.com/open-webui/open-webui/commit/8af6a4cf21b756a66cd58378a01c60f74c39b7ca)
- 🛡️ DOM XSS vulnerability in "Insert Prompt as Rich Text" is resolved by sanitizing HTML content with DOMPurify before rendering. [Commit](https://github.com/open-webui/open-webui/commit/eb9c4c0e358c274aea35f21c2856c0a20051e5f1)
- ⚙️ MCP server cancellation scope corruption is prevented by reversing disconnection order to follow LIFO and properly handling exceptions, resolving 100% CPU usage when resuming chats with expired tokens or using multiple streamable MCP servers. [#18537](https://github.com/open-webui/open-webui/pull/18537)
- 🔧 UI freeze when querying models with knowledge bases containing inconsistent distance metrics is resolved by properly initializing the distances array in citations. [#18585](https://github.com/open-webui/open-webui/pull/18585)
- 🤖 Duplicate model IDs from multiple OpenAI endpoints are now automatically deduplicated server-side, preventing frontend crashes for users with unified gateway proxies that aggregate multiple providers. [Commit](https://github.com/open-webui/open-webui/commit/fdf7ca11d4f3cc8fe63e81c98dc0d1e48e52ba36)
- 🔐 Login failures with passwords longer than 72 bytes are resolved by safely truncating oversized passwords for bcrypt compatibility. [#18157](https://github.com/open-webui/open-webui/issues/18157)
- 🔐 OAuth 2.1 MCP tool connections now automatically re-register clients when stored client IDs become stale, preventing unauthorized_client errors after editing tool endpoints and providing detailed error messages for callback failures. [#18415](https://github.com/open-webui/open-webui/pull/18415), [#18309](https://github.com/open-webui/open-webui/issues/18309)
- 🔓 OAuth 2.1 discovery, metadata fetching, and dynamic client registration now correctly use HTTP proxy environment variables when trust_env is enabled. [Commit](https://github.com/open-webui/open-webui/commit/bafeb76c411483bd6b135f0edbcdce048120f264)
- 🔌 MCP server connection failures now display clear error messages in the chat interface instead of silently failing. [#18892](https://github.com/open-webui/open-webui/pull/18892), [#18889](https://github.com/open-webui/open-webui/issues/18889)
- 💬 Chat titles are now properly generated even when title auto-generation is disabled in interface settings, fixing an issue where chats would remain labeled as "New chat". [#18761](https://github.com/open-webui/open-webui/pull/18761), [#18717](https://github.com/open-webui/open-webui/issues/18717), [#6478](https://github.com/open-webui/open-webui/issues/6478)
- 🔍 Chat query errors are prevented by properly validating and handling the "order_by" parameter to ensure requested columns exist. [#18400](https://github.com/open-webui/open-webui/pull/18400), [#18452](https://github.com/open-webui/open-webui/pull/18452)
- 🔧 Root-level max_tokens parameter is no longer dropped when proxying to Ollama, properly converting to num_predict to limit output token length as intended. [#18618](https://github.com/open-webui/open-webui/issues/18618)
- 🔑 Self-hosted Marker instances can now be used without requiring an API key, while keeping it optional for datalab Marker service users. [#18617](https://github.com/open-webui/open-webui/issues/18617)
- 🔧 OpenAPI specification endpoint conflict between "/api/v1/models" and "/api/v1/models/" is resolved by changing the models router endpoint to "/list", preventing duplicate operationId errors when generating TypeScript API clients. [#18758](https://github.com/open-webui/open-webui/issues/18758)
- 🏷️ Model tags are now de-duplicated case-insensitively in both the model selector and workspace models page, preventing duplicate entries with different capitalization from appearing in filter dropdowns. [#18716](https://github.com/open-webui/open-webui/pull/18716), [#18711](https://github.com/open-webui/open-webui/issues/18711)
- 📄 Docling RAG parameter configuration is now correctly saved in the admin UI by fixing the typo in the "DOCLING_PARAMS" parameter name. [#18390](https://github.com/open-webui/open-webui/pull/18390)
- 📃 Tika document processing now automatically detects content types instead of relying on potentially incorrect browser-provided mime-types, improving file handling accuracy for formats like RTF. [#18765](https://github.com/open-webui/open-webui/pull/18765), [#18683](https://github.com/open-webui/open-webui/issues/18683)
- 🖼️ Image and video uploads to knowledge bases now display proper error messages instead of showing an infinite spinner when the content extraction engine does not support these file types. [#18514](https://github.com/open-webui/open-webui/issues/18514)
- 📝 Notes PDF export now properly detects and applies dark mode styling consistently across both the notes list and individual note pages, with a shared utility function to eliminate code duplication. [#18526](https://github.com/open-webui/open-webui/issues/18526)
- 💭 Details tags for reasoning content are now correctly identified and rendered even when the same tag is present in user messages. [#18840](https://github.com/open-webui/open-webui/pull/18840), [#18294](https://github.com/open-webui/open-webui/issues/18294)
- 📊 Mermaid and Vega rendering errors now display inline with the code instead of showing repetitive toast notifications, improving user experience when models generate invalid diagram syntax. [Commit](https://github.com/open-webui/open-webui/commit/fdc0f04a8b7dd0bc9f9dc0e7e30854f7a0eea3e9)
- 📈 Mermaid diagram rendering errors no longer cause UI unavailability or display error messages below the input box. [#18493](https://github.com/open-webui/open-webui/pull/18493), [#18340](https://github.com/open-webui/open-webui/issues/18340)
- 🔗 Web search SSL verification is now asynchronous, preventing the website from hanging during web search operations. [#18714](https://github.com/open-webui/open-webui/pull/18714), [#18699](https://github.com/open-webui/open-webui/issues/18699)
- 🌍 Web search results now correctly use HTTP proxy environment variables when WEB_SEARCH_TRUST_ENV is enabled. [#18667](https://github.com/open-webui/open-webui/pull/18667), [#7008](https://github.com/open-webui/open-webui/discussions/7008)
- 🔍 Google Programmable Search Engine now properly includes referer headers, enabling API keys with HTTP referrer restrictions configured in Google Cloud Console. [#18871](https://github.com/open-webui/open-webui/pull/18871), [#18870](https://github.com/open-webui/open-webui/issues/18870)
- ⚡ YouTube video transcript fetching now works correctly when using a proxy connection. [#18419](https://github.com/open-webui/open-webui/pull/18419)
- 🎙️ Speech-to-text transcription no longer deletes or replaces existing text in the prompt input field, properly preserving any previously entered content. [#18540](https://github.com/open-webui/open-webui/issues/18540)
- 🎙️ The "Instant Auto-Send After Voice Transcription" setting now functions correctly and automatically sends transcribed text when enabled. [#18466](https://github.com/open-webui/open-webui/issues/18466)
- ⚙️ Chat settings now load properly when reopening a tab or starting a new session by initializing defaults when sessionStorage is empty. [#18438](https://github.com/open-webui/open-webui/pull/18438)
- 🔎 Folder tag search in the sidebar now correctly handles folder names with multiple spaces by replacing all spaces with underscores. [Commit](https://github.com/open-webui/open-webui/commit/a8fe979af68e47e4e4bb3eb76e48d93d60cd2a45)
- 🛠️ Functions page now updates immediately after deleting a function, removing the need for a manual page reload. [#18912](https://github.com/open-webui/open-webui/pull/18912), [#18908](https://github.com/open-webui/open-webui/issues/18908)
- 🛠️ Native tool calling now properly supports sequential tool calls with shared context, allowing tools to access images and data from previous tool executions in the same conversation. [#18664](https://github.com/open-webui/open-webui/pull/18664)
- 🎯 Globally enabled actions in the model editor now correctly apply as global instead of being treated as disabled. [#18577](https://github.com/open-webui/open-webui/pull/18577)
- 📋 Clipboard images pasted via the "{{CLIPBOARD}}" prompt variable are now correctly converted to base64 format before being sent to the backend, resolving base64 encoding errors. [#18432](https://github.com/open-webui/open-webui/pull/18432), [#18425](https://github.com/open-webui/open-webui/issues/18425)
- 📋 File list is now cleared when switching to models that do not support file uploads, preventing files from being sent to incompatible models. [#18496](https://github.com/open-webui/open-webui/pull/18496)
- 📂 Move menu no longer displays when folders are empty. [#18484](https://github.com/open-webui/open-webui/pull/18484)
- 📁 Folder and channel creation now validates that names are not empty, preventing creation of folders or channels with no name and showing an error toast if attempted. [#18564](https://github.com/open-webui/open-webui/pull/18564)
- 🖊️ Rich text input no longer removes text between equals signs when pasting code with comparison operators. [#18551](https://github.com/open-webui/open-webui/issues/18551)
- ⌨️ Keyboard shortcuts now display the correct keys for international and non-QWERTY keyboard layouts by detecting the user's layout using the Keyboard API. [#18533](https://github.com/open-webui/open-webui/pull/18533)
- 🌐 "Attach Webpage" button now displays with correct disabled styling when a model does not support file uploads. [#18483](https://github.com/open-webui/open-webui/pull/18483)
- 🎚️ Divider no longer displays in the integrations menu when no integrations are enabled. [#18487](https://github.com/open-webui/open-webui/pull/18487)
- 📱 Chat controls button is now properly hidden on mobile for users without admin or explicit chat control permissions. [#18641](https://github.com/open-webui/open-webui/pull/18641)
- 📍 User menu, download submenu, and move submenu are now repositioned to prevent overlap with the Chat Controls sidebar when it is open. [Commit](https://github.com/open-webui/open-webui/commit/414ab51cb6df1ab0d6c85ac6c1f2c5c9a5f8e2aa)
- 🎯 Artifacts button no longer appears in the chat menu when there are no artifacts to display. [Commit](https://github.com/open-webui/open-webui/commit/ed6449d35f84f68dc75ee5c6b3f4748a3fda0096)
- 🎨 Artifacts view now automatically displays when opening an existing conversation containing artifacts, improving user experience. [#18215](https://github.com/open-webui/open-webui/pull/18215)
- 🖌️ Formatting toolbar is no longer hidden under images or code blocks in chat and now displays correctly above all message content.
- 🎨 Layout shift near system instructions is prevented by properly rendering the chat component when system prompts are empty. [#18594](https://github.com/open-webui/open-webui/pull/18594)
- 📐 Modal layout shift caused by scrollbar appearance is prevented by adding a stable scrollbar gutter. [#18591](https://github.com/open-webui/open-webui/pull/18591)
- ✨ Spacing between icon and label in the user menu dropdown items is now consistent. [#18595](https://github.com/open-webui/open-webui/pull/18595)
- 💬 Duplicate prompt suggestions no longer cause the webpage to freeze or throw JavaScript errors by implementing proper key management with composite keys. [#18841](https://github.com/open-webui/open-webui/pull/18841), [#18566](https://github.com/open-webui/open-webui/issues/18566)
- 🔍 Chat preview loading in the search modal now works correctly for all search results by fixing an index boundary check that previously caused out-of-bounds errors. [#18911](https://github.com/open-webui/open-webui/pull/18911)
- ♿ Screen reader support was enhanced by wrapping messages in semantic elements with descriptive aria-labels, adding "Assistant is typing" and "Response complete" announcements for improved accessibility. [#18735](https://github.com/open-webui/open-webui/pull/18735)
- 🔒 Incorrect await call in the OAuth 2.1 flow is removed, eliminating a logged exception during authentication. [#18236](https://github.com/open-webui/open-webui/pull/18236)
- 🛡️ Duplicate crossorigin attribute in the manifest file was removed. [#18413](https://github.com/open-webui/open-webui/pull/18413)

### Changed

- 🔄 Firecrawl integration was refactored to use the official Firecrawl SDK instead of direct HTTP requests and langchain_community FireCrawlLoader, improving reliability and performance with batch scraping support and enhanced error handling. [#18635](https://github.com/open-webui/open-webui/pull/18635)
- 📄 MinerU content extraction engine now only supports PDF files following the upstream removal of LibreOffice document conversion in version 2.0.0; users needing to process office documents should convert them to PDF format first. [#18448](https://github.com/open-webui/open-webui/issues/18448)
## [0.6.34] - 2025-10-16

### Added

- 📄 MinerU is now supported as a document parser backend, with support for both local and managed API deployments. [#18306](https://github.com/open-webui/open-webui/pull/18306)
- 🔒 JWT token expiration default is now set to 4 weeks instead of never expiring, with security warnings displayed in backend logs and the admin UI when set to unlimited (see the sketch after this list). [#18261](https://github.com/open-webui/open-webui/pull/18261), [#18262](https://github.com/open-webui/open-webui/pull/18262)
- ⚡ Page loading performance is improved by preventing unnecessary API requests when sidebar folders are not expanded. [#18179](https://github.com/open-webui/open-webui/pull/18179), [#17476](https://github.com/open-webui/open-webui/issues/17476)
- 📁 File hash values are now included in the knowledge endpoint response, enabling efficient file synchronization through hash comparison. [#18284](https://github.com/open-webui/open-webui/pull/18284), [#18283](https://github.com/open-webui/open-webui/issues/18283)
- 🎨 Chat dialog scrollbar visibility is improved by increasing its width, making it easier to use for navigation. [#18369](https://github.com/open-webui/open-webui/pull/18369), [#11782](https://github.com/open-webui/open-webui/issues/11782)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Catalan, Chinese, Czech, Finnish, German, Kabyle, Korean, Portuguese (Brazil), Spanish, Thai, and Turkish were enhanced and expanded.
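A sketch of pinning the token lifetime explicitly. The variable name JWT_EXPIRES_IN and its duration syntax are assumptions based on the project's documented environment configuration; verify against the docs before relying on them:

```yaml
# Sketch only: makes the new 4-week default explicit.
# "-1" would restore non-expiring tokens, which this release now warns against.
services:
  open-webui:
    environment:
      JWT_EXPIRES_IN: "4w"   # assumed duration syntax, e.g. 30m, 12h, 7d, 4w
```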
### Fixed

- 📚 Focused retrieval mode now works correctly, preventing the system from forcing full context mode and loading all documents in a knowledge base regardless of settings. [#18133](https://github.com/open-webui/open-webui/issues/18133)
- 🔧 Filter inlet functions now correctly execute on tool call continuations, ensuring parameter persistence throughout tool interactions. [#18222](https://github.com/open-webui/open-webui/issues/18222)
- 🛠️ External tool servers now properly support DELETE requests with body data. [#18289](https://github.com/open-webui/open-webui/pull/18289), [#18287](https://github.com/open-webui/open-webui/issues/18287)
- 🗄️ Oracle23ai vector database client now correctly handles variable initialization, resolving UnboundLocalError when retrieving items from collections. [#18356](https://github.com/open-webui/open-webui/issues/18356)
- 🔧 Model auto-pull functionality now works correctly even when user settings remain unmodified. [#18324](https://github.com/open-webui/open-webui/pull/18324)
- 🎨 Duplicate HTML content in artifacts is now prevented by improving code block detection logic. [#18195](https://github.com/open-webui/open-webui/pull/18195), [#6154](https://github.com/open-webui/open-webui/issues/6154)
- 💬 Pinned chats now appear in the Reference Chats list and can be referenced in conversations. [#18288](https://github.com/open-webui/open-webui/issues/18288)
- 📝 Misleading knowledge base warning text in documents settings is clarified to correctly instruct users about reindexing vectors. [#18263](https://github.com/open-webui/open-webui/pull/18263)
- 🔔 Toast notifications can now be dismissed even when a modal is open. [#18260](https://github.com/open-webui/open-webui/pull/18260)
- 🔘 The "Chats" button in the sidebar now correctly toggles chat list visibility without navigating away from the current page. [#18232](https://github.com/open-webui/open-webui/pull/18232)
- 🎯 The Integrations menu no longer closes prematurely when clicking outside the Valves modal. [#18310](https://github.com/open-webui/open-webui/pull/18310)
- 🛠️ Tool ID display issues where "undefined" was incorrectly shown in the interface are now resolved. [#18178](https://github.com/open-webui/open-webui/pull/18178)
- 🛠️ Model management issues caused by excessively long model IDs are now prevented through validation that limits model IDs to 256 characters. [#18125](https://github.com/open-webui/open-webui/issues/18125)
## [0.6.33] - 2025-10-08

### Added

- 🎨 Workspace interface received a comprehensive redesign across the Models, Knowledge, Prompts, and Tools sections, featuring reorganized controls, view filters for created vs shared items, tag selectors, improved visual hierarchy, and streamlined import/export functionality. [Commit](https://github.com/open-webui/open-webui/commit/2c59a288603d8c5f004f223ee00fef37cc763a8e), [Commit](https://github.com/open-webui/open-webui/commit/6050c86ab6ef6b8c96dd3f99c62a6867011b67a4), [Commit](https://github.com/open-webui/open-webui/commit/96ecb47bc71c072aa34ef2be10781b042bef4e8c), [Commit](https://github.com/open-webui/open-webui/commit/2250d102b28075a9611696e911536547abb8b38a), [Commit](https://github.com/open-webui/open-webui/commit/23c8f6d507bfee75ab0015a3e2972d5c26f7e9bf), [Commit](https://github.com/open-webui/open-webui/commit/a743b16728c6ae24b8befbc2d7f24eb9e20c4ad5)
- 🛠️ Functions admin interface received a comprehensive redesign with creator attribution display, ownership filters for created vs shared items, improved organization, and refined styling. [Commit](https://github.com/open-webui/open-webui/commit/f5e1a42f51acc0b9d5b63a33c1ca2e42470239c1)
- ⚡ Page initialization performance is significantly improved through parallel data loading and optimized folder API calls, reducing initial page load time. [#17559](https://github.com/open-webui/open-webui/pull/17559), [#17889](https://github.com/open-webui/open-webui/pull/17889)
- ⚡ Chat overview component is now dynamically loaded on demand, reducing initial page bundle size by approximately 470KB and improving first-screen loading speed. [#17595](https://github.com/open-webui/open-webui/pull/17595)
- 📁 Folders can now be attached to chats using the "#" command, automatically expanding to include all files within the folder for streamlined knowledge base integration. [Commit](https://github.com/open-webui/open-webui/commit/d2cb78179d66dc85188172a08622d4c97a2ea1ee)
- 📱 Progressive Web App now supports Android share target functionality, allowing users to share web pages, YouTube videos, and text directly to Open WebUI from the system share menu. [#17633](https://github.com/open-webui/open-webui/pull/17633), [#17125](https://github.com/open-webui/open-webui/issues/17125)
- 🗄️ Redis session storage is now available as an experimental option for OAuth authentication flows via the ENABLE_STAR_SESSIONS_MIDDLEWARE environment variable, providing shared session state across multi-replica deployments to address CSRF errors; currently only basic Redis setups are supported (see the sketch after this list). [#17223](https://github.com/open-webui/open-webui/pull/17223), [#15373](https://github.com/open-webui/open-webui/issues/15373), [Docs:Commit](https://github.com/open-webui/docs/commit/14052347f165d1b597615370373d7289ce44c7f9)
- 📊 Vega and Vega-Lite chart visualization renderers are now supported in code blocks, enabling inline rendering of data visualizations with automatic compilation of Vega-Lite specifications. [#18033](https://github.com/open-webui/open-webui/pull/18033), [#18040](https://github.com/open-webui/open-webui/pull/18040), [#18022](https://github.com/open-webui/open-webui/issues/18022)
- 🔗 OpenAI connections now support custom HTTP headers, enabling users to configure authentication and routing headers for specific deployment requirements. [#18021](https://github.com/open-webui/open-webui/pull/18021), [#9732](https://github.com/open-webui/open-webui/discussions/9732)
- 🔐 OpenID Connect authentication now supports OIDC providers without an email scope via the ENABLE_OAUTH_WITHOUT_EMAIL environment variable, enabling compatibility with identity providers that don't expose email addresses (see the sketch after this list). [#18047](https://github.com/open-webui/open-webui/pull/18047), [#18045](https://github.com/open-webui/open-webui/issues/18045)
- 🤖 Ollama model management modal now features individual model update cancellation, comprehensive tooltips for all buttons, and streamlined notification behavior to reduce toast spam. [#16863](https://github.com/open-webui/open-webui/pull/16863)
- ☁️ OneDrive file picker now includes search functionality and a "My Organization" pivot for business accounts, enabling easier file discovery across organizational content. [#17930](https://github.com/open-webui/open-webui/pull/17930), [#17929](https://github.com/open-webui/open-webui/issues/17929)
- 📊 Chat overview flow diagram now supports toggling between vertical and horizontal layout orientations for improved visualization flexibility. [#17941](https://github.com/open-webui/open-webui/pull/17941)
- 🔊 OpenAI Text-to-Speech engine now supports additional parameters, allowing users to customize TTS behavior with provider-specific options via JSON configuration. [#17985](https://github.com/open-webui/open-webui/issues/17985), [#17188](https://github.com/open-webui/open-webui/pull/17188)
- 🛠️ Tool server list now displays server name, URL, and type (OpenAPI or MCP) for easier identification and management. [#18062](https://github.com/open-webui/open-webui/issues/18062)
- 📁 Folders now remember the last selected model, automatically applying it when starting new chats within that folder. [#17836](https://github.com/open-webui/open-webui/issues/17836)
- 🔢 Ollama embedding endpoint now supports the optional dimensions parameter for controlling embedding output size, compatible with Ollama v0.11.11 and later. [#17942](https://github.com/open-webui/open-webui/pull/17942)
- ⚡ Workspace knowledge page load time is improved by removing redundant API calls, enhancing overall responsiveness. [#18057](https://github.com/open-webui/open-webui/pull/18057)
- ⚡ File metadata query performance is enhanced by selecting only relevant columns instead of retrieving entire records, reducing database overhead. [#18013](https://github.com/open-webui/open-webui/pull/18013)
- 📄 Note PDF exports now include titles and properly render in dark mode with appropriate background colors. [Commit](https://github.com/open-webui/open-webui/commit/216fb5c3db1a223ffe6e72d97aa9551fe0e2d028)
- 📄 Docling document extraction now supports additional parameters for VLM pipeline configuration, enabling customized vision model settings. [#17363](https://github.com/open-webui/open-webui/pull/17363)
- ⚙️ Server startup script now supports passing arbitrary arguments to uvicorn, enabling custom server configuration options. [#17919](https://github.com/open-webui/open-webui/pull/17919), [#17918](https://github.com/open-webui/open-webui/issues/17918)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for German, Danish, Spanish, Korean, Portuguese (Brazil), Simplified Chinese, and Traditional Chinese were enhanced and expanded.
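A sketch combining the two OAuth-related flags introduced above. Both flag names come from the 0.6.33 entries; the Redis URL is a placeholder, and per the entry only basic Redis setups are supported for the sessions middleware:

```yaml
# Sketch only: values are illustrative, not documented defaults.
services:
  open-webui:
    environment:
      ENABLE_OAUTH_WITHOUT_EMAIL: "true"        # accept OIDC providers lacking an email scope
      ENABLE_STAR_SESSIONS_MIDDLEWARE: "true"   # experimental shared OAuth session state
      REDIS_URL: "redis://redis.internal.example:6379/0"  # assumed variable for the Redis backend
```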
|
||||
|
||||
### Fixed
|
||||
|
||||
- 💬 System prompts are no longer duplicated in chat requests, eliminating confusion and excessive token usage caused by repeated instructions being sent to models. [#17198](https://github.com/open-webui/open-webui/issues/17198), [#16855](https://github.com/open-webui/open-webui/issues/16855)
|
||||
- 🔐 MCP OAuth 2.1 authentication now complies with the standard by implementing PKCE with S256 code challenge method and explicitly passing client credentials during token authorization, resolving "code_challenge: Field required" and "client_id: Field required" errors when connecting to OAuth-secured MCP servers. [Commit](https://github.com/open-webui/open-webui/commit/911a114ad459f5deebd97543c13c2b90196efb54), [#18010](https://github.com/open-webui/open-webui/issues/18010), [#18087](https://github.com/open-webui/open-webui/pull/18087)
|
||||
- 🔐 OAuth signup flow now handles password hashing correctly by migrating from passlib to native bcrypt, preventing failures when passwords exceed 72 bytes. [#17917](https://github.com/open-webui/open-webui/issues/17917)
|
||||
- 🔐 OAuth token refresh errors are resolved by properly registering and storing OAuth clients, fixing "Constructor parameter should be str" exceptions for Google, Microsoft, and OIDC providers. [#17829](https://github.com/open-webui/open-webui/issues/17829)
|
||||
- 🔐 OAuth server metadata URL is now correctly accessed via the proper attribute, fixing automatic token refresh and logout functionality for Microsoft OAuth provider when OPENID_PROVIDER_URL is not set. [#18065](https://github.com/open-webui/open-webui/pull/18065)
|
||||
- 🔐 OAuth credential decryption failures now allow the application to start gracefully with clear error messages instead of crashing, preventing complete service outages when WEBUI_SECRET_KEY mismatches occur during database migrations or environment changes. [#18094](https://github.com/open-webui/open-webui/pull/18094), [#18092](https://github.com/open-webui/open-webui/issues/18092)
|
||||
- 🔐 OAuth 2.1 server discovery now correctly attempts all configured discovery URLs in sequence instead of only trying the first URL. [#17906](https://github.com/open-webui/open-webui/pull/17906), [#17904](https://github.com/open-webui/open-webui/issues/17904), [#18026](https://github.com/open-webui/open-webui/pull/18026)
|
||||
- 🔐 Login redirect now correctly honors the redirect query parameter after authentication, ensuring users are returned to their intended destination with query parameters intact instead of defaulting to the homepage. [#18071](https://github.com/open-webui/open-webui/issues/18071)
|
||||
- ☁️ OneDrive Business integration authentication regression is resolved, ensuring the popup now properly triggers when connecting to OneDrive accounts. [#17902](https://github.com/open-webui/open-webui/pull/17902), [#17825](https://github.com/open-webui/open-webui/discussions/17825), [#17816](https://github.com/open-webui/open-webui/issues/17816)
|
||||
- 👥 Default group settings now persist correctly after page navigation, ensuring configuration changes are properly saved and retained. [#17899](https://github.com/open-webui/open-webui/issues/17899), [#18003](https://github.com/open-webui/open-webui/issues/18003)
|
||||
- 📁 Folder data integrity is now verified on retrieval, automatically fixing orphaned folders with invalid parent references and ensuring proper cascading deletion of nested folder structures. [Commit](https://github.com/open-webui/open-webui/commit/5448618dd5ea181b9635b77040cef60926a902ff)
|
||||
- 🗄️ Redis Sentinel and Redis Cluster configurations with the experimental ENABLE_STAR_SESSIONS_MIDDLEWARE feature are now properly isolated by making the feature opt-in only, preventing ReadOnlyError failures when connecting to read replicas in multi-node Redis deployments. [#18073](https://github.com/open-webui/open-webui/issues/18073)
|
||||
- 📊 Mermaid and Vega diagram rendering now displays error toast notifications when syntax errors are detected, helping users identify and fix diagram issues instead of silently failing. [#18068](https://github.com/open-webui/open-webui/pull/18068)
|
||||
- 🤖 Reasoning models that return reasoning_content instead of content no longer cause NoneType errors during chat title generation, follow-up suggestions, and tag generation. [#18080](https://github.com/open-webui/open-webui/pull/18080)
|
||||
- 📚 Citation rendering now correctly handles multiple source references in a single bracket, parsing formats like [1,2] and [1, 2] into separate clickable citation links. [#18120](https://github.com/open-webui/open-webui/pull/18120)
|
||||
- 🔍 Web search now handles individual source failures gracefully, continuing to process remaining sources instead of failing entirely when a single URL is unreachable or returns an error. [Commit](https://github.com/open-webui/open-webui/commit/e000494e488090c5f66989a2b3f89d3eaeb7946b), [Commit](https://github.com/open-webui/open-webui/commit/53e98620bff38ab9280aee5165af0a704bdd99b9)
|
||||
- 🔍 Hybrid search with reranking now handles empty result sets gracefully instead of crashing with ValueError when all results are filtered out due to relevance thresholds. [#18096](https://github.com/open-webui/open-webui/issues/18096)
|
||||
- 🔍 Reranking models without defined padding tokens now work correctly by automatically falling back to eos_token_id as pad_token_id, fixing "Cannot handle batch sizes > 1" errors for models like Qwen3-Reranker. [#18108](https://github.com/open-webui/open-webui/pull/18108), [#16027](https://github.com/open-webui/open-webui/discussions/16027)
|
||||
- 🔍 Model selector search now correctly returns results for non-admin users by dynamically updating the search index when the model list changes, fixing a race condition that caused empty search results. [#17996](https://github.com/open-webui/open-webui/pull/17996), [#17960](https://github.com/open-webui/open-webui/pull/17960)
|
||||
- ⚡ Task model function calling performance is improved by excluding base64 image data from payloads, significantly reducing token count and memory usage when images are present in conversations. [#17897](https://github.com/open-webui/open-webui/pull/17897)
- 🤖 Text selection "Ask" action now correctly recognizes and uses local models configured via direct connections instead of only showing external provider models. [#17896](https://github.com/open-webui/open-webui/issues/17896)
- 🛑 Task cancellation API now returns accurate response status, correctly reporting successful cancellations instead of incorrectly indicating failures. [#17920](https://github.com/open-webui/open-webui/issues/17920)
- 💬 Follow-up query suggestions are now generated and displayed in temporary chats, matching the behavior of saved chats. [#14987](https://github.com/open-webui/open-webui/issues/14987)
- 🔊 Azure Text-to-Speech now properly escapes special characters like ampersands in SSML, preventing HTTP 400 errors and ensuring audio generation succeeds for all text content. [#17962](https://github.com/open-webui/open-webui/issues/17962)
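  Conceptually the fix is standard XML escaping before the text is wrapped in SSML; a minimal sketch (the voice name and SSML envelope are arbitrary examples, not Open WebUI's actual code):

  ```python
  from xml.sax.saxutils import escape

  def to_ssml(text: str, voice: str = "en-US-JennyNeural") -> str:
      # escape() turns & < > into entities so they cannot break the SSML markup.
      return (
          "<speak version='1.0' xml:lang='en-US'>"
          f"<voice name='{voice}'>{escape(text)}</voice>"
          "</speak>"
      )

  print(to_ssml("Fish & chips < 5"))
  ```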
- 🛠️ OpenAPI tool server calls with optional parameters now execute successfully even when no arguments are provided, removing the incorrect requirement for a request body. [#18036](https://github.com/open-webui/open-webui/issues/18036)
- 🛠️ MCP mode tool server connections no longer incorrectly validate the OpenAPI path field, allowing seamless switching between OpenAPI and MCP connection types. [#17989](https://github.com/open-webui/open-webui/pull/17989), [#17988](https://github.com/open-webui/open-webui/issues/17988)
- 🛠️ Third-party tool responses containing non-UTF8 or invalid byte sequences are now handled gracefully without causing request failures. [#17882](https://github.com/open-webui/open-webui/pull/17882)
- 🎨 Workspace filter dropdown now correctly renders model tags as strings instead of displaying individual characters, fixing broken filtering interface when models have multiple tags. [#18034](https://github.com/open-webui/open-webui/issues/18034)
- ⌨️ Ctrl+Enter keyboard shortcut now correctly sends messages in mobile and narrow browser views on Chrome instead of inserting newlines. [#17975](https://github.com/open-webui/open-webui/issues/17975)
- ⌨️ Tab characters are now preserved when pasting code or formatted text into the chat input box in plain text mode. [#17958](https://github.com/open-webui/open-webui/issues/17958)
- 📋 Text selection copying from the chat input box now correctly copies only the selected text instead of the entire textbox content. [#17911](https://github.com/open-webui/open-webui/issues/17911)
- 🔍 Web search query logging now uses debug level instead of info level, preventing user search queries from appearing in production logs. [#17888](https://github.com/open-webui/open-webui/pull/17888)
- 📝 Debug print statements in middleware were removed to prevent excessive log pollution and respect configured logging levels. [#17943](https://github.com/open-webui/open-webui/issues/17943)

### Changed

- 🗄️ Milvus vector database dependency is updated from pymilvus 2.5.0 to 2.6.2, ensuring compatibility with newer Milvus versions but requiring users on older Milvus instances to either upgrade their database or manually downgrade the pymilvus package. [#18066](https://github.com/open-webui/open-webui/pull/18066)
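  If you need to stay on an older Milvus server, the manual downgrade mentioned above would look roughly like this (pin to whatever client version matches your server):

  ```bash
  pip install "pymilvus==2.5.0"
  ```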

## [0.6.32] - 2025-09-29

### Added

- ⚡ JSON model import moved to backend processing for significant performance improvements when importing large model files. [#17871](https://github.com/open-webui/open-webui/pull/17871)
- ⚠️ Visual warnings for group permissions that display when a permission is disabled in a group but remains enabled in the default user role, clarifying inheritance behavior for administrators. [#17848](https://github.com/open-webui/open-webui/pull/17848)
- 🗄️ Milvus multi-tenancy mode using shared collections with resource ID filtering for improved scalability, mirroring the existing Qdrant implementation and configurable via ENABLE_MILVUS_MULTITENANCY_MODE environment variable. [#17837](https://github.com/open-webui/open-webui/pull/17837)
- 🛠️ Enhanced tool result processing with improved error handling, better MCP tool result handling, and performance improvements for embedded UI components. [Commit](https://github.com/open-webui/open-webui/commit/4f06f29348b2c9d71c87d1bbe5b748a368f5101f)
- 👥 New user groups now automatically inherit default group permissions, streamlining the admin setup process by eliminating manual permission configuration. [#17843](https://github.com/open-webui/open-webui/pull/17843)
- 🗂️ Bulk unarchive functionality for all chats, providing a single backend endpoint to efficiently restore all archived chats at once. [#17857](https://github.com/open-webui/open-webui/pull/17857)
- 🏷️ Browser tab title toggle setting allows users to control whether chat titles appear in the browser tab or display only "Open WebUI". [#17851](https://github.com/open-webui/open-webui/pull/17851)
- 💬 Reply-to-message functionality in channels, allowing users to reply directly to specific messages with visual threading and context display. [Commit](https://github.com/open-webui/open-webui/commit/1a18928c94903ad1f1f0391b8ade042c3e60205b)
- 🔧 Tool server import and export functionality, allowing direct upload of openapi.json and openapi.yaml files as an alternative to URL-based configuration. [#14446](https://github.com/open-webui/open-webui/issues/14446)
- 🔧 User valve configuration for Functions is now available in the integration menu, providing consistent management alongside Tools. [#17784](https://github.com/open-webui/open-webui/issues/17784)
- 🔐 Admin permission toggle for controlling public sharing of notes, configurable via USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING environment variable. [#17801](https://github.com/open-webui/open-webui/pull/17801), [Docs:#715](https://github.com/open-webui/docs/pull/715)
- 🗄️ DISKANN index type support for Milvus vector database with configurable maximum degree and search list size parameters. [#17770](https://github.com/open-webui/open-webui/pull/17770), [Docs:Commit](https://github.com/open-webui/docs/commit/cec50ab4d4b659558ca1ccd4b5e6fc024f05fb83)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Chinese (Simplified & Traditional) and Bosnian (Latin) were enhanced and expanded.

### Fixed

- 🛠️ MCP tool calls are now correctly routed to the appropriate server when multiple streamable-http MCP servers are enabled, preventing "Tool not found" errors. [#17817](https://github.com/open-webui/open-webui/issues/17817)
- 🛠️ External tool servers (OpenAPI/MCP) now properly process and return tool results to the model, restoring functionality that was broken in v0.6.31. [#17764](https://github.com/open-webui/open-webui/issues/17764)
- 🔧 User valve detection now correctly identifies valves in imported tool code, ensuring gear icons appear in the integrations menu for all tools with user valves. [#17765](https://github.com/open-webui/open-webui/issues/17765)
- 🔐 MCP OAuth discovery now correctly handles multi-tenant configurations by including subpaths in metadata URL discovery. [#17768](https://github.com/open-webui/open-webui/issues/17768)
- 🗄️ Milvus query operations now correctly use -1 instead of None for unlimited queries, preventing TypeError exceptions. [#17769](https://github.com/open-webui/open-webui/pull/17769), [#17088](https://github.com/open-webui/open-webui/issues/17088)
- 📁 File upload error messages are now displayed when files are modified during upload, preventing user confusion on Android and Windows devices. [#17777](https://github.com/open-webui/open-webui/pull/17777)
- 🎨 MessageInput Integrations button hover effect now displays correctly with proper visual feedback. [#17767](https://github.com/open-webui/open-webui/pull/17767)
- 🎯 "Set as default" label positioning is fixed to ensure it remains clickable in all scenarios, including multi-model configurations. [#17779](https://github.com/open-webui/open-webui/pull/17779)
- 🎛️ Floating buttons now correctly retrieve message context by using the proper messageId parameter in createMessagesList calls. [#17823](https://github.com/open-webui/open-webui/pull/17823)
- 📌 Pinned chats are now properly cleared from the sidebar after archiving all chats, ensuring UI consistency without requiring a page refresh. [#17832](https://github.com/open-webui/open-webui/pull/17832)
- 🗑️ Delete confirmation modals now properly truncate long names for Notes, Prompts, Tools, and Functions to prevent modal overflow. [#17812](https://github.com/open-webui/open-webui/pull/17812)
- 🌐 Internationalization function calls now use proper Svelte store subscription syntax, preventing "i18n.t is not a function" errors on the model creation page. [#17819](https://github.com/open-webui/open-webui/pull/17819)
- 🎨 Playground chat interface button layout is corrected to prevent vertical text rendering for Assistant/User role buttons. [#17819](https://github.com/open-webui/open-webui/pull/17819)
- 🏷️ UI text truncation is improved across multiple components including usernames in admin panels, arena model names, model tags, and filter tags to prevent layout overflow issues. [#17805](https://github.com/open-webui/open-webui/pull/17805), [#17803](https://github.com/open-webui/open-webui/pull/17803), [#17791](https://github.com/open-webui/open-webui/pull/17791), [#17796](https://github.com/open-webui/open-webui/pull/17796)

## [0.6.31] - 2025-09-25

### Added

- 🔌 MCP (streamable HTTP) server support was added alongside existing OpenAPI server integration, allowing users to connect both server types through an improved server configuration interface. [#15932](https://github.com/open-webui/open-webui/issues/15932) [#16651](https://github.com/open-webui/open-webui/pull/16651), [Commit](https://github.com/open-webui/open-webui/commit/fd7385c3921eb59af76a26f4c475aedb38ce2406), [Commit](https://github.com/open-webui/open-webui/commit/777e81f7a8aca957a359d51df8388e5af4721a68), [Commit](https://github.com/open-webui/open-webui/commit/de7f7b3d855641450f8e5aac34fbae0665e0b80e), [Commit](https://github.com/open-webui/open-webui/commit/f1bbf3a91e4713039364b790e886e59b401572d0), [Commit](https://github.com/open-webui/open-webui/commit/c55afc42559c32a6f0c8beb0f1bb18e9360ab8af), [Commit](https://github.com/open-webui/open-webui/commit/61f20acf61f4fe30c0e5b0180949f6e1a8cf6524)
- 🔐 To enable MCP server authentication, OAuth 2.1 dynamic client registration was implemented with secure automatic client registration, encrypted session management, and seamless authentication flows. [Commit](https://github.com/open-webui/open-webui/commit/972be4eda5a394c111e849075f94099c9c0dd9aa), [Commit](https://github.com/open-webui/open-webui/commit/77e971dd9fbeee806e2864e686df5ec75e82104b), [Commit](https://github.com/open-webui/open-webui/commit/879abd7feea3692a2f157da4a458d30f27217508), [Commit](https://github.com/open-webui/open-webui/commit/422d38fd114b1ebd8a7dbb114d64e14791e67d7a), [Docs:#709](https://github.com/open-webui/docs/pull/709)
- 🛠️ External & Built-In Tools can now support rich UI element embedding ([Docs](https://docs.openwebui.com/features/plugin/tools/development)), allowing tools to return HTML content and interactive iframes that display directly within chat conversations with configurable security settings. [Commit](https://github.com/open-webui/open-webui/commit/07c5b25bc8b63173f406feb3ba183d375fedee6a), [Commit](https://github.com/open-webui/open-webui/commit/a5d8882bba7933a2c2c31c0a1405aba507c370bb), [Commit](https://github.com/open-webui/open-webui/commit/7be5b7f50f498de97359003609fc5993a172f084), [Commit](https://github.com/open-webui/open-webui/commit/a89ffccd7e96705a4a40e845289f4fcf9c4ae596)
- 📝 Note editor now supports drag-and-drop reordering of list items with visual drag handles, making list organization more intuitive and efficient. [Commit](https://github.com/open-webui/open-webui/commit/e4e97e727e9b4971f1c363b1280ca3a101599d88), [Commit](https://github.com/open-webui/open-webui/commit/aeb5288a3c7a6e9e0a47b807cc52f870c1b7dbe6)
- 🔍 Search modal was enhanced with quick action buttons for starting new conversations and creating notes, with intelligent content pre-population from search queries. [Commit](https://github.com/open-webui/open-webui/commit/aa6f63a335e172fec1dc94b2056541f52c1167a6), [Commit](https://github.com/open-webui/open-webui/commit/612a52d7bb7dbe9fa0bbbc8ac0a552d2b9801146), [Commit](https://github.com/open-webui/open-webui/commit/b03529b006f3148e895b1094584e1ab129ecac5b)
- 🛠️ Tool user valve configuration interface was added to the integrations menu, displaying clickable gear icon buttons with tooltips for tools that support user-specific settings, making personal tool configurations easily accessible. [Commit](https://github.com/open-webui/open-webui/commit/27d61307cdce97ed11a05ec13fc300249d6022cd)
- 👥 Channel access control was enhanced to require write permissions for posting, editing, and deleting messages, while read-only users can view content but cannot contribute. [#17543](https://github.com/open-webui/open-webui/pull/17543)
- 💬 Channel models now support image processing, allowing AI assistants to view and analyze images shared in conversation threads. [Commit](https://github.com/open-webui/open-webui/commit/9f0010e234a6f40782a66021435d3c02b9c23639)
- 🌐 Attach Webpage button was added to the message input menu, providing a user-friendly modal interface for attaching web content and YouTube videos as an alternative to the existing URL syntax. [#17534](https://github.com/open-webui/open-webui/pull/17534)
- 🔐 Redis session storage support was added for OAuth redirects, providing better state handling in multi-pod Kubernetes deployments and resolving CSRF mismatch errors. [#17223](https://github.com/open-webui/open-webui/pull/17223), [#15373](https://github.com/open-webui/open-webui/issues/15373)
- 🔍 Ollama Cloud web search integration was added as a new search engine option, providing access to web search functionality through Ollama's cloud infrastructure. [Commit](https://github.com/open-webui/open-webui/commit/e06489d92baca095b8f376fbef223298c7772579), [Commit](https://github.com/open-webui/open-webui/commit/4b6d34438bcfc45463dc7a9cb984794b32c1f0a1), [Commit](https://github.com/open-webui/open-webui/commit/05c46008da85357dc6890b846789dfaa59f4a520), [Commit](https://github.com/open-webui/open-webui/commit/fe65fe0b97ec5a8fff71592ff04a25c8e123d108), [Docs:#708](https://github.com/open-webui/docs/pull/708)
- 🔍 Perplexity Websearch API integration was added as a new search engine option, providing access to the new websearch functionality provided by Perplexity. [#17756](https://github.com/open-webui/open-webui/issues/17756), [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/7f411dd5cc1c29733216f79e99eeeed0406a2afe)
- ☁️ OneDrive integration was improved to support separate client IDs for personal and business authentication, enabling both integrations to work simultaneously. [#17619](https://github.com/open-webui/open-webui/pull/17619), [Docs](https://docs.openwebui.com/tutorials/integrations/onedrive-sharepoint), [Docs](https://docs.openwebui.com/getting-started/env-configuration/#onedrive)
- 📝 Pending user overlay content now supports markdown formatting, enabling rich text display for custom messages similar to banner functionality. [#17681](https://github.com/open-webui/open-webui/pull/17681)
- 🎨 Image generation model selection was centralized to enable dynamic model override in function calls, allowing pipes and tools to specify different models than the global default while maintaining backward compatibility. [#17689](https://github.com/open-webui/open-webui/pull/17689)
- 🎨 Interface design was modernized with updated visual styling, improved spacing, and refined component layouts across modals, sidebar, settings, and navigation elements. [Commit](https://github.com/open-webui/open-webui/commit/27a91cc80a24bda0a3a188bc3120a8ab57b00881), [Commit](https://github.com/open-webui/open-webui/commit/4ad743098615f9c58daa9df392f31109aeceeb16), [Commit](https://github.com/open-webui/open-webui/commit/fd7385c3921eb59af76a26f4c475aedb38ce2406)
- 📊 Notes query performance was optimized through database-level filtering and separated access control logic, reducing memory usage and eliminating N+1 query problems for better scalability. [#17607](https://github.com/open-webui/open-webui/pull/17607) [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/da661756fa7eec754270e6dd8c67cbf74a28a17f)
- ⚡ Page loading performance was optimized by deferring API requests until components are actually opened, including ChangelogModal, ModelSelector, RecursiveFolder, ArchivedChatsModal, and SearchModal. [#17542](https://github.com/open-webui/open-webui/pull/17542), [#17555](https://github.com/open-webui/open-webui/pull/17555), [#17557](https://github.com/open-webui/open-webui/pull/17557), [#17541](https://github.com/open-webui/open-webui/pull/17541), [#17640](https://github.com/open-webui/open-webui/pull/17640)
- ⚡ Bundle size was reduced by 1.58MB through optimized highlight.js language support, improving page loading speed and reducing bandwidth usage. [#17645](https://github.com/open-webui/open-webui/pull/17645)
- ⚡ Editor collaboration functionality was refactored to reduce package size by 390KB and minimize compilation errors, improving build performance and reliability. [#17593](https://github.com/open-webui/open-webui/pull/17593)
- ♿ Enhanced user interface accessibility through the addition of unique element IDs, improving targeting for testing, styling, and assistive technologies while providing better semantic markup for screen readers and accessibility tools. [#17746](https://github.com/open-webui/open-webui/pull/17746)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Chinese (Simplified and Traditional), Korean, Irish, Spanish, Finnish, French, Kabyle, Russian, and Catalan were enhanced and improved.

### Fixed

- 🛡️ SVG content security was enhanced by implementing DOMPurify sanitization to prevent XSS attacks through malicious SVG elements, ensuring safe rendering of user-generated SVG content. [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/750a659a9fee7687e667d9d755e17b8a0c77d557)
- ☁️ OneDrive attachment menu rendering issues were resolved by restructuring the submenu interface from dropdown to tabbed navigation, preventing menu items from being hidden or clipped due to overflow constraints. [#17554](https://github.com/open-webui/open-webui/issues/17554), [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/90e4b49b881b644465831cc3028bb44f0f7a2196)
- 💬 Attached conversation references now persist throughout the entire chat session, ensuring models can continue querying referenced conversations after multiple conversation turns. [#17750](https://github.com/open-webui/open-webui/issues/17750)
- 🔍 Search modal text box focus issues after pinning or unpinning chats were resolved, allowing users to properly exit the search interface by clicking outside the text box. [#17743](https://github.com/open-webui/open-webui/issues/17743)
- 🔍 Search function chat list is now properly updated in real-time when chats are created or deleted, eliminating stale search results and preview loading failures. [#17741](https://github.com/open-webui/open-webui/issues/17741)
- 💬 Chat jitter and delayed code block expansion in multi-model sessions were resolved by reverting dynamic CodeEditor loading, restoring stable rendering behavior. [#17715](https://github.com/open-webui/open-webui/pull/17715), [#17684](https://github.com/open-webui/open-webui/issues/17684)
- 📎 File upload handling was improved to properly recognize uploaded files even when no accompanying text message is provided, resolving issues where attachments were ignored in custom prompts. [#17492](https://github.com/open-webui/open-webui/issues/17492)
- 💬 Chat conversation referencing within projects was restored by including foldered chats in the reference menu, allowing users to properly quote conversations from within their project scope. [#17530](https://github.com/open-webui/open-webui/issues/17530)
- 🔍 RAG query generation is now skipped when all attached files are set to full context mode, preventing unnecessary retrieval operations and improving system efficiency. [#17744](https://github.com/open-webui/open-webui/pull/17744)
- 💾 Memory leaks in file handling and HTTP connections are prevented through proper resource cleanup, ensuring stable memory usage during large file downloads and processing operations. [#17608](https://github.com/open-webui/open-webui/pull/17608)
- 🔐 OAuth access token refresh errors are resolved by properly implementing async/await patterns, preventing "coroutine object has no attribute get" failures during token expiry. [#17585](https://github.com/open-webui/open-webui/issues/17585), [#17678](https://github.com/open-webui/open-webui/issues/17678)
- ⚙️ Valve behavior was improved to properly handle default values and array types, ensuring only explicitly set values are persisted while maintaining consistent distinction between custom and default valve states. [#17664](https://github.com/open-webui/open-webui/pull/17664)
- 🔍 Hybrid search functionality was enhanced to handle inconsistent parameter types and prevent failures when collection results are None, empty, or in unexpected formats. [#17617](https://github.com/open-webui/open-webui/pull/17617)
- 📁 Empty folder deletion is now allowed regardless of chat deletion permission restrictions, resolving cases where users couldn't remove folders after deleting all contained chats. [#17683](https://github.com/open-webui/open-webui/pull/17683)
- 📝 Rich text editor console errors were resolved by adding proper error handling when the TipTap editor view is not available or not yet mounted. [#17697](https://github.com/open-webui/open-webui/issues/17697)
- 🗒️ Hidden models are now properly excluded from the notes section dropdown and default model selection, preventing users from accessing models they shouldn't see. [#17722](https://github.com/open-webui/open-webui/pull/17722)
- 🖼️ AI-generated image download filenames now use a clean, translatable "Generated Image" format instead of potentially problematic response text, improving file management and compatibility. [#17721](https://github.com/open-webui/open-webui/pull/17721)
- 🎨 Toggle switch display issues in the Integrations interface are fixed, preventing background highlighting and obscuring on hover. [#17564](https://github.com/open-webui/open-webui/issues/17564)

### Changed

- 👥 Channel permissions now require write access for message posting, editing, and deletion, with existing user groups defaulting to read-only access requiring manual admin migration to write permissions for full participation.
- ☁️ OneDrive environment variable configuration was updated to use separate ONEDRIVE_CLIENT_ID_PERSONAL and ONEDRIVE_CLIENT_ID_BUSINESS variables for better client ID separation, while maintaining backward compatibility with the legacy ONEDRIVE_CLIENT_ID variable. [Docs](https://docs.openwebui.com/tutorials/integrations/onedrive-sharepoint), [Docs](https://docs.openwebui.com/getting-started/env-configuration/#onedrive)
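  A sketch of the resulting environment configuration (values are placeholders):

  ```bash
  # Separate app registrations for each account type:
  ONEDRIVE_CLIENT_ID_PERSONAL="<personal-app-client-id>"
  ONEDRIVE_CLIENT_ID_BUSINESS="<business-app-client-id>"

  # Legacy variable, still honored for backward compatibility:
  # ONEDRIVE_CLIENT_ID="<client-id>"
  ```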

## [0.6.30] - 2025-09-17

### Added

- 🔑 Microsoft Entra ID authentication type support was added for Azure OpenAI connections, enabling enhanced security and streamlined authentication workflows.

### Fixed

- ☁️ OneDrive integration was fixed after recent breakage, restoring reliable account connectivity and file access.

## [0.6.29] - 2025-09-17

### Added

- 🎨 The chat input menu has been completely overhauled with a revolutionary new design, consolidating attachments under a unified '+' button, organizing integrations into a streamlined options menu, and introducing powerful, interactive selectors for attaching chats, notes, and knowledge base items. [Commit](https://github.com/open-webui/open-webui/commit/a68342d5a887e36695e21f8c2aec593b159654ff), [Commit](https://github.com/open-webui/open-webui/commit/96b8aaf83ff341fef432649366bc5155bac6cf20), [Commit](https://github.com/open-webui/open-webui/commit/4977e6d50f7b931372c96dd5979ca635d58aeb78), [Commit](https://github.com/open-webui/open-webui/commit/d973db829f7ec98b8f8fe7d3b2822d588e79f94e), [Commit](https://github.com/open-webui/open-webui/commit/d4c628de09654df76653ad9bce9cb3263e2f27c8), [Commit](https://github.com/open-webui/open-webui/commit/cd740f436db4ea308dbede14ef7ff56e8126f51b), [Commit](https://github.com/open-webui/open-webui/commit/5c2db102d06b5c18beb248d795682ff422e9b6d1), [Commit](https://github.com/open-webui/open-webui/commit/031cf38655a1a2973194d2eaa0fbbd17aca8ee92), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/3ed0a6d11fea1a054e0bc8aa8dfbe417c7c53e51), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/eadec9e86e01bc8f9fb90dfe7a7ae4fc3bfa6420), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c03ca7270e64e3a002d321237160c0ddaf2bb129), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b53ddfbd19aa94e9cbf7210acb31c3cfafafa5fe), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c923461882fcde30ae297a95e91176c95b9b72e1)
- 🤖 AI models can now be mentioned in channels to automatically generate responses, enabling multi-model conversations where mentioned models participate directly in threaded discussions with full context awareness. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/4fe97d8794ee18e087790caab9e5d82886006145)
- 💬 The Channels feature now utilizes the modern rich text editor, including support for '/', '@', and '#' command suggestions. [Commit](https://github.com/open-webui/open-webui/commit/06c1426e14ac0dfaf723485dbbc9723a4d89aba9), [Commit](https://github.com/open-webui/open-webui/commit/02f7c3258b62970ce79716f75d15467a96565054)
- 📎 Channel message input now supports direct paste functionality for images and files from the clipboard, streamlining content sharing workflows. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/6549fc839f86c40c26c2ef4dedcaf763a9304418)
- ⚙️ Models can now be configured with default features (Web Search, Image Generation) and filters that automatically activate when a user selects the model. [Commit](https://github.com/open-webui/open-webui/commit/9a555478273355a5177bfc7f7211c64778e4c8de), [Commit](https://github.com/open-webui/open-webui/commit/384a53b339820068e92f7eaea0d9f3e0536c19c2), [Commit](https://github.com/open-webui/open-webui/commit/d7f43bfc1a30c065def8c50d77c2579c1a3c5c67), [Commit](https://github.com/open-webui/open-webui/commit/6a67a2217cc5946ad771e479e3a37ac213210748)
- 💬 The ability to reference other chats as context within a conversation was added via the attachment menu. [Commit](https://github.com/open-webui/open-webui/commit/e097bbdf11ae4975c622e086df00d054291cdeb3), [Commit](https://github.com/open-webui/open-webui/commit/f3cd2ffb18e7dedbe88430f9ae7caa6b3cfd79d0), [Commit](https://github.com/open-webui/open-webui/commit/74263c872c5d574a9bb0944d7984f748dc772dba), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/aa8ab349ed2fcb46d1cf994b9c0de2ec2ea35d0d), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/025eef754f0d46789981defd473d001e3b1d0ca2)
- 🎨 The command suggestion UI for prompts ('/'), models ('@'), and knowledge ('#') was completely overhauled with a more responsive and keyboard-navigable interface. [Commit](https://github.com/open-webui/open-webui/commit/6b69c4da0fb9329ccf7024483960e070cf52ccab), [Commit](https://github.com/open-webui/open-webui/commit/06a6855f844456eceaa4d410c93379460e208202), [Commit](https://github.com/open-webui/open-webui/commit/c55f5578280b936cf581a743df3703e3db1afd54), [Commit](https://github.com/open-webui/open-webui/commit/f68d1ba394d4423d369f827894cde99d760b2402)
- 👥 User and channel suggestions were added to the mention system, enabling '@' mentions for users and models, and '#' mentions for channels with searchable user lookup and clickable navigation. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/bbd1d2b58c89b35daea234f1fc9208f2af840899), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/aef1e06f0bb72065a25579c982dd49157e320268), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/779db74d7e9b7b00d099b7d65cfbc8a831e74690)
- 📁 Folder functionality was enhanced with custom background image support, improved drag-and-drop capabilities for moving folders to root level, and better menu interactions. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2a234829f5dfdfde27fdfd30591caa908340efb4), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2b1ee8b0dc5f7c0caaafdd218f20705059fa72e2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b1e5bc8e490745f701909c19b6a444b67c04660e), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/3e584132686372dfeef187596a7c557aa5f48308)
- ☁️ OneDrive integration configuration now supports selecting between personal and work/school account types via ENABLE_ONEDRIVE_PERSONAL and ENABLE_ONEDRIVE_BUSINESS environment variables. [#17354](https://github.com/open-webui/open-webui/pull/17354), [Commit](https://github.com/open-webui/open-webui/commit/e1e3009a30f9808ce06582d81a60e391f5ca09ec), [Docs:#697](https://github.com/open-webui/docs/pull/697)
- ⚡ Mermaid.js is now dynamically loaded on demand, significantly reducing first-screen loading time and improving initial page performance. [#17476](https://github.com/open-webui/open-webui/issues/17476), [#17477](https://github.com/open-webui/open-webui/pull/17477)
- ⚡ Azure MSAL browser library is now dynamically loaded on demand, reducing initial bundle size by 730KB and improving first-screen loading speed. [#17479](https://github.com/open-webui/open-webui/pull/17479)
- ⚡ CodeEditor component is now dynamically loaded on demand, reducing initial bundle size by 1MB and improving first-screen loading speed. [#17498](https://github.com/open-webui/open-webui/pull/17498)
- ⚡ Hugging Face Transformers library is now dynamically loaded on demand, reducing initial bundle size by 1.9MB and improving first-screen loading speed. [#17499](https://github.com/open-webui/open-webui/pull/17499)
- ⚡ jsPDF and html2canvas-pro libraries are now dynamically loaded on demand, reducing initial bundle size by 980KB and improving first-screen loading speed. [#17502](https://github.com/open-webui/open-webui/pull/17502)
- ⚡ Leaflet mapping library is now dynamically loaded on demand, reducing initial bundle size by 454KB and improving first-screen loading speed. [#17503](https://github.com/open-webui/open-webui/pull/17503)
- 📊 OpenTelemetry metrics collection was enhanced to properly handle HTTP 500 errors and ensure metrics are recorded even during exceptions. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b14617a653c6bdcfd3102c12f971924fd1faf572)
- 🔒 OAuth token retrieval logic was refactored, improving the reliability and consistency of authentication handling across the backend. [Commit](https://github.com/open-webui/open-webui/commit/6c0a5fa91cdbf6ffb74667ee61ca96bebfdfbc50)
- 💻 Code block output processing was improved to handle Python execution results more reliably, along with refined visual styling and button layouts. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/0e5320c39e308ff97f2ca9e289618af12479eb6e)
- ⚡ Message input processing was optimized to skip unnecessary text variable handling when input is empty, improving performance. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/e1386fe80b77126a12dabc4ad058abe9b024b275)
- 📄 Individual chat PDF export was added to the sidebar chat menu, allowing users to export single conversations as PDF documents with both stylized and plain text options. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/d041d58bb619689cd04a391b4f8191b23941ca62)
- 🛠️ Function validation was enhanced with improved valve validation and better error handling during function loading and synchronization. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/e66e0526ed6a116323285f79f44237538b6c75e6), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/8edfd29102e0a61777b23d3575eaa30be37b59a5)
- 🔔 Notification toast interaction was enhanced with drag detection to prevent accidental clicks and added keyboard support for accessibility. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/621e7679c427b6f0efa85f95235319238bf171ad)
- 🗓️ Improved date and time formatting dynamically adapts to the selected language, ensuring consistent localization across the UI. [#17409](https://github.com/open-webui/open-webui/pull/17409), [Commit](https://github.com/open-webui/open-webui/commit/2227f24bd6d861b1fad8d2cabacf7d62ce137d0c)
- 🔒 Feishu SSO integration was added, allowing users to authenticate via Feishu. [#17284](https://github.com/open-webui/open-webui/pull/17284), [Docs:#685](https://github.com/open-webui/docs/pull/685)
- 🔠 Toggle filters in the chat input options menu are now sorted alphabetically for easier navigation. [Commit](https://github.com/open-webui/open-webui/commit/ca853ca4656180487afcd84230d214f91db52533)
- 🎨 Long chat titles in the sidebar are now truncated to prevent text overflow and maintain a clean layout. [#17356](https://github.com/open-webui/open-webui/pull/17356)
- 🎨 Temporary chat interface design was refined with improved layout and visual consistency. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/67549dcadd670285d491bd41daf3d081a70fd094), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2ca34217e68f3b439899c75881dfb050f49c9eb2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/fb02ec52a5df3f58b53db4ab3a995c15f83503cd)
- 🎨 Download icon consistency was improved across the entire interface by standardizing the icon component used in menus, functions, tools, and export features. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/596be451ece7e11b5cd25465d49670c27a1cb33f)
- 🎨 Settings interface was enhanced with improved iconography, and the 'Chats' section was reorganized into 'Data Controls' for better clarity. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/8bf0b40fdd978b5af6548a6e1fb3aabd90bcd5cd)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Finnish, German, Kabyle, Portuguese (Brazil), Simplified Chinese, Spanish (Spain), and Traditional Chinese (Taiwan) were enhanced and expanded.

### Fixed

- 📚 Knowledge base permission logic was corrected to ensure private collection owners can access their own content when embedding bypass is enabled. [#17432](https://github.com/open-webui/open-webui/issues/17432), [Commit](https://github.com/open-webui/open-webui/commit/a51f0c30ec1472d71487eab3e15d0351a2716b12)
- ⚙️ Connection URL editing in Admin Settings now properly saves changes instead of reverting to original values, fixing issues with both Ollama and OpenAI-compatible endpoints. [#17435](https://github.com/open-webui/open-webui/issues/17435), [Commit](https://github.com/open-webui/open-webui/commit/e4c864de7eb0d577843a80688677ce3659d1f81f)
- 📊 Usage information collection from Google models was corrected to handle providers that send usage data alongside content chunks instead of separately. [#17421](https://github.com/open-webui/open-webui/pull/17421), [Commit](https://github.com/open-webui/open-webui/commit/c2f98a4cd29ed738f395fef09c42ab8e73cd46a0)
- ⚙️ Settings modal scrolling issue was resolved by moving image compression controls to a dedicated modal, preventing the main settings from becoming scrollable out of view. [#17474](https://github.com/open-webui/open-webui/issues/17474), [Commit](https://github.com/open-webui/open-webui/commit/fed5615c19b0045a55b0be426b468a57bfda4b66)
- 📁 Folder click behavior was improved to prevent accidental actions by implementing proper double-click detection and timing delays for folder expansion and selection. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/19e3214997170eea6ee92452e8c778e04a28e396)
- 🔐 Access control component reliability was improved with better null checking and error handling for group permissions and private access scenarios. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c8780a7f934c5e49a21b438f2f30232f83cf75d2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/32015c392dbc6b7367a6a91d9e173e675ea3402c)
- 🔗 The citation modal now correctly displays and links to external web page sources in addition to internal documents. [Commit](https://github.com/open-webui/open-webui/commit/9208a84185a7e59524f00a7576667d493c3ac7d4)
- 🔗 Web and YouTube attachment handling was fixed, ensuring their content is now reliably processed and included in the chat context for retrieval. [Commit](https://github.com/open-webui/open-webui/commit/210197fd438b52080cda5d6ce3d47b92cdc264c8)
- 📂 Large file upload failures are resolved by correcting the processing logic for scenarios where document embedding is bypassed. [Commit](https://github.com/open-webui/open-webui/commit/051b6daa8299fd332503bd584563556e2ae6adab)
- 🌐 Rich text input placeholder text now correctly updates when the interface language is switched, ensuring proper localization. [#17473](https://github.com/open-webui/open-webui/pull/17473), [Commit](https://github.com/open-webui/open-webui/commit/77358031f5077e6efe5cc08d8d4e5831c7cd1cd9)
- 📊 Llama.cpp server timing metrics are now correctly parsed and displayed by fixing a typo in the response handling. [#17350](https://github.com/open-webui/open-webui/issues/17350), [Commit](https://github.com/open-webui/open-webui/commit/cf72f5503f39834b9da44ebbb426a3674dad0caa)
- 🛠️ Filter functions with file_handler configuration now properly handle messages without file attachments, preventing runtime errors. [#17423](https://github.com/open-webui/open-webui/pull/17423)
- 🔔 Channel notification delivery was fixed to properly handle background task execution and user access checking. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/1077b2ac8b96e49c2ad2620e76eb65bbb2a3a1f3)

### Changed

- 📝 Prompt template variables are now optional by default instead of being forced as required, allowing flexible workflows with optional metadata fields. [#17447](https://github.com/open-webui/open-webui/issues/17447), [Commit](https://github.com/open-webui/open-webui/commit/d5824b1b495fcf86e57171769bcec2a0f698b070), [Docs:#696](https://github.com/open-webui/docs/pull/696)
- 🛠️ Direct external tool servers now require explicit user selection from the input interface instead of being automatically included in conversations, providing better control over tool usage. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/0f04227c34ca32746c43a9323e2df32299fcb6af), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/99bba12de279dd55c55ded35b2e4f819af1c9ab5)
- 📺 Widescreen mode option was removed from Channels interface, with all channel layouts now using full-width display. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/d46b7b8f1b99a8054b55031fe935c8a16d5ec956)
- 🎛️ The plain textarea input option was deprecated, and the custom text editor is now the standard for all chat inputs. [Commit](https://github.com/open-webui/open-webui/commit/153afd832ccd12a1e5fd99b085008d080872c161)

## [0.6.28] - 2025-09-10

### Added

- 🔍 The "@" command for model selection now supports real-time search and filtering, improving usability and aligning its behavior with other input commands. [#17307](https://github.com/open-webui/open-webui/issues/17307), [Commit](https://github.com/open-webui/open-webui/commit/f2a09c71499489ee71599af4a179e7518aaf658b)
- 🛠️ External tool server data handling is now more robust, automatically attempting to parse specifications as JSON before falling back to YAML, regardless of the URL extension. [Commit](https://github.com/open-webui/open-webui/commit/774c0056bde88ed4831422efa81506488e3d6641)
- 🎯 The "Title" field is now automatically focused when creating a new chat folder, streamlining the folder creation process. [#17315](https://github.com/open-webui/open-webui/issues/17315), [Commit](https://github.com/open-webui/open-webui/commit/c51a651a2d5e2a27546416666812e9b92205562d)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Brazilian Portuguese and Simplified Chinese translations were expanded and refined.

### Fixed

- 🔊 A regression affecting Text-to-Speech for local providers using the OpenAI engine was fixed by reverting a URL joining change. [#17316](https://github.com/open-webui/open-webui/issues/17316), [Commit](https://github.com/open-webui/open-webui/commit/8339f59cdfc63f2d58c8e26933d1bf1438479d75)
- 🪧 A regression was fixed where the input modal for prompts with placeholders would not open, causing the raw prompt text to be pasted into the chat input field instead. [#17325](https://github.com/open-webui/open-webui/issues/17325), [Commit](https://github.com/open-webui/open-webui/commit/d5cb65527eaa4831459a4c7dbf187daa9c0525ae)
- 🔑 An issue was resolved where modified connection keys in the OpenAIConnection component did not take effect. [#17324](https://github.com/open-webui/open-webui/pull/17324)

## [0.6.27] - 2025-09-09

### Added

- 📁 Emoji folder icons were added, allowing users to personalize workspace organization with visual cues, including improved chevron display. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/1588f42fe777ad5d807e3f2fc8dbbc47a8db87c0), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/b70c0f36c0f5bbfc2a767429984d6fba1a7bb26c), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/11dea8795bfce42aa5d8d58ef316ded05173bd87), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/c0a47169fa059154d5f5a9ea6b94f9a66d82f255)
- 📁 The 'Search Collection' input field now dynamically displays the total number of files within the knowledge base. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/fbbe1117ae4c9c8fec6499d790eee275818eccc5)
- ☁️ A provider toggle in connection settings now allows users to manually specify Azure OpenAI deployments. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/5bdd334b74fbd154085f2d590f4afdba32469c8a)
- ⚡ Model list caching performance was optimized by fixing cache key generation to reduce redundant API calls. [#17158](https://github.com/open-webui/open-webui/pull/17158)
- 🎨 Azure OpenAI image generation is now supported, with configurations for IMAGES_OPENAI_API_VERSION via environment variable and admin UI. [#17147](https://github.com/open-webui/open-webui/pull/17147), [#16274](https://github.com/open-webui/open-webui/discussions/16274), [Docs:#679](https://github.com/open-webui/docs/pull/679)
- ⚡ Comprehensive N+1 query performance is optimized by reducing database queries from 1+N to 1+1 patterns across major listing endpoints. [#17165](https://github.com/open-webui/open-webui/pull/17165), [#17160](https://github.com/open-webui/open-webui/pull/17160), [#17161](https://github.com/open-webui/open-webui/pull/17161), [#17162](https://github.com/open-webui/open-webui/pull/17162), [#17159](https://github.com/open-webui/open-webui/pull/17159), [#17166](https://github.com/open-webui/open-webui/pull/17166)
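  The general shape of such a fix, for illustration only (the table and column names below are hypothetical, not Open WebUI's actual schema): fetch the listing with one query, then fetch all related rows with a single batched query instead of one lookup per item.

  ```python
  def list_chats_with_users(db):
      # 1 query for the listing itself...
      chats = db.execute("SELECT id, user_id, title FROM chat").fetchall()

      # ...plus 1 batched query for the related rows, instead of one per chat.
      user_ids = tuple({c.user_id for c in chats})
      users = db.execute(
          "SELECT id, name FROM user WHERE id IN :ids", {"ids": user_ids}
      ).fetchall()

      by_id = {u.id: u for u in users}
      return [(chat, by_id.get(chat.user_id)) for chat in chats]
  ```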
- ⚡ The PDF.js library is now dynamically loaded, significantly reducing initial page load size and improving responsiveness. [#17222](https://github.com/open-webui/open-webui/pull/17222)
- ⚡ The heic2any library is now dynamically loaded across various message input components, including channels, for faster page loads. [#17225](https://github.com/open-webui/open-webui/pull/17225), [#17229](https://github.com/open-webui/open-webui/pull/17229)
- 📚 The knowledge API now supports a "delete_file" query parameter, allowing configurable file deletion behavior. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/22c4ef4fb096498066b73befe993ae3a82f7a8e7)
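  Usage might look roughly like the following (the endpoint path and payload are assumptions for illustration, not a documented API reference):

  ```bash
  # Remove the file from the knowledge base but keep the underlying file record:
  curl -X POST "http://localhost:3000/api/v1/knowledge/<knowledge-id>/file/remove?delete_file=false" \
    -H "Authorization: Bearer $OPEN_WEBUI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"file_id": "<file-id>"}'
  ```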
- 📊 Llama.cpp timing statistics are now integrated into the usage field for comprehensive model performance metrics. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/e830b4959ecd4b2795e29e53026984a58a7696a9)
- 🗄️ The PGVECTOR_CREATE_EXTENSION environment variable now allows control over automatic pgvector extension creation. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/c2b4976c82d335ed524bd80dc914b5e2f5bfbd9e), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/b45219c8b15b48d5ee3d42983e1107bbcefbab01), [Docs:#672](https://github.com/open-webui/docs/pull/672)
- 🔒 Comprehensive server-side OAuth token management was implemented: encrypted tokens are stored in a new database table and refreshed automatically, enabling seamless and secure forwarding of valid user-specific OAuth tokens to downstream services, including OpenAI-compatible endpoints and external tool servers, via the new "system_oauth" authentication type. This resolves long-standing issues such as large token size limitations, stale or expired tokens, and unreliable token propagation, and enhances overall security by minimizing client-side token exposure. Configurable via the "ENABLE_OAUTH_ID_TOKEN_COOKIE" and "OAUTH_SESSION_TOKEN_ENCRYPTION_KEY" environment variables. [Docs:#683](https://github.com/open-webui/docs/pull/683), [#17210](https://github.com/open-webui/open-webui/pull/17210), [#8957](https://github.com/open-webui/open-webui/discussions/8957), [#11029](https://github.com/open-webui/open-webui/discussions/11029), [#17178](https://github.com/open-webui/open-webui/issues/17178), [#17183](https://github.com/open-webui/open-webui/issues/17183), [Commit](https://github.com/open-webui/open-webui/commit/217f4daef09b36d3d4cc4681e11d3ebd9984a1a5), [Commit](https://github.com/open-webui/open-webui/commit/fc11e4384fe98fac659e10596f67c23483578867), [Commit](https://github.com/open-webui/open-webui/commit/f11bdc6ab5dd5682bb3e27166e77581f5b8af3e0), [Commit](https://github.com/open-webui/open-webui/commit/f71834720e623761d972d4d740e9bbd90a3a86c6), [Commit](https://github.com/open-webui/open-webui/commit/b5bb6ae177dcdc4e8274d7e5ffa50bc8099fd466), [Commit](https://github.com/open-webui/open-webui/commit/b786d1e3f3308ef4f0f95d7130ddbcaaca4fc927), [Commit](https://github.com/open-webui/open-webui/commit/8a9f8627017bd0a74cbd647891552b26e56aabb7), [Commit](https://github.com/open-webui/open-webui/commit/30d1dc2c60e303756120fe1c5538968c4e6139f4), [Commit](https://github.com/open-webui/open-webui/commit/2b2d123531eb3f42c0e940593832a64e2806240d), [Commit](https://github.com/open-webui/open-webui/commit/6f6412dd16c63c2bb4df79a96b814bf69cb3f880)
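  A configuration sketch using the variable names from this entry (values are illustrative; consult the linked docs for exact semantics, and generate your own encryption key):

  ```bash
  ENABLE_OAUTH_ID_TOKEN_COOKIE="false"
  OAUTH_SESSION_TOKEN_ENCRYPTION_KEY="$(openssl rand -base64 32)"
  ```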
- 🔒 Conditional Permission Hardening for OpenShift Deployments: Added a build argument to enable optional permission hardening for OpenShift and container environments. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/0ebe4f8f8490451ac8e85a4846f010854d9b54e5)
- 👥 Regex pattern support is added for OAuth blocked groups, allowing more flexible group filtering rules. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/df66e21472646648d008ebb22b0e8d5424d491df)
- 💬 Web search result display was enhanced to include titles and favicons, providing a clearer overview of search sources. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/33f04a771455e3fabf8f0e8ebb994ae7f41b8ed4), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/0a85dd4bca23022729eafdbc82c8c139fa365af2), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/16090bc2721fde492afa2c4af5927e2b668527e1), [#17197](https://github.com/open-webui/open-webui/pull/17197), [#14179](https://github.com/open-webui/open-webui/issues/14179), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/1cdb7aed1ee9bf81f2fd0404be52dcfa64f8ed4f), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/f2525ebc447c008cf7269ef20ce04fa456f302c4), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/7f523de408ede4075349d8de71ae0214b7e1a62e), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/3d37e4a42d344051ae715ab59bd7b5718e46c343), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/cd5e2be27b613314aadda6107089331783987985), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/6dc0df247347aede2762fe2065cf30275fd137ae)
- 💬 A new setting was added to control whether clicking a suggested prompt automatically sends the message or only inserts the text. [#17192](https://github.com/open-webui/open-webui/issues/17192), [Commit](https://github.com/open-webui/open-webui/commit/e023a98f11fc52feb21e4065ec707cc98e50c7d3)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Simplified Chinese, Catalan, and Spanish were enhanced and expanded.

### Fixed

- 🔍 Hybrid search functionality now correctly handles lexical-semantic weight labels and avoids errors when BM25 weight is zero. [#17049](https://github.com/open-webui/open-webui/pull/17049), [#17046](https://github.com/open-webui/open-webui/issues/17046)
- 🛑 Task stopping errors are prevented by gracefully handling multiple stop requests for the same task. [#17195](https://github.com/open-webui/open-webui/pull/17195)
- 🐍 Code execution package detection precision is improved in Pyodide to prevent unnecessary package inclusions. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/bbe116795860a81a647d9567e0d9cb1950650095)
- 🛠️ Tool message format API compliance is fixed by ensuring content fields in tool call responses contain valid string values instead of null. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/37bf0087e5b8a324009c9d06b304027df351ea6b)
- 📱 Mobile app config API authentication now supports Authorization header token verification with cookie fallback for iOS and Android requests. [#17175](https://github.com/open-webui/open-webui/pull/17175)
- 💾 Knowledge file save race conditions are prevented by serializing API calls and adding an "isSaving" guard. [#17137](https://github.com/open-webui/open-webui/pull/17137), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/4ca936f0bf9813bee11ec8aea41d7e34fb6b16a9)
- 🔐 The SSO login button visibility is restored for OIDC PKCE authentication without a client secret. [#17012](https://github.com/open-webui/open-webui/pull/17012)
- 🔊 Text-to-Speech (TTS) API requests now use proper URL joining methods, ensuring reliable functionality regardless of trailing slashes in the base URL. [#17061](https://github.com/open-webui/open-webui/pull/17061)
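  Why proper joining matters, in a quick standard-library sketch:

  ```python
  from urllib.parse import urljoin

  base = "http://localhost:8080/v1/"

  # Naive concatenation duplicates slashes depending on the configured base URL:
  print(base + "/audio/speech")         # http://localhost:8080/v1//audio/speech
  print(urljoin(base, "audio/speech"))  # http://localhost:8080/v1/audio/speech
  ```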
- 🛡️ Admin account creation on Hugging Face Spaces now correctly detects the configured port, resolving issues with custom port deployments. [#17064](https://github.com/open-webui/open-webui/pull/17064)
- 📁 Unicode filename support is improved for external document loaders by properly URL-encoding filenames in HTTP headers. [#17013](https://github.com/open-webui/open-webui/pull/17013), [#17000](https://github.com/open-webui/open-webui/issues/17000)
- 🔗 Web page and YouTube attachments are now correctly processed by setting their type as "text" and using collection names for accurate content retrieval. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/487979859a6ffcfd60468f523822cdf838fbef5b)
- ✍️ Message input composition event handling is fixed to properly manage text input for multilingual users using Input Method Editors (IME). [#17085](https://github.com/open-webui/open-webui/pull/17085)
- 💬 Follow-up tooltip duplication is removed, streamlining the user interface and preventing visual clutter. [#17186](https://github.com/open-webui/open-webui/pull/17186)
- 🎨 Chat button text display is corrected by preventing clipping of descending characters and removing unnecessary capitalization. [#17191](https://github.com/open-webui/open-webui/pull/17191)
- 🧠 RAG Loop/Error with Gemma 3.1 2B Instruct is fixed by correctly unwrapping unexpected single-item list responses from models. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/1bc9711afd2b72cd07c4e539a83783868733767c), [#17213](https://github.com/open-webui/open-webui/issues/17213)
- 🖼️ HEIC conversion failures are resolved, improving robustness of image handling. [#17225](https://github.com/open-webui/open-webui/pull/17225)
- 📦 The slim Docker image size regression has been fixed by refining the build process to correctly exclude components when USE_SLIM=true. [#16997](https://github.com/open-webui/open-webui/issues/16997), [Commit](https://github.com/open-webui/open-webui/commit/be373e9fd42ac73b0302bdb487e16dbeae178b4e), [Commit](https://github.com/open-webui/open-webui/commit/0ebe4f8f8490451ac8e85a4846f010854d9b54e5)
- 📁 Knowledge base update validation errors are resolved, ensuring seamless management via UI or API. [#17244](https://github.com/open-webui/open-webui/issues/17244), [Commit](https://github.com/open-webui/open-webui/commit/9aac1489080a5c9441e89b1a56de0d3a672bc5fb)
- 🔐 Resolved a security issue where a global web search setting overrode model-specific restrictions, ensuring model-level settings are now correctly prioritized. [#17151](https://github.com/open-webui/open-webui/issues/17151), [Commit](https://github.com/open-webui/open-webui/commit/9368d0ac751ec3072d5a96712b80a9b20a642ce6)
- 🔐 OAuth redirect reliability is improved by robustly preserving the intended redirect path using session storage. [#17235](https://github.com/open-webui/open-webui/issues/17235), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/4f2b821088367da18374027919594365c7a3f459), [#15575](https://github.com/open-webui/open-webui/pull/15575), [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/d9f97c832c556fae4b116759da0177bf4fe619de)
- 🔐 Fixed a security vulnerability where knowledge base access within chat folders persisted after permissions were revoked. [#17182](https://github.com/open-webui/open-webui/issues/17182), [Commit](https://github.com/open-webui/open-webui/commit/40e40d1dddf9ca937e99af41c8ca038dbc93a7e6)
- 🔒 OIDC access denied errors are now displayed as user-friendly toast notifications instead of raw JSON. [#17208](https://github.com/open-webui/open-webui/issues/17208), [Commit](https://github.com/open-webui/open-webui/commit/3d6d050ad82d360adc42d6e9f42e8faf8d13c9f4)
- 💬 Chat exception handling is enhanced to prevent system instability during message generation and ensure graceful error recovery. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/f56889c5c7f0cf1a501c05d35dfa614e4f8b6958)
|
||||
- 🔒 Static asset authentication is improved by adding crossorigin="use-credentials" attributes to all link elements, enabling proper cookie forwarding for proxy environments and authenticated requests to favicon, manifest, and stylesheet resources. [#17280](https://github.com/open-webui/open-webui/pull/17280), [Commit](https://github.com/open-webui/open-webui/commit/f17d8b5d19e1a05df7d63f53e939c99772a59c1e)
|
||||
|
||||
### Changed
|
||||
|
||||
- 🛠️ Renamed "Tools" to "External Tools" across the UI for clearer distinction between built-in and external functionalities. [Commit](https://github.com/open-webui/open-webui/pull/17070/commits/0bca4e230ef276bec468889e3be036242ad11086f)
|
||||
- 🛡️ Default permission validation for message regeneration and deletion actions is enhanced to provide more restrictive access controls, improving chat security and user data protection. [#17285](https://github.com/open-webui/open-webui/pull/17285)
|
||||
|
||||
## [0.6.26] - 2025-08-28

### Added

- 🛂 **Granular Chat Interaction Permissions**: Added fine-grained permission controls for individual chat actions including "Continue Response", "Regenerate Response", "Rate Response", and "Delete Messages". Administrators can now configure these permissions per user group or set system defaults via environment variables, providing enhanced security and governance by preventing potential system prompt leakage through response continuation and enabling precise control over user interactions with AI responses.

- 🧠 **Custom Reasoning Tags Configuration**: Added configurable reasoning tag detection for AI model responses, allowing administrators and users to customize how the system identifies and processes reasoning content. Users can now define custom reasoning tag pairs, use default tags like "think" and "reasoning", or disable reasoning detection entirely through the Advanced Parameters interface, providing enhanced control over AI thought process visibility.

- 📱 **Pull-to-Refresh Support**: Added pull-to-refresh functionality allowing users to easily refresh the interface by pulling down on the navbar area. This resolves timeout issues that occurred when temporarily switching away from the app during long AI response generations, eliminating the need to close and relaunch the PWA.

- 📁 **Configurable File Upload Processing Mode**: Added "process_in_background" query parameter to the file upload API endpoint, allowing clients to choose between asynchronous (default) and synchronous file processing. Setting "process_in_background=false" forces the upload request to wait until extraction and embedding complete, returning immediately usable files and simplifying integration for backend API consumers that prefer blocking calls over polling workflows (see the sketch after this list).

- 🔐 **Azure Document Intelligence DefaultAzureCredential Support**: Added support for authenticating with Azure Document Intelligence using DefaultAzureCredential in addition to API key authentication, enabling seamless integration with Azure Entra ID and managed identity authentication for enterprise Azure environments.

- 🔐 **Authentication Bootstrapping Enhancements**: Added "ENABLE_INITIAL_ADMIN_SIGNUP" environment variable and "?form=true" URL parameter to enable initial admin user creation and forced login form display in SSO-only deployments. This resolves bootstrap issues where administrators couldn't create the first user when login forms were disabled, allowing proper initialization of SSO-configured deployments without requiring temporary configuration changes.
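A minimal sketch of the synchronous upload mode described above, assuming a local deployment, a bearer API key, and the standard file upload endpoint path (`/api/v1/files/` — verify against your deployment's API docs):

```python
import requests

BASE_URL = "http://localhost:3000"  # assumed local instance
API_KEY = "sk-..."                  # placeholder API key

# process_in_background=false makes the request block until extraction
# and embedding finish, so the returned file record is immediately
# usable in follow-up calls (no status polling needed).
with open("report.pdf", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/api/v1/files/",
        params={"process_in_background": "false"},
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": f},
    )
resp.raise_for_status()
print(resp.json())
```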
20
Dockerfile

@@ -4,6 +4,7 @@
ARG USE_CUDA=false
ARG USE_OLLAMA=false
ARG USE_SLIM=false
ARG USE_PERMISSION_HARDENING=false
# Tested with cu117 for CUDA 11 and cu121 for CUDA 12 (default)
ARG USE_CUDA_VER=cu128
# any sentence transformer model; models to use can be found at https://huggingface.co/models?library=sentence-transformers

@@ -25,6 +26,9 @@ ARG GID=0
FROM --platform=$BUILDPLATFORM node:22-alpine3.20 AS build
ARG BUILD_HASH

# Set Node.js options (heap limit Allocation failed - JavaScript heap out of memory)
# ENV NODE_OPTIONS="--max-old-space-size=4096"

WORKDIR /app

# to store git revision in build

@@ -45,6 +49,7 @@ ARG USE_CUDA
ARG USE_OLLAMA
ARG USE_CUDA_VER
ARG USE_SLIM
ARG USE_PERMISSION_HARDENING
ARG USE_EMBEDDING_MODEL
ARG USE_RERANKING_MODEL
ARG UID

@@ -123,7 +128,6 @@ RUN apt-get update && \
COPY --chown=$UID:$GID ./backend/requirements.txt ./requirements.txt

RUN pip3 install --no-cache-dir uv && \
    if [ "$USE_SLIM" != "true" ]; then \
    if [ "$USE_CUDA" = "true" ]; then \
        # If you use CUDA the whisper and embedding model will be downloaded on first use
        pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/$USE_CUDA_DOCKER_VER --no-cache-dir && \

@@ -134,17 +138,17 @@ RUN pip3 install --no-cache-dir uv && \
    else \
        pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu --no-cache-dir && \
        uv pip install --system -r requirements.txt --no-cache-dir && \
    if [ "$USE_SLIM" != "true" ]; then \
        python -c "import os; from sentence_transformers import SentenceTransformer; SentenceTransformer(os.environ['RAG_EMBEDDING_MODEL'], device='cpu')" && \
        python -c "import os; from faster_whisper import WhisperModel; WhisperModel(os.environ['WHISPER_MODEL'], device='cpu', compute_type='int8', download_root=os.environ['WHISPER_MODEL_DIR'])"; \
        python -c "import os; import tiktoken; tiktoken.get_encoding(os.environ['TIKTOKEN_ENCODING_NAME'])"; \
    fi; \
    else \
        uv pip install --system -r requirements.txt --no-cache-dir; \
    fi; \
    mkdir -p /app/backend/data && chown -R $UID:$GID /app/backend/data/
    mkdir -p /app/backend/data && chown -R $UID:$GID /app/backend/data/ && \
    rm -rf /var/lib/apt/lists/*;

# Install Ollama if requested
RUN if [ "$USE_OLLAMA" = "true" ] && [ "$USE_SLIM" != "true" ]; then \
RUN if [ "$USE_OLLAMA" = "true" ]; then \
    date +%s > /tmp/ollama_build_hash && \
    echo "Cache broken at timestamp: `cat /tmp/ollama_build_hash`" && \
    curl -fsSL https://ollama.com/install.sh | sh && \

@@ -170,11 +174,13 @@ HEALTHCHECK CMD curl --silent --fail http://localhost:${PORT:-8080}/health | jq
# Minimal, atomic permission hardening for OpenShift (arbitrary UID):
# - Group 0 owns /app and /root
# - Directories are group-writable and have SGID so new files inherit GID 0
RUN set -eux; \
RUN if [ "$USE_PERMISSION_HARDENING" = "true" ]; then \
    set -eux; \
    chgrp -R 0 /app /root || true; \
    chmod -R g+rwX /app /root || true; \
    find /app -type d -exec chmod g+s {} + || true; \
    find /root -type d -exec chmod g+s {} + || true
    find /root -type d -exec chmod g+s {} + || true; \
    fi

USER $UID:$GID
11
LICENSE_NOTICE
Normal file

@@ -0,0 +1,11 @@
# Open WebUI Multi-License Notice

This repository contains code governed by multiple licenses based on the date and origin of contribution:

1. All code committed prior to commit a76068d69cd59568b920dfab85dc573dbbb8f131 is licensed under the MIT License (see LICENSE_HISTORY).

2. All code committed from commit a76068d69cd59568b920dfab85dc573dbbb8f131 up to and including commit 60d84a3aae9802339705826e9095e272e3c83623 is licensed under the BSD 3-Clause License (see LICENSE_HISTORY).

3. All code contributed or modified after commit 60d84a3aae9802339705826e9095e272e3c83623 is licensed under the Open WebUI License (see LICENSE).

For details on which commits are covered by which license, refer to LICENSE_HISTORY.
41
README.md

@@ -17,7 +17,7 @@ Passionate about open-source AI? [Join our team →](https://careers.openwebui.c

> [!TIP]
> **Looking for an [Enterprise Plan](https://docs.openwebui.com/enterprise)?** – **[Speak with Our Sales Team Today!](mailto:sales@openwebui.com)**
> **Looking for an [Enterprise Plan](https://docs.openwebui.com/enterprise)?** – **[Speak with Our Sales Team Today!](https://docs.openwebui.com/enterprise)**
>
> Get **enhanced capabilities**, including **custom theming and branding**, **Service Level Agreement (SLA) support**, **Long-Term Support (LTS) versions**, and **more!**

@@ -65,43 +65,6 @@ For more information, be sure to check out our [Open WebUI Documentation](https:

Want to learn more about Open WebUI's features? Check out our [Open WebUI documentation](https://docs.openwebui.com/features) for a comprehensive overview!

## Sponsors 🙌

#### Emerald

<table>
  <!-- <tr>
    <td>
      <a href="https://n8n.io/" target="_blank">
        <img src="https://docs.openwebui.com/sponsors/logos/n8n.png" alt="n8n" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
      </a>
    </td>
    <td>
      <a href="https://n8n.io/">n8n</a> • Does your interface have a backend yet?<br>Try <a href="https://n8n.io/">n8n</a>
    </td>
  </tr> -->
  <tr>
    <td>
      <a href="https://tailscale.com/blog/self-host-a-local-ai-stack/?utm_source=OpenWebUI&utm_medium=paid-ad-placement&utm_campaign=OpenWebUI-Docs" target="_blank">
        <img src="https://docs.openwebui.com/sponsors/logos/tailscale.png" alt="Tailscale" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
      </a>
    </td>
    <td>
      <a href="https://tailscale.com/blog/self-host-a-local-ai-stack/?utm_source=OpenWebUI&utm_medium=paid-ad-placement&utm_campaign=OpenWebUI-Docs">Tailscale</a> • Connect self-hosted AI to any device with Tailscale
    </td>
  </tr>
  <tr>
    <td>
      <a href="https://warp.dev/open-webui" target="_blank">
        <img src="https://docs.openwebui.com/sponsors/logos/warp.png" alt="Warp" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
      </a>
    </td>
    <td>
      <a href="https://warp.dev/open-webui">Warp</a> • The intelligent terminal for developers
    </td>
  </tr>
</table>

---

We are incredibly grateful for the generous support of our sponsors. Their contributions help us to maintain and improve our project, ensuring we can continue to deliver quality work to our community. Thank you!

@@ -248,7 +211,7 @@ Discover upcoming features on our roadmap in the [Open WebUI Documentation](http

## License 📜

This project is licensed under the [Open WebUI License](LICENSE), a revised BSD-3-Clause license. You receive all the same rights as the classic BSD-3 license: you can use, modify, and distribute the software, including in proprietary and commercial products, with minimal restrictions. The only additional requirement is to preserve the "Open WebUI" branding, as detailed in the LICENSE file. For full terms, see the [LICENSE](LICENSE) document. 📄
This project contains code under multiple licenses. The current codebase includes components licensed under the Open WebUI License with an additional requirement to preserve the "Open WebUI" branding, as well as prior contributions under their respective original licenses. For a detailed record of license changes and the applicable terms for each section of the code, please refer to [LICENSE_HISTORY](./LICENSE_HISTORY). For complete and updated licensing details, please see the [LICENSE](./LICENSE) and [LICENSE_HISTORY](./LICENSE_HISTORY) files.

## Support 💬

@@ -1,3 +1,3 @@
export CORS_ALLOW_ORIGIN="http://localhost:5173"
export CORS_ALLOW_ORIGIN="http://localhost:5173;http://localhost:8080"
PORT="${PORT:-8080}"
uvicorn open_webui.main:app --port $PORT --host 0.0.0.0 --forwarded-allow-ips '*' --reload
@@ -222,10 +222,11 @@ class PersistentConfig(Generic[T]):


class AppConfig:
    _state: dict[str, PersistentConfig]
    _redis: Union[redis.Redis, redis.cluster.RedisCluster] = None
    _redis_key_prefix: str

    _state: dict[str, PersistentConfig]

    def __init__(
        self,
        redis_url: Optional[str] = None,

@@ -233,9 +234,8 @@ class AppConfig:
        redis_cluster: Optional[bool] = False,
        redis_key_prefix: str = "open-webui",
    ):
        super().__setattr__("_state", {})
        super().__setattr__("_redis_key_prefix", redis_key_prefix)
        if redis_url:
            super().__setattr__("_redis_key_prefix", redis_key_prefix)
            super().__setattr__(
                "_redis",
                get_redis_connection(

@@ -246,6 +246,8 @@ class AppConfig:
                ),
            )

        super().__setattr__("_state", {})

    def __setattr__(self, key, value):
        if isinstance(value, PersistentConfig):
            self._state[key] = value

@@ -285,35 +287,46 @@ class AppConfig:
####################################
# WEBUI_AUTH (Required for security)
####################################

ENABLE_API_KEY = PersistentConfig(
    "ENABLE_API_KEY",
    "auth.api_key.enable",
    os.environ.get("ENABLE_API_KEY", "True").lower() == "true",
ENABLE_API_KEYS = PersistentConfig(
    "ENABLE_API_KEYS",
    "auth.enable_api_keys",
    os.environ.get("ENABLE_API_KEYS", "False").lower() == "true",
)

ENABLE_API_KEY_ENDPOINT_RESTRICTIONS = PersistentConfig(
    "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS",
ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS = PersistentConfig(
    "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS",
    "auth.api_key.endpoint_restrictions",
    os.environ.get("ENABLE_API_KEY_ENDPOINT_RESTRICTIONS", "False").lower() == "true",
    os.environ.get(
        "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS",
        os.environ.get("ENABLE_API_KEY_ENDPOINT_RESTRICTIONS", "False"),
    ).lower()
    == "true",
)

API_KEY_ALLOWED_ENDPOINTS = PersistentConfig(
    "API_KEY_ALLOWED_ENDPOINTS",
API_KEYS_ALLOWED_ENDPOINTS = PersistentConfig(
    "API_KEYS_ALLOWED_ENDPOINTS",
    "auth.api_key.allowed_endpoints",
    os.environ.get("API_KEY_ALLOWED_ENDPOINTS", ""),
    os.environ.get(
        "API_KEYS_ALLOWED_ENDPOINTS", os.environ.get("API_KEY_ALLOWED_ENDPOINTS", "")
    ),
)


JWT_EXPIRES_IN = PersistentConfig(
    "JWT_EXPIRES_IN", "auth.jwt_expiry", os.environ.get("JWT_EXPIRES_IN", "-1")
    "JWT_EXPIRES_IN", "auth.jwt_expiry", os.environ.get("JWT_EXPIRES_IN", "4w")
)

if JWT_EXPIRES_IN.value == "-1":
    log.warning(
        "⚠️ SECURITY WARNING: JWT_EXPIRES_IN is set to '-1'\n"
        "    See: https://docs.openwebui.com/getting-started/env-configuration\n"
    )

####################################
# OAuth config
####################################

ENABLE_OAUTH_PERSISTENT_CONFIG = (
    os.environ.get("ENABLE_OAUTH_PERSISTENT_CONFIG", "True").lower() == "true"
    os.environ.get("ENABLE_OAUTH_PERSISTENT_CONFIG", "False").lower() == "true"
)

ENABLE_OAUTH_SIGNUP = PersistentConfig(

@@ -513,6 +526,30 @@ OAUTH_GROUPS_CLAIM = PersistentConfig(
    os.environ.get("OAUTH_GROUPS_CLAIM", os.environ.get("OAUTH_GROUP_CLAIM", "groups")),
)

FEISHU_CLIENT_ID = PersistentConfig(
    "FEISHU_CLIENT_ID",
    "oauth.feishu.client_id",
    os.environ.get("FEISHU_CLIENT_ID", ""),
)

FEISHU_CLIENT_SECRET = PersistentConfig(
    "FEISHU_CLIENT_SECRET",
    "oauth.feishu.client_secret",
    os.environ.get("FEISHU_CLIENT_SECRET", ""),
)

FEISHU_OAUTH_SCOPE = PersistentConfig(
    "FEISHU_OAUTH_SCOPE",
    "oauth.feishu.scope",
    os.environ.get("FEISHU_OAUTH_SCOPE", "contact:user.base:readonly"),
)

FEISHU_REDIRECT_URI = PersistentConfig(
    "FEISHU_REDIRECT_URI",
    "oauth.feishu.redirect_uri",
    os.environ.get("FEISHU_REDIRECT_URI", ""),
)

ENABLE_OAUTH_ROLE_MANAGEMENT = PersistentConfig(
    "ENABLE_OAUTH_ROLE_MANAGEMENT",
    "oauth.enable_role_mapping",

@@ -538,25 +575,34 @@ OAUTH_BLOCKED_GROUPS = PersistentConfig(
    os.environ.get("OAUTH_BLOCKED_GROUPS", "[]"),
)

OAUTH_GROUPS_SEPARATOR = os.environ.get("OAUTH_GROUPS_SEPARATOR", ";")

OAUTH_ROLES_CLAIM = PersistentConfig(
    "OAUTH_ROLES_CLAIM",
    "oauth.roles_claim",
    os.environ.get("OAUTH_ROLES_CLAIM", "roles"),
)

SEP = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")

OAUTH_ALLOWED_ROLES = PersistentConfig(
    "OAUTH_ALLOWED_ROLES",
    "oauth.allowed_roles",
    [
        role.strip()
        for role in os.environ.get("OAUTH_ALLOWED_ROLES", "user,admin").split(",")
        for role in os.environ.get("OAUTH_ALLOWED_ROLES", f"user{SEP}admin").split(SEP)
        if role
    ],
)

OAUTH_ADMIN_ROLES = PersistentConfig(
    "OAUTH_ADMIN_ROLES",
    "oauth.admin_roles",
    [role.strip() for role in os.environ.get("OAUTH_ADMIN_ROLES", "admin").split(",")],
    [
        role.strip()
        for role in os.environ.get("OAUTH_ADMIN_ROLES", "admin").split(SEP)
        if role
    ],
)
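A quick illustration of the separator handling introduced above (a sketch; the parsing mirrors the diff, and the custom separator value is only an example):

```python
import os

# Default separator (","): role claims parse exactly as before.
os.environ["OAUTH_ADMIN_ROLES"] = "admin,owner"
SEP = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")
print([r.strip() for r in os.environ["OAUTH_ADMIN_ROLES"].split(SEP) if r])
# -> ['admin', 'owner']

# A custom separator lets role values themselves contain commas.
os.environ["OAUTH_ROLES_SEPARATOR"] = ";"
os.environ["OAUTH_ADMIN_ROLES"] = "admins, tier 1;owner"
SEP = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")
print([r.strip() for r in os.environ["OAUTH_ADMIN_ROLES"].split(SEP) if r])
# -> ['admins, tier 1', 'owner']
```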

OAUTH_ALLOWED_DOMAINS = PersistentConfig(

@@ -579,8 +625,8 @@ def load_oauth_providers():
    OAUTH_PROVIDERS.clear()
    if GOOGLE_CLIENT_ID.value and GOOGLE_CLIENT_SECRET.value:

        def google_oauth_register(client: OAuth):
            client.register(
        def google_oauth_register(oauth: OAuth):
            client = oauth.register(
                name="google",
                client_id=GOOGLE_CLIENT_ID.value,
                client_secret=GOOGLE_CLIENT_SECRET.value,

@@ -595,6 +641,7 @@ def load_oauth_providers():
                },
                redirect_uri=GOOGLE_REDIRECT_URI.value,
            )
            return client

        OAUTH_PROVIDERS["google"] = {
            "redirect_uri": GOOGLE_REDIRECT_URI.value,

@@ -607,8 +654,8 @@ def load_oauth_providers():
        and MICROSOFT_CLIENT_TENANT_ID.value
    ):

        def microsoft_oauth_register(client: OAuth):
            client.register(
        def microsoft_oauth_register(oauth: OAuth):
            client = oauth.register(
                name="microsoft",
                client_id=MICROSOFT_CLIENT_ID.value,
                client_secret=MICROSOFT_CLIENT_SECRET.value,

@@ -623,6 +670,7 @@ def load_oauth_providers():
                },
                redirect_uri=MICROSOFT_REDIRECT_URI.value,
            )
            return client

        OAUTH_PROVIDERS["microsoft"] = {
            "redirect_uri": MICROSOFT_REDIRECT_URI.value,

@@ -632,8 +680,8 @@ def load_oauth_providers():

    if GITHUB_CLIENT_ID.value and GITHUB_CLIENT_SECRET.value:

        def github_oauth_register(client: OAuth):
            client.register(
        def github_oauth_register(oauth: OAuth):
            client = oauth.register(
                name="github",
                client_id=GITHUB_CLIENT_ID.value,
                client_secret=GITHUB_CLIENT_SECRET.value,

@@ -651,6 +699,7 @@ def load_oauth_providers():
                },
                redirect_uri=GITHUB_CLIENT_REDIRECT_URI.value,
            )
            return client

        OAUTH_PROVIDERS["github"] = {
            "redirect_uri": GITHUB_CLIENT_REDIRECT_URI.value,

@@ -660,11 +709,11 @@ def load_oauth_providers():

    if (
        OAUTH_CLIENT_ID.value
        and OAUTH_CLIENT_SECRET.value
        and (OAUTH_CLIENT_SECRET.value or OAUTH_CODE_CHALLENGE_METHOD.value)
        and OPENID_PROVIDER_URL.value
    ):

        def oidc_oauth_register(client: OAuth):
        def oidc_oauth_register(oauth: OAuth):
            client_kwargs = {
                "scope": OAUTH_SCOPES.value,
                **(

@@ -690,7 +739,7 @@ def load_oauth_providers():
                % ("S256", OAUTH_CODE_CHALLENGE_METHOD.value)
            )

            client.register(
            client = oauth.register(
                name="oidc",
                client_id=OAUTH_CLIENT_ID.value,
                client_secret=OAUTH_CLIENT_SECRET.value,

@@ -698,6 +747,7 @@ def load_oauth_providers():
                client_kwargs=client_kwargs,
                redirect_uri=OPENID_REDIRECT_URI.value,
            )
            return client

        OAUTH_PROVIDERS["oidc"] = {
            "name": OAUTH_PROVIDER_NAME.value,

@@ -705,6 +755,34 @@ def load_oauth_providers():
            "register": oidc_oauth_register,
        }

    if FEISHU_CLIENT_ID.value and FEISHU_CLIENT_SECRET.value:

        def feishu_oauth_register(oauth: OAuth):
            client = oauth.register(
                name="feishu",
                client_id=FEISHU_CLIENT_ID.value,
                client_secret=FEISHU_CLIENT_SECRET.value,
                access_token_url="https://open.feishu.cn/open-apis/authen/v2/oauth/token",
                authorize_url="https://accounts.feishu.cn/open-apis/authen/v1/authorize",
                api_base_url="https://open.feishu.cn/open-apis",
                userinfo_endpoint="https://open.feishu.cn/open-apis/authen/v1/user_info",
                client_kwargs={
                    "scope": FEISHU_OAUTH_SCOPE.value,
                    **(
                        {"timeout": int(OAUTH_TIMEOUT.value)}
                        if OAUTH_TIMEOUT.value
                        else {}
                    ),
                },
                redirect_uri=FEISHU_REDIRECT_URI.value,
            )
            return client

        OAUTH_PROVIDERS["feishu"] = {
            "register": feishu_oauth_register,
            "sub_claim": "user_id",
        }

    configured_providers = []
    if GOOGLE_CLIENT_ID.value:
        configured_providers.append("Google")

@@ -712,6 +790,8 @@ def load_oauth_providers():
        configured_providers.append("Microsoft")
    if GITHUB_CLIENT_ID.value:
        configured_providers.append("GitHub")
    if FEISHU_CLIENT_ID.value:
        configured_providers.append("Feishu")

    if configured_providers and not OPENID_PROVIDER_URL.value:
        provider_list = ", ".join(configured_providers)

@@ -1049,6 +1129,7 @@ ENABLE_LOGIN_FORM = PersistentConfig(
    os.environ.get("ENABLE_LOGIN_FORM", "True").lower() == "true",
)

ENABLE_PASSWORD_AUTH = os.environ.get("ENABLE_PASSWORD_AUTH", "True").lower() == "true"

DEFAULT_LOCALE = PersistentConfig(
    "DEFAULT_LOCALE",

@@ -1060,6 +1141,12 @@ DEFAULT_MODELS = PersistentConfig(
    "DEFAULT_MODELS", "ui.default_models", os.environ.get("DEFAULT_MODELS", None)
)

DEFAULT_PINNED_MODELS = PersistentConfig(
    "DEFAULT_PINNED_MODELS",
    "ui.default_pinned_models",
    os.environ.get("DEFAULT_PINNED_MODELS", None),
)

try:
    default_prompt_suggestions = json.loads(
        os.environ.get("DEFAULT_PROMPT_SUGGESTIONS", "[]")

@@ -1155,6 +1242,34 @@ USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS", "False").lower() == "true"
)

USER_PERMISSIONS_WORKSPACE_MODELS_IMPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_MODELS_IMPORT", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_MODELS_EXPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_MODELS_EXPORT", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_PROMPTS_IMPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_PROMPTS_IMPORT", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_PROMPTS_EXPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_PROMPTS_EXPORT", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_TOOLS_IMPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_TOOLS_IMPORT", "False").lower() == "true"
)

USER_PERMISSIONS_WORKSPACE_TOOLS_EXPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_TOOLS_EXPORT", "False").lower() == "true"
)

USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING = (
    os.environ.get(
        "USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING", "False"

@@ -1162,6 +1277,11 @@ USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING = (
    == "true"
)

USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING = (
    os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING = (
    os.environ.get(
        "USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING", "False"

@@ -1286,6 +1406,10 @@ USER_PERMISSIONS_FEATURES_NOTES = (
    os.environ.get("USER_PERMISSIONS_FEATURES_NOTES", "True").lower() == "true"
)

USER_PERMISSIONS_FEATURES_API_KEYS = (
    os.environ.get("USER_PERMISSIONS_FEATURES_API_KEYS", "False").lower() == "true"
)


DEFAULT_USER_PERMISSIONS = {
    "workspace": {

@@ -1293,12 +1417,19 @@ DEFAULT_USER_PERMISSIONS = {
        "knowledge": USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS,
        "prompts": USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS,
        "tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS,
        "models_import": USER_PERMISSIONS_WORKSPACE_MODELS_IMPORT,
        "models_export": USER_PERMISSIONS_WORKSPACE_MODELS_EXPORT,
        "prompts_import": USER_PERMISSIONS_WORKSPACE_PROMPTS_IMPORT,
        "prompts_export": USER_PERMISSIONS_WORKSPACE_PROMPTS_EXPORT,
        "tools_import": USER_PERMISSIONS_WORKSPACE_TOOLS_IMPORT,
        "tools_export": USER_PERMISSIONS_WORKSPACE_TOOLS_EXPORT,
    },
    "sharing": {
        "public_models": USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING,
        "public_knowledge": USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING,
        "public_prompts": USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_PUBLIC_SHARING,
        "public_tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING,
        "public_notes": USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING,
    },
    "chat": {
        "controls": USER_PERMISSIONS_CHAT_CONTROLS,

@@ -1322,6 +1453,7 @@ DEFAULT_USER_PERMISSIONS = {
        "temporary_enforced": USER_PERMISSIONS_CHAT_TEMPORARY_ENFORCED,
    },
    "features": {
        "api_keys": USER_PERMISSIONS_FEATURES_API_KEYS,
        "direct_tool_servers": USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS,
        "web_search": USER_PERMISSIONS_FEATURES_WEB_SEARCH,
        "image_generation": USER_PERMISSIONS_FEATURES_IMAGE_GENERATION,

@@ -1741,6 +1873,38 @@ Output:
#### Output:
"""


VOICE_MODE_PROMPT_TEMPLATE = PersistentConfig(
    "VOICE_MODE_PROMPT_TEMPLATE",
    "task.voice.prompt_template",
    os.environ.get("VOICE_MODE_PROMPT_TEMPLATE", ""),
)

DEFAULT_VOICE_MODE_PROMPT_TEMPLATE = """You are a friendly, concise voice assistant.

Everything you say will be spoken aloud.
Keep responses short, clear, and natural.

STYLE:
- Use simple words and short sentences.
- Sound warm and conversational.
- Avoid long explanations, lists, or complex phrasing.

BEHAVIOR:
- Give the quickest helpful answer first.
- Offer extra detail only if needed.
- Ask for clarification only when necessary.

VOICE OPTIMIZATION:
- Break information into small, easy-to-hear chunks.
- Avoid dense wording or anything that sounds like reading text.

ERROR HANDLING:
- If unsure, say so briefly and offer options.
- If something is unsafe or impossible, decline kindly and suggest a safe alternative.

Stay consistent, helpful, and easy to listen to."""

TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE = PersistentConfig(
    "TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE",
    "task.tools.prompt_template",

@@ -1950,16 +2114,23 @@ if VECTOR_DB == "chroma":
# this uses the model defined in the Dockerfile ENV variable. If you dont use docker or docker based deployments such as k8s, the default embedding model will be used (sentence-transformers/all-MiniLM-L6-v2)

# Milvus

MILVUS_URI = os.environ.get("MILVUS_URI", f"{DATA_DIR}/vector_db/milvus.db")
MILVUS_DB = os.environ.get("MILVUS_DB", "default")
MILVUS_TOKEN = os.environ.get("MILVUS_TOKEN", None)

MILVUS_INDEX_TYPE = os.environ.get("MILVUS_INDEX_TYPE", "HNSW")
MILVUS_METRIC_TYPE = os.environ.get("MILVUS_METRIC_TYPE", "COSINE")
MILVUS_HNSW_M = int(os.environ.get("MILVUS_HNSW_M", "16"))
MILVUS_HNSW_EFCONSTRUCTION = int(os.environ.get("MILVUS_HNSW_EFCONSTRUCTION", "100"))
MILVUS_IVF_FLAT_NLIST = int(os.environ.get("MILVUS_IVF_FLAT_NLIST", "128"))
MILVUS_DISKANN_MAX_DEGREE = int(os.environ.get("MILVUS_DISKANN_MAX_DEGREE", "56"))
MILVUS_DISKANN_SEARCH_LIST_SIZE = int(
    os.environ.get("MILVUS_DISKANN_SEARCH_LIST_SIZE", "100")
)
ENABLE_MILVUS_MULTITENANCY_MODE = (
    os.environ.get("ENABLE_MILVUS_MULTITENANCY_MODE", "false").lower() == "true"
)
# Hyphens not allowed, need to use underscores in collection names
MILVUS_COLLECTION_PREFIX = os.environ.get("MILVUS_COLLECTION_PREFIX", "open_webui")
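For orientation, this is roughly how HNSW-style knobs such as these feed into a Milvus index definition (a sketch using the pymilvus index-params shape, not code taken from the PR):

```python
import os

# Environment-driven index parameters in the shape pymilvus expects.
index_params = {
    "index_type": os.environ.get("MILVUS_INDEX_TYPE", "HNSW"),
    "metric_type": os.environ.get("MILVUS_METRIC_TYPE", "COSINE"),
    "params": {
        "M": int(os.environ.get("MILVUS_HNSW_M", "16")),
        "efConstruction": int(os.environ.get("MILVUS_HNSW_EFCONSTRUCTION", "100")),
    },
}
# collection.create_index("vector", index_params)  # typical pymilvus call
print(index_params)
```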

# Qdrant
QDRANT_URI = os.environ.get("QDRANT_URI", None)

@@ -2004,6 +2175,19 @@ PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH = int(
    os.environ.get("PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH", "1536")
)

PGVECTOR_USE_HALFVEC = os.getenv("PGVECTOR_USE_HALFVEC", "false").lower() == "true"

if PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH > 2000 and not PGVECTOR_USE_HALFVEC:
    raise ValueError(
        "PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH is set to "
        f"{PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH}, which exceeds the 2000 dimension limit of the "
        "'vector' type. Set PGVECTOR_USE_HALFVEC=true to enable the 'halfvec' "
        "type required for high-dimensional embeddings."
    )

PGVECTOR_CREATE_EXTENSION = (
    os.getenv("PGVECTOR_CREATE_EXTENSION", "true").lower() == "true"
)
PGVECTOR_PGCRYPTO = os.getenv("PGVECTOR_PGCRYPTO", "false").lower() == "true"
PGVECTOR_PGCRYPTO_KEY = os.getenv("PGVECTOR_PGCRYPTO_KEY", None)
if PGVECTOR_PGCRYPTO and not PGVECTOR_PGCRYPTO_KEY:

@@ -2050,6 +2234,40 @@ else:
    except Exception:
        PGVECTOR_POOL_RECYCLE = 3600

PGVECTOR_INDEX_METHOD = os.getenv("PGVECTOR_INDEX_METHOD", "").strip().lower()
if PGVECTOR_INDEX_METHOD not in ("ivfflat", "hnsw", ""):
    PGVECTOR_INDEX_METHOD = ""

PGVECTOR_HNSW_M = os.environ.get("PGVECTOR_HNSW_M", 16)

if PGVECTOR_HNSW_M == "":
    PGVECTOR_HNSW_M = 16
else:
    try:
        PGVECTOR_HNSW_M = int(PGVECTOR_HNSW_M)
    except Exception:
        PGVECTOR_HNSW_M = 16

PGVECTOR_HNSW_EF_CONSTRUCTION = os.environ.get("PGVECTOR_HNSW_EF_CONSTRUCTION", 64)

if PGVECTOR_HNSW_EF_CONSTRUCTION == "":
    PGVECTOR_HNSW_EF_CONSTRUCTION = 64
else:
    try:
        PGVECTOR_HNSW_EF_CONSTRUCTION = int(PGVECTOR_HNSW_EF_CONSTRUCTION)
    except Exception:
        PGVECTOR_HNSW_EF_CONSTRUCTION = 64

PGVECTOR_IVFFLAT_LISTS = os.environ.get("PGVECTOR_IVFFLAT_LISTS", 100)

if PGVECTOR_IVFFLAT_LISTS == "":
    PGVECTOR_IVFFLAT_LISTS = 100
else:
    try:
        PGVECTOR_IVFFLAT_LISTS = int(PGVECTOR_IVFFLAT_LISTS)
    except Exception:
        PGVECTOR_IVFFLAT_LISTS = 100
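The three parse-with-fallback blocks above repeat one pattern; a fork could collapse them into a single helper like this (a sketch, not part of the PR):

```python
import os

def _int_env(name: str, default: int) -> int:
    """Read an integer from the environment; empty or invalid values fall back."""
    raw = os.environ.get(name, "")
    if raw == "":
        return default
    try:
        return int(raw)
    except (TypeError, ValueError):
        return default

# PGVECTOR_HNSW_M = _int_env("PGVECTOR_HNSW_M", 16)
# PGVECTOR_HNSW_EF_CONSTRUCTION = _int_env("PGVECTOR_HNSW_EF_CONSTRUCTION", 64)
# PGVECTOR_IVFFLAT_LISTS = _int_env("PGVECTOR_IVFFLAT_LISTS", 100)
```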

# Pinecone
PINECONE_API_KEY = os.environ.get("PINECONE_API_KEY", None)
PINECONE_ENVIRONMENT = os.environ.get("PINECONE_ENVIRONMENT", None)

@@ -2119,10 +2337,20 @@ ENABLE_ONEDRIVE_INTEGRATION = PersistentConfig(
    os.getenv("ENABLE_ONEDRIVE_INTEGRATION", "False").lower() == "true",
)

ONEDRIVE_CLIENT_ID = PersistentConfig(
    "ONEDRIVE_CLIENT_ID",
    "onedrive.client_id",
    os.environ.get("ONEDRIVE_CLIENT_ID", ""),
ENABLE_ONEDRIVE_PERSONAL = (
    os.environ.get("ENABLE_ONEDRIVE_PERSONAL", "True").lower() == "true"
)
ENABLE_ONEDRIVE_BUSINESS = (
    os.environ.get("ENABLE_ONEDRIVE_BUSINESS", "True").lower() == "true"
)

ONEDRIVE_CLIENT_ID = os.environ.get("ONEDRIVE_CLIENT_ID", "")
ONEDRIVE_CLIENT_ID_PERSONAL = os.environ.get(
    "ONEDRIVE_CLIENT_ID_PERSONAL", ONEDRIVE_CLIENT_ID
)
ONEDRIVE_CLIENT_ID_BUSINESS = os.environ.get(
    "ONEDRIVE_CLIENT_ID_BUSINESS", ONEDRIVE_CLIENT_ID
)

ONEDRIVE_SHAREPOINT_URL = PersistentConfig(

@@ -2211,6 +2439,36 @@ DATALAB_MARKER_OUTPUT_FORMAT = PersistentConfig(
    os.environ.get("DATALAB_MARKER_OUTPUT_FORMAT", "markdown"),
)

MINERU_API_MODE = PersistentConfig(
    "MINERU_API_MODE",
    "rag.mineru_api_mode",
    os.environ.get("MINERU_API_MODE", "local"),  # "local" or "cloud"
)

MINERU_API_URL = PersistentConfig(
    "MINERU_API_URL",
    "rag.mineru_api_url",
    os.environ.get("MINERU_API_URL", "http://localhost:8000"),
)

MINERU_API_KEY = PersistentConfig(
    "MINERU_API_KEY",
    "rag.mineru_api_key",
    os.environ.get("MINERU_API_KEY", ""),
)

mineru_params = os.getenv("MINERU_PARAMS", "")
try:
    mineru_params = json.loads(mineru_params)
except json.JSONDecodeError:
    mineru_params = {}

MINERU_PARAMS = PersistentConfig(
    "MINERU_PARAMS",
    "rag.mineru_params",
    mineru_params,
)

EXTERNAL_DOCUMENT_LOADER_URL = PersistentConfig(
    "EXTERNAL_DOCUMENT_LOADER_URL",
    "rag.external_document_loader_url",

@@ -2235,6 +2493,30 @@ DOCLING_SERVER_URL = PersistentConfig(
    os.getenv("DOCLING_SERVER_URL", "http://docling:5001"),
)

docling_params = os.getenv("DOCLING_PARAMS", "")
try:
    docling_params = json.loads(docling_params)
except json.JSONDecodeError:
    docling_params = {}

DOCLING_PARAMS = PersistentConfig(
    "DOCLING_PARAMS",
    "rag.docling_params",
    docling_params,
)
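DOCLING_PARAMS follows the same parse-JSON-or-fall-back-to-{} pattern used by MINERU_PARAMS above and the image/audio *_PARAMS variables further down. A quick demonstration (the option key shown is a made-up placeholder, not a documented Docling option):

```python
import json
import os

def params_from_env(name: str) -> dict:
    # Mirrors the pattern above: missing or invalid JSON silently becomes {}.
    try:
        return json.loads(os.environ.get(name, ""))
    except json.JSONDecodeError:
        return {}

os.environ["DOCLING_PARAMS"] = '{"some_option": true}'  # placeholder key
print(params_from_env("DOCLING_PARAMS"))  # {'some_option': True}

os.environ["DOCLING_PARAMS"] = "not json"
print(params_from_env("DOCLING_PARAMS"))  # {}
```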

DOCLING_DO_OCR = PersistentConfig(
    "DOCLING_DO_OCR",
    "rag.docling_do_ocr",
    os.getenv("DOCLING_DO_OCR", "True").lower() == "true",
)

DOCLING_FORCE_OCR = PersistentConfig(
    "DOCLING_FORCE_OCR",
    "rag.docling_force_ocr",
    os.getenv("DOCLING_FORCE_OCR", "False").lower() == "true",
)

DOCLING_OCR_ENGINE = PersistentConfig(
    "DOCLING_OCR_ENGINE",
    "rag.docling_ocr_engine",

@@ -2247,6 +2529,24 @@ DOCLING_OCR_LANG = PersistentConfig(
    os.getenv("DOCLING_OCR_LANG", "eng,fra,deu,spa"),
)

DOCLING_PDF_BACKEND = PersistentConfig(
    "DOCLING_PDF_BACKEND",
    "rag.docling_pdf_backend",
    os.getenv("DOCLING_PDF_BACKEND", "dlparse_v4"),
)

DOCLING_TABLE_MODE = PersistentConfig(
    "DOCLING_TABLE_MODE",
    "rag.docling_table_mode",
    os.getenv("DOCLING_TABLE_MODE", "accurate"),
)

DOCLING_PIPELINE = PersistentConfig(
    "DOCLING_PIPELINE",
    "rag.docling_pipeline",
    os.getenv("DOCLING_PIPELINE", "standard"),
)

DOCLING_DO_PICTURE_DESCRIPTION = PersistentConfig(
    "DOCLING_DO_PICTURE_DESCRIPTION",
    "rag.docling_do_picture_description",

@@ -2299,6 +2599,12 @@ DOCUMENT_INTELLIGENCE_KEY = PersistentConfig(
    os.getenv("DOCUMENT_INTELLIGENCE_KEY", ""),
)

MISTRAL_OCR_API_BASE_URL = PersistentConfig(
    "MISTRAL_OCR_API_BASE_URL",
    "rag.MISTRAL_OCR_API_BASE_URL",
    os.getenv("MISTRAL_OCR_API_BASE_URL", "https://api.mistral.ai/v1"),
)

MISTRAL_OCR_API_KEY = PersistentConfig(
    "MISTRAL_OCR_API_KEY",
    "rag.mistral_ocr_api_key",

@@ -2337,6 +2643,13 @@ ENABLE_RAG_HYBRID_SEARCH = PersistentConfig(
    os.environ.get("ENABLE_RAG_HYBRID_SEARCH", "").lower() == "true",
)

ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS = PersistentConfig(
    "ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS",
    "rag.enable_hybrid_search_enriched_texts",
    os.environ.get("ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS", "False").lower()
    == "true",
)

RAG_FULL_CONTEXT = PersistentConfig(
    "RAG_FULL_CONTEXT",
    "rag.full_context",

@@ -2524,10 +2837,6 @@ Provide a clear and direct response to the user's query, including inline citati
<context>
{{CONTEXT}}
</context>

<user_query>
{{QUERY}}
</user_query>
"""

RAG_TEMPLATE = PersistentConfig(

@@ -2580,6 +2889,26 @@ ENABLE_RAG_LOCAL_WEB_FETCH = (
    os.getenv("ENABLE_RAG_LOCAL_WEB_FETCH", "False").lower() == "true"
)


DEFAULT_WEB_FETCH_FILTER_LIST = [
    "!169.254.169.254",
    "!fd00:ec2::254",
    "!metadata.google.internal",
    "!metadata.azure.com",
    "!100.100.100.200",
]

web_fetch_filter_list = os.getenv("WEB_FETCH_FILTER_LIST", "")
if web_fetch_filter_list == "":
    web_fetch_filter_list = []
else:
    web_fetch_filter_list = [
        item.strip() for item in web_fetch_filter_list.split(",") if item.strip()
    ]

WEB_FETCH_FILTER_LIST = list(set(DEFAULT_WEB_FETCH_FILTER_LIST + web_fetch_filter_list))


YOUTUBE_LOADER_LANGUAGE = PersistentConfig(
    "YOUTUBE_LOADER_LANGUAGE",
    "rag.youtube_loader_language",

@@ -2638,6 +2967,7 @@ WEB_SEARCH_DOMAIN_FILTER_LIST = PersistentConfig(
        # "wikipedia.com",
        # "wikimedia.org",
        # "wikidata.org",
        # "!stackoverflow.com",
    ],
)

@@ -2675,6 +3005,12 @@ WEB_SEARCH_TRUST_ENV = PersistentConfig(
)


OLLAMA_CLOUD_WEB_SEARCH_API_KEY = PersistentConfig(
    "OLLAMA_CLOUD_WEB_SEARCH_API_KEY",
    "rag.web.search.ollama_cloud_api_key",
    os.getenv("OLLAMA_CLOUD_API_KEY", ""),
)

SEARXNG_QUERY_URL = PersistentConfig(
    "SEARXNG_QUERY_URL",
    "rag.web.search.searxng_query_url",

@@ -2803,6 +3139,24 @@ BING_SEARCH_V7_SUBSCRIPTION_KEY = PersistentConfig(
    os.environ.get("BING_SEARCH_V7_SUBSCRIPTION_KEY", ""),
)

AZURE_AI_SEARCH_API_KEY = PersistentConfig(
    "AZURE_AI_SEARCH_API_KEY",
    "rag.web.search.azure_ai_search_api_key",
    os.environ.get("AZURE_AI_SEARCH_API_KEY", ""),
)

AZURE_AI_SEARCH_ENDPOINT = PersistentConfig(
    "AZURE_AI_SEARCH_ENDPOINT",
    "rag.web.search.azure_ai_search_endpoint",
    os.environ.get("AZURE_AI_SEARCH_ENDPOINT", ""),
)

AZURE_AI_SEARCH_INDEX_NAME = PersistentConfig(
    "AZURE_AI_SEARCH_INDEX_NAME",
    "rag.web.search.azure_ai_search_index_name",
    os.environ.get("AZURE_AI_SEARCH_INDEX_NAME", ""),
)

EXA_API_KEY = PersistentConfig(
    "EXA_API_KEY",
    "rag.web.search.exa_api_key",

@@ -2827,6 +3181,12 @@ PERPLEXITY_SEARCH_CONTEXT_USAGE = PersistentConfig(
    os.getenv("PERPLEXITY_SEARCH_CONTEXT_USAGE", "medium"),
)

PERPLEXITY_SEARCH_API_URL = PersistentConfig(
    "PERPLEXITY_SEARCH_API_URL",
    "rag.web.search.perplexity_search_api_url",
    os.getenv("PERPLEXITY_SEARCH_API_URL", "https://api.perplexity.ai/search"),
)

SOUGOU_API_SID = PersistentConfig(
    "SOUGOU_API_SID",
    "rag.web.search.sougou_api_sid",

@@ -2903,16 +3263,30 @@ EXTERNAL_WEB_LOADER_API_KEY = PersistentConfig(
####################################
# Images
####################################

ENABLE_IMAGE_GENERATION = PersistentConfig(
    "ENABLE_IMAGE_GENERATION",
    "image_generation.enable",
    os.environ.get("ENABLE_IMAGE_GENERATION", "").lower() == "true",
)

IMAGE_GENERATION_ENGINE = PersistentConfig(
    "IMAGE_GENERATION_ENGINE",
    "image_generation.engine",
    os.getenv("IMAGE_GENERATION_ENGINE", "openai"),
)

ENABLE_IMAGE_GENERATION = PersistentConfig(
    "ENABLE_IMAGE_GENERATION",
    "image_generation.enable",
    os.environ.get("ENABLE_IMAGE_GENERATION", "").lower() == "true",
IMAGE_GENERATION_MODEL = PersistentConfig(
    "IMAGE_GENERATION_MODEL",
    "image_generation.model",
    os.getenv("IMAGE_GENERATION_MODEL", ""),
)

IMAGE_SIZE = PersistentConfig(
    "IMAGE_SIZE", "image_generation.size", os.getenv("IMAGE_SIZE", "512x512")
)

IMAGE_STEPS = PersistentConfig(
    "IMAGE_STEPS", "image_generation.steps", int(os.getenv("IMAGE_STEPS", 50))
)

ENABLE_IMAGE_PROMPT_GENERATION = PersistentConfig(

@@ -2932,35 +3306,16 @@ AUTOMATIC1111_API_AUTH = PersistentConfig(
    os.getenv("AUTOMATIC1111_API_AUTH", ""),
)

AUTOMATIC1111_CFG_SCALE = PersistentConfig(
    "AUTOMATIC1111_CFG_SCALE",
    "image_generation.automatic1111.cfg_scale",
    (
        float(os.environ.get("AUTOMATIC1111_CFG_SCALE"))
        if os.environ.get("AUTOMATIC1111_CFG_SCALE")
        else None
    ),
)
automatic1111_params = os.getenv("AUTOMATIC1111_PARAMS", "")
try:
    automatic1111_params = json.loads(automatic1111_params)
except json.JSONDecodeError:
    automatic1111_params = {}


AUTOMATIC1111_SAMPLER = PersistentConfig(
    "AUTOMATIC1111_SAMPLER",
    "image_generation.automatic1111.sampler",
    (
        os.environ.get("AUTOMATIC1111_SAMPLER")
        if os.environ.get("AUTOMATIC1111_SAMPLER")
        else None
    ),
)

AUTOMATIC1111_SCHEDULER = PersistentConfig(
    "AUTOMATIC1111_SCHEDULER",
    "image_generation.automatic1111.scheduler",
    (
        os.environ.get("AUTOMATIC1111_SCHEDULER")
        if os.environ.get("AUTOMATIC1111_SCHEDULER")
        else None
    ),
AUTOMATIC1111_PARAMS = PersistentConfig(
    "AUTOMATIC1111_PARAMS",
    "image_generation.automatic1111.api_params",
    automatic1111_params,
)

COMFYUI_BASE_URL = PersistentConfig(

@@ -3103,12 +3458,30 @@ IMAGES_OPENAI_API_BASE_URL = PersistentConfig(
    "image_generation.openai.api_base_url",
    os.getenv("IMAGES_OPENAI_API_BASE_URL", OPENAI_API_BASE_URL),
)
IMAGES_OPENAI_API_VERSION = PersistentConfig(
    "IMAGES_OPENAI_API_VERSION",
    "image_generation.openai.api_version",
    os.getenv("IMAGES_OPENAI_API_VERSION", ""),
)

IMAGES_OPENAI_API_KEY = PersistentConfig(
    "IMAGES_OPENAI_API_KEY",
    "image_generation.openai.api_key",
    os.getenv("IMAGES_OPENAI_API_KEY", OPENAI_API_KEY),
)

images_openai_params = os.getenv("IMAGES_OPENAI_PARAMS", "")
try:
    images_openai_params = json.loads(images_openai_params)
except json.JSONDecodeError:
    images_openai_params = {}


IMAGES_OPENAI_API_PARAMS = PersistentConfig(
    "IMAGES_OPENAI_API_PARAMS", "image_generation.openai.params", images_openai_params
)


IMAGES_GEMINI_API_BASE_URL = PersistentConfig(
    "IMAGES_GEMINI_API_BASE_URL",
    "image_generation.gemini.api_base_url",

@@ -3120,18 +3493,79 @@ IMAGES_GEMINI_API_KEY = PersistentConfig(
    os.getenv("IMAGES_GEMINI_API_KEY", GEMINI_API_KEY),
)

IMAGE_SIZE = PersistentConfig(
    "IMAGE_SIZE", "image_generation.size", os.getenv("IMAGE_SIZE", "512x512")
IMAGES_GEMINI_ENDPOINT_METHOD = PersistentConfig(
    "IMAGES_GEMINI_ENDPOINT_METHOD",
    "image_generation.gemini.endpoint_method",
    os.getenv("IMAGES_GEMINI_ENDPOINT_METHOD", ""),
)

IMAGE_STEPS = PersistentConfig(
    "IMAGE_STEPS", "image_generation.steps", int(os.getenv("IMAGE_STEPS", 50))

IMAGE_EDIT_ENGINE = PersistentConfig(
    "IMAGE_EDIT_ENGINE",
    "images.edit.engine",
    os.getenv("IMAGE_EDIT_ENGINE", "openai"),
)

IMAGE_GENERATION_MODEL = PersistentConfig(
    "IMAGE_GENERATION_MODEL",
    "image_generation.model",
    os.getenv("IMAGE_GENERATION_MODEL", ""),
IMAGE_EDIT_MODEL = PersistentConfig(
    "IMAGE_EDIT_MODEL",
    "images.edit.model",
    os.getenv("IMAGE_EDIT_MODEL", ""),
)

IMAGE_EDIT_SIZE = PersistentConfig(
    "IMAGE_EDIT_SIZE", "images.edit.size", os.getenv("IMAGE_EDIT_SIZE", "")
)

IMAGES_EDIT_OPENAI_API_BASE_URL = PersistentConfig(
    "IMAGES_EDIT_OPENAI_API_BASE_URL",
    "images.edit.openai.api_base_url",
    os.getenv("IMAGES_EDIT_OPENAI_API_BASE_URL", OPENAI_API_BASE_URL),
)
IMAGES_EDIT_OPENAI_API_VERSION = PersistentConfig(
    "IMAGES_EDIT_OPENAI_API_VERSION",
    "images.edit.openai.api_version",
    os.getenv("IMAGES_EDIT_OPENAI_API_VERSION", ""),
)

IMAGES_EDIT_OPENAI_API_KEY = PersistentConfig(
    "IMAGES_EDIT_OPENAI_API_KEY",
    "images.edit.openai.api_key",
    os.getenv("IMAGES_EDIT_OPENAI_API_KEY", OPENAI_API_KEY),
)

IMAGES_EDIT_GEMINI_API_BASE_URL = PersistentConfig(
    "IMAGES_EDIT_GEMINI_API_BASE_URL",
    "images.edit.gemini.api_base_url",
    os.getenv("IMAGES_EDIT_GEMINI_API_BASE_URL", GEMINI_API_BASE_URL),
)
IMAGES_EDIT_GEMINI_API_KEY = PersistentConfig(
    "IMAGES_EDIT_GEMINI_API_KEY",
    "images.edit.gemini.api_key",
    os.getenv("IMAGES_EDIT_GEMINI_API_KEY", GEMINI_API_KEY),
)


IMAGES_EDIT_COMFYUI_BASE_URL = PersistentConfig(
    "IMAGES_EDIT_COMFYUI_BASE_URL",
    "images.edit.comfyui.base_url",
    os.getenv("IMAGES_EDIT_COMFYUI_BASE_URL", ""),
)
IMAGES_EDIT_COMFYUI_API_KEY = PersistentConfig(
    "IMAGES_EDIT_COMFYUI_API_KEY",
    "images.edit.comfyui.api_key",
    os.getenv("IMAGES_EDIT_COMFYUI_API_KEY", ""),
)

IMAGES_EDIT_COMFYUI_WORKFLOW = PersistentConfig(
    "IMAGES_EDIT_COMFYUI_WORKFLOW",
    "images.edit.comfyui.workflow",
    os.getenv("IMAGES_EDIT_COMFYUI_WORKFLOW", ""),
)

IMAGES_EDIT_COMFYUI_WORKFLOW_NODES = PersistentConfig(
    "IMAGES_EDIT_COMFYUI_WORKFLOW_NODES",
    "images.edit.comfyui.nodes",
    [],
)

####################################

@@ -3166,6 +3600,10 @@ DEEPGRAM_API_KEY = PersistentConfig(
    os.getenv("DEEPGRAM_API_KEY", ""),
)

# ElevenLabs configuration
ELEVENLABS_API_BASE_URL = os.getenv(
    "ELEVENLABS_API_BASE_URL", "https://api.elevenlabs.io"
)

AUDIO_STT_OPENAI_API_BASE_URL = PersistentConfig(
    "AUDIO_STT_OPENAI_API_BASE_URL",

@@ -3233,6 +3671,24 @@ AUDIO_STT_AZURE_MAX_SPEAKERS = PersistentConfig(
    os.getenv("AUDIO_STT_AZURE_MAX_SPEAKERS", ""),
)

AUDIO_STT_MISTRAL_API_KEY = PersistentConfig(
    "AUDIO_STT_MISTRAL_API_KEY",
    "audio.stt.mistral.api_key",
    os.getenv("AUDIO_STT_MISTRAL_API_KEY", ""),
)

AUDIO_STT_MISTRAL_API_BASE_URL = PersistentConfig(
    "AUDIO_STT_MISTRAL_API_BASE_URL",
    "audio.stt.mistral.api_base_url",
    os.getenv("AUDIO_STT_MISTRAL_API_BASE_URL", "https://api.mistral.ai/v1"),
)

AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = PersistentConfig(
    "AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS",
    "audio.stt.mistral.use_chat_completions",
    os.getenv("AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS", "false").lower() == "true",
)

AUDIO_TTS_OPENAI_API_BASE_URL = PersistentConfig(
    "AUDIO_TTS_OPENAI_API_BASE_URL",
    "audio.tts.openai.api_base_url",

@@ -3244,6 +3700,19 @@ AUDIO_TTS_OPENAI_API_KEY = PersistentConfig(
    os.getenv("AUDIO_TTS_OPENAI_API_KEY", OPENAI_API_KEY),
)

audio_tts_openai_params = os.getenv("AUDIO_TTS_OPENAI_PARAMS", "")
try:
    audio_tts_openai_params = json.loads(audio_tts_openai_params)
except json.JSONDecodeError:
    audio_tts_openai_params = {}

AUDIO_TTS_OPENAI_PARAMS = PersistentConfig(
    "AUDIO_TTS_OPENAI_PARAMS",
    "audio.tts.openai.params",
    audio_tts_openai_params,
)


AUDIO_TTS_API_KEY = PersistentConfig(
    "AUDIO_TTS_API_KEY",
    "audio.tts.api_key",

@@ -38,6 +38,7 @@ class ERROR_MESSAGES(str, Enum):
    ID_TAKEN = "Uh-oh! This id is already registered. Please choose another id string."
    MODEL_ID_TAKEN = "Uh-oh! This model id is already registered. Please choose another model id string."
    NAME_TAG_TAKEN = "Uh-oh! This name tag is already registered. Please choose another name tag string."
    MODEL_ID_TOO_LONG = "The model id is too long. Please make sure your model id is less than 256 characters long."

    INVALID_TOKEN = (
        "Your session has expired or the token is invalid. Please sign in again."

@@ -212,6 +212,11 @@ ENABLE_FORWARD_USER_INFO_HEADERS = (
    os.environ.get("ENABLE_FORWARD_USER_INFO_HEADERS", "False").lower() == "true"
)

# Experimental feature, may be removed in future
ENABLE_STAR_SESSIONS_MIDDLEWARE = (
    os.environ.get("ENABLE_STAR_SESSIONS_MIDDLEWARE", "False").lower() == "true"
)

####################################
# WEBUI_BUILD_HASH
####################################

@@ -465,12 +470,33 @@ ENABLE_COMPRESSION_MIDDLEWARE = (
    os.environ.get("ENABLE_COMPRESSION_MIDDLEWARE", "True").lower() == "true"
)

####################################
# OAUTH Configuration
####################################
ENABLE_OAUTH_EMAIL_FALLBACK = (
    os.environ.get("ENABLE_OAUTH_EMAIL_FALLBACK", "False").lower() == "true"
)

ENABLE_OAUTH_ID_TOKEN_COOKIE = (
    os.environ.get("ENABLE_OAUTH_ID_TOKEN_COOKIE", "True").lower() == "true"
)

OAUTH_CLIENT_INFO_ENCRYPTION_KEY = os.environ.get(
    "OAUTH_CLIENT_INFO_ENCRYPTION_KEY", WEBUI_SECRET_KEY
)

OAUTH_SESSION_TOKEN_ENCRYPTION_KEY = os.environ.get(
    "OAUTH_SESSION_TOKEN_ENCRYPTION_KEY", WEBUI_SECRET_KEY
)

####################################
# SCIM Configuration
####################################

SCIM_ENABLED = os.environ.get("SCIM_ENABLED", "False").lower() == "true"
ENABLE_SCIM = (
    os.environ.get("ENABLE_SCIM", os.environ.get("SCIM_ENABLED", "False")).lower()
    == "true"
)
SCIM_TOKEN = os.environ.get("SCIM_TOKEN", "")

####################################

@@ -518,6 +544,10 @@ else:
# CHAT
####################################

ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION = (
    os.environ.get("REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE", "False").lower() == "true"
)

CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE = os.environ.get(
    "CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE", "1"
)

@@ -534,16 +564,31 @@ else:


CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = os.environ.get(
    "CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES", "10"
    "CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES", "30"
)

if CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES == "":
    CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 10
    CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30
else:
    try:
        CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = int(CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES)
    except Exception:
        CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 10
        CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30


CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = os.environ.get(
    "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE", ""
)

if CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE == "":
    CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None
else:
    try:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = int(
            CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
        )
    except Exception:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None


####################################

@@ -557,6 +602,17 @@ ENABLE_WEBSOCKET_SUPPORT = (

WEBSOCKET_MANAGER = os.environ.get("WEBSOCKET_MANAGER", "")

WEBSOCKET_REDIS_OPTIONS = os.environ.get("WEBSOCKET_REDIS_OPTIONS", "")
if WEBSOCKET_REDIS_OPTIONS == "":
    log.debug("No WEBSOCKET_REDIS_OPTIONS provided, defaulting to None")
    WEBSOCKET_REDIS_OPTIONS = None
else:
    try:
        WEBSOCKET_REDIS_OPTIONS = json.loads(WEBSOCKET_REDIS_OPTIONS)
    except Exception:
        log.warning("Invalid WEBSOCKET_REDIS_OPTIONS, defaulting to None")
        WEBSOCKET_REDIS_OPTIONS = None
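WEBSOCKET_REDIS_OPTIONS is expected to hold a JSON object of keyword arguments for the Redis client. A sketch of setting it (the option names shown are ordinary redis-py keyword arguments, chosen as plausible examples rather than values the PR mandates):

```python
import json
import os

# A JSON object in the env var becomes a dict of Redis client options.
os.environ["WEBSOCKET_REDIS_OPTIONS"] = json.dumps(
    {"socket_timeout": 5, "retry_on_timeout": True}  # example redis-py kwargs
)
print(json.loads(os.environ["WEBSOCKET_REDIS_OPTIONS"]))
# -> {'socket_timeout': 5, 'retry_on_timeout': True}
```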

WEBSOCKET_REDIS_URL = os.environ.get("WEBSOCKET_REDIS_URL", REDIS_URL)
WEBSOCKET_REDIS_CLUSTER = (
    os.environ.get("WEBSOCKET_REDIS_CLUSTER", str(REDIS_CLUSTER)).lower() == "true"

@@ -571,6 +627,23 @@ except ValueError:

WEBSOCKET_SENTINEL_HOSTS = os.environ.get("WEBSOCKET_SENTINEL_HOSTS", "")
WEBSOCKET_SENTINEL_PORT = os.environ.get("WEBSOCKET_SENTINEL_PORT", "26379")
WEBSOCKET_SERVER_LOGGING = (
    os.environ.get("WEBSOCKET_SERVER_LOGGING", "False").lower() == "true"
)
WEBSOCKET_SERVER_ENGINEIO_LOGGING = (
    os.environ.get("WEBSOCKET_SERVER_LOGGING", "False").lower() == "true"
)
WEBSOCKET_SERVER_PING_TIMEOUT = os.environ.get("WEBSOCKET_SERVER_PING_TIMEOUT", "20")
try:
    WEBSOCKET_SERVER_PING_TIMEOUT = int(WEBSOCKET_SERVER_PING_TIMEOUT)
except ValueError:
    WEBSOCKET_SERVER_PING_TIMEOUT = 20

WEBSOCKET_SERVER_PING_INTERVAL = os.environ.get("WEBSOCKET_SERVER_PING_INTERVAL", "25")
try:
    WEBSOCKET_SERVER_PING_INTERVAL = int(WEBSOCKET_SERVER_PING_INTERVAL)
except ValueError:
    WEBSOCKET_SERVER_PING_INTERVAL = 25


AIOHTTP_CLIENT_TIMEOUT = os.environ.get("AIOHTTP_CLIENT_TIMEOUT", "")

@@ -683,7 +756,9 @@ if OFFLINE_MODE:
# AUDIT LOGGING
####################################
# Where to store log file
AUDIT_LOGS_FILE_PATH = f"{DATA_DIR}/audit.log"
# Defaults to the DATA_DIR/audit.log. To set AUDIT_LOGS_FILE_PATH you need to
# provide the whole path, like: /app/audit.log
AUDIT_LOGS_FILE_PATH = os.getenv("AUDIT_LOGS_FILE_PATH", f"{DATA_DIR}/audit.log")
# Maximum size of a file before rotating into a new log file
AUDIT_LOG_FILE_ROTATION_SIZE = os.getenv("AUDIT_LOG_FILE_ROTATION_SIZE", "10MB")

@@ -19,6 +19,7 @@ from fastapi import (
from starlette.responses import Response, StreamingResponse


from open_webui.constants import ERROR_MESSAGES
from open_webui.socket.main import (
    get_event_call,
    get_event_emitter,

@@ -60,8 +61,20 @@ def get_function_module_by_id(request: Request, pipe_id: str):
    function_module, _, _ = get_function_module_from_cache(request, pipe_id)

    if hasattr(function_module, "valves") and hasattr(function_module, "Valves"):
        Valves = function_module.Valves
        valves = Functions.get_function_valves_by_id(pipe_id)
        function_module.valves = function_module.Valves(**(valves if valves else {}))

        if valves:
            try:
                function_module.valves = Valves(
                    **{k: v for k, v in valves.items() if v is not None}
                )
            except Exception as e:
                log.exception(f"Error loading valves for function {pipe_id}: {e}")
                raise e
        else:
            function_module.valves = Valves()

    return function_module
|
||||
|
||||
|
||||
|
|
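The None-filtering above matters because valves are Pydantic models with field defaults; passing a stored None through would override (or fail validation against) those defaults. A small self-contained sketch of the effect (the Valves class here is invented for illustration, not taken from this diff):

from typing import Optional
from pydantic import BaseModel

class Valves(BaseModel):
    # Hypothetical example valves with defaults.
    api_url: str = "http://localhost:8080"
    timeout: int = 30
    api_key: Optional[str] = None

stored = {"api_url": "https://example.com", "timeout": None}

# Passing the raw dict would try to set timeout=None and fail validation;
# dropping None values keeps the declared field defaults intact.
valves = Valves(**{k: v for k, v in stored.items() if v is not None})
print(valves.timeout)  # 30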
@@ -70,65 +83,75 @@ async def get_function_models(request):
    pipe_models = []

    for pipe in pipes:
        function_module = get_function_module_by_id(request, pipe.id)
        try:
            function_module = get_function_module_by_id(request, pipe.id)

            # Check if function is a manifold
            if hasattr(function_module, "pipes"):
                sub_pipes = []
                has_user_valves = False
                if hasattr(function_module, "UserValves"):
                    has_user_valves = True

                # Handle pipes being a list, sync function, or async function
                try:
                    if callable(function_module.pipes):
                        if asyncio.iscoroutinefunction(function_module.pipes):
                            sub_pipes = await function_module.pipes()
                        else:
                            sub_pipes = function_module.pipes()
                    else:
                        sub_pipes = function_module.pipes
                except Exception as e:
                    log.exception(e)
        # Check if function is a manifold
        if hasattr(function_module, "pipes"):
            sub_pipes = []

                log.debug(
                    f"get_function_models: function '{pipe.id}' is a manifold of {sub_pipes}"
                )
            # Handle pipes being a list, sync function, or async function
            try:
                if callable(function_module.pipes):
                    if asyncio.iscoroutinefunction(function_module.pipes):
                        sub_pipes = await function_module.pipes()
                    else:
                        sub_pipes = function_module.pipes()
                else:
                    sub_pipes = function_module.pipes
            except Exception as e:
                log.exception(e)
                sub_pipes = []

                for p in sub_pipes:
                    sub_pipe_id = f'{pipe.id}.{p["id"]}'
                    sub_pipe_name = p["name"]
            log.debug(
                f"get_function_models: function '{pipe.id}' is a manifold of {sub_pipes}"
            )

                    if hasattr(function_module, "name"):
                        sub_pipe_name = f"{function_module.name}{sub_pipe_name}"
            for p in sub_pipes:
                sub_pipe_id = f'{pipe.id}.{p["id"]}'
                sub_pipe_name = p["name"]

                    pipe_flag = {"type": pipe.type}
                if hasattr(function_module, "name"):
                    sub_pipe_name = f"{function_module.name}{sub_pipe_name}"

                pipe_flag = {"type": pipe.type}

                    pipe_models.append(
                        {
                            "id": sub_pipe_id,
                            "name": sub_pipe_name,
                            "object": "model",
                            "created": pipe.created_at,
                            "owned_by": "openai",
                            "pipe": pipe_flag,
                            "has_user_valves": has_user_valves,
                        }
                    )
            else:
                pipe_flag = {"type": "pipe"}

                log.debug(
                    f"get_function_models: function '{pipe.id}' is a single pipe {{ 'id': {pipe.id}, 'name': {pipe.name} }}"
                )

                pipe_models.append(
                    {
                        "id": sub_pipe_id,
                        "name": sub_pipe_name,
                        "id": pipe.id,
                        "name": pipe.name,
                        "object": "model",
                        "created": pipe.created_at,
                        "owned_by": "openai",
                        "pipe": pipe_flag,
                        "has_user_valves": has_user_valves,
                    }
                )
        else:
            pipe_flag = {"type": "pipe"}

            log.debug(
                f"get_function_models: function '{pipe.id}' is a single pipe {{ 'id': {pipe.id}, 'name': {pipe.name} }}"
            )

            pipe_models.append(
                {
                    "id": pipe.id,
                    "name": pipe.name,
                    "object": "model",
                    "created": pipe.created_at,
                    "owned_by": "openai",
                    "pipe": pipe_flag,
                }
            )
        except Exception as e:
            log.exception(e)
            continue

    return pipe_models
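For orientation, a manifold is a pipe function whose pipes attribute (a list, or a sync/async callable returning one) enumerates sub-models; each entry becomes a model id of the form "<function_id>.<sub_id>". A minimal hypothetical manifold matching the shape consumed above (class, ids, and names invented for illustration):

import asyncio


class Pipe:
    # Hypothetical manifold: `pipes` is an async callable returning the
    # [{"id": ..., "name": ...}] entries iterated by get_function_models.
    name = "demo/"

    async def pipes(self):
        return [
            {"id": "small", "name": "Demo Small"},
            {"id": "large", "name": "Demo Large"},
        ]

    async def pipe(self, body: dict):
        return f"echo from {body.get('model')}"


async def main():
    module = Pipe()
    # Same list / sync / async handling as the loop above.
    sub_pipes = (
        await module.pipes()
        if asyncio.iscoroutinefunction(module.pipes)
        else module.pipes()
    )
    # A function id "demo" would expose models "demo.small" and "demo.large".
    print([f"demo.{p['id']}" for p in sub_pipes])


asyncio.run(main())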
@@ -219,6 +242,16 @@ async def generate_function_chat_completion(
    __task__ = metadata.get("task", None)
    __task_body__ = metadata.get("task_body", None)

    oauth_token = None
    try:
        if request.cookies.get("oauth_session_id", None):
            oauth_token = await request.app.state.oauth_manager.get_oauth_token(
                user.id,
                request.cookies.get("oauth_session_id", None),
            )
    except Exception as e:
        log.error(f"Error getting OAuth token: {e}")

    extra_params = {
        "__event_emitter__": __event_emitter__,
        "__event_call__": __event_call__,

@@ -230,6 +263,7 @@ async def generate_function_chat_completion(
        "__files__": files,
        "__user__": user.model_dump() if isinstance(user, UserModel) else {},
        "__metadata__": metadata,
        "__oauth_token__": oauth_token,
        "__request__": request,
    }
    extra_params["__tools__"] = await get_tools(
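Pipe functions receive these extras by declaring matching keyword parameters; to the best of my understanding only the parameters a pipe's signature names are injected, so existing pipes are unaffected by the new __oauth_token__ entry. A hedged sketch of a pipe that opts in (the body is illustrative, not an API guarantee):

class Pipe:
    # Hypothetical pipe: arguments are matched by name against extra_params,
    # so a pipe only declares the context it actually needs.
    async def pipe(
        self,
        body: dict,
        __user__: dict,
        __metadata__: dict,
        __oauth_token__=None,
    ):
        if __oauth_token__:
            # e.g. forward the user's OAuth access token to a downstream API
            return f"hello {__user__.get('name')}, token present"
        return f"hello {__user__.get('name')}"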
@@ -8,6 +8,7 @@ import shutil
import sys
import time
import random
import re
from uuid import uuid4


@@ -50,6 +51,11 @@ from starlette.middleware.sessions import SessionMiddleware
from starlette.responses import Response, StreamingResponse
from starlette.datastructures import Headers

from starsessions import (
    SessionMiddleware as StarSessionsMiddleware,
    SessionAutoloadMiddleware,
)
from starsessions.stores.redis import RedisStore

from open_webui.utils import logger
from open_webui.utils.audit import AuditLevel, AuditLoggingMiddleware

@@ -110,9 +116,6 @@ from open_webui.config import (
    OLLAMA_API_CONFIGS,
    # OpenAI
    ENABLE_OPENAI_API,
    ONEDRIVE_CLIENT_ID,
    ONEDRIVE_SHAREPOINT_URL,
    ONEDRIVE_SHAREPOINT_TENANT_ID,
    OPENAI_API_BASE_URLS,
    OPENAI_API_KEYS,
    OPENAI_API_CONFIGS,

@@ -143,9 +146,7 @@ from open_webui.config import (
    # Image
    AUTOMATIC1111_API_AUTH,
    AUTOMATIC1111_BASE_URL,
    AUTOMATIC1111_CFG_SCALE,
    AUTOMATIC1111_SAMPLER,
    AUTOMATIC1111_SCHEDULER,
    AUTOMATIC1111_PARAMS,
    COMFYUI_BASE_URL,
    COMFYUI_API_KEY,
    COMFYUI_WORKFLOW,

@@ -157,9 +158,24 @@ from open_webui.config import (
    IMAGE_SIZE,
    IMAGE_STEPS,
    IMAGES_OPENAI_API_BASE_URL,
    IMAGES_OPENAI_API_VERSION,
    IMAGES_OPENAI_API_KEY,
    IMAGES_OPENAI_API_PARAMS,
    IMAGES_GEMINI_API_BASE_URL,
    IMAGES_GEMINI_API_KEY,
    IMAGES_GEMINI_ENDPOINT_METHOD,
    IMAGE_EDIT_ENGINE,
    IMAGE_EDIT_MODEL,
    IMAGE_EDIT_SIZE,
    IMAGES_EDIT_OPENAI_API_BASE_URL,
    IMAGES_EDIT_OPENAI_API_KEY,
    IMAGES_EDIT_OPENAI_API_VERSION,
    IMAGES_EDIT_GEMINI_API_BASE_URL,
    IMAGES_EDIT_GEMINI_API_KEY,
    IMAGES_EDIT_COMFYUI_BASE_URL,
    IMAGES_EDIT_COMFYUI_API_KEY,
    IMAGES_EDIT_COMFYUI_WORKFLOW,
    IMAGES_EDIT_COMFYUI_WORKFLOW_NODES,
    # Audio
    AUDIO_STT_ENGINE,
    AUDIO_STT_MODEL,

@@ -171,13 +187,17 @@ from open_webui.config import (
    AUDIO_STT_AZURE_LOCALES,
    AUDIO_STT_AZURE_BASE_URL,
    AUDIO_STT_AZURE_MAX_SPEAKERS,
    AUDIO_TTS_API_KEY,
    AUDIO_STT_MISTRAL_API_KEY,
    AUDIO_STT_MISTRAL_API_BASE_URL,
    AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
    AUDIO_TTS_ENGINE,
    AUDIO_TTS_MODEL,
    AUDIO_TTS_VOICE,
    AUDIO_TTS_OPENAI_API_BASE_URL,
    AUDIO_TTS_OPENAI_API_KEY,
    AUDIO_TTS_OPENAI_PARAMS,
    AUDIO_TTS_API_KEY,
    AUDIO_TTS_SPLIT_ON,
    AUDIO_TTS_VOICE,
    AUDIO_TTS_AZURE_SPEECH_REGION,
    AUDIO_TTS_AZURE_SPEECH_BASE_URL,
    AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT,

@@ -238,19 +258,30 @@ from open_webui.config import (
    DATALAB_MARKER_DISABLE_IMAGE_EXTRACTION,
    DATALAB_MARKER_FORMAT_LINES,
    DATALAB_MARKER_OUTPUT_FORMAT,
    MINERU_API_MODE,
    MINERU_API_URL,
    MINERU_API_KEY,
    MINERU_PARAMS,
    DATALAB_MARKER_USE_LLM,
    EXTERNAL_DOCUMENT_LOADER_URL,
    EXTERNAL_DOCUMENT_LOADER_API_KEY,
    TIKA_SERVER_URL,
    DOCLING_SERVER_URL,
    DOCLING_PARAMS,
    DOCLING_DO_OCR,
    DOCLING_FORCE_OCR,
    DOCLING_OCR_ENGINE,
    DOCLING_OCR_LANG,
    DOCLING_PDF_BACKEND,
    DOCLING_TABLE_MODE,
    DOCLING_PIPELINE,
    DOCLING_DO_PICTURE_DESCRIPTION,
    DOCLING_PICTURE_DESCRIPTION_MODE,
    DOCLING_PICTURE_DESCRIPTION_LOCAL,
    DOCLING_PICTURE_DESCRIPTION_API,
    DOCUMENT_INTELLIGENCE_ENDPOINT,
    DOCUMENT_INTELLIGENCE_KEY,
    MISTRAL_OCR_API_BASE_URL,
    MISTRAL_OCR_API_KEY,
    RAG_TEXT_SPLITTER,
    TIKTOKEN_ENCODING_NAME,

@@ -266,6 +297,7 @@ from open_webui.config import (
    WEB_SEARCH_CONCURRENT_REQUESTS,
    WEB_SEARCH_TRUST_ENV,
    WEB_SEARCH_DOMAIN_FILTER_LIST,
    OLLAMA_CLOUD_WEB_SEARCH_API_KEY,
    JINA_API_KEY,
    SEARCHAPI_API_KEY,
    SEARCHAPI_ENGINE,

@@ -288,6 +320,7 @@ from open_webui.config import (
    PERPLEXITY_API_KEY,
    PERPLEXITY_MODEL,
    PERPLEXITY_SEARCH_CONTEXT_USAGE,
    PERPLEXITY_SEARCH_API_URL,
    SOUGOU_API_SID,
    SOUGOU_API_SK,
    KAGI_SEARCH_API_KEY,

@@ -297,14 +330,18 @@ from open_webui.config import (
    GOOGLE_PSE_ENGINE_ID,
    GOOGLE_DRIVE_CLIENT_ID,
    GOOGLE_DRIVE_API_KEY,
    ONEDRIVE_CLIENT_ID,
    ENABLE_ONEDRIVE_INTEGRATION,
    ONEDRIVE_CLIENT_ID_PERSONAL,
    ONEDRIVE_CLIENT_ID_BUSINESS,
    ONEDRIVE_SHAREPOINT_URL,
    ONEDRIVE_SHAREPOINT_TENANT_ID,
    ENABLE_ONEDRIVE_PERSONAL,
    ENABLE_ONEDRIVE_BUSINESS,
    ENABLE_RAG_HYBRID_SEARCH,
    ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS,
    ENABLE_RAG_LOCAL_WEB_FETCH,
    ENABLE_WEB_LOADER_SSL_VERIFICATION,
    ENABLE_GOOGLE_DRIVE_INTEGRATION,
    ENABLE_ONEDRIVE_INTEGRATION,
    UPLOAD_DIR,
    EXTERNAL_WEB_SEARCH_URL,
    EXTERNAL_WEB_SEARCH_API_KEY,

@@ -320,9 +357,9 @@ from open_webui.config import (
    JWT_EXPIRES_IN,
    ENABLE_SIGNUP,
    ENABLE_LOGIN_FORM,
    ENABLE_API_KEY,
    ENABLE_API_KEY_ENDPOINT_RESTRICTIONS,
    API_KEY_ALLOWED_ENDPOINTS,
    ENABLE_API_KEYS,
    ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS,
    API_KEYS_ALLOWED_ENDPOINTS,
    ENABLE_CHANNELS,
    ENABLE_NOTES,
    ENABLE_COMMUNITY_SHARING,

@@ -336,6 +373,7 @@ from open_webui.config import (
    PENDING_USER_OVERLAY_TITLE,
    DEFAULT_PROMPT_SUGGESTIONS,
    DEFAULT_MODELS,
    DEFAULT_PINNED_MODELS,
    DEFAULT_ARENA_MODEL,
    MODEL_ORDER_LIST,
    EVALUATION_ARENA_MODELS,

@@ -394,6 +432,7 @@ from open_webui.config import (
    TAGS_GENERATION_PROMPT_TEMPLATE,
    IMAGE_PROMPT_GENERATION_PROMPT_TEMPLATE,
    TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE,
    VOICE_MODE_PROMPT_TEMPLATE,
    QUERY_GENERATION_PROMPT_TEMPLATE,
    AUTOCOMPLETE_GENERATION_PROMPT_TEMPLATE,
    AUTOCOMPLETE_GENERATION_INPUT_MAX_LENGTH,

@@ -426,7 +465,7 @@ from open_webui.env import (
    WEBUI_AUTH_TRUSTED_NAME_HEADER,
    WEBUI_AUTH_SIGNOUT_REDIRECT_URL,
    # SCIM
    SCIM_ENABLED,
    ENABLE_SCIM,
    SCIM_TOKEN,
    ENABLE_COMPRESSION_MIDDLEWARE,
    ENABLE_WEBSOCKET_SUPPORT,

@@ -436,6 +475,7 @@ from open_webui.env import (
    ENABLE_OTEL,
    EXTERNAL_PWA_MANIFEST_URL,
    AIOHTTP_CLIENT_SESSION_SSL,
    ENABLE_STAR_SESSIONS_MIDDLEWARE,
)


@@ -443,6 +483,7 @@ from open_webui.utils.models import (
    get_all_models,
    get_all_base_models,
    check_model_access,
    get_filtered_models,
)
from open_webui.utils.chat import (
    generate_chat_completion as chat_completion_handler,

@@ -461,7 +502,14 @@ from open_webui.utils.auth import (
    get_verified_user,
)
from open_webui.utils.plugin import install_tool_and_function_dependencies
from open_webui.utils.oauth import OAuthManager
from open_webui.utils.oauth import (
    get_oauth_client_info_with_dynamic_client_registration,
    encrypt_data,
    decrypt_data,
    OAuthManager,
    OAuthClientManager,
    OAuthClientInformationFull,
)
from open_webui.utils.security_headers import SecurityHeadersMiddleware
from open_webui.utils.redis import get_redis_connection
@@ -591,7 +639,13 @@ app = FastAPI(
    lifespan=lifespan,
)

# For Open WebUI OIDC/OAuth2
oauth_manager = OAuthManager(app)
app.state.oauth_manager = oauth_manager

# For Integrations
oauth_client_manager = OAuthClientManager(app)
app.state.oauth_client_manager = oauth_client_manager

app.state.instance_id = None
app.state.config = AppConfig(

@@ -667,7 +721,7 @@ app.state.config.ENABLE_DIRECT_CONNECTIONS = ENABLE_DIRECT_CONNECTIONS
#
########################################

app.state.SCIM_ENABLED = SCIM_ENABLED
app.state.ENABLE_SCIM = ENABLE_SCIM
app.state.SCIM_TOKEN = SCIM_TOKEN

########################################

@@ -689,11 +743,11 @@ app.state.config.WEBUI_URL = WEBUI_URL
app.state.config.ENABLE_SIGNUP = ENABLE_SIGNUP
app.state.config.ENABLE_LOGIN_FORM = ENABLE_LOGIN_FORM

app.state.config.ENABLE_API_KEY = ENABLE_API_KEY
app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS = (
    ENABLE_API_KEY_ENDPOINT_RESTRICTIONS
app.state.config.ENABLE_API_KEYS = ENABLE_API_KEYS
app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS = (
    ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS
)
app.state.config.API_KEY_ALLOWED_ENDPOINTS = API_KEY_ALLOWED_ENDPOINTS
app.state.config.API_KEYS_ALLOWED_ENDPOINTS = API_KEYS_ALLOWED_ENDPOINTS

app.state.config.JWT_EXPIRES_IN = JWT_EXPIRES_IN

@@ -702,6 +756,10 @@ app.state.config.ADMIN_EMAIL = ADMIN_EMAIL


app.state.config.DEFAULT_MODELS = DEFAULT_MODELS
app.state.config.DEFAULT_PINNED_MODELS = DEFAULT_PINNED_MODELS
app.state.config.MODEL_ORDER_LIST = MODEL_ORDER_LIST


app.state.config.DEFAULT_PROMPT_SUGGESTIONS = DEFAULT_PROMPT_SUGGESTIONS
app.state.config.DEFAULT_USER_ROLE = DEFAULT_USER_ROLE


@@ -713,7 +771,6 @@ app.state.config.RESPONSE_WATERMARK = RESPONSE_WATERMARK
app.state.config.USER_PERMISSIONS = USER_PERMISSIONS
app.state.config.WEBHOOK_URL = WEBHOOK_URL
app.state.config.BANNERS = WEBUI_BANNERS
app.state.config.MODEL_ORDER_LIST = MODEL_ORDER_LIST


app.state.config.ENABLE_CHANNELS = ENABLE_CHANNELS

@@ -791,6 +848,9 @@ app.state.config.FILE_IMAGE_COMPRESSION_HEIGHT = FILE_IMAGE_COMPRESSION_HEIGHT
app.state.config.RAG_FULL_CONTEXT = RAG_FULL_CONTEXT
app.state.config.BYPASS_EMBEDDING_AND_RETRIEVAL = BYPASS_EMBEDDING_AND_RETRIEVAL
app.state.config.ENABLE_RAG_HYBRID_SEARCH = ENABLE_RAG_HYBRID_SEARCH
app.state.config.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS = (
    ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS
)
app.state.config.ENABLE_WEB_LOADER_SSL_VERIFICATION = ENABLE_WEB_LOADER_SSL_VERIFICATION

app.state.config.CONTENT_EXTRACTION_ENGINE = CONTENT_EXTRACTION_ENGINE

@@ -811,15 +871,26 @@ app.state.config.EXTERNAL_DOCUMENT_LOADER_URL = EXTERNAL_DOCUMENT_LOADER_URL
app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY = EXTERNAL_DOCUMENT_LOADER_API_KEY
app.state.config.TIKA_SERVER_URL = TIKA_SERVER_URL
app.state.config.DOCLING_SERVER_URL = DOCLING_SERVER_URL
app.state.config.DOCLING_PARAMS = DOCLING_PARAMS
app.state.config.DOCLING_DO_OCR = DOCLING_DO_OCR
app.state.config.DOCLING_FORCE_OCR = DOCLING_FORCE_OCR
app.state.config.DOCLING_OCR_ENGINE = DOCLING_OCR_ENGINE
app.state.config.DOCLING_OCR_LANG = DOCLING_OCR_LANG
app.state.config.DOCLING_PDF_BACKEND = DOCLING_PDF_BACKEND
app.state.config.DOCLING_TABLE_MODE = DOCLING_TABLE_MODE
app.state.config.DOCLING_PIPELINE = DOCLING_PIPELINE
app.state.config.DOCLING_DO_PICTURE_DESCRIPTION = DOCLING_DO_PICTURE_DESCRIPTION
app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE = DOCLING_PICTURE_DESCRIPTION_MODE
app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = DOCLING_PICTURE_DESCRIPTION_LOCAL
app.state.config.DOCLING_PICTURE_DESCRIPTION_API = DOCLING_PICTURE_DESCRIPTION_API
app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = DOCUMENT_INTELLIGENCE_ENDPOINT
app.state.config.DOCUMENT_INTELLIGENCE_KEY = DOCUMENT_INTELLIGENCE_KEY
app.state.config.MISTRAL_OCR_API_BASE_URL = MISTRAL_OCR_API_BASE_URL
app.state.config.MISTRAL_OCR_API_KEY = MISTRAL_OCR_API_KEY
app.state.config.MINERU_API_MODE = MINERU_API_MODE
app.state.config.MINERU_API_URL = MINERU_API_URL
app.state.config.MINERU_API_KEY = MINERU_API_KEY
app.state.config.MINERU_PARAMS = MINERU_PARAMS

app.state.config.TEXT_SPLITTER = RAG_TEXT_SPLITTER
app.state.config.TIKTOKEN_ENCODING_NAME = TIKTOKEN_ENCODING_NAME

@@ -871,6 +942,8 @@ app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER = BYPASS_WEB_SEARCH_WEB_LOADER

app.state.config.ENABLE_GOOGLE_DRIVE_INTEGRATION = ENABLE_GOOGLE_DRIVE_INTEGRATION
app.state.config.ENABLE_ONEDRIVE_INTEGRATION = ENABLE_ONEDRIVE_INTEGRATION

app.state.config.OLLAMA_CLOUD_WEB_SEARCH_API_KEY = OLLAMA_CLOUD_WEB_SEARCH_API_KEY
app.state.config.SEARXNG_QUERY_URL = SEARXNG_QUERY_URL
app.state.config.YACY_QUERY_URL = YACY_QUERY_URL
app.state.config.YACY_USERNAME = YACY_USERNAME

@@ -897,6 +970,7 @@ app.state.config.EXA_API_KEY = EXA_API_KEY
app.state.config.PERPLEXITY_API_KEY = PERPLEXITY_API_KEY
app.state.config.PERPLEXITY_MODEL = PERPLEXITY_MODEL
app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE = PERPLEXITY_SEARCH_CONTEXT_USAGE
app.state.config.PERPLEXITY_SEARCH_API_URL = PERPLEXITY_SEARCH_API_URL
app.state.config.SOUGOU_API_SID = SOUGOU_API_SID
app.state.config.SOUGOU_API_SK = SOUGOU_API_SK
app.state.config.EXTERNAL_WEB_SEARCH_URL = EXTERNAL_WEB_SEARCH_URL
@@ -1019,26 +1093,41 @@ app.state.config.IMAGE_GENERATION_ENGINE = IMAGE_GENERATION_ENGINE
app.state.config.ENABLE_IMAGE_GENERATION = ENABLE_IMAGE_GENERATION
app.state.config.ENABLE_IMAGE_PROMPT_GENERATION = ENABLE_IMAGE_PROMPT_GENERATION

app.state.config.IMAGE_GENERATION_MODEL = IMAGE_GENERATION_MODEL
app.state.config.IMAGE_SIZE = IMAGE_SIZE
app.state.config.IMAGE_STEPS = IMAGE_STEPS

app.state.config.IMAGES_OPENAI_API_BASE_URL = IMAGES_OPENAI_API_BASE_URL
app.state.config.IMAGES_OPENAI_API_VERSION = IMAGES_OPENAI_API_VERSION
app.state.config.IMAGES_OPENAI_API_KEY = IMAGES_OPENAI_API_KEY
app.state.config.IMAGES_OPENAI_API_PARAMS = IMAGES_OPENAI_API_PARAMS

app.state.config.IMAGES_GEMINI_API_BASE_URL = IMAGES_GEMINI_API_BASE_URL
app.state.config.IMAGES_GEMINI_API_KEY = IMAGES_GEMINI_API_KEY

app.state.config.IMAGE_GENERATION_MODEL = IMAGE_GENERATION_MODEL
app.state.config.IMAGES_GEMINI_ENDPOINT_METHOD = IMAGES_GEMINI_ENDPOINT_METHOD

app.state.config.AUTOMATIC1111_BASE_URL = AUTOMATIC1111_BASE_URL
app.state.config.AUTOMATIC1111_API_AUTH = AUTOMATIC1111_API_AUTH
app.state.config.AUTOMATIC1111_CFG_SCALE = AUTOMATIC1111_CFG_SCALE
app.state.config.AUTOMATIC1111_SAMPLER = AUTOMATIC1111_SAMPLER
app.state.config.AUTOMATIC1111_SCHEDULER = AUTOMATIC1111_SCHEDULER
app.state.config.AUTOMATIC1111_PARAMS = AUTOMATIC1111_PARAMS

app.state.config.COMFYUI_BASE_URL = COMFYUI_BASE_URL
app.state.config.COMFYUI_API_KEY = COMFYUI_API_KEY
app.state.config.COMFYUI_WORKFLOW = COMFYUI_WORKFLOW
app.state.config.COMFYUI_WORKFLOW_NODES = COMFYUI_WORKFLOW_NODES

app.state.config.IMAGE_SIZE = IMAGE_SIZE
app.state.config.IMAGE_STEPS = IMAGE_STEPS

app.state.config.IMAGE_EDIT_ENGINE = IMAGE_EDIT_ENGINE
app.state.config.IMAGE_EDIT_MODEL = IMAGE_EDIT_MODEL
app.state.config.IMAGE_EDIT_SIZE = IMAGE_EDIT_SIZE
app.state.config.IMAGES_EDIT_OPENAI_API_BASE_URL = IMAGES_EDIT_OPENAI_API_BASE_URL
app.state.config.IMAGES_EDIT_OPENAI_API_KEY = IMAGES_EDIT_OPENAI_API_KEY
app.state.config.IMAGES_EDIT_OPENAI_API_VERSION = IMAGES_EDIT_OPENAI_API_VERSION
app.state.config.IMAGES_EDIT_GEMINI_API_BASE_URL = IMAGES_EDIT_GEMINI_API_BASE_URL
app.state.config.IMAGES_EDIT_GEMINI_API_KEY = IMAGES_EDIT_GEMINI_API_KEY
app.state.config.IMAGES_EDIT_COMFYUI_BASE_URL = IMAGES_EDIT_COMFYUI_BASE_URL
app.state.config.IMAGES_EDIT_COMFYUI_API_KEY = IMAGES_EDIT_COMFYUI_API_KEY
app.state.config.IMAGES_EDIT_COMFYUI_WORKFLOW = IMAGES_EDIT_COMFYUI_WORKFLOW
app.state.config.IMAGES_EDIT_COMFYUI_WORKFLOW_NODES = IMAGES_EDIT_COMFYUI_WORKFLOW_NODES


########################################

@@ -1064,11 +1153,21 @@ app.state.config.AUDIO_STT_AZURE_LOCALES = AUDIO_STT_AZURE_LOCALES
app.state.config.AUDIO_STT_AZURE_BASE_URL = AUDIO_STT_AZURE_BASE_URL
app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = AUDIO_STT_AZURE_MAX_SPEAKERS

app.state.config.TTS_OPENAI_API_BASE_URL = AUDIO_TTS_OPENAI_API_BASE_URL
app.state.config.TTS_OPENAI_API_KEY = AUDIO_TTS_OPENAI_API_KEY
app.state.config.AUDIO_STT_MISTRAL_API_KEY = AUDIO_STT_MISTRAL_API_KEY
app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL = AUDIO_STT_MISTRAL_API_BASE_URL
app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = (
    AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS
)

app.state.config.TTS_ENGINE = AUDIO_TTS_ENGINE

app.state.config.TTS_MODEL = AUDIO_TTS_MODEL
app.state.config.TTS_VOICE = AUDIO_TTS_VOICE

app.state.config.TTS_OPENAI_API_BASE_URL = AUDIO_TTS_OPENAI_API_BASE_URL
app.state.config.TTS_OPENAI_API_KEY = AUDIO_TTS_OPENAI_API_KEY
app.state.config.TTS_OPENAI_PARAMS = AUDIO_TTS_OPENAI_PARAMS

app.state.config.TTS_API_KEY = AUDIO_TTS_API_KEY
app.state.config.TTS_SPLIT_ON = AUDIO_TTS_SPLIT_ON

@@ -1122,6 +1221,7 @@ app.state.config.AUTOCOMPLETE_GENERATION_PROMPT_TEMPLATE = (
app.state.config.AUTOCOMPLETE_GENERATION_INPUT_MAX_LENGTH = (
    AUTOCOMPLETE_GENERATION_INPUT_MAX_LENGTH
)
app.state.config.VOICE_MODE_PROMPT_TEMPLATE = VOICE_MODE_PROMPT_TEMPLATE


########################################
@@ -1132,6 +1232,10 @@ app.state.config.AUTOCOMPLETE_GENERATION_INPUT_MAX_LENGTH = (

app.state.MODELS = {}

# Add the middleware to the app
if ENABLE_COMPRESSION_MIDDLEWARE:
    app.add_middleware(CompressMiddleware)


class RedirectMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):

@@ -1140,12 +1244,32 @@ class RedirectMiddleware(BaseHTTPMiddleware):
        path = request.url.path
        query_params = dict(parse_qs(urlparse(str(request.url)).query))

        redirect_params = {}

        # Check for the specific watch path and the presence of 'v' parameter
        if path.endswith("/watch") and "v" in query_params:
            # Extract the first 'v' parameter
            video_id = query_params["v"][0]
            encoded_video_id = urlencode({"youtube": video_id})
            redirect_url = f"/?{encoded_video_id}"
            youtube_video_id = query_params["v"][0]
            redirect_params["youtube"] = youtube_video_id

        if "shared" in query_params and len(query_params["shared"]) > 0:
            # PWA share_target support

            text = query_params["shared"][0]
            if text:
                urls = re.match(r"https://\S+", text)
                if urls:
                    from open_webui.retrieval.loaders.youtube import _parse_video_id

                    if youtube_video_id := _parse_video_id(urls[0]):
                        redirect_params["youtube"] = youtube_video_id
                    else:
                        redirect_params["load-url"] = urls[0]
                else:
                    redirect_params["q"] = text

        if redirect_params:
            redirect_url = f"/?{urlencode(redirect_params)}"
            return RedirectResponse(url=redirect_url)

        # Proceed with the normal flow of other requests
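Concretely, the middleware rewrites inbound URLs into root-level query parameters. A small standalone sketch of the mapping (URL values invented; the real code additionally resolves shared YouTube links via _parse_video_id, which is omitted here):

import re
from urllib.parse import urlencode

def redirect_for(path: str, query: dict) -> str | None:
    # Mirrors the dispatch logic above in plain-function form.
    params = {}
    if path.endswith("/watch") and "v" in query:
        params["youtube"] = query["v"][0]
    if query.get("shared"):
        text = query["shared"][0]
        if re.match(r"https://\S+", text):
            params["load-url"] = text  # shared non-YouTube URL case
        else:
            params["q"] = text  # plain shared text becomes a prompt
    return f"/?{urlencode(params)}" if params else None

print(redirect_for("/watch", {"v": ["dQw4w9WgXcQ"]}))  # /?youtube=dQw4w9WgXcQ
print(redirect_for("/", {"shared": ["hello world"]}))  # /?q=hello+world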
@@ -1153,14 +1277,53 @@ class RedirectMiddleware(BaseHTTPMiddleware):
        return response


# Add the middleware to the app
if ENABLE_COMPRESSION_MIDDLEWARE:
    app.add_middleware(CompressMiddleware)

app.add_middleware(RedirectMiddleware)
app.add_middleware(SecurityHeadersMiddleware)


class APIKeyRestrictionMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        auth_header = request.headers.get("Authorization")
        token = None

        if auth_header:
            scheme, token = auth_header.split(" ")

        # Only apply restrictions if an sk- API key is used
        if token and token.startswith("sk-"):
            # Check if restrictions are enabled
            if request.app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS:
                allowed_paths = [
                    path.strip()
                    for path in str(
                        request.app.state.config.API_KEYS_ALLOWED_ENDPOINTS
                    ).split(",")
                    if path.strip()
                ]

                request_path = request.url.path

                # Match exact path or prefix path
                is_allowed = any(
                    request_path == allowed or request_path.startswith(allowed + "/")
                    for allowed in allowed_paths
                )

                if not is_allowed:
                    return JSONResponse(
                        status_code=status.HTTP_403_FORBIDDEN,
                        content={
                            "detail": "API key not allowed to access this endpoint."
                        },
                    )

        response = await call_next(request)
        return response


app.add_middleware(APIKeyRestrictionMiddleware)
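The allow-list is a comma-separated string of paths, and matching is exact-or-prefix with a "/" boundary so that "/api/models" does not accidentally admit "/api/modelsx". A quick standalone check of that rule (values invented):

def is_allowed(request_path: str, allowed_endpoints: str) -> bool:
    # Same parsing and matching rule as the middleware above.
    allowed_paths = [p.strip() for p in allowed_endpoints.split(",") if p.strip()]
    return any(
        request_path == allowed or request_path.startswith(allowed + "/")
        for allowed in allowed_paths
    )

ALLOWED = "/api/models,/api/chat/completions"

print(is_allowed("/api/models", ALLOWED))            # True (exact match)
print(is_allowed("/api/chat/completions", ALLOWED))  # True
print(is_allowed("/api/chats", ALLOWED))             # False (no '/' boundary match)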
@app.middleware("http")
async def commit_session_after_request(request: Request, call_next):
    response = await call_next(request)

@@ -1176,7 +1339,7 @@ async def check_url(request: Request, call_next):
        request.headers.get("Authorization")
    )

    request.state.enable_api_key = app.state.config.ENABLE_API_KEY
    request.state.enable_api_keys = app.state.config.ENABLE_API_KEYS
    response = await call_next(request)
    process_time = int(time.time()) - start_time
    response.headers["X-Process-Time"] = str(process_time)
@@ -1251,7 +1414,7 @@ app.include_router(
app.include_router(utils.router, prefix="/api/v1/utils", tags=["utils"])

# SCIM 2.0 API for identity management
if SCIM_ENABLED:
if ENABLE_SCIM:
    app.include_router(scim.router, prefix="/api/v1/scim/v2", tags=["scim"])


@@ -1280,33 +1443,6 @@ if audit_level != AuditLevel.NONE:
async def get_models(
    request: Request, refresh: bool = False, user=Depends(get_verified_user)
):
    def get_filtered_models(models, user):
        filtered_models = []
        for model in models:
            if model.get("arena"):
                if has_access(
                    user.id,
                    type="read",
                    access_control=model.get("info", {})
                    .get("meta", {})
                    .get("access_control", {}),
                ):
                    filtered_models.append(model)
                continue

            model_info = Models.get_model_by_id(model["id"])
            if model_info:
                if (
                    (user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL)
                    or user.id == model_info.user_id
                    or has_access(
                        user.id, type="read", access_control=model_info.access_control
                    )
                ):
                    filtered_models.append(model)

        return filtered_models

    all_models = await get_all_models(request, refresh=refresh, user=user)

    models = []

@@ -1342,12 +1478,7 @@ async def get_models(
        )
    )

    # Filter out models that the user does not have access to
    if (
        user.role == "user"
        or (user.role == "admin" and not BYPASS_ADMIN_ACCESS_CONTROL)
    ) and not BYPASS_MODEL_ACCESS_CONTROL:
        models = get_filtered_models(models, user)
    models = get_filtered_models(models, user)

    log.debug(
        f"/api/models returned filtered models accessible to the user: {json.dumps([model.get('id') for model in models])}"
@@ -1442,6 +1573,9 @@ async def chat_completion(
    reasoning_tags = form_data.get("params", {}).get("reasoning_tags")

    # Model Params
    if model_info_params.get("stream_response") is not None:
        form_data["stream"] = model_info_params.get("stream_response")

    if model_info_params.get("stream_delta_chunk_size"):
        stream_delta_chunk_size = model_info_params.get("stream_delta_chunk_size")


@@ -1476,7 +1610,7 @@ async def chat_completion(
    }

    if metadata.get("chat_id") and (user and user.role != "admin"):
        if metadata["chat_id"] != "local":
        if not metadata["chat_id"].startswith("local:"):
            chat = Chats.get_chat_by_id_and_user_id(metadata["chat_id"], user.id)
            if chat is None:
                raise HTTPException(

@@ -1503,13 +1637,14 @@ async def chat_completion(
        response = await chat_completion_handler(request, form_data, user)
        if metadata.get("chat_id") and metadata.get("message_id"):
            try:
                Chats.upsert_message_to_chat_by_id_and_message_id(
                    metadata["chat_id"],
                    metadata["message_id"],
                    {
                        "model": model_id,
                    },
                )
                if not metadata["chat_id"].startswith("local:"):
                    Chats.upsert_message_to_chat_by_id_and_message_id(
                        metadata["chat_id"],
                        metadata["message_id"],
                        {
                            "model": model_id,
                        },
                    )
            except:
                pass


@@ -1520,30 +1655,50 @@ async def chat_completion(
        log.info("Chat processing was cancelled")
        try:
            event_emitter = get_event_emitter(metadata)
            await event_emitter(
                {"type": "task-cancelled"},
            await asyncio.shield(
                event_emitter(
                    {"type": "chat:tasks:cancel"},
                )
            )
        except Exception as e:
            pass
        finally:
            raise  # re-raise to ensure proper task cancellation handling
    except Exception as e:
        log.debug(f"Error processing chat payload: {e}")
        if metadata.get("chat_id") and metadata.get("message_id"):
            # Update the chat message with the error
            try:
                Chats.upsert_message_to_chat_by_id_and_message_id(
                    metadata["chat_id"],
                    metadata["message_id"],
                if not metadata["chat_id"].startswith("local:"):
                    Chats.upsert_message_to_chat_by_id_and_message_id(
                        metadata["chat_id"],
                        metadata["message_id"],
                        {
                            "error": {"content": str(e)},
                        },
                    )

                event_emitter = get_event_emitter(metadata)
                await event_emitter(
                    {
                        "error": {"content": str(e)},
                    },
                        "type": "chat:message:error",
                        "data": {"error": {"content": str(e)}},
                    }
                )
                await event_emitter(
                    {"type": "chat:tasks:cancel"},
                )

            except:
                pass

        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=str(e),
        )
    finally:
        try:
            if mcp_clients := metadata.get("mcp_clients"):
                for client in reversed(mcp_clients.values()):
                    await client.disconnect()
        except Exception as e:
            log.debug(f"Error cleaning up: {e}")
            pass

    if (
        metadata.get("session_id")
@@ -1644,8 +1799,18 @@ async def list_tasks_by_chat_id_endpoint(
@app.get("/api/config")
async def get_app_config(request: Request):
    user = None
    if "token" in request.cookies:
    token = None

    auth_header = request.headers.get("Authorization")
    if auth_header:
        cred = get_http_authorization_cred(auth_header)
        if cred:
            token = cred.credentials

    if not token and "token" in request.cookies:
        token = request.cookies.get("token")

    if token:
        try:
            data = decode_token(token)
        except Exception as e:

@@ -1680,7 +1845,7 @@ async def get_app_config(request: Request):
            "auth_trusted_header": bool(app.state.AUTH_TRUSTED_EMAIL_HEADER),
            "enable_signup_password_confirmation": ENABLE_SIGNUP_PASSWORD_CONFIRMATION,
            "enable_ldap": app.state.config.ENABLE_LDAP,
            "enable_api_key": app.state.config.ENABLE_API_KEY,
            "enable_api_keys": app.state.config.ENABLE_API_KEYS,
            "enable_signup": app.state.config.ENABLE_SIGNUP,
            "enable_login_form": app.state.config.ENABLE_LOGIN_FORM,
            "enable_websocket": ENABLE_WEBSOCKET_SUPPORT,

@@ -1703,6 +1868,14 @@ async def get_app_config(request: Request):
                "enable_google_drive_integration": app.state.config.ENABLE_GOOGLE_DRIVE_INTEGRATION,
                "enable_onedrive_integration": app.state.config.ENABLE_ONEDRIVE_INTEGRATION,
                "translation_languages": app.state.config.TRANSLATION_LANGUAGES,
                **(
                    {
                        "enable_onedrive_personal": ENABLE_ONEDRIVE_PERSONAL,
                        "enable_onedrive_business": ENABLE_ONEDRIVE_BUSINESS,
                    }
                    if app.state.config.ENABLE_ONEDRIVE_INTEGRATION
                    else {}
                ),
            }
            if user is not None
            else {}

@@ -1711,6 +1884,7 @@ async def get_app_config(request: Request):
        **(
            {
                "default_models": app.state.config.DEFAULT_MODELS,
                "default_pinned_models": app.state.config.DEFAULT_PINNED_MODELS,
                "default_prompt_suggestions": app.state.config.DEFAULT_PROMPT_SUGGESTIONS,
                "user_count": user_count,
                "code": {

@@ -1740,7 +1914,8 @@ async def get_app_config(request: Request):
                "api_key": GOOGLE_DRIVE_API_KEY.value,
            },
            "onedrive": {
                "client_id": ONEDRIVE_CLIENT_ID.value,
                "client_id_personal": ONEDRIVE_CLIENT_ID_PERSONAL,
                "client_id_business": ONEDRIVE_CLIENT_ID_BUSINESS,
                "sharepoint_url": ONEDRIVE_SHAREPOINT_URL.value,
                "sharepoint_tenant_id": ONEDRIVE_SHAREPOINT_TENANT_ID.value,
            },
@@ -1860,17 +2035,177 @@ async def get_current_usage(user=Depends(get_verified_user)):
# OAuth Login & Callback
############################

# SessionMiddleware is used by authlib for oauth
if len(OAUTH_PROVIDERS) > 0:

# Initialize OAuth client manager with any MCP tool servers using OAuth 2.1
if len(app.state.config.TOOL_SERVER_CONNECTIONS) > 0:
    for tool_server_connection in app.state.config.TOOL_SERVER_CONNECTIONS:
        if tool_server_connection.get("type", "openapi") == "mcp":
            server_id = tool_server_connection.get("info", {}).get("id")
            auth_type = tool_server_connection.get("auth_type", "none")

            if server_id and auth_type == "oauth_2.1":
                oauth_client_info = tool_server_connection.get("info", {}).get(
                    "oauth_client_info", ""
                )

                try:
                    oauth_client_info = decrypt_data(oauth_client_info)
                    app.state.oauth_client_manager.add_client(
                        f"mcp:{server_id}",
                        OAuthClientInformationFull(**oauth_client_info),
                    )
                except Exception as e:
                    log.error(
                        f"Error adding OAuth client for MCP tool server {server_id}: {e}"
                    )
                    pass

try:
    if ENABLE_STAR_SESSIONS_MIDDLEWARE:
        redis_session_store = RedisStore(
            url=REDIS_URL,
            prefix=(f"{REDIS_KEY_PREFIX}:session:" if REDIS_KEY_PREFIX else "session:"),
        )

        app.add_middleware(SessionAutoloadMiddleware)
        app.add_middleware(
            StarSessionsMiddleware,
            store=redis_session_store,
            cookie_name="owui-session",
            cookie_same_site=WEBUI_SESSION_COOKIE_SAME_SITE,
            cookie_https_only=WEBUI_SESSION_COOKIE_SECURE,
        )
        log.info("Using Redis for session")
    else:
        raise ValueError("No Redis URL provided")
except Exception as e:
    app.add_middleware(
        SessionMiddleware,
        secret_key=WEBUI_SECRET_KEY,
        session_cookie="oui-session",
        session_cookie="owui-session",
        same_site=WEBUI_SESSION_COOKIE_SAME_SITE,
        https_only=WEBUI_SESSION_COOKIE_SECURE,
    )


async def register_client(request, client_id: str) -> bool:
    server_type, server_id = client_id.split(":", 1)

    connection = None
    connection_idx = None

    for idx, conn in enumerate(request.app.state.config.TOOL_SERVER_CONNECTIONS or []):
        if conn.get("type", "openapi") == server_type:
            info = conn.get("info", {})
            if info.get("id") == server_id:
                connection = conn
                connection_idx = idx
                break

    if connection is None or connection_idx is None:
        log.warning(
            f"Unable to locate MCP tool server configuration for client {client_id} during re-registration"
        )
        return False

    server_url = connection.get("url")
    oauth_server_key = (connection.get("config") or {}).get("oauth_server_key")

    try:
        oauth_client_info = (
            await get_oauth_client_info_with_dynamic_client_registration(
                request,
                client_id,
                server_url,
                oauth_server_key,
            )
        )
    except Exception as e:
        log.error(f"Dynamic client re-registration failed for {client_id}: {e}")
        return False

    try:
        request.app.state.config.TOOL_SERVER_CONNECTIONS[connection_idx] = {
            **connection,
            "info": {
                **connection.get("info", {}),
                "oauth_client_info": encrypt_data(
                    oauth_client_info.model_dump(mode="json")
                ),
            },
        }
    except Exception as e:
        log.error(
            f"Failed to persist updated OAuth client info for tool server {client_id}: {e}"
        )
        return False

    oauth_client_manager.remove_client(client_id)
    oauth_client_manager.add_client(client_id, oauth_client_info)
    log.info(f"Re-registered OAuth client {client_id} for tool server")
    return True


@app.get("/oauth/clients/{client_id}/authorize")
async def oauth_client_authorize(
    client_id: str,
    request: Request,
    response: Response,
    user=Depends(get_verified_user),
):
    # ensure_valid_client_registration
    client = oauth_client_manager.get_client(client_id)
    client_info = oauth_client_manager.get_client_info(client_id)
    if client is None or client_info is None:
        raise HTTPException(status.HTTP_404_NOT_FOUND)

    if not await oauth_client_manager._preflight_authorization_url(client, client_info):
        log.info(
            "Detected invalid OAuth client %s; attempting re-registration",
            client_id,
        )

        registered = await register_client(request, client_id)
        if not registered:
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail="Failed to re-register OAuth client",
            )

        client = oauth_client_manager.get_client(client_id)
        client_info = oauth_client_manager.get_client_info(client_id)
        if client is None or client_info is None:
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail="OAuth client unavailable after re-registration",
            )

        if not await oauth_client_manager._preflight_authorization_url(
            client, client_info
        ):
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail="OAuth client registration is still invalid after re-registration",
            )

    return await oauth_client_manager.handle_authorize(request, client_id=client_id)


@app.get("/oauth/clients/{client_id}/callback")
async def oauth_client_callback(
    client_id: str,
    request: Request,
    response: Response,
    user=Depends(get_verified_user),
):
    return await oauth_client_manager.handle_callback(
        request,
        client_id=client_id,
        user_id=user.id if user else None,
        response=response,
    )


@app.get("/oauth/{provider}/login")
async def oauth_login(provider: str, request: Request):
    return await oauth_manager.handle_login(request, provider)
@@ -1882,8 +2217,9 @@ async def oauth_login(provider: str, request: Request):
# - This is considered insecure in general, as OAuth providers do not always verify email addresses
# 3. If there is no user, and ENABLE_OAUTH_SIGNUP is true, create a user
# - Email addresses are considered unique, so we fail registration if the email address is already taken
@app.get("/oauth/{provider}/callback")
async def oauth_callback(provider: str, request: Request, response: Response):
@app.get("/oauth/{provider}/login/callback")
@app.get("/oauth/{provider}/callback")  # Legacy endpoint
async def oauth_login_callback(provider: str, request: Request, response: Response):
    return await oauth_manager.handle_callback(request, provider, response)
@@ -1913,6 +2249,11 @@ async def get_manifest_json():
                "purpose": "maskable",
            },
        ],
        "share_target": {
            "action": "/",
            "method": "GET",
            "params": {"text": "shared"},
        },
    }
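With this manifest entry, content shared to the installed PWA arrives as a plain GET on the root path with the shared text in the "shared" query parameter, which the RedirectMiddleware above then rewrites. A tiny sketch of the request the OS share sheet would issue (URL value invented):

from urllib.parse import urlencode

shared_text = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
# The browser issues: GET /?shared=<shared text>
print(f"/?{urlencode({'shared': shared_text})}")
# -> /?shared=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DdQw4w9WgXcQ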
@@ -0,0 +1,146 @@
"""add_group_member_table

Revision ID: 37f288994c47
Revises: a5c220713937
Create Date: 2025-11-17 03:45:25.123939

"""

import uuid
import time
import json
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "37f288994c47"
down_revision: Union[str, None] = "a5c220713937"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # 1. Create new table
    op.create_table(
        "group_member",
        sa.Column("id", sa.Text(), primary_key=True, unique=True, nullable=False),
        sa.Column(
            "group_id",
            sa.Text(),
            sa.ForeignKey("group.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column(
            "user_id",
            sa.Text(),
            sa.ForeignKey("user.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("created_at", sa.BigInteger(), nullable=True),
        sa.Column("updated_at", sa.BigInteger(), nullable=True),
        sa.UniqueConstraint("group_id", "user_id", name="uq_group_member_group_user"),
    )

    connection = op.get_bind()

    # 2. Read existing group with user_ids JSON column
    group_table = sa.Table(
        "group",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("user_ids", sa.JSON()),  # JSON stored as text in SQLite + PG
    )

    results = connection.execute(
        sa.select(group_table.c.id, group_table.c.user_ids)
    ).fetchall()

    print(results)

    # 3. Insert members into group_member table
    gm_table = sa.Table(
        "group_member",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("group_id", sa.Text()),
        sa.Column("user_id", sa.Text()),
        sa.Column("created_at", sa.BigInteger()),
        sa.Column("updated_at", sa.BigInteger()),
    )

    now = int(time.time())
    for group_id, user_ids in results:
        if not user_ids:
            continue

        if isinstance(user_ids, str):
            try:
                user_ids = json.loads(user_ids)
            except Exception:
                continue  # skip invalid JSON

        if not isinstance(user_ids, list):
            continue

        rows = [
            {
                "id": str(uuid.uuid4()),
                "group_id": group_id,
                "user_id": uid,
                "created_at": now,
                "updated_at": now,
            }
            for uid in user_ids
        ]

        if rows:
            connection.execute(gm_table.insert(), rows)

    # 4. Optionally drop the old column
    with op.batch_alter_table("group") as batch:
        batch.drop_column("user_ids")


def downgrade():
    # Reverse: restore user_ids column
    with op.batch_alter_table("group") as batch:
        batch.add_column(sa.Column("user_ids", sa.JSON()))

    connection = op.get_bind()
    gm_table = sa.Table(
        "group_member",
        sa.MetaData(),
        sa.Column("group_id", sa.Text()),
        sa.Column("user_id", sa.Text()),
        sa.Column("created_at", sa.BigInteger()),
        sa.Column("updated_at", sa.BigInteger()),
    )

    group_table = sa.Table(
        "group",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("user_ids", sa.JSON()),
    )

    # Build JSON arrays again
    results = connection.execute(sa.select(group_table.c.id)).fetchall()

    for (group_id,) in results:
        members = connection.execute(
            sa.select(gm_table.c.user_id).where(gm_table.c.group_id == group_id)
        ).fetchall()

        member_ids = [m[0] for m in members]

        connection.execute(
            group_table.update()
            .where(group_table.c.id == group_id)
            .values(user_ids=member_ids)
        )

    # Drop the new table
    op.drop_table("group_member")
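The data move in upgrade() flattens each group's JSON user_ids array into one group_member row per member. A tiny standalone sketch of that transformation, independent of Alembic (IDs invented):

import json
import time
import uuid

groups = [("g1", json.dumps(["u1", "u2"])), ("g2", None)]

now = int(time.time())
rows = []
for group_id, user_ids in groups:
    if not user_ids:
        continue  # groups without members produce no rows
    for uid in json.loads(user_ids):
        rows.append(
            {
                "id": str(uuid.uuid4()),
                "group_id": group_id,
                "user_id": uid,
                "created_at": now,
                "updated_at": now,
            }
        )

print([(r["group_id"], r["user_id"]) for r in rows])  # [('g1', 'u1'), ('g1', 'u2')]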
@@ -0,0 +1,52 @@
"""Add oauth_session table

Revision ID: 38d63c18f30f
Revises: 3af16a1c9fb6
Create Date: 2025-09-08 14:19:59.583921

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "38d63c18f30f"
down_revision: Union[str, None] = "3af16a1c9fb6"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # Create oauth_session table
    op.create_table(
        "oauth_session",
        sa.Column("id", sa.Text(), nullable=False),
        sa.Column("user_id", sa.Text(), nullable=False),
        sa.Column("provider", sa.Text(), nullable=False),
        sa.Column("token", sa.Text(), nullable=False),
        sa.Column("expires_at", sa.BigInteger(), nullable=False),
        sa.Column("created_at", sa.BigInteger(), nullable=False),
        sa.Column("updated_at", sa.BigInteger(), nullable=False),
        sa.PrimaryKeyConstraint("id"),
        sa.ForeignKeyConstraint(["user_id"], ["user.id"], ondelete="CASCADE"),
    )

    # Create indexes for better performance
    op.create_index("idx_oauth_session_user_id", "oauth_session", ["user_id"])
    op.create_index("idx_oauth_session_expires_at", "oauth_session", ["expires_at"])
    op.create_index(
        "idx_oauth_session_user_provider", "oauth_session", ["user_id", "provider"]
    )


def downgrade() -> None:
    # Drop indexes first
    op.drop_index("idx_oauth_session_user_provider", table_name="oauth_session")
    op.drop_index("idx_oauth_session_expires_at", table_name="oauth_session")
    op.drop_index("idx_oauth_session_user_id", table_name="oauth_session")

    # Drop the table
    op.drop_table("oauth_session")
@@ -0,0 +1,34 @@
"""Add reply_to_id column to message

Revision ID: a5c220713937
Revises: 38d63c18f30f
Create Date: 2025-09-27 02:24:18.058455

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision: str = "a5c220713937"
down_revision: Union[str, None] = "38d63c18f30f"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # Add 'reply_to_id' column to the 'message' table for replying to messages
    op.add_column(
        "message",
        sa.Column("reply_to_id", sa.Text(), nullable=True),
    )
    pass


def downgrade() -> None:
    # Remove 'reply_to_id' column from the 'message' table
    op.drop_column("message", "reply_to_id")

    pass
@@ -7,7 +7,6 @@ from open_webui.models.users import UserModel, Users
from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel
from sqlalchemy import Boolean, Column, String, Text
from open_webui.utils.auth import verify_password

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])

@@ -122,7 +121,9 @@ class AuthsTable:
        else:
            return None

    def authenticate_user(self, email: str, password: str) -> Optional[UserModel]:
    def authenticate_user(
        self, email: str, verify_password: callable
    ) -> Optional[UserModel]:
        log.info(f"authenticate_user: {email}")

        user = Users.get_user_by_email(email)

@@ -133,7 +134,7 @@ class AuthsTable:
        with get_db() as db:
            auth = db.query(Auth).filter_by(id=user.id, active=True).first()
            if auth:
                if verify_password(password, auth.password):
                if verify_password(auth.password):
                    return user
            else:
                return None
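After this refactor the table no longer imports password hashing; callers inject the check as a one-argument callable over the stored hash. A sketch of the new call shape, assuming the module's Auths singleton and with the verification helper invented for illustration:

def my_verify(plain: str, hashed: str) -> bool:
    # stand-in for a real bcrypt/passlib verification function
    return hashed == f"hashed::{plain}"

plain_password = "secret"

# The table only ever sees the callable; the plaintext stays with the caller.
user = Auths.authenticate_user(
    "user@example.com",
    lambda hashed: my_verify(plain_password, hashed),
)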
@@ -57,6 +57,10 @@ class ChannelModel(BaseModel):
####################


class ChannelResponse(ChannelModel):
    write_access: bool = False


class ChannelForm(BaseModel):
    name: str
    description: Optional[str] = None
@@ -236,7 +236,7 @@ class ChatTable:

        return chat.chat.get("title", "New Chat")

    def get_messages_by_chat_id(self, id: str) -> Optional[dict]:
    def get_messages_map_by_chat_id(self, id: str) -> Optional[dict]:
        chat = self.get_chat_by_id(id)
        if chat is None:
            return None

@@ -297,6 +297,27 @@ class ChatTable:
        chat["history"] = history
        return self.update_chat_by_id(id, chat)

    def add_message_files_by_id_and_message_id(
        self, id: str, message_id: str, files: list[dict]
    ) -> list[dict]:
        chat = self.get_chat_by_id(id)
        if chat is None:
            return None

        chat = chat.chat
        history = chat.get("history", {})

        message_files = []

        if message_id in history.get("messages", {}):
            message_files = history["messages"][message_id].get("files", [])
            message_files = message_files + files
            history["messages"][message_id]["files"] = message_files

        chat["history"] = history
        self.update_chat_by_id(id, chat)
        return message_files

    def insert_shared_chat_by_chat_id(self, chat_id: str) -> Optional[ChatModel]:
        with get_db() as db:
            # Get the existing chat to share

@@ -366,6 +387,15 @@ class ChatTable:
        except Exception:
            return False

    def unarchive_all_chats_by_user_id(self, user_id: str) -> bool:
        try:
            with get_db() as db:
                db.query(Chat).filter_by(user_id=user_id).update({"archived": False})
                db.commit()
                return True
        except Exception:
            return False

    def update_chat_share_id_by_id(
        self, id: str, share_id: Optional[str]
    ) -> Optional[ChatModel]:

@@ -431,7 +461,10 @@ class ChatTable:
            order_by = filter.get("order_by")
            direction = filter.get("direction")

            if order_by and direction and getattr(Chat, order_by):
            if order_by and direction:
                if not getattr(Chat, order_by, None):
                    raise ValueError("Invalid order_by field")

                if direction.lower() == "asc":
                    query = query.order_by(getattr(Chat, order_by).asc())
                elif direction.lower() == "desc":

@@ -492,12 +525,19 @@ class ChatTable:
        self,
        user_id: str,
        include_archived: bool = False,
        include_folders: bool = False,
        include_pinned: bool = False,
        skip: Optional[int] = None,
        limit: Optional[int] = None,
    ) -> list[ChatTitleIdResponse]:
        with get_db() as db:
            query = db.query(Chat).filter_by(user_id=user_id).filter_by(folder_id=None)
            query = query.filter(or_(Chat.pinned == False, Chat.pinned == None))
            query = db.query(Chat).filter_by(user_id=user_id)

            if not include_folders:
                query = query.filter_by(folder_id=None)

            if not include_pinned:
                query = query.filter(or_(Chat.pinned == False, Chat.pinned == None))

            if not include_archived:
                query = query.filter_by(archived=False)

@@ -746,15 +786,20 @@ class ChatTable:
            )

        elif dialect_name == "postgresql":
            # PostgreSQL relies on proper JSON query for search
            # PostgreSQL doesn't allow null bytes in text. We filter those out by checking
            # the JSON representation for \u0000 before attempting text extraction
            postgres_content_sql = (
                "EXISTS ("
                " SELECT 1 "
                " FROM json_array_elements(Chat.chat->'messages') AS message "
                " WHERE LOWER(message->>'content') LIKE '%' || :content_key || '%'"
                " WHERE message->'content' IS NOT NULL "
                " AND (message->'content')::text NOT LIKE '%\\u0000%' "
                " AND LOWER(message->>'content') LIKE '%' || :content_key || '%'"
                ")"
            )
            postgres_content_clause = text(postgres_content_sql)
            # Also filter out chats with null bytes in title
            query = query.filter(text("Chat.title::text NOT LIKE '%\\x00%'"))
            query = query.filter(
                or_(
                    Chat.title.ilike(bindparam("title_key")),

@@ -805,7 +850,7 @@ class ChatTable:
        return [ChatModel.model_validate(chat) for chat in all_chats]

    def get_chats_by_folder_id_and_user_id(
        self, folder_id: str, user_id: str
        self, folder_id: str, user_id: str, skip: int = 0, limit: int = 60
    ) -> list[ChatModel]:
        with get_db() as db:
            query = db.query(Chat).filter_by(folder_id=folder_id, user_id=user_id)

@@ -814,6 +859,11 @@ class ChatTable:

            query = query.order_by(Chat.updated_at.desc())

            if skip:
                query = query.offset(skip)
            if limit:
                query = query.limit(limit)

            all_chats = query.all()
            return [ChatModel.model_validate(chat) for chat in all_chats]


@@ -943,6 +993,16 @@ class ChatTable:

        return count

    def count_chats_by_folder_id_and_user_id(self, folder_id: str, user_id: str) -> int:
        with get_db() as db:
            query = db.query(Chat).filter_by(user_id=user_id)

            query = query.filter_by(folder_id=folder_id)
            count = query.count()

            log.info(f"Count of chats for folder '{folder_id}': {count}")
            return count

    def delete_tag_by_id_and_user_id_and_tag_name(
        self, id: str, user_id: str, tag_name: str
    ) -> bool:
backend/open_webui/models/files.py

@@ -82,6 +82,7 @@ class FileModelResponse(BaseModel):

 class FileMetadataResponse(BaseModel):
     id: str
+    hash: Optional[str] = None
     meta: dict
     created_at: int  # timestamp in epoch
     updated_at: int  # timestamp in epoch

@@ -97,6 +98,12 @@ class FileForm(BaseModel):
     access_control: Optional[dict] = None


+class FileUpdateForm(BaseModel):
+    hash: Optional[str] = None
+    data: Optional[dict] = None
+    meta: Optional[dict] = None
+
+
 class FilesTable:
     def insert_new_file(self, user_id: str, form_data: FileForm) -> Optional[FileModel]:
         with get_db() as db:

@@ -130,12 +137,24 @@ class FilesTable:
         except Exception:
             return None

+    def get_file_by_id_and_user_id(self, id: str, user_id: str) -> Optional[FileModel]:
+        with get_db() as db:
+            try:
+                file = db.query(File).filter_by(id=id, user_id=user_id).first()
+                if file:
+                    return FileModel.model_validate(file)
+                else:
+                    return None
+            except Exception:
+                return None
+
     def get_file_metadata_by_id(self, id: str) -> Optional[FileMetadataResponse]:
         with get_db() as db:
             try:
                 file = db.get(File, id)
                 return FileMetadataResponse(
                     id=file.id,
+                    hash=file.hash,
                     meta=file.meta,
                     created_at=file.created_at,
                     updated_at=file.updated_at,

@@ -147,6 +166,15 @@ class FilesTable:
         with get_db() as db:
             return [FileModel.model_validate(file) for file in db.query(File).all()]

+    def check_access_by_user_id(self, id, user_id, permission="write") -> bool:
+        file = self.get_file_by_id(id)
+        if not file:
+            return False
+        if file.user_id == user_id:
+            return True
+        # Implement additional access control logic here as needed
+        return False
+
     def get_files_by_ids(self, ids: list[str]) -> list[FileModel]:
         with get_db() as db:
             return [

@@ -162,11 +190,14 @@ class FilesTable:
             return [
                 FileMetadataResponse(
                     id=file.id,
+                    hash=file.hash,
                     meta=file.meta,
                     created_at=file.created_at,
                     updated_at=file.updated_at,
                 )
-                for file in db.query(File)
+                for file in db.query(
+                    File.id, File.hash, File.meta, File.created_at, File.updated_at
+                )
                 .filter(File.id.in_(ids))
                 .order_by(File.updated_at.desc())
                 .all()

@@ -179,6 +210,29 @@ class FilesTable:
                 for file in db.query(File).filter_by(user_id=user_id).all()
             ]

+    def update_file_by_id(
+        self, id: str, form_data: FileUpdateForm
+    ) -> Optional[FileModel]:
+        with get_db() as db:
+            try:
+                file = db.query(File).filter_by(id=id).first()
+
+                if form_data.hash is not None:
+                    file.hash = form_data.hash
+
+                if form_data.data is not None:
+                    file.data = {**(file.data if file.data else {}), **form_data.data}
+
+                if form_data.meta is not None:
+                    file.meta = {**(file.meta if file.meta else {}), **form_data.meta}
+
+                file.updated_at = int(time.time())
+                db.commit()
+                return FileModel.model_validate(file)
+            except Exception as e:
+                log.exception(f"Error updating file completely by id: {e}")
+                return None
+
     def update_file_hash_by_id(self, id: str, hash: str) -> Optional[FileModel]:
         with get_db() as db:
             try:
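The partial-update semantics of `update_file_by_id` hinge on two details: fields left as `None` are skipped entirely, and dict fields are shallow-merged into the stored value instead of replacing it. A tiny standalone illustration (values are made up):

stored_meta = {"name": "report.pdf", "size": 1024}
incoming_meta = {"size": 2048, "collection_name": "docs"}

# Shallow merge: incoming keys win, untouched keys survive.
merged = {**(stored_meta or {}), **(incoming_meta or {})}
print(merged)  # {'name': 'report.pdf', 'size': 2048, 'collection_name': 'docs'}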
backend/open_webui/models/folders.py

@@ -50,6 +50,20 @@ class FolderModel(BaseModel):
     model_config = ConfigDict(from_attributes=True)


+class FolderMetadataResponse(BaseModel):
+    icon: Optional[str] = None
+
+
+class FolderNameIdResponse(BaseModel):
+    id: str
+    name: str
+    meta: Optional[FolderMetadataResponse] = None
+    parent_id: Optional[str] = None
+    is_expanded: bool = False
+    created_at: int
+    updated_at: int
+
+
 ####################
 # Forms
 ####################

@@ -58,6 +72,14 @@ class FolderModel(BaseModel):
 class FolderForm(BaseModel):
     name: str
     data: Optional[dict] = None
     meta: Optional[dict] = None
     model_config = ConfigDict(extra="allow")


+class FolderUpdateForm(BaseModel):
+    name: Optional[str] = None
+    data: Optional[dict] = None
+    meta: Optional[dict] = None
+    model_config = ConfigDict(extra="allow")
+
+
@@ -191,7 +213,7 @@ class FolderTable:
             return

     def update_folder_by_id_and_user_id(
-        self, id: str, user_id: str, form_data: FolderForm
+        self, id: str, user_id: str, form_data: FolderUpdateForm
     ) -> Optional[FolderModel]:
         try:
             with get_db() as db:

@@ -222,8 +244,13 @@ class FolderTable:
                         **form_data["data"],
                     }

-                folder.updated_at = int(time.time())
+                if "meta" in form_data:
+                    folder.meta = {
+                        **(folder.meta or {}),
+                        **form_data["meta"],
+                    }
+
+                folder.updated_at = int(time.time())
                 db.commit()

                 return FolderModel.model_validate(folder)
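Splitting `FolderForm` (required `name`) from `FolderUpdateForm` (everything optional) is what enables partial updates; the `"meta" in form_data` check in the update path suggests the form is dumped with unset fields excluded before it reaches the table method. A minimal sketch of that pattern, with a hypothetical payload:

from typing import Optional
from pydantic import BaseModel, ConfigDict


class FolderUpdateForm(BaseModel):
    name: Optional[str] = None
    data: Optional[dict] = None
    meta: Optional[dict] = None
    model_config = ConfigDict(extra="allow")


form = FolderUpdateForm(meta={"icon": "folder"})  # client only sent `meta`
form_data = form.model_dump(exclude_unset=True)

# Only fields the client actually sent are present, so a membership check
# distinguishes "not provided" from "explicitly set to None".
print("meta" in form_data)  # True
print("name" in form_data)  # False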
backend/open_webui/models/functions.py

@@ -3,7 +3,7 @@ import time
 from typing import Optional

 from open_webui.internal.db import Base, JSONField, get_db
-from open_webui.models.users import Users
+from open_webui.models.users import Users, UserModel
 from open_webui.env import SRC_LOG_LEVELS
 from pydantic import BaseModel, ConfigDict
 from sqlalchemy import BigInteger, Boolean, Column, String, Text, Index

@@ -37,6 +37,7 @@ class Function(Base):
 class FunctionMeta(BaseModel):
     description: Optional[str] = None
+    manifest: Optional[dict] = {}
     model_config = ConfigDict(extra="allow")


 class FunctionModel(BaseModel):

@@ -54,11 +55,31 @@ class FunctionModel(BaseModel):
     model_config = ConfigDict(from_attributes=True)


+class FunctionWithValvesModel(BaseModel):
+    id: str
+    user_id: str
+    name: str
+    type: str
+    content: str
+    meta: FunctionMeta
+    valves: Optional[dict] = None
+    is_active: bool = False
+    is_global: bool = False
+    updated_at: int  # timestamp in epoch
+    created_at: int  # timestamp in epoch
+
+    model_config = ConfigDict(from_attributes=True)
+
+
 ####################
 # Forms
 ####################


+class FunctionUserResponse(FunctionModel):
+    user: Optional[UserModel] = None
+
+
 class FunctionResponse(BaseModel):
     id: str
     user_id: str

@@ -111,8 +132,8 @@ class FunctionsTable:
         return None

     def sync_functions(
-        self, user_id: str, functions: list[FunctionModel]
-    ) -> list[FunctionModel]:
+        self, user_id: str, functions: list[FunctionWithValvesModel]
+    ) -> list[FunctionWithValvesModel]:
         # Synchronize functions for a user by updating existing ones, inserting new ones, and removing those that are no longer present.
         try:
             with get_db() as db:

@@ -166,19 +187,48 @@ class FunctionsTable:
         except Exception:
             return None

-    def get_functions(self, active_only=False) -> list[FunctionModel]:
+    def get_functions(
+        self, active_only=False, include_valves=False
+    ) -> list[FunctionModel | FunctionWithValvesModel]:
         with get_db() as db:
             if active_only:
+                functions = db.query(Function).filter_by(is_active=True).all()
+            else:
+                functions = db.query(Function).all()
+
+            if include_valves:
                 return [
-                    FunctionModel.model_validate(function)
-                    for function in db.query(Function).filter_by(is_active=True).all()
+                    FunctionWithValvesModel.model_validate(function)
+                    for function in functions
                 ]
             else:
                 return [
-                    FunctionModel.model_validate(function)
-                    for function in db.query(Function).all()
+                    FunctionModel.model_validate(function) for function in functions
                 ]

+    def get_function_list(self) -> list[FunctionUserResponse]:
+        with get_db() as db:
+            functions = db.query(Function).order_by(Function.updated_at.desc()).all()
+            user_ids = list(set(func.user_id for func in functions))
+
+            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
+            users_dict = {user.id: user for user in users}
+
+            return [
+                FunctionUserResponse.model_validate(
+                    {
+                        **FunctionModel.model_validate(func).model_dump(),
+                        "user": (
+                            users_dict.get(func.user_id).model_dump()
+                            if func.user_id in users_dict
+                            else None
+                        ),
+                    }
+                )
+                for func in functions
+            ]
+
     def get_functions_by_type(
         self, type: str, active_only=False
     ) -> list[FunctionModel]:

@@ -237,6 +287,29 @@ class FunctionsTable:
         except Exception:
             return None

+    def update_function_metadata_by_id(
+        self, id: str, metadata: dict
+    ) -> Optional[FunctionModel]:
+        with get_db() as db:
+            try:
+                function = db.get(Function, id)
+
+                if function:
+                    if function.meta:
+                        function.meta = {**function.meta, **metadata}
+                    else:
+                        function.meta = metadata
+
+                    function.updated_at = int(time.time())
+                    db.commit()
+                    db.refresh(function)
+                    return self.get_function_by_id(id)
+                else:
+                    return None
+            except Exception as e:
+                log.exception(f"Error updating function metadata by id {id}: {e}")
+                return None
+
     def get_user_valves_by_id_and_user_id(
         self, id: str, user_id: str
     ) -> Optional[dict]:
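Callers now choose the lean or the valve-carrying shape per call site. A hedged usage sketch, assuming the module-level `Functions = FunctionsTable()` singleton these model files conventionally export:

# Returns list[FunctionModel]; valve configuration stays out of the payload.
active = Functions.get_functions(active_only=True)

# Returns list[FunctionWithValvesModel]; useful for admin or sync paths
# that need the stored valves without a second query per function.
active_with_valves = Functions.get_functions(active_only=True, include_valves=True)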
backend/open_webui/models/groups.py

@@ -11,7 +11,7 @@ from open_webui.models.files import FileMetadataResponse


 from pydantic import BaseModel, ConfigDict
-from sqlalchemy import BigInteger, Column, String, Text, JSON, func
+from sqlalchemy import BigInteger, Column, String, Text, JSON, func, ForeignKey


 log = logging.getLogger(__name__)

@@ -35,7 +35,6 @@ class Group(Base):
     meta = Column(JSON, nullable=True)

     permissions = Column(JSON, nullable=True)
-    user_ids = Column(JSON, nullable=True)

     created_at = Column(BigInteger)
     updated_at = Column(BigInteger)

@@ -53,12 +52,33 @@ class GroupModel(BaseModel):
     meta: Optional[dict] = None

     permissions: Optional[dict] = None
-    user_ids: list[str] = []

     created_at: int  # timestamp in epoch
     updated_at: int  # timestamp in epoch


+class GroupMember(Base):
+    __tablename__ = "group_member"
+
+    id = Column(Text, unique=True, primary_key=True)
+    group_id = Column(
+        Text,
+        ForeignKey("group.id", ondelete="CASCADE"),
+        nullable=False,
+    )
+    user_id = Column(Text, nullable=False)
+    created_at = Column(BigInteger, nullable=True)
+    updated_at = Column(BigInteger, nullable=True)
+
+
+class GroupMemberModel(BaseModel):
+    id: str
+    group_id: str
+    user_id: str
+    created_at: Optional[int] = None  # timestamp in epoch
+    updated_at: Optional[int] = None  # timestamp in epoch
+
+
 ####################
 # Forms
 ####################

@@ -72,7 +92,7 @@ class GroupResponse(BaseModel):
     permissions: Optional[dict] = None
     data: Optional[dict] = None
     meta: Optional[dict] = None
-    user_ids: list[str] = []
+    member_count: Optional[int] = None
     created_at: int  # timestamp in epoch
     updated_at: int  # timestamp in epoch

@@ -87,7 +107,7 @@ class UserIdsForm(BaseModel):
     user_ids: Optional[list[str]] = None


-class GroupUpdateForm(GroupForm, UserIdsForm):
+class GroupUpdateForm(GroupForm):
     pass


@@ -131,12 +151,8 @@ class GroupTable:
             return [
                 GroupModel.model_validate(group)
                 for group in db.query(Group)
-                .filter(
-                    func.json_array_length(Group.user_ids) > 0
-                )  # Ensure array exists
-                .filter(
-                    Group.user_ids.cast(String).like(f'%"{user_id}"%')
-                )  # String-based check
+                .join(GroupMember, GroupMember.group_id == Group.id)
+                .filter(GroupMember.user_id == user_id)
                 .order_by(Group.updated_at.desc())
                 .all()
             ]

@@ -149,12 +165,46 @@ class GroupTable:
         except Exception:
             return None

-    def get_group_user_ids_by_id(self, id: str) -> Optional[str]:
-        group = self.get_group_by_id(id)
-        if group:
-            return group.user_ids
-        else:
-            return None
+    def get_group_user_ids_by_id(self, id: str) -> Optional[list[str]]:
+        with get_db() as db:
+            members = (
+                db.query(GroupMember.user_id).filter(GroupMember.group_id == id).all()
+            )
+
+            if not members:
+                return None
+
+            return [m[0] for m in members]
+
+    def set_group_user_ids_by_id(self, group_id: str, user_ids: list[str]) -> None:
+        with get_db() as db:
+            # Delete existing members
+            db.query(GroupMember).filter(GroupMember.group_id == group_id).delete()
+
+            # Insert new members
+            now = int(time.time())
+            new_members = [
+                GroupMember(
+                    id=str(uuid.uuid4()),
+                    group_id=group_id,
+                    user_id=user_id,
+                    created_at=now,
+                    updated_at=now,
+                )
+                for user_id in user_ids
+            ]
+
+            db.add_all(new_members)
+            db.commit()
+
+    def get_group_member_count_by_id(self, id: str) -> int:
+        with get_db() as db:
+            count = (
+                db.query(func.count(GroupMember.user_id))
+                .filter(GroupMember.group_id == id)
+                .scalar()
+            )
+            return count if count else 0

     def update_group_by_id(
         self, id: str, form_data: GroupUpdateForm, overwrite: bool = False

@@ -195,20 +245,29 @@ class GroupTable:
     def remove_user_from_all_groups(self, user_id: str) -> bool:
         with get_db() as db:
             try:
-                groups = self.get_groups_by_member_id(user_id)
+                # Find all groups the user belongs to
+                groups = (
+                    db.query(Group)
+                    .join(GroupMember, GroupMember.group_id == Group.id)
+                    .filter(GroupMember.user_id == user_id)
+                    .all()
+                )

+                # Remove the user from each group
                 for group in groups:
-                    group.user_ids.remove(user_id)
-                    db.query(Group).filter_by(id=group.id).update(
-                        {
-                            "user_ids": group.user_ids,
-                            "updated_at": int(time.time()),
-                        }
-                    )
-                    db.commit()
+                    db.query(GroupMember).filter(
+                        GroupMember.group_id == group.id, GroupMember.user_id == user_id
+                    ).delete()
+
+                    db.query(Group).filter_by(id=group.id).update(
+                        {"updated_at": int(time.time())}
+                    )
+
+                db.commit()
                 return True
+
             except Exception:
                 db.rollback()
                 return False

     def create_groups_by_group_names(

@@ -246,37 +305,61 @@ class GroupTable:
     def sync_groups_by_group_names(self, user_id: str, group_names: list[str]) -> bool:
         with get_db() as db:
             try:
-                groups = db.query(Group).filter(Group.name.in_(group_names)).all()
-                group_ids = [group.id for group in groups]
+                now = int(time.time())

-                # Remove user from groups not in the new list
-                existing_groups = self.get_groups_by_member_id(user_id)
+                # 1. Groups that SHOULD contain the user
+                target_groups = (
+                    db.query(Group).filter(Group.name.in_(group_names)).all()
+                )
+                target_group_ids = {g.id for g in target_groups}

-                for group in existing_groups:
-                    if group.id not in group_ids:
-                        group.user_ids.remove(user_id)
-                        db.query(Group).filter_by(id=group.id).update(
-                            {
-                                "user_ids": group.user_ids,
-                                "updated_at": int(time.time()),
-                            }
-                        )
+                # 2. Groups the user is CURRENTLY in
+                existing_group_ids = {
+                    g.id
+                    for g in db.query(Group)
+                    .join(GroupMember, GroupMember.group_id == Group.id)
+                    .filter(GroupMember.user_id == user_id)
+                    .all()
+                }
+
+                # 3. Determine adds + removals
+                groups_to_add = target_group_ids - existing_group_ids
+                groups_to_remove = existing_group_ids - target_group_ids
+
+                # 4. Remove in one bulk delete
+                if groups_to_remove:
+                    db.query(GroupMember).filter(
+                        GroupMember.user_id == user_id,
+                        GroupMember.group_id.in_(groups_to_remove),
+                    ).delete(synchronize_session=False)
+
+                    db.query(Group).filter(Group.id.in_(groups_to_remove)).update(
+                        {"updated_at": now}, synchronize_session=False
+                    )
+
+                # 5. Bulk insert missing memberships
+                for group_id in groups_to_add:
+                    db.add(
+                        GroupMember(
+                            id=str(uuid.uuid4()),
+                            group_id=group_id,
+                            user_id=user_id,
+                            created_at=now,
+                            updated_at=now,
+                        )
+                    )

-                # Add user to new groups
-                for group in groups:
-                    if user_id not in group.user_ids:
-                        group.user_ids.append(user_id)
-                        db.query(Group).filter_by(id=group.id).update(
-                            {
-                                "user_ids": group.user_ids,
-                                "updated_at": int(time.time()),
-                            }
-                        )
+                if groups_to_add:
+                    db.query(Group).filter(Group.id.in_(groups_to_add)).update(
+                        {"updated_at": now}, synchronize_session=False
+                    )

                 db.commit()
                 return True
+
             except Exception as e:
                 log.exception(e)
                 db.rollback()
                 return False

     def add_users_to_group(

@@ -288,21 +371,31 @@ class GroupTable:
             if not group:
                 return None

-            group_user_ids = group.user_ids
-            if not group_user_ids or not isinstance(group_user_ids, list):
-                group_user_ids = []
+            now = int(time.time())

-            group_user_ids = list(set(group_user_ids))  # Deduplicate
+            for user_id in user_ids or []:
+                try:
+                    db.add(
+                        GroupMember(
+                            id=str(uuid.uuid4()),
+                            group_id=id,
+                            user_id=user_id,
+                            created_at=now,
+                            updated_at=now,
+                        )
+                    )
+                    db.flush()  # Detect unique constraint violation early
+                except Exception:
+                    db.rollback()  # Clear failed INSERT
+                    db.begin()  # Start a new transaction
+                    continue  # Duplicate → ignore

-            for user_id in user_ids:
-                if user_id not in group_user_ids:
-                    group_user_ids.append(user_id)
-
-            group.user_ids = group_user_ids
-            group.updated_at = int(time.time())
+            group.updated_at = now
             db.commit()
             db.refresh(group)

             return GroupModel.model_validate(group)

         except Exception as e:
             log.exception(e)
             return None

@@ -316,23 +409,22 @@ class GroupTable:
             if not group:
                 return None

-            group_user_ids = group.user_ids
-
-            if not group_user_ids or not isinstance(group_user_ids, list):
+            if not user_ids:
                 return GroupModel.model_validate(group)

-            group_user_ids = list(set(group_user_ids))  # Deduplicate
-
+            # Remove each user from group_member
             for user_id in user_ids:
-                if user_id in group_user_ids:
-                    group_user_ids.remove(user_id)
+                db.query(GroupMember).filter(
+                    GroupMember.group_id == id, GroupMember.user_id == user_id
+                ).delete()

-            group.user_ids = group_user_ids
+            # Update group timestamp
             group.updated_at = int(time.time())
+
             db.commit()
             db.refresh(group)
             return GroupModel.model_validate(group)

         except Exception as e:
             log.exception(e)
             return None
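The rewritten `sync_groups_by_group_names` is classic set reconciliation: compute the desired and current membership sets, then apply only the difference. The core of that logic, isolated as runnable Python with made-up IDs:

target_group_ids = {"g1", "g2", "g3"}    # groups the user SHOULD be in
existing_group_ids = {"g2", "g4"}        # groups the user is currently in

groups_to_add = target_group_ids - existing_group_ids     # {'g1', 'g3'}
groups_to_remove = existing_group_ids - target_group_ids  # {'g4'}

# One bulk DELETE for removals and one INSERT per addition, instead of
# rewriting a JSON user_ids column on every group row.
print(sorted(groups_to_add), sorted(groups_to_remove))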
backend/open_webui/models/knowledge.py

@@ -8,6 +8,7 @@ from open_webui.internal.db import Base, get_db
 from open_webui.env import SRC_LOG_LEVELS

 from open_webui.models.files import FileMetadataResponse
+from open_webui.models.groups import Groups
 from open_webui.models.users import Users, UserResponse


@@ -128,11 +129,18 @@ class KnowledgeTable:

     def get_knowledge_bases(self) -> list[KnowledgeUserModel]:
         with get_db() as db:
-            knowledge_bases = []
-            for knowledge in (
+            all_knowledge = (
                 db.query(Knowledge).order_by(Knowledge.updated_at.desc()).all()
-            ):
-                user = Users.get_user_by_id(knowledge.user_id)
+            )
+
+            user_ids = list(set(knowledge.user_id for knowledge in all_knowledge))
+
+            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
+            users_dict = {user.id: user for user in users}
+
+            knowledge_bases = []
+            for knowledge in all_knowledge:
+                user = users_dict.get(knowledge.user_id)
                 knowledge_bases.append(
                     KnowledgeUserModel.model_validate(
                         {

@@ -143,15 +151,27 @@ class KnowledgeTable:
                 )
             return knowledge_bases

+    def check_access_by_user_id(self, id, user_id, permission="write") -> bool:
+        knowledge = self.get_knowledge_by_id(id)
+        if not knowledge:
+            return False
+        if knowledge.user_id == user_id:
+            return True
+        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
+        return has_access(user_id, permission, knowledge.access_control, user_group_ids)
+
     def get_knowledge_bases_by_user_id(
         self, user_id: str, permission: str = "write"
     ) -> list[KnowledgeUserModel]:
         knowledge_bases = self.get_knowledge_bases()
+        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
         return [
             knowledge_base
             for knowledge_base in knowledge_bases
             if knowledge_base.user_id == user_id
-            or has_access(user_id, permission, knowledge_base.access_control)
+            or has_access(
+                user_id, permission, knowledge_base.access_control, user_group_ids
+            )
         ]

     def get_knowledge_by_id(self, id: str) -> Optional[KnowledgeModel]:
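The `get_knowledge_bases` change (and the matching ones in the models, prompts, tools, and functions tables) replaces an N+1 pattern, one `get_user_by_id` per row, with a single batched lookup keyed by user id. A standalone before/after sketch with stub data:

rows = [{"id": "k1", "user_id": "u1"}, {"id": "k2", "user_id": "u2"},
        {"id": "k3", "user_id": "u1"}]
users_table = {"u1": {"id": "u1", "name": "Alice"}, "u2": {"id": "u2", "name": "Bob"}}

# Before: one lookup per row (N+1 queries against a real database).
# for row in rows: user = fetch_user(row["user_id"])

# After: one batched fetch, then cheap dict lookups.
user_ids = list({row["user_id"] for row in rows})
users_dict = {uid: users_table[uid] for uid in user_ids}  # stands in for one IN-query

for row in rows:
    user = users_dict.get(row["user_id"])
    print(row["id"], user["name"] if user else None)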
backend/open_webui/models/messages.py

@@ -5,6 +5,7 @@ from typing import Optional

 from open_webui.internal.db import Base, get_db
 from open_webui.models.tags import TagModel, Tag, Tags
+from open_webui.models.users import Users, UserNameResponse


 from pydantic import BaseModel, ConfigDict

@@ -43,6 +44,7 @@ class Message(Base):
     user_id = Column(Text)
     channel_id = Column(Text, nullable=True)

+    reply_to_id = Column(Text, nullable=True)
     parent_id = Column(Text, nullable=True)

     content = Column(Text)

@@ -60,6 +62,7 @@ class MessageModel(BaseModel):
     user_id: str
     channel_id: Optional[str] = None

+    reply_to_id: Optional[str] = None
     parent_id: Optional[str] = None

     content: str

@@ -77,6 +80,7 @@ class MessageModel(BaseModel):

 class MessageForm(BaseModel):
     content: str
+    reply_to_id: Optional[str] = None
     parent_id: Optional[str] = None
     data: Optional[dict] = None
     meta: Optional[dict] = None

@@ -88,7 +92,15 @@ class Reactions(BaseModel):
     count: int


-class MessageResponse(MessageModel):
+class MessageUserResponse(MessageModel):
+    user: Optional[UserNameResponse] = None
+
+
+class MessageReplyToResponse(MessageUserResponse):
+    reply_to_message: Optional[MessageUserResponse] = None
+
+
+class MessageResponse(MessageReplyToResponse):
     latest_reply_at: Optional[int]
     reply_count: int
     reactions: list[Reactions]

@@ -107,6 +119,7 @@ class MessageTable:
                 "id": id,
                 "user_id": user_id,
                 "channel_id": channel_id,
+                "reply_to_id": form_data.reply_to_id,
                 "parent_id": form_data.parent_id,
                 "content": form_data.content,
                 "data": form_data.data,

@@ -128,19 +141,32 @@ class MessageTable:
             if not message:
                 return None

-            reactions = self.get_reactions_by_message_id(id)
-            replies = self.get_replies_by_message_id(id)
+            reply_to_message = (
+                self.get_message_by_id(message.reply_to_id)
+                if message.reply_to_id
+                else None
+            )

-            return MessageResponse(
-                **{
+            reactions = self.get_reactions_by_message_id(id)
+            thread_replies = self.get_thread_replies_by_message_id(id)
+
+            user = Users.get_user_by_id(message.user_id)
+            return MessageResponse.model_validate(
+                {
                     **MessageModel.model_validate(message).model_dump(),
-                    "latest_reply_at": replies[0].created_at if replies else None,
-                    "reply_count": len(replies),
+                    "user": user.model_dump() if user else None,
+                    "reply_to_message": (
+                        reply_to_message.model_dump() if reply_to_message else None
+                    ),
+                    "latest_reply_at": (
+                        thread_replies[0].created_at if thread_replies else None
+                    ),
+                    "reply_count": len(thread_replies),
                     "reactions": reactions,
                 }
             )

-    def get_replies_by_message_id(self, id: str) -> list[MessageModel]:
+    def get_thread_replies_by_message_id(self, id: str) -> list[MessageReplyToResponse]:
         with get_db() as db:
             all_messages = (
                 db.query(Message)

@@ -148,7 +174,27 @@ class MessageTable:
                 .order_by(Message.created_at.desc())
                 .all()
             )
-            return [MessageModel.model_validate(message) for message in all_messages]
+
+            messages = []
+            for message in all_messages:
+                reply_to_message = (
+                    self.get_message_by_id(message.reply_to_id)
+                    if message.reply_to_id
+                    else None
+                )
+                messages.append(
+                    MessageReplyToResponse.model_validate(
+                        {
+                            **MessageModel.model_validate(message).model_dump(),
+                            "reply_to_message": (
+                                reply_to_message.model_dump()
+                                if reply_to_message
+                                else None
+                            ),
+                        }
+                    )
+                )
+            return messages

     def get_reply_user_ids_by_message_id(self, id: str) -> list[str]:
         with get_db() as db:

@@ -159,7 +205,7 @@ class MessageTable:

     def get_messages_by_channel_id(
         self, channel_id: str, skip: int = 0, limit: int = 50
-    ) -> list[MessageModel]:
+    ) -> list[MessageReplyToResponse]:
         with get_db() as db:
             all_messages = (
                 db.query(Message)

@@ -169,11 +215,31 @@ class MessageTable:
                 .limit(limit)
                 .all()
             )
-            return [MessageModel.model_validate(message) for message in all_messages]
+
+            messages = []
+            for message in all_messages:
+                reply_to_message = (
+                    self.get_message_by_id(message.reply_to_id)
+                    if message.reply_to_id
+                    else None
+                )
+                messages.append(
+                    MessageReplyToResponse.model_validate(
+                        {
+                            **MessageModel.model_validate(message).model_dump(),
+                            "reply_to_message": (
+                                reply_to_message.model_dump()
+                                if reply_to_message
+                                else None
+                            ),
+                        }
+                    )
+                )
+            return messages

     def get_messages_by_parent_id(
         self, channel_id: str, parent_id: str, skip: int = 0, limit: int = 50
-    ) -> list[MessageModel]:
+    ) -> list[MessageReplyToResponse]:
         with get_db() as db:
             message = db.get(Message, parent_id)


@@ -193,7 +259,26 @@ class MessageTable:
             if len(all_messages) < limit:
                 all_messages.append(message)

-            return [MessageModel.model_validate(message) for message in all_messages]
+            messages = []
+            for message in all_messages:
+                reply_to_message = (
+                    self.get_message_by_id(message.reply_to_id)
+                    if message.reply_to_id
+                    else None
+                )
+                messages.append(
+                    MessageReplyToResponse.model_validate(
+                        {
+                            **MessageModel.model_validate(message).model_dump(),
+                            "reply_to_message": (
+                                reply_to_message.model_dump()
+                                if reply_to_message
+                                else None
+                            ),
+                        }
+                    )
+                )
+            return messages

     def update_message_by_id(
         self, id: str, form_data: MessageForm

@@ -201,8 +286,14 @@ class MessageTable:
         with get_db() as db:
             message = db.get(Message, id)
             message.content = form_data.content
-            message.data = form_data.data
-            message.meta = form_data.meta
+            message.data = {
+                **(message.data if message.data else {}),
+                **(form_data.data if form_data.data else {}),
+            }
+            message.meta = {
+                **(message.meta if message.meta else {}),
+                **(form_data.meta if form_data.meta else {}),
+            }
            message.updated_at = int(time.time_ns())
            db.commit()
            db.refresh(message)
backend/open_webui/models/models.py

@@ -5,6 +5,7 @@ from typing import Optional
 from open_webui.internal.db import Base, JSONField, get_db
 from open_webui.env import SRC_LOG_LEVELS

+from open_webui.models.groups import Groups
 from open_webui.models.users import Users, UserResponse


@@ -175,9 +176,16 @@ class ModelsTable:

     def get_models(self) -> list[ModelUserResponse]:
         with get_db() as db:
+            all_models = db.query(Model).filter(Model.base_model_id != None).all()
+
+            user_ids = list(set(model.user_id for model in all_models))
+
+            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
+            users_dict = {user.id: user for user in users}
+
             models = []
-            for model in db.query(Model).filter(Model.base_model_id != None).all():
-                user = Users.get_user_by_id(model.user_id)
+            for model in all_models:
+                user = users_dict.get(model.user_id)
                 models.append(
                     ModelUserResponse.model_validate(
                         {

@@ -199,11 +207,12 @@ class ModelsTable:
         self, user_id: str, permission: str = "write"
     ) -> list[ModelUserResponse]:
         models = self.get_models()
+        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}
         return [
             model
             for model in models
             if model.user_id == user_id
-            or has_access(user_id, permission, model.access_control)
+            or has_access(user_id, permission, model.access_control, user_group_ids)
         ]

     def get_model_by_id(self, id: str) -> Optional[ModelModel]:
backend/open_webui/models/notes.py

@@ -2,8 +2,10 @@ import json
 import time
 import uuid
 from typing import Optional
+from functools import lru_cache

 from open_webui.internal.db import Base, get_db
+from open_webui.models.groups import Groups
 from open_webui.utils.access_control import has_access
 from open_webui.models.users import Users, UserResponse


@@ -96,21 +98,85 @@ class NoteTable:
             db.commit()
             return note

-    def get_notes(self) -> list[NoteModel]:
+    def get_notes(
+        self, skip: Optional[int] = None, limit: Optional[int] = None
+    ) -> list[NoteModel]:
         with get_db() as db:
-            notes = db.query(Note).order_by(Note.updated_at.desc()).all()
+            query = db.query(Note).order_by(Note.updated_at.desc())
+            if skip is not None:
+                query = query.offset(skip)
+            if limit is not None:
+                query = query.limit(limit)
+            notes = query.all()
             return [NoteModel.model_validate(note) for note in notes]

     def get_notes_by_user_id(
-        self, user_id: str, permission: str = "write"
+        self,
+        user_id: str,
+        skip: Optional[int] = None,
+        limit: Optional[int] = None,
     ) -> list[NoteModel]:
-        notes = self.get_notes()
-        return [
-            note
-            for note in notes
-            if note.user_id == user_id
-            or has_access(user_id, permission, note.access_control)
-        ]
+        with get_db() as db:
+            query = db.query(Note).filter(Note.user_id == user_id)
+            query = query.order_by(Note.updated_at.desc())
+
+            if skip is not None:
+                query = query.offset(skip)
+            if limit is not None:
+                query = query.limit(limit)
+
+            notes = query.all()
+            return [NoteModel.model_validate(note) for note in notes]
+
+    def get_notes_by_permission(
+        self,
+        user_id: str,
+        permission: str = "write",
+        skip: Optional[int] = None,
+        limit: Optional[int] = None,
+    ) -> list[NoteModel]:
+        with get_db() as db:
+            user_groups = Groups.get_groups_by_member_id(user_id)
+            user_group_ids = {group.id for group in user_groups}
+
+            # Order newest-first. We stream to keep memory usage low.
+            query = (
+                db.query(Note)
+                .order_by(Note.updated_at.desc())
+                .execution_options(stream_results=True)
+                .yield_per(256)
+            )
+
+            results: list[NoteModel] = []
+            n_skipped = 0
+
+            for note in query:
+                # Fast-pass #1: owner
+                if note.user_id == user_id:
+                    permitted = True
+                # Fast-pass #2: public/open
+                elif note.access_control is None:
+                    # Technically this should mean public access for both read and write, but we'll only do read for now
+                    # We might want to change this behavior later
+                    permitted = permission == "read"
+                else:
+                    permitted = has_access(
+                        user_id, permission, note.access_control, user_group_ids
+                    )
+
+                if not permitted:
+                    continue
+
+                # Apply skip AFTER permission filtering so it counts only accessible notes
+                if skip and n_skipped < skip:
+                    n_skipped += 1
+                    continue
+
+                results.append(NoteModel.model_validate(note))
+                if limit is not None and len(results) >= limit:
+                    break
+
+            return results

     def get_note_by_id(self, id: str) -> Optional[NoteModel]:
         with get_db() as db:
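`get_notes_by_permission` pages after filtering: since permission checks happen in Python, SQL `OFFSET`/`LIMIT` would skip the wrong rows, so the query streams (`yield_per`) and skip/limit are counted only over permitted rows. The same control flow as a generic runnable sketch:

def paginate_permitted(rows, is_permitted, skip=0, limit=None):
    results, n_skipped = [], 0
    for row in rows:  # imagine a streaming DB cursor here
        if not is_permitted(row):
            continue
        if skip and n_skipped < skip:  # count skips over accessible rows only
            n_skipped += 1
            continue
        results.append(row)
        if limit is not None and len(results) >= limit:
            break  # stop reading the stream early once the page is full
    return results

print(paginate_permitted(range(20), lambda r: r % 2 == 0, skip=2, limit=3))  # [4, 6, 8]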
backend/open_webui/models/oauth_sessions.py (new file, 277 lines)

@@ -0,0 +1,277 @@
import time
import logging
import uuid
from typing import Optional, List
import base64
import hashlib
import json

from cryptography.fernet import Fernet

from open_webui.internal.db import Base, get_db
from open_webui.env import SRC_LOG_LEVELS, OAUTH_SESSION_TOKEN_ENCRYPTION_KEY

from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text, Index

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])

####################
# DB MODEL
####################


class OAuthSession(Base):
    __tablename__ = "oauth_session"

    id = Column(Text, primary_key=True)
    user_id = Column(Text, nullable=False)
    provider = Column(Text, nullable=False)
    token = Column(
        Text, nullable=False
    )  # JSON with access_token, id_token, refresh_token
    expires_at = Column(BigInteger, nullable=False)
    created_at = Column(BigInteger, nullable=False)
    updated_at = Column(BigInteger, nullable=False)

    # Add indexes for better performance
    __table_args__ = (
        Index("idx_oauth_session_user_id", "user_id"),
        Index("idx_oauth_session_expires_at", "expires_at"),
        Index("idx_oauth_session_user_provider", "user_id", "provider"),
    )


class OAuthSessionModel(BaseModel):
    id: str
    user_id: str
    provider: str
    token: dict
    expires_at: int  # timestamp in epoch
    created_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch

    model_config = ConfigDict(from_attributes=True)


####################
# Forms
####################


class OAuthSessionResponse(BaseModel):
    id: str
    user_id: str
    provider: str
    expires_at: int


class OAuthSessionTable:
    def __init__(self):
        self.encryption_key = OAUTH_SESSION_TOKEN_ENCRYPTION_KEY
        if not self.encryption_key:
            raise Exception("OAUTH_SESSION_TOKEN_ENCRYPTION_KEY is not set")

        # check if encryption key is in the right format for Fernet (32 url-safe base64-encoded bytes)
        if len(self.encryption_key) != 44:
            key_bytes = hashlib.sha256(self.encryption_key.encode()).digest()
            self.encryption_key = base64.urlsafe_b64encode(key_bytes)
        else:
            self.encryption_key = self.encryption_key.encode()

        try:
            self.fernet = Fernet(self.encryption_key)
        except Exception as e:
            log.error(f"Error initializing Fernet with provided key: {e}")
            raise

    def _encrypt_token(self, token) -> str:
        """Encrypt OAuth tokens for storage"""
        try:
            token_json = json.dumps(token)
            encrypted = self.fernet.encrypt(token_json.encode()).decode()
            return encrypted
        except Exception as e:
            log.error(f"Error encrypting tokens: {e}")
            raise

    def _decrypt_token(self, token: str):
        """Decrypt OAuth tokens from storage"""
        try:
            decrypted = self.fernet.decrypt(token.encode()).decode()
            return json.loads(decrypted)
        except Exception as e:
            log.error(f"Error decrypting tokens: {e}")
            raise

    def create_session(
        self,
        user_id: str,
        provider: str,
        token: dict,
    ) -> Optional[OAuthSessionModel]:
        """Create a new OAuth session"""
        try:
            with get_db() as db:
                current_time = int(time.time())
                id = str(uuid.uuid4())

                result = OAuthSession(
                    **{
                        "id": id,
                        "user_id": user_id,
                        "provider": provider,
                        "token": self._encrypt_token(token),
                        "expires_at": token.get("expires_at"),
                        "created_at": current_time,
                        "updated_at": current_time,
                    }
                )

                db.add(result)
                db.commit()
                db.refresh(result)

                if result:
                    result.token = token  # Return decrypted token
                    return OAuthSessionModel.model_validate(result)
                else:
                    return None
        except Exception as e:
            log.error(f"Error creating OAuth session: {e}")
            return None

    def get_session_by_id(self, session_id: str) -> Optional[OAuthSessionModel]:
        """Get OAuth session by ID"""
        try:
            with get_db() as db:
                session = db.query(OAuthSession).filter_by(id=session_id).first()
                if session:
                    session.token = self._decrypt_token(session.token)
                    return OAuthSessionModel.model_validate(session)

                return None
        except Exception as e:
            log.error(f"Error getting OAuth session by ID: {e}")
            return None

    def get_session_by_id_and_user_id(
        self, session_id: str, user_id: str
    ) -> Optional[OAuthSessionModel]:
        """Get OAuth session by ID and user ID"""
        try:
            with get_db() as db:
                session = (
                    db.query(OAuthSession)
                    .filter_by(id=session_id, user_id=user_id)
                    .first()
                )
                if session:
                    session.token = self._decrypt_token(session.token)
                    return OAuthSessionModel.model_validate(session)

                return None
        except Exception as e:
            log.error(f"Error getting OAuth session by ID: {e}")
            return None

    def get_session_by_provider_and_user_id(
        self, provider: str, user_id: str
    ) -> Optional[OAuthSessionModel]:
        """Get OAuth session by provider and user ID"""
        try:
            with get_db() as db:
                session = (
                    db.query(OAuthSession)
                    .filter_by(provider=provider, user_id=user_id)
                    .first()
                )
                if session:
                    session.token = self._decrypt_token(session.token)
                    return OAuthSessionModel.model_validate(session)

                return None
        except Exception as e:
            log.error(f"Error getting OAuth session by provider and user ID: {e}")
            return None

    def get_sessions_by_user_id(self, user_id: str) -> List[OAuthSessionModel]:
        """Get all OAuth sessions for a user"""
        try:
            with get_db() as db:
                sessions = db.query(OAuthSession).filter_by(user_id=user_id).all()

                results = []
                for session in sessions:
                    session.token = self._decrypt_token(session.token)
                    results.append(OAuthSessionModel.model_validate(session))

                return results

        except Exception as e:
            log.error(f"Error getting OAuth sessions by user ID: {e}")
            return []

    def update_session_by_id(
        self, session_id: str, token: dict
    ) -> Optional[OAuthSessionModel]:
        """Update OAuth session tokens"""
        try:
            with get_db() as db:
                current_time = int(time.time())

                db.query(OAuthSession).filter_by(id=session_id).update(
                    {
                        "token": self._encrypt_token(token),
                        "expires_at": token.get("expires_at"),
                        "updated_at": current_time,
                    }
                )
                db.commit()
                session = db.query(OAuthSession).filter_by(id=session_id).first()

                if session:
                    session.token = self._decrypt_token(session.token)
                    return OAuthSessionModel.model_validate(session)

                return None
        except Exception as e:
            log.error(f"Error updating OAuth session tokens: {e}")
            return None

    def delete_session_by_id(self, session_id: str) -> bool:
        """Delete an OAuth session"""
        try:
            with get_db() as db:
                result = db.query(OAuthSession).filter_by(id=session_id).delete()
                db.commit()
                return result > 0
        except Exception as e:
            log.error(f"Error deleting OAuth session: {e}")
            return False

    def delete_sessions_by_user_id(self, user_id: str) -> bool:
        """Delete all OAuth sessions for a user"""
        try:
            with get_db() as db:
                result = db.query(OAuthSession).filter_by(user_id=user_id).delete()
                db.commit()
                return True
        except Exception as e:
            log.error(f"Error deleting OAuth sessions by user ID: {e}")
            return False

    def delete_sessions_by_provider(self, provider: str) -> bool:
        """Delete all OAuth sessions for a provider"""
        try:
            with get_db() as db:
                db.query(OAuthSession).filter_by(provider=provider).delete()
                db.commit()
                return True
        except Exception as e:
            log.error(f"Error deleting OAuth sessions by provider {provider}: {e}")
            return False


OAuthSessions = OAuthSessionTable()
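Fernet requires exactly 32 url-safe-base64-encoded bytes (a 44-character key), so the constructor above normalizes arbitrary passphrases by hashing them first. A standalone sketch of that normalization (the passphrase is made up):

import base64
import hashlib

from cryptography.fernet import Fernet

raw_key = "some-arbitrary-admin-passphrase"  # hypothetical value

if len(raw_key) != 44:
    # Derive a valid Fernet key from an arbitrary string: SHA-256 yields
    # 32 bytes, urlsafe_b64encode turns them into the 44-char key format.
    key = base64.urlsafe_b64encode(hashlib.sha256(raw_key.encode()).digest())
else:
    key = raw_key.encode()

f = Fernet(key)
token = f.encrypt(b'{"access_token": "..."}')
print(f.decrypt(token))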
backend/open_webui/models/prompts.py

@@ -2,6 +2,7 @@ import time
 from typing import Optional

 from open_webui.internal.db import Base, get_db
+from open_webui.models.groups import Groups
 from open_webui.models.users import Users, UserResponse

 from pydantic import BaseModel, ConfigDict

@@ -103,10 +104,16 @@ class PromptsTable:

     def get_prompts(self) -> list[PromptUserResponse]:
         with get_db() as db:
-            prompts = []
+            all_prompts = db.query(Prompt).order_by(Prompt.timestamp.desc()).all()

-            for prompt in db.query(Prompt).order_by(Prompt.timestamp.desc()).all():
-                user = Users.get_user_by_id(prompt.user_id)
+            user_ids = list(set(prompt.user_id for prompt in all_prompts))
+
+            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
+            users_dict = {user.id: user for user in users}
+
+            prompts = []
+            for prompt in all_prompts:
+                user = users_dict.get(prompt.user_id)
                 prompts.append(
                     PromptUserResponse.model_validate(
                         {

@@ -122,12 +129,13 @@ class PromptsTable:
         self, user_id: str, permission: str = "write"
     ) -> list[PromptUserResponse]:
         prompts = self.get_prompts()
+        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}

         return [
             prompt
             for prompt in prompts
             if prompt.user_id == user_id
-            or has_access(user_id, permission, prompt.access_control)
+            or has_access(user_id, permission, prompt.access_control, user_group_ids)
         ]

     def update_prompt_by_command(
backend/open_webui/models/tools.py

@@ -4,6 +4,8 @@ from typing import Optional

 from open_webui.internal.db import Base, JSONField, get_db
 from open_webui.models.users import Users, UserResponse
+from open_webui.models.groups import Groups
+
 from open_webui.env import SRC_LOG_LEVELS
 from pydantic import BaseModel, ConfigDict
 from sqlalchemy import BigInteger, Column, String, Text, JSON

@@ -93,6 +95,8 @@ class ToolResponse(BaseModel):
 class ToolUserResponse(ToolResponse):
     user: Optional[UserResponse] = None

+    model_config = ConfigDict(extra="allow")
+

 class ToolForm(BaseModel):
     id: str

@@ -144,9 +148,16 @@ class ToolsTable:

     def get_tools(self) -> list[ToolUserModel]:
         with get_db() as db:
+            all_tools = db.query(Tool).order_by(Tool.updated_at.desc()).all()
+
+            user_ids = list(set(tool.user_id for tool in all_tools))
+
+            users = Users.get_users_by_user_ids(user_ids) if user_ids else []
+            users_dict = {user.id: user for user in users}
+
             tools = []
-            for tool in db.query(Tool).order_by(Tool.updated_at.desc()).all():
-                user = Users.get_user_by_id(tool.user_id)
+            for tool in all_tools:
+                user = users_dict.get(tool.user_id)
                 tools.append(
                     ToolUserModel.model_validate(
                         {

@@ -161,12 +172,13 @@ class ToolsTable:
         self, user_id: str, permission: str = "write"
     ) -> list[ToolUserModel]:
         tools = self.get_tools()
+        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)}

         return [
             tool
             for tool in tools
             if tool.user_id == user_id
-            or has_access(user_id, permission, tool.access_control)
+            or has_access(user_id, permission, tool.access_control, user_group_ids)
         ]

     def get_tool_valves_by_id(self, id: str) -> Optional[dict]:
backend/open_webui/models/users.py

@@ -6,7 +6,7 @@ from open_webui.internal.db import Base, JSONField, get_db

 from open_webui.env import DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL
 from open_webui.models.chats import Chats
-from open_webui.models.groups import Groups
+from open_webui.models.groups import Groups, GroupMember
 from open_webui.utils.misc import throttle


@@ -95,8 +95,12 @@ class UpdateProfileForm(BaseModel):
     date_of_birth: Optional[datetime.date] = None


+class UserGroupIdsModel(UserModel):
+    group_ids: list[str] = []
+
+
 class UserListResponse(BaseModel):
-    users: list[UserModel]
+    users: list[UserGroupIdsModel]
     total: int


@@ -107,11 +111,21 @@ class UserInfoResponse(BaseModel):
     role: str


+class UserIdNameResponse(BaseModel):
+    id: str
+    name: str
+
+
 class UserInfoListResponse(BaseModel):
     users: list[UserInfoResponse]
     total: int


+class UserIdNameListResponse(BaseModel):
+    users: list[UserIdNameResponse]
+    total: int
+
+
 class UserResponse(BaseModel):
     id: str
     name: str

@@ -210,9 +224,12 @@ class UsersTable:
         filter: Optional[dict] = None,
         skip: Optional[int] = None,
         limit: Optional[int] = None,
-    ) -> UserListResponse:
+    ) -> dict:
         with get_db() as db:
-            query = db.query(User)
+            # Join GroupMember so we can order by group_id when requested
+            query = db.query(User).outerjoin(
+                GroupMember, GroupMember.user_id == User.id
+            )

             if filter:
                 query_key = filter.get("query")

@@ -227,7 +244,16 @@ class UsersTable:
                 order_by = filter.get("order_by")
                 direction = filter.get("direction")

-                if order_by == "name":
+                if order_by and order_by.startswith("group_id:"):
+                    group_id = order_by.split(":", 1)[1]
+
+                    if direction == "asc":
+                        query = query.order_by((GroupMember.group_id == group_id).asc())
+                    else:
+                        query = query.order_by(
+                            (GroupMember.group_id == group_id).desc()
+                        )
+                elif order_by == "name":
                     if direction == "asc":
                         query = query.order_by(User.name.asc())
                     else:

@@ -264,6 +290,9 @@ class UsersTable:
             else:
                 query = query.order_by(User.created_at.desc())

+            # Count BEFORE pagination
+            total = query.count()
+
             if skip:
                 query = query.offset(skip)
             if limit:

@@ -272,7 +301,7 @@ class UsersTable:
             users = query.all()
             return {
                 "users": [UserModel.model_validate(user) for user in users],
-                "total": db.query(User).count(),
+                "total": total,
             }

     def get_users_by_user_ids(self, user_ids: list[str]) -> list[UserModel]:

@@ -312,6 +341,15 @@ class UsersTable:
         except Exception:
             return None

+    def get_num_users_active_today(self) -> Optional[int]:
+        with get_db() as db:
+            current_timestamp = int(datetime.datetime.now().timestamp())
+            today_midnight_timestamp = current_timestamp - (current_timestamp % 86400)
+            query = db.query(User).filter(
+                User.last_active_at > today_midnight_timestamp
+            )
+            return query.count()
+
     def update_user_role_by_id(self, id: str, role: str) -> Optional[UserModel]:
         try:
             with get_db() as db:
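`get_num_users_active_today` defines "today" by flooring the epoch to a multiple of 86400, which is midnight in UTC even though the timestamp comes from `datetime.datetime.now()`; worth knowing if the server runs in another timezone. The arithmetic in isolation:

import datetime

current_timestamp = int(datetime.datetime.now().timestamp())
# Epoch time is UTC-based, so flooring to a day boundary yields 00:00 UTC,
# not local midnight.
today_midnight_timestamp = current_timestamp - (current_timestamp % 86400)

print(datetime.datetime.fromtimestamp(today_midnight_timestamp, datetime.timezone.utc))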
backend/open_webui/retrieval/loaders/external_document.py

@@ -1,9 +1,11 @@
 import requests
 import logging, os
 from typing import Iterator, List, Union
+from urllib.parse import quote

 from langchain_core.document_loaders import BaseLoader
 from langchain_core.documents import Document
+from open_webui.utils.headers import include_user_info_headers
 from open_webui.env import SRC_LOG_LEVELS

 log = logging.getLogger(__name__)

@@ -17,6 +19,7 @@ class ExternalDocumentLoader(BaseLoader):
         url: str,
         api_key: str,
         mime_type=None,
+        user=None,
         **kwargs,
     ) -> None:
         self.url = url

@@ -25,6 +28,8 @@ class ExternalDocumentLoader(BaseLoader):
         self.file_path = file_path
         self.mime_type = mime_type

+        self.user = user
+
     def load(self) -> List[Document]:
         with open(self.file_path, "rb") as f:
             data = f.read()

@@ -37,10 +42,13 @@ class ExternalDocumentLoader(BaseLoader):
             headers["Authorization"] = f"Bearer {self.api_key}"

         try:
-            headers["X-Filename"] = os.path.basename(self.file_path)
+            headers["X-Filename"] = quote(os.path.basename(self.file_path))
         except:
             pass

+        if self.user is not None:
+            headers = include_user_info_headers(headers, self.user)
+
         url = self.url
         if url.endswith("/"):
             url = url[:-1]
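HTTP header values must stay within latin-1, so a filename like `résumé.pdf` would previously raise inside `requests`; percent-encoding it with `quote` keeps the header ASCII-safe while staying reversible. A small standalone check:

from urllib.parse import quote, unquote

filename = "résumé.pdf"
header_value = quote(filename)  # 'r%C3%A9sum%C3%A9.pdf', pure ASCII

print(header_value)
print(unquote(header_value) == filename)  # the receiver can recover the original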
backend/open_webui/retrieval/loaders/main.py

@@ -27,6 +27,7 @@ from open_webui.retrieval.loaders.external_document import ExternalDocumentLoade

 from open_webui.retrieval.loaders.mistral import MistralLoader
 from open_webui.retrieval.loaders.datalab_marker import DatalabMarkerLoader
+from open_webui.retrieval.loaders.mineru import MinerULoader


 from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL

@@ -148,7 +149,7 @@ class DoclingLoader:
                 )
             }

-            params = {"image_export_mode": "placeholder", "table_mode": "accurate"}
+            params = {"image_export_mode": "placeholder"}

             if self.params:
                 if self.params.get("do_picture_description"):

@@ -174,7 +175,15 @@ class DoclingLoader:
                         self.params.get("picture_description_api", {})
                     )

-                if self.params.get("ocr_engine") and self.params.get("ocr_lang"):
+                params["do_ocr"] = self.params.get("do_ocr")
+
+                params["force_ocr"] = self.params.get("force_ocr")
+
+                if (
+                    self.params.get("do_ocr")
+                    and self.params.get("ocr_engine")
+                    and self.params.get("ocr_lang")
+                ):
                     params["ocr_engine"] = self.params.get("ocr_engine")
                     params["ocr_lang"] = [
                         lang.strip()

@@ -182,6 +191,15 @@ class DoclingLoader:
                         if lang.strip()
                     ]

+                if self.params.get("pdf_backend"):
+                    params["pdf_backend"] = self.params.get("pdf_backend")
+
+                if self.params.get("table_mode"):
+                    params["table_mode"] = self.params.get("table_mode")
+
+                if self.params.get("pipeline"):
+                    params["pipeline"] = self.params.get("pipeline")
+
             endpoint = f"{self.url}/v1/convert/file"
             r = requests.post(endpoint, files=files, data=params)


@@ -210,6 +228,7 @@ class DoclingLoader:
 class Loader:
     def __init__(self, engine: str = "", **kwargs):
         self.engine = engine
+        self.user = kwargs.get("user", None)
         self.kwargs = kwargs

     def load(

@@ -246,6 +265,7 @@ class Loader:
                 url=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_URL"),
                 api_key=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_API_KEY"),
                 mime_type=file_content_type,
+                user=self.user,
             )
         elif self.engine == "tika" and self.kwargs.get("TIKA_SERVER_URL"):
             if self._is_text_file(file_ext, file_content_type):

@@ -254,7 +274,6 @@ class Loader:
             loader = TikaLoader(
                 url=self.kwargs.get("TIKA_SERVER_URL"),
                 file_path=file_path,
-                mime_type=file_content_type,
                 extract_images=self.kwargs.get("PDF_EXTRACT_IMAGES"),
             )
         elif (

@@ -329,11 +348,9 @@ class Loader:
             self.engine == "document_intelligence"
             and self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT") != ""
             and (
-                file_ext in ["pdf", "xls", "xlsx", "docx", "ppt", "pptx"]
+                file_ext in ["pdf", "docx", "ppt", "pptx"]
                 or file_content_type
                 in [
-                    "application/vnd.ms-excel",
-                    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
                     "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
                     "application/vnd.ms-powerpoint",
                     "application/vnd.openxmlformats-officedocument.presentationml.presentation",

@@ -352,6 +369,16 @@ class Loader:
                 api_endpoint=self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT"),
                 azure_credential=DefaultAzureCredential(),
             )
+        elif self.engine == "mineru" and file_ext in [
+            "pdf"
+        ]:  # MinerU currently only supports PDF
+            loader = MinerULoader(
+                file_path=file_path,
+                api_mode=self.kwargs.get("MINERU_API_MODE", "local"),
+                api_url=self.kwargs.get("MINERU_API_URL", "http://localhost:8000"),
+                api_key=self.kwargs.get("MINERU_API_KEY", ""),
+                params=self.kwargs.get("MINERU_PARAMS", {}),
+            )
         elif (
             self.engine == "mistral_ocr"
             and self.kwargs.get("MISTRAL_OCR_API_KEY") != ""

@@ -359,16 +386,9 @@ class Loader:
             in ["pdf"]  # Mistral OCR currently only supports PDF and images
         ):
-            loader = MistralLoader(
-                api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
-            )
-        elif (
-            self.engine == "external"
-            and self.kwargs.get("MISTRAL_OCR_API_KEY") != ""
-            and file_ext
-            in ["pdf"]  # Mistral OCR currently only supports PDF and images
-        ):
             loader = MistralLoader(
-                api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
+                base_url=self.kwargs.get("MISTRAL_OCR_API_BASE_URL"),
+                api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"),
+                file_path=file_path,
             )
         else:
             if file_ext == "pdf":
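With the new branch, selecting MinerU is just another engine string plus its kwargs. A hedged usage sketch (the config values are hypothetical; `Loader` is the class from this file, and the `load()` call signature is assumed from its usage elsewhere in the codebase):

loader = Loader(
    engine="mineru",
    MINERU_API_MODE="local",
    MINERU_API_URL="http://localhost:8000",
    MINERU_API_KEY="",
    MINERU_PARAMS={"enable_table": True, "language": "en"},
)
# docs = loader.load(filename, file_content_type, file_path)  # PDF only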
522
backend/open_webui/retrieval/loaders/mineru.py
Normal file
|
|
@ -0,0 +1,522 @@
|
|||
import os
|
||||
import time
|
||||
import requests
|
||||
import logging
|
||||
import tempfile
|
||||
import zipfile
|
||||
from typing import List, Optional
|
||||
from langchain_core.documents import Document
|
||||
from fastapi import HTTPException, status
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class MinerULoader:
|
||||
"""
|
||||
MinerU document parser loader supporting both Cloud API and Local API modes.
|
||||
|
||||
Cloud API: Uses MinerU managed service with async task-based processing
|
||||
Local API: Uses self-hosted MinerU API with synchronous processing
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
file_path: str,
|
||||
api_mode: str = "local",
|
||||
api_url: str = "http://localhost:8000",
|
||||
api_key: str = "",
|
||||
params: dict = None,
|
||||
):
|
||||
self.file_path = file_path
|
||||
self.api_mode = api_mode.lower()
|
||||
self.api_url = api_url.rstrip("/")
|
||||
self.api_key = api_key
|
||||
|
||||
        # Parse params dict with defaults (read from self.params so a None
        # `params` argument cannot crash the constructor)
        self.params = params or {}
        self.enable_ocr = self.params.get("enable_ocr", False)
        self.enable_formula = self.params.get("enable_formula", True)
        self.enable_table = self.params.get("enable_table", True)
        self.language = self.params.get("language", "en")
        self.model_version = self.params.get("model_version", "pipeline")

        self.page_ranges = self.params.pop("page_ranges", "")

        # Validate API mode
        if self.api_mode not in ["local", "cloud"]:
            raise ValueError(
                f"Invalid API mode: {self.api_mode}. Must be 'local' or 'cloud'"
            )

        # Validate Cloud API requirements
        if self.api_mode == "cloud" and not self.api_key:
            raise ValueError("API key is required for Cloud API mode")

    def load(self) -> List[Document]:
        """
        Main entry point for loading and parsing the document.
        Routes to Cloud or Local API based on api_mode.
        """
        try:
            if self.api_mode == "cloud":
                return self._load_cloud_api()
            else:
                return self._load_local_api()
        except Exception as e:
            log.error(f"Error loading document with MinerU: {e}")
            raise

    def _load_local_api(self) -> List[Document]:
        """
        Load document using Local API (synchronous).
        Posts file to /file_parse endpoint and gets immediate response.
        """
        log.info(f"Using MinerU Local API at {self.api_url}")

        filename = os.path.basename(self.file_path)

        # Build form data for Local API
        form_data = {
            **self.params,
            "return_md": "true",
        }

        # Page ranges (Local API uses start_page_id and end_page_id)
        if self.page_ranges:
            # For simplicity, if page_ranges is specified, log a warning
            # Full page range parsing would require parsing the string
            log.warning(
                f"Page ranges '{self.page_ranges}' specified but Local API uses different format. "
                "Consider using start_page_id/end_page_id parameters if needed."
            )

        try:
            with open(self.file_path, "rb") as f:
                files = {"files": (filename, f, "application/octet-stream")}

                log.info(f"Sending file to MinerU Local API: {filename}")
                log.debug(f"Local API parameters: {form_data}")

                response = requests.post(
                    f"{self.api_url}/file_parse",
                    data=form_data,
                    files=files,
                    timeout=300,  # 5 minute timeout for large documents
                )
                response.raise_for_status()

        except FileNotFoundError:
            raise HTTPException(
                status.HTTP_404_NOT_FOUND, detail=f"File not found: {self.file_path}"
            )
        except requests.Timeout:
            raise HTTPException(
                status.HTTP_504_GATEWAY_TIMEOUT,
                detail="MinerU Local API request timed out",
            )
        except requests.HTTPError as e:
            error_detail = f"MinerU Local API request failed: {e}"
            if e.response is not None:
                try:
                    error_data = e.response.json()
                    error_detail += f" - {error_data}"
                except:
                    error_detail += f" - {e.response.text}"
            raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error calling MinerU Local API: {str(e)}",
            )

        # Parse response
        try:
            result = response.json()
        except ValueError as e:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail=f"Invalid JSON response from MinerU Local API: {e}",
            )

        # Extract markdown content from response
        if "results" not in result:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail="MinerU Local API response missing 'results' field",
            )

        results = result["results"]
        if not results:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail="MinerU returned empty results",
            )

        # Get the first (and typically only) result
        file_result = list(results.values())[0]
        markdown_content = file_result.get("md_content", "")

        if not markdown_content:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail="MinerU returned empty markdown content",
            )

        log.info(f"Successfully parsed document with MinerU Local API: {filename}")

        # Create metadata
        metadata = {
            "source": filename,
            "api_mode": "local",
            "backend": result.get("backend", "unknown"),
            "version": result.get("version", "unknown"),
        }

        return [Document(page_content=markdown_content, metadata=metadata)]

    def _load_cloud_api(self) -> List[Document]:
        """
        Load document using Cloud API (asynchronous).
        Uses batch upload endpoint to avoid need for public file URLs.
        """
        log.info(f"Using MinerU Cloud API at {self.api_url}")

        filename = os.path.basename(self.file_path)

        # Step 1: Request presigned upload URL
        batch_id, upload_url = self._request_upload_url(filename)

        # Step 2: Upload file to presigned URL
        self._upload_to_presigned_url(upload_url)

        # Step 3: Poll for results
        result = self._poll_batch_status(batch_id, filename)

        # Step 4: Download and extract markdown from ZIP
        markdown_content = self._download_and_extract_zip(
            result["full_zip_url"], filename
        )

        log.info(f"Successfully parsed document with MinerU Cloud API: {filename}")

        # Create metadata
        metadata = {
            "source": filename,
            "api_mode": "cloud",
            "batch_id": batch_id,
        }

        return [Document(page_content=markdown_content, metadata=metadata)]

    def _request_upload_url(self, filename: str) -> tuple:
        """
        Request presigned upload URL from Cloud API.
        Returns (batch_id, upload_url).
        """
        headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json",
        }

        # Build request body
        request_body = {
            **self.params,
            "files": [
                {
                    "name": filename,
                    "is_ocr": self.enable_ocr,
                }
            ],
        }

        # Add page ranges if specified
        if self.page_ranges:
            request_body["files"][0]["page_ranges"] = self.page_ranges

        log.info(f"Requesting upload URL for: {filename}")
        log.debug(f"Cloud API request body: {request_body}")

        try:
            response = requests.post(
                f"{self.api_url}/file-urls/batch",
                headers=headers,
                json=request_body,
                timeout=30,
            )
            response.raise_for_status()
        except requests.HTTPError as e:
            error_detail = f"Failed to request upload URL: {e}"
            if e.response is not None:
                try:
                    error_data = e.response.json()
                    error_detail += f" - {error_data.get('msg', error_data)}"
                except:
                    error_detail += f" - {e.response.text}"
            raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error requesting upload URL: {str(e)}",
            )

        try:
            result = response.json()
        except ValueError as e:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail=f"Invalid JSON response: {e}",
            )

        # Check for API error response
        if result.get("code") != 0:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail=f"MinerU Cloud API error: {result.get('msg', 'Unknown error')}",
            )

        data = result.get("data", {})
        batch_id = data.get("batch_id")
        file_urls = data.get("file_urls", [])

        if not batch_id or not file_urls:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail="MinerU Cloud API response missing batch_id or file_urls",
            )

        upload_url = file_urls[0]
        log.info(f"Received upload URL for batch: {batch_id}")

        return batch_id, upload_url

    def _upload_to_presigned_url(self, upload_url: str) -> None:
        """
        Upload file to presigned URL (no authentication needed).
        """
        log.info(f"Uploading file to presigned URL")

        try:
            with open(self.file_path, "rb") as f:
                response = requests.put(
                    upload_url,
                    data=f,
                    timeout=300,  # 5 minute timeout for large files
                )
                response.raise_for_status()
        except FileNotFoundError:
            raise HTTPException(
                status.HTTP_404_NOT_FOUND, detail=f"File not found: {self.file_path}"
            )
        except requests.Timeout:
            raise HTTPException(
                status.HTTP_504_GATEWAY_TIMEOUT,
                detail="File upload to presigned URL timed out",
            )
        except requests.HTTPError as e:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail=f"Failed to upload file to presigned URL: {e}",
            )
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error uploading file: {str(e)}",
            )

        log.info("File uploaded successfully")

    def _poll_batch_status(self, batch_id: str, filename: str) -> dict:
        """
        Poll batch status until completion.
        Returns the result dict for the file.
        """
        headers = {
            "Authorization": f"Bearer {self.api_key}",
        }

        max_iterations = 300  # 10 minutes max (2 seconds per iteration)
        poll_interval = 2  # seconds

        log.info(f"Polling batch status: {batch_id}")

        for iteration in range(max_iterations):
            try:
                response = requests.get(
                    f"{self.api_url}/extract-results/batch/{batch_id}",
                    headers=headers,
                    timeout=30,
                )
                response.raise_for_status()
            except requests.HTTPError as e:
                error_detail = f"Failed to poll batch status: {e}"
                if e.response is not None:
                    try:
                        error_data = e.response.json()
                        error_detail += f" - {error_data.get('msg', error_data)}"
                    except:
                        error_detail += f" - {e.response.text}"
                raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
            except Exception as e:
                raise HTTPException(
                    status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=f"Error polling batch status: {str(e)}",
                )

            try:
                result = response.json()
            except ValueError as e:
                raise HTTPException(
                    status.HTTP_502_BAD_GATEWAY,
                    detail=f"Invalid JSON response while polling: {e}",
                )

            # Check for API error response
            if result.get("code") != 0:
                raise HTTPException(
                    status.HTTP_400_BAD_REQUEST,
                    detail=f"MinerU Cloud API error: {result.get('msg', 'Unknown error')}",
                )

            data = result.get("data", {})
            extract_result = data.get("extract_result", [])

            # Find our file in the batch results
            file_result = None
            for item in extract_result:
                if item.get("file_name") == filename:
                    file_result = item
                    break

            if not file_result:
                raise HTTPException(
                    status.HTTP_502_BAD_GATEWAY,
                    detail=f"File {filename} not found in batch results",
                )

            state = file_result.get("state")

            if state == "done":
                log.info(f"Processing complete for {filename}")
                return file_result
            elif state == "failed":
                error_msg = file_result.get("err_msg", "Unknown error")
                raise HTTPException(
                    status.HTTP_400_BAD_REQUEST,
                    detail=f"MinerU processing failed: {error_msg}",
                )
            elif state in ["waiting-file", "pending", "running", "converting"]:
                # Still processing
                if iteration % 10 == 0:  # Log every 20 seconds
                    log.info(
                        f"Processing status: {state} (iteration {iteration + 1}/{max_iterations})"
                    )
                time.sleep(poll_interval)
            else:
                log.warning(f"Unknown state: {state}")
                time.sleep(poll_interval)

        # Timeout
        raise HTTPException(
            status.HTTP_504_GATEWAY_TIMEOUT,
            detail="MinerU processing timed out after 10 minutes",
        )

    def _download_and_extract_zip(self, zip_url: str, filename: str) -> str:
        """
        Download ZIP file from CDN and extract markdown content.
        Returns the markdown content as a string.
        """
        log.info(f"Downloading results from: {zip_url}")

        try:
            response = requests.get(zip_url, timeout=60)
            response.raise_for_status()
        except requests.HTTPError as e:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail=f"Failed to download results ZIP: {e}",
            )
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error downloading results: {str(e)}",
            )

        # Save ZIP to temporary file and extract
        try:
            with tempfile.NamedTemporaryFile(delete=False, suffix=".zip") as tmp_zip:
                tmp_zip.write(response.content)
                tmp_zip_path = tmp_zip.name

            with tempfile.TemporaryDirectory() as tmp_dir:
                # Extract ZIP
                with zipfile.ZipFile(tmp_zip_path, "r") as zip_ref:
                    zip_ref.extractall(tmp_dir)

                # Find markdown file - search recursively for any .md file
                markdown_content = None
                found_md_path = None

                # First, list all files in the ZIP for debugging
                all_files = []
                for root, dirs, files in os.walk(tmp_dir):
                    for file in files:
                        full_path = os.path.join(root, file)
                        all_files.append(full_path)
                        # Look for any .md file
                        if file.endswith(".md"):
                            found_md_path = full_path
                            log.info(f"Found markdown file at: {full_path}")
                            try:
                                with open(full_path, "r", encoding="utf-8") as f:
                                    markdown_content = f.read()
                                if (
                                    markdown_content
                                ):  # Use the first non-empty markdown file
                                    break
                            except Exception as e:
                                log.warning(f"Failed to read {full_path}: {e}")
                    if markdown_content:
                        break

                if markdown_content is None:
                    log.error(f"Available files in ZIP: {all_files}")
                    # Try to provide more helpful error message
                    md_files = [f for f in all_files if f.endswith(".md")]
                    if md_files:
                        error_msg = (
                            f"Found .md files but couldn't read them: {md_files}"
                        )
                    else:
                        error_msg = (
                            f"No .md files found in ZIP. Available files: {all_files}"
                        )
                    raise HTTPException(
                        status.HTTP_502_BAD_GATEWAY,
                        detail=error_msg,
                    )

            # Clean up temporary ZIP file
            os.unlink(tmp_zip_path)

        except zipfile.BadZipFile as e:
            raise HTTPException(
                status.HTTP_502_BAD_GATEWAY,
                detail=f"Invalid ZIP file received: {e}",
            )
        except Exception as e:
            raise HTTPException(
                status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error extracting ZIP: {str(e)}",
            )

        if not markdown_content:
            raise HTTPException(
                status.HTTP_400_BAD_REQUEST,
                detail="Extracted markdown content is empty",
            )

        log.info(
            f"Successfully extracted markdown content ({len(markdown_content)} characters)"
        )
        return markdown_content
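A minimal usage sketch for the new loader, assuming a self-hosted MinerU instance at the default local URL; the file path and parameter values are placeholders:

```python
loader = MinerULoader(
    file_path="/tmp/example.pdf",        # placeholder path
    api_mode="local",                    # or "cloud" with api_key set
    api_url="http://localhost:8000",
    params={"enable_ocr": False, "enable_table": True, "language": "en"},
)
docs = loader.load()                     # -> [Document(page_content=<markdown>, metadata={...})]
print(docs[0].metadata["api_mode"])      # "local"
```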
@@ -30,10 +30,9 @@ class MistralLoader:
     - Enhanced error handling with retryable error classification
     """
 
-    BASE_API_URL = "https://api.mistral.ai/v1"
-
     def __init__(
         self,
+        base_url: str,
         api_key: str,
         file_path: str,
         timeout: int = 300,  # 5 minutes default

@@ -55,6 +54,9 @@ class MistralLoader:
         if not os.path.exists(file_path):
             raise FileNotFoundError(f"File not found at {file_path}")
 
+        self.base_url = (
+            base_url.rstrip("/") if base_url else "https://api.mistral.ai/v1"
+        )
         self.api_key = api_key
         self.file_path = file_path
         self.timeout = timeout

@@ -240,7 +242,7 @@ class MistralLoader:
         in a context manager to minimize memory usage duration.
         """
         log.info("Uploading file to Mistral API")
-        url = f"{self.BASE_API_URL}/files"
+        url = f"{self.base_url}/files"
 
         def upload_request():
             # MEMORY OPTIMIZATION: Use context manager to minimize file handle lifetime

@@ -275,7 +277,7 @@ class MistralLoader:
 
     async def _upload_file_async(self, session: aiohttp.ClientSession) -> str:
         """Async file upload with streaming for better memory efficiency."""
-        url = f"{self.BASE_API_URL}/files"
+        url = f"{self.base_url}/files"
 
         async def upload_request():
             # Create multipart writer for streaming upload

@@ -321,7 +323,7 @@ class MistralLoader:
     def _get_signed_url(self, file_id: str) -> str:
         """Retrieves a temporary signed URL for the uploaded file (sync version)."""
         log.info(f"Getting signed URL for file ID: {file_id}")
-        url = f"{self.BASE_API_URL}/files/{file_id}/url"
+        url = f"{self.base_url}/files/{file_id}/url"
         params = {"expiry": 1}
         signed_url_headers = {**self.headers, "Accept": "application/json"}
 

@@ -346,7 +348,7 @@ class MistralLoader:
         self, session: aiohttp.ClientSession, file_id: str
     ) -> str:
         """Async signed URL retrieval."""
-        url = f"{self.BASE_API_URL}/files/{file_id}/url"
+        url = f"{self.base_url}/files/{file_id}/url"
         params = {"expiry": 1}
 
         headers = {**self.headers, "Accept": "application/json"}

@@ -373,7 +375,7 @@ class MistralLoader:
     def _process_ocr(self, signed_url: str) -> Dict[str, Any]:
         """Sends the signed URL to the OCR endpoint for processing (sync version)."""
         log.info("Processing OCR via Mistral API")
-        url = f"{self.BASE_API_URL}/ocr"
+        url = f"{self.base_url}/ocr"
         ocr_headers = {
             **self.headers,
             "Content-Type": "application/json",

@@ -407,7 +409,7 @@ class MistralLoader:
         self, session: aiohttp.ClientSession, signed_url: str
     ) -> Dict[str, Any]:
         """Async OCR processing with timing metrics."""
-        url = f"{self.BASE_API_URL}/ocr"
+        url = f"{self.base_url}/ocr"
 
         headers = {
             **self.headers,

@@ -446,7 +448,7 @@ class MistralLoader:
     def _delete_file(self, file_id: str) -> None:
         """Deletes the file from Mistral storage (sync version)."""
         log.info(f"Deleting uploaded file ID: {file_id}")
-        url = f"{self.BASE_API_URL}/files/{file_id}"
+        url = f"{self.base_url}/files/{file_id}"
 
         try:
             response = requests.delete(

@@ -467,7 +469,7 @@ class MistralLoader:
         async def delete_request():
             self._debug_log(f"Deleting file ID: {file_id}")
             async with session.delete(
-                url=f"{self.BASE_API_URL}/files/{file_id}",
+                url=f"{self.base_url}/files/{file_id}",
                 headers=self.headers,
                 timeout=aiohttp.ClientTimeout(
                     total=self.cleanup_timeout
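The constructor change replaces the hard-coded `BASE_API_URL` with a per-instance `base_url`, so an OpenAI-compatible gateway or proxy can front the Mistral OCR API. A sketch, assuming the remaining constructor arguments keep their defaults; the gateway URL and key are placeholders:

```python
loader = MistralLoader(
    base_url="https://gateway.example.com/v1",  # empty/None falls back to https://api.mistral.ai/v1
    api_key="sk-placeholder",
    file_path="/tmp/scan.pdf",
)
docs = loader.load()  # upload -> signed URL -> OCR, all against base_url per the hunks above
```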
@@ -83,6 +83,7 @@ class YoutubeLoader:
                 TranscriptsDisabled,
                 YouTubeTranscriptApi,
             )
+            from youtube_transcript_api.proxies import GenericProxyConfig
         except ImportError:
             raise ImportError(
                 'Could not import "youtube_transcript_api" Python package. '

@@ -90,18 +91,16 @@ class YoutubeLoader:
             )
 
         if self.proxy_url:
-            youtube_proxies = {
-                "http": self.proxy_url,
-                "https": self.proxy_url,
-            }
+            youtube_proxies = GenericProxyConfig(
+                http_url=self.proxy_url, https_url=self.proxy_url
+            )
             log.debug(f"Using proxy URL: {self.proxy_url[:14]}...")
         else:
             youtube_proxies = None
 
+        transcript_api = YouTubeTranscriptApi(proxy_config=youtube_proxies)
         try:
-            transcript_list = YouTubeTranscriptApi.list_transcripts(
-                self.video_id, proxies=youtube_proxies
-            )
+            transcript_list = transcript_api.list(self.video_id)
         except Exception as e:
             log.exception("Loading YouTube transcript failed")
             return []

@@ -158,3 +157,10 @@ class YoutubeLoader:
                 f"No transcript found for any of the specified languages: {languages_tried}. Verify if the video has transcripts, add more languages if needed."
             )
         raise NoTranscriptFound(self.video_id, self.language, list(transcript_list))
+
+    async def aload(self) -> list[Document]:
+        """Asynchronously load YouTube transcripts into `Document` objects."""
+        import asyncio
+
+        loop = asyncio.get_event_loop()
+        return await loop.run_in_executor(None, self.load)
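`aload()` simply pushes the blocking `load()` onto the default executor. A usage sketch; the constructor arguments mirror the `get_loader()` call later in this diff, the video URL is a placeholder, and the list-of-language-codes shape is an assumption about `YOUTUBE_LOADER_LANGUAGE`:

```python
import asyncio

async def fetch_transcript():
    loader = YoutubeLoader(
        "https://www.youtube.com/watch?v=abc123",  # placeholder URL
        language=["en"],                           # assumption: list of language codes
        proxy_url=None,
    )
    return await loader.aload()                    # load() runs in a thread executor

docs = asyncio.run(fetch_transcript())
```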
@@ -6,6 +6,7 @@ import requests
 import hashlib
 from concurrent.futures import ThreadPoolExecutor
 import time
+import re
 
 from urllib.parse import quote
 from huggingface_hub import snapshot_download

@@ -16,13 +17,20 @@ from langchain_core.documents import Document
 from open_webui.config import VECTOR_DB
 from open_webui.retrieval.vector.factory import VECTOR_DB_CLIENT
 
 
 from open_webui.models.users import UserModel
 from open_webui.models.files import Files
 from open_webui.models.knowledge import Knowledges
+
+from open_webui.models.chats import Chats
+from open_webui.models.notes import Notes
+
 from open_webui.retrieval.vector.main import GetResult
 from open_webui.utils.access_control import has_access
+from open_webui.utils.misc import get_message_list
+
+from open_webui.retrieval.web.utils import get_web_loader
+from open_webui.retrieval.loaders.youtube import YoutubeLoader
+
 
 from open_webui.env import (
@@ -46,6 +54,34 @@ from langchain_core.callbacks import CallbackManagerForRetrieverRun
 from langchain_core.retrievers import BaseRetriever
 
 
+def is_youtube_url(url: str) -> bool:
+    youtube_regex = r"^(https?://)?(www\.)?(youtube\.com|youtu\.be)/.+$"
+    return re.match(youtube_regex, url) is not None
+
+
+def get_loader(request, url: str):
+    if is_youtube_url(url):
+        return YoutubeLoader(
+            url,
+            language=request.app.state.config.YOUTUBE_LOADER_LANGUAGE,
+            proxy_url=request.app.state.config.YOUTUBE_LOADER_PROXY_URL,
+        )
+    else:
+        return get_web_loader(
+            url,
+            verify_ssl=request.app.state.config.ENABLE_WEB_LOADER_SSL_VERIFICATION,
+            requests_per_second=request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS,
+            trust_env=request.app.state.config.WEB_SEARCH_TRUST_ENV,
+        )
+
+
+def get_content_from_url(request, url: str) -> tuple[str, list]:
+    loader = get_loader(request, url)
+    docs = loader.load()
+    content = " ".join([doc.page_content for doc in docs])
+    return content, docs
+
+
 class VectorSearchRetriever(BaseRetriever):
     collection_name: Any
     embedding_function: Any
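A quick illustration of the new URL routing helpers; the URLs are placeholders, and `request` is assumed to be a FastAPI request carrying `app.state.config`, as elsewhere in this module:

```python
# is_youtube_url() matches on the host pattern alone, so watch URLs and
# youtu.be short links both route to YoutubeLoader; everything else falls
# through to the generic web loader.
assert is_youtube_url("https://www.youtube.com/watch?v=abc123")
assert is_youtube_url("https://youtu.be/abc123")
assert not is_youtube_url("https://example.com/watch?v=abc123")

# get_content_from_url() returns both the joined text and the raw documents:
content, docs = get_content_from_url(request, "https://example.com/article")
```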
@@ -112,6 +148,44 @@ def get_doc(collection_name: str, user: UserModel = None):
         raise e
 
 
+def get_enriched_texts(collection_result: GetResult) -> list[str]:
+    enriched_texts = []
+    for idx, text in enumerate(collection_result.documents[0]):
+        metadata = collection_result.metadatas[0][idx]
+        metadata_parts = [text]
+
+        # Add filename (repeat twice for extra weight in BM25 scoring)
+        if metadata.get("name"):
+            filename = metadata["name"]
+            filename_tokens = (
+                filename.replace("_", " ").replace("-", " ").replace(".", " ")
+            )
+            metadata_parts.append(
+                f"Filename: {filename} {filename_tokens} {filename_tokens}"
+            )
+
+        # Add title if available
+        if metadata.get("title"):
+            metadata_parts.append(f"Title: {metadata['title']}")
+
+        # Add document section headings if available (from markdown splitter)
+        if metadata.get("headings") and isinstance(metadata["headings"], list):
+            headings = " > ".join(str(h) for h in metadata["headings"])
+            metadata_parts.append(f"Section: {headings}")
+
+        # Add source URL/path if available
+        if metadata.get("source"):
+            metadata_parts.append(f"Source: {metadata['source']}")
+
+        # Add snippet for web search results
+        if metadata.get("snippet"):
+            metadata_parts.append(f"Snippet: {metadata['snippet']}")
+
+        enriched_texts.append(" ".join(metadata_parts))
+
+    return enriched_texts
+
+
 def query_doc_with_hybrid_search(
     collection_name: str,
     collection_result: GetResult,
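To make the enrichment concrete, a worked example with a hand-built `GetResult`; the field values are illustrative:

```python
result = GetResult(
    ids=[["c1"]],
    documents=[["Quarterly revenue grew 12%."]],
    metadatas=[[{"name": "q3_report.pdf", "title": "Q3 Report", "source": "q3_report.pdf"}]],
)
texts = get_enriched_texts(result)
# texts[0] == "Quarterly revenue grew 12%. Filename: q3_report.pdf q3 report pdf "
#             "q3 report pdf Title: Q3 Report Source: q3_report.pdf"
# The filename tokens appear twice on purpose, boosting their BM25 weight.
```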
@@ -122,20 +196,40 @@ def query_doc_with_hybrid_search(
     k_reranker: int,
     r: float,
     hybrid_bm25_weight: float,
+    enable_enriched_texts: bool = False,
 ) -> dict:
     try:
-        if not collection_result.documents[0]:
+        # First check if collection_result has the required attributes
+        if (
+            not collection_result
+            or not hasattr(collection_result, "documents")
+            or not hasattr(collection_result, "metadatas")
+        ):
             log.warning(f"query_doc_with_hybrid_search:no_docs {collection_name}")
             return {"documents": [], "metadatas": [], "distances": []}
 
-        # BM_25 required only if weight is greater than 0
-        if hybrid_bm25_weight > 0:
-            log.debug(f"query_doc_with_hybrid_search:doc {collection_name}")
-            bm25_retriever = BM25Retriever.from_texts(
-                texts=collection_result.documents[0],
-                metadatas=collection_result.metadatas[0],
-            )
-            bm25_retriever.k = k
+        # Now safely check the documents content after confirming attributes exist
+        if (
+            not collection_result.documents
+            or len(collection_result.documents) == 0
+            or not collection_result.documents[0]
+        ):
+            log.warning(f"query_doc_with_hybrid_search:no_docs {collection_name}")
+            return {"documents": [], "metadatas": [], "distances": []}
+
+        log.debug(f"query_doc_with_hybrid_search:doc {collection_name}")
+
+        bm25_texts = (
+            get_enriched_texts(collection_result)
+            if enable_enriched_texts
+            else collection_result.documents[0]
+        )
+
+        bm25_retriever = BM25Retriever.from_texts(
+            texts=bm25_texts,
+            metadatas=collection_result.metadatas[0],
+        )
+        bm25_retriever.k = k
 
         vector_search_retriever = VectorSearchRetriever(
             collection_name=collection_name,

@@ -180,7 +274,11 @@ def query_doc_with_hybrid_search(
             zip(distances, metadatas, documents), key=lambda x: x[0], reverse=True
         )
         sorted_items = sorted_items[:k]
-        distances, documents, metadatas = map(list, zip(*sorted_items))
+
+        if sorted_items:
+            distances, documents, metadatas = map(list, zip(*sorted_items))
+        else:
+            distances, documents, metadatas = [], [], []
 
         result = {
             "distances": [distances],

@@ -224,6 +322,13 @@ def merge_and_sort_query_results(query_results: list[dict], k: int) -> dict:
     combined = dict()  # To store documents with unique document hashes
 
     for data in query_results:
+        if (
+            len(data.get("distances", [])) == 0
+            or len(data.get("documents", [])) == 0
+            or len(data.get("metadatas", [])) == 0
+        ):
+            continue
+
         distances = data["distances"][0]
         documents = data["documents"][0]
         metadatas = data["metadatas"][0]
@@ -337,28 +442,25 @@ def query_collection_with_hybrid_search(
     k_reranker: int,
     r: float,
     hybrid_bm25_weight: float,
+    enable_enriched_texts: bool = False,
 ) -> dict:
     results = []
     error = False
     # Fetch collection data once per collection sequentially
     # Avoid fetching the same data multiple times later
     collection_results = {}
-    # Only retrieve entire collection if bm_25 calculation is required
-    if hybrid_bm25_weight > 0:
-        for collection_name in collection_names:
-            try:
-                log.debug(
-                    f"query_collection_with_hybrid_search:VECTOR_DB_CLIENT.get:collection {collection_name}"
-                )
-                collection_results[collection_name] = VECTOR_DB_CLIENT.get(
-                    collection_name=collection_name
-                )
-            except Exception as e:
-                log.exception(f"Failed to fetch collection {collection_name}: {e}")
-                collection_results[collection_name] = None
-    else:
-        for collection_name in collection_names:
-            collection_results[collection_name] = []
+    for collection_name in collection_names:
+        try:
+            log.debug(
+                f"query_collection_with_hybrid_search:VECTOR_DB_CLIENT.get:collection {collection_name}"
+            )
+            collection_results[collection_name] = VECTOR_DB_CLIENT.get(
+                collection_name=collection_name
+            )
+        except Exception as e:
+            log.exception(f"Failed to fetch collection {collection_name}: {e}")
+            collection_results[collection_name] = None
 
     log.info(
         f"Starting hybrid search for {len(queries)} queries in {len(collection_names)} collections..."
     )

@@ -375,6 +477,7 @@ def query_collection_with_hybrid_search(
                 k_reranker=k_reranker,
                 r=r,
                 hybrid_bm25_weight=hybrid_bm25_weight,
+                enable_enriched_texts=enable_enriched_texts,
            )
            return result, None
        except Exception as e:
@@ -437,13 +540,14 @@ def get_embedding_function(
         if isinstance(query, list):
             embeddings = []
             for i in range(0, len(query), embedding_batch_size):
-                embeddings.extend(
-                    func(
-                        query[i : i + embedding_batch_size],
-                        prefix=prefix,
-                        user=user,
-                    )
+                batch_embeddings = func(
+                    query[i : i + embedding_batch_size],
+                    prefix=prefix,
+                    user=user,
                 )
+
+                if isinstance(batch_embeddings, list):
+                    embeddings.extend(batch_embeddings)
             return embeddings
         else:
             return func(query, prefix, user)
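The reworked loop only accumulates list-shaped batch results, so a None or error sentinel from the embedding function no longer corrupts the output. A self-contained sketch with a stubbed function (the stub is a placeholder, not a real API):

```python
def fake_embed(texts, prefix=None, user=None):
    return [[0.0, 0.1, 0.2] for _ in texts]  # one small vector per input

queries = ["alpha", "beta", "gamma", "delta", "epsilon"]
embeddings, batch_size = [], 2
for i in range(0, len(queries), batch_size):
    batch = fake_embed(queries[i : i + batch_size])
    if isinstance(batch, list):  # mirrors the new guard above
        embeddings.extend(batch)
assert len(embeddings) == len(queries)
```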
@@ -459,11 +563,13 @@ def get_reranking_function(reranking_engine, reranking_model, reranking_function):
     if reranking_function is None:
         return None
     if reranking_engine == "external":
-        return lambda sentences, user=None: reranking_function.predict(
-            sentences, user=user
+        return lambda query, documents, user=None: reranking_function.predict(
+            [(query, doc.page_content) for doc in documents], user=user
         )
     else:
-        return lambda sentences, user=None: reranking_function.predict(sentences)
+        return lambda query, documents, user=None: reranking_function.predict(
+            [(query, doc.page_content) for doc in documents]
+        )
 
 
 def get_sources_from_items(
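The returned callable now takes the query and the `Document` list and builds the (query, passage) pairs itself, so every call site stops duplicating that pairing logic. A minimal sketch with a stub model standing in for a real cross-encoder:

```python
from langchain_core.documents import Document

class StubRerankModel:  # placeholder, not a real reranker
    def predict(self, pairs, user=None):
        return [float(len(passage)) for _, passage in pairs]

rerank = get_reranking_function("", "stub-model", StubRerankModel())
docs = [Document(page_content="Open WebUI is a self-hosted AI interface.")]
scores = rerank("what is open webui?", docs)  # one score per document
```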
@@ -493,26 +599,39 @@ def get_sources_from_items(
 
         if item.get("type") == "text":
             # Raw Text
-            # Used during temporary chat file uploads
+            # Used during temporary chat file uploads or web page & youtube attachments
 
-            if item.get("file"):
-                # if item has file data, use it
-                query_result = {
-                    "documents": [
-                        [item.get("file", {}).get("data", {}).get("content")]
-                    ],
-                    "metadatas": [
-                        [item.get("file", {}).get("data", {}).get("meta", {})]
-                    ],
-                }
-            else:
-                # Fallback to item content
-                query_result = {
-                    "documents": [[item.get("content")]],
-                    "metadatas": [
-                        [{"file_id": item.get("id"), "name": item.get("name")}]
-                    ],
-                }
+            if item.get("context") == "full":
+                if item.get("file"):
+                    # if item has file data, use it
+                    query_result = {
+                        "documents": [
+                            [item.get("file", {}).get("data", {}).get("content")]
+                        ],
+                        "metadatas": [[item.get("file", {}).get("meta", {})]],
+                    }
+
+            if query_result is None:
+                # Fallback
+                if item.get("collection_name"):
+                    # If item has a collection name, use it
+                    collection_names.append(item.get("collection_name"))
+                elif item.get("file"):
+                    # If item has file data, use it
+                    query_result = {
+                        "documents": [
+                            [item.get("file", {}).get("data", {}).get("content")]
+                        ],
+                        "metadatas": [[item.get("file", {}).get("meta", {})]],
+                    }
+                else:
+                    # Fallback to item content
+                    query_result = {
+                        "documents": [[item.get("content")]],
+                        "metadatas": [
+                            [{"file_id": item.get("id"), "name": item.get("name")}]
+                        ],
+                    }
 
         elif item.get("type") == "note":
             # Note Attached

@@ -529,6 +648,37 @@ def get_sources_from_items(
                     "metadatas": [[{"file_id": note.id, "name": note.title}]],
                 }
 
+        elif item.get("type") == "chat":
+            # Chat Attached
+            chat = Chats.get_chat_by_id(item.get("id"))
+
+            if chat and (user.role == "admin" or chat.user_id == user.id):
+                messages_map = chat.chat.get("history", {}).get("messages", {})
+                message_id = chat.chat.get("history", {}).get("currentId")
+
+                if messages_map and message_id:
+                    # Reconstruct the message list in order
+                    message_list = get_message_list(messages_map, message_id)
+                    message_history = "\n".join(
+                        [
+                            f"#### {m.get('role', 'user').capitalize()}\n{m.get('content')}\n"
+                            for m in message_list
+                        ]
+                    )
+
+                    # User has access to the chat
+                    query_result = {
+                        "documents": [[message_history]],
+                        "metadatas": [[{"file_id": chat.id, "name": chat.title}]],
+                    }
+
+        elif item.get("type") == "url":
+            content, docs = get_content_from_url(request, item.get("url"))
+            if docs:
+                query_result = {
+                    "documents": [[content]],
+                    "metadatas": [[{"url": item.get("url"), "name": item.get("url")}]],
+                }
+
         elif item.get("type") == "file":
             if (
                 item.get("context") == "full"
@@ -576,45 +726,51 @@ def get_sources_from_items(
                 collection_names.append(f"file-{item['id']}")
 
         elif item.get("type") == "collection":
-            if (
-                item.get("context") == "full"
-                or request.app.state.config.BYPASS_EMBEDDING_AND_RETRIEVAL
+            # Manual Full Mode Toggle for Collection
+            knowledge_base = Knowledges.get_knowledge_by_id(item.get("id"))
+
+            if knowledge_base and (
+                user.role == "admin"
+                or knowledge_base.user_id == user.id
+                or has_access(user.id, "read", knowledge_base.access_control)
             ):
-                # Manual Full Mode Toggle for Collection
-                knowledge_base = Knowledges.get_knowledge_by_id(item.get("id"))
-
-                if knowledge_base and (
-                    user.role == "admin"
-                    or has_access(user.id, "read", knowledge_base.access_control)
+                if (
+                    item.get("context") == "full"
+                    or request.app.state.config.BYPASS_EMBEDDING_AND_RETRIEVAL
                 ):
+                    if knowledge_base and (
+                        user.role == "admin"
+                        or knowledge_base.user_id == user.id
+                        or has_access(user.id, "read", knowledge_base.access_control)
+                    ):
 
-                    file_ids = knowledge_base.data.get("file_ids", [])
+                        file_ids = knowledge_base.data.get("file_ids", [])
 
-                    documents = []
-                    metadatas = []
-                    for file_id in file_ids:
-                        file_object = Files.get_file_by_id(file_id)
+                        documents = []
+                        metadatas = []
+                        for file_id in file_ids:
+                            file_object = Files.get_file_by_id(file_id)
 
-                        if file_object:
-                            documents.append(file_object.data.get("content", ""))
-                            metadatas.append(
-                                {
-                                    "file_id": file_id,
-                                    "name": file_object.filename,
-                                    "source": file_object.filename,
-                                }
-                            )
+                            if file_object:
+                                documents.append(file_object.data.get("content", ""))
+                                metadatas.append(
+                                    {
+                                        "file_id": file_id,
+                                        "name": file_object.filename,
+                                        "source": file_object.filename,
+                                    }
+                                )
 
-                    query_result = {
-                        "documents": [documents],
-                        "metadatas": [metadatas],
-                    }
-            else:
-                # Fallback to collection names
-                if item.get("legacy"):
-                    collection_names = item.get("collection_names", [])
+                        query_result = {
+                            "documents": [documents],
+                            "metadatas": [metadatas],
+                        }
                 else:
-                    collection_names.append(item["id"])
+                    # Fallback to collection names
+                    if item.get("legacy"):
+                        collection_names = item.get("collection_names", [])
+                    else:
+                        collection_names.append(item["id"])
 
         elif item.get("docs"):
             # BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL
@@ -653,6 +809,7 @@ def get_sources_from_items(
                     k_reranker=k_reranker,
                     r=r,
                     hybrid_bm25_weight=hybrid_bm25_weight,
+                    enable_enriched_texts=request.app.state.config.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS,
                 )
             except Exception as e:
                 log.debug(

@@ -693,7 +850,6 @@ def get_sources_from_items(
                 sources.append(source)
         except Exception as e:
             log.exception(e)
-
     return sources
 
 

@@ -958,9 +1114,7 @@ class RerankCompressor(BaseDocumentCompressor):
 
         scores = None
         if reranking:
-            scores = self.reranking_function(
-                [(query, doc.page_content) for doc in documents]
-            )
+            scores = self.reranking_function(query, documents)
         else:
             from sentence_transformers import util
 
@@ -11,7 +11,7 @@ from open_webui.retrieval.vector.main import (
     SearchResult,
     GetResult,
 )
-from open_webui.retrieval.vector.utils import stringify_metadata
+from open_webui.retrieval.vector.utils import process_metadata
 
 from open_webui.config import (
     CHROMA_DATA_PATH,

@@ -146,7 +146,7 @@ class ChromaClient(VectorDBBase):
         ids = [item["id"] for item in items]
         documents = [item["text"] for item in items]
         embeddings = [item["vector"] for item in items]
-        metadatas = [stringify_metadata(item["metadata"]) for item in items]
+        metadatas = [process_metadata(item["metadata"]) for item in items]
 
         for batch in create_batches(
             api=self.client,

@@ -166,7 +166,7 @@ class ChromaClient(VectorDBBase):
         ids = [item["id"] for item in items]
         documents = [item["text"] for item in items]
         embeddings = [item["vector"] for item in items]
-        metadatas = [stringify_metadata(item["metadata"]) for item in items]
+        metadatas = [process_metadata(item["metadata"]) for item in items]
 
         collection.upsert(
             ids=ids, documents=documents, embeddings=embeddings, metadatas=metadatas
@@ -3,7 +3,7 @@ from typing import Optional
 import ssl
 from elasticsearch.helpers import bulk, scan
 
-from open_webui.retrieval.vector.utils import stringify_metadata
+from open_webui.retrieval.vector.utils import process_metadata
 from open_webui.retrieval.vector.main import (
     VectorDBBase,
     VectorItem,

@@ -245,7 +245,7 @@ class ElasticsearchClient(VectorDBBase):
                     "collection": collection_name,
                     "vector": item["vector"],
                     "text": item["text"],
-                    "metadata": stringify_metadata(item["metadata"]),
+                    "metadata": process_metadata(item["metadata"]),
                 },
             }
             for item in batch

@@ -266,7 +266,7 @@ class ElasticsearchClient(VectorDBBase):
                     "collection": collection_name,
                     "vector": item["vector"],
                     "text": item["text"],
-                    "metadata": stringify_metadata(item["metadata"]),
+                    "metadata": process_metadata(item["metadata"]),
                 },
                 "doc_as_upsert": True,
             }
@ -6,7 +6,7 @@ import json
|
|||
import logging
|
||||
from typing import Optional
|
||||
|
||||
from open_webui.retrieval.vector.utils import stringify_metadata
|
||||
from open_webui.retrieval.vector.utils import process_metadata
|
||||
from open_webui.retrieval.vector.main import (
|
||||
VectorDBBase,
|
||||
VectorItem,
|
||||
|
|
@ -22,6 +22,8 @@ from open_webui.config import (
|
|||
MILVUS_HNSW_M,
|
||||
MILVUS_HNSW_EFCONSTRUCTION,
|
||||
MILVUS_IVF_FLAT_NLIST,
|
||||
MILVUS_DISKANN_MAX_DEGREE,
|
||||
MILVUS_DISKANN_SEARCH_LIST_SIZE,
|
||||
)
|
||||
from open_webui.env import SRC_LOG_LEVELS
|
||||
|
||||
|
|
@ -131,12 +133,18 @@ class MilvusClient(VectorDBBase):
|
|||
elif index_type == "IVF_FLAT":
|
||||
index_creation_params = {"nlist": MILVUS_IVF_FLAT_NLIST}
|
||||
log.info(f"IVF_FLAT params: {index_creation_params}")
|
||||
elif index_type == "DISKANN":
|
||||
index_creation_params = {
|
||||
"max_degree": MILVUS_DISKANN_MAX_DEGREE,
|
||||
"search_list_size": MILVUS_DISKANN_SEARCH_LIST_SIZE,
|
||||
}
|
||||
log.info(f"DISKANN params: {index_creation_params}")
|
||||
elif index_type in ["FLAT", "AUTOINDEX"]:
|
||||
log.info(f"Using {index_type} index with no specific build-time params.")
|
||||
else:
|
||||
log.warning(
|
||||
f"Unsupported MILVUS_INDEX_TYPE: '{index_type}'. "
|
||||
f"Supported types: HNSW, IVF_FLAT, FLAT, AUTOINDEX. "
|
||||
f"Supported types: HNSW, IVF_FLAT, DISKANN, FLAT, AUTOINDEX. "
|
||||
f"Milvus will use its default for the collection if this type is not directly supported for index creation."
|
||||
)
|
||||
# For unsupported types, pass the type directly to Milvus; it might handle it or use a default.
|
||||
|
|
@ -189,7 +197,7 @@ class MilvusClient(VectorDBBase):
|
|||
)
|
||||
return self._result_to_search_result(result)
|
||||
|
||||
def query(self, collection_name: str, filter: dict, limit: Optional[int] = None):
|
||||
def query(self, collection_name: str, filter: dict, limit: int = -1):
|
||||
connections.connect(uri=MILVUS_URI, token=MILVUS_TOKEN, db_name=MILVUS_DB)
|
||||
|
||||
# Construct the filter string for querying
|
||||
|
|
@ -222,7 +230,7 @@ class MilvusClient(VectorDBBase):
|
|||
"data",
|
||||
"metadata",
|
||||
],
|
||||
limit=limit, # Pass the limit directly; None means no limit.
|
||||
limit=limit, # Pass the limit directly; -1 means no limit.
|
||||
)
|
||||
|
||||
while True:
|
||||
|
|
@ -249,7 +257,7 @@ class MilvusClient(VectorDBBase):
|
|||
)
|
||||
# Using query with a trivial filter to get all items.
|
||||
# This will use the paginated query logic.
|
||||
return self.query(collection_name=collection_name, filter={}, limit=None)
|
||||
return self.query(collection_name=collection_name, filter={}, limit=-1)
|
||||
|
||||
def insert(self, collection_name: str, items: list[VectorItem]):
|
||||
# Insert the items into the collection, if the collection does not exist, it will be created.
|
||||
|
|
@ -281,7 +289,7 @@ class MilvusClient(VectorDBBase):
|
|||
"id": item["id"],
|
||||
"vector": item["vector"],
|
||||
"data": {"text": item["text"]},
|
||||
"metadata": stringify_metadata(item["metadata"]),
|
||||
"metadata": process_metadata(item["metadata"]),
|
||||
}
|
||||
for item in items
|
||||
],
|
||||
|
|
@ -317,7 +325,7 @@ class MilvusClient(VectorDBBase):
|
|||
"id": item["id"],
|
||||
"vector": item["vector"],
|
||||
"data": {"text": item["text"]},
|
||||
"metadata": stringify_metadata(item["metadata"]),
|
||||
"metadata": process_metadata(item["metadata"]),
|
||||
}
|
||||
for item in items
|
||||
],
|
||||
|
|
|
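With `MILVUS_INDEX_TYPE` set to `DISKANN`, the new branch above produces build-time parameters of the following shape. The numbers here are illustrative placeholders, not the project defaults, which come from the two new config values:

```python
# Shape of what the DISKANN branch builds; 56 and 100 are illustrative only.
index_creation_params = {
    "max_degree": 56,
    "search_list_size": 100,
}
index_params = {
    "metric_type": "COSINE",  # assumption: mirrors whatever MILVUS_METRIC_TYPE is set to
    "index_type": "DISKANN",
    "params": index_creation_params,
}
```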
backend/open_webui/retrieval/vector/dbs/milvus_multitenancy.py  (new file, +282)
@@ -0,0 +1,282 @@
import logging
from typing import Optional, Tuple, List, Dict, Any

from open_webui.config import (
    MILVUS_URI,
    MILVUS_TOKEN,
    MILVUS_DB,
    MILVUS_COLLECTION_PREFIX,
    MILVUS_INDEX_TYPE,
    MILVUS_METRIC_TYPE,
    MILVUS_HNSW_M,
    MILVUS_HNSW_EFCONSTRUCTION,
    MILVUS_IVF_FLAT_NLIST,
)
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.vector.main import (
    GetResult,
    SearchResult,
    VectorDBBase,
    VectorItem,
)
from pymilvus import (
    connections,
    utility,
    Collection,
    CollectionSchema,
    FieldSchema,
    DataType,
)

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])

RESOURCE_ID_FIELD = "resource_id"


class MilvusClient(VectorDBBase):
    def __init__(self):
        # Milvus collection names can only contain numbers, letters, and underscores.
        self.collection_prefix = MILVUS_COLLECTION_PREFIX.replace("-", "_")
        connections.connect(
            alias="default",
            uri=MILVUS_URI,
            token=MILVUS_TOKEN,
            db_name=MILVUS_DB,
        )

        # Main collection types for multi-tenancy
        self.MEMORY_COLLECTION = f"{self.collection_prefix}_memories"
        self.KNOWLEDGE_COLLECTION = f"{self.collection_prefix}_knowledge"
        self.FILE_COLLECTION = f"{self.collection_prefix}_files"
        self.WEB_SEARCH_COLLECTION = f"{self.collection_prefix}_web_search"
        self.HASH_BASED_COLLECTION = f"{self.collection_prefix}_hash_based"
        self.shared_collections = [
            self.MEMORY_COLLECTION,
            self.KNOWLEDGE_COLLECTION,
            self.FILE_COLLECTION,
            self.WEB_SEARCH_COLLECTION,
            self.HASH_BASED_COLLECTION,
        ]

    def _get_collection_and_resource_id(self, collection_name: str) -> Tuple[str, str]:
        """
        Maps the traditional collection name to multi-tenant collection and resource ID.

        WARNING: This mapping relies on current Open WebUI naming conventions for
        collection names. If Open WebUI changes how it generates collection names
        (e.g., "user-memory-" prefix, "file-" prefix, web search patterns, or hash
        formats), this mapping will break and route data to incorrect collections,
        POTENTIALLY CAUSING HUGE DATA CORRUPTION, DATA CONSISTENCY ISSUES AND INCORRECT
        DATA MAPPING INSIDE THE DATABASE.
        """
        resource_id = collection_name

        if collection_name.startswith("user-memory-"):
            return self.MEMORY_COLLECTION, resource_id
        elif collection_name.startswith("file-"):
            return self.FILE_COLLECTION, resource_id
        elif collection_name.startswith("web-search-"):
            return self.WEB_SEARCH_COLLECTION, resource_id
        elif len(collection_name) == 63 and all(
            c in "0123456789abcdef" for c in collection_name
        ):
            return self.HASH_BASED_COLLECTION, resource_id
        else:
            return self.KNOWLEDGE_COLLECTION, resource_id

    def _create_shared_collection(self, mt_collection_name: str, dimension: int):
        fields = [
            FieldSchema(
                name="id",
                dtype=DataType.VARCHAR,
                is_primary=True,
                auto_id=False,
                max_length=36,
            ),
            FieldSchema(name="vector", dtype=DataType.FLOAT_VECTOR, dim=dimension),
            FieldSchema(name="text", dtype=DataType.VARCHAR, max_length=65535),
            FieldSchema(name="metadata", dtype=DataType.JSON),
            FieldSchema(name=RESOURCE_ID_FIELD, dtype=DataType.VARCHAR, max_length=255),
        ]
        schema = CollectionSchema(fields, "Shared collection for multi-tenancy")
        collection = Collection(mt_collection_name, schema)

        index_params = {
            "metric_type": MILVUS_METRIC_TYPE,
            "index_type": MILVUS_INDEX_TYPE,
            "params": {},
        }
        if MILVUS_INDEX_TYPE == "HNSW":
            index_params["params"] = {
                "M": MILVUS_HNSW_M,
                "efConstruction": MILVUS_HNSW_EFCONSTRUCTION,
            }
        elif MILVUS_INDEX_TYPE == "IVF_FLAT":
            index_params["params"] = {"nlist": MILVUS_IVF_FLAT_NLIST}

        collection.create_index("vector", index_params)
        collection.create_index(RESOURCE_ID_FIELD)
        log.info(f"Created shared collection: {mt_collection_name}")
        return collection

    def _ensure_collection(self, mt_collection_name: str, dimension: int):
        if not utility.has_collection(mt_collection_name):
            self._create_shared_collection(mt_collection_name, dimension)

    def has_collection(self, collection_name: str) -> bool:
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return False

        collection = Collection(mt_collection)
        collection.load()
        res = collection.query(expr=f"{RESOURCE_ID_FIELD} == '{resource_id}'", limit=1)
        return len(res) > 0

    def upsert(self, collection_name: str, items: List[VectorItem]):
        if not items:
            return
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        dimension = len(items[0]["vector"])
        self._ensure_collection(mt_collection, dimension)
        collection = Collection(mt_collection)

        entities = [
            {
                "id": item["id"],
                "vector": item["vector"],
                "text": item["text"],
                "metadata": item["metadata"],
                RESOURCE_ID_FIELD: resource_id,
            }
            for item in items
        ]
        collection.insert(entities)
        collection.flush()

    def search(
        self, collection_name: str, vectors: List[List[float]], limit: int
    ) -> Optional[SearchResult]:
        if not vectors:
            return None

        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return None

        collection = Collection(mt_collection)
        collection.load()

        search_params = {"metric_type": MILVUS_METRIC_TYPE, "params": {}}
        results = collection.search(
            data=vectors,
            anns_field="vector",
            param=search_params,
            limit=limit,
            expr=f"{RESOURCE_ID_FIELD} == '{resource_id}'",
            output_fields=["id", "text", "metadata"],
        )

        ids, documents, metadatas, distances = [], [], [], []
        for hits in results:
            batch_ids, batch_docs, batch_metadatas, batch_dists = [], [], [], []
            for hit in hits:
                batch_ids.append(hit.entity.get("id"))
                batch_docs.append(hit.entity.get("text"))
                batch_metadatas.append(hit.entity.get("metadata"))
                batch_dists.append(hit.distance)
            ids.append(batch_ids)
            documents.append(batch_docs)
            metadatas.append(batch_metadatas)
            distances.append(batch_dists)

        return SearchResult(
            ids=ids, documents=documents, metadatas=metadatas, distances=distances
        )

    def delete(
        self,
        collection_name: str,
        ids: Optional[List[str]] = None,
        filter: Optional[Dict[str, Any]] = None,
    ):
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return

        collection = Collection(mt_collection)

        # Build expression
        expr = [f"{RESOURCE_ID_FIELD} == '{resource_id}'"]
        if ids:
            # Milvus expects a string list for 'in' operator
            id_list_str = ", ".join([f"'{id_val}'" for id_val in ids])
            expr.append(f"id in [{id_list_str}]")

        if filter:
            for key, value in filter.items():
                expr.append(f"metadata['{key}'] == '{value}'")

        collection.delete(" and ".join(expr))

    def reset(self):
        for collection_name in self.shared_collections:
            if utility.has_collection(collection_name):
                utility.drop_collection(collection_name)

    def delete_collection(self, collection_name: str):
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return

        collection = Collection(mt_collection)
        collection.delete(f"{RESOURCE_ID_FIELD} == '{resource_id}'")

    def query(
        self, collection_name: str, filter: Dict[str, Any], limit: Optional[int] = None
    ) -> Optional[GetResult]:
        mt_collection, resource_id = self._get_collection_and_resource_id(
            collection_name
        )
        if not utility.has_collection(mt_collection):
            return None

        collection = Collection(mt_collection)
        collection.load()

        expr = [f"{RESOURCE_ID_FIELD} == '{resource_id}'"]
        if filter:
            for key, value in filter.items():
                if isinstance(value, str):
                    expr.append(f"metadata['{key}'] == '{value}'")
                else:
                    expr.append(f"metadata['{key}'] == {value}")

        results = collection.query(
            expr=" and ".join(expr),
            output_fields=["id", "text", "metadata"],
            limit=limit,
        )

        ids = [res["id"] for res in results]
        documents = [res["text"] for res in results]
        metadatas = [res["metadata"] for res in results]

        return GetResult(ids=[ids], documents=[documents], metadatas=[metadatas])

    def get(self, collection_name: str) -> Optional[GetResult]:
        return self.query(collection_name, filter={}, limit=None)

    def insert(self, collection_name: str, items: List[VectorItem]):
        return self.upsert(collection_name, items)
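How legacy per-resource collection names map onto the shared collections, assuming a reachable Milvus instance for the constructor's `connections.connect` call and the default `open_webui` collection prefix (an assumption; the prefix comes from `MILVUS_COLLECTION_PREFIX`):

```python
client = MilvusClient()

# Prefix-based routing; the resource id is always the original name.
assert client._get_collection_and_resource_id("user-memory-42") == (
    "open_webui_memories", "user-memory-42",
)
assert client._get_collection_and_resource_id("file-abc") == (
    "open_webui_files", "file-abc",
)
# 63-character lowercase hex names (content hashes) go to the hash bucket.
assert client._get_collection_and_resource_id("a" * 63) == (
    "open_webui_hash_based", "a" * 63,
)
# Anything else is treated as a knowledge collection.
assert client._get_collection_and_resource_id("my-kb")[0] == "open_webui_knowledge"
```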
@@ -2,7 +2,7 @@ from opensearchpy import OpenSearch
 from opensearchpy.helpers import bulk
 from typing import Optional
 
-from open_webui.retrieval.vector.utils import stringify_metadata
+from open_webui.retrieval.vector.utils import process_metadata
 from open_webui.retrieval.vector.main import (
     VectorDBBase,
     VectorItem,

@@ -201,7 +201,7 @@ class OpenSearchClient(VectorDBBase):
                 "_source": {
                     "vector": item["vector"],
                     "text": item["text"],
-                    "metadata": stringify_metadata(item["metadata"]),
+                    "metadata": process_metadata(item["metadata"]),
                 },
             }
             for item in batch

@@ -223,7 +223,7 @@ class OpenSearchClient(VectorDBBase):
                 "doc": {
                     "vector": item["vector"],
                     "text": item["text"],
-                    "metadata": stringify_metadata(item["metadata"]),
+                    "metadata": process_metadata(item["metadata"]),
                 },
                 "doc_as_upsert": True,
             }
@@ -717,7 +717,7 @@ class Oracle23aiClient(VectorDBBase):
         )
 
         try:
-            limit = limit or 1000
+            limit = 1000  # Hardcoded limit for get operation
 
             with self.get_connection() as connection:
                 with connection.cursor() as cursor:
@@ -1,4 +1,4 @@
-from typing import Optional, List, Dict, Any
+from typing import Optional, List, Dict, Any, Tuple
 import logging
 import json
 from sqlalchemy import (

@@ -22,12 +22,12 @@ from sqlalchemy.pool import NullPool, QueuePool
 
 from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker
 from sqlalchemy.dialects.postgresql import JSONB, array
-from pgvector.sqlalchemy import Vector
+from pgvector.sqlalchemy import Vector, HALFVEC
 from sqlalchemy.ext.mutable import MutableDict
 from sqlalchemy.exc import NoSuchTableError
 
 
-from open_webui.retrieval.vector.utils import stringify_metadata
+from open_webui.retrieval.vector.utils import process_metadata
 from open_webui.retrieval.vector.main import (
     VectorDBBase,
     VectorItem,

@@ -37,17 +37,27 @@ from open_webui.retrieval.vector.main import (
 from open_webui.config import (
     PGVECTOR_DB_URL,
     PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH,
+    PGVECTOR_CREATE_EXTENSION,
     PGVECTOR_PGCRYPTO,
     PGVECTOR_PGCRYPTO_KEY,
     PGVECTOR_POOL_SIZE,
     PGVECTOR_POOL_MAX_OVERFLOW,
     PGVECTOR_POOL_TIMEOUT,
     PGVECTOR_POOL_RECYCLE,
+    PGVECTOR_INDEX_METHOD,
+    PGVECTOR_HNSW_M,
+    PGVECTOR_HNSW_EF_CONSTRUCTION,
+    PGVECTOR_IVFFLAT_LISTS,
+    PGVECTOR_USE_HALFVEC,
 )
 
 from open_webui.env import SRC_LOG_LEVELS
 
 VECTOR_LENGTH = PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH
+USE_HALFVEC = PGVECTOR_USE_HALFVEC
+
+VECTOR_TYPE_FACTORY = HALFVEC if USE_HALFVEC else Vector
+VECTOR_OPCLASS = "halfvec_cosine_ops" if USE_HALFVEC else "vector_cosine_ops"
 Base = declarative_base()
 
 log = logging.getLogger(__name__)
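The two new module constants fully determine the column type and index operator class. A small sketch of the derivation; `halfvec` stores 16-bit floats, which is what allows indexing past pgvector's 2000-dimension limit for full-precision vectors (the log message in the hunk below makes the same point):

```python
# Mirror of the derivation above, with PGVECTOR_USE_HALFVEC=True for illustration.
USE_HALFVEC = True
vector_type = "halfvec" if USE_HALFVEC else "vector"                    # column type
opclass = "halfvec_cosine_ops" if USE_HALFVEC else "vector_cosine_ops"  # index opclass
# An hnsw index built from these would read roughly:
#   CREATE INDEX ... USING hnsw (vector halfvec_cosine_ops)
#   WITH (m = <PGVECTOR_HNSW_M>, ef_construction = <PGVECTOR_HNSW_EF_CONSTRUCTION>)
```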
@@ -66,7 +76,7 @@ class DocumentChunk(Base):
     __tablename__ = "document_chunk"

     id = Column(Text, primary_key=True)
-    vector = Column(Vector(dim=VECTOR_LENGTH), nullable=True)
+    vector = Column(VECTOR_TYPE_FACTORY(dim=VECTOR_LENGTH), nullable=True)
     collection_name = Column(Text, nullable=False)

 if PGVECTOR_PGCRYPTO:
@@ -112,18 +122,19 @@ class PgvectorClient(VectorDBBase):
         try:
             # Ensure the pgvector extension is available
             # Use a conditional check to avoid permission issues on Azure PostgreSQL
-            self.session.execute(
-                text(
-                    """
-                    DO $$
-                    BEGIN
-                        IF NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'vector') THEN
-                            CREATE EXTENSION IF NOT EXISTS vector;
-                        END IF;
-                    END $$;
-                    """
+            if PGVECTOR_CREATE_EXTENSION:
+                self.session.execute(
+                    text(
+                        """
+                        DO $$
+                        BEGIN
+                            IF NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'vector') THEN
+                                CREATE EXTENSION IF NOT EXISTS vector;
+                            END IF;
+                        END $$;
+                        """
+                    )
                 )
-            )

             if PGVECTOR_PGCRYPTO:
                 # Ensure the pgcrypto extension is available for encryption
@@ -155,13 +166,9 @@ class PgvectorClient(VectorDBBase):
             connection = self.session.connection()
             Base.metadata.create_all(bind=connection)

             # Create an index on the vector column if it doesn't exist
-            self.session.execute(
-                text(
-                    "CREATE INDEX IF NOT EXISTS idx_document_chunk_vector "
-                    "ON document_chunk USING ivfflat (vector vector_cosine_ops) WITH (lists = 100);"
-                )
-            )
+            index_method, index_options = self._vector_index_configuration()
+            self._ensure_vector_index(index_method, index_options)

             self.session.execute(
                 text(
                     "CREATE INDEX IF NOT EXISTS idx_document_chunk_collection_name "
@@ -175,6 +182,78 @@ class PgvectorClient(VectorDBBase):
             log.exception(f"Error during initialization: {e}")
             raise

+    @staticmethod
+    def _extract_index_method(index_def: Optional[str]) -> Optional[str]:
+        if not index_def:
+            return None
+        try:
+            after_using = index_def.lower().split("using ", 1)[1]
+            return after_using.split()[0]
+        except (IndexError, AttributeError):
+            return None
+
+    def _vector_index_configuration(self) -> Tuple[str, str]:
+        if PGVECTOR_INDEX_METHOD:
+            index_method = PGVECTOR_INDEX_METHOD
+            log.info(
+                "Using vector index method '%s' from PGVECTOR_INDEX_METHOD.",
+                index_method,
+            )
+        elif USE_HALFVEC:
+            index_method = "hnsw"
+            log.info(
+                "VECTOR_LENGTH=%s exceeds 2000; using halfvec column type with hnsw index.",
+                VECTOR_LENGTH,
+            )
+        else:
+            index_method = "ivfflat"
+
+        if index_method == "hnsw":
+            index_options = f"WITH (m = {PGVECTOR_HNSW_M}, ef_construction = {PGVECTOR_HNSW_EF_CONSTRUCTION})"
+        else:
+            index_options = f"WITH (lists = {PGVECTOR_IVFFLAT_LISTS})"
+
+        return index_method, index_options
+
+    def _ensure_vector_index(self, index_method: str, index_options: str) -> None:
+        index_name = "idx_document_chunk_vector"
+        existing_index_def = self.session.execute(
+            text(
+                """
+                SELECT indexdef
+                FROM pg_indexes
+                WHERE schemaname = current_schema()
+                  AND tablename = 'document_chunk'
+                  AND indexname = :index_name
+                """
+            ),
+            {"index_name": index_name},
+        ).scalar()
+
+        existing_method = self._extract_index_method(existing_index_def)
+        if existing_method and existing_method != index_method:
+            raise RuntimeError(
+                f"Existing pgvector index '{index_name}' uses method '{existing_method}' but configuration now "
+                f"requires '{index_method}'. Automatic rebuild is disabled to prevent long-running maintenance. "
+                "Drop the index manually (optionally after tuning maintenance_work_mem/max_parallel_maintenance_workers) "
+                "and recreate it with the new method before restarting Open WebUI."
+            )
+
+        if not existing_index_def:
+            index_sql = (
+                f"CREATE INDEX IF NOT EXISTS {index_name} "
+                f"ON document_chunk USING {index_method} (vector {VECTOR_OPCLASS})"
+            )
+            if index_options:
+                index_sql = f"{index_sql} {index_options}"
+            self.session.execute(text(index_sql))
+            log.info(
+                "Ensured vector index '%s' using %s%s.",
+                index_name,
+                index_method,
+                f" {index_options}" if index_options else "",
+            )
+
     def check_vector_length(self) -> None:
         """
         Check if the VECTOR_LENGTH matches the existing vector column dimension in the database.
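The two helpers added above translate configuration into a single CREATE INDEX statement. A rough sketch of the SQL shapes they emit (index name and operator classes taken from the diff; the parameter values here are illustrative placeholders, not the shipped defaults):

# Illustrative sketch of the SQL produced by the helpers above.
def build_index_sql(method: str, opclass: str, m: int = 16,
                    ef_construction: int = 64, lists: int = 100) -> str:
    if method == "hnsw":
        options = f"WITH (m = {m}, ef_construction = {ef_construction})"
    else:  # ivfflat
        options = f"WITH (lists = {lists})"
    return (
        "CREATE INDEX IF NOT EXISTS idx_document_chunk_vector "
        f"ON document_chunk USING {method} (vector {opclass}) {options}"
    )

print(build_index_sql("hnsw", "halfvec_cosine_ops"))
print(build_index_sql("ivfflat", "vector_cosine_ops"))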
@@ -194,16 +273,19 @@ class PgvectorClient(VectorDBBase):
             if "vector" in document_chunk_table.columns:
                 vector_column = document_chunk_table.columns["vector"]
                 vector_type = vector_column.type
-                if isinstance(vector_type, Vector):
-                    db_vector_length = vector_type.dim
-                    if db_vector_length != VECTOR_LENGTH:
-                        raise Exception(
-                            f"VECTOR_LENGTH {VECTOR_LENGTH} does not match existing vector column dimension {db_vector_length}. "
-                            "Cannot change vector size after initialization without migrating the data."
-                        )
-                else:
+                expected_type = HALFVEC if USE_HALFVEC else Vector
+
+                if not isinstance(vector_type, expected_type):
                     raise Exception(
-                        "The 'vector' column exists but is not of type 'Vector'."
+                        "The 'vector' column type does not match the expected type "
+                        f"('{expected_type.__name__}') for VECTOR_LENGTH {VECTOR_LENGTH}."
                     )
+
+                db_vector_length = getattr(vector_type, "dim", None)
+                if db_vector_length is not None and db_vector_length != VECTOR_LENGTH:
+                    raise Exception(
+                        f"VECTOR_LENGTH {VECTOR_LENGTH} does not match existing vector column dimension {db_vector_length}. "
+                        "Cannot change vector size after initialization without migrating the data."
+                    )
             else:
                 raise Exception(
@@ -263,7 +345,7 @@ class PgvectorClient(VectorDBBase):
                     vector=vector,
                     collection_name=collection_name,
                     text=item["text"],
-                    vmetadata=stringify_metadata(item["metadata"]),
+                    vmetadata=process_metadata(item["metadata"]),
                 )
                 new_items.append(new_chunk)
             self.session.bulk_save_objects(new_items)
@@ -321,7 +403,7 @@ class PgvectorClient(VectorDBBase):
                 if existing:
                     existing.vector = vector
                     existing.text = item["text"]
-                    existing.vmetadata = stringify_metadata(item["metadata"])
+                    existing.vmetadata = process_metadata(item["metadata"])
                     existing.collection_name = (
                         collection_name  # Update collection_name if necessary
                     )
@@ -331,7 +413,7 @@ class PgvectorClient(VectorDBBase):
                         vector=vector,
                         collection_name=collection_name,
                         text=item["text"],
-                        vmetadata=stringify_metadata(item["metadata"]),
+                        vmetadata=process_metadata(item["metadata"]),
                     )
                     self.session.add(new_chunk)
             self.session.commit()
@@ -358,11 +440,11 @@ class PgvectorClient(VectorDBBase):
         num_queries = len(vectors)

         def vector_expr(vector):
-            return cast(array(vector), Vector(VECTOR_LENGTH))
+            return cast(array(vector), VECTOR_TYPE_FACTORY(VECTOR_LENGTH))

         # Create the values for query vectors
         qid_col = column("qid", Integer)
-        q_vector_col = column("q_vector", Vector(VECTOR_LENGTH))
+        q_vector_col = column("q_vector", VECTOR_TYPE_FACTORY(VECTOR_LENGTH))
         query_vectors = (
             values(qid_col, q_vector_col)
             .data(
@@ -32,7 +32,7 @@ from open_webui.config import (
     PINECONE_CLOUD,
 )
 from open_webui.env import SRC_LOG_LEVELS
-from open_webui.retrieval.vector.utils import stringify_metadata
+from open_webui.retrieval.vector.utils import process_metadata


 NO_LIMIT = 10000  # Reasonable limit to avoid overwhelming the system
@@ -185,7 +185,7 @@ class PineconeClient(VectorDBBase):
             point = {
                 "id": item["id"],
                 "values": item["vector"],
-                "metadata": stringify_metadata(metadata),
+                "metadata": process_metadata(metadata),
             }
             points.append(point)
         return points
@@ -105,6 +105,13 @@ class QdrantClient(VectorDBBase):

         Returns:
             tuple: (collection_name, tenant_id)
+
+        WARNING: This mapping relies on current Open WebUI naming conventions for
+        collection names. If Open WebUI changes how it generates collection names
+        (e.g., "user-memory-" prefix, "file-" prefix, web search patterns, or hash
+        formats), this mapping will break and route data to incorrect collections,
+        potentially causing severe data corruption, consistency issues, and
+        incorrect data mapping inside the database.
         """
         # Check for user memory collections
         tenant_id = collection_name
@@ -1,4 +1,4 @@
-from open_webui.retrieval.vector.utils import stringify_metadata
+from open_webui.retrieval.vector.utils import process_metadata
 from open_webui.retrieval.vector.main import (
     VectorDBBase,
     VectorItem,
@@ -117,15 +117,16 @@ class S3VectorClient(VectorDBBase):

     def has_collection(self, collection_name: str) -> bool:
         """
-        Check if a vector index (collection) exists in the S3 vector bucket.
+        Check if a vector index exists using direct lookup.
+        This avoids pagination issues with list_indexes() and is significantly faster.
         """
         try:
-            response = self.client.list_indexes(vectorBucketName=self.bucket_name)
-            indexes = response.get("indexes", [])
-            return any(idx.get("indexName") == collection_name for idx in indexes)
+            self.client.get_index(
+                vectorBucketName=self.bucket_name, indexName=collection_name
+            )
+            return True
         except Exception as e:
-            log.error(f"Error listing indexes: {e}")
+            log.error(f"Error checking if index '{collection_name}' exists: {e}")
             return False

     def delete_collection(self, collection_name: str) -> None:
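The rewrite above swaps an O(n) listing, which also risked missing indexes beyond the first page of results, for a single lookup that either succeeds or raises. A sketch of the contrast, assuming a client with the same S3 Vectors calls used in this hunk:

# Hypothetical sketch contrasting the old and new existence checks;
# `client` is assumed to expose the same S3 Vectors API used above.
def has_collection_old(client, bucket: str, name: str) -> bool:
    # Only inspects one page of results; slow and incorrect at scale.
    response = client.list_indexes(vectorBucketName=bucket)
    return any(i.get("indexName") == name for i in response.get("indexes", []))

def has_collection_new(client, bucket: str, name: str) -> bool:
    # Direct lookup: one request, no pagination concerns.
    try:
        client.get_index(vectorBucketName=bucket, indexName=name)
        return True
    except Exception:
        return False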
@@ -185,7 +186,7 @@ class S3VectorClient(VectorDBBase):
             metadata["text"] = item["text"]

             # Convert metadata to string format for consistency
-            metadata = stringify_metadata(metadata)
+            metadata = process_metadata(metadata)

             # Filter metadata to comply with S3 Vector API limit of 10 keys
             metadata = self._filter_metadata(metadata, item["id"])
@@ -256,7 +257,7 @@ class S3VectorClient(VectorDBBase):
             metadata["text"] = item["text"]

             # Convert metadata to string format for consistency
-            metadata = stringify_metadata(metadata)
+            metadata = process_metadata(metadata)

             # Filter metadata to comply with S3 Vector API limit of 10 keys
             metadata = self._filter_metadata(metadata, item["id"])
@@ -1,6 +1,10 @@
 from open_webui.retrieval.vector.main import VectorDBBase
 from open_webui.retrieval.vector.type import VectorType
-from open_webui.config import VECTOR_DB, ENABLE_QDRANT_MULTITENANCY_MODE
+from open_webui.config import (
+    VECTOR_DB,
+    ENABLE_QDRANT_MULTITENANCY_MODE,
+    ENABLE_MILVUS_MULTITENANCY_MODE,
+)


 class Vector:
|
|||
"""
|
||||
match vector_type:
|
||||
case VectorType.MILVUS:
|
||||
from open_webui.retrieval.vector.dbs.milvus import MilvusClient
|
||||
if ENABLE_MILVUS_MULTITENANCY_MODE:
|
||||
from open_webui.retrieval.vector.dbs.milvus_multitenancy import (
|
||||
MilvusClient,
|
||||
)
|
||||
|
||||
return MilvusClient()
|
||||
return MilvusClient()
|
||||
else:
|
||||
from open_webui.retrieval.vector.dbs.milvus import MilvusClient
|
||||
|
||||
return MilvusClient()
|
||||
case VectorType.QDRANT:
|
||||
if ENABLE_QDRANT_MULTITENANCY_MODE:
|
||||
from open_webui.retrieval.vector.dbs.qdrant_multitenancy import (
|
||||
|
|
|
|||
|
|
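The factory change above gates the Milvus import behind the multitenancy flag so that only one client implementation is ever imported. A condensed sketch of the lazy-import dispatch (module paths taken from the diff; the same pattern already applies to Qdrant):

# Condensed sketch of the lazy-import dispatch used by the Vector factory.
def get_milvus_client(multitenancy_enabled: bool):
    if multitenancy_enabled:
        # Tenant-aware client: maps Open WebUI collection names onto
        # shared collections keyed by tenant id.
        from open_webui.retrieval.vector.dbs.milvus_multitenancy import MilvusClient
    else:
        from open_webui.retrieval.vector.dbs.milvus import MilvusClient
    return MilvusClient()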
@@ -1,10 +1,24 @@
 from datetime import datetime

 KEYS_TO_EXCLUDE = ["content", "pages", "tables", "paragraphs", "sections", "figures"]

-def stringify_metadata(
+
+def filter_metadata(metadata: dict[str, any]) -> dict[str, any]:
+    metadata = {
+        key: value for key, value in metadata.items() if key not in KEYS_TO_EXCLUDE
+    }
+    return metadata
+
+
+def process_metadata(
     metadata: dict[str, any],
 ) -> dict[str, any]:
     for key, value in metadata.items():
+        # Remove large fields
+        if key in KEYS_TO_EXCLUDE:
+            del metadata[key]
+
         # Convert non-serializable fields to strings
         if (
             isinstance(value, datetime)
             or isinstance(value, list)
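One caveat with `process_metadata` as written above: deleting keys from a dict while iterating over it raises a RuntimeError in CPython. A defensive variant that iterates over a snapshot of the items (a sketch only, not the shipped code):

# Sketch: same intent as process_metadata, but safe against mutation
# during iteration by walking a copied item list.
from datetime import datetime

KEYS_TO_EXCLUDE = ["content", "pages", "tables", "paragraphs", "sections", "figures"]

def process_metadata_safe(metadata: dict) -> dict:
    for key, value in list(metadata.items()):
        if key in KEYS_TO_EXCLUDE:
            # Remove large extraction artifacts before storage.
            del metadata[key]
        elif isinstance(value, (datetime, list)):
            # Non-JSON-serializable values are stored as strings.
            metadata[key] = str(value)
    return metadata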
backend/open_webui/retrieval/web/azure.py (new file, 128 lines)
@@ -0,0 +1,128 @@
import logging
from typing import Optional
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])

"""
Azure AI Search integration for Open WebUI.
Documentation: https://learn.microsoft.com/en-us/python/api/overview/azure/search-documents-readme?view=azure-python

Required package: azure-search-documents
Install: pip install azure-search-documents
"""


def search_azure(
    api_key: str,
    endpoint: str,
    index_name: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
    """
    Search using Azure AI Search.

    Args:
        api_key: Azure Search API key (query key or admin key)
        endpoint: Azure Search service endpoint (e.g., https://myservice.search.windows.net)
        index_name: Name of the search index to query
        query: Search query string
        count: Number of results to return
        filter_list: Optional list of domains to filter results

    Returns:
        List of SearchResult objects with link, title, and snippet
    """
    try:
        from azure.core.credentials import AzureKeyCredential
        from azure.search.documents import SearchClient
    except ImportError:
        log.error(
            "azure-search-documents package is not installed. "
            "Install it with: pip install azure-search-documents"
        )
        raise ImportError(
            "azure-search-documents is required for Azure AI Search. "
            "Install it with: pip install azure-search-documents"
        )

    try:
        # Create search client with API key authentication
        credential = AzureKeyCredential(api_key)
        search_client = SearchClient(
            endpoint=endpoint, index_name=index_name, credential=credential
        )

        # Perform the search
        results = search_client.search(search_text=query, top=count)

        # Convert results to list and extract fields
        search_results = []
        for result in results:
            # Azure AI Search returns documents with custom schemas, so we
            # extract common fields that might represent URL, title, and content.
            result_dict = dict(result)

            # Try to find a URL field (common names)
            link = (
                result_dict.get("url")
                or result_dict.get("link")
                or result_dict.get("uri")
                or result_dict.get("metadata_storage_path")
                or ""
            )

            # Try to find a title field (common names)
            title = (
                result_dict.get("title")
                or result_dict.get("name")
                or result_dict.get("metadata_title")
                or result_dict.get("metadata_storage_name")
                or None
            )

            # Try to find a content/snippet field (common names)
            snippet = (
                result_dict.get("content")
                or result_dict.get("snippet")
                or result_dict.get("description")
                or result_dict.get("summary")
                or result_dict.get("text")
                or None
            )

            # Truncate the snippet if too long
            if snippet and len(snippet) > 500:
                snippet = snippet[:497] + "..."

            if link:  # Only add if we found a valid link
                search_results.append(
                    {
                        "link": link,
                        "title": title,
                        "snippet": snippet,
                    }
                )

        # Apply domain filtering if specified
        if filter_list:
            search_results = get_filtered_results(search_results, filter_list)

        # Convert to SearchResult objects
        return [
            SearchResult(
                link=result["link"],
                title=result.get("title"),
                snippet=result.get("snippet"),
            )
            for result in search_results
        ]

    except Exception as ex:
        log.error(f"Azure AI Search error: {ex}")
        raise ex
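A minimal usage sketch for the new Azure AI Search provider; the endpoint, index name, and key below are placeholders, not real values:

# Hypothetical usage of search_azure (all credentials and identifiers are placeholders).
from open_webui.retrieval.web.azure import search_azure

results = search_azure(
    api_key="<azure-query-key>",
    endpoint="https://myservice.search.windows.net",
    index_name="my-index",
    query="open webui release notes",
    count=5,
    filter_list=["learn.microsoft.com"],  # optional domain allow-list
)
for r in results:
    print(r.link, r.title)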
@@ -2,27 +2,42 @@ import logging
 from typing import Optional, List

 import requests
-from open_webui.retrieval.web.main import SearchResult, get_filtered_results

+from fastapi import Request
+
 from open_webui.env import SRC_LOG_LEVELS
+
+from open_webui.retrieval.web.main import SearchResult, get_filtered_results
+from open_webui.utils.headers import include_user_info_headers


 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["RAG"])


 def search_external(
+    request: Request,
     external_url: str,
     external_api_key: str,
     query: str,
     count: int,
     filter_list: Optional[List[str]] = None,
+    user=None,
 ) -> List[SearchResult]:
     try:
+        headers = {
+            "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
+            "Authorization": f"Bearer {external_api_key}",
+        }
+        headers = include_user_info_headers(headers, user)
+
+        chat_id = getattr(request.state, "chat_id", None)
+        if chat_id:
+            headers["X-OpenWebUI-Chat-Id"] = str(chat_id)
+
         response = requests.post(
             external_url,
-            headers={
-                "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
-                "Authorization": f"Bearer {external_api_key}",
-            },
+            headers=headers,
             json={
                 "query": query,
                 "count": count,
@@ -1,11 +1,10 @@
 import logging
 from typing import Optional, List
-from urllib.parse import urljoin

-import requests
 from open_webui.retrieval.web.main import SearchResult, get_filtered_results
 from open_webui.env import SRC_LOG_LEVELS


 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["RAG"])

@@ -18,27 +17,20 @@ def search_firecrawl(
     filter_list: Optional[List[str]] = None,
 ) -> List[SearchResult]:
     try:
-        firecrawl_search_url = urljoin(firecrawl_url, "/v1/search")
-        response = requests.post(
-            firecrawl_search_url,
-            headers={
-                "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
-                "Authorization": f"Bearer {firecrawl_api_key}",
-            },
-            json={
-                "query": query,
-                "limit": count,
-            },
+        from firecrawl import FirecrawlApp
+
+        firecrawl = FirecrawlApp(api_key=firecrawl_api_key, api_url=firecrawl_url)
+        response = firecrawl.search(
+            query=query, limit=count, ignore_invalid_urls=True, timeout=count * 3
         )
-        response.raise_for_status()
-        results = response.json().get("data", [])
+        results = response.web
         if filter_list:
             results = get_filtered_results(results, filter_list)
         results = [
             SearchResult(
-                link=result.get("url"),
-                title=result.get("title"),
-                snippet=result.get("description"),
+                link=result.url,
+                title=result.title,
+                snippet=result.description,
             )
             for result in results[:count]
         ]
@@ -15,6 +15,7 @@ def search_google_pse(
     query: str,
     count: int,
     filter_list: Optional[list[str]] = None,
+    referer: Optional[str] = None,
 ) -> list[SearchResult]:
     """Search using Google's Programmable Search Engine API and return the results as a list of SearchResult objects.
     Handles pagination for counts greater than 10.

@@ -30,7 +31,11 @@ def search_google_pse(
         list[SearchResult]: A list of SearchResult objects.
     """
     url = "https://www.googleapis.com/customsearch/v1"
+
+    headers = {"Content-Type": "application/json"}
+    if referer:
+        headers["Referer"] = referer
+
     all_results = []
     start_index = 1  # Google PSE start parameter is 1-based
@@ -5,18 +5,37 @@ from urllib.parse import urlparse

 from pydantic import BaseModel

+from open_webui.retrieval.web.utils import is_string_allowed, resolve_hostname
+

 def get_filtered_results(results, filter_list):
     if not filter_list:
         return results
+
     filtered_results = []
+
     for result in results:
         url = result.get("url") or result.get("link", "") or result.get("href", "")
         if not validators.url(url):
             continue
+
         domain = urlparse(url).netloc
-        if any(domain.endswith(filtered_domain) for filtered_domain in filter_list):
+        if not domain:
+            continue
+
+        hostnames = [domain]
+
+        try:
+            ipv4_addresses, ipv6_addresses = resolve_hostname(domain)
+            hostnames.extend(ipv4_addresses)
+            hostnames.extend(ipv6_addresses)
+        except Exception:
+            pass
+
+        if any(is_string_allowed(hostname, filter_list) for hostname in hostnames):
             filtered_results.append(result)
+            continue
+
     return filtered_results
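The reworked filter above checks not just the URL's hostname but every address it resolves to against the allow/block list. A rough, self-contained sketch of the per-URL decision:

# Rough behavioral sketch of the new get_filtered_results flow for one URL.
import socket
from urllib.parse import urlparse

def candidate_hostnames(url: str) -> list[str]:
    domain = urlparse(url).netloc
    if not domain:
        return []
    hostnames = [domain]
    try:
        # Mirrors resolve_hostname from the diff: collect IPv4/IPv6 addresses.
        for info in socket.getaddrinfo(domain, None):
            hostnames.append(info[4][0])
    except Exception:
        pass  # unresolvable hosts are judged on the domain string alone
    return hostnames

# A result URL passes if any of its hostnames (name or resolved IP) is allowed:
# any(is_string_allowed(h, filter_list) for h in candidate_hostnames(url))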
backend/open_webui/retrieval/web/ollama.py (new file, 51 lines)
@@ -0,0 +1,51 @@
import logging
from dataclasses import dataclass
from typing import Optional

import requests
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.web.main import SearchResult

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_ollama_cloud(
    url: str,
    api_key: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
    """Search using the Ollama Search API and return the results as a list of SearchResult objects.

    Args:
        api_key (str): An Ollama Search API key
        query (str): The query to search for
        count (int): Number of results to return
        filter_list (Optional[list[str]]): List of domains to filter results by
    """
    log.info(f"Searching with Ollama for query: {query}")

    headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    payload = {"query": query, "max_results": count}

    try:
        response = requests.post(f"{url}/api/web_search", headers=headers, json=payload)
        response.raise_for_status()
        data = response.json()

        results = data.get("results", [])
        log.info(f"Found {len(results)} results")

        return [
            SearchResult(
                link=result.get("url", ""),
                title=result.get("title", ""),
                snippet=result.get("content", ""),
            )
            for result in results
        ]
    except Exception as e:
        log.error(f"Error searching Ollama: {e}")
        return []
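A quick usage sketch for the new Ollama web-search provider; the base URL and key are placeholders (the endpoint path /api/web_search comes from the diff):

# Hypothetical call into the new Ollama search integration.
from open_webui.retrieval.web.ollama import search_ollama_cloud

results = search_ollama_cloud(
    url="https://ollama.com",     # placeholder base URL
    api_key="<ollama-api-key>",   # placeholder key
    query="vector databases",
    count=3,
)
for r in results:
    print(r.link, "-", r.title)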
backend/open_webui/retrieval/web/perplexity_search.py (new file, 76 lines)
@@ -0,0 +1,76 @@
import logging
from typing import Optional, Literal
import requests

from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.utils.headers import include_user_info_headers
from open_webui.env import SRC_LOG_LEVELS


log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_perplexity_search(
    api_key: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
    api_url: str = "https://api.perplexity.ai/search",
    user=None,
) -> list[SearchResult]:
    """Search using the Perplexity API and return the results as a list of SearchResult objects.

    Args:
        api_key (str): A Perplexity API key
        query (str): The query to search for
        count (int): Maximum number of results to return
        filter_list (Optional[list[str]]): List of domains to filter results
        api_url (str): Custom API URL (defaults to https://api.perplexity.ai/search)
        user: Optional user object for forwarding user info headers
    """

    # Handle PersistentConfig object
    if hasattr(api_key, "__str__"):
        api_key = str(api_key)

    if hasattr(api_url, "__str__"):
        api_url = str(api_url)

    try:
        url = api_url

        # Create the payload for the API call
        payload = {
            "query": query,
            "max_results": count,
        }

        headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }

        # Forward user info headers if a user is provided
        if user is not None:
            headers = include_user_info_headers(headers, user)

        # Make the API request
        response = requests.request("POST", url, json=payload, headers=headers)
        # Parse the JSON response
        json_response = response.json()

        # Extract the results from the response
        results = json_response.get("results", [])

        return [
            SearchResult(
                link=result["url"], title=result["title"], snippet=result["snippet"]
            )
            for result in results
        ]

    except Exception as e:
        log.error(f"Error searching with Perplexity Search API: {e}")
        return []
@@ -4,7 +4,6 @@ import socket
 import ssl
 import urllib.parse
-import urllib.request
 from collections import defaultdict
 from datetime import datetime, time, timedelta
 from typing import (
     Any,

@@ -17,13 +16,15 @@ from typing import (
     Union,
     Literal,
 )
+
+from fastapi.concurrency import run_in_threadpool
 import aiohttp
 import certifi
 import validators
 from langchain_community.document_loaders import PlaywrightURLLoader, WebBaseLoader
-from langchain_community.document_loaders.firecrawl import FireCrawlLoader
 from langchain_community.document_loaders.base import BaseLoader
 from langchain_core.documents import Document

 from open_webui.retrieval.loaders.tavily import TavilyLoader
 from open_webui.retrieval.loaders.external_web import ExternalWebLoader
 from open_webui.constants import ERROR_MESSAGES
@@ -38,17 +39,79 @@ from open_webui.config import (
     TAVILY_EXTRACT_DEPTH,
     EXTERNAL_WEB_LOADER_URL,
     EXTERNAL_WEB_LOADER_API_KEY,
+    WEB_FETCH_FILTER_LIST,
 )
-from open_webui.env import SRC_LOG_LEVELS, AIOHTTP_CLIENT_SESSION_SSL
+from open_webui.env import SRC_LOG_LEVELS


 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["RAG"])


+def resolve_hostname(hostname):
+    # Get address information
+    addr_info = socket.getaddrinfo(hostname, None)
+
+    # Extract IP addresses from address information
+    ipv4_addresses = [info[4][0] for info in addr_info if info[0] == socket.AF_INET]
+    ipv6_addresses = [info[4][0] for info in addr_info if info[0] == socket.AF_INET6]
+
+    return ipv4_addresses, ipv6_addresses
+
+
+def get_allow_block_lists(filter_list):
+    allow_list = []
+    block_list = []
+
+    if filter_list:
+        for d in filter_list:
+            if d.startswith("!"):
+                # Domains starting with "!" → blocked
+                block_list.append(d[1:])
+            else:
+                # Domains without "!" → allowed
+                allow_list.append(d)
+
+    return allow_list, block_list
+
+
+def is_string_allowed(string: str, filter_list: Optional[list[str]] = None) -> bool:
+    if not filter_list:
+        return True
+
+    allow_list, block_list = get_allow_block_lists(filter_list)
+    # If the allow list is non-empty, require the domain to match one of its entries
+    if allow_list:
+        if not any(string.endswith(allowed) for allowed in allow_list):
+            return False
+
+    # The block list always removes matches
+    if any(string.endswith(blocked) for blocked in block_list):
+        return False
+
+    return True
+
+
 def validate_url(url: Union[str, Sequence[str]]):
     if isinstance(url, str):
         if isinstance(validators.url(url), validators.ValidationError):
             raise ValueError(ERROR_MESSAGES.INVALID_URL)

         parsed_url = urllib.parse.urlparse(url)

+        # Protocol validation - only allow http/https
+        if parsed_url.scheme not in ["http", "https"]:
+            log.warning(
+                f"Blocked non-HTTP(S) protocol: {parsed_url.scheme} in URL: {url}"
+            )
+            raise ValueError(ERROR_MESSAGES.INVALID_URL)
+
+        # Blocklist check using the unified filtering logic
+        if WEB_FETCH_FILTER_LIST:
+            if not is_string_allowed(url, WEB_FETCH_FILTER_LIST):
+                log.warning(f"URL blocked by filter list: {url}")
+                raise ValueError(ERROR_MESSAGES.INVALID_URL)
+
         if not ENABLE_RAG_LOCAL_WEB_FETCH:
             # Local web fetch is disabled, filter out any URLs that resolve to private IP addresses
             parsed_url = urllib.parse.urlparse(url)
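The "!" convention above gives a single list both allow and block semantics: bare entries form an allow list (if any exist, a host must match one), and "!"-prefixed entries are always blocked. A few concrete evaluations, restating the hunk's logic in a self-contained sketch:

# Self-contained check of the "!" filter semantics from the hunk above.
def is_string_allowed(string, filter_list=None):
    if not filter_list:
        return True
    allow = [d for d in filter_list if not d.startswith("!")]
    block = [d[1:] for d in filter_list if d.startswith("!")]
    if allow and not any(string.endswith(a) for a in allow):
        return False
    return not any(string.endswith(b) for b in block)

filter_list = ["example.com", "!private.example.com"]
assert is_string_allowed("docs.example.com", filter_list)         # allowed suffix
assert not is_string_allowed("private.example.com", filter_list)  # explicit block
assert not is_string_allowed("other.org", filter_list)            # not on allow list
assert is_string_allowed("anything.net", None)                    # no filter at all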
@@ -75,22 +138,12 @@ def safe_validate_urls(url: Sequence[str]) -> Sequence[str]:
         try:
             if validate_url(u):
                 valid_urls.append(u)
-        except ValueError:
+        except Exception as e:
+            log.debug(f"Invalid URL {u}: {str(e)}")
             continue
     return valid_urls


-def resolve_hostname(hostname):
-    # Get address information
-    addr_info = socket.getaddrinfo(hostname, None)
-
-    # Extract IP addresses from address information
-    ipv4_addresses = [info[4][0] for info in addr_info if info[0] == socket.AF_INET]
-    ipv6_addresses = [info[4][0] for info in addr_info if info[0] == socket.AF_INET6]
-
-    return ipv4_addresses, ipv6_addresses
-
-
 def extract_metadata(soup, url):
     metadata = {"source": url}
     if title := soup.find("title"):
@@ -141,13 +194,13 @@ class RateLimitMixin:


 class URLProcessingMixin:
-    def _verify_ssl_cert(self, url: str) -> bool:
+    async def _verify_ssl_cert(self, url: str) -> bool:
         """Verify SSL certificate for a URL."""
-        return verify_ssl_cert(url)
+        return await run_in_threadpool(verify_ssl_cert, url)

     async def _safe_process_url(self, url: str) -> bool:
         """Perform safety checks before processing a URL."""
-        if self.verify_ssl and not self._verify_ssl_cert(url):
+        if self.verify_ssl and not await self._verify_ssl_cert(url):
             raise ValueError(f"SSL certificate verification failed for {url}")
         await self._wait_for_rate_limit()
         return True
@@ -188,13 +241,12 @@ class SafeFireCrawlLoader(BaseLoader, RateLimitMixin, URLProcessingMixin):
             (uses FIRE_CRAWL_API_KEY environment variable if not provided).
         api_url: Base URL for FireCrawl API. Defaults to official API endpoint.
         mode: Operation mode selection:
-            - 'crawl': Website crawling mode (default)
-            - 'scrape': Direct page scraping
+            - 'crawl': Website crawling mode
+            - 'scrape': Direct page scraping (default)
             - 'map': Site map generation
         proxy: Proxy override settings for the FireCrawl API.
         params: The parameters to pass to the Firecrawl API.
             Examples include crawlerOptions.
-            For more details, visit: https://github.com/mendableai/firecrawl-py
+            For more details, visit: https://docs.firecrawl.dev/sdks/python#batch-scrape
     """
     proxy_server = proxy.get("server") if proxy else None
     if trust_env and not proxy_server:
@@ -214,50 +266,88 @@ class SafeFireCrawlLoader(BaseLoader, RateLimitMixin, URLProcessingMixin):
         self.api_key = api_key
         self.api_url = api_url
         self.mode = mode
-        self.params = params
+        self.params = params or {}

     def lazy_load(self) -> Iterator[Document]:
-        """Load documents concurrently using FireCrawl."""
-        for url in self.web_paths:
-            try:
-                self._safe_process_url_sync(url)
-                loader = FireCrawlLoader(
-                    url=url,
-                    api_key=self.api_key,
-                    api_url=self.api_url,
-                    mode=self.mode,
-                    params=self.params,
-                )
-                for document in loader.lazy_load():
-                    if not document.metadata.get("source"):
-                        document.metadata["source"] = document.metadata.get("sourceURL")
-                    yield document
-            except Exception as e:
-                if self.continue_on_failure:
-                    log.exception(f"Error loading {url}: {e}")
-                    continue
-                raise e
+        """Load documents using FireCrawl batch_scrape."""
+        log.debug(
+            "Starting FireCrawl batch scrape for %d URLs, mode: %s, params: %s",
+            len(self.web_paths),
+            self.mode,
+            self.params,
+        )
+        try:
+            from firecrawl import FirecrawlApp
+
+            firecrawl = FirecrawlApp(api_key=self.api_key, api_url=self.api_url)
+            result = firecrawl.batch_scrape(
+                self.web_paths,
+                formats=["markdown"],
+                skip_tls_verification=not self.verify_ssl,
+                ignore_invalid_urls=True,
+                remove_base64_images=True,
+                max_age=300000,  # 5 minutes https://docs.firecrawl.dev/features/fast-scraping#common-maxage-values
+                wait_timeout=len(self.web_paths) * 3,
+                **self.params,
+            )
+
+            if result.status != "completed":
+                raise RuntimeError(
+                    f"FireCrawl batch scrape did not complete successfully. result: {result}"
+                )
+
+            for data in result.data:
+                metadata = data.metadata or {}
+                yield Document(
+                    page_content=data.markdown or "",
+                    metadata={"source": metadata.url or metadata.source_url or ""},
+                )
+
+        except Exception as e:
+            if self.continue_on_failure:
+                log.exception(f"Error extracting content from URLs: {e}")
+            else:
+                raise e

     async def alazy_load(self):
         """Async version of lazy_load."""
-        for url in self.web_paths:
-            try:
-                await self._safe_process_url(url)
-                loader = FireCrawlLoader(
-                    url=url,
-                    api_key=self.api_key,
-                    api_url=self.api_url,
-                    mode=self.mode,
-                    params=self.params,
-                )
-                async for document in loader.alazy_load():
-                    if not document.metadata.get("source"):
-                        document.metadata["source"] = document.metadata.get("sourceURL")
-                    yield document
-            except Exception as e:
-                if self.continue_on_failure:
-                    log.exception(f"Error loading {url}: {e}")
-                    continue
-                raise e
+        log.debug(
+            "Starting FireCrawl batch scrape for %d URLs, mode: %s, params: %s",
+            len(self.web_paths),
+            self.mode,
+            self.params,
+        )
+        try:
+            from firecrawl import FirecrawlApp
+
+            firecrawl = FirecrawlApp(api_key=self.api_key, api_url=self.api_url)
+            result = firecrawl.batch_scrape(
+                self.web_paths,
+                formats=["markdown"],
+                skip_tls_verification=not self.verify_ssl,
+                ignore_invalid_urls=True,
+                remove_base64_images=True,
+                max_age=300000,  # 5 minutes https://docs.firecrawl.dev/features/fast-scraping#common-maxage-values
+                wait_timeout=len(self.web_paths) * 3,
+                **self.params,
+            )
+
+            if result.status != "completed":
+                raise RuntimeError(
+                    f"FireCrawl batch scrape did not complete successfully. result: {result}"
+                )
+
+            for data in result.data:
+                metadata = data.metadata or {}
+                yield Document(
+                    page_content=data.markdown or "",
+                    metadata={"source": metadata.url or metadata.source_url or ""},
+                )
+
+        except Exception as e:
+            if self.continue_on_failure:
+                log.exception(f"Error extracting content from URLs: {e}")
+            else:
+                raise e
@@ -517,6 +607,7 @@ class SafeWebBaseLoader(WebBaseLoader):
             async with session.get(
                 url,
                 **(self.requests_kwargs | kwargs),
+                allow_redirects=False,
             ) as response:
                 if self.raise_for_status:
                     response.raise_for_status()
@@ -602,6 +693,10 @@ def get_web_loader(
     # Check if the URLs are valid
     safe_urls = safe_validate_urls([urls] if isinstance(urls, str) else urls)

+    if not safe_urls:
+        log.warning(f"All provided URLs were blocked or invalid: {urls}")
+        raise ValueError(ERROR_MESSAGES.INVALID_URL)
+
     web_loader_args = {
         "web_paths": safe_urls,
         "verify_ssl": verify_ssl,
@@ -3,8 +3,9 @@ import json
 import logging
 import os
 import uuid
+import html
+import base64
 from functools import lru_cache
 from pathlib import Path
 from pydub import AudioSegment
 from pydub.silence import split_on_silence
 from concurrent.futures import ThreadPoolExecutor
@@ -15,7 +16,7 @@ import aiohttp
 import aiofiles
 import requests
 import mimetypes
-from urllib.parse import quote
+from urllib.parse import urljoin, quote

 from fastapi import (
     Depends,
@@ -39,13 +40,14 @@ from open_webui.config import (
     WHISPER_MODEL_DIR,
     CACHE_DIR,
     WHISPER_LANGUAGE,
+    ELEVENLABS_API_BASE_URL,
 )

 from open_webui.constants import ERROR_MESSAGES
 from open_webui.env import (
-    ENV,
     AIOHTTP_CLIENT_SESSION_SSL,
     AIOHTTP_CLIENT_TIMEOUT,
+    ENV,
     SRC_LOG_LEVELS,
     DEVICE_TYPE,
     ENABLE_FORWARD_USER_INFO_HEADERS,
@@ -154,6 +156,7 @@ def set_faster_whisper_model(model: str, auto_update: bool = False):
 class TTSConfigForm(BaseModel):
     OPENAI_API_BASE_URL: str
     OPENAI_API_KEY: str
+    OPENAI_PARAMS: Optional[dict] = None
     API_KEY: str
     ENGINE: str
     MODEL: str
@@ -177,6 +180,9 @@ class STTConfigForm(BaseModel):
     AZURE_LOCALES: str
     AZURE_BASE_URL: str
     AZURE_MAX_SPEAKERS: str
+    MISTRAL_API_KEY: str
+    MISTRAL_API_BASE_URL: str
+    MISTRAL_USE_CHAT_COMPLETIONS: bool


 class AudioConfigUpdateForm(BaseModel):
@@ -190,6 +196,7 @@ async def get_audio_config(request: Request, user=Depends(get_admin_user)):
         "tts": {
             "OPENAI_API_BASE_URL": request.app.state.config.TTS_OPENAI_API_BASE_URL,
             "OPENAI_API_KEY": request.app.state.config.TTS_OPENAI_API_KEY,
+            "OPENAI_PARAMS": request.app.state.config.TTS_OPENAI_PARAMS,
             "API_KEY": request.app.state.config.TTS_API_KEY,
             "ENGINE": request.app.state.config.TTS_ENGINE,
             "MODEL": request.app.state.config.TTS_MODEL,
@@ -212,6 +219,9 @@ async def get_audio_config(request: Request, user=Depends(get_admin_user)):
             "AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES,
             "AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL,
             "AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS,
+            "MISTRAL_API_KEY": request.app.state.config.AUDIO_STT_MISTRAL_API_KEY,
+            "MISTRAL_API_BASE_URL": request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL,
+            "MISTRAL_USE_CHAT_COMPLETIONS": request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
         },
     }
@@ -222,6 +232,7 @@ async def update_audio_config(
 ):
     request.app.state.config.TTS_OPENAI_API_BASE_URL = form_data.tts.OPENAI_API_BASE_URL
     request.app.state.config.TTS_OPENAI_API_KEY = form_data.tts.OPENAI_API_KEY
+    request.app.state.config.TTS_OPENAI_PARAMS = form_data.tts.OPENAI_PARAMS
     request.app.state.config.TTS_API_KEY = form_data.tts.API_KEY
     request.app.state.config.TTS_ENGINE = form_data.tts.ENGINE
     request.app.state.config.TTS_MODEL = form_data.tts.MODEL
@@ -252,6 +263,13 @@ async def update_audio_config(
     request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = (
         form_data.stt.AZURE_MAX_SPEAKERS
     )
+    request.app.state.config.AUDIO_STT_MISTRAL_API_KEY = form_data.stt.MISTRAL_API_KEY
+    request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL = (
+        form_data.stt.MISTRAL_API_BASE_URL
+    )
+    request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = (
+        form_data.stt.MISTRAL_USE_CHAT_COMPLETIONS
+    )

     if request.app.state.config.STT_ENGINE == "":
         request.app.state.faster_whisper_model = set_faster_whisper_model(
@@ -262,12 +280,13 @@ async def update_audio_config(

     return {
         "tts": {
-            "OPENAI_API_BASE_URL": request.app.state.config.TTS_OPENAI_API_BASE_URL,
-            "OPENAI_API_KEY": request.app.state.config.TTS_OPENAI_API_KEY,
-            "API_KEY": request.app.state.config.TTS_API_KEY,
             "ENGINE": request.app.state.config.TTS_ENGINE,
             "MODEL": request.app.state.config.TTS_MODEL,
             "VOICE": request.app.state.config.TTS_VOICE,
+            "OPENAI_API_BASE_URL": request.app.state.config.TTS_OPENAI_API_BASE_URL,
+            "OPENAI_API_KEY": request.app.state.config.TTS_OPENAI_API_KEY,
+            "OPENAI_PARAMS": request.app.state.config.TTS_OPENAI_PARAMS,
+            "API_KEY": request.app.state.config.TTS_API_KEY,
             "SPLIT_ON": request.app.state.config.TTS_SPLIT_ON,
             "AZURE_SPEECH_REGION": request.app.state.config.TTS_AZURE_SPEECH_REGION,
             "AZURE_SPEECH_BASE_URL": request.app.state.config.TTS_AZURE_SPEECH_BASE_URL,
@@ -286,6 +305,9 @@ async def update_audio_config(
             "AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES,
             "AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL,
             "AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS,
+            "MISTRAL_API_KEY": request.app.state.config.AUDIO_STT_MISTRAL_API_KEY,
+            "MISTRAL_API_BASE_URL": request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL,
+            "MISTRAL_USE_CHAT_COMPLETIONS": request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
         },
     }
@@ -337,6 +359,11 @@ async def speech(request: Request, user=Depends(get_verified_user)):
         async with aiohttp.ClientSession(
             timeout=timeout, trust_env=True
         ) as session:
+            payload = {
+                **payload,
+                **(request.app.state.config.TTS_OPENAI_PARAMS or {}),
+            }
+
             r = await session.post(
                 url=f"{request.app.state.config.TTS_OPENAI_API_BASE_URL}/audio/speech",
                 json=payload,
@@ -404,7 +431,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
             timeout=timeout, trust_env=True
         ) as session:
             async with session.post(
-                f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
+                f"{ELEVENLABS_API_BASE_URL}/v1/text-to-speech/{voice_id}",
                 json={
                     "text": payload["input"],
                     "model_id": request.app.state.config.TTS_MODEL,
@@ -459,7 +486,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):

         try:
             data = f"""<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="{locale}">
-                <voice name="{language}">{payload["input"]}</voice>
+                <voice name="{language}">{html.escape(payload["input"])}</voice>
             </speak>"""
             timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT)
             async with aiohttp.ClientSession(
@@ -551,7 +578,7 @@ def transcription_handler(request, file_path, metadata):
     metadata = metadata or {}

     languages = [
-        metadata.get("language", None) if WHISPER_LANGUAGE == "" else WHISPER_LANGUAGE,
+        metadata.get("language", None) if not WHISPER_LANGUAGE else WHISPER_LANGUAGE,
         None,  # Always fallback to None in case transcription fails
     ]
@@ -819,6 +846,186 @@ def transcription_handler(request, file_path, metadata):
                 detail=detail if detail else "Open WebUI: Server Connection Error",
             )

+    elif request.app.state.config.STT_ENGINE == "mistral":
+        # Check file exists
+        if not os.path.exists(file_path):
+            raise HTTPException(status_code=400, detail="Audio file not found")
+
+        # Check file size
+        file_size = os.path.getsize(file_path)
+        if file_size > MAX_FILE_SIZE:
+            raise HTTPException(
+                status_code=400,
+                detail=f"File size exceeds limit of {MAX_FILE_SIZE_MB}MB",
+            )
+
+        api_key = request.app.state.config.AUDIO_STT_MISTRAL_API_KEY
+        api_base_url = (
+            request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL
+            or "https://api.mistral.ai/v1"
+        )
+        use_chat_completions = (
+            request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS
+        )
+
+        if not api_key:
+            raise HTTPException(
+                status_code=400,
+                detail="Mistral API key is required for Mistral STT",
+            )
+
+        r = None
+        try:
+            # Use voxtral-mini-latest as the default model for transcription
+            model = request.app.state.config.STT_MODEL or "voxtral-mini-latest"
+
+            log.info(
+                f"Mistral STT - model: {model}, "
+                f"method: {'chat_completions' if use_chat_completions else 'transcriptions'}"
+            )
+
+            if use_chat_completions:
+                # Use the chat completions API with audio input
+                # This method requires mp3 or wav format
+                audio_file_to_use = file_path
+
+                if is_audio_conversion_required(file_path):
+                    log.debug("Converting audio to mp3 for chat completions API")
+                    converted_path = convert_audio_to_mp3(file_path)
+                    if converted_path:
+                        audio_file_to_use = converted_path
+                    else:
+                        log.error("Audio conversion failed")
+                        raise HTTPException(
+                            status_code=500,
+                            detail="Audio conversion failed. Chat completions API requires mp3 or wav format.",
+                        )
+
+                # Read and encode the audio file as base64
+                with open(audio_file_to_use, "rb") as audio_file:
+                    audio_base64 = base64.b64encode(audio_file.read()).decode("utf-8")
+
+                # Prepare the chat completions request
+                url = f"{api_base_url}/chat/completions"
+
+                # Add a language instruction if specified
+                language = metadata.get("language", None) if metadata else None
+                if language:
+                    text_instruction = f"Transcribe this audio exactly as spoken in {language}. Do not translate it."
+                else:
+                    text_instruction = "Transcribe this audio exactly as spoken in its original language. Do not translate it to another language."
+
+                payload = {
+                    "model": model,
+                    "messages": [
+                        {
+                            "role": "user",
+                            "content": [
+                                {
+                                    "type": "input_audio",
+                                    "input_audio": audio_base64,
+                                },
+                                {"type": "text", "text": text_instruction},
+                            ],
+                        }
+                    ],
+                }
+
+                r = requests.post(
+                    url=url,
+                    json=payload,
+                    headers={
+                        "Authorization": f"Bearer {api_key}",
+                        "Content-Type": "application/json",
+                    },
+                )
+
+                r.raise_for_status()
+                response = r.json()
+
+                # Extract the transcript from the chat completion response
+                transcript = (
+                    response.get("choices", [{}])[0]
+                    .get("message", {})
+                    .get("content", "")
+                    .strip()
+                )
+                if not transcript:
+                    raise ValueError("Empty transcript in response")
+
+                data = {"text": transcript}
+
+            else:
+                # Use the dedicated transcriptions API
+                url = f"{api_base_url}/audio/transcriptions"
+
+                # Determine the MIME type
+                mime_type, _ = mimetypes.guess_type(file_path)
+                if not mime_type:
+                    mime_type = "audio/webm"
+
+                # Use a context manager to ensure the file is properly closed
+                with open(file_path, "rb") as audio_file:
+                    files = {"file": (filename, audio_file, mime_type)}
+                    data_form = {"model": model}
+
+                    # Add the language if specified in metadata
+                    language = metadata.get("language", None) if metadata else None
+                    if language:
+                        data_form["language"] = language
+
+                    r = requests.post(
+                        url=url,
+                        files=files,
+                        data=data_form,
+                        headers={
+                            "Authorization": f"Bearer {api_key}",
+                        },
+                    )
+
+                r.raise_for_status()
+                response = r.json()
+
+                # Extract the transcript from the response
+                transcript = response.get("text", "").strip()
+                if not transcript:
+                    raise ValueError("Empty transcript in response")
+
+                data = {"text": transcript}
+
+            # Save the transcript to a json file (consistent with other providers)
+            transcript_file = f"{file_dir}/{id}.json"
+            with open(transcript_file, "w") as f:
+                json.dump(data, f)
+
+            log.debug(data)
+            return data
+
+        except ValueError as e:
+            log.exception("Error parsing Mistral response")
+            raise HTTPException(
+                status_code=500,
+                detail=f"Failed to parse Mistral response: {str(e)}",
+            )
+        except requests.exceptions.RequestException as e:
+            log.exception(e)
+            detail = None
+
+            try:
+                if r is not None and r.status_code != 200:
+                    res = r.json()
+                    if "error" in res:
+                        detail = f"External: {res['error'].get('message', '')}"
+                    else:
+                        detail = f"External: {r.text}"
+            except Exception:
+                detail = f"External: {e}"
+
+            raise HTTPException(
+                status_code=getattr(r, "status_code", 500) if r else 500,
+                detail=detail if detail else "Open WebUI: Server Connection Error",
+            )
+
+
 def transcribe(request: Request, file_path: str, metadata: Optional[dict] = None):
     log.info(f"transcribe: {file_path} {metadata}")
@@ -1028,7 +1235,7 @@ def get_available_models(request: Request) -> list[dict]:
     elif request.app.state.config.TTS_ENGINE == "elevenlabs":
         try:
             response = requests.get(
-                "https://api.elevenlabs.io/v1/models",
+                f"{ELEVENLABS_API_BASE_URL}/v1/models",
                 headers={
                     "xi-api-key": request.app.state.config.TTS_API_KEY,
                     "Content-Type": "application/json",
@@ -1132,7 +1339,7 @@ def get_elevenlabs_voices(api_key: str) -> dict:
     try:
         # TODO: Add retries
         response = requests.get(
-            "https://api.elevenlabs.io/v1/voices",
+            f"{ELEVENLABS_API_BASE_URL}/v1/voices",
             headers={
                 "xi-api-key": api_key,
                 "Content-Type": "application/json",
@@ -19,6 +19,7 @@ from open_webui.models.auths import (
 )
 from open_webui.models.users import Users, UpdateProfileForm
 from open_webui.models.groups import Groups
+from open_webui.models.oauth_sessions import OAuthSessions

 from open_webui.constants import ERROR_MESSAGES, WEBHOOK_MESSAGES
 from open_webui.env import (
@@ -34,12 +35,19 @@ from open_webui.env import (
 )
 from fastapi import APIRouter, Depends, HTTPException, Request, status
 from fastapi.responses import RedirectResponse, Response, JSONResponse
-from open_webui.config import OPENID_PROVIDER_URL, ENABLE_OAUTH_SIGNUP, ENABLE_LDAP
+from open_webui.config import (
+    OPENID_PROVIDER_URL,
+    ENABLE_OAUTH_SIGNUP,
+    ENABLE_LDAP,
+    ENABLE_PASSWORD_AUTH,
+)
 from pydantic import BaseModel

 from open_webui.utils.misc import parse_duration, validate_email_format
 from open_webui.utils.auth import (
     verify_password,
     decode_token,
+    invalidate_token,
     create_api_key,
     create_token,
     get_admin_user,
@@ -49,7 +57,7 @@ from open_webui.utils.auth import (
     get_http_authorization_cred,
 )
 from open_webui.utils.webhook import post_webhook
-from open_webui.utils.access_control import get_permissions
+from open_webui.utils.access_control import get_permissions, has_permission

 from typing import Optional, List
@@ -168,7 +176,9 @@ async def update_password(
     if WEBUI_AUTH_TRUSTED_EMAIL_HEADER:
         raise HTTPException(400, detail=ERROR_MESSAGES.ACTION_PROHIBITED)
     if session_user:
-        user = Auths.authenticate_user(session_user.email, form_data.password)
+        user = Auths.authenticate_user(
+            session_user.email, lambda pw: verify_password(form_data.password, pw)
+        )

         if user:
             hashed = get_password_hash(form_data.new_password)
@@ -184,7 +194,17 @@ async def update_password(
 ############################
 @router.post("/ldap", response_model=SessionUserResponse)
 async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
-    ENABLE_LDAP = request.app.state.config.ENABLE_LDAP
+    # Security checks FIRST - before loading any config
+    if not request.app.state.config.ENABLE_LDAP:
+        raise HTTPException(400, detail="LDAP authentication is not enabled")
+
+    if not ENABLE_PASSWORD_AUTH:
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN,
+            detail=ERROR_MESSAGES.ACTION_PROHIBITED,
+        )
+
+    # NOW load the LDAP config variables
     LDAP_SERVER_LABEL = request.app.state.config.LDAP_SERVER_LABEL
     LDAP_SERVER_HOST = request.app.state.config.LDAP_SERVER_HOST
     LDAP_SERVER_PORT = request.app.state.config.LDAP_SERVER_PORT
@@ -205,9 +225,6 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
         else "ALL"
     )

-    if not ENABLE_LDAP:
-        raise HTTPException(400, detail="LDAP authentication is not enabled")
-
     try:
         tls = Tls(
             validate=LDAP_VALIDATE_CERT,
@@ -462,6 +479,12 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):

 @router.post("/signin", response_model=SessionUserResponse)
 async def signin(request: Request, response: Response, form_data: SigninForm):
+    if not ENABLE_PASSWORD_AUTH:
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN,
+            detail=ERROR_MESSAGES.ACTION_PROHIBITED,
+        )
+
     if WEBUI_AUTH_TRUSTED_EMAIL_HEADER:
         if WEBUI_AUTH_TRUSTED_EMAIL_HEADER not in request.headers:
             raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_TRUSTED_HEADER)
@@ -494,7 +517,9 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
         admin_password = "admin"

         if Users.get_user_by_email(admin_email.lower()):
-            user = Auths.authenticate_user(admin_email.lower(), admin_password)
+            user = Auths.authenticate_user(
+                admin_email.lower(), lambda pw: verify_password(admin_password, pw)
+            )
         else:
             if Users.has_users():
                 raise HTTPException(400, detail=ERROR_MESSAGES.EXISTING_USERS)
@ -505,9 +530,22 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
|
|||
SignupForm(email=admin_email, password=admin_password, name="User"),
|
||||
)
|
||||
|
||||
user = Auths.authenticate_user(admin_email.lower(), admin_password)
|
||||
user = Auths.authenticate_user(
|
||||
admin_email.lower(), lambda pw: verify_password(admin_password, pw)
|
||||
)
|
||||
else:
|
||||
user = Auths.authenticate_user(form_data.email.lower(), form_data.password)
|
||||
password_bytes = form_data.password.encode("utf-8")
|
||||
if len(password_bytes) > 72:
|
||||
# TODO: Implement other hashing algorithms that support longer passwords
|
||||
log.info("Password too long, truncating to 72 bytes for bcrypt")
|
||||
password_bytes = password_bytes[:72]
|
||||
|
||||
# decode safely — ignore incomplete UTF-8 sequences
|
||||
form_data.password = password_bytes.decode("utf-8", errors="ignore")
|
||||
|
||||
user = Auths.authenticate_user(
|
||||
form_data.email.lower(), lambda pw: verify_password(form_data.password, pw)
|
||||
)
|
||||
|
||||
if user:
|
||||
|
||||
|
|
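bcrypt only considers the first 72 bytes of its input, which is why the signin hunk truncates at a byte boundary rather than a character count. A standalone sketch of the same truncation:

def truncate_for_bcrypt(password: str) -> str:
    data = password.encode("utf-8")
    if len(data) > 72:
        # cut at 72 bytes, then drop any multi-byte character severed at the boundary
        data = data[:72]
    return data.decode("utf-8", errors="ignore")

# 50 two-byte characters encode to 100 bytes; the result stays within bcrypt's limit
assert len(truncate_for_bcrypt("ü" * 50).encode("utf-8")) <= 72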
@@ -674,21 +712,44 @@ async def signup(request: Request, response: Response, form_data: SignupForm):

@router.get("/signout")
async def signout(request: Request, response: Response):

    # get auth token from headers or cookies
    token = None
    auth_header = request.headers.get("Authorization")
    if auth_header:
        auth_cred = get_http_authorization_cred(auth_header)
        token = auth_cred.credentials
    else:
        token = request.cookies.get("token")

    if token:
        await invalidate_token(request, token)

    response.delete_cookie("token")
    response.delete_cookie("oui-session")
    response.delete_cookie("oauth_id_token")

    if ENABLE_OAUTH_SIGNUP.value:
        oauth_id_token = request.cookies.get("oauth_id_token")
        if oauth_id_token and OPENID_PROVIDER_URL.value:
        oauth_session_id = request.cookies.get("oauth_session_id")
        if oauth_session_id:
            response.delete_cookie("oauth_session_id")

            session = OAuthSessions.get_session_by_id(oauth_session_id)
            oauth_server_metadata_url = (
                request.app.state.oauth_manager.get_server_metadata_url(session.provider)
                if session
                else None
            ) or OPENID_PROVIDER_URL.value

            if session and oauth_server_metadata_url:
                oauth_id_token = session.token.get("id_token")
                try:
                    async with ClientSession(trust_env=True) as session:
                        async with session.get(OPENID_PROVIDER_URL.value) as resp:
                            if resp.status == 200:
                                openid_data = await resp.json()
                        async with session.get(oauth_server_metadata_url) as r:
                            if r.status == 200:
                                openid_data = await r.json()
                                logout_url = openid_data.get("end_session_endpoint")
                                if logout_url:
                                    response.delete_cookie("oauth_id_token")

                                if logout_url:
                                    return JSONResponse(
                                        status_code=200,
                                        content={

@@ -703,15 +764,14 @@ async def signout(request: Request, response: Response):

                                        headers=response.headers,
                                    )
                            else:
                                raise HTTPException(
                                    status_code=resp.status,
                                    detail="Failed to fetch OpenID configuration",
                                )
                            raise Exception("Failed to fetch OpenID configuration")

                except Exception as e:
                    log.error(f"OpenID signout error: {str(e)}")
                    raise HTTPException(
                        status_code=500,
                        detail="Failed to sign out from the OpenID provider.",
                        headers=response.headers,
                    )

    if WEBUI_AUTH_SIGNOUT_REDIRECT_URL:
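The signout flow above is RP-initiated logout: the provider's discovery document is fetched and its optional end_session_endpoint is used as the logout URL. A minimal aiohttp sketch of that lookup, with an illustrative metadata URL:

import aiohttp

async def get_logout_url(metadata_url: str):
    # e.g. "https://idp.example.com/.well-known/openid-configuration" (illustrative)
    async with aiohttp.ClientSession() as http:
        async with http.get(metadata_url) as resp:
            if resp.status != 200:
                return None
            metadata = await resp.json()
    # end_session_endpoint is optional in OIDC; many providers omit it
    return metadata.get("end_session_endpoint")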
@@ -816,9 +876,9 @@ async def get_admin_config(request: Request, user=Depends(get_admin_user)):

        "SHOW_ADMIN_DETAILS": request.app.state.config.SHOW_ADMIN_DETAILS,
        "WEBUI_URL": request.app.state.config.WEBUI_URL,
        "ENABLE_SIGNUP": request.app.state.config.ENABLE_SIGNUP,
        "ENABLE_API_KEY": request.app.state.config.ENABLE_API_KEY,
        "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS": request.app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS,
        "API_KEY_ALLOWED_ENDPOINTS": request.app.state.config.API_KEY_ALLOWED_ENDPOINTS,
        "ENABLE_API_KEYS": request.app.state.config.ENABLE_API_KEYS,
        "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS": request.app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS,
        "API_KEYS_ALLOWED_ENDPOINTS": request.app.state.config.API_KEYS_ALLOWED_ENDPOINTS,
        "DEFAULT_USER_ROLE": request.app.state.config.DEFAULT_USER_ROLE,
        "JWT_EXPIRES_IN": request.app.state.config.JWT_EXPIRES_IN,
        "ENABLE_COMMUNITY_SHARING": request.app.state.config.ENABLE_COMMUNITY_SHARING,

@@ -836,9 +896,9 @@ class AdminConfig(BaseModel):

    SHOW_ADMIN_DETAILS: bool
    WEBUI_URL: str
    ENABLE_SIGNUP: bool
    ENABLE_API_KEY: bool
    ENABLE_API_KEY_ENDPOINT_RESTRICTIONS: bool
    API_KEY_ALLOWED_ENDPOINTS: str
    ENABLE_API_KEYS: bool
    ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS: bool
    API_KEYS_ALLOWED_ENDPOINTS: str
    DEFAULT_USER_ROLE: str
    JWT_EXPIRES_IN: str
    ENABLE_COMMUNITY_SHARING: bool

@@ -859,12 +919,12 @@ async def update_admin_config(

    request.app.state.config.WEBUI_URL = form_data.WEBUI_URL
    request.app.state.config.ENABLE_SIGNUP = form_data.ENABLE_SIGNUP

    request.app.state.config.ENABLE_API_KEY = form_data.ENABLE_API_KEY
    request.app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS = (
        form_data.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS
    request.app.state.config.ENABLE_API_KEYS = form_data.ENABLE_API_KEYS
    request.app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS = (
        form_data.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS
    )
    request.app.state.config.API_KEY_ALLOWED_ENDPOINTS = (
        form_data.API_KEY_ALLOWED_ENDPOINTS
    request.app.state.config.API_KEYS_ALLOWED_ENDPOINTS = (
        form_data.API_KEYS_ALLOWED_ENDPOINTS
    )

    request.app.state.config.ENABLE_CHANNELS = form_data.ENABLE_CHANNELS

@@ -899,9 +959,9 @@ async def update_admin_config(

        "SHOW_ADMIN_DETAILS": request.app.state.config.SHOW_ADMIN_DETAILS,
        "WEBUI_URL": request.app.state.config.WEBUI_URL,
        "ENABLE_SIGNUP": request.app.state.config.ENABLE_SIGNUP,
        "ENABLE_API_KEY": request.app.state.config.ENABLE_API_KEY,
        "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS": request.app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS,
        "API_KEY_ALLOWED_ENDPOINTS": request.app.state.config.API_KEY_ALLOWED_ENDPOINTS,
        "ENABLE_API_KEYS": request.app.state.config.ENABLE_API_KEYS,
        "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS": request.app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS,
        "API_KEYS_ALLOWED_ENDPOINTS": request.app.state.config.API_KEYS_ALLOWED_ENDPOINTS,
        "DEFAULT_USER_ROLE": request.app.state.config.DEFAULT_USER_ROLE,
        "JWT_EXPIRES_IN": request.app.state.config.JWT_EXPIRES_IN,
        "ENABLE_COMMUNITY_SHARING": request.app.state.config.ENABLE_COMMUNITY_SHARING,

@@ -1026,9 +1086,11 @@ async def update_ldap_config(

# create api key
@router.post("/api_key", response_model=ApiKey)
async def generate_api_key(request: Request, user=Depends(get_current_user)):
    if not request.app.state.config.ENABLE_API_KEY:
    if not request.app.state.config.ENABLE_API_KEYS or not has_permission(
        user.id, "features.api_keys", request.app.state.config.USER_PERMISSIONS
    ):
        raise HTTPException(
            status.HTTP_403_FORBIDDEN,
            status_code=status.HTTP_403_FORBIDDEN,
            detail=ERROR_MESSAGES.API_KEY_CREATION_NOT_ALLOWED,
        )
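The admin-config keys above were renamed wholesale (ENABLE_API_KEY becomes ENABLE_API_KEYS, and so on), so external scripts that read this config need updating. A small compatibility shim for the transition; the fallback key is only for responses from older servers:

def read_api_keys_flag(config: dict) -> bool:
    # Prefer the new name, fall back to the pre-rename one
    return bool(config.get("ENABLE_API_KEYS", config.get("ENABLE_API_KEY", False)))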
@@ -10,7 +10,13 @@ from pydantic import BaseModel

from open_webui.socket.main import sio, get_user_ids_from_room
from open_webui.models.users import Users, UserNameResponse

from open_webui.models.channels import Channels, ChannelModel, ChannelForm
from open_webui.models.groups import Groups
from open_webui.models.channels import (
    Channels,
    ChannelModel,
    ChannelForm,
    ChannelResponse,
)
from open_webui.models.messages import (
    Messages,
    MessageModel,

@@ -24,9 +30,17 @@ from open_webui.constants import ERROR_MESSAGES

from open_webui.env import SRC_LOG_LEVELS


from open_webui.utils.models import (
    get_all_models,
    get_filtered_models,
)
from open_webui.utils.chat import generate_chat_completion


from open_webui.utils.auth import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access, get_users_with_access
from open_webui.utils.webhook import post_webhook
from open_webui.utils.channels import extract_mentions, replace_mentions

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])

@@ -72,7 +86,7 @@ async def create_new_channel(form_data: ChannelForm, user=Depends(get_admin_user

############################


@router.get("/{id}", response_model=Optional[ChannelModel])
@router.get("/{id}", response_model=Optional[ChannelResponse])
async def get_channel_by_id(id: str, user=Depends(get_verified_user)):
    channel = Channels.get_channel_by_id(id)
    if not channel:

@@ -87,7 +101,16 @@ async def get_channel_by_id(id: str, user=Depends(get_verified_user)):

            status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
        )

    return ChannelModel(**channel.model_dump())
    write_access = has_access(
        user.id, type="write", access_control=channel.access_control, strict=False
    )

    return ChannelResponse(
        **{
            **channel.model_dump(),
            "write_access": write_access or user.role == "admin",
        }
    )
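GET /{id} now reports whether the caller can post. A client-side usage sketch; BASE_URL and TOKEN are placeholders, and the /api/v1/channels prefix is assumed from how Open WebUI mounts its routers:

import requests

resp = requests.get(
    f"{BASE_URL}/api/v1/channels/{channel_id}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
channel = resp.json()
# write_access is computed server-side (write ACL, non-strict, or admin role)
if channel.get("write_access"):
    ...  # enable the message composer; read-only viewers see it disabled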

############################

@@ -144,7 +167,7 @@ async def delete_channel_by_id(id: str, user=Depends(get_admin_user)):


class MessageUserResponse(MessageResponse):
    user: UserNameResponse
    pass


@router.get("/{id}/messages", response_model=list[MessageUserResponse])

@@ -173,15 +196,17 @@ async def get_channel_messages(

            user = Users.get_user_by_id(message.user_id)
            users[message.user_id] = user

        replies = Messages.get_replies_by_message_id(message.id)
        latest_reply_at = replies[0].created_at if replies else None
        thread_replies = Messages.get_thread_replies_by_message_id(message.id)
        latest_thread_reply_at = (
            thread_replies[0].created_at if thread_replies else None
        )

        messages.append(
            MessageUserResponse(
                **{
                    **message.model_dump(),
                    "reply_count": len(replies),
                    "latest_reply_at": latest_reply_at,
                    "reply_count": len(thread_replies),
                    "latest_reply_at": latest_thread_reply_at,
                    "reactions": Messages.get_reactions_by_message_id(message.id),
                    "user": UserNameResponse(**users[message.user_id].model_dump()),
                }
@@ -200,14 +225,11 @@ async def send_notification(name, webui_url, channel, message, active_user_ids):

    users = get_users_with_access("read", channel.access_control)

    for user in users:
        if user.id in active_user_ids:
            continue
        else:
        if user.id not in active_user_ids:
            if user.settings:
                webhook_url = user.settings.ui.get("notifications", {}).get(
                    "webhook_url", None
                )

                if webhook_url:
                    await post_webhook(
                        name,

@@ -221,14 +243,185 @@ async def send_notification(name, webui_url, channel, message, active_user_ids):

                },
            )

    return True
@router.post("/{id}/messages/post", response_model=Optional[MessageModel])
async def post_new_message(
    request: Request,
    id: str,
    form_data: MessageForm,
    background_tasks: BackgroundTasks,
    user=Depends(get_verified_user),

async def model_response_handler(request, channel, message, user):
    MODELS = {
        model["id"]: model
        for model in get_filtered_models(await get_all_models(request, user=user), user)
    }

    mentions = extract_mentions(message.content)
    message_content = replace_mentions(message.content)

    model_mentions = {}

    # check if the message is a reply to a message sent by a model
    if (
        message.reply_to_message
        and message.reply_to_message.meta
        and message.reply_to_message.meta.get("model_id", None)
    ):
        model_id = message.reply_to_message.meta.get("model_id", None)
        model_mentions[model_id] = {"id": model_id, "id_type": "M"}

    # check if any of the mentions are models
    for mention in mentions:
        if mention["id_type"] == "M" and mention["id"] not in model_mentions:
            model_mentions[mention["id"]] = mention

    if not model_mentions:
        return False

    for mention in model_mentions.values():
        model_id = mention["id"]
        model = MODELS.get(model_id, None)

        if model:
            try:
                # reverse to get in chronological order
                thread_messages = Messages.get_messages_by_parent_id(
                    channel.id,
                    message.parent_id if message.parent_id else message.id,
                )[::-1]

                response_message, channel = await new_message_handler(
                    request,
                    channel.id,
                    MessageForm(
                        **{
                            "parent_id": (
                                message.parent_id if message.parent_id else message.id
                            ),
                            "content": f"",
                            "data": {},
                            "meta": {
                                "model_id": model_id,
                                "model_name": model.get("name", model_id),
                            },
                        }
                    ),
                    user,
                )

                thread_history = []
                images = []
                message_users = {}

                for thread_message in thread_messages:
                    message_user = None
                    if thread_message.user_id not in message_users:
                        message_user = Users.get_user_by_id(thread_message.user_id)
                        message_users[thread_message.user_id] = message_user
                    else:
                        message_user = message_users[thread_message.user_id]

                    if thread_message.meta and thread_message.meta.get(
                        "model_id", None
                    ):
                        # If the message was sent by a model, use the model name
                        message_model_id = thread_message.meta.get("model_id", None)
                        message_model = MODELS.get(message_model_id, None)
                        username = (
                            message_model.get("name", message_model_id)
                            if message_model
                            else message_model_id
                        )
                    else:
                        username = message_user.name if message_user else "Unknown"

                    thread_history.append(
                        f"{username}: {replace_mentions(thread_message.content)}"
                    )

                    thread_message_files = thread_message.data.get("files", [])
                    for file in thread_message_files:
                        if file.get("type", "") == "image":
                            images.append(file.get("url", ""))

                thread_history_string = "\n\n".join(thread_history)
                system_message = {
                    "role": "system",
                    "content": f"You are {model.get('name', model_id)}, participating in a threaded conversation. Be concise and conversational."
                    + (
                        f"Here's the thread history:\n\n\n{thread_history_string}\n\n\nContinue the conversation naturally as {model.get('name', model_id)}, addressing the most recent message while being aware of the full context."
                        if thread_history
                        else ""
                    ),
                }

                content = f"{user.name if user else 'User'}: {message_content}"
                if images:
                    content = [
                        {
                            "type": "text",
                            "text": content,
                        },
                        *[
                            {
                                "type": "image_url",
                                "image_url": {
                                    "url": image,
                                },
                            }
                            for image in images
                        ],
                    ]

                form_data = {
                    "model": model_id,
                    "messages": [
                        system_message,
                        {"role": "user", "content": content},
                    ],
                    "stream": False,
                }

                res = await generate_chat_completion(
                    request,
                    form_data=form_data,
                    user=user,
                )

                if res:
                    if res.get("choices", []) and len(res["choices"]) > 0:
                        await update_message_by_id(
                            channel.id,
                            response_message.id,
                            MessageForm(
                                **{
                                    "content": res["choices"][0]["message"]["content"],
                                    "meta": {
                                        "done": True,
                                    },
                                }
                            ),
                            user,
                        )
                    elif res.get("error", None):
                        await update_message_by_id(
                            channel.id,
                            response_message.id,
                            MessageForm(
                                **{
                                    "content": f"Error: {res['error']}",
                                    "meta": {
                                        "done": True,
                                    },
                                }
                            ),
                            user,
                        )
            except Exception as e:
                log.info(e)
                pass

    return True
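Note how the handler flattens the thread into a single user turn rather than replaying it as multi-turn chat history. A condensed sketch of the payload it hands to generate_chat_completion (identifiers illustrative):

form_data = {
    "model": "llama3",  # illustrative model id resolved from the mention
    "messages": [
        {
            "role": "system",
            # persona plus the "Name: text" thread transcript built above
            "content": "You are Llama 3, participating in a threaded conversation. ...",
        },
        {"role": "user", "content": "Alice: @llama3 what's the capital of France?"},
    ],
    "stream": False,  # the placeholder message is filled in later via update_message_by_id
}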

async def new_message_handler(
    request: Request, id: str, form_data: MessageForm, user=Depends(get_verified_user)
):
    channel = Channels.get_channel_by_id(id)
    if not channel:
@@ -237,7 +430,7 @@ async def post_new_message(

    )

    if user.role != "admin" and not has_access(
        user.id, type="read", access_control=channel.access_control
        user.id, type="write", access_control=channel.access_control, strict=False
    ):
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()

@@ -245,31 +438,21 @@ async def post_new_message(

    try:
        message = Messages.insert_new_message(form_data, channel.id, user.id)

        if message:
            message = Messages.get_message_by_id(message.id)
            event_data = {
                "channel_id": channel.id,
                "message_id": message.id,
                "data": {
                    "type": "message",
                    "data": MessageUserResponse(
                        **{
                            **message.model_dump(),
                            "reply_count": 0,
                            "latest_reply_at": None,
                            "reactions": Messages.get_reactions_by_message_id(
                                message.id
                            ),
                            "user": UserNameResponse(**user.model_dump()),
                        }
                    ).model_dump(),
                    "data": message.model_dump(),
                },
                "user": UserNameResponse(**user.model_dump()).model_dump(),
                "channel": channel.model_dump(),
            }

            await sio.emit(
                "channel-events",
                "events:channel",
                event_data,
                to=f"channel:{channel.id}",
            )

@@ -280,33 +463,45 @@ async def post_new_message(

            if parent_message:
                await sio.emit(
                    "channel-events",
                    "events:channel",
                    {
                        "channel_id": channel.id,
                        "message_id": parent_message.id,
                        "data": {
                            "type": "message:reply",
                            "data": MessageUserResponse(
                                **{
                                    **parent_message.model_dump(),
                                    "user": UserNameResponse(
                                        **Users.get_user_by_id(
                                            parent_message.user_id
                                        ).model_dump()
                                    ),
                                }
                            ).model_dump(),
                            "data": parent_message.model_dump(),
                        },
                        "user": UserNameResponse(**user.model_dump()).model_dump(),
                        "channel": channel.model_dump(),
                    },
                    to=f"channel:{channel.id}",
                )
            return message, channel
        else:
            raise Exception("Error creating message")
    except Exception as e:
        log.exception(e)
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
        )

    active_user_ids = get_user_ids_from_room(f"channel:{channel.id}")

    background_tasks.add_task(
        send_notification,
@router.post("/{id}/messages/post", response_model=Optional[MessageModel])
async def post_new_message(
    request: Request,
    id: str,
    form_data: MessageForm,
    background_tasks: BackgroundTasks,
    user=Depends(get_verified_user),
):

    try:
        message, channel = await new_message_handler(request, id, form_data, user)
        active_user_ids = get_user_ids_from_room(f"channel:{channel.id}")

        async def background_handler():
            await model_response_handler(request, channel, message, user)
            await send_notification(
                request.app.state.WEBUI_NAME,
                request.app.state.config.WEBUI_URL,
                channel,

@@ -314,7 +509,12 @@ async def post_new_message(

                active_user_ids,
            )

    return MessageModel(**message.model_dump())
        background_tasks.add_task(background_handler)

        return message

    except HTTPException as e:
        raise e
    except Exception as e:
        log.exception(e)
        raise HTTPException(
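The refactor returns the stored message immediately and defers both the model reply and webhook notifications. A minimal FastAPI sketch of the same deferred pattern:

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

@app.post("/demo/messages")
async def demo_post(background_tasks: BackgroundTasks):
    message = {"id": "m1", "content": "hi"}  # pretend this was just persisted

    async def background_handler():
        # runs after the response is sent; failures here never reach the client,
        # which is why the handler above logs instead of raising
        ...

    background_tasks.add_task(background_handler)
    return message  # the HTTP response does not wait for the model reply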
@@ -460,20 +660,13 @@ async def update_message_by_id(

    if message:
        await sio.emit(
            "channel-events",
            "events:channel",
            {
                "channel_id": channel.id,
                "message_id": message.id,
                "data": {
                    "type": "message:update",
                    "data": MessageUserResponse(
                        **{
                            **message.model_dump(),
                            "user": UserNameResponse(
                                **user.model_dump()
                            ).model_dump(),
                        }
                    ).model_dump(),
                    "data": message.model_dump(),
                },
                "user": UserNameResponse(**user.model_dump()).model_dump(),
                "channel": channel.model_dump(),

@@ -509,7 +702,7 @@ async def add_reaction_to_message(

    )

    if user.role != "admin" and not has_access(
        user.id, type="read", access_control=channel.access_control
        user.id, type="write", access_control=channel.access_control, strict=False
    ):
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()

@@ -531,7 +724,7 @@ async def add_reaction_to_message(

        message = Messages.get_message_by_id(message_id)

        await sio.emit(
            "channel-events",
            "events:channel",
            {
                "channel_id": channel.id,
                "message_id": message.id,

@@ -539,9 +732,6 @@ async def add_reaction_to_message(

                    "type": "message:reaction:add",
                    "data": {
                        **message.model_dump(),
                        "user": UserNameResponse(
                            **Users.get_user_by_id(message.user_id).model_dump()
                        ).model_dump(),
                        "name": form_data.name,
                    },
                },

@@ -575,7 +765,7 @@ async def remove_reaction_by_id_and_user_id_and_name(

    )

    if user.role != "admin" and not has_access(
        user.id, type="read", access_control=channel.access_control
        user.id, type="write", access_control=channel.access_control, strict=False
    ):
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()

@@ -600,7 +790,7 @@ async def remove_reaction_by_id_and_user_id_and_name(

        message = Messages.get_message_by_id(message_id)

        await sio.emit(
            "channel-events",
            "events:channel",
            {
                "channel_id": channel.id,
                "message_id": message.id,

@@ -608,9 +798,6 @@ async def remove_reaction_by_id_and_user_id_and_name(

                    "type": "message:reaction:remove",
                    "data": {
                        **message.model_dump(),
                        "user": UserNameResponse(
                            **Users.get_user_by_id(message.user_id).model_dump()
                        ).model_dump(),
                        "name": form_data.name,
                    },
                },

@@ -657,7 +844,9 @@ async def delete_message_by_id(

    if (
        user.role != "admin"
        and message.user_id != user.id
        and not has_access(user.id, type="read", access_control=channel.access_control)
        and not has_access(
            user.id, type="write", access_control=channel.access_control, strict=False
        )
    ):
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()

@@ -666,7 +855,7 @@ async def delete_message_by_id(

    try:
        Messages.delete_message_by_id(message_id)
        await sio.emit(
            "channel-events",
            "events:channel",
            {
                "channel_id": channel.id,
                "message_id": message.id,

@@ -689,22 +878,13 @@ async def delete_message_by_id(

        if parent_message:
            await sio.emit(
                "channel-events",
                "events:channel",
                {
                    "channel_id": channel.id,
                    "message_id": parent_message.id,
                    "data": {
                        "type": "message:reply",
                        "data": MessageUserResponse(
                            **{
                                **parent_message.model_dump(),
                                "user": UserNameResponse(
                                    **Users.get_user_by_id(
                                        parent_message.user_id
                                    ).model_dump()
                                ),
                            }
                        ).model_dump(),
                        "data": parent_message.model_dump(),
                    },
                    "user": UserNameResponse(**user.model_dump()).model_dump(),
                    "channel": channel.model_dump(),
@@ -37,7 +37,10 @@ router = APIRouter()

@router.get("/", response_model=list[ChatTitleIdResponse])
@router.get("/list", response_model=list[ChatTitleIdResponse])
def get_session_user_chat_list(
    user=Depends(get_verified_user), page: Optional[int] = None
    user=Depends(get_verified_user),
    page: Optional[int] = None,
    include_pinned: Optional[bool] = False,
    include_folders: Optional[bool] = False,
):
    try:
        if page is not None:

@@ -45,10 +48,16 @@ def get_session_user_chat_list(

            skip = (page - 1) * limit

            return Chats.get_chat_title_id_list_by_user_id(
                user.id, skip=skip, limit=limit
                user.id,
                include_folders=include_folders,
                include_pinned=include_pinned,
                skip=skip,
                limit=limit,
            )
        else:
            return Chats.get_chat_title_id_list_by_user_id(user.id)
            return Chats.get_chat_title_id_list_by_user_id(
                user.id, include_folders=include_folders, include_pinned=include_pinned
            )
    except Exception as e:
        log.exception(e)
        raise HTTPException(
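Both new flags default to False, so existing callers keep the old listing behavior. A usage sketch, assuming the chats router is mounted under /api/v1/chats; BASE_URL and TOKEN are placeholders:

import requests

resp = requests.get(
    f"{BASE_URL}/api/v1/chats/list",
    params={"page": 1, "include_pinned": True, "include_folders": True},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
chats = resp.json()  # pinned and foldered chats are now part of the page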
@@ -166,7 +175,7 @@ async def import_chat(form_data: ChatImportForm, user=Depends(get_verified_user)

@router.get("/search", response_model=list[ChatTitleIdResponse])
async def search_user_chats(
def search_user_chats(
    text: str, page: Optional[int] = None, user=Depends(get_verified_user)
):
    if page is None:

@@ -214,6 +223,28 @@ async def get_chats_by_folder_id(folder_id: str, user=Depends(get_verified_user)

    ]


@router.get("/folder/{folder_id}/list")
async def get_chat_list_by_folder_id(
    folder_id: str, page: Optional[int] = 1, user=Depends(get_verified_user)
):
    try:
        limit = 60
        skip = (page - 1) * limit

        return [
            {"title": chat.title, "id": chat.id, "updated_at": chat.updated_at}
            for chat in Chats.get_chats_by_folder_id_and_user_id(
                folder_id, user.id, skip=skip, limit=limit
            )
        ]

    except Exception as e:
        log.exception(e)
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
        )


############################
# GetPinnedChats
############################

@@ -335,6 +366,16 @@ async def archive_all_chats(user=Depends(get_verified_user)):

    return Chats.archive_all_chats_by_user_id(user.id)


############################
# UnarchiveAllChats
############################


@router.post("/unarchive/all", response_model=bool)
async def unarchive_all_chats(user=Depends(get_verified_user)):
    return Chats.unarchive_all_chats_by_user_id(user.id)


############################
# GetSharedChatById
############################
@@ -1,5 +1,8 @@

import logging
import copy
from fastapi import APIRouter, Depends, Request, HTTPException
from pydantic import BaseModel, ConfigDict
import aiohttp

from typing import Optional

@@ -12,10 +15,25 @@ from open_webui.utils.tools import (

    get_tool_server_url,
    set_tool_servers,
)
from open_webui.utils.mcp.client import MCPClient
from open_webui.models.oauth_sessions import OAuthSessions

from open_webui.env import SRC_LOG_LEVELS

from open_webui.utils.oauth import (
    get_discovery_urls,
    get_oauth_client_info_with_dynamic_client_registration,
    encrypt_data,
    decrypt_data,
    OAuthClientInformationFull,
)
from mcp.shared.auth import OAuthMetadata

router = APIRouter()

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MAIN"])


############################
# ImportConfig

@@ -79,6 +97,43 @@ async def set_connections_config(

    }


class OAuthClientRegistrationForm(BaseModel):
    url: str
    client_id: str
    client_name: Optional[str] = None


@router.post("/oauth/clients/register")
async def register_oauth_client(
    request: Request,
    form_data: OAuthClientRegistrationForm,
    type: Optional[str] = None,
    user=Depends(get_admin_user),
):
    try:
        oauth_client_id = form_data.client_id
        if type:
            oauth_client_id = f"{type}:{form_data.client_id}"

        oauth_client_info = (
            await get_oauth_client_info_with_dynamic_client_registration(
                request, oauth_client_id, form_data.url
            )
        )
        return {
            "status": True,
            "oauth_client_info": encrypt_data(
                oauth_client_info.model_dump(mode="json")
            ),
        }
    except Exception as e:
        log.debug(f"Failed to register OAuth client: {e}")
        raise HTTPException(
            status_code=400,
            detail=f"Failed to register OAuth client",
        )
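A sketch of how an admin client might drive the new registration endpoint, assuming this router is mounted under /api/v1/configs; the server URL and client fields are illustrative:

import requests

resp = requests.post(
    f"{BASE_URL}/api/v1/configs/oauth/clients/register",
    params={"type": "mcp"},  # optional; stored client id becomes "mcp:<client_id>"
    json={
        "url": "https://mcp.example.com",
        "client_id": "my-client",
        "client_name": "My MCP Server",
    },
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
# The response carries the encrypted client info, ready to be stored on the
# tool-server connection as info.oauth_client_info (see set_tool_servers_config below).
encrypted_info = resp.json()["oauth_client_info"]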
############################
# ToolServers Config
############################

@@ -87,7 +142,9 @@ async def set_connections_config(

class ToolServerConnection(BaseModel):
    url: str
    path: str
    type: Optional[str] = "openapi"  # openapi, mcp
    auth_type: Optional[str]
    headers: Optional[dict | str] = None
    key: Optional[str]
    config: Optional[dict]

@@ -111,11 +168,48 @@ async def set_tool_servers_config(

    form_data: ToolServersConfigForm,
    user=Depends(get_admin_user),
):
    for connection in request.app.state.config.TOOL_SERVER_CONNECTIONS:
        server_type = connection.get("type", "openapi")
        auth_type = connection.get("auth_type", "none")

        if auth_type == "oauth_2.1":
            # Remove existing OAuth clients for tool servers
            server_id = connection.get("info", {}).get("id")
            client_key = f"{server_type}:{server_id}"

            try:
                request.app.state.oauth_client_manager.remove_client(client_key)
            except:
                pass

    # Set new tool server connections
    request.app.state.config.TOOL_SERVER_CONNECTIONS = [
        connection.model_dump() for connection in form_data.TOOL_SERVER_CONNECTIONS
    ]

    await set_tool_servers(request)

    for connection in request.app.state.config.TOOL_SERVER_CONNECTIONS:
        server_type = connection.get("type", "openapi")
        if server_type == "mcp":
            server_id = connection.get("info", {}).get("id")
            auth_type = connection.get("auth_type", "none")

            if auth_type == "oauth_2.1" and server_id:
                try:
                    oauth_client_info = connection.get("info", {}).get(
                        "oauth_client_info", ""
                    )
                    oauth_client_info = decrypt_data(oauth_client_info)

                    request.app.state.oauth_client_manager.add_client(
                        f"{server_type}:{server_id}",
                        OAuthClientInformationFull(**oauth_client_info),
                    )
                except Exception as e:
                    log.debug(f"Failed to add OAuth client for MCP tool server: {e}")
                    continue

    return {
        "TOOL_SERVER_CONNECTIONS": request.app.state.config.TOOL_SERVER_CONNECTIONS,
    }
@@ -129,19 +223,129 @@ async def verify_tool_servers_config(

    Verify the connection to the tool server.
    """
    try:
        if form_data.type == "mcp":
            if form_data.auth_type == "oauth_2.1":
                discovery_urls = get_discovery_urls(form_data.url)
                for discovery_url in discovery_urls:
                    log.debug(
                        f"Trying to fetch OAuth 2.1 discovery document from {discovery_url}"
                    )
                    async with aiohttp.ClientSession(trust_env=True) as session:
                        async with session.get(
                            discovery_url
                        ) as oauth_server_metadata_response:
                            if oauth_server_metadata_response.status == 200:
                                try:
                                    oauth_server_metadata = (
                                        OAuthMetadata.model_validate(
                                            await oauth_server_metadata_response.json()
                                        )
                                    )
                                    return {
                                        "status": True,
                                        "oauth_server_metadata": oauth_server_metadata.model_dump(
                                            mode="json"
                                        ),
                                    }
                                except Exception as e:
                                    log.info(
                                        f"Failed to parse OAuth 2.1 discovery document: {e}"
                                    )
                                    raise HTTPException(
                                        status_code=400,
                                        detail=f"Failed to parse OAuth 2.1 discovery document from {discovery_url}",
                                    )

        token = None
        if form_data.auth_type == "bearer":
            token = form_data.key
        elif form_data.auth_type == "session":
            token = request.state.token.credentials
                raise HTTPException(
                    status_code=400,
                    detail=f"Failed to fetch OAuth 2.1 discovery document from {discovery_urls}",
                )
            else:
                try:
                    client = MCPClient()
                    headers = None

        url = get_tool_server_url(form_data.url, form_data.path)
        return await get_tool_server_data(token, url)
                    token = None
                    if form_data.auth_type == "bearer":
                        token = form_data.key
                    elif form_data.auth_type == "session":
                        token = request.state.token.credentials
                    elif form_data.auth_type == "system_oauth":
                        oauth_token = None
                        try:
                            if request.cookies.get("oauth_session_id", None):
                                oauth_token = await request.app.state.oauth_manager.get_oauth_token(
                                    user.id,
                                    request.cookies.get("oauth_session_id", None),
                                )

                            if oauth_token:
                                token = oauth_token.get("access_token", "")
                        except Exception as e:
                            pass
                    if token:
                        headers = {"Authorization": f"Bearer {token}"}

                    if form_data.headers and isinstance(form_data.headers, dict):
                        if headers is None:
                            headers = {}
                        headers.update(form_data.headers)

                    await client.connect(form_data.url, headers=headers)
                    specs = await client.list_tool_specs()
                    return {
                        "status": True,
                        "specs": specs,
                    }
                except Exception as e:
                    log.debug(f"Failed to create MCP client: {e}")
                    raise HTTPException(
                        status_code=400,
                        detail=f"Failed to create MCP client",
                    )
                finally:
                    if client:
                        await client.disconnect()
        else:  # openapi
            token = None
            headers = None
            if form_data.auth_type == "bearer":
                token = form_data.key
            elif form_data.auth_type == "session":
                token = request.state.token.credentials
            elif form_data.auth_type == "system_oauth":
                try:
                    if request.cookies.get("oauth_session_id", None):
                        oauth_token = (
                            await request.app.state.oauth_manager.get_oauth_token(
                                user.id,
                                request.cookies.get("oauth_session_id", None),
                            )
                        )

                        if oauth_token:
                            token = oauth_token.get("access_token", "")

                except Exception as e:
                    pass

            if token:
                headers = {"Authorization": f"Bearer {token}"}

            if form_data.headers and isinstance(form_data.headers, dict):
                if headers is None:
                    headers = {}
                headers.update(form_data.headers)

            url = get_tool_server_url(form_data.url, form_data.path)
            return await get_tool_server_data(url, headers=headers)
    except HTTPException as e:
        raise e
    except Exception as e:
        log.debug(f"Failed to connect to the tool server: {e}")
        raise HTTPException(
            status_code=400,
            detail=f"Failed to connect to the tool server: {str(e)}",
            detail=f"Failed to connect to the tool server",
        )
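get_discovery_urls is probed in order until one location yields a parseable metadata document. For orientation, a sketch of the well-known paths OAuth 2.1 / OIDC servers conventionally expose; the real helper lives in open_webui.utils.oauth and may differ:

from urllib.parse import urlparse

def discovery_url_candidates(server_url: str) -> list[str]:
    parsed = urlparse(server_url)
    base = f"{parsed.scheme}://{parsed.netloc}"
    return [
        f"{base}/.well-known/oauth-authorization-server",  # RFC 8414 metadata
        f"{base}/.well-known/openid-configuration",        # OIDC discovery
    ]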
@@ -259,6 +463,7 @@ async def set_code_execution_config(

############################
class ModelsConfigForm(BaseModel):
    DEFAULT_MODELS: Optional[str]
    DEFAULT_PINNED_MODELS: Optional[str]
    MODEL_ORDER_LIST: Optional[list[str]]

@@ -266,6 +471,7 @@ class ModelsConfigForm(BaseModel):

async def get_models_config(request: Request, user=Depends(get_admin_user)):
    return {
        "DEFAULT_MODELS": request.app.state.config.DEFAULT_MODELS,
        "DEFAULT_PINNED_MODELS": request.app.state.config.DEFAULT_PINNED_MODELS,
        "MODEL_ORDER_LIST": request.app.state.config.MODEL_ORDER_LIST,
    }

@@ -275,9 +481,11 @@ async def set_models_config(

    request: Request, form_data: ModelsConfigForm, user=Depends(get_admin_user)
):
    request.app.state.config.DEFAULT_MODELS = form_data.DEFAULT_MODELS
    request.app.state.config.DEFAULT_PINNED_MODELS = form_data.DEFAULT_PINNED_MODELS
    request.app.state.config.MODEL_ORDER_LIST = form_data.MODEL_ORDER_LIST
    return {
        "DEFAULT_MODELS": request.app.state.config.DEFAULT_MODELS,
        "DEFAULT_PINNED_MODELS": request.app.state.config.DEFAULT_PINNED_MODELS,
        "MODEL_ORDER_LIST": request.app.state.config.MODEL_ORDER_LIST,
    }
@@ -115,16 +115,15 @@ def process_uploaded_file(request, file, file_path, file_item, file_metadata, us

            request.app.state.config.CONTENT_EXTRACTION_ENGINE == "external"
        ):
            process_file(request, ProcessFileForm(file_id=file_item.id), user=user)
        else:
            raise Exception(
                f"File type {file.content_type} is not supported for processing"
            )
    else:
        log.info(
            f"File type {file.content_type} is not provided, but trying to process anyway"
        )
        process_file(request, ProcessFileForm(file_id=file_item.id), user=user)

    Files.update_file_data_by_id(
        file_item.id,
        {"status": "completed"},
    )
except Exception as e:
    log.error(f"Error processing file: {file_item.id}")
    Files.update_file_data_by_id(

@@ -411,25 +410,28 @@ async def get_file_process_status(

    MAX_FILE_PROCESSING_DURATION = 3600 * 2

    async def event_stream(file_item):
        for _ in range(MAX_FILE_PROCESSING_DURATION):
            file_item = Files.get_file_by_id(file_item.id)
            if file_item:
                data = file_item.model_dump().get("data", {})
                status = data.get("status")
        if file_item:
            for _ in range(MAX_FILE_PROCESSING_DURATION):
                file_item = Files.get_file_by_id(file_item.id)
                if file_item:
                    data = file_item.model_dump().get("data", {})
                    status = data.get("status")

                if status:
                    event = {"status": status}
                    if status == "failed":
                        event["error"] = data.get("error")
                    if status:
                        event = {"status": status}
                        if status == "failed":
                            event["error"] = data.get("error")

                yield f"data: {json.dumps(event)}\n\n"
                if status in ("completed", "failed"):
                        yield f"data: {json.dumps(event)}\n\n"
                        if status in ("completed", "failed"):
                            break
                    else:
                        # Legacy
                        break
                else:
                    # Legacy
                    break

            await asyncio.sleep(0.5)
                await asyncio.sleep(0.5)
        else:
            yield f"data: {json.dumps({'status': 'not_found'})}\n\n"

    return StreamingResponse(
        event_stream(file),
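The restructured event_stream yields plain server-sent events, one data: line per poll tick, and now emits a terminal not_found event instead of looping on a missing file. A hand-rolled client sketch; the status path is assumed from the router layout:

import json
import requests

with requests.get(
    f"{BASE_URL}/api/v1/files/{file_id}/process/status",  # assumed path
    headers={"Authorization": f"Bearer {TOKEN}"},
    stream=True,
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data: "):
            event = json.loads(line[len("data: "):])
            if event["status"] in ("completed", "failed", "not_found"):
                break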
@@ -10,10 +10,15 @@ import mimetypes

from open_webui.models.folders import (
    FolderForm,
    FolderUpdateForm,
    FolderModel,
    FolderNameIdResponse,
    Folders,
)
from open_webui.models.chats import Chats
from open_webui.models.files import Files
from open_webui.models.knowledge import Knowledges


from open_webui.config import UPLOAD_DIR
from open_webui.env import SRC_LOG_LEVELS

@@ -40,24 +45,46 @@ router = APIRouter()

############################


@router.get("/", response_model=list[FolderModel])
@router.get("/", response_model=list[FolderNameIdResponse])
async def get_folders(user=Depends(get_verified_user)):
    folders = Folders.get_folders_by_user_id(user.id)

    return [
        {
            **folder.model_dump(),
            "items": {
                "chats": [
                    {"title": chat.title, "id": chat.id, "updated_at": chat.updated_at}
                    for chat in Chats.get_chats_by_folder_id_and_user_id(
                        folder.id, user.id
                    )
                ]
            },
        }
        for folder in folders
    ]
    # Verify folder data integrity
    folder_list = []
    for folder in folders:
        if folder.parent_id and not Folders.get_folder_by_id_and_user_id(
            folder.parent_id, user.id
        ):
            folder = Folders.update_folder_parent_id_by_id_and_user_id(
                folder.id, user.id, None
            )

        if folder.data:
            if "files" in folder.data:
                valid_files = []
                for file in folder.data["files"]:

                    if file.get("type") == "file":
                        if Files.check_access_by_user_id(
                            file.get("id"), user.id, "read"
                        ):
                            valid_files.append(file)
                    elif file.get("type") == "collection":
                        if Knowledges.check_access_by_user_id(
                            file.get("id"), user.id, "read"
                        ):
                            valid_files.append(file)
                    else:
                        valid_files.append(file)

                folder.data["files"] = valid_files
                Folders.update_folder_by_id_and_user_id(
                    folder.id, user.id, FolderUpdateForm(data=folder.data)
                )

        folder_list.append(FolderNameIdResponse(**folder.model_dump()))

    return folder_list


############################

@@ -113,22 +140,24 @@ async def get_folder_by_id(id: str, user=Depends(get_verified_user)):

@router.post("/{id}/update")
async def update_folder_name_by_id(
    id: str, form_data: FolderForm, user=Depends(get_verified_user)
    id: str, form_data: FolderUpdateForm, user=Depends(get_verified_user)
):
    folder = Folders.get_folder_by_id_and_user_id(id, user.id)
    if folder:
        existing_folder = Folders.get_folder_by_parent_id_and_user_id_and_name(
            folder.parent_id, user.id, form_data.name
        )
        if existing_folder and existing_folder.id != id:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail=ERROR_MESSAGES.DEFAULT("Folder already exists"),

        if form_data.name is not None:
            # Check if folder with same name exists
            existing_folder = Folders.get_folder_by_parent_id_and_user_id_and_name(
                folder.parent_id, user.id, form_data.name
            )
            if existing_folder and existing_folder.id != id:
                raise HTTPException(
                    status_code=status.HTTP_400_BAD_REQUEST,
                    detail=ERROR_MESSAGES.DEFAULT("Folder already exists"),
                )

        try:
            folder = Folders.update_folder_by_id_and_user_id(id, user.id, form_data)

            return folder
        except Exception as e:
            log.exception(e)

@@ -231,31 +260,41 @@ async def update_folder_is_expanded_by_id(

async def delete_folder_by_id(
    request: Request, id: str, user=Depends(get_verified_user)
):
    chat_delete_permission = has_permission(
        user.id, "chat.delete", request.app.state.config.USER_PERMISSIONS
    )

    if user.role != "admin" and not chat_delete_permission:
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
    if Chats.count_chats_by_folder_id_and_user_id(id, user.id):
        chat_delete_permission = has_permission(
            user.id, "chat.delete", request.app.state.config.USER_PERMISSIONS
        )

    folder = Folders.get_folder_by_id_and_user_id(id, user.id)
    if folder:
        try:
            folder_ids = Folders.delete_folder_by_id_and_user_id(id, user.id)
            for folder_id in folder_ids:
                Chats.delete_chats_by_user_id_and_folder_id(user.id, folder_id)

            return True
        except Exception as e:
            log.exception(e)
            log.error(f"Error deleting folder: {id}")
        if user.role != "admin" and not chat_delete_permission:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail=ERROR_MESSAGES.DEFAULT("Error deleting folder"),
                status_code=status.HTTP_403_FORBIDDEN,
                detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
            )

    folders = []
    folders.append(Folders.get_folder_by_id_and_user_id(id, user.id))
    while folders:
        folder = folders.pop()
        if folder:
            try:
                folder_ids = Folders.delete_folder_by_id_and_user_id(id, user.id)
                for folder_id in folder_ids:
                    Chats.delete_chats_by_user_id_and_folder_id(user.id, folder_id)

                return True
            except Exception as e:
                log.exception(e)
                log.error(f"Error deleting folder: {id}")
                raise HTTPException(
                    status_code=status.HTTP_400_BAD_REQUEST,
                    detail=ERROR_MESSAGES.DEFAULT("Error deleting folder"),
                )
            finally:
                # Get all subfolders
                subfolders = Folders.get_folders_by_parent_id_and_user_id(
                    folder.id, user.id
                )
                folders.extend(subfolders)

        else:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
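The delete path above replaces a single-folder delete with an explicit stack so nested folders are visited without recursion. The traversal pattern, reduced to its core with an illustrative children_of callback:

def walk_folders(root, children_of):
    # children_of(folder) -> list of direct subfolders
    stack = [root]
    while stack:
        folder = stack.pop()
        if folder is None:
            continue
        yield folder
        stack.extend(children_of(folder))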
@@ -10,6 +10,8 @@ from open_webui.models.functions import (

    FunctionForm,
    FunctionModel,
    FunctionResponse,
    FunctionUserResponse,
    FunctionWithValvesModel,
    Functions,
)
from open_webui.utils.plugin import (

@@ -41,14 +43,19 @@ async def get_functions(user=Depends(get_verified_user)):

    return Functions.get_functions()


@router.get("/list", response_model=list[FunctionUserResponse])
async def get_function_list(user=Depends(get_admin_user)):
    return Functions.get_function_list()


############################
# ExportFunctions
############################


@router.get("/export", response_model=list[FunctionModel])
async def get_functions(user=Depends(get_admin_user)):
    return Functions.get_functions()
@router.get("/export", response_model=list[FunctionModel | FunctionWithValvesModel])
async def get_functions(include_valves: bool = False, user=Depends(get_admin_user)):
    return Functions.get_functions(include_valves=include_valves)


############################

@@ -132,10 +139,10 @@ async def load_function_from_url(


class SyncFunctionsForm(BaseModel):
    functions: list[FunctionModel] = []
    functions: list[FunctionWithValvesModel] = []


@router.post("/sync", response_model=list[FunctionModel])
@router.post("/sync", response_model=list[FunctionWithValvesModel])
async def sync_functions(
    request: Request, form_data: SyncFunctionsForm, user=Depends(get_admin_user)
):

@@ -147,6 +154,18 @@ async def sync_functions(

                content=function.content,
            )

            if hasattr(function_module, "Valves") and function.valves:
                Valves = function_module.Valves
                try:
                    Valves(
                        **{k: v for k, v in function.valves.items() if v is not None}
                    )
                except Exception as e:
                    log.exception(
                        f"Error validating valves for function {function.id}: {e}"
                    )
                    raise e

        return Functions.sync_functions(user.id, form_data.functions)
    except Exception as e:
        log.exception(f"Failed to load a function: {e}")

@@ -191,6 +210,9 @@ async def create_new_function(

            function_cache_dir = CACHE_DIR / "functions" / form_data.id
            function_cache_dir.mkdir(parents=True, exist_ok=True)

            if function_type == "filter" and getattr(function_module, "toggle", None):
                Functions.update_function_metadata_by_id(id, {"toggle": True})

            if function:
                return function
            else:

@@ -307,6 +329,9 @@ async def update_function_by_id(

        function = Functions.update_function_by_id(id, updated)

        if function_type == "filter" and getattr(function_module, "toggle", None):
            Functions.update_function_metadata_by_id(id, {"toggle": True})

        if function:
            return function
        else:

@@ -412,8 +437,10 @@ async def update_function_valves_by_id(

        try:
            form_data = {k: v for k, v in form_data.items() if v is not None}
            valves = Valves(**form_data)
            Functions.update_function_valves_by_id(id, valves.model_dump())
            return valves.model_dump()

            valves_dict = valves.model_dump(exclude_unset=True)
            Functions.update_function_valves_by_id(id, valves_dict)
            return valves_dict
        except Exception as e:
            log.exception(f"Error updating function values by id {id}: {e}")
            raise HTTPException(

@@ -495,10 +522,11 @@ async def update_function_user_valves_by_id(

            try:
                form_data = {k: v for k, v in form_data.items() if v is not None}
                user_valves = UserValves(**form_data)
                user_valves_dict = user_valves.model_dump(exclude_unset=True)
                Functions.update_user_valves_by_id_and_user_id(
                    id, user.id, user_valves.model_dump()
                    id, user.id, user_valves_dict
                )
                return user_valves.model_dump()
                return user_valves_dict
            except Exception as e:
                log.exception(f"Error updating function user valves by id {id}: {e}")
                raise HTTPException(
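Switching the valves round-trip to exclude_unset means only fields the caller actually sent are persisted, instead of every field padded out with defaults. A minimal Pydantic v2 illustration:

from pydantic import BaseModel

class Valves(BaseModel):
    temperature: float = 0.7
    max_tokens: int = 1024

v = Valves(**{"temperature": 0.2})
print(v.model_dump())                    # {'temperature': 0.2, 'max_tokens': 1024}
print(v.model_dump(exclude_unset=True))  # {'temperature': 0.2}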
@@ -33,9 +33,18 @@ router = APIRouter()

@router.get("/", response_model=list[GroupResponse])
async def get_groups(user=Depends(get_verified_user)):
    if user.role == "admin":
        return Groups.get_groups()
        groups = Groups.get_groups()
    else:
        return Groups.get_groups_by_member_id(user.id)
        groups = Groups.get_groups_by_member_id(user.id)

    return [
        GroupResponse(
            **group.model_dump(),
            member_count=Groups.get_group_member_count_by_id(group.id),
        )
        for group in groups
        if group
    ]


############################

@@ -48,7 +57,10 @@ async def create_new_group(form_data: GroupForm, user=Depends(get_admin_user)):

    try:
        group = Groups.insert_new_group(user.id, form_data)
        if group:
            return group
            return GroupResponse(
                **group.model_dump(),
                member_count=Groups.get_group_member_count_by_id(group.id),
            )
        else:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,

@@ -71,7 +83,10 @@ async def create_new_group(form_data: GroupForm, user=Depends(get_admin_user)):

async def get_group_by_id(id: str, user=Depends(get_admin_user)):
    group = Groups.get_group_by_id(id)
    if group:
        return group
        return GroupResponse(
            **group.model_dump(),
            member_count=Groups.get_group_member_count_by_id(group.id),
        )
    else:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,

@@ -89,12 +104,12 @@ async def update_group_by_id(

    id: str, form_data: GroupUpdateForm, user=Depends(get_admin_user)
):
    try:
        if form_data.user_ids:
            form_data.user_ids = Users.get_valid_user_ids(form_data.user_ids)

        group = Groups.update_group_by_id(id, form_data)
        if group:
            return group
            return GroupResponse(
                **group.model_dump(),
                member_count=Groups.get_group_member_count_by_id(group.id),
            )
        else:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,

@@ -123,7 +138,10 @@ async def add_user_to_group(

    group = Groups.add_users_to_group(id, form_data.user_ids)
    if group:
        return group
        return GroupResponse(
            **group.model_dump(),
            member_count=Groups.get_group_member_count_by_id(group.id),
        )
    else:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,

@@ -144,7 +162,10 @@ async def remove_users_from_group(

    try:
        group = Groups.remove_users_from_group(id, form_data.user_ids)
        if group:
            return group
            return GroupResponse(
                **group.model_dump(),
                member_count=Groups.get_group_member_count_by_id(group.id),
            )
        else:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
@@ -1,6 +1,6 @@

from typing import List, Optional
from pydantic import BaseModel
from fastapi import APIRouter, Depends, HTTPException, status, Request
from fastapi import APIRouter, Depends, HTTPException, status, Request, Query
import logging

from open_webui.models.knowledge import (

@@ -151,6 +151,18 @@ async def create_new_knowledge(

            detail=ERROR_MESSAGES.UNAUTHORIZED,
        )

    # Check if user can share publicly
    if (
        user.role != "admin"
        and form_data.access_control == None
        and not has_permission(
            user.id,
            "sharing.public_knowledge",
            request.app.state.config.USER_PERMISSIONS,
        )
    ):
        form_data.access_control = {}

    knowledge = Knowledges.insert_new_knowledge(user.id, form_data)

    if knowledge:

@@ -285,6 +297,7 @@ async def get_knowledge_by_id(id: str, user=Depends(get_verified_user)):

@router.post("/{id}/update", response_model=Optional[KnowledgeFilesResponse])
async def update_knowledge_by_id(
    request: Request,
    id: str,
    form_data: KnowledgeForm,
    user=Depends(get_verified_user),

@@ -306,10 +319,22 @@ async def update_knowledge_by_id(

            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    # Check if user can share publicly
    if (
        user.role != "admin"
        and form_data.access_control == None
        and not has_permission(
            user.id,
            "sharing.public_knowledge",
            request.app.state.config.USER_PERMISSIONS,
        )
    ):
        form_data.access_control = {}

    knowledge = Knowledges.update_knowledge_by_id(id=id, form_data=form_data)
    if knowledge:
        file_ids = knowledge.data.get("file_ids", []) if knowledge.data else []
        files = Files.get_files_by_ids(file_ids)
        files = Files.get_file_metadatas_by_ids(file_ids)

        return KnowledgeFilesResponse(
            **knowledge.model_dump(),

@@ -492,6 +517,7 @@ def update_file_from_knowledge_by_id(

def remove_file_from_knowledge_by_id(
    id: str,
    form_data: KnowledgeFileIdForm,
    delete_file: bool = Query(True),
    user=Depends(get_verified_user),
):
    knowledge = Knowledges.get_knowledge_by_id(id=id)

@@ -528,18 +554,19 @@ def remove_file_from_knowledge_by_id(

        log.debug(e)
        pass

    try:
        # Remove the file's collection from vector database
        file_collection = f"file-{form_data.file_id}"
        if VECTOR_DB_CLIENT.has_collection(collection_name=file_collection):
            VECTOR_DB_CLIENT.delete_collection(collection_name=file_collection)
    except Exception as e:
        log.debug("This was most likely caused by bypassing embedding processing")
        log.debug(e)
        pass
    if delete_file:
        try:
            # Remove the file's collection from vector database
            file_collection = f"file-{form_data.file_id}"
            if VECTOR_DB_CLIENT.has_collection(collection_name=file_collection):
                VECTOR_DB_CLIENT.delete_collection(collection_name=file_collection)
        except Exception as e:
            log.debug("This was most likely caused by bypassing embedding processing")
            log.debug(e)
            pass

    # Delete file from database
    Files.delete_file_by_id(form_data.file_id)
        # Delete file from database
        Files.delete_file_by_id(form_data.file_id)

    if knowledge:
        data = knowledge.data or {}
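With delete_file exposed as a query parameter (default True), a file can now be detached from a knowledge base without destroying the underlying upload. A usage sketch; the path is assumed from the router layout and BASE_URL/TOKEN are placeholders:

import requests

requests.post(
    f"{BASE_URL}/api/v1/knowledge/{knowledge_id}/file/remove",  # assumed path
    params={"delete_file": "false"},  # detach only; keep the file record and its collection
    json={"file_id": file_id},
    headers={"Authorization": f"Bearer {TOKEN}"},
)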
@ -1,4 +1,9 @@
|
|||
from typing import Optional
|
||||
import io
|
||||
import base64
|
||||
import json
|
||||
import asyncio
|
||||
import logging
|
||||
|
||||
from open_webui.models.models import (
|
||||
ModelForm,
|
||||
|
|
@@ -10,22 +15,38 @@ from open_webui.models.models import (
 
 from pydantic import BaseModel
 from open_webui.constants import ERROR_MESSAGES
-from fastapi import APIRouter, Depends, HTTPException, Request, status
+from fastapi import (
+    APIRouter,
+    Depends,
+    HTTPException,
+    Request,
+    status,
+    Response,
+)
+from fastapi.responses import FileResponse, StreamingResponse
 
 
 from open_webui.utils.auth import get_admin_user, get_verified_user
 from open_webui.utils.access_control import has_access, has_permission
-from open_webui.config import BYPASS_ADMIN_ACCESS_CONTROL
+from open_webui.config import BYPASS_ADMIN_ACCESS_CONTROL, STATIC_DIR
+
+log = logging.getLogger(__name__)
 
 router = APIRouter()
 
 
+def is_valid_model_id(model_id: str) -> bool:
+    return model_id and len(model_id) <= 256
+
+
 ###########################
 # GetModels
 ###########################
 
 
-@router.get("/", response_model=list[ModelUserResponse])
+@router.get(
+    "/list", response_model=list[ModelUserResponse]
+)  # do NOT use "/" as path, conflicts with main.py
 async def get_models(id: Optional[str] = None, user=Depends(get_verified_user)):
     if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         return Models.get_models()
@@ -69,6 +90,12 @@ async def create_new_model(
                 detail=ERROR_MESSAGES.MODEL_ID_TAKEN,
             )
 
+        if not is_valid_model_id(form_data.id):
+            raise HTTPException(
+                status_code=status.HTTP_400_BAD_REQUEST,
+                detail=ERROR_MESSAGES.MODEL_ID_TOO_LONG,
+            )
+
     else:
         model = Models.insert_new_model(form_data, user.id)
         if model:
@@ -86,8 +113,73 @@ async def create_new_model(
 
 
 @router.get("/export", response_model=list[ModelModel])
-async def export_models(user=Depends(get_admin_user)):
-    return Models.get_models()
+async def export_models(request: Request, user=Depends(get_verified_user)):
+    if user.role != "admin" and not has_permission(
+        user.id, "workspace.models_export", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
+        return Models.get_models()
+    else:
+        return Models.get_models_by_user_id(user.id)
+
+
+############################
+# ImportModels
+############################
+
+
+class ModelsImportForm(BaseModel):
+    models: list[dict]
+
+
+@router.post("/import", response_model=bool)
+async def import_models(
+    request: Request,
+    user=Depends(get_verified_user),
+    form_data: ModelsImportForm = (...),
+):
+    if user.role != "admin" and not has_permission(
+        user.id, "workspace.models_import", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+    try:
+        data = form_data.models
+        if isinstance(data, list):
+            for model_data in data:
+                # Here, you can add logic to validate model_data if needed
+                model_id = model_data.get("id")
+
+                if model_id and is_valid_model_id(model_id):
+                    existing_model = Models.get_model_by_id(model_id)
+                    if existing_model:
+                        # Update existing model
+                        model_data["meta"] = model_data.get("meta", {})
+                        model_data["params"] = model_data.get("params", {})
+
+                        updated_model = ModelForm(
+                            **{**existing_model.model_dump(), **model_data}
+                        )
+                        Models.update_model_by_id(model_id, updated_model)
+                    else:
+                        # Insert new model
+                        model_data["meta"] = model_data.get("meta", {})
+                        model_data["params"] = model_data.get("params", {})
+                        new_model = ModelForm(**model_data)
+                        Models.insert_new_model(user_id=user.id, form_data=new_model)
+            return True
+        else:
+            raise HTTPException(status_code=400, detail="Invalid JSON format")
+    except Exception as e:
+        log.exception(e)
+        raise HTTPException(status_code=500, detail=str(e))
+
+
+############################
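Note: For reference, a request body the new /import endpoint above accepts looks roughly like the following; ModelsImportForm only constrains it to a "models" list of dicts, and each id must pass is_valid_model_id. The field values are illustrative, not taken from the source:

payload = {
    "models": [
        {
            "id": "my-custom-model",
            "name": "My Custom Model",
            "meta": {"description": "Imported preset"},
            "params": {"temperature": 0.7},
        }
    ]
}
# Existing ids are merged-and-updated; unknown ids are inserted as new models.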
@@ -111,6 +203,10 @@ async def sync_models(
 ###########################
 
 
+class ModelIdForm(BaseModel):
+    id: str
+
+
 # Note: We're not using the typical url path param here, but instead using a query parameter to allow '/' in the id
 @router.get("/model", response_model=Optional[ModelResponse])
 async def get_model_by_id(id: str, user=Depends(get_verified_user)):
@@ -129,6 +225,39 @@ async def get_model_by_id(id: str, user=Depends(get_verified_user)):
         )
 
 
+###########################
+# GetModelById
+###########################
+
+
+@router.get("/model/profile/image")
+async def get_model_profile_image(id: str, user=Depends(get_verified_user)):
+    model = Models.get_model_by_id(id)
+    if model:
+        if model.meta.profile_image_url:
+            if model.meta.profile_image_url.startswith("http"):
+                return Response(
+                    status_code=status.HTTP_302_FOUND,
+                    headers={"Location": model.meta.profile_image_url},
+                )
+            elif model.meta.profile_image_url.startswith("data:image"):
+                try:
+                    header, base64_data = model.meta.profile_image_url.split(",", 1)
+                    image_data = base64.b64decode(base64_data)
+                    image_buffer = io.BytesIO(image_data)
+
+                    return StreamingResponse(
+                        image_buffer,
+                        media_type="image/png",
+                        headers={"Content-Disposition": "inline; filename=image.png"},
+                    )
+                except Exception as e:
+                    pass
+        return FileResponse(f"{STATIC_DIR}/favicon.png")
+    else:
+        return FileResponse(f"{STATIC_DIR}/favicon.png")
+
+
 ############################
 # ToggleModelById
 ############################
@@ -171,12 +300,10 @@ async def toggle_model_by_id(id: str, user=Depends(get_verified_user)):
 
 @router.post("/model/update", response_model=Optional[ModelModel])
 async def update_model_by_id(
-    id: str,
     form_data: ModelForm,
     user=Depends(get_verified_user),
 ):
-    model = Models.get_model_by_id(id)
-
+    model = Models.get_model_by_id(form_data.id)
     if not model:
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,
@@ -193,7 +320,7 @@ async def update_model_by_id(
             detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
         )
 
-    model = Models.update_model_by_id(id, form_data)
+    model = Models.update_model_by_id(form_data.id, form_data)
     return model
 
 
@@ -202,9 +329,9 @@ async def update_model_by_id(
 ############################
 
 
-@router.delete("/model/delete", response_model=bool)
-async def delete_model_by_id(id: str, user=Depends(get_verified_user)):
-    model = Models.get_model_by_id(id)
+@router.post("/model/delete", response_model=bool)
+async def delete_model_by_id(form_data: ModelIdForm, user=Depends(get_verified_user)):
+    model = Models.get_model_by_id(form_data.id)
     if not model:
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,
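Note: Because the delete route above moved from DELETE with an id query parameter to POST with a ModelIdForm JSON body, callers need a matching change. A hedged client sketch (base URL and token are assumptions, not from the source):

import requests

token = "YOUR_API_KEY"  # assumption: an Open WebUI API key or JWT

# Before: DELETE /api/v1/models/model/delete?id=<model_id>
# After:  POST with a JSON body matching ModelIdForm
r = requests.post(
    "http://localhost:8080/api/v1/models/model/delete",
    headers={"Authorization": f"Bearer {token}"},
    json={"id": "my-custom-model"},
)
print(r.json())  # True on success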
@@ -221,7 +348,7 @@ async def delete_model_by_id(id: str, user=Depends(get_verified_user)):
             detail=ERROR_MESSAGES.UNAUTHORIZED,
         )
 
-    result = Models.delete_model_by_id(id)
+    result = Models.delete_model_by_id(form_data.id)
     return result
@@ -48,7 +48,7 @@ async def get_notes(request: Request, user=Depends(get_verified_user)):
                 "user": UserResponse(**Users.get_user_by_id(note.user_id).model_dump()),
             }
         )
-        for note in Notes.get_notes_by_user_id(user.id, "write")
+        for note in Notes.get_notes_by_permission(user.id, "write")
     ]
 
     return notes
@@ -62,8 +62,9 @@ class NoteTitleIdResponse(BaseModel):
 
 
 @router.get("/list", response_model=list[NoteTitleIdResponse])
-async def get_note_list(request: Request, user=Depends(get_verified_user)):
-
+async def get_note_list(
+    request: Request, page: Optional[int] = None, user=Depends(get_verified_user)
+):
     if user.role != "admin" and not has_permission(
         user.id, "features.notes", request.app.state.config.USER_PERMISSIONS
     ):
@@ -72,9 +73,17 @@ async def get_note_list(request: Request, user=Depends(get_verified_user)):
             detail=ERROR_MESSAGES.UNAUTHORIZED,
         )
 
+    limit = None
+    skip = None
+    if page is not None:
+        limit = 60
+        skip = (page - 1) * limit
+
     notes = [
         NoteTitleIdResponse(**note.model_dump())
-        for note in Notes.get_notes_by_user_id(user.id, "write")
+        for note in Notes.get_notes_by_permission(
+            user.id, "write", skip=skip, limit=limit
+        )
     ]
 
     return notes
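Note: The page handling above is the usual fixed-page-size offset scheme: page is 1-based, the page size is hard-coded to 60, and omitting page leaves both skip and limit as None so the query stays unbounded. In isolation:

def page_to_window(page, page_size=60):
    # page=None -> no pagination; page=1 -> rows 0..59; page=2 -> rows 60..119
    if page is None:
        return None, None
    return (page - 1) * page_size, page_size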
@@ -171,6 +180,18 @@ async def update_note_by_id(
             status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
         )
 
+    # Check if user can share publicly
+    if (
+        user.role != "admin"
+        and form_data.access_control == None
+        and not has_permission(
+            user.id,
+            "sharing.public_notes",
+            request.app.state.config.USER_PERMISSIONS,
+        )
+    ):
+        form_data.access_control = {}
+
     try:
         note = Notes.update_note_by_id(id, form_data)
         await sio.emit(
@@ -340,7 +340,10 @@ def merge_ollama_models_lists(model_lists):
     return list(merged_models.values())
 
 
-@cached(ttl=MODELS_CACHE_TTL)
+@cached(
+    ttl=MODELS_CACHE_TTL,
+    key=lambda _, user: f"ollama_all_models_{user.id}" if user else "ollama_all_models",
+)
 async def get_all_models(request: Request, user: UserModel = None):
     log.info("get_all_models()")
     if request.app.state.config.ENABLE_OLLAMA_API:
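Note: Keying the cached model list per user keeps one user's visible models from being served to another now that results are access-controlled. A dict-based sketch of the same idea, assuming the key callable sees the wrapped function's arguments the way the lambda above does (fetch_all_models is a stand-in, not the real fetch):

import time

_cache = {}  # key -> (expiry_timestamp, value)

def cache_key(user):
    return f"ollama_all_models_{user.id}" if user else "ollama_all_models"

async def fetch_all_models(request, user):
    # stand-in for the real upstream call
    return {"models": []}

async def cached_get_all_models(request, user=None, ttl=60):
    key = cache_key(user)
    hit = _cache.get(key)
    if hit and hit[0] > time.time():
        return hit[1]
    value = await fetch_all_models(request, user)
    _cache[key] = (time.time() + ttl, value)
    return value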
@@ -1017,6 +1020,10 @@ class GenerateEmbedForm(BaseModel):
     options: Optional[dict] = None
     keep_alive: Optional[Union[int, str]] = None
 
+    model_config = ConfigDict(
+        extra="allow",
+    )
+
 
 @router.post("/api/embed")
 @router.post("/api/embed/{url_idx}")
@@ -1691,25 +1698,27 @@ async def download_file_stream(
             yield f'data: {{"progress": {progress}, "completed": {current_size}, "total": {total_size}}}\n\n'
 
             if done:
-                file.seek(0)
-                chunk_size = 1024 * 1024 * 2
-                hashed = calculate_sha256(file, chunk_size)
-                file.seek(0)
-                file.close()
-
-                url = f"{ollama_url}/api/blobs/sha256:{hashed}"
-                response = requests.post(url, data=file)
-
-                if response.ok:
-                    res = {
-                        "done": done,
-                        "blob": f"sha256:{hashed}",
-                        "name": file_name,
-                    }
-                    os.remove(file_path)
-
-                    yield f"data: {json.dumps(res)}\n\n"
-                else:
-                    raise "Ollama: Could not create blob, Please try again."
+                with open(file_path, "rb") as file:
+                    chunk_size = 1024 * 1024 * 2
+                    hashed = calculate_sha256(file, chunk_size)
+
+                    url = f"{ollama_url}/api/blobs/sha256:{hashed}"
+                    with requests.Session() as session:
+                        response = session.post(url, data=file, timeout=30)
+
+                        if response.ok:
+                            res = {
+                                "done": done,
+                                "blob": f"sha256:{hashed}",
+                                "name": file_name,
+                            }
+                            os.remove(file_path)
+
+                            yield f"data: {json.dumps(res)}\n\n"
+                        else:
+                            raise "Ollama: Could not create blob, Please try again."
 
 
 # url = "https://huggingface.co/TheBloke/stablelm-zephyr-3b-GGUF/resolve/main/stablelm-zephyr-3b.Q2_K.gguf"
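Note: The rewritten block reopens the downloaded file and hashes it in 2 MiB chunks before registering the blob, keeping memory flat for multi-gigabyte GGUF files. The chunked hashing on its own, standard library only (calculate_sha256 in the source presumably does something equivalent; this is a sketch, not its actual body):

import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024 * 2):
    # Stream the file through the digest in fixed-size chunks
    # instead of loading it into memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()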
@@ -9,6 +9,8 @@ from aiocache import cached
 import requests
 from urllib.parse import quote
 
+from azure.identity import DefaultAzureCredential, get_bearer_token_provider
+
 from fastapi import Depends, HTTPException, Request, APIRouter
 from fastapi.responses import (
     FileResponse,
@@ -43,6 +45,7 @@ from open_webui.utils.payload import (
 )
 from open_webui.utils.misc import (
     convert_logit_bias_input_to_json,
+    stream_chunks_handler,
 )
 
 from open_webui.utils.auth import get_admin_user, get_verified_user
@@ -119,6 +122,96 @@ def openai_reasoning_model_handler(payload):
     return payload
 
 
+async def get_headers_and_cookies(
+    request: Request,
+    url,
+    key=None,
+    config=None,
+    metadata: Optional[dict] = None,
+    user: UserModel = None,
+):
+    cookies = {}
+    headers = {
+        "Content-Type": "application/json",
+        **(
+            {
+                "HTTP-Referer": "https://openwebui.com/",
+                "X-Title": "Open WebUI",
+            }
+            if "openrouter.ai" in url
+            else {}
+        ),
+        **(
+            {
+                "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
+                "X-OpenWebUI-User-Id": user.id,
+                "X-OpenWebUI-User-Email": user.email,
+                "X-OpenWebUI-User-Role": user.role,
+                **(
+                    {"X-OpenWebUI-Chat-Id": metadata.get("chat_id")}
+                    if metadata and metadata.get("chat_id")
+                    else {}
+                ),
+            }
+            if ENABLE_FORWARD_USER_INFO_HEADERS
+            else {}
+        ),
+    }
+
+    token = None
+    auth_type = config.get("auth_type")
+
+    if auth_type == "bearer" or auth_type is None:
+        # Default to bearer if not specified
+        token = f"{key}"
+    elif auth_type == "none":
+        token = None
+    elif auth_type == "session":
+        cookies = request.cookies
+        token = request.state.token.credentials
+    elif auth_type == "system_oauth":
+        cookies = request.cookies
+
+        oauth_token = None
+        try:
+            if request.cookies.get("oauth_session_id", None):
+                oauth_token = await request.app.state.oauth_manager.get_oauth_token(
+                    user.id,
+                    request.cookies.get("oauth_session_id", None),
+                )
+        except Exception as e:
+            log.error(f"Error getting OAuth token: {e}")
+
+        if oauth_token:
+            token = f"{oauth_token.get('access_token', '')}"
+
+    elif auth_type in ("azure_ad", "microsoft_entra_id"):
+        token = get_microsoft_entra_id_access_token()
+
+    if token:
+        headers["Authorization"] = f"Bearer {token}"
+
+    if config.get("headers") and isinstance(config.get("headers"), dict):
+        headers = {**headers, **config.get("headers")}
+
+    return headers, cookies
+
+
+def get_microsoft_entra_id_access_token():
+    """
+    Get Microsoft Entra ID access token using DefaultAzureCredential for Azure OpenAI.
+    Returns the token string or None if authentication fails.
+    """
+    try:
+        token_provider = get_bearer_token_provider(
+            DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
+        )
+        return token_provider()
+    except Exception as e:
+        log.error(f"Error getting Microsoft Entra ID access token: {e}")
+        return None
+
+
 ##########################################
 #
 # API routes
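Note: With get_headers_and_cookies in place, every outbound call in this router builds auth the same way. A standalone sketch of the auth_type dispatch it implements (return values simplified; the real helper also forwards user-info headers and merges per-connection extra headers):

def resolve_authorization(auth_type, key, session_token=None):
    # bearer (or unset) -> static API key; "none" -> no auth header;
    # "session" / "system_oauth" -> a token derived from the user's session;
    # "azure_ad" / "microsoft_entra_id" -> an Entra ID token (fetched elsewhere).
    if auth_type in (None, "bearer"):
        return {"Authorization": f"Bearer {key}"}
    if auth_type == "none":
        return {}
    if auth_type in ("session", "system_oauth"):
        return {"Authorization": f"Bearer {session_token}"} if session_token else {}
    if auth_type in ("azure_ad", "microsoft_entra_id"):
        return {}  # real code calls get_microsoft_entra_id_access_token() here
    return {}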
@@ -210,34 +303,23 @@ async def speech(request: Request, user=Depends(get_verified_user)):
         return FileResponse(file_path)
 
     url = request.app.state.config.OPENAI_API_BASE_URLS[idx]
+    key = request.app.state.config.OPENAI_API_KEYS[idx]
+    api_config = request.app.state.config.OPENAI_API_CONFIGS.get(
+        str(idx),
+        request.app.state.config.OPENAI_API_CONFIGS.get(url, {}),  # Legacy support
+    )
+
+    headers, cookies = await get_headers_and_cookies(
+        request, url, key, api_config, user=user
+    )
 
     r = None
     try:
         r = requests.post(
             url=f"{url}/audio/speech",
             data=body,
-            headers={
-                "Content-Type": "application/json",
-                "Authorization": f"Bearer {request.app.state.config.OPENAI_API_KEYS[idx]}",
-                **(
-                    {
-                        "HTTP-Referer": "https://openwebui.com/",
-                        "X-Title": "Open WebUI",
-                    }
-                    if "openrouter.ai" in url
-                    else {}
-                ),
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS
-                    else {}
-                ),
-            },
+            headers=headers,
+            cookies=cookies,
             stream=True,
         )
@@ -401,7 +483,10 @@ async def get_filtered_models(models, user):
     return filtered_models
 
 
-@cached(ttl=MODELS_CACHE_TTL)
+@cached(
+    ttl=MODELS_CACHE_TTL,
+    key=lambda _, user: f"openai_all_models_{user.id}" if user else "openai_all_models",
+)
 async def get_all_models(request: Request, user: UserModel) -> dict[str, list]:
     log.info("get_all_models()")
 
@@ -417,50 +502,55 @@ async def get_all_models(request: Request, user: UserModel) -> dict[str, list]:
                 return response
             return None
 
-    def merge_models_lists(model_lists):
+    def is_supported_openai_models(model_id):
+        if any(
+            name in model_id
+            for name in [
+                "babbage",
+                "dall-e",
+                "davinci",
+                "embedding",
+                "tts",
+                "whisper",
+            ]
+        ):
+            return False
+        return True
+
+    def get_merged_models(model_lists):
         log.debug(f"merge_models_lists {model_lists}")
-        merged_list = []
+        models = {}
 
-        for idx, models in enumerate(model_lists):
-            if models is not None and "error" not in models:
-                merged_list.extend(
-                    [
-                        {
-                            **model,
-                            "name": model.get("name", model["id"]),
-                            "owned_by": "openai",
-                            "openai": model,
-                            "connection_type": model.get("connection_type", "external"),
-                            "urlIdx": idx,
-                        }
-                        for model in models
-                        if (model.get("id") or model.get("name"))
-                        and (
-                            "api.openai.com"
-                            not in request.app.state.config.OPENAI_API_BASE_URLS[idx]
-                            or not any(
-                                name in model["id"]
-                                for name in [
-                                    "babbage",
-                                    "dall-e",
-                                    "davinci",
-                                    "embedding",
-                                    "tts",
-                                    "whisper",
-                                ]
-                            )
-                        )
-                    ]
-                )
+        for idx, model_list in enumerate(model_lists):
+            if model_list is not None and "error" not in model_list:
+                for model in model_list:
+                    model_id = model.get("id") or model.get("name")
 
-        return merged_list
+                    if (
+                        "api.openai.com"
+                        in request.app.state.config.OPENAI_API_BASE_URLS[idx]
+                        and not is_supported_openai_models(model_id)
+                    ):
+                        # Skip unwanted OpenAI models
+                        continue
+
+                    if model_id and model_id not in models:
+                        models[model_id] = {
+                            **model,
+                            "name": model.get("name", model_id),
+                            "owned_by": "openai",
+                            "openai": model,
+                            "connection_type": model.get("connection_type", "external"),
+                            "urlIdx": idx,
+                        }
+
+        return models
 
-    models = {"data": merge_models_lists(map(extract_data, responses))}
+    models = get_merged_models(map(extract_data, responses))
     log.debug(f"models: {models}")
 
-    request.app.state.OPENAI_MODELS = {model["id"]: model for model in models["data"]}
-    return models
+    request.app.state.OPENAI_MODELS = models
+    return {"data": list(models.values())}
 
 
 @router.get("/models")
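Note: The refactor above swaps a nested list comprehension for a dict keyed by model id: duplicates across connections are dropped (first connection wins) and request.app.state.OPENAI_MODELS becomes a direct id lookup. The core pattern in isolation:

def merge_by_id(model_lists):
    # First occurrence of an id wins; later connections cannot overwrite it.
    merged = {}
    for idx, model_list in enumerate(model_lists):
        for model in model_list or []:
            model_id = model.get("id") or model.get("name")
            if model_id and model_id not in merged:
                merged[model_id] = {**model, "urlIdx": idx}
    return merged

merged = merge_by_id([[{"id": "gpt-4o"}], [{"id": "gpt-4o"}, {"id": "o3"}]])
assert list(merged) == ["gpt-4o", "o3"] and merged["gpt-4o"]["urlIdx"] == 0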
@@ -489,19 +579,9 @@ async def get_models(
         timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST),
     ) as session:
         try:
-            headers = {
-                "Content-Type": "application/json",
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS
-                    else {}
-                ),
-            }
+            headers, cookies = await get_headers_and_cookies(
+                request, url, key, api_config, user=user
+            )
 
             if api_config.get("azure", False):
                 models = {
@@ -509,11 +589,10 @@ async def get_models(
                     "object": "list",
                 }
             else:
-                headers["Authorization"] = f"Bearer {key}"
-
                 async with session.get(
                     f"{url}/models",
                     headers=headers,
+                    cookies=cookies,
                     ssl=AIOHTTP_CLIENT_SESSION_SSL,
                 ) as r:
                     if r.status != 200:
@@ -572,7 +651,9 @@ class ConnectionVerificationForm(BaseModel):
 
 @router.post("/verify")
 async def verify_connection(
-    form_data: ConnectionVerificationForm, user=Depends(get_admin_user)
+    request: Request,
+    form_data: ConnectionVerificationForm,
+    user=Depends(get_admin_user),
 ):
     url = form_data.url
     key = form_data.key
@@ -584,27 +665,21 @@ async def verify_connection(
         timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST),
     ) as session:
         try:
-            headers = {
-                "Content-Type": "application/json",
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS
-                    else {}
-                ),
-            }
+            headers, cookies = await get_headers_and_cookies(
+                request, url, key, api_config, user=user
+            )
 
             if api_config.get("azure", False):
-                headers["api-key"] = key
-                api_version = api_config.get("api_version", "") or "2023-03-15-preview"
+                # Only set api-key header if not using Azure Entra ID authentication
+                auth_type = api_config.get("auth_type", "bearer")
+                if auth_type not in ("azure_ad", "microsoft_entra_id"):
+                    headers["api-key"] = key
+
+                api_version = api_config.get("api_version", "") or "2023-03-15-preview"
                 async with session.get(
                     url=f"{url}/openai/models?api-version={api_version}",
                     headers=headers,
+                    cookies=cookies,
                     ssl=AIOHTTP_CLIENT_SESSION_SSL,
                 ) as r:
                     try:
@@ -624,11 +699,10 @@ async def verify_connection(
 
                     return response_data
             else:
-                headers["Authorization"] = f"Bearer {key}"
-
                 async with session.get(
                     f"{url}/models",
                     headers=headers,
+                    cookies=cookies,
                     ssl=AIOHTTP_CLIENT_SESSION_SSL,
                 ) as r:
                     try:
@@ -689,6 +763,7 @@ def get_azure_allowed_params(api_version: str) -> set[str]:
         "response_format",
         "seed",
         "max_completion_tokens",
+        "reasoning_effort",
     }
 
     try:
@@ -836,42 +911,23 @@ async def generate_chat_completion(
             convert_logit_bias_input_to_json(payload["logit_bias"])
         )
 
-    headers = {
-        "Content-Type": "application/json",
-        **(
-            {
-                "HTTP-Referer": "https://openwebui.com/",
-                "X-Title": "Open WebUI",
-            }
-            if "openrouter.ai" in url
-            else {}
-        ),
-        **(
-            {
-                "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                "X-OpenWebUI-User-Id": user.id,
-                "X-OpenWebUI-User-Email": user.email,
-                "X-OpenWebUI-User-Role": user.role,
-                **(
-                    {"X-OpenWebUI-Chat-Id": metadata.get("chat_id")}
-                    if metadata and metadata.get("chat_id")
-                    else {}
-                ),
-            }
-            if ENABLE_FORWARD_USER_INFO_HEADERS
-            else {}
-        ),
-    }
+    headers, cookies = await get_headers_and_cookies(
+        request, url, key, api_config, metadata, user=user
+    )
 
     if api_config.get("azure", False):
         api_version = api_config.get("api_version", "2023-03-15-preview")
         request_url, payload = convert_to_azure_payload(url, payload, api_version)
-        headers["api-key"] = key
+
+        # Only set api-key header if not using Azure Entra ID authentication
+        auth_type = api_config.get("auth_type", "bearer")
+        if auth_type not in ("azure_ad", "microsoft_entra_id"):
+            headers["api-key"] = key
+
         headers["api-version"] = api_version
         request_url = f"{request_url}/chat/completions?api-version={api_version}"
     else:
         request_url = f"{url}/chat/completions"
-        headers["Authorization"] = f"Bearer {key}"
 
     payload = json.dumps(payload)
 
@@ -890,6 +946,7 @@ async def generate_chat_completion(
             url=request_url,
             data=payload,
             headers=headers,
+            cookies=cookies,
             ssl=AIOHTTP_CLIENT_SESSION_SSL,
         )
 
@@ -897,7 +954,7 @@ async def generate_chat_completion(
         if "text/event-stream" in r.headers.get("Content-Type", ""):
             streaming = True
             return StreamingResponse(
-                r.content,
+                stream_chunks_handler(r.content),
                 status_code=r.status,
                 headers=dict(r.headers),
                 background=BackgroundTask(
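Note: stream_chunks_handler (imported from open_webui.utils.misc earlier in this diff) now wraps the upstream byte stream before it reaches StreamingResponse. Its internals are not shown in this diff; the general shape of such a wrapper is an async generator that can buffer or re-split chunks on the way through, e.g.:

# Illustrative only; the real stream_chunks_handler may differ.
async def passthrough_chunks(content):
    async for chunk in content:
        # inspect, buffer, or re-split SSE chunks here as needed
        yield chunk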
@@ -951,31 +1008,29 @@ async def embeddings(request: Request, form_data: dict, user):
     models = request.app.state.OPENAI_MODELS
     if model_id in models:
         idx = models[model_id]["urlIdx"]
 
     url = request.app.state.config.OPENAI_API_BASE_URLS[idx]
     key = request.app.state.config.OPENAI_API_KEYS[idx]
+    api_config = request.app.state.config.OPENAI_API_CONFIGS.get(
+        str(idx),
+        request.app.state.config.OPENAI_API_CONFIGS.get(url, {}),  # Legacy support
+    )
 
     r = None
     session = None
     streaming = False
 
+    headers, cookies = await get_headers_and_cookies(
+        request, url, key, api_config, user=user
+    )
     try:
         session = aiohttp.ClientSession(trust_env=True)
         r = await session.request(
             method="POST",
             url=f"{url}/embeddings",
             data=body,
-            headers={
-                "Authorization": f"Bearer {key}",
-                "Content-Type": "application/json",
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
+            headers=headers,
+            cookies=cookies,
         )
 
         if "text/event-stream" in r.headers.get("Content-Type", ""):
@@ -1037,23 +1092,18 @@ async def proxy(path: str, request: Request, user=Depends(get_verified_user)):
     streaming = False
 
     try:
-        headers = {
-            "Content-Type": "application/json",
-            **(
-                {
-                    "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                    "X-OpenWebUI-User-Id": user.id,
-                    "X-OpenWebUI-User-Email": user.email,
-                    "X-OpenWebUI-User-Role": user.role,
-                }
-                if ENABLE_FORWARD_USER_INFO_HEADERS
-                else {}
-            ),
-        }
+        headers, cookies = await get_headers_and_cookies(
+            request, url, key, api_config, user=user
+        )
 
         if api_config.get("azure", False):
             api_version = api_config.get("api_version", "2023-03-15-preview")
-            headers["api-key"] = key
+
+            # Only set api-key header if not using Azure Entra ID authentication
+            auth_type = api_config.get("auth_type", "bearer")
+            if auth_type not in ("azure_ad", "microsoft_entra_id"):
+                headers["api-key"] = key
+
             headers["api-version"] = api_version
 
             payload = json.loads(body)
@@ -1062,7 +1112,6 @@ async def proxy(path: str, request: Request, user=Depends(get_verified_user)):
 
             request_url = f"{url}/{path}?api-version={api_version}"
         else:
-            headers["Authorization"] = f"Bearer {key}"
             request_url = f"{url}/{path}"
 
         session = aiohttp.ClientSession(trust_env=True)
@@ -1071,6 +1120,7 @@ async def proxy(path: str, request: Request, user=Depends(get_verified_user)):
             url=request_url,
             data=body,
             headers=headers,
+            cookies=cookies,
             ssl=AIOHTTP_CLIENT_SESSION_SSL,
         )
 
@@ -48,8 +48,15 @@ async def get_prompt_list(user=Depends(get_verified_user)):
 async def create_new_prompt(
     request: Request, form_data: PromptForm, user=Depends(get_verified_user)
 ):
-    if user.role != "admin" and not has_permission(
-        user.id, "workspace.prompts", request.app.state.config.USER_PERMISSIONS
+    if user.role != "admin" and not (
+        has_permission(
+            user.id, "workspace.prompts", request.app.state.config.USER_PERMISSIONS
+        )
+        or has_permission(
+            user.id,
+            "workspace.prompts_import",
+            request.app.state.config.USER_PERMISSIONS,
+        )
     ):
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,
@@ -256,15 +256,16 @@ def get_scim_auth(
         )
 
     # Check if SCIM is enabled
-    scim_enabled = getattr(request.app.state, "SCIM_ENABLED", False)
+    enable_scim = getattr(request.app.state, "ENABLE_SCIM", False)
     log.info(
-        f"SCIM auth check - raw SCIM_ENABLED: {scim_enabled}, type: {type(scim_enabled)}"
+        f"SCIM auth check - raw ENABLE_SCIM: {enable_scim}, type: {type(enable_scim)}"
     )
 
     # Handle both PersistentConfig and direct value
-    if hasattr(scim_enabled, "value"):
-        scim_enabled = scim_enabled.value
-    log.info(f"SCIM enabled status after conversion: {scim_enabled}")
-    if not scim_enabled:
+    if hasattr(enable_scim, "value"):
+        enable_scim = enable_scim.value
+
+    if not enable_scim:
         raise HTTPException(
             status_code=status.HTTP_403_FORBIDDEN,
             detail="SCIM is not enabled",
@@ -348,8 +349,10 @@ def user_to_scim(user: UserModel, request: Request) -> SCIMUser:
 
 def group_to_scim(group: GroupModel, request: Request) -> SCIMGroup:
     """Convert internal Group model to SCIM Group"""
+    member_ids = Groups.get_group_user_ids_by_id(group.id)
     members = []
-    for user_id in group.user_ids:
+
+    for user_id in member_ids:
         user = Users.get_user_by_id(user_id)
         if user:
             members.append(
@@ -795,9 +798,11 @@ async def create_group(
     update_form = GroupUpdateForm(
         name=new_group.name,
         description=new_group.description,
-        user_ids=member_ids,
     )
 
     Groups.update_group_by_id(new_group.id, update_form)
+    Groups.set_group_user_ids_by_id(new_group.id, member_ids)
+
+    new_group = Groups.get_group_by_id(new_group.id)
 
     return group_to_scim(new_group, request)
@@ -829,7 +834,7 @@ async def update_group(
     # Handle members if provided
     if group_data.members is not None:
         member_ids = [member.value for member in group_data.members]
-        update_form.user_ids = member_ids
+        Groups.set_group_user_ids_by_id(group_id, member_ids)
 
     # Update group
     updated_group = Groups.update_group_by_id(group_id, update_form)
@@ -862,7 +867,6 @@ async def patch_group(
     update_form = GroupUpdateForm(
         name=group.name,
         description=group.description,
-        user_ids=group.user_ids.copy() if group.user_ids else [],
     )
 
     for operation in patch_data.Operations:
@@ -875,21 +879,22 @@ async def patch_group(
                 update_form.name = value
             elif path == "members":
                 # Replace all members
-                update_form.user_ids = [member["value"] for member in value]
+                Groups.set_group_user_ids_by_id(
+                    group_id, [member["value"] for member in value]
+                )
+
         elif op == "add":
             if path == "members":
                 # Add members
                 if isinstance(value, list):
                     for member in value:
                         if isinstance(member, dict) and "value" in member:
-                            if member["value"] not in update_form.user_ids:
-                                update_form.user_ids.append(member["value"])
+                            Groups.add_users_to_group(group_id, [member["value"]])
         elif op == "remove":
             if path and path.startswith("members[value eq"):
                 # Remove specific member
                 member_id = path.split('"')[1]
-                if member_id in update_form.user_ids:
-                    update_form.user_ids.remove(member_id)
+                Groups.remove_users_from_group(group_id, [member_id])
 
     # Update group
     updated_group = Groups.update_group_by_id(group_id, update_form)
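Note: The branches above map directly onto SCIM 2.0 PatchOp requests, with membership now written through the Groups helpers instead of a user_ids field on the update form. A request body exercising all three branches would look roughly like this (ids illustrative):

patch_body = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {"op": "replace", "path": "displayName", "value": "Engineering"},
        {"op": "add", "path": "members", "value": [{"value": "user-123"}]},
        {"op": "remove", "path": 'members[value eq "user-456"]'},
    ],
}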
@@ -33,6 +33,7 @@ from open_webui.config import (
     DEFAULT_AUTOCOMPLETE_GENERATION_PROMPT_TEMPLATE,
     DEFAULT_EMOJI_GENERATION_PROMPT_TEMPLATE,
     DEFAULT_MOA_GENERATION_PROMPT_TEMPLATE,
+    DEFAULT_VOICE_MODE_PROMPT_TEMPLATE,
 )
 from open_webui.env import SRC_LOG_LEVELS
 
@@ -69,6 +70,7 @@ async def get_task_config(request: Request, user=Depends(get_verified_user)):
         "QUERY_GENERATION_PROMPT_TEMPLATE": request.app.state.config.QUERY_GENERATION_PROMPT_TEMPLATE,
         "TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE": request.app.state.config.TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE,
         "TRANSLATION_LANGUAGES": request.app.state.config.TRANSLATION_LANGUAGES,
+        "VOICE_MODE_PROMPT_TEMPLATE": request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE,
     }
 
 
@@ -89,6 +91,7 @@ class TaskConfigForm(BaseModel):
     QUERY_GENERATION_PROMPT_TEMPLATE: str
     TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE: str
     TRANSLATION_LANGUAGES: Optional[List[str]] = []
+    VOICE_MODE_PROMPT_TEMPLATE: Optional[str]
 
 
 @router.post("/config/update")
@@ -142,6 +145,10 @@ async def update_task_config(
         form_data.TRANSLATION_LANGUAGES
     )
 
+    request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE = (
+        form_data.VOICE_MODE_PROMPT_TEMPLATE
+    )
+
     return {
         "TASK_MODEL": request.app.state.config.TASK_MODEL,
         "TASK_MODEL_EXTERNAL": request.app.state.config.TASK_MODEL_EXTERNAL,
@@ -159,6 +166,7 @@ async def update_task_config(
         "QUERY_GENERATION_PROMPT_TEMPLATE": request.app.state.config.QUERY_GENERATION_PROMPT_TEMPLATE,
         "TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE": request.app.state.config.TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE,
         "TRANSLATION_LANGUAGES": request.app.state.config.TRANSLATION_LANGUAGES,
+        "VOICE_MODE_PROMPT_TEMPLATE": request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE,
     }
@@ -4,10 +4,12 @@ from typing import Optional
 import time
 import re
 import aiohttp
+from open_webui.models.groups import Groups
 from pydantic import BaseModel, HttpUrl
 from fastapi import APIRouter, Depends, HTTPException, Request, status
 
 
+from open_webui.models.oauth_sessions import OAuthSessions
 from open_webui.models.tools import (
     ToolForm,
     ToolModel,
@@ -15,7 +17,11 @@ from open_webui.models.tools import (
     ToolUserResponse,
     Tools,
 )
-from open_webui.utils.plugin import load_tool_module_by_id, replace_imports
+from open_webui.utils.plugin import (
+    load_tool_module_by_id,
+    replace_imports,
+    get_tool_module_from_cache,
+)
 from open_webui.utils.tools import get_tool_specs
 from open_webui.utils.auth import get_admin_user, get_verified_user
 from open_webui.utils.access_control import has_access, has_permission
@@ -33,6 +39,14 @@ log.setLevel(SRC_LOG_LEVELS["MAIN"])
 router = APIRouter()
 
 
+def get_tool_module(request, tool_id, load_from_db=True):
+    """
+    Get the tool module by its ID.
+    """
+    tool_module, _ = get_tool_module_from_cache(request, tool_id, load_from_db)
+    return tool_module
+
+
 ############################
 # GetTools
 ############################
@@ -40,8 +54,21 @@ router = APIRouter()
 
 @router.get("/", response_model=list[ToolUserResponse])
 async def get_tools(request: Request, user=Depends(get_verified_user)):
-    tools = Tools.get_tools()
+    tools = []
+
+    # Local Tools
+    for tool in Tools.get_tools():
+        tool_module = get_tool_module(request, tool.id)
+        tools.append(
+            ToolUserResponse(
+                **{
+                    **tool.model_dump(),
+                    "has_user_valves": hasattr(tool_module, "UserValves"),
+                }
+            )
+        )
 
+    # OpenAPI Tool Servers
     for server in await get_tool_servers(request):
         tools.append(
             ToolUserResponse(
@@ -67,15 +94,60 @@ async def get_tools(request: Request, user=Depends(get_verified_user)):
             )
         )
 
+    # MCP Tool Servers
+    for server in request.app.state.config.TOOL_SERVER_CONNECTIONS:
+        if server.get("type", "openapi") == "mcp":
+            server_id = server.get("info", {}).get("id")
+            auth_type = server.get("auth_type", "none")
+
+            session_token = None
+            if auth_type == "oauth_2.1":
+                splits = server_id.split(":")
+                server_id = splits[-1] if len(splits) > 1 else server_id
+
+                session_token = (
+                    await request.app.state.oauth_client_manager.get_oauth_token(
+                        user.id, f"mcp:{server_id}"
+                    )
+                )
+
+            tools.append(
+                ToolUserResponse(
+                    **{
+                        "id": f"server:mcp:{server.get('info', {}).get('id')}",
+                        "user_id": f"server:mcp:{server.get('info', {}).get('id')}",
+                        "name": server.get("info", {}).get("name", "MCP Tool Server"),
+                        "meta": {
+                            "description": server.get("info", {}).get(
+                                "description", ""
+                            ),
+                        },
+                        "access_control": server.get("config", {}).get(
+                            "access_control", None
+                        ),
+                        "updated_at": int(time.time()),
+                        "created_at": int(time.time()),
+                        **(
+                            {
+                                "authenticated": session_token is not None,
+                            }
+                            if auth_type == "oauth_2.1"
+                            else {}
+                        ),
+                    }
+                )
+            )
+
     if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
         # Admin can see all tools
         return tools
     else:
+        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user.id)}
         tools = [
             tool
             for tool in tools
             if tool.user_id == user.id
-            or has_access(user.id, "read", tool.access_control)
+            or has_access(user.id, "read", tool.access_control, user_group_ids)
         ]
         return tools
 
@@ -175,9 +247,19 @@ async def load_tool_from_url(
 
 
 @router.get("/export", response_model=list[ToolModel])
-async def export_tools(user=Depends(get_admin_user)):
-    tools = Tools.get_tools()
-    return tools
+async def export_tools(request: Request, user=Depends(get_verified_user)):
+    if user.role != "admin" and not has_permission(
+        user.id, "workspace.tools_export", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
+        return Tools.get_tools()
+    else:
+        return Tools.get_tools_by_user_id(user.id, "read")
 
 
 ############################
@@ -191,8 +273,13 @@ async def create_new_tools(
     form_data: ToolForm,
     user=Depends(get_verified_user),
 ):
-    if user.role != "admin" and not has_permission(
-        user.id, "workspace.tools", request.app.state.config.USER_PERMISSIONS
+    if user.role != "admin" and not (
+        has_permission(
+            user.id, "workspace.tools", request.app.state.config.USER_PERMISSIONS
+        )
+        or has_permission(
+            user.id, "workspace.tools_import", request.app.state.config.USER_PERMISSIONS
+        )
     ):
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,
@@ -460,8 +547,9 @@ async def update_tools_valves_by_id(
     try:
         form_data = {k: v for k, v in form_data.items() if v is not None}
         valves = Valves(**form_data)
-        Tools.update_tool_valves_by_id(id, valves.model_dump())
-        return valves.model_dump()
+        valves_dict = valves.model_dump(exclude_unset=True)
+        Tools.update_tool_valves_by_id(id, valves_dict)
+        return valves_dict
     except Exception as e:
         log.exception(f"Failed to update tool valves by id {id}: {e}")
         raise HTTPException(
@@ -536,10 +624,11 @@ async def update_tools_user_valves_by_id(
         try:
             form_data = {k: v for k, v in form_data.items() if v is not None}
             user_valves = UserValves(**form_data)
+            user_valves_dict = user_valves.model_dump(exclude_unset=True)
             Tools.update_user_valves_by_id_and_user_id(
-                id, user.id, user_valves.model_dump()
+                id, user.id, user_valves_dict
             )
-            return user_valves.model_dump()
+            return user_valves_dict
         except Exception as e:
             log.exception(f"Failed to update user valves by id {id}: {e}")
             raise HTTPException(
@@ -10,12 +10,16 @@ from pydantic import BaseModel
 
 
 from open_webui.models.auths import Auths
+from open_webui.models.oauth_sessions import OAuthSessions
 
+from open_webui.models.groups import Groups
 from open_webui.models.chats import Chats
 from open_webui.models.users import (
     UserModel,
+    UserGroupIdsModel,
     UserListResponse,
     UserInfoListResponse,
+    UserIdNameListResponse,
     UserRoleUpdateForm,
     Users,
     UserSettings,
@@ -88,7 +92,25 @@ async def get_users(
     if direction:
         filter["direction"] = direction
 
-    return Users.get_users(filter=filter, skip=skip, limit=limit)
+    result = Users.get_users(filter=filter, skip=skip, limit=limit)
+
+    users = result["users"]
+    total = result["total"]
+
+    return {
+        "users": [
+            UserGroupIdsModel(
+                **{
+                    **user.model_dump(),
+                    "group_ids": [
+                        group.id for group in Groups.get_groups_by_member_id(user.id)
+                    ],
+                }
+            )
+            for user in users
+        ],
+        "total": total,
+    }
 
 
 @router.get("/all", response_model=UserInfoListResponse)
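Note: The handler above now wraps the raw Users.get_users result and attaches each user's group ids before returning. The response shape is therefore roughly the following (field names on the user objects are illustrative; group_ids is the addition):

{
    "users": [
        {
            "id": "u-1",
            "name": "Ada",
            "role": "user",
            "group_ids": ["g-engineering"],
        }
    ],
    "total": 1,
}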
@@ -98,6 +120,23 @@ async def get_all_users(
     return Users.get_users()
 
 
+@router.get("/search", response_model=UserIdNameListResponse)
+async def search_users(
+    query: Optional[str] = None,
+    user=Depends(get_verified_user),
+):
+    limit = PAGE_ITEM_COUNT
+
+    page = 1  # Always return the first page for search
+    skip = (page - 1) * limit
+
+    filter = {}
+    if query:
+        filter["query"] = query
+
+    return Users.get_users(filter=filter, skip=skip, limit=limit)
+
+
 ############################
 # User Groups
 ############################
@@ -130,6 +169,12 @@ class WorkspacePermissions(BaseModel):
     knowledge: bool = False
     prompts: bool = False
    tools: bool = False
+    models_import: bool = False
+    models_export: bool = False
+    prompts_import: bool = False
+    prompts_export: bool = False
+    tools_import: bool = False
+    tools_export: bool = False
 
 
 class SharingPermissions(BaseModel):
@@ -137,6 +182,7 @@ class SharingPermissions(BaseModel):
     public_knowledge: bool = True
     public_prompts: bool = True
     public_tools: bool = True
+    public_notes: bool = True
 
 
 class ChatPermissions(BaseModel):
@@ -162,6 +208,7 @@ class ChatPermissions(BaseModel):
 
 
 class FeaturesPermissions(BaseModel):
+    api_keys: bool = False
     direct_tool_servers: bool = False
     web_search: bool = True
     image_generation: bool = True
@@ -340,6 +387,18 @@ async def get_user_by_id(user_id: str, user=Depends(get_verified_user)):
         )
 
 
+@router.get("/{user_id}/oauth/sessions")
+async def get_user_oauth_sessions_by_id(user_id: str, user=Depends(get_admin_user)):
+    sessions = OAuthSessions.get_sessions_by_user_id(user_id)
+    if sessions and len(sessions) > 0:
+        return sessions
+    else:
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail=ERROR_MESSAGES.USER_NOT_FOUND,
+        )
+
+
 ############################
 # GetUserProfileImageById
 ############################
@@ -124,12 +124,3 @@ async def download_db(user=Depends(get_admin_user)):
         media_type="application/octet-stream",
         filename="webui.db",
     )
-
-
-@router.get("/litellm/config")
-async def download_litellm_config_yaml(user=Depends(get_admin_user)):
-    return FileResponse(
-        f"{DATA_DIR}/litellm/config.yaml",
-        media_type="application/octet-stream",
-        filename="config.yaml",
-    )
@@ -18,7 +18,12 @@ from open_webui.utils.redis import (
     get_sentinel_url_from_env,
 )
 
+from open_webui.config import (
+    CORS_ALLOW_ORIGIN,
+)
+
 from open_webui.env import (
     VERSION,
     ENABLE_WEBSOCKET_SUPPORT,
     WEBSOCKET_MANAGER,
     WEBSOCKET_REDIS_URL,
@@ -27,6 +32,11 @@ from open_webui.env import (
     WEBSOCKET_SENTINEL_PORT,
     WEBSOCKET_SENTINEL_HOSTS,
     REDIS_KEY_PREFIX,
+    WEBSOCKET_REDIS_OPTIONS,
+    WEBSOCKET_SERVER_PING_TIMEOUT,
+    WEBSOCKET_SERVER_PING_INTERVAL,
+    WEBSOCKET_SERVER_LOGGING,
+    WEBSOCKET_SERVER_ENGINEIO_LOGGING,
 )
 from open_webui.utils.auth import decode_token
 from open_webui.socket.utils import RedisDict, RedisLock, YdocManager
@@ -48,30 +58,44 @@ log.setLevel(SRC_LOG_LEVELS["SOCKET"])
 
 REDIS = None
 
+# Configure CORS for Socket.IO
+SOCKETIO_CORS_ORIGINS = "*" if CORS_ALLOW_ORIGIN == ["*"] else CORS_ALLOW_ORIGIN
+
 if WEBSOCKET_MANAGER == "redis":
     if WEBSOCKET_SENTINEL_HOSTS:
         mgr = socketio.AsyncRedisManager(
             get_sentinel_url_from_env(
                 WEBSOCKET_REDIS_URL, WEBSOCKET_SENTINEL_HOSTS, WEBSOCKET_SENTINEL_PORT
-            )
+            ),
+            redis_options=WEBSOCKET_REDIS_OPTIONS,
         )
     else:
-        mgr = socketio.AsyncRedisManager(WEBSOCKET_REDIS_URL)
+        mgr = socketio.AsyncRedisManager(
+            WEBSOCKET_REDIS_URL, redis_options=WEBSOCKET_REDIS_OPTIONS
+        )
     sio = socketio.AsyncServer(
-        cors_allowed_origins=[],
+        cors_allowed_origins=SOCKETIO_CORS_ORIGINS,
         async_mode="asgi",
         transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]),
         allow_upgrades=ENABLE_WEBSOCKET_SUPPORT,
         always_connect=True,
         client_manager=mgr,
+        logger=WEBSOCKET_SERVER_LOGGING,
+        ping_interval=WEBSOCKET_SERVER_PING_INTERVAL,
+        ping_timeout=WEBSOCKET_SERVER_PING_TIMEOUT,
+        engineio_logger=WEBSOCKET_SERVER_ENGINEIO_LOGGING,
     )
 else:
     sio = socketio.AsyncServer(
-        cors_allowed_origins=[],
+        cors_allowed_origins=SOCKETIO_CORS_ORIGINS,
        async_mode="asgi",
         transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]),
         allow_upgrades=ENABLE_WEBSOCKET_SUPPORT,
         always_connect=True,
+        logger=WEBSOCKET_SERVER_LOGGING,
+        ping_interval=WEBSOCKET_SERVER_PING_INTERVAL,
+        ping_timeout=WEBSOCKET_SERVER_PING_TIMEOUT,
+        engineio_logger=WEBSOCKET_SERVER_ENGINEIO_LOGGING,
     )
 
 
@@ -274,6 +298,8 @@ async def connect(sid, environ, auth):
         else:
             USER_POOL[user.id] = [sid]
 
+        await sio.enter_room(sid, f"user:{user.id}")
+
 
 @sio.on("user-join")
 async def user_join(sid, data):
@@ -296,6 +322,7 @@ async def user_join(sid, data):
     else:
         USER_POOL[user.id] = [sid]
 
+    await sio.enter_room(sid, f"user:{user.id}")
     # Join all the channels
     channels = Channels.get_channels_by_user_id(user.id)
     log.debug(f"{channels=}")
@@ -356,7 +383,7 @@ async def join_note(sid, data):
     await sio.enter_room(sid, f"note:{note.id}")
 
 
-@sio.on("channel-events")
+@sio.on("events:channel")
 async def channel_events(sid, data):
     room = f"channel:{data['channel_id']}"
     participants = sio.manager.get_participants(
@@ -373,7 +400,7 @@ async def channel_events(sid, data):
 
     if event_type == "typing":
         await sio.emit(
-            "channel-events",
+            "events:channel",
             {
                 "channel_id": data["channel_id"],
                 "message_id": data.get("message_id", None),
@@ -641,34 +668,24 @@ async def disconnect(sid):
 def get_event_emitter(request_info, update_db=True):
     async def __event_emitter__(event_data):
-        session_ids = list(
-            set(
-                USER_POOL.get(user_id, [])
-                + (
-                    [request_info.get("session_id")]
-                    if request_info.get("session_id")
-                    else []
-                )
-            )
-        )
-
-        emit_tasks = [
-            sio.emit(
-                "chat-events",
-                {
-                    "chat_id": request_info.get("chat_id", None),
-                    "message_id": request_info.get("message_id", None),
-                    "data": event_data,
-                },
-                to=session_id,
-            )
-            for session_id in session_ids
-        ]
-
-        await asyncio.gather(*emit_tasks)
-
-        if update_db:
+        user_id = request_info["user_id"]
+        chat_id = request_info["chat_id"]
+        message_id = request_info["message_id"]
+
+        await sio.emit(
+            "events",
+            {
+                "chat_id": chat_id,
+                "message_id": message_id,
+                "data": event_data,
+            },
+            room=f"user:{user_id}",
+        )
+        if (
+            update_db
+            and message_id
+            and not request_info.get("chat_id", "").startswith("local:")
+        ):
             if "type" in event_data and event_data["type"] == "status":
                 Chats.add_message_status_to_chat_by_id_and_message_id(
                     request_info["chat_id"],
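Note: Emitting to the user:{user_id} room, which connect and user-join now enter (see the hunks above), replaces the manual fan-out over USER_POOL session ids; Socket.IO's room bookkeeping then handles multi-tab delivery and disconnects. The pattern in miniature with python-socketio (auth resolution stubbed out):

import socketio

sio = socketio.AsyncServer(async_mode="asgi")

@sio.event
async def connect(sid, environ):
    user_id = "user-123"  # assumption: resolved from auth in the real handler
    await sio.enter_room(sid, f"user:{user_id}")

async def notify(user_id, payload):
    # One emit reaches every connected session (tab, device) of this user.
    await sio.emit("events", payload, room=f"user:{user_id}")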
@@ -705,6 +722,23 @@ def get_event_emitter(request_info, update_db=True):
                     },
                 )
 
+            if "type" in event_data and event_data["type"] == "embeds":
+                message = Chats.get_message_by_id_and_message_id(
+                    request_info["chat_id"],
+                    request_info["message_id"],
+                )
+
+                embeds = event_data.get("data", {}).get("embeds", [])
+                embeds.extend(message.get("embeds", []))
+
+                Chats.upsert_message_to_chat_by_id_and_message_id(
+                    request_info["chat_id"],
+                    request_info["message_id"],
+                    {
+                        "embeds": embeds,
+                    },
+                )
+
             if "type" in event_data and event_data["type"] == "files":
                 message = Chats.get_message_by_id_and_message_id(
                     request_info["chat_id"],
@@ -741,13 +775,20 @@ def get_event_emitter(request_info, update_db=True):
                     },
                 )
 
-    return __event_emitter__
+    if (
+        "user_id" in request_info
+        and "chat_id" in request_info
+        and "message_id" in request_info
+    ):
+        return __event_emitter__
+    else:
+        return None
 
 
 def get_event_call(request_info):
     async def __event_caller__(event_data):
         response = await sio.call(
-            "chat-events",
+            "events",
             {
                 "chat_id": request_info.get("chat_id", None),
                 "message_id": request_info.get("message_id", None),
@@ -757,7 +798,14 @@ def get_event_call(request_info):
         )
         return response
 
-    return __event_caller__
+    if (
+        "session_id" in request_info
+        and "chat_id" in request_info
+        and "message_id" in request_info
+    ):
+        return __event_caller__
+    else:
+        return None
 
 
 get_event_caller = get_event_call
(Seven static image assets deleted in this diff; only their sizes survive extraction: 1.6 KiB, 3.7 KiB, 16 KiB, 15 KiB, 10 KiB, 14 KiB, and 5.2 KiB.)
@@ -1,21 +0,0 @@
-{
-    "name": "Open WebUI",
-    "short_name": "WebUI",
-    "icons": [
-        {
-            "src": "/static/web-app-manifest-192x192.png",
-            "sizes": "192x192",
-            "type": "image/png",
-            "purpose": "maskable"
-        },
-        {
-            "src": "/static/web-app-manifest-512x512.png",
-            "sizes": "512x512",
-            "type": "image/png",
-            "purpose": "maskable"
-        }
-    ],
-    "theme_color": "#ffffff",
-    "background_color": "#ffffff",
-    "display": "standalone"
-}
(Two further static image assets deleted: 5.3 KiB and 5.1 KiB.)
@@ -1 +0,0 @@
-Name,Email,Password,Role