This refactors the model import functionality to improve performance and user experience by centralizing the logic on the backend.
Previously, the frontend would parse an imported JSON file and send an individual API request for each model, which was slow and inefficient.
This change introduces a new backend endpoint, `/api/v1/models/import`, that accepts a list of model objects. The frontend now reads the selected JSON file, parses it, and sends the entire payload to the backend in a single request. The backend then processes this list, creating or updating models as necessary.
This commit also includes the following fixes:
- Handles cases where the imported JSON contains models without `meta` or `params` fields by providing default empty values (see the sketch below).
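A minimal sketch of what the new endpoint could look like, assuming a FastAPI router and the existing model table helpers; the form class, helper names, and import paths below are illustrative rather than the exact implementation:

```python
from fastapi import APIRouter, Depends
from pydantic import BaseModel

from open_webui.models.models import Models, ModelForm   # assumed module path
from open_webui.utils.auth import get_admin_user          # assumed module path

router = APIRouter()


class ModelsImportForm(BaseModel):
    models: list[dict]


@router.post("/import", response_model=bool)
async def import_models(form_data: ModelsImportForm, user=Depends(get_admin_user)):
    for model in form_data.models:
        # Provide default empty values when meta/params are missing from the JSON
        model.setdefault("meta", {})
        model.setdefault("params", {})

        # Create or update, depending on whether the model id already exists
        if Models.get_model_by_id(model["id"]):
            Models.update_model_by_id(model["id"], ModelForm(**model))
        else:
            Models.insert_new_model(ModelForm(**model), user.id)
    return True
```

With this shape, the frontend only needs to parse the file and POST the resulting list once, regardless of how many models it contains.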
The previous implementation for unarchiving all chats in `ArchivedChatsModal.svelte` was inefficient: it sent a separate request for each chat, which could overload the server.
This commit introduces a new backend endpoint, `/chats/unarchive/all`, to handle the bulk unarchiving of all chats for a user with a single API call.
The frontend has been updated to use this new endpoint, resolving the performance issue by minimizing the number of requests to the server.
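A rough sketch of the new route, assuming the usual FastAPI router and a bulk helper on the chats table; the helper name `unarchive_all_chats_by_user_id` is an assumption for illustration:

```python
from fastapi import APIRouter, Depends

from open_webui.models.chats import Chats             # assumed module path
from open_webui.utils.auth import get_verified_user   # assumed module path

router = APIRouter()


@router.post("/unarchive/all", response_model=bool)
async def unarchive_all_chats(user=Depends(get_verified_user)):
    # One request clears the archived flag on every chat owned by the user,
    # instead of one request per chat from the modal.
    return Chats.unarchive_all_chats_by_user_id(user.id)
```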
This commit introduces a new permission toggle that allows administrators to control whether users can publicly share their notes.
- Adds a new environment variable `USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING` to control the default setting.
- Adds a `public_notes` permission to the `sharing` section of the user permissions.
- Adds a toggle switch to the admin panel for managing this permission.
- Implements backend logic to enforce the permission when a user attempts to share a note publicly, as sketched below.
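A hedged sketch of the enforcement side, assuming the existing `has_permission` helper and a `sharing.public_notes` permission key; the function name and error message are illustrative:

```python
from fastapi import HTTPException, Request, status

from open_webui.utils.access_control import has_permission  # assumed module path


def ensure_public_note_sharing_allowed(request: Request, user) -> None:
    """Raise 403 unless the user's permissions allow sharing notes publicly (admins bypass)."""
    if user.role == "admin":
        return
    if not has_permission(
        user.id, "sharing.public_notes", request.app.state.config.USER_PERMISSIONS
    ):
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail="Public sharing of notes is not permitted for this user",
        )
```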
This commit introduces support for the DISKANN index type in the Milvus vector database integration.
Changes include:
- Added `MILVUS_DISKANN_MAX_DEGREE` and `MILVUS_DISKANN_SEARCH_LIST_SIZE` configuration variables.
- Updated the Milvus client to recognize and configure the DISKANN index type during collection creation (illustrated below).
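As an illustration, index creation with DISKANN via pymilvus looks roughly like this; the collection name and literal values stand in for the new `MILVUS_DISKANN_MAX_DEGREE` / `MILVUS_DISKANN_SEARCH_LIST_SIZE` settings:

```python
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")

# Build-time parameters would come from MILVUS_DISKANN_MAX_DEGREE and
# MILVUS_DISKANN_SEARCH_LIST_SIZE; the numbers below are placeholders.
index_params = client.prepare_index_params()
index_params.add_index(
    field_name="vector",
    index_type="DISKANN",
    metric_type="COSINE",
    params={"max_degree": 56, "search_list_size": 100},
)
client.create_index(collection_name="open_webui_vectors", index_params=index_params)
```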
The pymilvus library expects `-1` for unlimited queries, but the code was passing `None`, which caused a `TypeError`. This commit changes the default value of the `limit` parameter in the `query` method from `None` to `-1`, updates the call site in the `get` method to pass `-1` instead of `None`, and adjusts the type hint and comment to match.
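In sketch form, the wrapper change amounts to the following; the class name and method bodies are simplified for illustration:

```python
from pymilvus import MilvusClient


class MilvusWrapper:
    def __init__(self, uri: str = "http://localhost:19530"):
        self.client = MilvusClient(uri=uri)

    def query(self, collection_name: str, filter: str = "", limit: int = -1) -> list:
        # pymilvus treats -1 as "no limit"; passing None raised a TypeError
        return self.client.query(
            collection_name=collection_name,
            filter=filter,
            limit=limit,
        )

    def get(self, collection_name: str) -> list:
        # Fetch everything in the collection: pass -1 instead of None
        return self.query(collection_name=collection_name, filter="", limit=-1)
```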
This commit fixes an issue where Retrieval-Augmented Generation (RAG)
queries were still being generated even when all attached files were set
to 'full context' mode. This was inefficient as the full content of the
files was already available to the model.
The `chat_completion_files_handler` in `backend/open_webui/utils/middleware.py`
has been updated to:
- Check if all attached files have the `context: 'full'` property.
- Skip the `generate_queries` step if all files are in full context mode.
- Pass a `full_context=True` flag to the `get_sources_from_items`
function to ensure it fetches the entire document content instead of
performing a vector search.
This change ensures that RAG queries are only generated when necessary,
improving the efficiency of the system.
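A simplified sketch of the check, assuming `files` is the list of attached file items in the handler; the argument lists for the helpers are abbreviated, not exact:

```python
# All attached files marked as full context? Then no retrieval query is needed.
all_full_context = all(item.get("context") == "full" for item in files)

queries = []
if not all_full_context:
    # Only spend a model call on query generation when a vector search will run
    queries = await generate_queries(request, form_data, user)  # args abbreviated

sources = get_sources_from_items(
    items=files,
    queries=queries,
    full_context=all_full_context,  # fetch whole documents instead of vector search
    # ...remaining retrieval arguments omitted
)
```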
- Fix file handle memory leak in download_file_stream by properly closing and reopening files
- Add requests.Session context manager for proper HTTP connection cleanup
- Remove unnecessary file.seek(0) after file reopening
- Add timeout to prevent hanging connections
This prevents memory accumulation during large file downloads and ensures
proper resource cleanup in all scenarios.
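In outline, the pattern is the one below; this is a minimal sketch with a hypothetical helper name, and the actual `download_file_stream` does considerably more:

```python
import requests


def download_to_file(url: str, file_path: str, chunk_size: int = 8192) -> None:
    # The Session context manager guarantees the connection pool is closed,
    # and the timeout keeps a stalled connection from hanging the download.
    with requests.Session() as session:
        with session.get(url, stream=True, timeout=(5, 600)) as response:
            response.raise_for_status()
            with open(file_path, "wb") as f:
                for chunk in response.iter_content(chunk_size=chunk_size):
                    if chunk:
                        f.write(chunk)
```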
Signed-off-by: Sihyeon Jang <sihyeon.jang@navercorp.com>
- Replace inefficient memory-based filtering with database-level filtering
- Add proper access control conditions to SQL query
- Reduce memory usage by filtering at database level instead of loading all notes
- Maintain access control validation with post-filtering for complex cases
This change significantly improves performance for users with many notes
by reducing the amount of data loaded from the database and the memory required to filter it.
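A hedged sketch of the idea, assuming a SQLAlchemy `Note` model where a non-NULL `access_control` column holds sharing rules and NULL means private to the owner; the model, column, and `has_access` helper names are illustrative, not the actual schema:

```python
from sqlalchemy import or_
from sqlalchemy.orm import Session


def get_readable_notes(db: Session, user_id: str) -> list:
    # Cheap conditions go into SQL: the user's own notes, plus any note that has
    # sharing rules set at all (a superset of what the user may actually read).
    candidates = (
        db.query(Note)
        .filter(or_(Note.user_id == user_id, Note.access_control.isnot(None)))
        .order_by(Note.updated_at.desc())
        .all()
    )
    # Complex cases (group/user grants) stay in a Python post-filter.
    return [
        note
        for note in candidates
        if note.user_id == user_id or has_access(user_id, "read", note.access_control)
    ]
```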
Signed-off-by: Sihyeon Jang <sihyeon.jang@navercorp.com>