Merge pull request #10939 from open-webui/dev

0.5.19
This commit is contained in:
Timothy Jaeryang Baek 2025-03-04 22:22:20 -08:00 committed by GitHub
commit 1a51584fe0
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
139 changed files with 3697 additions and 1691 deletions


@@ -1,80 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
# Bug Report
## Important Notes
- **Before submitting a bug report**: Please check the Issues or Discussions section to see if a similar issue or feature request has already been posted. It's likely we're already tracking it! If you're unsure, start a discussion post first. This will help us efficiently focus on improving the project.
- **Collaborate respectfully**: We value a constructive attitude, so please be mindful of your communication. If negativity is part of your approach, our capacity to engage may be limited. We're here to help if you're open to learning and communicating positively. Remember, Open WebUI is a volunteer-driven project managed by a single maintainer and supported by contributors who also have full-time jobs. We appreciate your time and ask that you respect ours.
- **Contributing**: If you encounter an issue, we highly encourage you to submit a pull request or fork the project. We actively work to prevent contributor burnout to maintain the quality and continuity of Open WebUI.
- **Bug reproducibility**: If a bug cannot be reproduced with a `:main` or `:dev` Docker setup, or a pip install with Python 3.11, it may require additional help from the community. In such cases, we will move it to the "issues" Discussions section due to our limited resources. We encourage the community to assist with these issues. Remember, it's not that the issue doesn't exist; we need your help!
Note: Please remove the notes above when submitting your post. Thank you for your understanding and support!
---
## Installation Method
[Describe the method you used to install the project, e.g., git clone, Docker, pip, etc.]
## Environment
- **Open WebUI Version:** [e.g., v0.3.11]
- **Ollama (if applicable):** [e.g., v0.2.0, v0.1.32-rc1]
- **Operating System:** [e.g., Windows 10, macOS Big Sur, Ubuntu 20.04]
- **Browser (if applicable):** [e.g., Chrome 100.0, Firefox 98.0]
**Confirmation:**
- [ ] I have read and followed all the instructions provided in the README.md.
- [ ] I am on the latest version of both Open WebUI and Ollama.
- [ ] I have included the browser console logs.
- [ ] I have included the Docker container logs.
- [ ] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.
## Expected Behavior:
[Describe what you expected to happen.]
## Actual Behavior:
[Describe what actually happened.]
## Description
**Bug Summary:**
[Provide a brief but clear summary of the bug]
## Reproduction Details
**Steps to Reproduce:**
[Outline the steps to reproduce the bug. Be as detailed as possible.]
## Logs and Screenshots
**Browser Console Logs:**
[Include relevant browser console logs, if applicable]
**Docker Container Logs:**
[Include relevant Docker container logs, if applicable]
**Screenshots/Screen Recordings (if applicable):**
[Attach any relevant screenshots to help illustrate the issue]
## Additional Information
[Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]
## Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!

.github/ISSUE_TEMPLATE/bug_report.yaml vendored Normal file

@@ -0,0 +1,144 @@
name: Bug Report
description: Create a detailed bug report to help us improve Open WebUI.
title: 'issue: '
labels: ['bug', 'triage']
assignees: []
body:
  - type: markdown
    attributes:
      value: |
        # Bug Report

        ## Important Notes

        - **Before submitting a bug report**: Please check the [Issues](https://github.com/open-webui/open-webui/issues) or [Discussions](https://github.com/open-webui/open-webui/discussions) sections to see if a similar issue has already been reported. If unsure, start a discussion first, as this helps us efficiently focus on improving the project.
        - **Respectful collaboration**: Open WebUI is a volunteer-driven project with a single maintainer and contributors who also have full-time jobs. Please be constructive and respectful in your communication.
        - **Contributing**: If you encounter an issue, consider submitting a pull request or forking the project. We prioritize preventing contributor burnout to maintain Open WebUI's quality.
        - **Bug Reproducibility**: If a bug cannot be reproduced using a `:main` or `:dev` Docker setup or with `pip install` on Python 3.11, community assistance may be required. In such cases, we will move it to the "[Issues](https://github.com/open-webui/open-webui/discussions/categories/issues)" Discussions section. Your help is appreciated!

  - type: checkboxes
    id: issue-check
    attributes:
      label: Check Existing Issues
      description: Confirm that you've checked for existing reports before submitting a new one.
      options:
        - label: I have searched the existing issues and discussions.
          required: true

  - type: dropdown
    id: installation-method
    attributes:
      label: Installation Method
      description: How did you install Open WebUI?
      options:
        - Git Clone
        - Pip Install
        - Docker
        - Other
    validations:
      required: true

  - type: input
    id: open-webui-version
    attributes:
      label: Open WebUI Version
      description: Specify the version (e.g., v0.3.11)
    validations:
      required: true

  - type: input
    id: ollama-version
    attributes:
      label: Ollama Version (if applicable)
      description: Specify the version (e.g., v0.2.0, or v0.1.32-rc1)
    validations:
      required: false

  - type: input
    id: operating-system
    attributes:
      label: Operating System
      description: Specify the OS (e.g., Windows 10, macOS Sonoma, Ubuntu 22.04)
    validations:
      required: true

  - type: input
    id: browser
    attributes:
      label: Browser (if applicable)
      description: Specify the browser/version (e.g., Chrome 100.0, Firefox 98.0)
    validations:
      required: false

  - type: checkboxes
    id: confirmation
    attributes:
      label: Confirmation
      description: Ensure the following prerequisites have been met.
      options:
        - label: I have read and followed all instructions in `README.md`.
          required: true
        - label: I am using the latest version of **both** Open WebUI and Ollama.
          required: true
        - label: I have checked the browser console logs.
          required: true
        - label: I have checked the Docker container logs.
          required: true
        - label: I have listed steps to reproduce the bug in detail.
          required: true

  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected Behavior
      description: Describe what should have happened.
    validations:
      required: true

  - type: textarea
    id: actual-behavior
    attributes:
      label: Actual Behavior
      description: Describe what actually happened.
    validations:
      required: true

  - type: textarea
    id: reproduction-steps
    attributes:
      label: Steps to Reproduce
      description: Provide step-by-step instructions to reproduce the issue.
      placeholder: |
        1. Go to '...'
        2. Click on '...'
        3. Scroll down to '...'
        4. See the error message '...'
    validations:
      required: true

  - type: textarea
    id: logs-screenshots
    attributes:
      label: Logs & Screenshots
      description: Include relevant logs, errors, or screenshots to help diagnose the issue.
      placeholder: 'Attach logs from the browser console, Docker logs, or error messages.'
    validations:
      required: true

  - type: textarea
    id: additional-info
    attributes:
      label: Additional Information
      description: Provide any extra details that may assist in understanding the issue.
    validations:
      required: false

  - type: markdown
    attributes:
      value: |
        ## Note
        If the bug report is incomplete or does not follow instructions, it may not be addressed. Ensure that you've followed all the **README.md** and **troubleshooting.md** guidelines, and provide all necessary information for us to reproduce the issue.

        Thank you for contributing to Open WebUI!


@@ -1,7 +1,7 @@
name: Feature Request
description: Suggest an idea for this project
-title: "[Feature Request]: "
-labels: ["triage"]
+title: 'feat: '
+labels: ['triage']
body:
  - type: markdown
    attributes:


@@ -14,7 +14,7 @@ env:
jobs:
  build-main-image:
-    runs-on: ubuntu-latest
+    runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
    permissions:
      contents: read
      packages: write
@@ -111,7 +111,7 @@ jobs:
      retention-days: 1

  build-cuda-image:
-    runs-on: ubuntu-latest
+    runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
    permissions:
      contents: read
      packages: write
@@ -211,7 +211,7 @@ jobs:
      retention-days: 1

  build-ollama-image:
-    runs-on: ubuntu-latest
+    runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
    permissions:
      contents: read
      packages: write


@@ -5,6 +5,24 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.5.19] - 2025-03-04
### Added
- **📊 Logit Bias Parameter Support**: Fine-tune conversation dynamics by adjusting the Logit Bias parameter directly in chat settings, giving you more control over model responses.
- **⌨️ Customizable Enter Behavior**: You can now configure Enter to send messages only when combined with Ctrl (Ctrl+Enter) via Settings > Interface, preventing accidental message sends.
- **📝 Collapsible Code Blocks**: Easily collapse long code blocks to declutter your chat, making it easier to focus on important details.
- **🏷️ Tag Selector in Model Selector**: Quickly find and categorize models with the new tag filtering system in the Model Selector, streamlining model discovery.
- **📈 Experimental Elasticsearch Vector DB Support**: Now supports Elasticsearch as a vector database, offering more flexibility for data retrieval in Retrieval-Augmented Generation (RAG) workflows.
- **⚙️ General Reliability Enhancements**: Various stability improvements across the WebUI, ensuring a smoother, more consistent experience.
- **🌍 Updated Translations**: Refined multilingual support for better localization and accuracy across various languages.
### Fixed
- **🔄 "Stream" Hook Activation**: Fixed an issue where the "Stream" hook only worked when globally enabled, ensuring reliable real-time filtering.
- **📧 LDAP Email Case Sensitivity**: Resolved an issue where LDAP login failed due to email case sensitivity mismatches, improving authentication reliability.
- **💬 WebSocket Chat Event Registration**: Fixed a bug preventing chat event listeners from being registered upon sign-in, ensuring real-time updates work properly.
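The logit bias parameter described above follows the OpenAI-style convention of mapping token IDs to bias values; a hypothetical request payload (model name and token IDs are illustrative, not taken from the release):

```python
# Hypothetical chat request body. "logit_bias" maps token-ID strings to a
# bias in [-100, 100]: -100 effectively bans a token, large positive values
# strongly favor it.
payload = {
    "model": "llama3",  # illustrative model name
    "messages": [{"role": "user", "content": "Pick a color."}],
    "logit_bias": {"1234": -100, "5678": 50},  # illustrative token IDs
}
```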
## [0.5.18] - 2025-02-27
### Fixed


@@ -587,6 +587,14 @@ load_oauth_providers()

STATIC_DIR = Path(os.getenv("STATIC_DIR", OPEN_WEBUI_DIR / "static")).resolve()

for file_path in (FRONTEND_BUILD_DIR / "static").glob("**/*"):
    if file_path.is_file():
        target_path = STATIC_DIR / file_path.relative_to(
            (FRONTEND_BUILD_DIR / "static")
        )
        target_path.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(file_path, target_path)

frontend_favicon = FRONTEND_BUILD_DIR / "static" / "favicon.png"
if frontend_favicon.exists():
@@ -659,11 +667,7 @@ if CUSTOM_NAME:
####################################
# LICENSE_KEY
####################################

-LICENSE_KEY = PersistentConfig(
-    "LICENSE_KEY",
-    "license.key",
-    os.environ.get("LICENSE_KEY", ""),
-)
+LICENSE_KEY = os.environ.get("LICENSE_KEY", "")

####################################
# STORAGE PROVIDER
@@ -695,16 +699,16 @@ AZURE_STORAGE_KEY = os.environ.get("AZURE_STORAGE_KEY", None)
####################################
# File Upload DIR
####################################

-UPLOAD_DIR = f"{DATA_DIR}/uploads"
-Path(UPLOAD_DIR).mkdir(parents=True, exist_ok=True)
+UPLOAD_DIR = DATA_DIR / "uploads"
+UPLOAD_DIR.mkdir(parents=True, exist_ok=True)

####################################
# Cache DIR
####################################

-CACHE_DIR = f"{DATA_DIR}/cache"
-Path(CACHE_DIR).mkdir(parents=True, exist_ok=True)
+CACHE_DIR = DATA_DIR / "cache"
+CACHE_DIR.mkdir(parents=True, exist_ok=True)

####################################
@@ -1541,6 +1545,15 @@ OPENSEARCH_CERT_VERIFY = os.environ.get("OPENSEARCH_CERT_VERIFY", False)
OPENSEARCH_USERNAME = os.environ.get("OPENSEARCH_USERNAME", None)
OPENSEARCH_PASSWORD = os.environ.get("OPENSEARCH_PASSWORD", None)
# ElasticSearch
ELASTICSEARCH_URL = os.environ.get("ELASTICSEARCH_URL", "https://localhost:9200")
ELASTICSEARCH_CA_CERTS = os.environ.get("ELASTICSEARCH_CA_CERTS", None)
ELASTICSEARCH_API_KEY = os.environ.get("ELASTICSEARCH_API_KEY", None)
ELASTICSEARCH_USERNAME = os.environ.get("ELASTICSEARCH_USERNAME", None)
ELASTICSEARCH_PASSWORD = os.environ.get("ELASTICSEARCH_PASSWORD", None)
ELASTICSEARCH_CLOUD_ID = os.environ.get("ELASTICSEARCH_CLOUD_ID", None)
SSL_ASSERT_FINGERPRINT = os.environ.get("SSL_ASSERT_FINGERPRINT", None)
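The new Elasticsearch variables above are plain environment variables; a sketch of setting them before launch (all values are placeholders, and only one auth method is needed):

```shell
# Illustrative values only; use whichever auth method your cluster expects.
export VECTOR_DB="elasticsearch"
export ELASTICSEARCH_URL="https://localhost:9200"
export ELASTICSEARCH_CA_CERTS="/path/to/http_ca.crt"
export ELASTICSEARCH_USERNAME="elastic"
export ELASTICSEARCH_PASSWORD="changeme"
# Or, for Elastic Cloud / API-key auth:
# export ELASTICSEARCH_CLOUD_ID="deployment:abc123"
# export ELASTICSEARCH_API_KEY="base64key"
```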
# Pgvector
PGVECTOR_DB_URL = os.environ.get("PGVECTOR_DB_URL", DATABASE_URL)
if VECTOR_DB == "pgvector" and not PGVECTOR_DB_URL.startswith("postgres"):
@@ -1977,6 +1990,12 @@ EXA_API_KEY = PersistentConfig(
    os.getenv("EXA_API_KEY", ""),
)

PERPLEXITY_API_KEY = PersistentConfig(
    "PERPLEXITY_API_KEY",
    "rag.web.search.perplexity_api_key",
    os.getenv("PERPLEXITY_API_KEY", ""),
)

RAG_WEB_SEARCH_RESULT_COUNT = PersistentConfig(
    "RAG_WEB_SEARCH_RESULT_COUNT",
    "rag.web.search.result_count",


@@ -65,10 +65,8 @@ except Exception:
####################################
# LOGGING
####################################

-log_levels = ["CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG"]
-
GLOBAL_LOG_LEVEL = os.environ.get("GLOBAL_LOG_LEVEL", "").upper()
-if GLOBAL_LOG_LEVEL in log_levels:
+if GLOBAL_LOG_LEVEL in logging.getLevelNamesMapping():
    logging.basicConfig(stream=sys.stdout, level=GLOBAL_LOG_LEVEL, force=True)
else:
    GLOBAL_LOG_LEVEL = "INFO"
@@ -78,6 +76,7 @@ log.info(f"GLOBAL_LOG_LEVEL: {GLOBAL_LOG_LEVEL}")
if "cuda_error" in locals():
    log.exception(cuda_error)
    del cuda_error

log_sources = [
    "AUDIO",
@@ -100,7 +99,7 @@ SRC_LOG_LEVELS = {}
for source in log_sources:
    log_env_var = source + "_LOG_LEVEL"
    SRC_LOG_LEVELS[source] = os.environ.get(log_env_var, "").upper()
-    if SRC_LOG_LEVELS[source] not in log_levels:
+    if SRC_LOG_LEVELS[source] not in logging.getLevelNamesMapping():
        SRC_LOG_LEVELS[source] = GLOBAL_LOG_LEVEL
    log.info(f"{log_env_var}: {SRC_LOG_LEVELS[source]}")
@@ -386,6 +385,7 @@ ENABLE_WEBSOCKET_SUPPORT = (
WEBSOCKET_MANAGER = os.environ.get("WEBSOCKET_MANAGER", "")
WEBSOCKET_REDIS_URL = os.environ.get("WEBSOCKET_REDIS_URL", REDIS_URL)
WEBSOCKET_REDIS_LOCK_TIMEOUT = os.environ.get("WEBSOCKET_REDIS_LOCK_TIMEOUT", 60)

AIOHTTP_CLIENT_TIMEOUT = os.environ.get("AIOHTTP_CLIENT_TIMEOUT", "")
@@ -397,19 +397,20 @@ else:
    except Exception:
        AIOHTTP_CLIENT_TIMEOUT = 300

-AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST = os.environ.get(
-    "AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST", ""
-)
+AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST = os.environ.get(
+    "AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST",
+    os.environ.get("AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST", ""),
+)

-if AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST == "":
-    AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST = None
+if AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST == "":
+    AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST = None
else:
    try:
-        AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST = int(
-            AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST
-        )
+        AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST = int(AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
    except Exception:
-        AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST = 5
+        AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST = 5

####################################
# OFFLINE_MODE
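The `AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST` rename stays backward compatible by nesting `os.environ.get` calls, so the legacy variable is still honored when the new one is unset; the pattern in isolation (variable names here are illustrative):

```python
import os

# Hypothetical old/new names demonstrating the nested-lookup fallback:
# the new variable wins if set, otherwise the legacy one is used.
os.environ["OLD_TIMEOUT"] = "5"      # legacy variable is set
os.environ.pop("NEW_TIMEOUT", None)  # new variable is not

value = os.environ.get("NEW_TIMEOUT", os.environ.get("OLD_TIMEOUT", ""))
timeout = int(value) if value else None
```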


@@ -215,6 +215,7 @@ from open_webui.config import (
    BING_SEARCH_V7_SUBSCRIPTION_KEY,
    BRAVE_SEARCH_API_KEY,
    EXA_API_KEY,
    PERPLEXITY_API_KEY,
    KAGI_SEARCH_API_KEY,
    MOJEEK_SEARCH_API_KEY,
    BOCHA_SEARCH_API_KEY,
@@ -400,8 +401,8 @@ async def lifespan(app: FastAPI):
    if RESET_CONFIG_ON_START:
        reset_config()

-    if app.state.config.LICENSE_KEY:
-        get_license_data(app, app.state.config.LICENSE_KEY)
+    if LICENSE_KEY:
+        get_license_data(app, LICENSE_KEY)

    asyncio.create_task(periodic_usage_pool_cleanup())
    yield
@@ -419,7 +420,7 @@ oauth_manager = OAuthManager(app)
app.state.config = AppConfig()

app.state.WEBUI_NAME = WEBUI_NAME
-app.state.config.LICENSE_KEY = LICENSE_KEY
+app.state.LICENSE_METADATA = None

########################################
#
@@ -603,6 +604,7 @@ app.state.config.JINA_API_KEY = JINA_API_KEY
app.state.config.BING_SEARCH_V7_ENDPOINT = BING_SEARCH_V7_ENDPOINT
app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY = BING_SEARCH_V7_SUBSCRIPTION_KEY
app.state.config.EXA_API_KEY = EXA_API_KEY
app.state.config.PERPLEXITY_API_KEY = PERPLEXITY_API_KEY

app.state.config.RAG_WEB_SEARCH_RESULT_COUNT = RAG_WEB_SEARCH_RESULT_COUNT
app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS = RAG_WEB_SEARCH_CONCURRENT_REQUESTS
@@ -1019,7 +1021,7 @@ async def chat_completion(
            "files": form_data.get("files", None),
            "features": form_data.get("features", None),
            "variables": form_data.get("variables", None),
-            "model": model_info.model_dump() if model_info else model,
+            "model": model,
            "direct": model_item.get("direct", False),
            **(
                {"function_calling": "native"}
@@ -1037,7 +1039,7 @@ async def chat_completion(
        form_data["metadata"] = metadata

        form_data, metadata, events = await process_chat_payload(
-            request, form_data, metadata, user, model
+            request, form_data, user, metadata, model
        )
    except Exception as e:
@@ -1051,7 +1053,7 @@ async def chat_completion(
        response = await chat_completion_handler(request, form_data, user)

        return await process_chat_response(
-            request, response, form_data, user, events, metadata, tasks
+            request, response, form_data, user, metadata, model, events, tasks
        )
    except Exception as e:
        raise HTTPException(
@@ -1140,9 +1142,10 @@ async def get_app_config(request: Request):
        if data is not None and "id" in data:
            user = Users.get_user_by_id(data["id"])

-    onboarding = False
-    if user is None:
-        user_count = Users.get_num_users()
+    user_count = Users.get_num_users()
+    onboarding = False
+    if user is None:
        onboarding = user_count == 0

    return {
@@ -1188,6 +1191,7 @@ async def get_app_config(request: Request):
                {
                    "default_models": app.state.config.DEFAULT_MODELS,
                    "default_prompt_suggestions": app.state.config.DEFAULT_PROMPT_SUGGESTIONS,
                    "user_count": user_count,
                    "code": {
                        "engine": app.state.config.CODE_EXECUTION_ENGINE,
                    },
@@ -1211,6 +1215,14 @@ async def get_app_config(request: Request):
                    "api_key": GOOGLE_DRIVE_API_KEY.value,
                },
                "onedrive": {"client_id": ONEDRIVE_CLIENT_ID.value},
                "license_metadata": app.state.LICENSE_METADATA,
                **(
                    {
                        "active_entries": app.state.USER_COUNT,
                    }
                    if user.role == "admin"
                    else {}
                ),
            }
            if user is not None
            else {}


@@ -414,6 +414,13 @@ def get_sources_from_files(
                        ]
                    ],
                }
            elif file.get("file").get("data"):
                context = {
                    "documents": [[file.get("file").get("data", {}).get("content")]],
                    "metadatas": [
                        [file.get("file").get("data", {}).get("metadata", {})]
                    ],
                }
            else:
                collection_names = []
                if file.get("type") == "collection":


@@ -16,6 +16,10 @@ elif VECTOR_DB == "pgvector":
    from open_webui.retrieval.vector.dbs.pgvector import PgvectorClient

    VECTOR_DB_CLIENT = PgvectorClient()
elif VECTOR_DB == "elasticsearch":
    from open_webui.retrieval.vector.dbs.elasticsearch import ElasticsearchClient

    VECTOR_DB_CLIENT = ElasticsearchClient()
else:
    from open_webui.retrieval.vector.dbs.chroma import ChromaClient


@@ -0,0 +1,274 @@
from elasticsearch import Elasticsearch, BadRequestError
from typing import Optional
import ssl
from elasticsearch.helpers import bulk, scan

from open_webui.retrieval.vector.main import VectorItem, SearchResult, GetResult
from open_webui.config import (
    ELASTICSEARCH_URL,
    ELASTICSEARCH_CA_CERTS,
    ELASTICSEARCH_API_KEY,
    ELASTICSEARCH_USERNAME,
    ELASTICSEARCH_PASSWORD,
    ELASTICSEARCH_CLOUD_ID,
    SSL_ASSERT_FINGERPRINT,
)


class ElasticsearchClient:
    """
    Important:
    In order to reduce the number of indexes, and since the embedding vector
    length is fixed, we avoid creating an index per file and instead store each
    file as a text field, while separating entries into different indexes based
    on the embedding length.
    """

    def __init__(self):
        self.index_prefix = "open_webui_collections"
        self.client = Elasticsearch(
            hosts=[ELASTICSEARCH_URL],
            ca_certs=ELASTICSEARCH_CA_CERTS,
            api_key=ELASTICSEARCH_API_KEY,
            cloud_id=ELASTICSEARCH_CLOUD_ID,
            basic_auth=(
                (ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD)
                if ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD
                else None
            ),
            ssl_assert_fingerprint=SSL_ASSERT_FINGERPRINT,
        )

    # Status: works
    def _get_index_name(self, dimension: int) -> str:
        return f"{self.index_prefix}_d{dimension}"

    # Status: works
    def _scan_result_to_get_result(self, result) -> Optional[GetResult]:
        if not result:
            return None
        ids = []
        documents = []
        metadatas = []

        for hit in result:
            ids.append(hit["_id"])
            documents.append(hit["_source"].get("text"))
            metadatas.append(hit["_source"].get("metadata"))

        return GetResult(ids=[ids], documents=[documents], metadatas=[metadatas])

    # Status: works
    def _result_to_get_result(self, result) -> Optional[GetResult]:
        if not result["hits"]["hits"]:
            return None
        ids = []
        documents = []
        metadatas = []

        for hit in result["hits"]["hits"]:
            ids.append(hit["_id"])
            documents.append(hit["_source"].get("text"))
            metadatas.append(hit["_source"].get("metadata"))

        return GetResult(ids=[ids], documents=[documents], metadatas=[metadatas])

    # Status: works
    def _result_to_search_result(self, result) -> SearchResult:
        ids = []
        distances = []
        documents = []
        metadatas = []

        for hit in result["hits"]["hits"]:
            ids.append(hit["_id"])
            distances.append(hit["_score"])
            documents.append(hit["_source"].get("text"))
            metadatas.append(hit["_source"].get("metadata"))

        return SearchResult(
            ids=[ids],
            distances=[distances],
            documents=[documents],
            metadatas=[metadatas],
        )

    # Status: works
    def _create_index(self, dimension: int):
        body = {
            "mappings": {
                "properties": {
                    "collection": {"type": "keyword"},
                    "id": {"type": "keyword"},
                    "vector": {
                        "type": "dense_vector",
                        "dims": dimension,  # Adjust based on your vector dimensions
                        "index": True,
                        "similarity": "cosine",
                    },
                    "text": {"type": "text"},
                    "metadata": {"type": "object"},
                }
            }
        }
        self.client.indices.create(index=self._get_index_name(dimension), body=body)

    # Status: works
    def _create_batches(self, items: list[VectorItem], batch_size=100):
        for i in range(0, len(items), batch_size):
            yield items[i : min(i + batch_size, len(items))]

    # Status: works
    def has_collection(self, collection_name) -> bool:
        query_body = {"query": {"bool": {"filter": []}}}
        query_body["query"]["bool"]["filter"].append(
            {"term": {"collection": collection_name}}
        )

        try:
            result = self.client.count(index=f"{self.index_prefix}*", body=query_body)
            return result.body["count"] > 0
        except Exception:
            return False

    # @TODO: Make this delete a collection and not an index
    def delete_colleciton(self, collection_name: str):
        # TODO: fix this to include the dimension or a * prefix.
        # delete_collection here means deleting a bunch of documents from an index.
        # We are simply adapting to the norms of the other DBs.
        self.client.indices.delete(index=self._get_collection_name(collection_name))

    # Status: works
    def search(
        self, collection_name: str, vectors: list[list[float]], limit: int
    ) -> Optional[SearchResult]:
        query = {
            "size": limit,
            "_source": ["text", "metadata"],
            "query": {
                "script_score": {
                    "query": {
                        "bool": {"filter": [{"term": {"collection": collection_name}}]}
                    },
                    "script": {
                        "source": "cosineSimilarity(params.vector, 'vector') + 1.0",
                        "params": {
                            "vector": vectors[0]
                        },  # Assuming a single query vector
                    },
                }
            },
        }
        result = self.client.search(
            index=self._get_index_name(len(vectors[0])), body=query
        )
        return self._result_to_search_result(result)

    # Status: only tested halfway
    def query(
        self, collection_name: str, filter: dict, limit: Optional[int] = None
    ) -> Optional[GetResult]:
        if not self.has_collection(collection_name):
            return None

        query_body = {
            "query": {"bool": {"filter": []}},
            "_source": ["text", "metadata"],
        }

        for field, value in filter.items():
            query_body["query"]["bool"]["filter"].append({"term": {field: value}})
        query_body["query"]["bool"]["filter"].append(
            {"term": {"collection": collection_name}}
        )
        size = limit if limit else 10

        try:
            result = self.client.search(
                index=f"{self.index_prefix}*",
                body=query_body,
                size=size,
            )
            return self._result_to_get_result(result)
        except Exception:
            return None

    # Status: works
    def _has_index(self, dimension: int):
        return self.client.indices.exists(
            index=self._get_index_name(dimension=dimension)
        )

    def get_or_create_index(self, dimension: int):
        if not self._has_index(dimension=dimension):
            self._create_index(dimension=dimension)

    # Status: works
    def get(self, collection_name: str) -> Optional[GetResult]:
        # Get all the items in the collection.
        query = {
            "query": {"bool": {"filter": [{"term": {"collection": collection_name}}]}},
            "_source": ["text", "metadata"],
        }
        results = list(scan(self.client, index=f"{self.index_prefix}*", query=query))
        return self._scan_result_to_get_result(results)

    # Status: works
    def insert(self, collection_name: str, items: list[VectorItem]):
        if not self._has_index(dimension=len(items[0]["vector"])):
            self._create_index(dimension=len(items[0]["vector"]))

        for batch in self._create_batches(items):
            actions = [
                {
                    "_index": self._get_index_name(dimension=len(items[0]["vector"])),
                    "_id": item["id"],
                    "_source": {
                        "collection": collection_name,
                        "vector": item["vector"],
                        "text": item["text"],
                        "metadata": item["metadata"],
                    },
                }
                for item in batch
            ]
            bulk(self.client, actions)

    # Status: should work
    def upsert(self, collection_name: str, items: list[VectorItem]):
        if not self._has_index(dimension=len(items[0]["vector"])):
            self._create_index(dimension=len(items[0]["vector"]))

        for batch in self._create_batches(items):
            actions = [
                {
                    "_index": self._get_index_name(dimension=len(items[0]["vector"])),
                    "_id": item["id"],
                    "_source": {
                        "collection": collection_name,
                        "vector": item["vector"],
                        "text": item["text"],
                        "metadata": item["metadata"],
                    },
                }
                for item in batch
            ]
            bulk(self.client, actions)

    # TODO: This currently deletes by * which is not always supported in ElasticSearch.
    # Need to read a bit before changing. Also, need to delete from a specific collection.
    def delete(self, collection_name: str, ids: list[str]):
        # Assuming IDs are unique across collections and indexes.
        actions = [
            {"delete": {"_index": f"{self.index_prefix}*", "_id": id}} for id in ids
        ]
        self.client.bulk(body=actions)

    def reset(self):
        indices = self.client.indices.get(index=f"{self.index_prefix}*")
        for index in indices:
            self.client.indices.delete(index=index)
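The two small helpers that drive this design, dimension-keyed index naming (`_get_index_name`) and fixed-size batching for bulk writes (`_create_batches`), can be illustrated standalone, without an Elasticsearch connection:

```python
from typing import Iterator

INDEX_PREFIX = "open_webui_collections"


def index_name(dimension: int) -> str:
    # One index per embedding dimension, not per collection/file,
    # mirroring ElasticsearchClient._get_index_name above.
    return f"{INDEX_PREFIX}_d{dimension}"


def batches(items: list, batch_size: int = 100) -> Iterator[list]:
    # Yield fixed-size chunks for bulk indexing,
    # mirroring ElasticsearchClient._create_batches above.
    for i in range(0, len(items), batch_size):
        yield items[i : i + batch_size]
```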


@@ -20,9 +20,9 @@ class MilvusClient:
    def __init__(self):
        self.collection_prefix = "open_webui"
        if MILVUS_TOKEN is None:
-            self.client = Client(uri=MILVUS_URI, database=MILVUS_DB)
+            self.client = Client(uri=MILVUS_URI, db_name=MILVUS_DB)
        else:
-            self.client = Client(uri=MILVUS_URI, database=MILVUS_DB, token=MILVUS_TOKEN)
+            self.client = Client(uri=MILVUS_URI, db_name=MILVUS_DB, token=MILVUS_TOKEN)

    def _result_to_get_result(self, result) -> GetResult:
        ids = []


@@ -49,7 +49,7 @@ class OpenSearchClient:
             ids=ids, distances=distances, documents=documents, metadatas=metadatas
         )

-    def _create_index(self, index_name: str, dimension: int):
+    def _create_index(self, collection_name: str, dimension: int):
         body = {
             "mappings": {
                 "properties": {
@@ -72,24 +72,28 @@ class OpenSearchClient:
                 }
             }
         }
-        self.client.indices.create(index=f"{self.index_prefix}_{index_name}", body=body)
+        self.client.indices.create(
+            index=f"{self.index_prefix}_{collection_name}", body=body
+        )

     def _create_batches(self, items: list[VectorItem], batch_size=100):
         for i in range(0, len(items), batch_size):
             yield items[i : i + batch_size]

-    def has_collection(self, index_name: str) -> bool:
+    def has_collection(self, collection_name: str) -> bool:
         # has_collection here means has index.
         # We are simply adapting to the norms of the other DBs.
-        return self.client.indices.exists(index=f"{self.index_prefix}_{index_name}")
+        return self.client.indices.exists(
+            index=f"{self.index_prefix}_{collection_name}"
+        )

-    def delete_colleciton(self, index_name: str):
+    def delete_colleciton(self, collection_name: str):
         # delete_collection here means delete index.
         # We are simply adapting to the norms of the other DBs.
-        self.client.indices.delete(index=f"{self.index_prefix}_{index_name}")
+        self.client.indices.delete(index=f"{self.index_prefix}_{collection_name}")

     def search(
-        self, index_name: str, vectors: list[list[float]], limit: int
+        self, collection_name: str, vectors: list[list[float]], limit: int
     ) -> Optional[SearchResult]:
         query = {
             "size": limit,
@@ -108,7 +112,7 @@ class OpenSearchClient:
         }

         result = self.client.search(
-            index=f"{self.index_prefix}_{index_name}", body=query
+            index=f"{self.index_prefix}_{collection_name}", body=query
         )

         return self._result_to_search_result(result)
@@ -141,21 +145,22 @@ class OpenSearchClient:
         except Exception as e:
             return None

-    def get_or_create_index(self, index_name: str, dimension: int):
-        if not self.has_index(index_name):
-            self._create_index(index_name, dimension)
+    def _create_index_if_not_exists(self, collection_name: str, dimension: int):
+        if not self.has_index(collection_name):
+            self._create_index(collection_name, dimension)

-    def get(self, index_name: str) -> Optional[GetResult]:
+    def get(self, collection_name: str) -> Optional[GetResult]:
         query = {"query": {"match_all": {}}, "_source": ["text", "metadata"]}
         result = self.client.search(
-            index=f"{self.index_prefix}_{index_name}", body=query
+            index=f"{self.index_prefix}_{collection_name}", body=query
         )
         return self._result_to_get_result(result)

-    def insert(self, index_name: str, items: list[VectorItem]):
-        if not self.has_index(index_name):
-            self._create_index(index_name, dimension=len(items[0]["vector"]))
+    def insert(self, collection_name: str, items: list[VectorItem]):
+        self._create_index_if_not_exists(
+            collection_name=collection_name, dimension=len(items[0]["vector"])
+        )

         for batch in self._create_batches(items):
             actions = [
@@ -173,15 +178,17 @@ class OpenSearchClient:
             ]
             self.client.bulk(actions)

-    def upsert(self, index_name: str, items: list[VectorItem]):
-        if not self.has_index(index_name):
-            self._create_index(index_name, dimension=len(items[0]["vector"]))
+    def upsert(self, collection_name: str, items: list[VectorItem]):
+        self._create_index_if_not_exists(
+            collection_name=collection_name, dimension=len(items[0]["vector"])
+        )

         for batch in self._create_batches(items):
             actions = [
                 {
                     "index": {
                         "_id": item["id"],
+                        "_index": f"{self.index_prefix}_{collection_name}",
                         "_source": {
                             "vector": item["vector"],
                             "text": item["text"],
@@ -193,9 +200,9 @@ class OpenSearchClient:
             ]
             self.client.bulk(actions)

-    def delete(self, index_name: str, ids: list[str]):
+    def delete(self, collection_name: str, ids: list[str]):
         actions = [
-            {"delete": {"_index": f"{self.index_prefix}_{index_name}", "_id": id}}
+            {"delete": {"_index": f"{self.index_prefix}_{collection_name}", "_id": id}}
             for id in ids
         ]
         self.client.bulk(body=actions)

@@ -0,0 +1,87 @@
import logging
from typing import Optional, List

import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])


def search_perplexity(
    api_key: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
    """Search using Perplexity API and return the results as a list of SearchResult objects.

    Args:
        api_key (str): A Perplexity API key
        query (str): The query to search for
        count (int): Maximum number of results to return
    """
    # Handle PersistentConfig object
    if hasattr(api_key, "__str__"):
        api_key = str(api_key)

    try:
        url = "https://api.perplexity.ai/chat/completions"

        # Create payload for the API call
        payload = {
            "model": "sonar",
            "messages": [
                {
                    "role": "system",
                    "content": "You are a search assistant. Provide factual information with citations.",
                },
                {"role": "user", "content": query},
            ],
            "temperature": 0.2,  # Lower temperature for more factual responses
            "stream": False,
        }

        headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }

        # Make the API request
        response = requests.request("POST", url, json=payload, headers=headers)

        # Parse the JSON response
        json_response = response.json()

        # Extract citations from the response
        citations = json_response.get("citations", [])

        # Create search results from citations
        results = []
        for i, citation in enumerate(citations[:count]):
            # Extract content from the response to use as snippet
            content = ""
            if "choices" in json_response and json_response["choices"]:
                if i == 0:
                    content = json_response["choices"][0]["message"]["content"]

            result = {"link": citation, "title": f"Source {i+1}", "snippet": content}
            results.append(result)

        if filter_list:
            results = get_filtered_results(results, filter_list)

        return [
            SearchResult(
                link=result["link"], title=result["title"], snippet=result["snippet"]
            )
            for result in results[:count]
        ]
    except Exception as e:
        log.error(f"Error searching with Perplexity API: {e}")
        return []
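The optional `filter_list` is handed to `get_filtered_results` to restrict results to allowed domains. A hypothetical, simplified sketch of the domain matching such a helper is assumed to perform (`filter_by_domain` is not the project's function):

```python
from urllib.parse import urlparse


def filter_by_domain(results: list[dict], allowed_domains: list[str]) -> list[dict]:
    """Keep results whose host equals, or is a subdomain of, an allowed domain."""
    kept = []
    for result in results:
        host = urlparse(result["link"]).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in allowed_domains):
            kept.append(result)
    return kept


sample = [
    {"link": "https://docs.python.org/3/", "title": "Source 1", "snippet": "..."},
    {"link": "https://example.com/page", "title": "Source 2", "snippet": ""},
]
kept = filter_by_domain(sample, ["python.org"])
```

Matching on the parsed host (rather than substring search over the URL) avoids false positives like `python.org.evil.example`.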

@@ -54,7 +54,7 @@ MAX_FILE_SIZE = MAX_FILE_SIZE_MB * 1024 * 1024  # Convert MB to bytes
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["AUDIO"])

-SPEECH_CACHE_DIR = Path(CACHE_DIR).joinpath("./audio/speech/")
+SPEECH_CACHE_DIR = CACHE_DIR / "audio" / "speech"
 SPEECH_CACHE_DIR.mkdir(parents=True, exist_ok=True)
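The hunk above swaps `Path(CACHE_DIR).joinpath("./audio/speech/")` for the `/` operator. Both spellings resolve to the same path, since pathlib collapses single-dot segments and trailing slashes during parsing; a quick check (the `CACHE_DIR` value here is a placeholder):

```python
from pathlib import PurePosixPath

CACHE_DIR = PurePosixPath("/tmp/open-webui-cache")  # placeholder value

old_style = CACHE_DIR.joinpath("./audio/speech/")
new_style = CACHE_DIR / "audio" / "speech"
# pathlib drops "." components and trailing slashes, so the two are equal
```

So the change is cosmetic cleanup, not a behavior change — it just requires `CACHE_DIR` itself to already be a `Path`.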

@@ -230,9 +230,12 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
         entry = connection_app.entries[0]
         username = str(entry[f"{LDAP_ATTRIBUTE_FOR_USERNAME}"]).lower()
-        mail = str(entry[f"{LDAP_ATTRIBUTE_FOR_MAIL}"])
-        if not mail or mail == "" or mail == "[]":
-            raise HTTPException(400, f"User {form_data.user} does not have mail.")
+        email = str(entry[f"{LDAP_ATTRIBUTE_FOR_MAIL}"])
+        if not email or email == "" or email == "[]":
+            raise HTTPException(400, f"User {form_data.user} does not have email.")
+        else:
+            email = email.lower()
+
         cn = str(entry["cn"])
         user_dn = entry.entry_dn
@@ -247,7 +250,7 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
         if not connection_user.bind():
             raise HTTPException(400, f"Authentication failed for {form_data.user}")

-        user = Users.get_user_by_email(mail)
+        user = Users.get_user_by_email(email)
         if not user:
             try:
                 user_count = Users.get_num_users()
@@ -259,7 +262,10 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
                 )

                 user = Auths.insert_new_auth(
-                    email=mail, password=str(uuid.uuid4()), name=cn, role=role
+                    email=email,
+                    password=str(uuid.uuid4()),
+                    name=cn,
+                    role=role,
                 )

                 if not user:
@@ -272,7 +278,7 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
             except Exception as err:
                 raise HTTPException(500, detail=ERROR_MESSAGES.DEFAULT(err))

-        user = Auths.authenticate_user_by_trusted_header(mail)
+        user = Auths.authenticate_user_by_trusted_header(email)

         if user:
             token = create_token(

@@ -74,7 +74,7 @@ async def create_new_function(
     function = Functions.insert_new_function(user.id, function_type, form_data)

-    function_cache_dir = Path(CACHE_DIR) / "functions" / form_data.id
+    function_cache_dir = CACHE_DIR / "functions" / form_data.id
     function_cache_dir.mkdir(parents=True, exist_ok=True)

     if function:

@@ -25,7 +25,7 @@ from pydantic import BaseModel
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["IMAGES"])

-IMAGE_CACHE_DIR = Path(CACHE_DIR).joinpath("./image/generations/")
+IMAGE_CACHE_DIR = CACHE_DIR / "image" / "generations"
 IMAGE_CACHE_DIR.mkdir(parents=True, exist_ok=True)
@@ -517,7 +517,13 @@ async def image_generations(
         images = []

         for image in res["data"]:
-            image_data, content_type = load_b64_image_data(image["b64_json"])
+            if "url" in image:
+                image_data, content_type = load_url_image_data(
+                    image["url"], headers
+                )
+            else:
+                image_data, content_type = load_b64_image_data(image["b64_json"])
             url = upload_image(request, data, image_data, content_type, user)
             images.append({"url": url})
         return images

@@ -55,7 +55,7 @@ from open_webui.env import (
     ENV,
     SRC_LOG_LEVELS,
     AIOHTTP_CLIENT_TIMEOUT,
-    AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST,
+    AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST,
     BYPASS_MODEL_ACCESS_CONTROL,
 )
 from open_webui.constants import ERROR_MESSAGES
@@ -72,7 +72,7 @@ log.setLevel(SRC_LOG_LEVELS["OLLAMA"])
 async def send_get_request(url, key=None, user: UserModel = None):
-    timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
+    timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
     try:
         async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
             async with session.get(
@@ -216,7 +216,7 @@ async def verify_connection(
     key = form_data.key

     async with aiohttp.ClientSession(
-        timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
+        timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
     ) as session:
         try:
             async with session.get(

@@ -22,7 +22,7 @@ from open_webui.config import (
 )
 from open_webui.env import (
     AIOHTTP_CLIENT_TIMEOUT,
-    AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST,
+    AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST,
     ENABLE_FORWARD_USER_INFO_HEADERS,
     BYPASS_MODEL_ACCESS_CONTROL,
 )
@@ -53,7 +53,7 @@ log.setLevel(SRC_LOG_LEVELS["OPENAI"])
 async def send_get_request(url, key=None, user: UserModel = None):
-    timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
+    timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
     try:
         async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
             async with session.get(
@@ -192,7 +192,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
     body = await request.body()
     name = hashlib.sha256(body).hexdigest()

-    SPEECH_CACHE_DIR = Path(CACHE_DIR).joinpath("./audio/speech/")
+    SPEECH_CACHE_DIR = CACHE_DIR / "audio" / "speech"
     SPEECH_CACHE_DIR.mkdir(parents=True, exist_ok=True)
     file_path = SPEECH_CACHE_DIR.joinpath(f"{name}.mp3")
     file_body_path = SPEECH_CACHE_DIR.joinpath(f"{name}.json")
@@ -448,9 +448,7 @@ async def get_models(
     r = None
     async with aiohttp.ClientSession(
-        timeout=aiohttp.ClientTimeout(
-            total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST
-        )
+        timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
     ) as session:
         try:
             async with session.get(
@@ -530,7 +528,7 @@ async def verify_connection(
     key = form_data.key

     async with aiohttp.ClientSession(
-        timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
+        timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
     ) as session:
         try:
             async with session.get(

@@ -59,7 +59,7 @@ from open_webui.retrieval.web.serpstack import search_serpstack
 from open_webui.retrieval.web.tavily import search_tavily
 from open_webui.retrieval.web.bing import search_bing
 from open_webui.retrieval.web.exa import search_exa
+from open_webui.retrieval.web.perplexity import search_perplexity

 from open_webui.retrieval.utils import (
     get_embedding_function,
@@ -405,6 +405,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
             "bing_search_v7_endpoint": request.app.state.config.BING_SEARCH_V7_ENDPOINT,
             "bing_search_v7_subscription_key": request.app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
             "exa_api_key": request.app.state.config.EXA_API_KEY,
+            "perplexity_api_key": request.app.state.config.PERPLEXITY_API_KEY,
             "result_count": request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
             "trust_env": request.app.state.config.RAG_WEB_SEARCH_TRUST_ENV,
             "concurrent_requests": request.app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS,
@@ -465,6 +466,7 @@ class WebSearchConfig(BaseModel):
     bing_search_v7_endpoint: Optional[str] = None
     bing_search_v7_subscription_key: Optional[str] = None
     exa_api_key: Optional[str] = None
+    perplexity_api_key: Optional[str] = None
     result_count: Optional[int] = None
     concurrent_requests: Optional[int] = None
     trust_env: Optional[bool] = None
@@ -617,6 +619,10 @@ async def update_rag_config(
         request.app.state.config.EXA_API_KEY = form_data.web.search.exa_api_key

+        request.app.state.config.PERPLEXITY_API_KEY = (
+            form_data.web.search.perplexity_api_key
+        )
+
         request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT = (
             form_data.web.search.result_count
         )
@@ -683,6 +689,7 @@ async def update_rag_config(
             "bing_search_v7_endpoint": request.app.state.config.BING_SEARCH_V7_ENDPOINT,
             "bing_search_v7_subscription_key": request.app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
             "exa_api_key": request.app.state.config.EXA_API_KEY,
+            "perplexity_api_key": request.app.state.config.PERPLEXITY_API_KEY,
             "result_count": request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
             "concurrent_requests": request.app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS,
             "trust_env": request.app.state.config.RAG_WEB_SEARCH_TRUST_ENV,
@@ -1182,9 +1189,13 @@ def process_web(
         content = " ".join([doc.page_content for doc in docs])

         log.debug(f"text_content: {content}")
-        save_docs_to_vector_db(
-            request, docs, collection_name, overwrite=True, user=user
-        )
+
+        if not request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL:
+            save_docs_to_vector_db(
+                request, docs, collection_name, overwrite=True, user=user
+            )
+        else:
+            collection_name = None

         return {
             "status": True,
@@ -1196,6 +1207,7 @@ def process_web(
                 },
                 "meta": {
                     "name": form_data.url,
+                    "source": form_data.url,
                 },
             },
         }
@@ -1221,6 +1233,7 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
    - SERPLY_API_KEY
    - TAVILY_API_KEY
    - EXA_API_KEY
+   - PERPLEXITY_API_KEY
    - SEARCHAPI_API_KEY + SEARCHAPI_ENGINE (by default `google`)
    - SERPAPI_API_KEY + SERPAPI_ENGINE (by default `google`)

    Args:
@@ -1385,6 +1398,13 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
             request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
             request.app.state.config.RAG_WEB_SEARCH_DOMAIN_FILTER_LIST,
         )
+    elif engine == "perplexity":
+        return search_perplexity(
+            request.app.state.config.PERPLEXITY_API_KEY,
+            query,
+            request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
+            request.app.state.config.RAG_WEB_SEARCH_DOMAIN_FILTER_LIST,
+        )
     else:
         raise Exception("No search engine API key found in environment variables")
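`search_web` routes on `engine` with a long if/elif chain ending in the exception above. The same shape can be expressed as a dispatch table; a minimal, self-contained sketch (handler names and signatures here are illustrative, not the project's API):

```python
def dispatch_search(engine: str, query: str, handlers: dict) -> list:
    """Route a query to the handler registered for `engine`."""
    handler = handlers.get(engine)
    if handler is None:
        raise Exception("No search engine API key found in environment variables")
    return handler(query)


# toy handler standing in for search_perplexity, search_exa, etc.
results = dispatch_search("echo", "open webui", {"echo": lambda q: [q.upper()]})
```

A table keeps each new engine (like the Perplexity addition in this commit) to one registration line instead of another elif branch.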

@@ -105,7 +105,7 @@ async def create_new_tools(
         specs = get_tools_specs(TOOLS[form_data.id])
         tools = Tools.insert_new_tool(user.id, form_data, specs)

-        tool_cache_dir = Path(CACHE_DIR) / "tools" / form_data.id
+        tool_cache_dir = CACHE_DIR / "tools" / form_data.id
         tool_cache_dir.mkdir(parents=True, exist_ok=True)

         if tools:

@@ -12,6 +12,7 @@ from open_webui.env import (
     ENABLE_WEBSOCKET_SUPPORT,
     WEBSOCKET_MANAGER,
     WEBSOCKET_REDIS_URL,
+    WEBSOCKET_REDIS_LOCK_TIMEOUT,
 )
 from open_webui.utils.auth import decode_token
 from open_webui.socket.utils import RedisDict, RedisLock
@@ -61,7 +62,7 @@ if WEBSOCKET_MANAGER == "redis":
     clean_up_lock = RedisLock(
         redis_url=WEBSOCKET_REDIS_URL,
         lock_name="usage_cleanup_lock",
-        timeout_secs=TIMEOUT_DURATION * 2,
+        timeout_secs=WEBSOCKET_REDIS_LOCK_TIMEOUT,
     )
     aquire_func = clean_up_lock.aquire_lock
     renew_func = clean_up_lock.renew_lock
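The cleanup lock above is acquired with a timeout and periodically renewed, so a crashed holder cannot keep it forever. A minimal in-process sketch of those semantics (the real `RedisLock` lives in `open_webui.socket.utils`; this is an assumed simplification, with the `aquire` spelling kept to match the code):

```python
import time


class SimpleLock:
    """In-process sketch of acquire/renew lock semantics with expiry."""

    def __init__(self, timeout_secs: float):
        self.timeout_secs = timeout_secs
        self.expires_at = 0.0

    def aquire_lock(self) -> bool:  # spelling kept to match the code above
        now = time.monotonic()
        if now >= self.expires_at:  # free, or the previous holder's lease expired
            self.expires_at = now + self.timeout_secs
            return True
        return False

    def renew_lock(self) -> bool:
        if self.expires_at > time.monotonic():  # still held: extend the lease
            self.expires_at = time.monotonic() + self.timeout_secs
            return True
        return False


lock = SimpleLock(timeout_secs=60)
acquired = lock.aquire_lock()        # free, so the lock is taken
acquired_again = lock.aquire_lock()  # held and unexpired, so this fails
renewed = lock.renew_lock()          # the holder extends the expiry
```

Making the timeout configurable (`WEBSOCKET_REDIS_LOCK_TIMEOUT`) lets deployments with slow cleanup passes lengthen the lease instead of losing the lock mid-run.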

@@ -101,19 +101,33 @@ class LocalStorageProvider(StorageProvider):
 class S3StorageProvider(StorageProvider):
     def __init__(self):
-        self.s3_client = boto3.client(
-            "s3",
-            region_name=S3_REGION_NAME,
-            endpoint_url=S3_ENDPOINT_URL,
-            aws_access_key_id=S3_ACCESS_KEY_ID,
-            aws_secret_access_key=S3_SECRET_ACCESS_KEY,
-            config=Config(
-                s3={
-                    "use_accelerate_endpoint": S3_USE_ACCELERATE_ENDPOINT,
-                    "addressing_style": S3_ADDRESSING_STYLE,
-                },
-            ),
-        )
+        config = Config(
+            s3={
+                "use_accelerate_endpoint": S3_USE_ACCELERATE_ENDPOINT,
+                "addressing_style": S3_ADDRESSING_STYLE,
+            },
+        )
+
+        # If access key and secret are provided, use them for authentication
+        if S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY:
+            self.s3_client = boto3.client(
+                "s3",
+                region_name=S3_REGION_NAME,
+                endpoint_url=S3_ENDPOINT_URL,
+                aws_access_key_id=S3_ACCESS_KEY_ID,
+                aws_secret_access_key=S3_SECRET_ACCESS_KEY,
+                config=config,
+            )
+        else:
+            # If no explicit credentials are provided, fall back to default AWS credentials
+            # This supports workload identity (IAM roles for EC2, EKS, etc.)
+            self.s3_client = boto3.client(
+                "s3",
+                region_name=S3_REGION_NAME,
+                endpoint_url=S3_ENDPOINT_URL,
+                config=config,
+            )
         self.bucket_name = S3_BUCKET_NAME
         self.key_prefix = S3_KEY_PREFIX if S3_KEY_PREFIX else ""
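The branch above passes explicit keys to `boto3.client` only when both are set, and otherwise omits them so boto3's default credential chain (environment variables, shared config, instance/IAM role) takes over. A pure-Python sketch of that kwargs selection (the `s3_client_kwargs` helper is hypothetical, not part of the project):

```python
def s3_client_kwargs(
    access_key=None, secret_key=None, region=None, endpoint=None
) -> dict:
    """Build kwargs for boto3.client("s3", ...): include explicit credentials
    only when both halves are present; otherwise omit them so boto3 falls back
    to its default credential chain."""
    kwargs = {"region_name": region, "endpoint_url": endpoint}
    if access_key and secret_key:
        kwargs["aws_access_key_id"] = access_key
        kwargs["aws_secret_access_key"] = secret_key
    return kwargs


explicit = s3_client_kwargs("AKIA_EXAMPLE", "secret", region="us-east-1")
fallback = s3_client_kwargs(region="us-east-1")
```

Omitting the keyword arguments entirely matters: passing `aws_access_key_id=None` would still work, but the explicit branch makes the intent (static keys vs. workload identity) obvious.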

@@ -187,6 +187,17 @@ class TestS3StorageProvider:
         assert not (upload_dir / self.filename).exists()
         assert not (upload_dir / self.filename_extra).exists()

+    def test_init_without_credentials(self, monkeypatch):
+        """Test that S3StorageProvider can initialize without explicit credentials."""
+        # Temporarily unset the environment variables
+        monkeypatch.setattr(provider, "S3_ACCESS_KEY_ID", None)
+        monkeypatch.setattr(provider, "S3_SECRET_ACCESS_KEY", None)
+
+        # Should not raise an exception
+        storage = provider.S3StorageProvider()
+        assert storage.s3_client is not None
+        assert storage.bucket_name == provider.S3_BUCKET_NAME
+

 class TestGCSStorageProvider:
     Storage = provider.GCSStorageProvider()

@@ -83,11 +83,12 @@ def get_license_data(app, key):
                     if k == "resources":
                         for p, c in v.items():
                             globals().get("override_static", lambda a, b: None)(p, c)
-                    elif k == "user_count":
+                    elif k == "count":
                         setattr(app.state, "USER_COUNT", v)
-                    elif k == "webui_name":
+                    elif k == "name":
                         setattr(app.state, "WEBUI_NAME", v)
+                    elif k == "metadata":
+                        setattr(app.state, "LICENSE_METADATA", v)

                 return True
             else:
                 log.error(

@@ -149,7 +149,7 @@ async def generate_direct_chat_completion(
             }
         )

-        if "error" in res:
+        if "error" in res and res["error"]:
             raise Exception(res["error"])

         return res
@@ -328,9 +328,14 @@ async def chat_completed(request: Request, form_data: dict, user: Any):
     }

     try:
+        filter_functions = [
+            Functions.get_function_by_id(filter_id)
+            for filter_id in get_sorted_filter_ids(model)
+        ]
+
         result, _ = await process_filter_functions(
             request=request,
-            filter_ids=get_sorted_filter_ids(model),
+            filter_functions=filter_functions,
             filter_type="outlet",
             form_data=data,
             extra_params=extra_params,

@ -1,90 +1,157 @@
import asyncio import asyncio
import json import json
import logging
import uuid import uuid
from typing import Optional
import aiohttp
import websockets import websockets
import requests from pydantic import BaseModel
from urllib.parse import urljoin
from open_webui.env import SRC_LOG_LEVELS
logger = logging.getLogger(__name__)
logger.setLevel(SRC_LOG_LEVELS["MAIN"])
async def execute_code_jupyter( class ResultModel(BaseModel):
jupyter_url, code, token=None, password=None, timeout=10
):
""" """
Executes Python code in a Jupyter kernel. Execute Code Result Model
Supports authentication with a token or password. """
:param jupyter_url: Jupyter server URL (e.g., "http://localhost:8888")
stdout: Optional[str] = ""
stderr: Optional[str] = ""
result: Optional[str] = ""
class JupyterCodeExecuter:
"""
Execute code in jupyter notebook
"""
def __init__(
self,
base_url: str,
code: str,
token: str = "",
password: str = "",
timeout: int = 60,
):
"""
:param base_url: Jupyter server URL (e.g., "http://localhost:8888")
:param code: Code to execute :param code: Code to execute
:param token: Jupyter authentication token (optional) :param token: Jupyter authentication token (optional)
:param password: Jupyter password (optional) :param password: Jupyter password (optional)
:param timeout: WebSocket timeout in seconds (default: 10s) :param timeout: WebSocket timeout in seconds (default: 60s)
:return: Dictionary with stdout, stderr, and result
- Images are prefixed with "base64:image/png," and separated by newlines if multiple.
""" """
session = requests.Session() # Maintain cookies self.base_url = base_url.rstrip("/")
headers = {} # Headers for requests self.code = code
self.token = token
self.password = password
self.timeout = timeout
self.kernel_id = ""
self.session = aiohttp.ClientSession(base_url=self.base_url)
self.params = {}
self.result = ResultModel()
# Authenticate using password async def __aenter__(self):
if password and not token: return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
if self.kernel_id:
try: try:
login_url = urljoin(jupyter_url, "/login") async with self.session.delete(
response = session.get(login_url) f"/api/kernels/{self.kernel_id}", params=self.params
) as response:
response.raise_for_status() response.raise_for_status()
xsrf_token = session.cookies.get("_xsrf") except Exception as err:
logger.exception("close kernel failed, %s", err)
await self.session.close()
async def run(self) -> ResultModel:
try:
await self.sign_in()
await self.init_kernel()
await self.execute_code()
except Exception as err:
logger.exception("execute code failed, %s", err)
self.result.stderr = f"Error: {err}"
return self.result
async def sign_in(self) -> None:
# password authentication
if self.password and not self.token:
async with self.session.get("/login") as response:
response.raise_for_status()
xsrf_token = response.cookies["_xsrf"].value
if not xsrf_token: if not xsrf_token:
raise ValueError("Failed to fetch _xsrf token") raise ValueError("_xsrf token not found")
self.session.cookie_jar.update_cookies(response.cookies)
login_data = {"_xsrf": xsrf_token, "password": password} self.session.headers.update({"X-XSRFToken": xsrf_token})
login_response = session.post( async with self.session.post(
login_url, data=login_data, cookies=session.cookies "/login",
) data={"_xsrf": xsrf_token, "password": self.password},
login_response.raise_for_status() allow_redirects=False,
headers["X-XSRFToken"] = xsrf_token ) as response:
except Exception as e:
return {
"stdout": "",
"stderr": f"Authentication Error: {str(e)}",
"result": "",
}
# Construct API URLs with authentication token if provided
params = f"?token={token}" if token else ""
kernel_url = urljoin(jupyter_url, f"/api/kernels{params}")
try:
response = session.post(kernel_url, headers=headers, cookies=session.cookies)
response.raise_for_status() response.raise_for_status()
kernel_id = response.json()["id"] self.session.cookie_jar.update_cookies(response.cookies)
websocket_url = urljoin( # token authentication
jupyter_url.replace("http", "ws"), if self.token:
f"/api/kernels/{kernel_id}/channels{params}", self.params.update({"token": self.token})
)
async def init_kernel(self) -> None:
async with self.session.post(
url="/api/kernels", params=self.params
) as response:
response.raise_for_status()
kernel_data = await response.json()
self.kernel_id = kernel_data["id"]
def init_ws(self) -> (str, dict):
ws_base = self.base_url.replace("http", "ws")
ws_params = "?" + "&".join([f"{key}={val}" for key, val in self.params.items()])
websocket_url = f"{ws_base}/api/kernels/{self.kernel_id}/channels{ws_params if len(ws_params) > 1 else ''}"
ws_headers = {} ws_headers = {}
if password and not token: if self.password and not self.token:
ws_headers["X-XSRFToken"] = session.cookies.get("_xsrf") ws_headers = {
cookies = {name: value for name, value in session.cookies.items()} "Cookie": "; ".join(
ws_headers["Cookie"] = "; ".join( [
[f"{name}={value}" for name, value in cookies.items()] f"{cookie.key}={cookie.value}"
) for cookie in self.session.cookie_jar
]
),
**self.session.headers,
}
return websocket_url, ws_headers
async def execute_code(self) -> None:
# initialize ws
websocket_url, ws_headers = self.init_ws()
# execute
async with websockets.connect( async with websockets.connect(
websocket_url, additional_headers=ws_headers websocket_url, additional_headers=ws_headers
) as ws: ) as ws:
msg_id = str(uuid.uuid4()) await self.execute_in_jupyter(ws)
execute_request = {
async def execute_in_jupyter(self, ws) -> None:
# send message
msg_id = uuid.uuid4().hex
await ws.send(
json.dumps(
{
"header": { "header": {
"msg_id": msg_id, "msg_id": msg_id,
"msg_type": "execute_request", "msg_type": "execute_request",
"username": "user", "username": "user",
"session": str(uuid.uuid4()), "session": uuid.uuid4().hex,
"date": "", "date": "",
"version": "5.3", "version": "5.3",
}, },
"parent_header": {}, "parent_header": {},
"metadata": {}, "metadata": {},
"content": { "content": {
"code": code, "code": self.code,
"silent": False, "silent": False,
"store_history": True, "store_history": True,
"user_expressions": {}, "user_expressions": {},
@ -93,56 +160,51 @@ async def execute_code_jupyter(
}, },
"channel": "shell", "channel": "shell",
} }
await ws.send(json.dumps(execute_request)) )
)
    # parse message
    stdout, stderr, result = "", "", []
    while True:
        try:
            # wait for message
            message = await asyncio.wait_for(ws.recv(), self.timeout)
            message_data = json.loads(message)

            # msg id not match, skip
            if message_data.get("parent_header", {}).get("msg_id") != msg_id:
                continue

            # check message type
            msg_type = message_data.get("msg_type")
            match msg_type:
                case "stream":
                    if message_data["content"]["name"] == "stdout":
                        stdout += message_data["content"]["text"]
                    elif message_data["content"]["name"] == "stderr":
                        stderr += message_data["content"]["text"]
                case "execute_result" | "display_data":
                    data = message_data["content"]["data"]
                    if "image/png" in data:
                        result.append(f"data:image/png;base64,{data['image/png']}")
                    elif "text/plain" in data:
                        result.append(data["text/plain"])
                case "error":
                    stderr += "\n".join(message_data["content"]["traceback"])
                case "status":
                    if message_data["content"]["execution_state"] == "idle":
                        break
        except asyncio.TimeoutError:
            stderr += "\nExecution timed out."
            break

    self.result.stdout = stdout.strip()
    self.result.stderr = stderr.strip()
    self.result.result = "\n".join(result).strip() if result else ""
async def execute_code_jupyter(
    base_url: str, code: str, token: str = "", password: str = "", timeout: int = 60
) -> dict:
    async with JupyterCodeExecuter(
        base_url, code, token, password, timeout
    ) as executor:
        result = await executor.run()
        return result.model_dump()
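The refactor above replaces the old try/finally kernel cleanup with an async context manager, so teardown happens in `__aexit__` even when execution fails. A minimal self-contained sketch of that lifecycle (the `FakeExecuter` class and its canned output are hypothetical stand-ins for the real `JupyterCodeExecuter`, which talks to a kernel over a websocket):

```python
import asyncio
from dataclasses import dataclass, asdict


@dataclass
class ExecutionResult:
    stdout: str = ""
    stderr: str = ""
    result: str = ""


class FakeExecuter:
    """Stand-in for JupyterCodeExecuter: enter, run, clean up on exit."""

    def __init__(self, code: str):
        self.code = code
        self.result = ExecutionResult()

    async def __aenter__(self):
        # the real class creates an HTTP session and starts a kernel here
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # the real class deletes the kernel and closes the session here,
        # even if run() raised
        return False

    async def run(self) -> ExecutionResult:
        # the real class executes self.code over the kernel websocket
        self.result.stdout = f"ran: {self.code}"
        return self.result


async def execute(code: str) -> dict:
    async with FakeExecuter(code) as executor:
        result = await executor.run()
        return asdict(result)  # the real code returns result.model_dump()


print(asyncio.run(execute("1 + 1")))
# → {'stdout': 'ran: 1 + 1', 'stderr': '', 'result': ''}
```

The payoff of the context-manager shape is that every caller gets the same cleanup guarantee without repeating the finally block.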


@@ -9,7 +9,7 @@ log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MAIN"])


def get_sorted_filter_ids(model: dict):
    def get_priority(function_id):
        function = Functions.get_function_by_id(function_id)
        if function is not None and hasattr(function, "valves"):

@@ -33,12 +33,13 @@ def get_sorted_filter_ids(model):
async def process_filter_functions(
    request, filter_functions, filter_type, form_data, extra_params
):
    skip_files = None

    for function in filter_functions:
        filter = function
        filter_id = function.id

        if not filter:
            continue

@@ -48,6 +49,11 @@ async def process_filter_functions(
            function_module, _, _ = load_function_module_by_id(filter_id)
            request.app.state.FUNCTIONS[filter_id] = function_module

        # Prepare handler function
        handler = getattr(function_module, filter_type, None)
        if not handler:
            continue

        # Check if the function has a file_handler variable
        if filter_type == "inlet" and hasattr(function_module, "file_handler"):
            skip_files = function_module.file_handler

@@ -59,11 +65,6 @@ async def process_filter_functions(
                **(valves if valves else {})
            )

        try:
            # Prepare parameters
            sig = inspect.signature(handler)


@@ -68,6 +68,7 @@ from open_webui.utils.misc import (
    get_last_user_message,
    get_last_assistant_message,
    prepend_to_first_user_message_content,
    convert_logit_bias_input_to_json,
)
from open_webui.utils.tools import get_tools
from open_webui.utils.plugin import load_function_module_by_id

@@ -610,11 +611,18 @@ def apply_params_to_form_data(form_data, model):
        if "reasoning_effort" in params:
            form_data["reasoning_effort"] = params["reasoning_effort"]

        if "logit_bias" in params:
            try:
                form_data["logit_bias"] = json.loads(
                    convert_logit_bias_input_to_json(params["logit_bias"])
                )
            except Exception as e:
                print(f"Error parsing logit_bias: {e}")

    return form_data


async def process_chat_payload(request, form_data, user, metadata, model):
    form_data = apply_params_to_form_data(form_data, model)
    log.debug(f"form_data: {form_data}")
@@ -707,9 +715,14 @@ async def process_chat_payload(request, form_data, metadata, user, model):
        raise e

    try:
        filter_functions = [
            Functions.get_function_by_id(filter_id)
            for filter_id in get_sorted_filter_ids(model)
        ]

        form_data, flags = await process_filter_functions(
            request=request,
            filter_functions=filter_functions,
            filter_type="inlet",
            form_data=form_data,
            extra_params=extra_params,
@@ -856,7 +869,7 @@ async def process_chat_payload(request, form_data, metadata, user, model):

async def process_chat_response(
    request, response, form_data, user, metadata, model, events, tasks
):
    async def background_tasks_handler():
        message_map = Chats.get_messages_by_chat_id(metadata["chat_id"])

@@ -1061,9 +1074,14 @@ async def process_chat_response(
        },
        "__metadata__": metadata,
        "__request__": request,
        "__model__": model,
    }

    filter_functions = [
        Functions.get_function_by_id(filter_id)
        for filter_id in get_sorted_filter_ids(model)
    ]
    print(f"{filter_functions=}")

    # Streaming response
    if event_emitter and event_caller:
@@ -1470,7 +1488,7 @@ async def process_chat_response(
                        data, _ = await process_filter_functions(
                            request=request,
                            filter_functions=filter_functions,
                            filter_type="stream",
                            form_data=data,
                            extra_params=extra_params,
@@ -1544,9 +1562,59 @@ async def process_chat_response(
                    value = delta.get("content")

                    reasoning_content = delta.get("reasoning_content")
                    if reasoning_content:
                        if (
                            not content_blocks
                            or content_blocks[-1]["type"] != "reasoning"
                        ):
                            reasoning_block = {
                                "type": "reasoning",
                                "start_tag": "think",
                                "end_tag": "/think",
                                "attributes": {
                                    "type": "reasoning_content"
                                },
                                "content": "",
                                "started_at": time.time(),
                            }
                            content_blocks.append(reasoning_block)
                        else:
                            reasoning_block = content_blocks[-1]

                        reasoning_block["content"] += reasoning_content

                        data = {
                            "content": serialize_content_blocks(
                                content_blocks
                            )
                        }

                    if value:
                        if (
                            content_blocks
                            and content_blocks[-1]["type"] == "reasoning"
                            and content_blocks[-1]
                            .get("attributes", {})
                            .get("type")
                            == "reasoning_content"
                        ):
                            reasoning_block = content_blocks[-1]
                            reasoning_block["ended_at"] = time.time()
                            reasoning_block["duration"] = int(
                                reasoning_block["ended_at"]
                                - reasoning_block["started_at"]
                            )

                            content_blocks.append(
                                {
                                    "type": "text",
                                    "content": "",
                                }
                            )

                        content = f"{content}{value}"

                        if not content_blocks:
                            content_blocks.append(
                                {
@@ -2017,7 +2085,7 @@ async def process_chat_response(
            for event in events:
                event, _ = await process_filter_functions(
                    request=request,
                    filter_functions=filter_functions,
                    filter_type="stream",
                    form_data=event,
                    extra_params=extra_params,

@@ -2029,7 +2097,7 @@ async def process_chat_response(
            async for data in original_generator:
                data, _ = await process_filter_functions(
                    request=request,
                    filter_functions=filter_functions,
                    filter_type="stream",
                    form_data=data,
                    extra_params=extra_params,


@@ -6,6 +6,7 @@ import logging
from datetime import timedelta
from pathlib import Path
from typing import Callable, Optional
import json

import collections.abc

@@ -450,3 +451,15 @@ def parse_ollama_modelfile(model_text):
        data["params"]["messages"] = messages

    return data


def convert_logit_bias_input_to_json(user_input):
    logit_bias_pairs = user_input.split(",")
    logit_bias_json = {}
    for pair in logit_bias_pairs:
        token, bias = pair.split(":")
        token = str(token.strip())
        bias = int(bias.strip())
        bias = 100 if bias > 100 else -100 if bias < -100 else bias
        logit_bias_json[token] = bias
    return json.dumps(logit_bias_json)
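The new helper accepts a comma-separated `token:bias` string and clamps each bias into the [-100, 100] range that OpenAI-style APIs accept, rather than rejecting out-of-range values. A quick self-contained check (the function body mirrors the hunk above; the token IDs are arbitrary examples):

```python
import json


def convert_logit_bias_input_to_json(user_input):
    logit_bias_pairs = user_input.split(",")
    logit_bias_json = {}
    for pair in logit_bias_pairs:
        token, bias = pair.split(":")
        token = str(token.strip())
        bias = int(bias.strip())
        # clamp out-of-range biases instead of rejecting the whole input
        bias = 100 if bias > 100 else -100 if bias < -100 else bias
        logit_bias_json[token] = bias
    return json.dumps(logit_bias_json)


print(convert_logit_bias_input_to_json("50256:-100, 198:250"))
# → {"50256": -100, "198": 100}
```

Note the result is a JSON string; `apply_params_to_form_data` immediately `json.loads`-parses it back into a dict before placing it in the request body.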


@@ -234,7 +234,7 @@ class OAuthManager:
            log.warning(f"OAuth callback error: {e}")
            raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
        user_data: UserInfo = token.get("userinfo")
        if not user_data or auth_manager_config.OAUTH_EMAIL_CLAIM not in user_data:
            user_data: UserInfo = await client.userinfo(token=token)
        if not user_data:
            log.warning(f"OAuth callback failed, user data is missing: {token}")


@@ -62,6 +62,7 @@ def apply_model_params_to_body_openai(params: dict, form_data: dict) -> dict:
        "reasoning_effort": str,
        "seed": lambda x: x,
        "stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x],
        "logit_bias": lambda x: x,
    }
    return apply_model_params_to_body(params, form_data, mappings)
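Each entry in `mappings` pairs a parameter name with a cast applied before the value is copied into the request body; the new `logit_bias` entry is an identity pass-through since the value is already normalized upstream. A simplified, self-contained sketch of how such a table is applied (`apply_params` is a hypothetical stand-in for the real `apply_model_params_to_body`, and the parameter values are made up):

```python
def apply_params(params: dict, form_data: dict, mappings: dict) -> dict:
    # copy each known parameter into the body, coercing it via its mapping
    for key, cast in mappings.items():
        value = params.get(key)
        if value is not None:
            form_data[key] = cast(value)
    return form_data


mappings = {
    "temperature": float,
    # "stop" strings may contain literal "\n" sequences typed by the user;
    # unicode_escape decoding turns them into real newlines
    "stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x],
    "logit_bias": lambda x: x,  # pass-through
}

body = apply_params(
    {"temperature": "0.7", "stop": ["\\n\\n"], "logit_bias": {"198": -100}},
    {},
    mappings,
)
print(body)
# → {'temperature': 0.7, 'stop': ['\n\n'], 'logit_bias': {'198': -100}}
```

The lambda-as-cast convention keeps the function generic: adding a parameter is a one-line table change, which is exactly what this hunk does.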


@@ -110,7 +110,7 @@ class PDFGenerator:
        # When running using `pip install -e .` the static directory is in the site packages.
        # This path only works if `open-webui serve` is run from the root of this project.
        if not FONTS_DIR.exists():
            FONTS_DIR = Path(".") / "backend" / "static" / "fonts"
        pdf.add_font("NotoSans", "", f"{FONTS_DIR}/NotoSans-Regular.ttf")
        pdf.add_font("NotoSans", "b", f"{FONTS_DIR}/NotoSans-Bold.ttf")


@@ -104,7 +104,7 @@ def replace_prompt_variable(template: str, prompt: str) -> str:

def replace_messages_variable(
    template: str, messages: Optional[list[dict]] = None
) -> str:
    def replacement_function(match):
        full_match = match.group(0)


@@ -1,5 +1,5 @@
fastapi==0.115.7
uvicorn[standard]==0.34.0
pydantic==2.10.6
python-multipart==0.0.18

@@ -13,14 +13,14 @@ async-timeout
aiocache
aiofiles

sqlalchemy==2.0.38
alembic==1.14.0
peewee==3.17.9
peewee-migrate==1.12.2
psycopg2-binary==2.9.9
pgvector==0.3.5
PyMySQL==1.1.1
bcrypt==4.3.0

pymongo
redis

@@ -40,8 +40,8 @@ anthropic
google-generativeai==0.7.2
tiktoken

langchain==0.3.19
langchain-community==0.3.18

fake-useragent==1.5.1
chromadb==0.6.2

@@ -49,6 +49,8 @@ pymilvus==2.5.0
qdrant-client~=1.12.0
opensearch-py==2.8.0
playwright==1.49.1 # Caution: version must match docker-compose.playwright.yaml
elasticsearch==8.17.1

transformers
sentence-transformers==3.3.1

@@ -85,7 +87,7 @@ faster-whisper==1.1.1
PyJWT[crypto]==2.10.1
authlib==1.4.1

black==25.1.0
langfuse==2.44.0
youtube-transcript-api==0.6.3
pytube==15.0.0

package-lock.json generated

@@ -1,13 +1,14 @@
{
    "name": "open-webui",
    "version": "0.5.19",
    "lockfileVersion": 3,
    "requires": true,
    "packages": {
        "": {
            "name": "open-webui",
            "version": "0.5.19",
            "dependencies": {
                "@azure/msal-browser": "^4.5.0",
                "@codemirror/lang-javascript": "^6.2.2",
                "@codemirror/lang-python": "^6.1.6",
                "@codemirror/language-data": "^6.5.1",

@@ -135,6 +136,27 @@
                "node": ">=6.0.0"
            }
        },
        "node_modules/@azure/msal-browser": {
            "version": "4.5.0",
            "resolved": "https://registry.npmjs.org/@azure/msal-browser/-/msal-browser-4.5.0.tgz",
            "integrity": "sha512-H7mWmu8yI0n0XxhJobrgncXI6IU5h8DKMiWDHL5y+Dc58cdg26GbmaMUehbUkdKAQV2OTiFa4FUa6Fdu/wIxBg==",
            "license": "MIT",
            "dependencies": {
                "@azure/msal-common": "15.2.0"
            },
            "engines": {
                "node": ">=0.8.0"
            }
        },
        "node_modules/@azure/msal-common": {
            "version": "15.2.0",
            "resolved": "https://registry.npmjs.org/@azure/msal-common/-/msal-common-15.2.0.tgz",
            "integrity": "sha512-HiYfGAKthisUYqHG1nImCf/uzcyS31wng3o+CycWLIM9chnYJ9Lk6jZ30Y6YiYYpTQ9+z/FGUpiKKekd3Arc0A==",
            "license": "MIT",
            "engines": {
                "node": ">=0.8.0"
            }
        },
        "node_modules/@babel/runtime": {
            "version": "7.26.9",
            "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.26.9.tgz",


@@ -1,6 +1,6 @@
{
    "name": "open-webui",
    "version": "0.5.19",
    "private": true,
    "scripts": {
        "dev": "npm run pyodide:fetch && vite dev --host",

@@ -51,6 +51,7 @@
    },
    "type": "module",
    "dependencies": {
        "@azure/msal-browser": "^4.5.0",
        "@codemirror/lang-javascript": "^6.2.2",
        "@codemirror/lang-python": "^6.1.6",
        "@codemirror/language-data": "^6.5.1",


@@ -7,7 +7,7 @@ authors = [
license = { file = "LICENSE" }
dependencies = [
    "fastapi==0.115.7",
    "uvicorn[standard]==0.34.0",
    "pydantic==2.10.6",
    "python-multipart==0.0.18",

@@ -21,14 +21,14 @@ dependencies = [
    "aiocache",
    "aiofiles",

    "sqlalchemy==2.0.38",
    "alembic==1.14.0",
    "peewee==3.17.9",
    "peewee-migrate==1.12.2",
    "psycopg2-binary==2.9.9",
    "pgvector==0.3.5",
    "PyMySQL==1.1.1",
    "bcrypt==4.3.0",

    "pymongo",
    "redis",

@@ -48,8 +48,8 @@ dependencies = [
    "google-generativeai==0.7.2",
    "tiktoken",

    "langchain==0.3.19",
    "langchain-community==0.3.18",

    "fake-useragent==1.5.1",
    "chromadb==0.6.2",

@@ -57,6 +57,7 @@ dependencies = [
    "qdrant-client~=1.12.0",
    "opensearch-py==2.8.0",
    "playwright==1.49.1",
    "elasticsearch==8.17.1",

    "transformers",
    "sentence-transformers==3.3.1",

@@ -92,7 +93,7 @@ dependencies = [
    "PyJWT[crypto]==2.10.1",
    "authlib==1.4.1",

    "black==25.1.0",
    "langfuse==2.44.0",
    "youtube-transcript-api==0.6.3",
    "pytube==15.0.0",


@@ -2,12 +2,14 @@
<html lang="en">
	<head>
		<meta charset="utf-8" />
		<link rel="icon" type="image/png" href="/static/favicon.png" />
		<link rel="icon" type="image/png" href="/static/favicon-96x96.png" sizes="96x96" />
		<link rel="icon" type="image/svg+xml" href="/static/favicon.svg" />
		<link rel="shortcut icon" href="/static/favicon.ico" />
		<link rel="apple-touch-icon" sizes="180x180" href="/static/apple-touch-icon.png" />
		<meta name="apple-mobile-web-app-title" content="Open WebUI" />
		<link rel="manifest" href="/manifest.json" />
		<meta
			name="viewport"
			content="width=device-width, initial-scale=1, maximum-scale=1, viewport-fit=cover"

@@ -74,6 +76,28 @@
					}
				}
			});

			function setSplashImage() {
				const logo = document.getElementById('logo');
				const isDarkMode = document.documentElement.classList.contains('dark');

				if (isDarkMode) {
					const darkImage = new Image();
					darkImage.src = '/static/splash-dark.png';

					darkImage.onload = () => {
						logo.src = '/static/splash-dark.png';
						logo.style.filter = ''; // Ensure no inversion is applied if splash-dark.png exists
					};

					darkImage.onerror = () => {
						logo.style.filter = 'invert(1)'; // Invert image if splash-dark.png is missing
					};
				}
			}

			// Runs after classes are assigned
			window.onload = setSplashImage;
		})();
	</script>

@@ -176,10 +200,6 @@
			background: #000;
		}

		html.her #splash-screen {
			background: #983724;
		}


@@ -1,5 +1,5 @@
<script>
	import { getContext, onMount } from 'svelte';
	const i18n = getContext('i18n');

	import { WEBUI_BASE_URL } from '$lib/constants';

@@ -10,6 +10,32 @@
	export let show = true;
	export let getStartedHandler = () => {};

	function setLogoImage() {
		const logo = document.getElementById('logo');

		if (logo) {
			const isDarkMode = document.documentElement.classList.contains('dark');

			if (isDarkMode) {
				const darkImage = new Image();
				darkImage.src = '/static/favicon-dark.png';

				darkImage.onload = () => {
					logo.src = '/static/favicon-dark.png';
					logo.style.filter = ''; // Ensure no inversion is applied if favicon-dark.png exists
				};

				darkImage.onerror = () => {
					logo.style.filter = 'invert(1)'; // Invert image if favicon-dark.png is missing
				};
			}
		}
	}

	$: if (show) {
		setLogoImage();
	}
</script>

{#if show}

@@ -18,6 +44,7 @@
	<div class="flex space-x-2">
		<div class=" self-center">
			<img
				id="logo"
				crossorigin="anonymous"
				src="{WEBUI_BASE_URL}/static/favicon.png"
				class=" w-6 rounded-full"


@@ -1,4 +1,6 @@
<script lang="ts">
	import DOMPurify from 'dompurify';

	import { getBackendConfig, getVersionUpdates, getWebhookUrl, updateWebhookUrl } from '$lib/apis';
	import {
		getAdminConfig,

@@ -220,15 +222,44 @@
					<div class="">
						{$i18n.t('License')}
					</div>

					{#if $config?.license_metadata}
						<a
							href="https://docs.openwebui.com/enterprise"
							target="_blank"
							class="text-gray-500 mt-0.5"
						>
							<span class=" capitalize text-black dark:text-white"
								>{$config?.license_metadata?.type}
								license</span
							>
							registered to
							<span class=" capitalize text-black dark:text-white"
								>{$config?.license_metadata?.organization_name}</span
							>
							for
							<span class=" font-medium text-black dark:text-white"
								>{$config?.license_metadata?.seats ?? 'Unlimited'} users.</span
							>
						</a>

						{#if $config?.license_metadata?.html}
							<div class="mt-0.5">
								{@html DOMPurify.sanitize($config?.license_metadata?.html)}
							</div>
						{/if}
					{:else}
						<a
							class=" text-xs hover:underline"
							href="https://docs.openwebui.com/enterprise"
							target="_blank"
						>
							<span class="text-gray-500">
								{$i18n.t(
									'Upgrade to a licensed plan for enhanced capabilities, including custom theming and branding, and dedicated support.'
								)}
							</span>
						</a>
					{/if}
				</div>

				<!-- <button


@@ -29,7 +29,8 @@
		'tavily',
		'jina',
		'bing',
		'exa',
		'perplexity'
	];

	let youtubeLanguage = 'en';

@@ -361,6 +362,17 @@
							/>
						</div>
					</div>
				{:else if webConfig.search.engine === 'perplexity'}
					<div>
						<div class=" self-center text-xs font-medium mb-1">
							{$i18n.t('Perplexity API Key')}
						</div>
						<SensitiveInput
							placeholder={$i18n.t('Enter Perplexity API Key')}
							bind:value={webConfig.search.perplexity_api_key}
						/>
					</div>
				{:else if webConfig.search.engine === 'bing'}
					<div class="mb-2.5 flex w-full flex-col">
						<div>


@@ -28,6 +28,7 @@
	import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
	import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
	import About from '$lib/components/chat/Settings/About.svelte';
	import Banner from '$lib/components/common/Banner.svelte';

	const i18n = getContext('i18n');

@@ -124,12 +125,43 @@
	/>
	<UserChatsModal bind:show={showUserChatsModal} user={selectedUser} />

	{#if ($config?.license_metadata?.seats ?? null) !== null && users.length > $config?.license_metadata?.seats}
		<div class=" mt-1 mb-2 text-xs text-red-500">
			<Banner
				className="mx-0"
				banner={{
					type: 'error',
					title: 'License Error',
					content:
						'Exceeded the number of seats in your license. Please contact support to increase the number of seats.',
					dismissable: true
				}}
			/>
		</div>
	{/if}

	<div class="mt-0.5 mb-2 gap-1 flex flex-col md:flex-row justify-between">
		<div class="flex md:self-center text-lg font-medium px-0.5">
			<div class="flex-shrink-0">
				{$i18n.t('Users')}
			</div>
			<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />

			{#if ($config?.license_metadata?.seats ?? null) !== null}
				{#if users.length > $config?.license_metadata?.seats}
					<span class="text-lg font-medium text-red-500"
						>{users.length} of {$config?.license_metadata?.seats}
						<span class="text-sm font-normal">available users</span></span
					>
				{:else}
					<span class="text-lg font-medium text-gray-500 dark:text-gray-300"
						>{users.length} of {$config?.license_metadata?.seats}
						<span class="text-sm font-normal">available users</span></span
					>
				{/if}
			{:else}
				<span class="text-lg font-medium text-gray-500 dark:text-gray-300">{users.length}</span>
			{/if}
		</div>

		<div class="flex gap-1">
<div class="flex gap-1"> <div class="flex gap-1">


@@ -1940,6 +1940,30 @@
		{#if $banners.length > 0 && !history.currentId && !$chatId && selectedModels.length <= 1}
			<div class="absolute top-12 left-0 right-0 w-full z-30">
				<div class=" flex flex-col gap-1 w-full">
					{#if ($config?.license_metadata?.type ?? null) === 'trial'}
						<Banner
							banner={{
								type: 'info',
								title: 'Trial License',
								content: $i18n.t(
									'You are currently using a trial license. Please contact support to upgrade your license.'
								)
							}}
						/>
					{/if}

					{#if ($config?.license_metadata?.seats ?? null) !== null && $config?.user_count > $config?.license_metadata?.seats}
						<Banner
							banner={{
								type: 'error',
								title: 'License Error',
								content: $i18n.t(
									'Exceeded the number of seats in your license. Please contact support to increase the number of seats.'
								)
							}}
						/>
					{/if}

					{#each $banners.filter( (b) => (b.dismissible ? !JSON.parse(localStorage.getItem('dismissedBannerIds') ?? '[]').includes(b.id) : true) ) as banner}
						<Banner
							{banner}


@@ -121,7 +121,8 @@
			toast.error('Model not selected');
			return;
		}

		const explainText = $i18n.t('Explain this section to me in more detail');
		prompt = `${explainText}\n\n\`\`\`\n${selectedText}\n\`\`\``;

		responseContent = '';
		const [res, controller] = await chatCompletion(localStorage.token, {

@@ -246,7 +247,7 @@
					>
						<ChatBubble className="size-3 shrink-0" />
						<div class="shrink-0">{$i18n.t('Ask')}</div>
					</button>

					<button
						class="px-1 hover:bg-gray-50 dark:hover:bg-gray-800 rounded-sm flex items-center gap-1 min-w-fit"

@@ -257,7 +258,7 @@
					>
						<LightBlub className="size-3 shrink-0" />
						<div class="shrink-0">{$i18n.t('Explain')}</div>
					</button>
				</div>
			{:else}


@ -676,12 +676,13 @@
bind:value={prompt} bind:value={prompt}
id="chat-input" id="chat-input"
messageInput={true} messageInput={true}
shiftEnter={!$mobile || shiftEnter={!($settings?.ctrlEnterToSend ?? false) &&
(!$mobile ||
!( !(
'ontouchstart' in window || 'ontouchstart' in window ||
navigator.maxTouchPoints > 0 || navigator.maxTouchPoints > 0 ||
navigator.msMaxTouchPoints > 0 navigator.msMaxTouchPoints > 0
)} ))}
placeholder={placeholder ? placeholder : $i18n.t('Send a Message')} placeholder={placeholder ? placeholder : $i18n.t('Send a Message')}
largeTextAsFile={$settings?.largeTextAsFile ?? false} largeTextAsFile={$settings?.largeTextAsFile ?? false}
autocomplete={$config?.features.enable_autocomplete_generation} autocomplete={$config?.features.enable_autocomplete_generation}
@ -805,22 +806,23 @@
navigator.msMaxTouchPoints > 0 navigator.msMaxTouchPoints > 0
) )
) { ) {
// Prevent Enter key from creating a new line // Uses keyCode '13' for Enter key for chinese/japanese keyboards.
// Uses keyCode '13' for Enter key for chinese/japanese keyboards //
if (e.keyCode === 13 && !e.shiftKey) { // Depending on the user's settings, it will send the message
e.preventDefault(); // either when Enter is pressed or when Ctrl+Enter is pressed.
} const enterPressed =
($settings?.ctrlEnterToSend ?? false)
? (e.key === 'Enter' || e.keyCode === 13) && isCtrlPressed
: (e.key === 'Enter' || e.keyCode === 13) && !e.shiftKey;
// Submit the prompt when Enter key is pressed if (enterPressed) {
if ( e.preventDefault();
(prompt !== '' || files.length > 0) && if (prompt !== '' || files.length > 0) {
e.keyCode === 13 &&
!e.shiftKey
) {
dispatch('submit', prompt); dispatch('submit', prompt);
} }
} }
} }
}
if (e.key === 'Escape') { if (e.key === 'Escape') {
console.log('Escape'); console.log('Escape');
@ -880,38 +882,17 @@
class="scrollbar-hidden bg-transparent dark:text-gray-100 outline-hidden w-full pt-3 px-1 resize-none" class="scrollbar-hidden bg-transparent dark:text-gray-100 outline-hidden w-full pt-3 px-1 resize-none"
placeholder={placeholder ? placeholder : $i18n.t('Send a Message')} placeholder={placeholder ? placeholder : $i18n.t('Send a Message')}
bind:value={prompt} bind:value={prompt}
on:keypress={(e) => {
if (
!$mobile ||
!(
'ontouchstart' in window ||
navigator.maxTouchPoints > 0 ||
navigator.msMaxTouchPoints > 0
)
) {
// Prevent Enter key from creating a new line
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
}
// Submit the prompt when Enter key is pressed
if (
(prompt !== '' || files.length > 0) &&
e.key === 'Enter' &&
!e.shiftKey
) {
dispatch('submit', prompt);
}
}
}}
on:keydown={async (e) => { on:keydown={async (e) => {
const isCtrlPressed = e.ctrlKey || e.metaKey; // metaKey is for Cmd key on Mac const isCtrlPressed = e.ctrlKey || e.metaKey; // metaKey is for Cmd key on Mac
console.log('keydown', e);
const commandsContainerElement = const commandsContainerElement =
document.getElementById('commands-container'); document.getElementById('commands-container');
if (e.key === 'Escape') { if (e.key === 'Escape') {
stopResponse(); stopResponse();
} }
// Command/Ctrl + Shift + Enter to submit a message pair // Command/Ctrl + Shift + Enter to submit a message pair
if (isCtrlPressed && e.key === 'Enter' && e.shiftKey) { if (isCtrlPressed && e.key === 'Enter' && e.shiftKey) {
e.preventDefault(); e.preventDefault();
@ -947,6 +928,7 @@
editButton?.click(); editButton?.click();
} }
if (commandsContainerElement) {
if (commandsContainerElement && e.key === 'ArrowUp') { if (commandsContainerElement && e.key === 'ArrowUp') {
e.preventDefault(); e.preventDefault();
commandsElement.selectUp(); commandsElement.selectUp();
@ -991,7 +973,38 @@
]?.at(-1); ]?.at(-1);
commandOptionButton?.click(); commandOptionButton?.click();
} else if (e.key === 'Tab') { }
} else {
if (
!$mobile ||
!(
'ontouchstart' in window ||
navigator.maxTouchPoints > 0 ||
navigator.msMaxTouchPoints > 0
)
) {
console.log('keypress', e);
// Prevent Enter key from creating a new line
const isCtrlPressed = e.ctrlKey || e.metaKey;
const enterPressed =
($settings?.ctrlEnterToSend ?? false)
? (e.key === 'Enter' || e.keyCode === 13) && isCtrlPressed
: (e.key === 'Enter' || e.keyCode === 13) && !e.shiftKey;
console.log('Enter pressed:', enterPressed);
if (enterPressed) {
e.preventDefault();
}
// Submit the prompt when Enter key is pressed
if ((prompt !== '' || files.length > 0) && enterPressed) {
dispatch('submit', prompt);
}
}
}
if (e.key === 'Tab') {
const words = findWordIndices(prompt); const words = findWordIndices(prompt);
if (words.length > 0) { if (words.length > 0) {
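The handler above folds the old keypress logic into a single `enterPressed` predicate driven by the `ctrlEnterToSend` setting. A standalone sketch of that decision — `shouldSubmit` is a hypothetical helper name, and plain objects stand in for the DOM event and the `$settings` store:

```javascript
// Sketch of the submit decision implemented above; `shouldSubmit` is a
// hypothetical helper name, and plain objects stand in for the DOM event
// and the $settings store.
function shouldSubmit(event, settings, prompt, fileCount = 0) {
	const isCtrlPressed = !!(event.ctrlKey || event.metaKey); // metaKey is Cmd on Mac
	// keyCode 13 also matches Enter confirmed from Chinese/Japanese IMEs
	const isEnter = event.key === 'Enter' || event.keyCode === 13;
	const enterPressed = (settings.ctrlEnterToSend ?? false)
		? isEnter && isCtrlPressed
		: isEnter && !event.shiftKey;
	// Only submit when there is a prompt or at least one attached file
	return Boolean(enterPressed && (prompt !== '' || fileCount > 0));
}
```

With `ctrlEnterToSend` off, a plain Enter submits and Shift+Enter inserts a newline; with it on, only Ctrl/Cmd+Enter submits.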

View file

@ -231,7 +231,6 @@
mediaRecorder.onstart = () => { mediaRecorder.onstart = () => {
console.log('Recording started'); console.log('Recording started');
audioChunks = []; audioChunks = [];
analyseAudio(audioStream);
}; };
mediaRecorder.ondataavailable = (event) => { mediaRecorder.ondataavailable = (event) => {
@ -245,7 +244,7 @@
stopRecordingCallback(); stopRecordingCallback();
}; };
mediaRecorder.start(); analyseAudio(audioStream);
} }
}; };
@ -321,6 +320,9 @@
if (hasSound) { if (hasSound) {
// BIG RED TEXT // BIG RED TEXT
console.log('%c%s', 'color: red; font-size: 20px;', '🔊 Sound detected'); console.log('%c%s', 'color: red; font-size: 20px;', '🔊 Sound detected');
if (mediaRecorder && mediaRecorder.state !== 'recording') {
mediaRecorder.start();
}
if (!hasStartedSpeaking) { if (!hasStartedSpeaking) {
hasStartedSpeaking = true; hasStartedSpeaking = true;
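The hunks above reorder the recording flow: `analyseAudio(audioStream)` now runs at setup time instead of in `onstart`, and `mediaRecorder.start()` is deferred until sound is actually detected, guarded by a state check. A minimal sketch of that guard — `startOnSound` is a hypothetical name, and a plain object stands in for the MediaRecorder:

```javascript
// Start the recorder lazily on the first detected sound; calling start() on
// a recorder that is already recording would throw, hence the state check.
function startOnSound(recorder) {
	if (recorder && recorder.state !== 'recording') {
		recorder.start();
		return true;
	}
	return false;
}
```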

View file

@ -228,7 +228,8 @@
role: 'user', role: 'user',
content: userPrompt, content: userPrompt,
...(history.messages[messageId].files && { files: history.messages[messageId].files }), ...(history.messages[messageId].files && { files: history.messages[messageId].files }),
models: selectedModels models: selectedModels,
timestamp: Math.floor(Date.now() / 1000) // Unix epoch
}; };
let messageParentId = history.messages[messageId].parentId; let messageParentId = history.messages[messageId].parentId;

View file

@ -14,6 +14,9 @@
import { config } from '$lib/stores'; import { config } from '$lib/stores';
import { executeCode } from '$lib/apis/utils'; import { executeCode } from '$lib/apis/utils';
import { toast } from 'svelte-sonner'; import { toast } from 'svelte-sonner';
import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
import ChevronUpDown from '$lib/components/icons/ChevronUpDown.svelte';
import CommandLine from '$lib/components/icons/CommandLine.svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
@ -57,9 +60,14 @@
let result = null; let result = null;
let files = null; let files = null;
let collapsed = false;
let copied = false; let copied = false;
let saved = false; let saved = false;
const collapseCodeBlock = () => {
collapsed = !collapsed;
};
const saveCode = () => { const saveCode = () => {
saved = true; saved = true;
@ -418,18 +426,39 @@
class="sticky {stickyButtonsClassName} mb-1 py-1 pr-2.5 flex items-center justify-end z-10 text-xs text-black dark:text-white" class="sticky {stickyButtonsClassName} mb-1 py-1 pr-2.5 flex items-center justify-end z-10 text-xs text-black dark:text-white"
> >
<div class="flex items-center gap-0.5 translate-y-[1px]"> <div class="flex items-center gap-0.5 translate-y-[1px]">
<button
class="flex gap-1 items-center bg-none border-none bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 transition rounded-md px-1.5 py-0.5"
on:click={collapseCodeBlock}
>
<div>
<ChevronUpDown className="size-3" />
</div>
<div>
{collapsed ? $i18n.t('Expand') : $i18n.t('Collapse')}
</div>
</button>
{#if lang.toLowerCase() === 'python' || lang.toLowerCase() === 'py' || (lang === '' && checkPythonCode(code))} {#if lang.toLowerCase() === 'python' || lang.toLowerCase() === 'py' || (lang === '' && checkPythonCode(code))}
{#if executing} {#if executing}
<div class="run-code-button bg-none border-none p-1 cursor-not-allowed">Running</div> <div class="run-code-button bg-none border-none p-1 cursor-not-allowed">Running</div>
{:else if run} {:else if run}
<button
	class="run-code-button bg-none border-none bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 transition rounded-md px-1.5 py-0.5"
	on:click={async () => {
		code = _code;
		await tick();
		executePython(code);
	}}>{$i18n.t('Run')}</button
>
<button
	class="flex gap-1 items-center run-code-button bg-none border-none bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 transition rounded-md px-1.5 py-0.5"
	on:click={async () => {
		code = _code;
		await tick();
		executePython(code);
	}}
>
	<div>
		<CommandLine className="size-3" />
	</div>
	<div>
		{$i18n.t('Run')}
	</div>
</button>
{/if} {/if}
{/if} {/if}
@ -457,6 +486,8 @@
: 'rounded-b-lg'} overflow-hidden" : 'rounded-b-lg'} overflow-hidden"
> >
<div class=" pt-7 bg-gray-50 dark:bg-gray-850"></div> <div class=" pt-7 bg-gray-50 dark:bg-gray-850"></div>
{#if !collapsed}
<CodeEditor <CodeEditor
value={code} value={code}
{id} {id}
@ -468,8 +499,20 @@
_code = value; _code = value;
}} }}
/> />
{:else}
<div
class="bg-gray-50 dark:bg-black dark:text-white rounded-b-lg! pt-2 pb-2 px-4 flex flex-col gap-2 text-xs"
>
<span class="text-gray-500 italic">
{$i18n.t('{{COUNT}} hidden lines', {
COUNT: code.split('\n').length
})}
</span>
</div>
{/if}
</div> </div>
{#if !collapsed}
<div <div
id="plt-canvas-{id}" id="plt-canvas-{id}"
class="bg-gray-50 dark:bg-[#202123] dark:text-white max-w-full overflow-x-auto scrollbar-hidden" class="bg-gray-50 dark:bg-[#202123] dark:text-white max-w-full overflow-x-auto scrollbar-hidden"
@ -518,5 +561,6 @@
</div> </div>
{/if} {/if}
{/if} {/if}
{/if}
</div> </div>
</div> </div>

View file

@ -17,6 +17,7 @@
import Collapsible from '$lib/components/common/Collapsible.svelte'; import Collapsible from '$lib/components/common/Collapsible.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte'; import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte';
import Source from './Source.svelte';
const dispatch = createEventDispatcher(); const dispatch = createEventDispatcher();
@ -91,7 +92,7 @@
onCode={(value) => { onCode={(value) => {
dispatch('code', value); dispatch('code', value);
}} }}
onSave={(e) => { onSave={(value) => {
dispatch('update', { dispatch('update', {
raw: token.raw, raw: token.raw,
oldContent: token.text, oldContent: token.text,
@ -261,6 +262,8 @@
{@html html} {@html html}
{:else if token.text.includes(`<iframe src="${WEBUI_BASE_URL}/api/v1/files/`)} {:else if token.text.includes(`<iframe src="${WEBUI_BASE_URL}/api/v1/files/`)}
{@html `${token.text}`} {@html `${token.text}`}
{:else if token.text.includes(`<source_id`)}
<Source {id} {token} onClick={onSourceClick} />
{:else} {:else}
{token.text} {token.text}
{/if} {/if}

View file

@ -52,12 +52,17 @@
export let className = 'w-[32rem]'; export let className = 'w-[32rem]';
export let triggerClassName = 'text-lg'; export let triggerClassName = 'text-lg';
let tagsContainerElement;
let show = false; let show = false;
let tags = [];
let selectedModel = ''; let selectedModel = '';
$: selectedModel = items.find((item) => item.value === value) ?? ''; $: selectedModel = items.find((item) => item.value === value) ?? '';
let searchValue = ''; let searchValue = '';
let selectedTag = '';
let ollamaVersion = null; let ollamaVersion = null;
let selectedModelIdx = 0; let selectedModelIdx = 0;
@ -79,10 +84,23 @@
); );
$: filteredItems = searchValue
	? fuse.search(searchValue).map((e) => {
			return e.item;
		})
	: items;
$: filteredItems = searchValue
	? fuse
			.search(searchValue)
			.map((e) => {
				return e.item;
			})
			.filter((item) => {
				if (selectedTag === '') {
					return true;
				}
				return item.model?.info?.meta?.tags?.map((tag) => tag.name).includes(selectedTag);
			})
	: items.filter((item) => {
			if (selectedTag === '') {
				return true;
			}
			return item.model?.info?.meta?.tags?.map((tag) => tag.name).includes(selectedTag);
		});
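Both branches of the reactive statement above apply the same tag filter. Factored out as a standalone helper (`filterByTag` is a hypothetical name), it reads:

```javascript
// Keep an item when no tag is selected, or when the item's model carries the
// selected tag; optional chaining guards models without tag metadata.
function filterByTag(items, selectedTag) {
	return items.filter((item) => {
		if (selectedTag === '') {
			return true;
		}
		return (item.model?.info?.meta?.tags ?? []).map((tag) => tag.name).includes(selectedTag);
	});
}
```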
const pullModelHandler = async () => { const pullModelHandler = async () => {
const sanitizedModelTag = searchValue.trim().replace(/^ollama\s+(run|pull)\s+/, ''); const sanitizedModelTag = searchValue.trim().replace(/^ollama\s+(run|pull)\s+/, '');
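The `sanitizedModelTag` line above strips a pasted `ollama run`/`ollama pull` prefix so users can paste the full CLI command into the search box. In isolation:

```javascript
// Strip a leading "ollama run " or "ollama pull " so a pasted CLI command
// still resolves to the bare model tag.
const sanitizeModelTag = (value) => value.trim().replace(/^ollama\s+(run|pull)\s+/, '');
```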
@ -214,6 +232,13 @@
onMount(async () => { onMount(async () => {
ollamaVersion = await getOllamaVersion(localStorage.token).catch((error) => false); ollamaVersion = await getOllamaVersion(localStorage.token).catch((error) => false);
if (items) {
tags = items.flatMap((item) => item.model?.info?.meta?.tags ?? []).map((tag) => tag.name);
// Remove duplicates and sort
tags = Array.from(new Set(tags)).sort((a, b) => a.localeCompare(b));
}
}); });
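The `onMount` addition above builds the tag row once from the loaded models. Extracted as a helper (`collectTags` is a hypothetical name):

```javascript
// Flatten every model's tag names, drop duplicates via a Set, and sort with
// localeCompare so the tag row renders in a stable, locale-aware order.
function collectTags(items) {
	const tags = items.flatMap((item) => item.model?.info?.meta?.tags ?? []).map((tag) => tag.name);
	return Array.from(new Set(tags)).sort((a, b) => a.localeCompare(b));
}
```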
const cancelModelPullHandler = async (model: string) => { const cancelModelPullHandler = async (model: string) => {
@ -269,7 +294,7 @@
> >
<slot> <slot>
{#if searchEnabled} {#if searchEnabled}
<div class="flex items-center gap-2.5 px-5 mt-3.5 mb-3"> <div class="flex items-center gap-2.5 px-5 mt-3.5 mb-1.5">
<Search className="size-4" strokeWidth="2.5" /> <Search className="size-4" strokeWidth="2.5" />
<input <input
@ -297,11 +322,42 @@
}} }}
/> />
</div> </div>
<hr class="border-gray-100 dark:border-gray-850" />
{/if} {/if}
<div class="px-3 my-2 max-h-64 overflow-y-auto scrollbar-hidden group"> <div class="px-3 mb-2 max-h-64 overflow-y-auto scrollbar-hidden group relative">
{#if tags}
<div class=" flex w-full sticky">
<div
class="flex gap-1 scrollbar-none overflow-x-auto w-fit text-center text-sm font-medium rounded-full bg-transparent px-1.5 pb-0.5"
bind:this={tagsContainerElement}
>
<button
class="min-w-fit outline-none p-1.5 {selectedTag === ''
? ''
: 'text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'} transition capitalize"
on:click={() => {
selectedTag = '';
}}
>
{$i18n.t('All')}
</button>
{#each tags as tag}
<button
class="min-w-fit outline-none p-1.5 {selectedTag === tag
? ''
: 'text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'} transition capitalize"
on:click={() => {
selectedTag = tag;
}}
>
{tag}
</button>
{/each}
</div>
</div>
{/if}
{#each filteredItems as item, index} {#each filteredItems as item, index}
<button <button
aria-label="model-item" aria-label="model-item"
@ -441,11 +497,13 @@
{/if} {/if}
{#if !$mobile && (item?.model?.info?.meta?.tags ?? []).length > 0} {#if !$mobile && (item?.model?.info?.meta?.tags ?? []).length > 0}
<div class="flex gap-0.5 self-center items-center h-full translate-y-[0.5px]">
	{#each item.model?.info?.meta.tags as tag}
		<Tooltip content={tag.name}>
			<div
				class=" text-xs font-bold px-1 rounded-sm uppercase line-clamp-1 bg-gray-500/20 text-gray-700 dark:text-gray-200"
			>
				{tag.name}
			</div>
<div
	class="flex gap-0.5 self-center items-center h-full translate-y-[0.5px] overflow-x-auto scrollbar-none"
>
	{#each item.model?.info?.meta.tags as tag}
		<Tooltip content={tag.name} className="flex-shrink-0">
			<div
				class=" text-xs font-bold px-1 rounded-sm uppercase bg-gray-500/20 text-gray-700 dark:text-gray-200"
			>
				{tag.name}
			</div>
@ -575,7 +633,7 @@
</div> </div>
{#if showTemporaryChatControl} {#if showTemporaryChatControl}
<hr class="border-gray-100 dark:border-gray-850" /> <hr class="border-gray-100 dark:border-gray-800" />
<div class="flex items-center mx-2 my-2"> <div class="flex items-center mx-2 my-2">
<button <button

View file

@ -102,7 +102,7 @@
{/if} {/if}
<div <div
class="w-full text-3xl text-gray-800 dark:text-gray-100 font-medium text-center flex items-center gap-4 font-primary" class="w-full text-3xl text-gray-800 dark:text-gray-100 text-center flex items-center gap-4 font-primary"
> >
<div class="w-full flex flex-col justify-center items-center"> <div class="w-full flex flex-col justify-center items-center">
<div class="flex flex-row justify-center gap-3 @sm:gap-3.5 w-fit px-5"> <div class="flex flex-row justify-center gap-3 @sm:gap-3.5 w-fit px-5">
@ -126,7 +126,7 @@
($i18n.language === 'dg-DG' ($i18n.language === 'dg-DG'
? `/doge.png` ? `/doge.png`
: `${WEBUI_BASE_URL}/static/favicon.png`)} : `${WEBUI_BASE_URL}/static/favicon.png`)}
class=" size-9 @sm:size-10 rounded-full border-[1px] border-gray-200 dark:border-none" class=" size-9 @sm:size-10 rounded-full border-[1px] border-gray-100 dark:border-none"
alt="logo" alt="logo"
draggable="false" draggable="false"
/> />

View file

@ -106,12 +106,16 @@
<hr class=" border-gray-100 dark:border-gray-850" /> <hr class=" border-gray-100 dark:border-gray-850" />
<div class="mt-2 text-xs text-gray-400 dark:text-gray-500">
	Emoji graphics provided by
	<a href="https://github.com/jdecked/twemoji" target="_blank">Twemoji</a>, licensed under
	<a href="https://creativecommons.org/licenses/by/4.0/" target="_blank">CC-BY 4.0</a>.
</div>
{#if $config?.license_metadata}
	<div class="mb-2 text-xs">
		{#if !$WEBUI_NAME.includes('Open WebUI')}
			<span class=" text-gray-500 dark:text-gray-300 font-medium">{$WEBUI_NAME}</span> -
		{/if}
		<span class=" capitalize">{$config?.license_metadata?.type}</span> license purchased by
		<span class=" capitalize">{$config?.license_metadata?.organization_name}</span>
	</div>
{:else}
<div class="flex space-x-1"> <div class="flex space-x-1">
<a href="https://discord.gg/5rJgQTnV4s" target="_blank"> <a href="https://discord.gg/5rJgQTnV4s" target="_blank">
<img <img
@ -134,6 +138,13 @@
/> />
</a> </a>
</div> </div>
{/if}
<div class="mt-2 text-xs text-gray-400 dark:text-gray-500">
Emoji graphics provided by
<a href="https://github.com/jdecked/twemoji" target="_blank">Twemoji</a>, licensed under
<a href="https://creativecommons.org/licenses/by/4.0/" target="_blank">CC-BY 4.0</a>.
</div>
<div> <div>
<pre <pre
@ -172,9 +183,6 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
</div> </div>
<div class="mt-2 text-xs text-gray-400 dark:text-gray-500"> <div class="mt-2 text-xs text-gray-400 dark:text-gray-500">
{#if !$WEBUI_NAME.includes('Open WebUI')}
<span class=" text-gray-500 dark:text-gray-300 font-medium">{$WEBUI_NAME}</span> -
{/if}
{$i18n.t('Created by')} {$i18n.t('Created by')}
<a <a
class=" text-gray-500 dark:text-gray-300 font-medium" class=" text-gray-500 dark:text-gray-300 font-medium"

View file

@ -17,6 +17,7 @@
stop: null, stop: null,
temperature: null, temperature: null,
reasoning_effort: null, reasoning_effort: null,
logit_bias: null,
frequency_penalty: null, frequency_penalty: null,
repeat_last_n: null, repeat_last_n: null,
mirostat: null, mirostat: null,
@ -114,7 +115,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)' 'Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -203,7 +204,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)' 'The temperature of the model. Increasing the temperature will make the model answer more creatively.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -258,7 +259,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)' 'Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -301,10 +302,53 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)' 'Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
>
<div class="flex w-full justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Logit Bias')}
</div>
<button
class="p-1 px-3 text-xs flex rounded-sm transition shrink-0 outline-hidden"
type="button"
on:click={() => {
params.logit_bias = (params?.logit_bias ?? null) === null ? '' : null;
}}
>
{#if (params?.logit_bias ?? null) === null}
<span class="ml-2 self-center"> {$i18n.t('Default')} </span>
{:else}
<span class="ml-2 self-center"> {$i18n.t('Custom')} </span>
{/if}
</button>
</div>
</Tooltip>
{#if (params?.logit_bias ?? null) !== null}
<div class="flex mt-0.5 space-x-2">
<div class=" flex-1">
<input
class="w-full rounded-lg pl-2 py-2 px-1 text-sm dark:text-gray-300 dark:bg-gray-850 outline-hidden"
type="text"
placeholder={$i18n.t(
'Enter comma-separated "token:bias_value" pairs (example: 5432:100, 413:-100)'
)}
bind:value={params.logit_bias}
autocomplete="off"
/>
</div>
</div>
{/if}
</div>
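The tooltip above states that bias values are clamped to [-100, 100]. A hypothetical parser for the comma-separated `token:bias_value` format the field accepts — the backend's actual parsing may differ:

```javascript
// Parse "5432:100, 413:-100" into a token -> bias map, clamping each bias to
// the [-100, 100] range stated in the tooltip; malformed pairs are skipped.
function parseLogitBias(input) {
	const bias = {};
	for (const pair of input.split(',')) {
		const [token, value] = pair.split(':').map((part) => part.trim());
		if (!token || value === undefined || Number.isNaN(Number(value))) {
			continue;
		}
		bias[token] = Math.max(-100, Math.min(100, Number(value)));
	}
	return bias;
}
```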
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t('Enable Mirostat sampling for controlling perplexity.')}
placement="top-start"
className="inline-tooltip"
> >
<div class="flex w-full justify-between"> <div class="flex w-full justify-between">
<div class=" self-center text-xs font-medium"> <div class=" self-center text-xs font-medium">
@ -356,7 +400,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)' 'Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -411,7 +455,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)' 'Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -467,7 +511,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)' 'Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -522,7 +566,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)' 'Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -578,7 +622,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)' 'Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -633,7 +677,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)' 'Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -689,7 +733,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)' 'Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -745,7 +789,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)' 'Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -800,9 +844,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t('Sets how far back for the model to look back to prevent repetition.')}
'Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)'
)}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
> >
@ -857,7 +899,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)' 'Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -912,9 +954,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t('Sets the size of the context window used to generate the next token.')}
'Sets the size of the context window used to generate the next token. (Default: 2048)'
)}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
> >
@ -968,7 +1008,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)' 'The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -1023,7 +1063,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)' 'This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"
@ -1078,7 +1118,7 @@
<div class=" py-0.5 w-full justify-between"> <div class=" py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)' 'This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"

View file

@ -1,7 +1,7 @@
<script lang="ts"> <script lang="ts">
import { toast } from 'svelte-sonner'; import { toast } from 'svelte-sonner';
import { createEventDispatcher, onMount, getContext } from 'svelte'; import { createEventDispatcher, onMount, getContext } from 'svelte';
import { getLanguages } from '$lib/i18n'; import { getLanguages, changeLanguage } from '$lib/i18n';
const dispatch = createEventDispatcher(); const dispatch = createEventDispatcher();
import { models, settings, theme, user } from '$lib/stores'; import { models, settings, theme, user } from '$lib/stores';
@ -50,6 +50,7 @@
seed: null, seed: null,
temperature: null, temperature: null,
reasoning_effort: null, reasoning_effort: null,
logit_bias: null,
frequency_penalty: null, frequency_penalty: null,
presence_penalty: null, presence_penalty: null,
repeat_penalty: null, repeat_penalty: null,
@ -198,7 +199,7 @@
bind:value={lang} bind:value={lang}
placeholder="Select a language" placeholder="Select a language"
on:change={(e) => { on:change={(e) => {
$i18n.changeLanguage(lang); changeLanguage(lang);
}} }}
> >
{#each languages as language} {#each languages as language}
@ -348,6 +349,7 @@
temperature: params.temperature !== null ? params.temperature : undefined, temperature: params.temperature !== null ? params.temperature : undefined,
reasoning_effort: reasoning_effort:
params.reasoning_effort !== null ? params.reasoning_effort : undefined, params.reasoning_effort !== null ? params.reasoning_effort : undefined,
logit_bias: params.logit_bias !== null ? params.logit_bias : undefined,
frequency_penalty: frequency_penalty:
params.frequency_penalty !== null ? params.frequency_penalty : undefined, params.frequency_penalty !== null ? params.frequency_penalty : undefined,
presence_penalty: presence_penalty:

View file

@ -37,6 +37,7 @@
let landingPageMode = ''; let landingPageMode = '';
let chatBubble = true; let chatBubble = true;
let chatDirection: 'LTR' | 'RTL' = 'LTR'; let chatDirection: 'LTR' | 'RTL' = 'LTR';
let ctrlEnterToSend = false;
let imageCompression = false; let imageCompression = false;
let imageCompressionSize = { let imageCompressionSize = {
@ -193,6 +194,11 @@
saveSettings({ chatDirection }); saveSettings({ chatDirection });
}; };
const togglectrlEnterToSend = async () => {
ctrlEnterToSend = !ctrlEnterToSend;
saveSettings({ ctrlEnterToSend });
};
const updateInterfaceHandler = async () => { const updateInterfaceHandler = async () => {
saveSettings({ saveSettings({
models: [defaultModelId], models: [defaultModelId],
@ -232,6 +238,7 @@
notificationSound = $settings.notificationSound ?? true; notificationSound = $settings.notificationSound ?? true;
hapticFeedback = $settings.hapticFeedback ?? false; hapticFeedback = $settings.hapticFeedback ?? false;
ctrlEnterToSend = $settings.ctrlEnterToSend ?? false;
imageCompression = $settings.imageCompression ?? false; imageCompression = $settings.imageCompression ?? false;
imageCompressionSize = $settings.imageCompressionSize ?? { width: '', height: '' }; imageCompressionSize = $settings.imageCompressionSize ?? { width: '', height: '' };
@@ -652,6 +659,28 @@
 </div>
 </div> -->
+<div>
+<div class=" py-0.5 flex w-full justify-between">
+<div class=" self-center text-xs">
+{$i18n.t('Enter Key Behavior')}
+</div>
+<button
+class="p-1 px-3 text-xs flex rounded transition"
+on:click={() => {
+togglectrlEnterToSend();
+}}
+type="button"
+>
+{#if ctrlEnterToSend === true}
+<span class="ml-2 self-center">{$i18n.t('Ctrl+Enter to Send')}</span>
+{:else}
+<span class="ml-2 self-center">{$i18n.t('Enter to Send')}</span>
+{/if}
+</button>
+</div>
+</div>
 <div>
 <div class=" py-0.5 flex w-full justify-between">
 <div class=" self-center text-xs">
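The toggle above only persists a flag; the chat input's keydown handler is what consumes it. A minimal sketch of the decision that flag implies — `shouldSendOnEnter` is a hypothetical helper for illustration, not Open WebUI's actual handler:

```typescript
// Structural subset of KeyboardEvent, so the sketch runs outside a browser.
interface KeyInfo {
	key: string;
	ctrlKey: boolean;
	shiftKey: boolean;
}

// Decide whether a keypress submits the message, given the ctrlEnterToSend setting.
function shouldSendOnEnter(e: KeyInfo, ctrlEnterToSend: boolean): boolean {
	if (e.key !== 'Enter') return false;
	// Shift+Enter conventionally inserts a newline rather than sending.
	if (e.shiftKey) return false;
	// With the setting on, plain Enter is a newline and only Ctrl+Enter sends.
	return ctrlEnterToSend ? e.ctrlKey : true;
}

// Default behavior ("Enter to Send"): plain Enter submits.
console.log(shouldSendOnEnter({ key: 'Enter', ctrlKey: false, shiftKey: false }, false)); // true
// "Ctrl+Enter to Send": plain Enter no longer submits...
console.log(shouldSendOnEnter({ key: 'Enter', ctrlKey: false, shiftKey: false }, true)); // false
// ...but Ctrl+Enter does.
console.log(shouldSendOnEnter({ key: 'Enter', ctrlKey: true, shiftKey: false }, true)); // true
```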

View file

@@ -16,6 +16,7 @@
 dismissable: true,
 timestamp: Math.floor(Date.now() / 1000)
 };
+export let className = 'mx-4';
 export let dismissed = false;
@@ -41,7 +42,7 @@
 {#if !dismissed}
 {#if mounted}
 <div
-class=" top-0 left-0 right-0 p-2 mx-4 px-3 flex justify-center items-center relative rounded-xl border border-gray-100 dark:border-gray-850 text-gray-800 dark:text-gary-100 bg-white dark:bg-gray-900 backdrop-blur-xl z-30"
+class="{className} top-0 left-0 right-0 p-2 px-3 flex justify-center items-center relative rounded-xl border border-gray-100 dark:border-gray-850 text-gray-800 dark:text-gary-100 bg-white dark:bg-gray-900 backdrop-blur-xl z-30"
 transition:fade={{ delay: 100, duration: 300 }}
 >
 <div class=" flex flex-col md:flex-row md:items-center flex-1 text-sm w-fit gap-1.5">

View file

@@ -227,7 +227,11 @@
 </DragGhost>
 {/if}
-<div bind:this={itemElement} class=" w-full {className} relative group" {draggable}>
+<div
+bind:this={itemElement}
+class=" w-full {className} relative group"
+draggable={draggable && !confirmEdit}
+>
 {#if confirmEdit}
 <div
 class=" w-full flex justify-between rounded-lg px-[11px] py-[6px] {id === $chatId ||

View file

@@ -5,7 +5,7 @@
 import { flyAndScale } from '$lib/utils/transitions';
 import { goto } from '$app/navigation';
 import ArchiveBox from '$lib/components/icons/ArchiveBox.svelte';
-import { showSettings, activeUserIds, USAGE_POOL, mobile, showSidebar } from '$lib/stores';
+import { showSettings, activeUserIds, USAGE_POOL, mobile, showSidebar, user } from '$lib/stores';
 import { fade, slide } from 'svelte/transition';
 import Tooltip from '$lib/components/common/Tooltip.svelte';
 import { userSignOut } from '$lib/apis/auths';
@@ -157,8 +157,11 @@
 class="flex rounded-md py-2 px-3 w-full hover:bg-gray-50 dark:hover:bg-gray-800 transition"
 on:click={async () => {
 await userSignOut();
+user.set(null);
 localStorage.removeItem('token');
 location.href = '/auth';
 show = false;
 }}
 >

View file

@@ -198,7 +198,7 @@ class Tools:
 }
 }}
 >
-<div class="flex flex-col flex-1 overflow-auto h-0">
+<div class="flex flex-col flex-1 overflow-auto h-0 rounded-lg">
 <div class="w-full mb-2 flex flex-col gap-0.5">
 <div class="flex w-full items-center">
 <div class=" shrink-0 mr-2">
@@ -218,7 +218,7 @@ class Tools:
 <div class="flex-1">
 <Tooltip content={$i18n.t('e.g. My Tools')} placement="top-start">
 <input
-class="w-full text-2xl font-semibold bg-transparent outline-hidden"
+class="w-full text-2xl font-medium bg-transparent outline-hidden font-primary"
 type="text"
 placeholder={$i18n.t('Tool Name')}
 bind:value={name}
@@ -282,12 +282,12 @@ class Tools:
 <CodeEditor
 bind:this={codeEditor}
 value={content}
-{boilerplate}
 lang="python"
+{boilerplate}
 onChange={(e) => {
 _content = e;
 }}
-onSave={() => {
+onSave={async () => {
 if (formElement) {
 formElement.requestSubmit();
 }

View file

@@ -37,7 +37,7 @@ const createIsLoadingStore = (i18n: i18nType) => {
 return isLoading;
 };
-export const initI18n = (defaultLocale: string | undefined) => {
+export const initI18n = (defaultLocale?: string | undefined) => {
 let detectionOrder = defaultLocale
 ? ['querystring', 'localStorage']
 : ['querystring', 'localStorage', 'navigator'];
@@ -66,6 +66,9 @@ export const initI18n = (defaultLocale: string | undefined) => {
 escapeValue: false // not needed for svelte as it escapes by default
 }
 });
+const lang = i18next?.language || defaultLocale || 'en-US';
+document.documentElement.setAttribute('lang', lang);
 };
 const i18n = createI18nStore(i18next);
@@ -75,5 +78,10 @@ export const getLanguages = async () => {
 const languages = (await import(`./locales/languages.json`)).default;
 return languages;
 };
+export const changeLanguage = (lang: string) => {
+document.documentElement.setAttribute('lang', lang);
+i18next.changeLanguage(lang);
+};
 export default i18n;
 export const isLoading = isLoadingStore;
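The diff above keeps `<html lang>` in sync with i18next via the fallback chain `i18next?.language || defaultLocale || 'en-US'`. That chain can be isolated as a tiny pure function — a hypothetical helper for illustration, not code from the repository:

```typescript
// Resolve the value written to <html lang>: detected language first,
// then the configured default locale, then the hard 'en-US' fallback.
function resolveHtmlLang(detected?: string, defaultLocale?: string): string {
	return detected || defaultLocale || 'en-US';
}

console.log(resolveHtmlLang('ar-BH', 'en-GB')); // ar-BH
console.log(resolveHtmlLang(undefined, 'bg-BG')); // bg-BG
console.log(resolveHtmlLang()); // en-US
```

Note that `||` (rather than `??`) means an empty-string detection also falls through to the default, which matches the original expression.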

View file

@@ -5,6 +5,7 @@
 "(e.g. `sh webui.sh --api`)": "( `sh webui.sh --api`مثال)",
 "(latest)": "(الأخير)",
 "{{ models }}": "{{ نماذج }}",
+"{{COUNT}} hidden lines": "",
 "{{COUNT}} Replies": "",
 "{{user}}'s Chats": "دردشات {{user}}",
 "{{webUIName}} Backend Required": "{{webUIName}} مطلوب",
@@ -51,6 +52,7 @@
 "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
 "Advanced Parameters": "التعليمات المتقدمة",
 "Advanced Params": "المعلمات المتقدمة",
+"All": "",
 "All Documents": "جميع الملفات",
 "All models deleted successfully": "",
 "Allow Chat Controls": "",
@@ -64,7 +66,7 @@
 "Allow Voice Interruption in Call": "",
 "Allowed Endpoints": "",
 "Already have an account?": "هل تملك حساب ؟",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
 "Always": "",
 "Amazing": "",
 "an assistant": "مساعد",
@@ -93,6 +95,7 @@
 "Are you sure?": "هل أنت متأكد ؟",
 "Arena Models": "",
 "Artifacts": "",
+"Ask": "",
 "Ask a question": "",
 "Assistant": "",
 "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
 "Bing Search V7 Endpoint": "",
 "Bing Search V7 Subscription Key": "",
 "Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
 "Brave Search API Key": "مفتاح واجهة برمجة تطبيقات البحث الشجاع",
 "By {{name}}": "",
 "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
 "Code Interpreter": "",
 "Code Interpreter Engine": "",
 "Code Interpreter Prompt Template": "",
+"Collapse": "",
 "Collection": "مجموعة",
 "Color": "",
 "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
 "Confirm your new password": "",
 "Connect to your own OpenAI compatible API endpoints.": "",
 "Connections": "اتصالات",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
 "Contact Admin for WebUI Access": "",
 "Content": "الاتصال",
 "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "",
 "Continue with LDAP": "",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "",
 "Copied shared chat URL to clipboard!": "تم نسخ عنوان URL للدردشة المشتركة إلى الحافظة",
 "Copied to clipboard": "",
@@ -245,6 +250,7 @@
 "Created At": "أنشئت من",
 "Created by": "",
 "CSV Import": "",
+"Ctrl+Enter to Send": "",
 "Current Model": "الموديل المختار",
 "Current Password": "كلمة السر الحالية",
 "Custom": "مخصص",
@@ -358,7 +364,7 @@
 "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
 "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
 "Enable Message Rating": "",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
 "Enable New Sign Ups": "تفعيل عمليات التسجيل الجديدة",
 "Enabled": "",
 "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "تأكد من أن ملف CSV الخاص بك يتضمن 4 أعمدة بهذا الترتيب: Name, Email, Password, Role.",
@@ -375,6 +381,7 @@
 "Enter CFG Scale (e.g. 7.0)": "",
 "Enter Chunk Overlap": "أدخل الChunk Overlap",
 "Enter Chunk Size": "أدخل Chunk الحجم",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
 "Enter description": "",
 "Enter Document Intelligence Endpoint": "",
 "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
 "Enter Jupyter Token": "",
 "Enter Jupyter URL": "",
 "Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
 "Enter language codes": "أدخل كود اللغة",
 "Enter Model ID": "",
 "Enter model tag (e.g. {{modelTag}})": "(e.g. {{modelTag}}) أدخل الموديل تاق",
 "Enter Mojeek Search API Key": "",
 "Enter Number of Steps (e.g. 50)": "(e.g. 50) أدخل عدد الخطوات",
+"Enter Perplexity API Key": "",
 "Enter proxy URL (e.g. https://user:password@host:port)": "",
 "Enter reasoning effort": "",
 "Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
 "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
 "Enter Tika Server URL": "",
 "Enter timeout in seconds": "",
+"Enter to Send": "",
 "Enter Top K": "أدخل Top K",
 "Enter URL (e.g. http://127.0.0.1:7860/)": "الرابط (e.g. http://127.0.0.1:7860/)",
 "Enter URL (e.g. http://localhost:11434)": "URL (e.g. http://localhost:11434)",
@@ -440,9 +450,13 @@
 "Example: mail": "",
 "Example: ou=users,dc=foo,dc=example": "",
 "Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
 "Exclude": "",
 "Execute code for analysis": "",
+"Expand": "",
 "Experimental": "تجريبي",
+"Explain": "",
+"Explain this section to me in more detail": "",
 "Explore the cosmos": "",
 "Export": "تصدير",
 "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
 "Include": "",
 "Include `--api-auth` flag when running stable-diffusion-webui": "",
 "Include `--api` flag when running stable-diffusion-webui": "قم بتضمين علامة `-api` عند تشغيل Stable-diffusion-webui",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
 "Info": "معلومات",
 "Input commands": "إدخال الأوامر",
 "Install from Github URL": "التثبيت من عنوان URL لجيثب",
@@ -624,6 +638,7 @@
 "Local": "",
 "Local Models": "",
 "Location access not allowed": "",
+"Logit Bias": "",
 "Lost": "",
 "LTR": "من جهة اليسار إلى اليمين",
 "Made by Open WebUI Community": "OpenWebUI تم إنشاؤه بواسطة مجتمع ",
@@ -764,6 +779,7 @@
 "Permission denied when accessing microphone": "",
 "Permission denied when accessing microphone: {{error}}": "{{error}} تم رفض الإذن عند الوصول إلى الميكروفون ",
 "Permissions": "",
+"Perplexity API Key": "",
 "Personalization": "التخصيص",
 "Pin": "",
 "Pinned": "",
@@ -809,7 +825,7 @@
 "Reasoning Effort": "",
 "Record voice": "سجل صوت",
 "Redirecting you to Open WebUI Community": "OpenWebUI إعادة توجيهك إلى مجتمع ",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
 "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
 "References from": "",
 "Refused when it shouldn't have": "رفض عندما لا ينبغي أن يكون",
@@ -918,11 +934,11 @@
 "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
 "Set Voice": "ضبط الصوت",
 "Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets the size of the context window used to generate the next token.": "",
 "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
 "Settings": "الاعدادات",
 "Settings saved successfully!": "تم حفظ الاعدادات بنجاح",
@@ -964,7 +980,7 @@
 "System Prompt": "محادثة النظام",
 "Tags Generation": "",
 "Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
 "Talk to model": "",
 "Tap to interrupt": "",
 "Tasks": "",
@@ -979,7 +995,7 @@
 "Thanks for your feedback!": "شكرا لملاحظاتك!",
 "The Application Account DN you bind with for search": "",
 "The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
 "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
 "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
 "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
 "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
 "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
 "The score should be a value between 0.0 (0%) and 1.0 (100%).": "يجب أن تكون النتيجة قيمة تتراوح بين 0.0 (0%) و1.0 (100%).",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
 "Theme": "الثيم",
 "Thinking...": "",
 "This action cannot be undone. Do you wish to continue?": "",
 "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "وهذا يضمن حفظ محادثاتك القيمة بشكل آمن في قاعدة بياناتك الخلفية. شكرًا لك!",
 "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
 "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
 "This response was generated by \"{{model}}\"": "",
 "This will delete": "",
@@ -1132,7 +1148,7 @@
 "Why?": "",
 "Widescreen Mode": "",
 "Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
 "Workspace": "مساحة العمل",
 "Workspace Permissions": "",
 "Write": "",
@@ -1142,6 +1158,7 @@
 "Write your model template content here": "",
 "Yesterday": "أمس",
 "You": "انت",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
 "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
 "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
 "You cannot upload an empty file.": "",
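Among the strings added above are the new logit-bias ones: users enter comma-separated `"token:bias_value"` pairs (example: `5432:100, 413:-100`), and bias values are clamped to [-100, 100] inclusive. A sketch of what parsing such input could look like — `parseLogitBias` is a hypothetical helper, not Open WebUI's actual parser, which may handle edge cases differently:

```typescript
// Parse "token:bias_value" pairs, clamping each bias to [-100, 100]
// as the UI text promises. Malformed pairs are silently skipped.
function parseLogitBias(input: string): Record<string, number> {
	const bias: Record<string, number> = {};
	for (const pair of input.split(',')) {
		const [token, raw] = pair.split(':').map((s) => s.trim());
		if (!token || raw === undefined) continue;
		const value = Number(raw);
		if (Number.isNaN(value)) continue;
		// Clamp to the documented inclusive range.
		bias[token] = Math.max(-100, Math.min(100, value));
	}
	return bias;
}

// The example from the placeholder text, plus one out-of-range value:
const bias = parseLogitBias('5432:100, 413:-100, 99:250');
console.log(bias['5432'], bias['413'], bias['99']); // 100 -100 100
```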

View file

@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(напр. `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(напр. `sh webui.sh --api`)",
"(latest)": "(последна)", "(latest)": "(последна)",
"{{ models }}": "{{ models }}", "{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} Отговори", "{{COUNT}} Replies": "{{COUNT}} Отговори",
"{{user}}'s Chats": "{{user}}'s чатове", "{{user}}'s Chats": "{{user}}'s чатове",
"{{webUIName}} Backend Required": "{{webUIName}} Изисква се Бекенд", "{{webUIName}} Backend Required": "{{webUIName}} Изисква се Бекенд",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Администраторите имат достъп до всички инструменти по всяко време; потребителите се нуждаят от инструменти, присвоени за всеки модел в работното пространство.",
"Advanced Parameters": "Разширени Параметри",
"Advanced Params": "Разширени параметри",
+"All": "",
"All Documents": "Всички Документи",
"All models deleted successfully": "Всички модели са изтрити успешно",
"Allow Chat Controls": "Разреши контроли на чата",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Разреши прекъсване на гласа по време на разговор",
"Allowed Endpoints": "Разрешени крайни точки",
"Already have an account?": "Вече имате акаунт?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Алтернатива на top_p, която цели да осигури баланс между качество и разнообразие. Параметърът p представлява минималната вероятност за разглеждане на токен, спрямо вероятността на най-вероятния токен. Например, при p=0.05 и най-вероятен токен с вероятност 0.9, логитите със стойност по-малка от 0.045 се филтрират. (По подразбиране: 0.0)",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "Винаги",
"Amazing": "Невероятно",
"an assistant": "асистент",
@@ -93,6 +95,7 @@
"Are you sure?": "Сигурни ли сте?",
"Arena Models": "Arena Модели",
"Artifacts": "Артефакти",
+"Ask": "",
"Ask a question": "Задайте въпрос",
"Assistant": "Асистент",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Крайна точка за Bing Search V7",
"Bing Search V7 Subscription Key": "Абонаментен ключ за Bing Search V7",
"Bocha Search API Key": "API ключ за Bocha Search",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "API ключ за Brave Search",
"By {{name}}": "От {{name}}",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "Интерпретатор на код",
"Code Interpreter Engine": "Двигател на интерпретатора на код",
"Code Interpreter Prompt Template": "Шаблон за промпт на интерпретатора на код",
+"Collapse": "",
"Collection": "Колекция",
"Color": "Цвят",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "Потвърдете новата си парола",
"Connect to your own OpenAI compatible API endpoints.": "Свържете се със собствени крайни точки на API, съвместими с OpenAI.",
"Connections": "Връзки",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Ограничава усилията за разсъждение при модели за разсъждение. Приложимо само за модели за разсъждение от конкретни доставчици, които поддържат усилия за разсъждение. (По подразбиране: средно)",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Свържете се с администратор за достъп до WebUI",
"Content": "Съдържание",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Продължете с имейл",
"Continue with LDAP": "Продължете с LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Контролирайте как текстът на съобщението се разделя за TTS заявки. 'Пунктуация' разделя на изречения, 'параграфи' разделя на параграфи, а 'нищо' запазва съобщението като един низ.",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "Контролирайте повторението на последователности от токени в генерирания текст. По-висока стойност (напр. 1.5) ще наказва повторенията по-силно, докато по-ниска стойност (напр. 1.1) ще бъде по-снизходителна. При 1 е изключено. (По подразбиране: 1.1)",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Контроли",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Контролира баланса между съгласуваност и разнообразие на изхода. По-ниска стойност ще доведе до по-фокусиран и съгласуван текст. (По подразбиране: 5.0)",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Копирано",
"Copied shared chat URL to clipboard!": "Копирана е връзката за споделен чат в клипборда!",
"Copied to clipboard": "Копирано в клипборда",
@@ -245,6 +250,7 @@
"Created At": "Създадено на",
"Created by": "Създадено от",
"CSV Import": "Импортиране на CSV",
+"Ctrl+Enter to Send": "",
"Current Model": "Текущ модел",
"Current Password": "Текуща Парола",
"Custom": "Персонализиран",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Активиране на заключване на паметта (mlock), за да се предотврати изваждането на данните на модела от RAM. Тази опция заключва работния набор от страници на модела в RAM, гарантирайки, че няма да бъдат изхвърлени на диска. Това може да помогне за поддържане на производителността, като се избягват грешки в страниците и се осигурява бърз достъп до данните.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Активиране на мапиране на паметта (mmap) за зареждане на данни на модела. Тази опция позволява на системата да използва дисковото пространство като разширение на RAM, третирайки дисковите файлове, сякаш са в RAM. Това може да подобри производителността на модела, като позволява по-бърз достъп до данните. Въпреки това, може да не работи правилно с всички системи и може да консумира значително количество дисково пространство.",
"Enable Message Rating": "Активиране на оценяване на съобщения",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Активиране на Mirostat семплиране за контрол на перплексията. (По подразбиране: 0, 0 = Деактивирано, 1 = Mirostat, 2 = Mirostat 2.0)",
+"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Включване на нови регистрации",
"Enabled": "Активирано",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Уверете се, че вашият CSV файл включва 4 колони в следния ред: Име, Имейл, Парола, Роля.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "Въведете CFG Scale (напр. 7.0)",
"Enter Chunk Overlap": "Въведете припокриване на чънкове",
"Enter Chunk Size": "Въведете размер на чънк",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Въведете описание",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "Въведете токен за Jupyter",
"Enter Jupyter URL": "Въведете URL адрес за Jupyter",
"Enter Kagi Search API Key": "Въведете API ключ за Kagi Search",
+"Enter Key Behavior": "",
"Enter language codes": "Въведете кодове на езика",
"Enter Model ID": "Въведете ID на модела",
"Enter model tag (e.g. {{modelTag}})": "Въведете таг на модел (напр. {{modelTag}})",
"Enter Mojeek Search API Key": "Въведете API ключ за Mojeek Search",
"Enter Number of Steps (e.g. 50)": "Въведете брой стъпки (напр. 50)",
+"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Въведете URL адрес на прокси (напр. https://потребител:парола@хост:порт)",
"Enter reasoning effort": "Въведете усилие за разсъждение",
"Enter Sampler (e.g. Euler a)": "Въведете семплер (напр. Euler a)",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Въведете публичния URL адрес на вашия WebUI. Този URL адрес ще бъде използван за генериране на връзки в известията.",
"Enter Tika Server URL": "Въведете URL адрес на Tika сървър",
"Enter timeout in seconds": "",
+"Enter to Send": "",
"Enter Top K": "Въведете Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Въведете URL (напр. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Въведете URL (напр. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "Пример: поща",
"Example: ou=users,dc=foo,dc=example": "Пример: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Пример: sAMAccountName или uid или userPrincipalName",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Изключи",
"Execute code for analysis": "Изпълнете код за анализ",
+"Expand": "",
"Experimental": "Експериментално",
+"Explain": "",
+"Explain this section to me in more detail": "",
"Explore the cosmos": "Изследвайте космоса",
"Export": "Износ",
"Export All Archived Chats": "Износ на всички архивирани чатове",
@@ -566,7 +580,7 @@
"Include": "Включи",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Влияе върху това колко бързо алгоритъмът реагира на обратната връзка от генерирания текст. По-ниска скорост на обучение ще доведе до по-бавни корекции, докато по-висока скорост на обучение ще направи алгоритъма по-отзивчив. (По подразбиране: 0.1)",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Информация",
"Input commands": "Въведете команди",
"Install from Github URL": "Инсталиране от URL адреса на Github",
@@ -624,6 +638,7 @@
"Local": "Локално",
"Local Models": "Локални модели",
"Location access not allowed": "",
+"Logit Bias": "",
"Lost": "Изгубено",
"LTR": "LTR",
"Made by Open WebUI Community": "Направено от OpenWebUI общността",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Отказан достъп при опит за достъп до микрофона",
"Permission denied when accessing microphone: {{error}}": "Отказан достъп при опит за достъп до микрофона: {{error}}",
"Permissions": "Разрешения",
+"Perplexity API Key": "",
"Personalization": "Персонализация",
"Pin": "Закачи",
"Pinned": "Закачено",
@@ -809,7 +825,7 @@
"Reasoning Effort": "Усилие за разсъждение",
"Record voice": "Записване на глас",
"Redirecting you to Open WebUI Community": "Пренасочване към OpenWebUI общността",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Намалява вероятността за генериране на безсмислици. По-висока стойност (напр. 100) ще даде по-разнообразни отговори, докато по-ниска стойност (напр. 10) ще бъде по-консервативна. (По подразбиране: 40)",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Отнасяйте се към себе си като \"Потребител\" (напр. \"Потребителят учи испански\")",
"References from": "Препратки от",
"Refused when it shouldn't have": "Отказано, когато не трябва да бъде",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Задайте броя работни нишки, използвани за изчисления. Тази опция контролира колко нишки се използват за едновременна обработка на входящи заявки. Увеличаването на тази стойност може да подобри производителността при високи натоварвания с паралелизъм, но може също да консумира повече CPU ресурси.",
"Set Voice": "Задай Глас",
"Set whisper model": "Задай модел на шепот",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "Задава плоско отклонение срещу токени, които са се появили поне веднъж. По-висока стойност (напр. 1.5) ще наказва повторенията по-силно, докато по-ниска стойност (напр. 0.9) ще бъде по-снизходителна. При 0 е деактивирано. (По подразбиране: 0)",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "Задава мащабиращо отклонение срещу токени за наказване на повторения, базирано на това колко пъти са се появили. По-висока стойност (напр. 1.5) ще наказва повторенията по-силно, докато по-ниска стойност (напр. 0.9) ще бъде по-снизходителна. При 0 е деактивирано. (По подразбиране: 1.1)",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Задава колко назад моделът да гледа, за да предотврати повторение. (По подразбиране: 64, 0 = деактивирано, -1 = num_ctx)",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Задава семето на случайното число, което да се използва за генериране. Задаването на конкретно число ще накара модела да генерира същия текст за същата подкана. (По подразбиране: случайно)",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "Задава размера на контекстния прозорец, използван за генериране на следващия токен. (По подразбиране: 2048)",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
+"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Задава последователностите за спиране, които да се използват. Когато се срещне този модел, LLM ще спре да генерира текст и ще се върне. Множество модели за спиране могат да бъдат зададени чрез определяне на множество отделни параметри за спиране в моделния файл.",
"Settings": "Настройки",
"Settings saved successfully!": "Настройките са запазени успешно!",
@@ -964,7 +980,7 @@
"System Prompt": "Системен Промпт",
"Tags Generation": "Генериране на тагове",
"Tags Generation Prompt": "Промпт за генериране на тагове",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Безопашковото семплиране се използва за намаляване на влиянието на по-малко вероятните токени от изхода. По-висока стойност (напр. 2.0) ще намали влиянието повече, докато стойност 1.0 деактивира тази настройка. (по подразбиране: 1)",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Докоснете за прекъсване",
"Tasks": "Задачи",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Благодарим ви за вашия отзив!",
"The Application Account DN you bind with for search": "DN на акаунта на приложението, с който се свързвате за търсене",
"The base to search for users": "Базата за търсене на потребители",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Разработчиците зад този плъгин са страстни доброволци от общността. Ако намирате този плъгин полезен, моля, обмислете да допринесете за неговото развитие.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Класацията за оценка се базира на рейтинговата система Elo и се обновява в реално време.",
"The LDAP attribute that maps to the mail that users use to sign in.": "LDAP атрибутът, който съответства на имейла, който потребителите използват за вписване.",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Максималният размер на файла в MB. Ако размерът на файла надвишава този лимит, файлът няма да бъде качен.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Максималният брой файлове, които могат да се използват едновременно в чата. Ако броят на файловете надвишава този лимит, файловете няма да бъдат качени.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Резултатът трябва да бъде стойност между 0.0 (0%) и 1.0 (100%).",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Температурата на модела. Увеличаването на температурата ще накара модела да отговаря по-креативно. (По подразбиране: 0.8)",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Тема",
"Thinking...": "Мисля...",
"This action cannot be undone. Do you wish to continue?": "Това действие не може да бъде отменено. Желаете ли да продължите?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Това гарантира, че ценните ви разговори се запазват сигурно във вашата бекенд база данни. Благодарим ви!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Това е експериментална функция, може да не работи според очакванията и подлежи на промяна по всяко време.",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Тази опция контролира колко токена се запазват при обновяване на контекста. Например, ако е зададено на 2, последните 2 токена от контекста на разговора ще бъдат запазени. Запазването на контекста може да помогне за поддържане на непрекъснатостта на разговора, но може да намали способността за отговор на нови теми. (По подразбиране: 24)",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Тази опция ще изтрие всички съществуващи файлове в колекцията и ще ги замени с новокачени файлове.",
"This response was generated by \"{{model}}\"": "Този отговор беше генериран от \"{{model}}\"",
"This will delete": "Това ще изтрие",
@@ -1132,7 +1148,7 @@
"Why?": "Защо?",
"Widescreen Mode": "Широкоекранен режим",
"Won": "Спечелено",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Работи заедно с top-k. По-висока стойност (напр. 0.95) ще доведе до по-разнообразен текст, докато по-ниска стойност (напр. 0.5) ще генерира по-фокусиран и консервативен текст. (По подразбиране: 0.9)",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Работно пространство",
"Workspace Permissions": "Разрешения за работното пространство",
"Write": "Напиши",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "Напишете съдържанието на вашия шаблон за модел тук", "Write your model template content here": "Напишете съдържанието на вашия шаблон за модел тук",
"Yesterday": "вчера", "Yesterday": "вчера",
"You": "Вие", "You": "Вие",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Можете да чатите с максимум {{maxCount}} файл(а) наведнъж.", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Можете да чатите с максимум {{maxCount}} файл(а) наведнъж.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Можете да персонализирате взаимодействията си с LLM-и, като добавите спомени чрез бутона 'Управление' по-долу, правейки ги по-полезни и съобразени с вас.", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Можете да персонализирате взаимодействията си с LLM-и, като добавите спомени чрез бутона 'Управление' по-долу, правейки ги по-полезни и съобразени с вас.",
"You cannot upload an empty file.": "Не можете да качите празен файл.", "You cannot upload an empty file.": "Не можете да качите празен файл.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(যেমন `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(যেমন `sh webui.sh --api`)",
"(latest)": "(সর্বশেষ)", "(latest)": "(সর্বশেষ)",
"{{ models }}": "{{ মডেল}}", "{{ models }}": "{{ মডেল}}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "", "{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}র চ্যাটস", "{{user}}'s Chats": "{{user}}র চ্যাটস",
"{{webUIName}} Backend Required": "{{webUIName}} ব্যাকএন্ড আবশ্যক", "{{webUIName}} Backend Required": "{{webUIName}} ব্যাকএন্ড আবশ্যক",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "এডভান্সড প্যারামিটার্স", "Advanced Parameters": "এডভান্সড প্যারামিটার্স",
"Advanced Params": "অ্যাডভান্সড প্যারাম", "Advanced Params": "অ্যাডভান্সড প্যারাম",
"All": "",
"All Documents": "সব ডকুমেন্ট", "All Documents": "সব ডকুমেন্ট",
"All models deleted successfully": "", "All models deleted successfully": "",
"Allow Chat Controls": "", "Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "", "Allow Voice Interruption in Call": "",
"Allowed Endpoints": "", "Allowed Endpoints": "",
"Already have an account?": "আগে থেকেই একাউন্ট আছে?", "Already have an account?": "আগে থেকেই একাউন্ট আছে?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "", "Always": "",
"Amazing": "", "Amazing": "",
"an assistant": "একটা এসিস্ট্যান্ট", "an assistant": "একটা এসিস্ট্যান্ট",
@@ -93,6 +95,7 @@
"Are you sure?": "আপনি নিশ্চিত?", "Are you sure?": "আপনি নিশ্চিত?",
"Arena Models": "", "Arena Models": "",
"Artifacts": "", "Artifacts": "",
"Ask": "",
"Ask a question": "", "Ask a question": "",
"Assistant": "", "Assistant": "",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "", "Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "", "Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "", "Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "সাহসী অনুসন্ধান API কী", "Brave Search API Key": "সাহসী অনুসন্ধান API কী",
"By {{name}}": "", "By {{name}}": "",
"Bypass Embedding and Retrieval": "", "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "", "Code Interpreter": "",
"Code Interpreter Engine": "", "Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "", "Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "সংগ্রহ", "Collection": "সংগ্রহ",
"Color": "", "Color": "",
"ComfyUI": "ComfyUI", "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "", "Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "", "Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "কানেকশনগুলো", "Connections": "কানেকশনগুলো",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "", "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "", "Contact Admin for WebUI Access": "",
"Content": "বিষয়বস্তু", "Content": "বিষয়বস্তু",
"Content Extraction Engine": "", "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "", "Continue with Email": "",
"Continue with LDAP": "", "Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "", "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "", "Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "", "Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "", "Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "", "Copied": "",
"Copied shared chat URL to clipboard!": "শেয়ারকৃত কথা-ব্যবহারের URL ক্লিপবোর্ডে কপি করা হয়েছে!", "Copied shared chat URL to clipboard!": "শেয়ারকৃত কথা-ব্যবহারের URL ক্লিপবোর্ডে কপি করা হয়েছে!",
"Copied to clipboard": "", "Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "নির্মানকাল", "Created At": "নির্মানকাল",
"Created by": "", "Created by": "",
"CSV Import": "", "CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "বর্তমান মডেল", "Current Model": "বর্তমান মডেল",
"Current Password": "বর্তমান পাসওয়ার্ড", "Current Password": "বর্তমান পাসওয়ার্ড",
"Custom": "কাস্টম", "Custom": "কাস্টম",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "", "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "", "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "", "Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "", "Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "নতুন সাইনআপ চালু করুন", "Enable New Sign Ups": "নতুন সাইনআপ চালু করুন",
"Enabled": "", "Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "আপনার সিএসভি ফাইলটিতে এই ক্রমে 4 টি কলাম অন্তর্ভুক্ত রয়েছে তা নিশ্চিত করুন: নাম, ইমেল, পাসওয়ার্ড, ভূমিকা।.", "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "আপনার সিএসভি ফাইলটিতে এই ক্রমে 4 টি কলাম অন্তর্ভুক্ত রয়েছে তা নিশ্চিত করুন: নাম, ইমেল, পাসওয়ার্ড, ভূমিকা।.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "", "Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "চাঙ্ক ওভারল্যাপ লিখুন", "Enter Chunk Overlap": "চাঙ্ক ওভারল্যাপ লিখুন",
"Enter Chunk Size": "চাংক সাইজ লিখুন", "Enter Chunk Size": "চাংক সাইজ লিখুন",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "", "Enter description": "",
"Enter Document Intelligence Endpoint": "", "Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "", "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "", "Enter Jupyter Token": "",
"Enter Jupyter URL": "", "Enter Jupyter URL": "",
"Enter Kagi Search API Key": "", "Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "ল্যাঙ্গুয়েজ কোড লিখুন", "Enter language codes": "ল্যাঙ্গুয়েজ কোড লিখুন",
"Enter Model ID": "", "Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "মডেল ট্যাগ লিখুন (e.g. {{modelTag}})", "Enter model tag (e.g. {{modelTag}})": "মডেল ট্যাগ লিখুন (e.g. {{modelTag}})",
"Enter Mojeek Search API Key": "", "Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "ধাপের সংখ্যা দিন (যেমন: 50)", "Enter Number of Steps (e.g. 50)": "ধাপের সংখ্যা দিন (যেমন: 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "", "Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "", "Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "", "Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "", "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "", "Enter Tika Server URL": "",
"Enter timeout in seconds": "", "Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Top K লিখুন", "Enter Top K": "Top K লিখুন",
"Enter URL (e.g. http://127.0.0.1:7860/)": "ইউআরএল দিন (যেমন http://127.0.0.1:7860/)", "Enter URL (e.g. http://127.0.0.1:7860/)": "ইউআরএল দিন (যেমন http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "ইউআরএল দিন (যেমন http://localhost:11434)", "Enter URL (e.g. http://localhost:11434)": "ইউআরএল দিন (যেমন http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "", "Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "", "Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "", "Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "", "Exclude": "",
"Execute code for analysis": "", "Execute code for analysis": "",
"Expand": "",
"Experimental": "পরিক্ষামূলক", "Experimental": "পরিক্ষামূলক",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "", "Explore the cosmos": "",
"Export": "রপ্তানি", "Export": "রপ্তানি",
"Export All Archived Chats": "", "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "", "Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "", "Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "stable-diffusion-webui চালু করার সময় `--api` ফ্ল্যাগ সংযুক্ত করুন", "Include `--api` flag when running stable-diffusion-webui": "stable-diffusion-webui চালু করার সময় `--api` ফ্ল্যাগ সংযুক্ত করুন",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "", "Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "তথ্য", "Info": "তথ্য",
"Input commands": "ইনপুট কমান্ডস", "Input commands": "ইনপুট কমান্ডস",
"Install from Github URL": "Github URL থেকে ইনস্টল করুন", "Install from Github URL": "Github URL থেকে ইনস্টল করুন",
@@ -624,6 +638,7 @@
"Local": "", "Local": "",
"Local Models": "", "Local Models": "",
"Location access not allowed": "", "Location access not allowed": "",
"Logit Bias": "",
"Lost": "", "Lost": "",
"LTR": "LTR", "LTR": "LTR",
"Made by Open WebUI Community": "OpenWebUI কমিউনিটিকর্তৃক নির্মিত", "Made by Open WebUI Community": "OpenWebUI কমিউনিটিকর্তৃক নির্মিত",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "", "Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "মাইক্রোফোন ব্যবহারের অনুমতি পাওয়া যায়নি: {{error}}", "Permission denied when accessing microphone: {{error}}": "মাইক্রোফোন ব্যবহারের অনুমতি পাওয়া যায়নি: {{error}}",
"Permissions": "", "Permissions": "",
"Perplexity API Key": "",
"Personalization": "ডিজিটাল বাংলা", "Personalization": "ডিজিটাল বাংলা",
"Pin": "", "Pin": "",
"Pinned": "", "Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "", "Reasoning Effort": "",
"Record voice": "ভয়েস রেকর্ড করুন", "Record voice": "ভয়েস রেকর্ড করুন",
"Redirecting you to Open WebUI Community": "আপনাকে OpenWebUI কমিউনিটিতে পাঠানো হচ্ছে", "Redirecting you to Open WebUI Community": "আপনাকে OpenWebUI কমিউনিটিতে পাঠানো হচ্ছে",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "", "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "", "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "", "References from": "",
"Refused when it shouldn't have": "যদি উপযুক্ত নয়, তবে রেজিগেনেট করা হচ্ছে", "Refused when it shouldn't have": "যদি উপযুক্ত নয়, তবে রেজিগেনেট করা হচ্ছে",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "", "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "কন্ঠস্বর নির্ধারণ করুন", "Set Voice": "কন্ঠস্বর নির্ধারণ করুন",
"Set whisper model": "", "Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "", "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "", "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "", "Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "", "Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "সেটিংসমূহ", "Settings": "সেটিংসমূহ",
"Settings saved successfully!": "সেটিংগুলো সফলভাবে সংরক্ষিত হয়েছে", "Settings saved successfully!": "সেটিংগুলো সফলভাবে সংরক্ষিত হয়েছে",
@@ -964,7 +980,7 @@
"System Prompt": "সিস্টেম প্রম্পট", "System Prompt": "সিস্টেম প্রম্পট",
"Tags Generation": "", "Tags Generation": "",
"Tags Generation Prompt": "", "Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "", "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "", "Talk to model": "",
"Tap to interrupt": "", "Tap to interrupt": "",
"Tasks": "", "Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "আপনার মতামত ধন্যবাদ!", "Thanks for your feedback!": "আপনার মতামত ধন্যবাদ!",
"The Application Account DN you bind with for search": "", "The Application Account DN you bind with for search": "",
"The base to search for users": "", "The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "", "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "", "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "স্কোর একটি 0.0 (0%) এবং 1.0 (100%) এর মধ্যে একটি মান হওয়া উচিত।", "The score should be a value between 0.0 (0%) and 1.0 (100%).": "স্কোর একটি 0.0 (0%) এবং 1.0 (100%) এর মধ্যে একটি মান হওয়া উচিত।",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "", "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "থিম", "Theme": "থিম",
"Thinking...": "", "Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "", "This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "এটা নিশ্চিত করে যে, আপনার গুরুত্বপূর্ণ আলোচনা নিরাপদে আপনার ব্যাকএন্ড ডেটাবেজে সংরক্ষিত আছে। ধন্যবাদ!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "এটা নিশ্চিত করে যে, আপনার গুরুত্বপূর্ণ আলোচনা নিরাপদে আপনার ব্যাকএন্ড ডেটাবেজে সংরক্ষিত আছে। ধন্যবাদ!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
"This will delete": "", "This will delete": "",
@@ -1132,7 +1148,7 @@
"Why?": "", "Why?": "",
"Widescreen Mode": "", "Widescreen Mode": "",
"Won": "", "Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "ওয়ার্কস্পেস", "Workspace": "ওয়ার্কস্পেস",
"Workspace Permissions": "", "Workspace Permissions": "",
"Write": "", "Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "", "Write your model template content here": "",
"Yesterday": "আগামী", "Yesterday": "আগামী",
"You": "আপনি", "You": "আপনি",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "", "You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(p. ex. `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(p. ex. `sh webui.sh --api`)",
"(latest)": "(últim)", "(latest)": "(últim)",
"{{ models }}": "{{ models }}", "{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} respostes", "{{COUNT}} Replies": "{{COUNT}} respostes",
"{{user}}'s Chats": "Els xats de {{user}}", "{{user}}'s Chats": "Els xats de {{user}}",
"{{webUIName}} Backend Required": "El Backend de {{webUIName}} és necessari", "{{webUIName}} Backend Required": "El Backend de {{webUIName}} és necessari",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Els administradors tenen accés a totes les eines en tot moment; els usuaris necessiten eines assignades per model a l'espai de treball.", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Els administradors tenen accés a totes les eines en tot moment; els usuaris necessiten eines assignades per model a l'espai de treball.",
"Advanced Parameters": "Paràmetres avançats", "Advanced Parameters": "Paràmetres avançats",
"Advanced Params": "Paràmetres avançats", "Advanced Params": "Paràmetres avançats",
"All": "",
"All Documents": "Tots els documents", "All Documents": "Tots els documents",
"All models deleted successfully": "Tots els models s'han eliminat correctament", "All models deleted successfully": "Tots els models s'han eliminat correctament",
"Allow Chat Controls": "Permetre els controls de xat", "Allow Chat Controls": "Permetre els controls de xat",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Permetre la interrupció de la veu en una trucada", "Allow Voice Interruption in Call": "Permetre la interrupció de la veu en una trucada",
"Allowed Endpoints": "Punts d'accés permesos", "Allowed Endpoints": "Punts d'accés permesos",
"Already have an account?": "Ja tens un compte?", "Already have an account?": "Ja tens un compte?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Alternativa al top_p, i pretén garantir un equilibri de qualitat i varietat. El paràmetre p representa la probabilitat mínima que es consideri un token, en relació amb la probabilitat del token més probable. Per exemple, amb p=0,05 i el token més probable amb una probabilitat de 0,9, es filtren els logits amb un valor inferior a 0,045. (Per defecte: 0.0)", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "Sempre", "Always": "Sempre",
"Amazing": "Al·lucinant", "Amazing": "Al·lucinant",
"an assistant": "un assistent", "an assistant": "un assistent",
@@ -93,6 +95,7 @@
"Are you sure?": "Estàs segur?", "Are you sure?": "Estàs segur?",
"Arena Models": "Models de l'Arena", "Arena Models": "Models de l'Arena",
"Artifacts": "Artefactes", "Artifacts": "Artefactes",
"Ask": "",
"Ask a question": "Fer una pregunta", "Ask a question": "Fer una pregunta",
"Assistant": "Assistent", "Assistant": "Assistent",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Punt de connexió a Bing Search V7", "Bing Search V7 Endpoint": "Punt de connexió a Bing Search V7",
"Bing Search V7 Subscription Key": "Clau de subscripció a Bing Search V7", "Bing Search V7 Subscription Key": "Clau de subscripció a Bing Search V7",
"Bocha Search API Key": "Clau API de Bocha Search", "Bocha Search API Key": "Clau API de Bocha Search",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Clau API de Brave Search", "Brave Search API Key": "Clau API de Brave Search",
"By {{name}}": "Per {{name}}", "By {{name}}": "Per {{name}}",
"Bypass Embedding and Retrieval": "", "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "Intèrpret de codi", "Code Interpreter": "Intèrpret de codi",
"Code Interpreter Engine": "Motor de l'intèrpret de codi", "Code Interpreter Engine": "Motor de l'intèrpret de codi",
"Code Interpreter Prompt Template": "Plantilla de la indicació de l'intèrpret de codi", "Code Interpreter Prompt Template": "Plantilla de la indicació de l'intèrpret de codi",
"Collapse": "",
"Collection": "Col·lecció", "Collection": "Col·lecció",
"Color": "Color", "Color": "Color",
"ComfyUI": "ComfyUI", "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "Confirma la teva nova contrasenya", "Confirm your new password": "Confirma la teva nova contrasenya",
"Connect to your own OpenAI compatible API endpoints.": "Connecta als teus propis punts de connexió de l'API compatible amb OpenAI", "Connect to your own OpenAI compatible API endpoints.": "Connecta als teus propis punts de connexió de l'API compatible amb OpenAI",
"Connections": "Connexions", "Connections": "Connexions",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Restringeix l'esforç de raonament dels models de raonament. Només aplicable a models de raonament de proveïdors específics que donen suport a l'esforç de raonament. (Per defecte: mitjà)", "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Posat en contacte amb l'administrador per accedir a WebUI", "Contact Admin for WebUI Access": "Posat en contacte amb l'administrador per accedir a WebUI",
"Content": "Contingut", "Content": "Contingut",
"Content Extraction Engine": "", "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Continuar amb el correu",
"Continue with LDAP": "Continuar amb LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Controlar com es divideix el text del missatge per a les sol·licituds TTS. 'Puntuació' divideix en frases, 'paràgrafs' divideix en paràgrafs i 'cap' manté el missatge com una cadena única.",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "Controlar la repetició de seqüències de tokens en el text generat. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 1,1) serà més indulgent. A l'1, està desactivat. (Per defecte: 1.1)",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Controls",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Controlar l'equilibri entre la coherència i la diversitat de la sortida. Un valor més baix donarà lloc a un text més enfocat i coherent. (Per defecte: 5.0)",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Copiat",
"Copied shared chat URL to clipboard!": "S'ha copiat l'URL compartida al porta-retalls!",
"Copied to clipboard": "Copiat al porta-retalls",
@@ -245,6 +250,7 @@
"Created At": "Creat el",
"Created by": "Creat per",
"CSV Import": "Importar CSV",
+"Ctrl+Enter to Send": "",
"Current Model": "Model actual",
"Current Password": "Contrasenya actual",
"Custom": "Personalitzat",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Activar el bloqueig de memòria (mlock) per evitar que les dades del model s'intercanviïn fora de la memòria RAM. Aquesta opció bloqueja el conjunt de pàgines de treball del model a la memòria RAM, assegurant-se que no s'intercanviaran al disc. Això pot ajudar a mantenir el rendiment evitant errors de pàgina i garantint un accés ràpid a les dades.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Activar l'assignació de memòria (mmap) per carregar les dades del model. Aquesta opció permet que el sistema utilitzi l'emmagatzematge en disc com a extensió de la memòria RAM tractant els fitxers de disc com si estiguessin a la memòria RAM. Això pot millorar el rendiment del model permetent un accés més ràpid a les dades. Tanmateix, és possible que no funcioni correctament amb tots els sistemes i pot consumir una quantitat important d'espai en disc.",
"Enable Message Rating": "Permetre la qualificació de missatges",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Activar el mostreig de Mirostat per controlar la perplexitat. (Per defecte: 0, 0 = Inhabilitat, 1 = Mirostat, 2 = Mirostat 2.0)",
+"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Permetre nous registres",
"Enabled": "Habilitat",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Assegura't que els teus fitxers CSV inclouen 4 columnes en aquest ordre: Nom, Correu electrònic, Contrasenya, Rol.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "Entra l'escala CFG (p.ex. 7.0)",
"Enter Chunk Overlap": "Introdueix la mida de solapament de blocs",
"Enter Chunk Size": "Introdueix la mida del bloc",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Introdueix la descripció",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "Introdueix el token de Jupyter",
"Enter Jupyter URL": "Introdueix la URL de Jupyter",
"Enter Kagi Search API Key": "Introdueix la clau API de Kagi Search",
+"Enter Key Behavior": "",
"Enter language codes": "Introdueix els codis de llenguatge",
"Enter Model ID": "Introdueix l'identificador del model",
"Enter model tag (e.g. {{modelTag}})": "Introdueix l'etiqueta del model (p. ex. {{modelTag}})",
"Enter Mojeek Search API Key": "Introdueix la clau API de Mojeek Search",
"Enter Number of Steps (e.g. 50)": "Introdueix el nombre de passos (p. ex. 50)",
+"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Entra l'URL (p. ex. https://user:password@host:port)",
"Enter reasoning effort": "Introdueix l'esforç de raonament",
"Enter Sampler (e.g. Euler a)": "Introdueix el mostrejador (p.ex. Euler a)",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Entra la URL pública de WebUI. Aquesta URL s'utilitzarà per generar els enllaços en les notificacions.",
"Enter Tika Server URL": "Introdueix l'URL del servidor Tika",
"Enter timeout in seconds": "Entra el temps màxim en segons",
+"Enter to Send": "",
"Enter Top K": "Introdueix Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Introdueix l'URL (p. ex. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Introdueix l'URL (p. ex. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "Exemple: mail",
"Example: ou=users,dc=foo,dc=example": "Exemple: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Exemple: sAMAccountName o uid o userPrincipalName",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Excloure",
"Execute code for analysis": "Executa el codi per analitzar-lo",
+"Expand": "",
"Experimental": "Experimental",
+"Explain": "",
+"Explain this section to me in more detail": "",
"Explore the cosmos": "Explorar el cosmos",
"Export": "Exportar",
"Export All Archived Chats": "Exportar tots els xats arxivats",
@@ -566,7 +580,7 @@
"Include": "Incloure",
"Include `--api-auth` flag when running stable-diffusion-webui": "Inclou `--api-auth` quan executis stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Inclou `--api` quan executis stable-diffusion-webui",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Influeix amb la rapidesa amb què l'algoritme respon als comentaris del text generat. Una taxa d'aprenentatge més baixa donarà lloc a ajustos més lents, mentre que una taxa d'aprenentatge més alta farà que l'algorisme sigui més sensible. (Per defecte: 0,1)",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Informació",
"Input commands": "Entra comandes",
"Install from Github URL": "Instal·lar des de l'URL de Github",
@@ -624,6 +638,7 @@
"Local": "Local",
"Local Models": "Models locals",
"Location access not allowed": "",
+"Logit Bias": "",
"Lost": "Perdut",
"LTR": "LTR",
"Made by Open WebUI Community": "Creat per la Comunitat OpenWebUI",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Permís denegat en accedir al micròfon",
"Permission denied when accessing microphone: {{error}}": "Permís denegat en accedir al micròfon: {{error}}",
"Permissions": "Permisos",
+"Perplexity API Key": "",
"Personalization": "Personalització",
"Pin": "Fixar",
"Pinned": "Fixat",
@@ -809,7 +825,7 @@
"Reasoning Effort": "Esforç de raonament",
"Record voice": "Enregistrar la veu",
"Redirecting you to Open WebUI Community": "Redirigint-te a la comunitat OpenWebUI",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Redueix la probabilitat de generar ximpleries. Un valor més alt (p. ex. 100) donarà respostes més diverses, mentre que un valor més baix (p. ex. 10) serà més conservador. (Per defecte: 40)",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Fes referència a tu mateix com a \"Usuari\" (p. ex., \"L'usuari està aprenent espanyol\")",
"References from": "Referències de",
"Refused when it shouldn't have": "Refusat quan no hauria d'haver estat",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Establir el nombre de fils de treball utilitzats per al càlcul. Aquesta opció controla quants fils s'utilitzen per processar les sol·licituds entrants simultàniament. Augmentar aquest valor pot millorar el rendiment amb càrregues de treball de concurrència elevada, però també pot consumir més recursos de CPU.",
"Set Voice": "Establir la veu",
"Set whisper model": "Establir el model whisper",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "Estableix un biaix pla contra tokens que han aparegut almenys una vegada. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 0,9) serà més indulgent. A 0, està desactivat. (Per defecte: 0)",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "Estableix un biaix d'escala contra tokens per penalitzar les repeticions, en funció de quantes vegades han aparegut. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 0,9) serà més indulgent. A 0, està desactivat. (Per defecte: 1.1)",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Establir fins a quin punt el model mira enrere per evitar la repetició. (Per defecte: 64, 0 = desactivat, -1 = num_ctx)",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Establir la llavor del nombre aleatori que s'utilitzarà per a la generació. Establir-ho a un número específic farà que el model generi el mateix text per a la mateixa sol·licitud. (Per defecte: aleatori)",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "Estableix la mida de la finestra de context utilitzada per generar el següent token. (Per defecte: 2048)",
+"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Establir les seqüències d'aturada a utilitzar. Quan es trobi aquest patró, el LLM deixarà de generar text. Es poden establir diversos patrons de parada especificant diversos paràmetres de parada separats en un fitxer model.",
"Settings": "Preferències",
"Settings saved successfully!": "Les preferències s'han desat correctament",
@@ -964,7 +980,7 @@
"System Prompt": "Indicació del Sistema",
"Tags Generation": "Generació d'etiquetes",
"Tags Generation Prompt": "Indicació per a la generació d'etiquetes",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "El mostreig sense cua s'utilitza per reduir l'impacte de tokens menys probables de la sortida. Un valor més alt (p. ex., 2,0) reduirà més l'impacte, mentre que un valor d'1,0 desactiva aquesta configuració. (per defecte: 1)",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Prem per interrompre",
"Tasks": "Tasques",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Gràcies pel teu comentari!",
"The Application Account DN you bind with for search": "El DN del compte d'aplicació per realitzar la cerca",
"The base to search for users": "La base per cercar usuaris",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "La mida del lot determina quantes sol·licituds de text es processen alhora. Una mida de lot més gran pot augmentar el rendiment i la velocitat del model, però també requereix més memòria. (Per defecte: 512)",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Els desenvolupadors d'aquest complement són voluntaris apassionats de la comunitat. Si trobeu útil aquest complement, considereu contribuir al seu desenvolupament.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "La classificació d'avaluació es basa en el sistema de qualificació Elo i s'actualitza en temps real.",
"The LDAP attribute that maps to the mail that users use to sign in.": "L'atribut LDAP que s'associa al correu que els usuaris utilitzen per iniciar la sessió.",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "La mida màxima del fitxer en MB. Si la mida del fitxer supera aquest límit, el fitxer no es carregarà.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "El nombre màxim de fitxers que es poden utilitzar alhora al xat. Si el nombre de fitxers supera aquest límit, els fitxers no es penjaran.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "El valor de puntuació hauria de ser entre 0.0 (0%) i 1.0 (100%).",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "La temperatura del model. Augmentar la temperatura farà que el model respongui de manera més creativa. (Per defecte: 0,8)",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "Pensant...",
"This action cannot be undone. Do you wish to continue?": "Aquesta acció no es pot desfer. Vols continuar?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Això assegura que les teves converses valuoses queden desades de manera segura a la teva base de dades. Gràcies!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Aquesta és una funció experimental, és possible que no funcioni com s'espera i està subjecta a canvis en qualsevol moment.",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Aquesta opció controla quants tokens es conserven en actualitzar el context. Per exemple, si s'estableix en 2, es conservaran els darrers 2 tokens del context de conversa. Preservar el context pot ajudar a mantenir la continuïtat d'una conversa, però pot reduir la capacitat de respondre a nous temes. (Per defecte: 24)",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Aquesta opció estableix el nombre màxim de tokens que el model pot generar en la seva resposta. Augmentar aquest límit permet que el model proporcioni respostes més llargues, però també pot augmentar la probabilitat que es generi contingut poc útil o irrellevant. (Per defecte: 128)",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Aquesta opció eliminarà tots els fitxers existents de la col·lecció i els substituirà per fitxers recentment penjats.",
"This response was generated by \"{{model}}\"": "Aquesta resposta l'ha generat el model \"{{model}}\"",
"This will delete": "Això eliminarà",
@@ -1132,7 +1148,7 @@
"Why?": "Per què?",
"Widescreen Mode": "Mode de pantalla ampla",
"Won": "Ha guanyat",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Funciona juntament amb top-k. Un valor més alt (p. ex., 0,95) donarà lloc a un text més divers, mentre que un valor més baix (p. ex., 0,5) generarà un text més concentrat i conservador. (Per defecte: 0,9)",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Espai de treball",
"Workspace Permissions": "Permisos de l'espai de treball",
"Write": "Escriure",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "Introdueix el contingut de la plantilla del teu model aquí",
"Yesterday": "Ahir",
"You": "Tu",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Només pots xatejar amb un màxim de {{maxCount}} fitxers alhora.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Pots personalitzar les teves interaccions amb els models de llenguatge afegint memòries mitjançant el botó 'Gestiona' que hi ha a continuació, fent-les més útils i adaptades a tu.",
"You cannot upload an empty file.": "No es pot pujar un arxiu buit.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(pananglitan `sh webui.sh --api`)",
"(latest)": "",
"{{ models }}": "",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "",
"{{webUIName}} Backend Required": "Backend {{webUIName}} gikinahanglan",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "advanced settings",
"Advanced Params": "",
"All": "",
"All Documents": "",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "Naa na kay account ?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "usa ka katabang",
@@ -93,6 +95,7 @@
"Are you sure?": "Sigurado ka ?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Koleksyon",
"Color": "",
"ComfyUI": "",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Mga koneksyon",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "Kontento",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "Kasamtangang modelo",
"Current Password": "Kasamtangang Password",
"Custom": "Custom",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "I-enable ang bag-ong mga rehistro",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Pagsulod sa block overlap",
"Enter Chunk Size": "Isulod ang block size",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Pagsulod sa template tag (e.g. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Pagsulod sa gidaghanon sa mga lakang (e.g. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Pagsulod sa Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Pagsulod sa URL (e.g. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Eksperimento",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "Iapil ang `--api` nga bandila kung nagdagan nga stable-diffusion-webui",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "",
"Input commands": "Pagsulod sa input commands",
"Install from Github URL": "",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "",
"Made by Open WebUI Community": "Gihimo sa komunidad sa OpenWebUI",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "Gidili ang pagtugot sa dihang nag-access sa mikropono: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "",
"Pin": "",
"Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "Irekord ang tingog",
"Redirecting you to Open WebUI Community": "Gi-redirect ka sa komunidad sa OpenWebUI",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Ibutang ang tingog",
"Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Mga setting",
"Settings saved successfully!": "Malampuson nga na-save ang mga setting!",
@@ -964,7 +980,7 @@
"System Prompt": "Madasig nga Sistema",
"Tags Generation": "",
"Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Kini nagsiguro nga ang imong bililhon nga mga panag-istoryahanay luwas nga natipig sa imong backend database. ",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1148,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "",
"Yesterday": "",
"You": "",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(např. `sh webui.sh --api`)",
"(latest)": "Nejnovější",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}'s konverzace",
"{{webUIName}} Backend Required": "Požadován {{webUIName}} Backend",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administrátoři mají přístup ke všem nástrojům kdykoliv; uživatelé potřebují mít nástroje přiřazené podle modelu ve workspace.",
"Advanced Parameters": "Pokročilé parametry",
"Advanced Params": "Pokročilé parametry",
"All": "",
"All Documents": "Všechny dokumenty",
"All models deleted successfully": "Všechny modely úspěšně odstraněny",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Povolit přerušení hlasu při hovoru",
"Allowed Endpoints": "",
"Already have an account?": "Už máte účet?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "asistent",
@@ -93,6 +95,7 @@
"Are you sure?": "Jste si jistý?",
"Arena Models": "Arena modely",
"Artifacts": "Artefakty",
"Ask": "",
"Ask a question": "Zeptejte se na otázku",
"Assistant": "Ano, jak vám mohu pomoci?",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Klíč API pro Brave Search",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "",
"Color": "Barva",
"ComfyUI": "ComfyUI.",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Připojení",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Kontaktujte administrátora pro přístup k webovému rozhraní.",
"Content": "Obsah",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Řízení, jak se text zprávy rozděluje pro požadavky TTS. 'Punctuation' rozděluje text na věty, 'paragraphs' rozděluje text na odstavce a 'none' udržuje zprávu jako jeden celý řetězec.",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Ovládací prvky",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Zkopírováno",
"Copied shared chat URL to clipboard!": "URL sdíleného chatu zkopírován do schránky!",
"Copied to clipboard": "Zkopírováno do schránky",
@@ -245,6 +250,7 @@
"Created At": "Vytvořeno dne",
"Created by": "Vytvořeno uživatelem",
"CSV Import": "CSV import",
"Ctrl+Enter to Send": "",
"Current Model": "Aktuální model",
"Current Password": "Aktuální heslo",
"Custom": "Na míru",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "Povolit hodnocení zpráv",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Povolit nové registrace",
"Enabled": "Povoleno",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Ujistěte se, že váš CSV soubor obsahuje 4 sloupce v tomto pořadí: Name, Email, Password, Role.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "Zadejte měřítko CFG (např. 7.0)",
"Enter Chunk Overlap": "Zadejte překryv části",
"Enter Chunk Size": "Zadejte velikost bloku",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Zadejte popis", "Enter description": "Zadejte popis",
"Enter Document Intelligence Endpoint": "", "Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "", "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
"Enter language codes": "Zadejte kódy jazyků",
"Enter Model ID": "Zadejte ID modelu",
"Enter model tag (e.g. {{modelTag}})": "Zadejte označení modelu (např. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Zadejte počet kroků (např. 50)",
+"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Zadejte vzorkovač (např. Euler a)",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Zadejte URL serveru Tika",
"Enter timeout in seconds": "",
+"Enter to Send": "",
"Enter Top K": "Zadejte horní K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Zadejte URL (např. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Zadejte URL (např. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Vyloučit",
"Execute code for analysis": "",
+"Expand": "",
"Experimental": "Experimentální",
+"Explain": "",
+"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Exportovat",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "Zahrnout",
"Include `--api-auth` flag when running stable-diffusion-webui": "Zahrňte přepínač `--api-auth` při spuštění stable-diffusion-webui.",
"Include `--api` flag when running stable-diffusion-webui": "Při spuštění stable-diffusion-webui zahrňte příznak `--api`.",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Vstupní příkazy",
"Install from Github URL": "Instalace z URL adresy Githubu",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "Lokální modely",
"Location access not allowed": "",
+"Logit Bias": "",
"Lost": "Ztracený",
"LTR": "LTR",
"Made by Open WebUI Community": "Vytvořeno komunitou OpenWebUI",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Přístup k mikrofonu byl odepřen",
"Permission denied when accessing microphone: {{error}}": "Oprávnění zamítnuto při přístupu k mikrofonu: {{error}}",
"Permissions": "",
+"Perplexity API Key": "",
"Personalization": "Personalizace",
"Pin": "",
"Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "Nahrát hlas",
"Redirecting you to Open WebUI Community": "Přesměrování na komunitu OpenWebUI",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Odkazujte na sebe jako na \"uživatele\" (např. \"Uživatel se učí španělsky\").",
"References from": "Reference z",
"Refused when it shouldn't have": "Odmítnuto, když nemělo být.",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Nastavit hlas",
"Set whisper model": "Nastavit model whisper",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Nastavení",
"Settings saved successfully!": "Nastavení byla úspěšně uložena!",
@@ -964,7 +980,7 @@
"System Prompt": "Systémový prompt",
"Tags Generation": "",
"Tags Generation Prompt": "Prompt pro generování značek",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Klepněte pro přerušení",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Děkujeme za vaši zpětnou vazbu!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Vývojáři stojící za tímto pluginem jsou zapálení dobrovolníci z komunity. Pokud považujete tento plugin za užitečný, zvažte příspěvek k jeho vývoji.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Hodnotící žebříček je založen na systému hodnocení Elo a je aktualizován v reálném čase.",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Maximální velikost souboru v MB. Pokud velikost souboru překročí tento limit, soubor nebude nahrán.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Maximální počet souborů, které mohou být použity najednou v chatu. Pokud počet souborů překročí tento limit, soubory nebudou nahrány.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Skóre by mělo být hodnotou mezi 0,0 (0%) a 1,0 (100%).",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Téma",
"Thinking...": "Přemýšlím...",
"This action cannot be undone. Do you wish to continue?": "Tuto akci nelze vrátit zpět. Přejete si pokračovat?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "To zajišťuje, že vaše cenné konverzace jsou bezpečně uloženy ve vaší backendové databázi. Děkujeme!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Jedná se o experimentální funkci, nemusí fungovat podle očekávání a může být kdykoliv změněna.",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Tato volba odstraní všechny existující soubory ve sbírce a nahradí je nově nahranými soubory.",
"This response was generated by \"{{model}}\"": "Tato odpověď byla vygenerována pomocí \"{{model}}\"",
"This will delete": "Tohle odstraní",
@@ -1132,7 +1148,7 @@
"Why?": "Proč?",
"Widescreen Mode": "Režim širokoúhlého zobrazení",
"Won": "Vyhrál",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "",
"Yesterday": "Včera",
"You": "Vy",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Můžete komunikovat pouze s maximálně {{maxCount}} soubor(y) najednou.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Můžete personalizovat své interakce s LLM pomocí přidávání vzpomínek prostřednictvím tlačítka 'Spravovat' níže, což je učiní pro vás užitečnějšími a lépe přizpůsobenými.",
"You cannot upload an empty file.": "Nemůžete nahrát prázdný soubor.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(f.eks. `sh webui.sh --api`)",
"(latest)": "(seneste)",
"{{ models }}": "{{ modeller }}",
+"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}s chats",
"{{webUIName}} Backend Required": "{{webUIName}} Backend kræves",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administratorer har adgang til alle værktøjer altid; brugere skal tilføjes værktøjer pr. model i hvert workspace.",
"Advanced Parameters": "Advancerede indstillinger",
"Advanced Params": "Advancerede indstillinger",
+"All": "",
"All Documents": "Alle dokumenter",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Tillad afbrydelser i stemme i opkald",
"Allowed Endpoints": "",
"Already have an account?": "Har du allerede en profil?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "en assistent",
@@ -93,6 +95,7 @@
"Are you sure?": "Er du sikker?",
"Arena Models": "",
"Artifacts": "Artifakter",
+"Ask": "",
"Ask a question": "Stil et spørgsmål",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API nøgle",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
+"Collapse": "",
"Collection": "Samling",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Forbindelser",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Kontakt din administrator for adgang til WebUI",
"Content": "Indhold",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Kontroller hvordan beskedens tekst bliver splittet til TTS requests. 'Punctuation' (tegnsætning) splitter i sætninger, 'paragraphs' splitter i paragraffer, og 'none' beholder beskeden som en samlet streng.",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Indstillinger",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Kopieret",
"Copied shared chat URL to clipboard!": "Link til deling kopieret til udklipsholder",
"Copied to clipboard": "Kopieret til udklipsholder",
@@ -245,6 +250,7 @@
"Created At": "Oprettet",
"Created by": "Oprettet af",
"CSV Import": "Importer CSV",
+"Ctrl+Enter to Send": "",
"Current Model": "Nuværende model",
"Current Password": "Nuværende password",
"Custom": "Custom",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "Aktiver rating af besked",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Aktiver nye signups",
"Enabled": "Aktiveret",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Sørg for at din CSV-fil indeholder 4 kolonner in denne rækkefølge: Name, Email, Password, Role.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "Indtast CFG-skala (f.eks. 7.0)",
"Enter Chunk Overlap": "Indtast overlapning af tekststykker",
"Enter Chunk Size": "Indtast størrelse af tekststykker",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
"Enter language codes": "Indtast sprogkoder",
"Enter Model ID": "Indtast model-ID",
"Enter model tag (e.g. {{modelTag}})": "Indtast modelmærke (f.eks. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Indtast antal trin (f.eks. 50)",
+"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Indtast sampler (f.eks. Euler a)",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Indtast Tika Server URL",
"Enter timeout in seconds": "",
+"Enter to Send": "",
"Enter Top K": "Indtast Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Indtast URL (f.eks. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Indtast URL (f.eks. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
+"Expand": "",
"Experimental": "Eksperimentel",
+"Explain": "",
+"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Eksportér",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
 "Include": "",
 "Include `--api-auth` flag when running stable-diffusion-webui": "Inkluder `--api-auth` flag, når du kører stable-diffusion-webui",
 "Include `--api` flag when running stable-diffusion-webui": "Inkluder `--api` flag, når du kører stable-diffusion-webui",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
 "Info": "Info",
 "Input commands": "Inputkommandoer",
 "Install from Github URL": "Installer fra Github URL",
@@ -624,6 +638,7 @@
 "Local": "",
 "Local Models": "Lokale modeller",
 "Location access not allowed": "",
+"Logit Bias": "",
 "Lost": "",
 "LTR": "LTR",
 "Made by Open WebUI Community": "Lavet af OpenWebUI Community",
@@ -764,6 +779,7 @@
 "Permission denied when accessing microphone": "Tilladelse nægtet ved adgang til mikrofon",
 "Permission denied when accessing microphone: {{error}}": "Tilladelse nægtet ved adgang til mikrofon: {{error}}",
 "Permissions": "",
+"Perplexity API Key": "",
 "Personalization": "Personalisering",
 "Pin": "Fastgør",
 "Pinned": "Fastgjort",
@@ -809,7 +825,7 @@
 "Reasoning Effort": "",
 "Record voice": "Optag stemme",
 "Redirecting you to Open WebUI Community": "Omdirigerer dig til OpenWebUI Community",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
 "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Referer til dig selv som \"Bruger\" (f.eks. \"Bruger lærer spansk\")",
 "References from": "",
 "Refused when it shouldn't have": "Afvist, når den ikke burde have været det",
@@ -918,11 +934,11 @@
 "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
 "Set Voice": "Indstil stemme",
 "Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
+"Sets the size of the context window used to generate the next token.": "",
 "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
 "Settings": "Indstillinger",
 "Settings saved successfully!": "Indstillinger gemt!",
@@ -964,7 +980,7 @@
 "System Prompt": "Systemprompt",
 "Tags Generation": "",
 "Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
 "Talk to model": "",
 "Tap to interrupt": "Tryk for at afbryde",
 "Tasks": "",
@@ -979,7 +995,7 @@
 "Thanks for your feedback!": "Tak for din feedback!",
 "The Application Account DN you bind with for search": "",
 "The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
 "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Udviklerne bag dette plugin er passionerede frivillige fra fællesskabet. Hvis du finder dette plugin nyttigt, kan du overveje at bidrage til dets udvikling.",
 "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
 "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
 "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Den maksimale filstørrelse i MB. Hvis filstørrelsen overstiger denne grænse, uploades filen ikke.",
 "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Det maksimale antal filer, der kan bruges på én gang i chatten. Hvis antallet af filer overstiger denne grænse, uploades filerne ikke.",
 "The score should be a value between 0.0 (0%) and 1.0 (100%).": "Scoren skal være en værdi mellem 0,0 (0%) og 1,0 (100%).",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
 "Theme": "Tema",
 "Thinking...": "Tænker...",
 "This action cannot be undone. Do you wish to continue?": "Denne handling kan ikke fortrydes. Vil du fortsætte?",
 "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Dette sikrer, at dine værdifulde samtaler gemmes sikkert i din backend-database. Tak!",
 "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Dette er en eksperimentel funktion, den fungerer muligvis ikke som forventet og kan ændres når som helst.",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
 "This option will delete all existing files in the collection and replace them with newly uploaded files.": "Denne indstilling sletter alle eksisterende filer i samlingen og erstatter dem med nyligt uploadede filer.",
 "This response was generated by \"{{model}}\"": "",
 "This will delete": "Dette vil slette",
@@ -1132,7 +1148,7 @@
 "Why?": "",
 "Widescreen Mode": "Widescreen-tilstand",
 "Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
 "Workspace": "Arbejdsområde",
 "Workspace Permissions": "",
 "Write": "",
@@ -1142,6 +1158,7 @@
 "Write your model template content here": "",
 "Yesterday": "I går",
 "You": "Du",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
 "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Du kan kun chatte med maksimalt {{maxCount}} fil(er) ad gangen.",
 "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Du kan personliggøre dine interaktioner med LLM'er ved at tilføje minder via knappen 'Administrer' nedenfor, hvilket gør dem mere nyttige og skræddersyet til dig.",
 "You cannot upload an empty file.": "",

View file

@@ -5,6 +5,7 @@
 "(e.g. `sh webui.sh --api`)": "(z. B. `sh webui.sh --api`)",
 "(latest)": "(neueste)",
 "{{ models }}": "{{ Modelle }}",
+"{{COUNT}} hidden lines": "",
 "{{COUNT}} Replies": "{{COUNT}} Antworten",
 "{{user}}'s Chats": "{{user}}s Unterhaltungen",
 "{{webUIName}} Backend Required": "{{webUIName}}-Backend erforderlich",
@@ -13,7 +14,7 @@
 "A task model is used when performing tasks such as generating titles for chats and web search queries": "Aufgabenmodelle können Unterhaltungstitel oder Websuchanfragen generieren.",
 "a user": "ein Benutzer",
 "About": "Über",
-"Accept autocomplete generation / Jump to prompt variable": "",
+"Accept autocomplete generation / Jump to prompt variable": "Automatische Vervollständigung akzeptieren / Zur Prompt-Variable springen",
 "Access": "Zugang",
 "Access Control": "Zugangskontrolle",
 "Accessible to all users": "Für alle Benutzer zugänglich",
@@ -21,7 +22,7 @@
 "Account Activation Pending": "Kontoaktivierung ausstehend",
 "Accurate information": "Präzise Information(en)",
 "Actions": "Aktionen",
-"Activate": "",
+"Activate": "Aktivieren",
 "Activate this command by typing \"/{{COMMAND}}\" to chat input.": "Aktivieren Sie diesen Befehl, indem Sie \"/{{COMMAND}}\" in die Chat-Eingabe eingeben.",
 "Active Users": "Aktive Benutzer",
 "Add": "Hinzufügen",
@@ -51,6 +52,7 @@
 "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administratoren haben jederzeit Zugriff auf alle Werkzeuge. Benutzer können im Arbeitsbereich zugewiesen.",
 "Advanced Parameters": "Erweiterte Parameter",
 "Advanced Params": "Erweiterte Parameter",
+"All": "",
 "All Documents": "Alle Dokumente",
 "All models deleted successfully": "Alle Modelle erfolgreich gelöscht",
 "Allow Chat Controls": "Chat-Steuerung erlauben",
@@ -64,7 +66,7 @@
 "Allow Voice Interruption in Call": "Unterbrechung durch Stimme im Anruf zulassen",
 "Allowed Endpoints": "Erlaubte Endpunkte",
 "Already have an account?": "Haben Sie bereits einen Account?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Alternative zu top_p und zielt darauf ab, ein Gleichgewicht zwischen Qualität und Vielfalt zu gewährleisten. Der Parameter p repräsentiert die Mindestwahrscheinlichkeit für ein Token, um berücksichtigt zu werden, relativ zur Wahrscheinlichkeit des wahrscheinlichsten Tokens. Zum Beispiel, bei p=0.05 und das wahrscheinlichste Token hat eine Wahrscheinlichkeit von 0.9, werden Logits mit einem Wert von weniger als 0.045 herausgefiltert. (Standard: 0.0)",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
 "Always": "Immer",
 "Amazing": "Fantastisch",
 "an assistant": "ein Assistent",
@@ -86,23 +88,24 @@
 "Archive All Chats": "Alle Unterhaltungen archivieren",
 "Archived Chats": "Archivierte Unterhaltungen",
 "archived-chat-export": "archivierter-chat-export",
-"Are you sure you want to clear all memories? This action cannot be undone.": "",
+"Are you sure you want to clear all memories? This action cannot be undone.": "Sind Sie sicher, dass Sie alle Erinnerungen löschen möchten? Diese Handlung kann nicht rückgängig gemacht werden.",
 "Are you sure you want to delete this channel?": "Sind Sie sicher, dass Sie diesen Kanal löschen möchten?",
 "Are you sure you want to delete this message?": "Sind Sie sicher, dass Sie diese Nachricht löschen möchten?",
 "Are you sure you want to unarchive all archived chats?": "Sind Sie sicher, dass Sie alle archivierten Unterhaltungen wiederherstellen möchten?",
 "Are you sure?": "Sind Sie sicher?",
 "Arena Models": "Arena-Modelle",
 "Artifacts": "Artefakte",
+"Ask": "",
 "Ask a question": "Stellen Sie eine Frage",
 "Assistant": "Assistent",
-"Attach file from knowledge": "",
+"Attach file from knowledge": "Datei aus Wissensspeicher anhängen",
 "Attention to detail": "Aufmerksamkeit für Details",
 "Attribute for Mail": "Attribut für E-Mail",
 "Attribute for Username": "Attribut für Benutzername",
 "Audio": "Audio",
 "August": "August",
 "Authenticate": "Authentifizieren",
-"Authentication": "",
+"Authentication": "Authentifizierung",
 "Auto-Copy Response to Clipboard": "Antwort automatisch in die Zwischenablage kopieren",
 "Auto-playback response": "Antwort automatisch abspielen",
 "Autocomplete Generation": "Automatische Vervollständigung",
@@ -127,11 +130,12 @@
 "Bing Search V7 Endpoint": "Bing Search V7-Endpunkt",
 "Bing Search V7 Subscription Key": "Bing Search V7-Abonnement-Schlüssel",
 "Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
 "Brave Search API Key": "Brave Search API-Schlüssel",
 "By {{name}}": "Von {{name}}",
-"Bypass Embedding and Retrieval": "",
+"Bypass Embedding and Retrieval": "Embedding und Retrieval umgehen",
 "Bypass SSL verification for Websites": "SSL-Überprüfung für Webseiten umgehen",
-"Calendar": "",
+"Calendar": "Kalender",
 "Call": "Anrufen",
 "Call feature is not supported when using Web STT engine": "Die Anruffunktion wird nicht unterstützt, wenn die Web-STT-Engine verwendet wird.",
 "Camera": "Kamera",
@@ -170,7 +174,7 @@
 "Click here to": "Klicken Sie hier, um",
 "Click here to download user import template file.": "Klicken Sie hier, um die Vorlage für den Benutzerimport herunterzuladen.",
 "Click here to learn more about faster-whisper and see the available models.": "Klicken Sie hier, um mehr über faster-whisper zu erfahren und die verfügbaren Modelle zu sehen.",
-"Click here to see available models.": "",
+"Click here to see available models.": "Klicken Sie hier, um die verfügbaren Modelle anzuzeigen.",
 "Click here to select": "Klicke Sie zum Auswählen hier",
 "Click here to select a csv file.": "Klicken Sie zum Auswählen einer CSV-Datei hier.",
 "Click here to select a py file.": "Klicken Sie zum Auswählen einer py-Datei hier.",
@@ -183,13 +187,14 @@
 "Clone of {{TITLE}}": "Klon von {{TITLE}}",
 "Close": "Schließen",
 "Code execution": "Codeausführung",
-"Code Execution": "",
+"Code Execution": "Codeausführung",
 "Code Execution Engine": "",
 "Code Execution Timeout": "",
 "Code formatted successfully": "Code erfolgreich formatiert",
 "Code Interpreter": "Code-Interpreter",
 "Code Interpreter Engine": "",
 "Code Interpreter Prompt Template": "",
+"Collapse": "",
 "Collection": "Kollektion",
 "Color": "Farbe",
 "ComfyUI": "ComfyUI",
@@ -206,9 +211,9 @@
 "Confirm Password": "Passwort bestätigen",
 "Confirm your action": "Bestätigen Sie Ihre Aktion.",
 "Confirm your new password": "Neues Passwort bestätigen",
-"Connect to your own OpenAI compatible API endpoints.": "",
+"Connect to your own OpenAI compatible API endpoints.": "Verbinden Sie sich zu Ihren OpenAI-kompatiblen Endpunkten.",
 "Connections": "Verbindungen",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Beschränkt den Aufwand für das Schlussfolgern bei Schlussfolgerungsmodellen. Nur anwendbar auf Schlussfolgerungsmodelle von spezifischen Anbietern, die den Schlussfolgerungsaufwand unterstützen. (Standard: medium)",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
 "Contact Admin for WebUI Access": "Kontaktieren Sie den Administrator für den Zugriff auf die Weboberfläche",
 "Content": "Info",
 "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "Mit Email fortfahren",
 "Continue with LDAP": "Mit LDAP fortfahren",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Kontrollieren Sie, wie Nachrichtentext für TTS-Anfragen aufgeteilt wird. 'Punctuation' teilt in Sätze auf, 'paragraphs' teilt in Absätze auf und 'none' behält die Nachricht als einzelnen String.",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "Steuerung",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Kontrolliert das Gleichgewicht zwischen Kohärenz und Vielfalt des Ausgabetextes. Ein niedrigerer Wert führt zu fokussierterem und kohärenterem Text. (Standard: 5.0)",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "Kopiert",
 "Copied shared chat URL to clipboard!": "Freigabelink in die Zwischenablage kopiert!",
 "Copied to clipboard": "In die Zwischenablage kopiert",
@@ -230,7 +235,7 @@
 "Copy Link": "Link kopieren",
 "Copy to clipboard": "In die Zwischenablage kopieren",
 "Copying to clipboard was successful!": "Das Kopieren in die Zwischenablage war erfolgreich!",
-"CORS must be properly configured by the provider to allow requests from Open WebUI.": "",
+"CORS must be properly configured by the provider to allow requests from Open WebUI.": "CORS muss vom Anbieter korrekt konfiguriert werden, um Anfragen von Open WebUI zuzulassen.",
 "Create": "Erstellen",
 "Create a knowledge base": "Wissensspeicher erstellen",
 "Create a model": "Modell erstellen",
@@ -245,10 +250,11 @@
 "Created At": "Erstellt am",
 "Created by": "Erstellt von",
 "CSV Import": "CSV-Import",
+"Ctrl+Enter to Send": "",
 "Current Model": "Aktuelles Modell",
 "Current Password": "Aktuelles Passwort",
 "Custom": "Benutzerdefiniert",
-"Danger Zone": "",
+"Danger Zone": "Gefahrenzone",
 "Dark": "Dunkel",
 "Database": "Datenbank",
 "December": "Dezember",
@@ -286,9 +292,9 @@
 "Describe your knowledge base and objectives": "Beschreibe deinen Wissensspeicher und deine Ziele",
 "Description": "Beschreibung",
 "Didn't fully follow instructions": "Nicht genau den Answeisungen gefolgt",
-"Direct Connections": "",
-"Direct Connections allow users to connect to their own OpenAI compatible API endpoints.": "",
-"Direct Connections settings updated": "",
+"Direct Connections": "Direktverbindungen",
+"Direct Connections allow users to connect to their own OpenAI compatible API endpoints.": "Direktverbindungen ermöglichen es Benutzern, sich mit ihren eigenen OpenAI-kompatiblen API-Endpunkten zu verbinden.",
+"Direct Connections settings updated": "Direktverbindungs-Einstellungen aktualisiert",
 "Disabled": "Deaktiviert",
 "Discover a function": "Entdecken Sie weitere Funktionen",
 "Discover a model": "Entdecken Sie weitere Modelle",
@@ -321,14 +327,14 @@
"Don't like the style": "schlechter Schreibstil",
"Done": "Erledigt",
"Download": "Exportieren",
"Download as SVG": "Als SVG exportieren",
"Download canceled": "Exportierung abgebrochen",
"Download Database": "Datenbank exportieren",
"Drag and drop a file to upload or select a file to view": "Ziehen Sie eine Datei zum Hochladen oder wählen Sie eine Datei zum Anzeigen aus",
"Draw": "Zeichnen",
"Drop any files here to add to the conversation": "Ziehen Sie beliebige Dateien hierher, um sie der Unterhaltung hinzuzufügen",
"e.g. '30s','10m'. Valid time units are 's', 'm', 'h'.": "z. B. '30s','10m'. Gültige Zeiteinheiten sind 's', 'm', 'h'.",
"e.g. 60": "z. B. 60",
"e.g. A filter to remove profanity from text": "z. B. Ein Filter, um Schimpfwörter aus Text zu entfernen",
"e.g. My Filter": "z. B. Mein Filter",
"e.g. My Tools": "z. B. Meine Werkzeuge",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Aktiviere Memory Locking (mlock), um zu verhindern, dass Modelldaten aus dem RAM ausgelagert werden. Diese Option sperrt die Arbeitsseiten des Modells im RAM, um sicherzustellen, dass sie nicht auf die Festplatte ausgelagert werden. Dies kann die Leistung verbessern, indem Page Faults vermieden und ein schneller Datenzugriff sichergestellt werden.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Aktiviere Memory Mapping (mmap), um Modelldaten zu laden. Diese Option ermöglicht es dem System, den Festplattenspeicher als Erweiterung des RAM zu verwenden, indem Festplattendateien so behandelt werden, als ob sie im RAM wären. Dies kann die Modellleistung verbessern, indem ein schnellerer Datenzugriff ermöglicht wird. Es kann jedoch nicht auf allen Systemen korrekt funktionieren und einen erheblichen Teil des Festplattenspeichers beanspruchen.",
"Enable Message Rating": "Nachrichtenbewertung aktivieren",
"Enable Mirostat sampling for controlling perplexity.": "Mirostat-Sampling zur Steuerung der Perplexität aktivieren.",
"Enable New Sign Ups": "Registrierung erlauben",
"Enabled": "Aktiviert",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Stellen Sie sicher, dass Ihre CSV-Datei 4 Spalten in dieser Reihenfolge enthält: Name, E-Mail, Passwort, Rolle.",
@@ -375,10 +381,11 @@
"Enter CFG Scale (e.g. 7.0)": "Geben Sie die CFG-Skala ein (z. B. 7.0)",
"Enter Chunk Overlap": "Geben Sie die Blocküberlappung ein",
"Enter Chunk Size": "Geben Sie die Blockgröße ein",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "Geben Sie kommagetrennte \"token:bias_value\"-Paare ein (Beispiel: 5432:100, 413:-100)",
"Enter description": "Geben Sie eine Beschreibung ein",
"Enter Document Intelligence Endpoint": "Geben Sie den Document-Intelligence-Endpunkt ein",
"Enter Document Intelligence Key": "Geben Sie den Document-Intelligence-Schlüssel ein",
"Enter domains separated by commas (e.g., example.com,site.org)": "Geben Sie die Domains durch Kommas getrennt ein (z. B. example.com,site.org)",
"Enter Exa API Key": "Geben Sie den Exa-API-Schlüssel ein",
"Enter Github Raw URL": "Geben Sie die Github Raw-URL ein",
"Enter Google PSE API Key": "Geben Sie den Google PSE-API-Schlüssel ein",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "Geben Sie das Jupyter-Token ein",
"Enter Jupyter URL": "Geben Sie die Jupyter-URL ein",
"Enter Kagi Search API Key": "Geben Sie den Kagi Search API-Schlüssel ein",
"Enter Key Behavior": "Verhalten der Enter-Taste",
"Enter language codes": "Geben Sie die Sprachcodes ein",
"Enter Model ID": "Geben Sie die Modell-ID ein",
"Enter model tag (e.g. {{modelTag}})": "Geben Sie den Model-Tag ein",
"Enter Mojeek Search API Key": "Geben Sie den Mojeek Search API-Schlüssel ein",
"Enter Number of Steps (e.g. 50)": "Geben Sie die Anzahl an Schritten ein (z. B. 50)",
"Enter Perplexity API Key": "Geben Sie den Perplexity-API-Schlüssel ein",
"Enter proxy URL (e.g. https://user:password@host:port)": "Geben Sie die Proxy-URL ein (z. B. https://user:password@host:port)",
"Enter reasoning effort": "Geben Sie den Schlussfolgerungsaufwand ein",
"Enter Sampler (e.g. Euler a)": "Geben Sie den Sampler ein (z. B. Euler a)",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Geben Sie die öffentliche URL Ihrer WebUI ein. Diese URL wird verwendet, um Links in den Benachrichtigungen zu generieren.",
"Enter Tika Server URL": "Geben Sie die Tika-Server-URL ein",
"Enter timeout in seconds": "Geben Sie das Zeitlimit in Sekunden ein",
"Enter to Send": "Enter zum Senden",
"Enter Top K": "Geben Sie Top K ein",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Geben Sie die URL ein (z. B. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Geben Sie die URL ein (z. B. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "Beispiel: mail",
"Example: ou=users,dc=foo,dc=example": "Beispiel: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Beispiel: sAMAccountName oder uid oder userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "Die Anzahl der Plätze in Ihrer Lizenz wurde überschritten. Bitte kontaktieren Sie den Support, um die Anzahl der Plätze zu erhöhen.",
"Exclude": "Ausschließen",
"Execute code for analysis": "Code für Analyse ausführen",
"Expand": "Erweitern",
"Experimental": "Experimentell",
"Explain": "Erklären",
"Explain this section to me in more detail": "Erkläre mir diesen Abschnitt ausführlicher",
"Explore the cosmos": "Erforschen Sie das Universum",
"Export": "Exportieren",
"Export All Archived Chats": "Alle archivierten Unterhaltungen exportieren",
@@ -464,7 +478,7 @@
"Failed to save models configuration": "Fehler beim Speichern der Modellkonfiguration",
"Failed to update settings": "Fehler beim Aktualisieren der Einstellungen",
"Failed to upload file.": "Fehler beim Hochladen der Datei.",
"Features": "Funktionalitäten",
"Features Permissions": "Funktionen-Berechtigungen",
"February": "Februar",
"Feedback History": "Feedback-Verlauf",
@@ -566,7 +580,7 @@
"Include": "Einschließen",
"Include `--api-auth` flag when running stable-diffusion-webui": "Fügen Sie beim Ausführen von stable-diffusion-webui die Option `--api-auth` hinzu",
"Include `--api` flag when running stable-diffusion-webui": "Fügen Sie beim Ausführen von stable-diffusion-webui die Option `--api` hinzu",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "Beeinflusst, wie schnell der Algorithmus auf Feedback aus dem generierten Text reagiert. Eine niedrigere Lernrate führt zu langsameren Anpassungen, während eine höhere Lernrate den Algorithmus reaktionsschneller macht.",
"Info": "Info",
"Input commands": "Eingabebefehle",
"Install from Github URL": "Installiere von der Github-URL",
@@ -613,24 +627,25 @@
"Leave empty to include all models from \"{{URL}}/models\" endpoint": "Leer lassen, um alle Modelle vom \"{{URL}}/models\"-Endpunkt einzuschließen",
"Leave empty to include all models or select specific models": "Leer lassen, um alle Modelle einzuschließen oder spezifische Modelle auszuwählen",
"Leave empty to use the default prompt, or enter a custom prompt": "Leer lassen, um den Standardprompt zu verwenden, oder geben Sie einen benutzerdefinierten Prompt ein",
"Leave model field empty to use the default model.": "Modellfeld leer lassen, um das Standardmodell zu verwenden.",
"License": "Lizenz",
"Light": "Hell",
"Listening...": "Höre zu...",
"Llama.cpp": "Llama.cpp",
"LLMs can make mistakes. Verify important information.": "LLMs können Fehler machen. Überprüfe wichtige Informationen.",
"Loader": "",
"Loading Kokoro.js...": "Lade Kokoro.js...",
"Local": "Lokal",
"Local Models": "Lokale Modelle",
"Location access not allowed": "Standortzugriff nicht erlaubt",
"Logit Bias": "Logit-Bias",
"Lost": "Verloren",
"LTR": "LTR",
"Made by Open WebUI Community": "Von der OpenWebUI-Community",
"Make sure to enclose them with": "Umschließe Variablen mit",
"Make sure to export a workflow.json file as API format from ComfyUI.": "Stellen Sie sicher, dass Sie eine workflow.json-Datei im API-Format von ComfyUI exportieren.",
"Manage": "Verwalten",
"Manage Direct Connections": "Direktverbindungen verwalten",
"Manage Models": "Modelle verwalten",
"Manage Ollama": "Ollama verwalten",
"Manage Ollama API Connections": "Ollama-API-Verbindungen verwalten",
@@ -697,7 +712,7 @@
"No HTML, CSS, or JavaScript content found.": "Keine HTML-, CSS- oder JavaScript-Inhalte gefunden.",
"No inference engine with management support found": "Keine Inferenz-Engine mit Management-Unterstützung gefunden",
"No knowledge found": "Kein Wissen gefunden",
"No memories to clear": "Keine Erinnerungen zum Entfernen",
"No model IDs": "Keine Modell-IDs",
"No models found": "Keine Modelle gefunden",
"No models selected": "Keine Modelle ausgewählt",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Zugriff auf das Mikrofon verweigert",
"Permission denied when accessing microphone: {{error}}": "Zugriff auf das Mikrofon verweigert: {{error}}",
"Permissions": "Berechtigungen",
"Perplexity API Key": "Perplexity-API-Schlüssel",
"Personalization": "Personalisierung",
"Pin": "Anheften",
"Pinned": "Angeheftet",
@@ -776,7 +792,7 @@
"Plain text (.txt)": "Nur Text (.txt)",
"Playground": "Testumgebung",
"Please carefully review the following warnings:": "Bitte überprüfen Sie die folgenden Warnungen sorgfältig:",
"Please do not close the settings page while loading the model.": "Bitte schließen Sie die Einstellungsseite nicht, während das Modell geladen wird.",
"Please enter a prompt": "Bitte geben Sie einen Prompt ein",
"Please fill in all fields.": "Bitte füllen Sie alle Felder aus.",
"Please select a model first.": "Bitte wählen Sie zuerst ein Modell aus.",
@@ -809,7 +825,7 @@
"Reasoning Effort": "Schlussfolgerungsaufwand",
"Record voice": "Stimme aufnehmen",
"Redirecting you to Open WebUI Community": "Sie werden zur OpenWebUI-Community weitergeleitet",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "Reduziert die Wahrscheinlichkeit, Unsinn zu generieren. Ein höherer Wert (z. B. 100) liefert vielfältigere Antworten, während ein niedrigerer Wert (z. B. 10) konservativer ist.",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Beziehen Sie sich auf sich selbst als \"Benutzer\" (z. B. \"Benutzer lernt Spanisch\")",
"References from": "Referenzen aus",
"Refused when it shouldn't have": "Abgelehnt, obwohl es nicht hätte abgelehnt werden sollen",
@@ -821,7 +837,7 @@
"Rename": "Umbenennen",
"Reorder Models": "Modelle neu anordnen",
"Repeat Last N": "Wiederhole die letzten N",
"Repeat Penalty (Ollama)": "Wiederholungsstrafe (Ollama)",
"Reply in Thread": "Im Thread antworten",
"Request Mode": "Anforderungsmodus",
"Reranking Model": "Reranking-Modell",
@@ -885,7 +901,7 @@
"Select a pipeline": "Wählen Sie eine Pipeline",
"Select a pipeline url": "Wählen Sie eine Pipeline-URL",
"Select a tool": "Wählen Sie ein Werkzeug",
"Select an auth method": "Wählen Sie eine Authentifizierungsmethode",
"Select an Ollama instance": "Wählen Sie eine Ollama-Instanz",
"Select Engine": "Engine auswählen",
"Select Knowledge": "Wissensdatenbank auswählen",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Legt die Anzahl der für die Berechnung verwendeten Worker-Threads fest. Diese Option steuert, wie viele Threads verwendet werden, um eingehende Anfragen gleichzeitig zu verarbeiten. Eine Erhöhung dieses Wertes kann die Leistung bei stark parallelen Arbeitslasten verbessern, kann jedoch auch mehr CPU-Ressourcen verbrauchen.",
"Set Voice": "Stimme festlegen",
"Set whisper model": "Whisper-Modell festlegen",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "Setzt eine konstante Strafe für Tokens, die mindestens einmal aufgetreten sind. Ein höherer Wert (z. B. 1,5) bestraft Wiederholungen stärker, während ein niedrigerer Wert (z. B. 0,9) nachsichtiger ist. Bei 0 ist diese Option deaktiviert.",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "Setzt eine skalierende Strafe für Tokens, um Wiederholungen abhängig davon zu bestrafen, wie oft sie aufgetreten sind. Ein höherer Wert (z. B. 1,5) bestraft Wiederholungen stärker, während ein niedrigerer Wert (z. B. 0,9) nachsichtiger ist. Bei 0 ist diese Option deaktiviert.",
"Sets how far back for the model to look back to prevent repetition.": "Legt fest, wie weit das Modell zurückblicken soll, um Wiederholungen zu verhindern.",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "Legt den Zufallszahlengenerator-Seed für die Generierung fest. Wenn dieser auf eine bestimmte Zahl gesetzt wird, erzeugt das Modell denselben Text für denselben Prompt.",
"Sets the size of the context window used to generate the next token.": "Legt die Größe des Kontextfensters fest, das zur Generierung des nächsten Tokens verwendet wird.",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Legt die zu verwendenden Stoppsequenzen fest. Wenn dieses Muster erkannt wird, stoppt das LLM die Textgenerierung und gibt zurück. Mehrere Stoppmuster können festgelegt werden, indem mehrere separate Stopp-Parameter in einer Modelldatei angegeben werden.",
"Settings": "Einstellungen",
"Settings saved successfully!": "Einstellungen erfolgreich gespeichert!",
@@ -964,10 +980,10 @@
"System Prompt": "System-Prompt",
"Tags Generation": "Tag-Generierung",
"Tags Generation Prompt": "Prompt für Tag-Generierung",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "Tail-Free Sampling wird verwendet, um den Einfluss weniger wahrscheinlicher Tokens auf die Ausgabe zu reduzieren. Ein höherer Wert (z. B. 2.0) reduziert den Einfluss stärker, während ein Wert von 1.0 diese Einstellung deaktiviert.",
"Talk to model": "Mit dem Modell sprechen",
"Tap to interrupt": "Zum Unterbrechen tippen",
"Tasks": "Aufgaben",
"Tavily API Key": "Tavily-API-Schlüssel",
"Tell us more:": "Erzähl uns mehr:",
"Temperature": "Temperatur",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Danke für Ihr Feedback!",
"The Application Account DN you bind with for search": "Der Anwendungs-Konto-DN, mit dem Sie für die Suche binden",
"The base to search for users": "Die Basis, in der nach Benutzern gesucht wird",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "Die Batch-Größe bestimmt, wie viele Textanfragen gleichzeitig verarbeitet werden. Eine größere Batch-Größe kann die Leistung und Geschwindigkeit des Modells erhöhen, erfordert jedoch auch mehr Speicher.",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Die Entwickler hinter diesem Plugin sind leidenschaftliche Freiwillige aus der Community. Wenn Sie dieses Plugin hilfreich finden, erwägen Sie bitte, zu seiner Entwicklung beizutragen.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Die Bewertungs-Bestenliste basiert auf dem Elo-Bewertungssystem und wird in Echtzeit aktualisiert.",
"The LDAP attribute that maps to the mail that users use to sign in.": "Das LDAP-Attribut, das der Mail zugeordnet ist, die Benutzer zum Anmelden verwenden.",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Die maximale Dateigröße in MB. Wenn die Dateigröße dieses Limit überschreitet, wird die Datei nicht hochgeladen.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Die maximale Anzahl von Dateien, die gleichzeitig in der Unterhaltung verwendet werden können. Wenn die Anzahl der Dateien dieses Limit überschreitet, werden die Dateien nicht hochgeladen.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Die Punktzahl sollte ein Wert zwischen 0,0 (0 %) und 1,0 (100 %) sein.",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "Die Temperatur des Modells. Eine Erhöhung der Temperatur führt dazu, dass das Modell kreativer antwortet.",
"Theme": "Design",
"Thinking...": "Denke nach...",
"This action cannot be undone. Do you wish to continue?": "Diese Aktion kann nicht rückgängig gemacht werden. Möchten Sie fortfahren?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Dies stellt sicher, dass Ihre wertvollen Unterhaltungen sicher in Ihrer Backend-Datenbank gespeichert werden. Vielen Dank!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Dies ist eine experimentelle Funktion, sie funktioniert möglicherweise nicht wie erwartet und kann jederzeit geändert werden.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "Diese Option steuert, wie viele Tokens beim Aktualisieren des Kontexts beibehalten werden. Wenn sie beispielsweise auf 2 gesetzt ist, werden die letzten 2 Tokens des Gesprächskontexts beibehalten. Das Beibehalten des Kontexts kann helfen, die Kontinuität eines Gesprächs aufrechtzuerhalten, kann jedoch die Fähigkeit verringern, auf neue Themen zu reagieren.",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Diese Option legt die maximale Anzahl von Tokens fest, die das Modell in seiner Antwort generieren kann. Eine Erhöhung dieses Limits ermöglicht es dem Modell, längere Antworten zu geben, kann jedoch auch die Wahrscheinlichkeit erhöhen, dass unhilfreicher oder irrelevanter Inhalt generiert wird. (Standard: 128)", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Diese Option löscht alle vorhandenen Dateien in der Sammlung und ersetzt sie durch neu hochgeladene Dateien.", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "Diese Option löscht alle vorhandenen Dateien in der Sammlung und ersetzt sie durch neu hochgeladene Dateien.",
"This response was generated by \"{{model}}\"": "Diese Antwort wurde von \"{{model}}\" generiert", "This response was generated by \"{{model}}\"": "Diese Antwort wurde von \"{{model}}\" generiert",
"This will delete": "Dies löscht", "This will delete": "Dies löscht",
@ -1005,7 +1021,7 @@
"This will reset the knowledge base and sync all files. Do you wish to continue?": "Dadurch wird die Wissensdatenbank zurückgesetzt und alle Dateien synchronisiert. Möchten Sie fortfahren?", "This will reset the knowledge base and sync all files. Do you wish to continue?": "Dadurch wird die Wissensdatenbank zurückgesetzt und alle Dateien synchronisiert. Möchten Sie fortfahren?",
"Thorough explanation": "Ausführliche Erklärung", "Thorough explanation": "Ausführliche Erklärung",
"Thought for {{DURATION}}": "Nachgedacht für {{DURATION}}", "Thought for {{DURATION}}": "Nachgedacht für {{DURATION}}",
"Thought for {{DURATION}} seconds": "", "Thought for {{DURATION}} seconds": "Nachgedacht für {{DURATION}} Sekunden",
"Tika": "Tika", "Tika": "Tika",
"Tika Server URL required.": "Tika-Server-URL erforderlich.", "Tika Server URL required.": "Tika-Server-URL erforderlich.",
"Tiktoken": "Tiktoken", "Tiktoken": "Tiktoken",
@ -1014,7 +1030,7 @@
"Title (e.g. Tell me a fun fact)": "Titel (z. B. Erzähl mir einen lustigen Fakt)", "Title (e.g. Tell me a fun fact)": "Titel (z. B. Erzähl mir einen lustigen Fakt)",
"Title Auto-Generation": "Unterhaltungstitel automatisch generieren", "Title Auto-Generation": "Unterhaltungstitel automatisch generieren",
"Title cannot be an empty string.": "Titel darf nicht leer sein.", "Title cannot be an empty string.": "Titel darf nicht leer sein.",
"Title Generation": "", "Title Generation": "Titelgenerierung",
"Title Generation Prompt": "Prompt für Titelgenerierung", "Title Generation Prompt": "Prompt für Titelgenerierung",
"TLS": "TLS", "TLS": "TLS",
"To access the available model names for downloading,": "Um auf die verfügbaren Modellnamen zuzugreifen,", "To access the available model names for downloading,": "Um auf die verfügbaren Modellnamen zuzugreifen,",
@ -1132,7 +1148,7 @@
"Why?": "Warum?", "Why?": "Warum?",
"Widescreen Mode": "Breitbildmodus", "Widescreen Mode": "Breitbildmodus",
"Won": "Gewonnen", "Won": "Gewonnen",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Funktioniert zusammen mit top-k. Ein höherer Wert (z.B. 0,95) führt zu vielfältigerem Text, während ein niedrigerer Wert (z.B. 0,5) fokussierteren und konservativeren Text erzeugt. (Standard: 0,9)", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Arbeitsbereich", "Workspace": "Arbeitsbereich",
"Workspace Permissions": "Arbeitsbereichsberechtigungen", "Workspace Permissions": "Arbeitsbereichsberechtigungen",
"Write": "Schreiben", "Write": "Schreiben",
@ -1142,6 +1158,7 @@
"Write your model template content here": "Schreiben Sie hier Ihren Modellvorlageninhalt", "Write your model template content here": "Schreiben Sie hier Ihren Modellvorlageninhalt",
"Yesterday": "Gestern", "Yesterday": "Gestern",
"You": "Sie", "You": "Sie",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Sie können nur mit maximal {{maxCount}} Datei(en) gleichzeitig chatten.", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Sie können nur mit maximal {{maxCount}} Datei(en) gleichzeitig chatten.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Personalisieren Sie Interaktionen mit LLMs, indem Sie über die Schaltfläche \"Verwalten\" Erinnerungen hinzufügen.", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Personalisieren Sie Interaktionen mit LLMs, indem Sie über die Schaltfläche \"Verwalten\" Erinnerungen hinzufügen.",
"You cannot upload an empty file.": "Sie können keine leere Datei hochladen.", "You cannot upload an empty file.": "Sie können keine leere Datei hochladen.",
@ -1155,6 +1172,6 @@
"Your account status is currently pending activation.": "Ihr Kontostatus ist derzeit ausstehend und wartet auf Aktivierung.", "Your account status is currently pending activation.": "Ihr Kontostatus ist derzeit ausstehend und wartet auf Aktivierung.",
"Your entire contribution will go directly to the plugin developer; Open WebUI does not take any percentage. However, the chosen funding platform might have its own fees.": "Ihr gesamter Beitrag geht direkt an den Plugin-Entwickler; Open WebUI behält keinen Prozentsatz ein. Die gewählte Finanzierungsplattform kann jedoch eigene Gebühren haben.", "Your entire contribution will go directly to the plugin developer; Open WebUI does not take any percentage. However, the chosen funding platform might have its own fees.": "Ihr gesamter Beitrag geht direkt an den Plugin-Entwickler; Open WebUI behält keinen Prozentsatz ein. Die gewählte Finanzierungsplattform kann jedoch eigene Gebühren haben.",
"Youtube": "YouTube", "Youtube": "YouTube",
"Youtube Language": "", "Youtube Language": "YouTube Sprache",
"Youtube Proxy URL": "" "Youtube Proxy URL": ""
} }
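A pattern visible throughout this diff is that whenever an English source string changes (here, dropping the trailing "(Default: …)" notes), the corresponding translation value is reset to an empty string and must be re-translated. A small helper like the following (a hypothetical script, not part of this repository) can list the keys in a locale file that still await translation:

```python
import json


def untranslated_keys(locale_json: str) -> list[str]:
    """Return the keys of a flat translation JSON whose value is an empty string."""
    entries = json.loads(locale_json)
    return [key for key, value in entries.items() if value == ""]


# Small inline sample mirroring a few de-DE entries from the diff above.
sample = json.dumps({
    "Theme": "Design",
    "Title Generation": "Titelgenerierung",
    "Youtube Proxy URL": "",
})

print(untranslated_keys(sample))  # the entries still needing a German translation
```

In practice one would run this over each `src/lib/i18n/locales/<lang>/translation.json` file after the English keys are regenerated.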


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(such e.g. `sh webui.sh --api`)",
"(latest)": "(much latest)",
"{{ models }}": "",
+"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "",
"{{webUIName}} Backend Required": "{{webUIName}} Backend Much Required",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "Advanced Parameters",
"Advanced Params": "",
+"All": "",
"All Documents": "",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "Such account exists?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "such assistant",
@@ -93,6 +95,7 @@
"Are you sure?": "Such certainty?",
"Arena Models": "",
"Artifacts": "",
+"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
+"Collapse": "",
"Collection": "Collection",
"Color": "",
"ComfyUI": "",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Connections",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "Content",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "",
"Created by": "",
"CSV Import": "",
+"Ctrl+Enter to Send": "",
"Current Model": "Current Model",
"Current Password": "Current Password",
"Custom": "Custom",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Enable New Bark Ups",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Enter Overlap of Chunks",
"Enter Chunk Size": "Enter Size of Chunk",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
"Enter language codes": "",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Enter model doge tag (e.g. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Enter Number of Steps (e.g. 50)",
+"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
+"Enter to Send": "",
"Enter Top K": "Enter Top Wow",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Enter URL (e.g. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
+"Expand": "",
"Experimental": "Much Experiment",
+"Explain": "",
+"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "Include `--api` flag when running stable-diffusion-webui",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "",
"Input commands": "Input commands",
"Install from Github URL": "",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
+"Logit Bias": "",
"Lost": "",
"LTR": "",
"Made by Open WebUI Community": "Made by Open WebUI Community",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "Permission denied when accessing microphone: {{error}}",
"Permissions": "",
+"Perplexity API Key": "",
"Personalization": "Personalization",
"Pin": "",
"Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "Record Bark",
"Redirecting you to Open WebUI Community": "Redirecting you to Open WebUI Community",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Set Voice so speak",
"Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Settings much settings",
"Settings saved successfully!": "Settings saved successfully! Very success!",
@@ -964,7 +980,7 @@
"System Prompt": "System Prompt much prompt",
"Tags Generation": "",
"Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Theme much theme",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "This ensures that your valuable conversations are securely saved to your backend database. Thank you! Much secure!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1148,7 @@
 "Why?": "",
 "Widescreen Mode": "",
 "Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
 "Workspace": "",
 "Workspace Permissions": "",
 "Write": "",
@@ -1142,6 +1158,7 @@
 "Write your model template content here": "",
 "Yesterday": "",
 "You": "",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
 "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
 "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
 "You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
 "(e.g. `sh webui.sh --api`)": "(π.χ. `sh webui.sh --api`)",
 "(latest)": "(τελευταίο)",
 "{{ models }}": "{{ models }}",
+"{{COUNT}} hidden lines": "",
 "{{COUNT}} Replies": "",
 "{{user}}'s Chats": "Συνομιλίες του {{user}}",
 "{{webUIName}} Backend Required": "{{webUIName}} Απαιτείται Backend",
@@ -51,6 +52,7 @@
 "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Οι διαχειριστές έχουν πρόσβαση σε όλα τα εργαλεία ανά πάσα στιγμή· οι χρήστες χρειάζονται εργαλεία ανά μοντέλο στον χώρο εργασίας.",
 "Advanced Parameters": "Προηγμένοι Παράμετροι",
 "Advanced Params": "Προηγμένα Παράμετροι",
+"All": "",
 "All Documents": "Όλα τα Έγγραφα",
 "All models deleted successfully": "Όλα τα μοντέλα διαγράφηκαν με επιτυχία",
 "Allow Chat Controls": "",
@@ -64,7 +66,7 @@
 "Allow Voice Interruption in Call": "Επιτρέπεται η Παύση Φωνής στην Κλήση",
 "Allowed Endpoints": "",
 "Already have an account?": "Έχετε ήδη λογαριασμό;",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Εναλλακτικό στο top_p, και στοχεύει στη διασφάλιση μιας ισορροπίας μεταξύ ποιότητας και ποικιλίας. Η παράμετρος p αντιπροσωπεύει την ελάχιστη πιθανότητα για ένα token να θεωρηθεί, σε σχέση με την πιθανότητα του πιο πιθανού token. Για παράδειγμα, με p=0.05 και το πιο πιθανό token να έχει πιθανότητα 0.9, τα logits με τιμή μικρότερη από 0.045 φιλτράρονται. (Προεπιλογή: 0.0)",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
 "Always": "",
 "Amazing": "Καταπληκτικό",
 "an assistant": "ένας βοηθός",
@@ -93,6 +95,7 @@
 "Are you sure?": "Είστε σίγουροι;",
 "Arena Models": "Μοντέλα Arena",
 "Artifacts": "Αρχεία",
+"Ask": "",
 "Ask a question": "Ρωτήστε μια ερώτηση",
 "Assistant": "Βοηθός",
 "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
 "Bing Search V7 Endpoint": "Τέλος Bing Search V7",
 "Bing Search V7 Subscription Key": "Κλειδί Συνδρομής Bing Search V7",
 "Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
 "Brave Search API Key": "Κλειδί API Brave Search",
 "By {{name}}": "Από {{name}}",
 "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
 "Code Interpreter": "",
 "Code Interpreter Engine": "",
 "Code Interpreter Prompt Template": "",
+"Collapse": "",
 "Collection": "Συλλογή",
 "Color": "Χρώμα",
 "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
 "Confirm your new password": "",
 "Connect to your own OpenAI compatible API endpoints.": "",
 "Connections": "Συνδέσεις",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
 "Contact Admin for WebUI Access": "Επικοινωνήστε με τον Διαχειριστή για Πρόσβαση στο WebUI",
 "Content": "Περιεχόμενο",
 "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "Συνέχεια με Email",
 "Continue with LDAP": "Συνέχεια με LDAP",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Έλεγχος πώς διαχωρίζεται το κείμενο του μηνύματος για αιτήματα TTS. Το 'Στίξη' διαχωρίζει σε προτάσεις, οι 'παραγράφοι' σε παραγράφους, και το 'κανένα' κρατά το μήνυμα ως μια αλυσίδα.",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "Έλεγχοι",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Διαχειρίζεται την ισορροπία μεταξύ συνεκτικότητας και ποικιλίας της εξόδου. Μια χαμηλότερη τιμή θα έχει ως αποτέλεσμα πιο εστιασμένο και συνεκτικό κείμενο. (Προεπιλογή: 5.0)",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "Αντιγράφηκε",
 "Copied shared chat URL to clipboard!": "Αντιγράφηκε το URL της κοινόχρηστης συνομιλίας στο πρόχειρο!",
 "Copied to clipboard": "Αντιγράφηκε στο πρόχειρο",
@@ -245,6 +250,7 @@
 "Created At": "Δημιουργήθηκε στις",
 "Created by": "Δημιουργήθηκε από",
 "CSV Import": "Εισαγωγή CSV",
+"Ctrl+Enter to Send": "",
 "Current Model": "Τρέχον Μοντέλο",
 "Current Password": "Τρέχων Κωδικός",
 "Custom": "Προσαρμοσμένο",
@@ -358,7 +364,7 @@
 "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Ενεργοποίηση Κλείδωσης Μνήμης (mlock) για την αποτροπή της ανταλλαγής δεδομένων του μοντέλου από τη μνήμη RAM. Αυτή η επιλογή κλειδώνει το σύνολο εργασίας των σελίδων του μοντέλου στη μνήμη RAM, διασφαλίζοντας ότι δεν θα ανταλλαχθούν στο δίσκο. Αυτό μπορεί να βοηθήσει στη διατήρηση της απόδοσης αποφεύγοντας σφάλματα σελίδων και διασφαλίζοντας γρήγορη πρόσβαση στα δεδομένα.",
 "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Ενεργοποίηση Χαρτογράφησης Μνήμης (mmap) για φόρτωση δεδομένων μοντέλου. Αυτή η επιλογή επιτρέπει στο σύστημα να χρησιμοποιεί αποθήκευση δίσκου ως επέκταση της μνήμης RAM, αντιμετωπίζοντας αρχεία δίσκου σαν να ήταν στη μνήμη RAM. Αυτό μπορεί να βελτιώσει την απόδοση του μοντέλου επιτρέποντας γρηγορότερη πρόσβαση στα δεδομένα. Ωστόσο, μπορεί να μην λειτουργεί σωστά με όλα τα συστήματα και να καταναλώνει σημαντικό χώρο στο δίσκο.",
 "Enable Message Rating": "Ενεργοποίηση Αξιολόγησης Μηνυμάτων",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Ενεργοποίηση δειγματοληψίας Mirostat για έλεγχο της περιπλοκότητας. (Προεπιλογή: 0, 0 = Απενεργοποιημένο, 1 = Mirostat, 2 = Mirostat 2.0)",
+"Enable Mirostat sampling for controlling perplexity.": "",
 "Enable New Sign Ups": "Ενεργοποίηση Νέων Εγγραφών",
 "Enabled": "Ενεργοποιημένο",
 "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Βεβαιωθείτε ότι το αρχείο CSV σας περιλαμβάνει 4 στήλες με αυτή τη σειρά: Όνομα, Email, Κωδικός, Ρόλος.",
@@ -375,6 +381,7 @@
 "Enter CFG Scale (e.g. 7.0)": "Εισάγετε το CFG Scale (π.χ. 7.0)",
 "Enter Chunk Overlap": "Εισάγετε την Επικάλυψη Τμημάτων",
 "Enter Chunk Size": "Εισάγετε το Μέγεθος Τμημάτων",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
 "Enter description": "Εισάγετε την περιγραφή",
 "Enter Document Intelligence Endpoint": "",
 "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
 "Enter Jupyter Token": "",
 "Enter Jupyter URL": "",
 "Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
 "Enter language codes": "Εισάγετε κωδικούς γλώσσας",
 "Enter Model ID": "Εισάγετε το ID Μοντέλου",
 "Enter model tag (e.g. {{modelTag}})": "Εισάγετε την ετικέτα μοντέλου (π.χ. {{modelTag}})",
 "Enter Mojeek Search API Key": "Εισάγετε το Κλειδί API Mojeek Search",
 "Enter Number of Steps (e.g. 50)": "Εισάγετε τον Αριθμό Βημάτων (π.χ. 50)",
+"Enter Perplexity API Key": "",
 "Enter proxy URL (e.g. https://user:password@host:port)": "",
 "Enter reasoning effort": "",
 "Enter Sampler (e.g. Euler a)": "Εισάγετε τον Sampler (π.χ. Euler a)",
@@ -417,6 +426,7 @@
 "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
 "Enter Tika Server URL": "Εισάγετε το URL διακομιστή Tika",
 "Enter timeout in seconds": "",
+"Enter to Send": "",
 "Enter Top K": "Εισάγετε το Top K",
 "Enter URL (e.g. http://127.0.0.1:7860/)": "Εισάγετε το URL (π.χ. http://127.0.0.1:7860/)",
 "Enter URL (e.g. http://localhost:11434)": "Εισάγετε το URL (π.χ. http://localhost:11434)",
@@ -440,9 +450,13 @@
 "Example: mail": "",
 "Example: ou=users,dc=foo,dc=example": "Παράδειγμα: ou=users,dc=foo,dc=example",
 "Example: sAMAccountName or uid or userPrincipalName": "Παράδειγμα: sAMAccountName ή uid ή userPrincipalName",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
 "Exclude": "Εξαίρεση",
 "Execute code for analysis": "",
+"Expand": "",
 "Experimental": "Πειραματικό",
+"Explain": "",
+"Explain this section to me in more detail": "",
 "Explore the cosmos": "Εξερευνήστε το σύμπαν",
 "Export": "Εξαγωγή",
 "Export All Archived Chats": "Εξαγωγή Όλων των Αρχειοθετημένων Συνομιλιών",
@@ -566,7 +580,7 @@
 "Include": "Συμπερίληψη",
 "Include `--api-auth` flag when running stable-diffusion-webui": "Συμπεριλάβετε το flag `--api-auth` όταν τρέχετε το stable-diffusion-webui",
 "Include `--api` flag when running stable-diffusion-webui": "Συμπεριλάβετε το flag `--api` όταν τρέχετε το stable-diffusion-webui",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Επηρεάζει πόσο γρήγορα ανταποκρίνεται ο αλγόριθμος στην ανατροφοδότηση από το παραγόμενο κείμενο. Μια χαμηλότερη ταχύτητα μάθησης θα έχει ως αποτέλεσμα πιο αργές προσαρμογές, ενώ μια υψηλότερη ταχύτητα μάθησης θα κάνει τον αλγόριθμο πιο ανταποκρινόμενο. (Προεπιλογή: 0.1)",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
 "Info": "Πληροφορίες",
 "Input commands": "Εισαγωγή εντολών",
 "Install from Github URL": "Εγκατάσταση από URL Github",
@@ -624,6 +638,7 @@
 "Local": "Τοπικό",
 "Local Models": "Τοπικά Μοντέλα",
 "Location access not allowed": "",
+"Logit Bias": "",
 "Lost": "Χαμένος",
 "LTR": "LTR",
 "Made by Open WebUI Community": "Δημιουργήθηκε από την Κοινότητα OpenWebUI",
@@ -764,6 +779,7 @@
 "Permission denied when accessing microphone": "Άρνηση δικαιώματος κατά την πρόσβαση σε μικρόφωνο",
 "Permission denied when accessing microphone: {{error}}": "Άρνηση δικαιώματος κατά την πρόσβαση σε μικρόφωνο: {{error}}",
 "Permissions": "Δικαιώματα",
+"Perplexity API Key": "",
 "Personalization": "Προσωποποίηση",
 "Pin": "Καρφίτσωμα",
 "Pinned": "Καρφιτσωμένο",
@@ -809,7 +825,7 @@
 "Reasoning Effort": "",
 "Record voice": "Εγγραφή φωνής",
 "Redirecting you to Open WebUI Community": "Μετακατεύθυνση στην Κοινότητα OpenWebUI",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Μειώνει την πιθανότητα δημιουργίας ανοησιών. Μια υψηλότερη τιμή (π.χ. 100) θα δώσει πιο ποικίλες απαντήσεις, ενώ μια χαμηλότερη τιμή (π.χ. 10) θα δημιουργήσει πιο συντηρητικές απαντήσεις. (Προεπιλογή: 40)",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
 "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Αναφέρεστε στον εαυτό σας ως \"User\" (π.χ., \"User μαθαίνει Ισπανικά\")",
 "References from": "Αναφορές από",
 "Refused when it shouldn't have": "Αρνήθηκε όταν δεν έπρεπε",
@@ -918,11 +934,11 @@
 "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Ορισμός του αριθμού των νημάτων εργασίας που χρησιμοποιούνται για υπολογισμούς. Αυτή η επιλογή ελέγχει πόσα νήματα χρησιμοποιούνται για την επεξεργασία των εισερχόμενων αιτημάτων ταυτόχρονα. Η αύξηση αυτής της τιμής μπορεί να βελτιώσει την απόδοση σε εργασίες υψηλής συγχρονισμένης φόρτωσης αλλά μπορεί επίσης να καταναλώσει περισσότερους πόρους CPU.",
 "Set Voice": "Ορισμός Φωνής",
 "Set whisper model": "Ορισμός μοντέλου whisper",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Ορίζει πόσο πίσω θα κοιτάξει το μοντέλο για να αποτρέψει την επανάληψη. (Προεπιλογή: 64, 0 = απενεργοποιημένο, -1 = num_ctx)",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Ορίζει τον τυχαίο σπόρο αριθμού που θα χρησιμοποιηθεί για τη δημιουργία. Ορισμός αυτού σε έναν συγκεκριμένο αριθμό θα κάνει το μοντέλο να δημιουργεί το ίδιο κείμενο για την ίδια προτροπή. (Προεπιλογή: τυχαίο)",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "Ορίζει το μέγεθος του παραθύρου πλαισίου που χρησιμοποιείται για τη δημιουργία του επόμενου token. (Προεπιλογή: 2048)",
+"Sets the size of the context window used to generate the next token.": "",
 "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Ορίζει τις σειρές παύσης που θα χρησιμοποιηθούν. Όταν εντοπιστεί αυτό το μοτίβο, το LLM θα σταματήσει να δημιουργεί κείμενο και θα επιστρέψει. Πολλαπλά μοτίβα παύσης μπορούν να οριστούν καθορίζοντας πολλαπλές ξεχωριστές παραμέτρους παύσης σε ένα αρχείο μοντέλου.",
 "Settings": "Ρυθμίσεις",
 "Settings saved successfully!": "Οι Ρυθμίσεις αποθηκεύτηκαν με επιτυχία!",
@@ -964,7 +980,7 @@
 "System Prompt": "Προτροπή Συστήματος",
 "Tags Generation": "",
 "Tags Generation Prompt": "Προτροπή Γενιάς Ετικετών",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Η δειγματοληψία Tail free χρησιμοποιείται για να μειώσει την επίδραση των λιγότερο πιθανών tokens από την έξοδο. Μια υψηλότερη τιμή (π.χ., 2.0) θα μειώσει την επίδραση περισσότερο, ενώ μια τιμή 1.0 απενεργοποιεί αυτή τη ρύθμιση. (προεπιλογή: 1)",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
 "Talk to model": "",
 "Tap to interrupt": "Πατήστε για παύση",
 "Tasks": "",
@@ -979,7 +995,7 @@
 "Thanks for your feedback!": "Ευχαριστούμε για την ανατροφοδότησή σας!",
 "The Application Account DN you bind with for search": "Το DN του Λογαριασμού Εφαρμογής που συνδέετε για αναζήτηση",
 "The base to search for users": "Η βάση για αναζήτηση χρηστών",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Το μέγεθος παρτίδας καθορίζει πόσες αιτήσεις κειμένου επεξεργάζονται μαζί ταυτόχρονα. Ένα μεγαλύτερο μέγεθος παρτίδας μπορεί να αυξήσει την απόδοση και την ταχύτητα του μοντέλου, αλλά απαιτεί επίσης περισσότερη μνήμη. (Προεπιλογή: 512)",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
 "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Οι προγραμματιστές πίσω από αυτό το plugin είναι παθιασμένοι εθελοντές από την κοινότητα. Αν βρείτε αυτό το plugin χρήσιμο, παρακαλώ σκεφτείτε να συνεισφέρετε στην ανάπτυξή του.",
 "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Η κατάταξη αξιολόγησης βασίζεται στο σύστημα βαθμολόγησης Elo και ενημερώνεται σε πραγματικό χρόνο.",
 "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
 "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Το μέγιστο μέγεθος αρχείου σε MB. Αν το μέγεθος του αρχείου υπερβαίνει αυτό το όριο, το αρχείο δεν θα ανεβεί.",
 "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Ο μέγιστος αριθμός αρχείων που μπορούν να χρησιμοποιηθούν ταυτόχρονα στη συνομιλία. Αν ο αριθμός των αρχείων υπερβαίνει αυτό το όριο, τα αρχεία δεν θα ανεβούν.",
 "The score should be a value between 0.0 (0%) and 1.0 (100%).": "Η βαθμολογία θα πρέπει να είναι μια τιμή μεταξύ 0.0 (0%) και 1.0 (100%).",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Η θερμοκρασία του μοντέλου. Η αύξηση της θερμοκρασίας θα κάνει το μοντέλο να απαντά πιο δημιουργικά. (Προεπιλογή: 0.8)",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
 "Theme": "Θέμα",
 "Thinking...": "Σκέφτομαι...",
 "This action cannot be undone. Do you wish to continue?": "Αυτή η ενέργεια δεν μπορεί να αναιρεθεί. Θέλετε να συνεχίσετε;",
 "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Αυτό διασφαλίζει ότι οι πολύτιμες συνομιλίες σας αποθηκεύονται με ασφάλεια στη βάση δεδομένων backend σας. Ευχαριστούμε!",
 "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Αυτή είναι μια πειραματική λειτουργία, μπορεί να μην λειτουργεί όπως αναμένεται και υπόκειται σε αλλαγές οποιαδήποτε στιγμή.",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Αυτή η επιλογή ελέγχει πόσα tokens διατηρούνται κατά την ανανέωση του πλαισίου. Για παράδειγμα, αν οριστεί σε 2, τα τελευταία 2 tokens του πλαισίου συνομιλίας θα διατηρηθούν. Η διατήρηση του πλαισίου μπορεί να βοηθήσει στη διατήρηση της συνέχειας μιας συνομιλίας, αλλά μπορεί να μειώσει την ικανότητα ανταπόκρισης σε νέα θέματα. (Προεπιλογή: 24)",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Αυτή η επιλογή ορίζει τον μέγιστο αριθμό tokens που μπορεί να δημιουργήσει το μοντέλο στην απάντησή του. Η αύξηση αυτού του ορίου επιτρέπει στο μοντέλο να παρέχει μεγαλύτερες απαντήσεις, αλλά μπορεί επίσης να αυξήσει την πιθανότητα δημιουργίας αχρήσιμου ή άσχετου περιεχομένου. (Προεπιλογή: 128)",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
 "This option will delete all existing files in the collection and replace them with newly uploaded files.": "Αυτή η επιλογή θα διαγράψει όλα τα υπάρχοντα αρχεία στη συλλογή και θα τα αντικαταστήσει με νέα ανεβασμένα αρχεία.",
"This response was generated by \"{{model}}\"": "Αυτή η απάντηση δημιουργήθηκε από \"{{model}}\"", "This response was generated by \"{{model}}\"": "Αυτή η απάντηση δημιουργήθηκε από \"{{model}}\"",
"This will delete": "Αυτό θα διαγράψει", "This will delete": "Αυτό θα διαγράψει",
@ -1132,7 +1148,7 @@
"Why?": "Γιατί?", "Why?": "Γιατί?",
"Widescreen Mode": "Λειτουργία Οθόνης Ευρείας", "Widescreen Mode": "Λειτουργία Οθόνης Ευρείας",
"Won": "Κέρδισε", "Won": "Κέρδισε",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Συνεργάζεται μαζί με top-k. Μια υψηλότερη τιμή (π.χ., 0.95) θα οδηγήσει σε πιο ποικίλο κείμενο, ενώ μια χαμηλότερη τιμή (π.χ., 0.5) θα δημιουργήσει πιο εστιασμένο και συντηρητικό κείμενο. (Προεπιλογή: 0.9)", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Χώρος Εργασίας", "Workspace": "Χώρος Εργασίας",
"Workspace Permissions": "Δικαιώματα Χώρου Εργασίας", "Workspace Permissions": "Δικαιώματα Χώρου Εργασίας",
"Write": "", "Write": "",
@ -1142,6 +1158,7 @@
"Write your model template content here": "Γράψτε το περιεχόμενο του προτύπου μοντέλου σας εδώ", "Write your model template content here": "Γράψτε το περιεχόμενο του προτύπου μοντέλου σας εδώ",
"Yesterday": "Εχθές", "Yesterday": "Εχθές",
"You": "Εσείς", "You": "Εσείς",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Μπορείτε να συνομιλήσετε μόνο με μέγιστο αριθμό {{maxCount}} αρχείου(-ων) ταυτόχρονα.", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Μπορείτε να συνομιλήσετε μόνο με μέγιστο αριθμό {{maxCount}} αρχείου(-ων) ταυτόχρονα.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Μπορείτε να προσωποποιήσετε τις αλληλεπιδράσεις σας με τα LLMs προσθέτοντας αναμνήσεις μέσω του κουμπιού 'Διαχείριση' παρακάτω, κάνοντάς τα πιο χρήσιμα και προσαρμοσμένα σε εσάς.", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Μπορείτε να προσωποποιήσετε τις αλληλεπιδράσεις σας με τα LLMs προσθέτοντας αναμνήσεις μέσω του κουμπιού 'Διαχείριση' παρακάτω, κάνοντάς τα πιο χρήσιμα και προσαρμοσμένα σε εσάς.",
"You cannot upload an empty file.": "Δεν μπορείτε να ανεβάσετε ένα κενό αρχείο.", "You cannot upload an empty file.": "Δεν μπορείτε να ανεβάσετε ένα κενό αρχείο.",
@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "",
"(latest)": "",
"{{ models }}": "",
+"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "",
"{{webUIName}} Backend Required": "",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "",
"Advanced Params": "",
+"All": "",
"All Documents": "",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "",
@@ -93,6 +95,7 @@
"Are you sure?": "",
"Arena Models": "",
"Artifacts": "",
+"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
+"Collapse": "",
"Collection": "",
"Color": "",
"ComfyUI": "",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "",
"Created by": "",
"CSV Import": "",
+"Ctrl+Enter to Send": "",
"Current Model": "",
"Current Password": "",
"Custom": "",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "",
"Enter Chunk Size": "",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
"Enter language codes": "",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "",
+"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
+"Enter to Send": "",
"Enter Top K": "",
"Enter URL (e.g. http://127.0.0.1:7860/)": "",
"Enter URL (e.g. http://localhost:11434)": "",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
+"Expand": "",
"Experimental": "",
+"Explain": "",
+"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "",
"Input commands": "",
"Install from Github URL": "",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
+"Logit Bias": "",
"Lost": "",
"LTR": "",
"Made by Open WebUI Community": "",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "",
"Permissions": "",
+"Perplexity API Key": "",
"Personalization": "",
"Pin": "",
"Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "",
"Redirecting you to Open WebUI Community": "",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "",
"Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "",
"Settings saved successfully!": "",
@@ -964,7 +980,7 @@
"System Prompt": "",
"Tags Generation": "",
"Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1148,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "",
"Yesterday": "",
"You": "",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",
@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "", "(e.g. `sh webui.sh --api`)": "",
"(latest)": "", "(latest)": "",
"{{ models }}": "", "{{ models }}": "",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "", "{{COUNT}} Replies": "",
"{{user}}'s Chats": "", "{{user}}'s Chats": "",
"{{webUIName}} Backend Required": "", "{{webUIName}} Backend Required": "",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "", "Advanced Parameters": "",
"Advanced Params": "", "Advanced Params": "",
"All": "",
"All Documents": "", "All Documents": "",
"All models deleted successfully": "", "All models deleted successfully": "",
"Allow Chat Controls": "", "Allow Chat Controls": "",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "", "Allow Voice Interruption in Call": "",
"Allowed Endpoints": "", "Allowed Endpoints": "",
"Already have an account?": "", "Already have an account?": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "", "Always": "",
"Amazing": "", "Amazing": "",
"an assistant": "", "an assistant": "",
@ -93,6 +95,7 @@
"Are you sure?": "", "Are you sure?": "",
"Arena Models": "", "Arena Models": "",
"Artifacts": "", "Artifacts": "",
"Ask": "",
"Ask a question": "", "Ask a question": "",
"Assistant": "", "Assistant": "",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "", "Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "", "Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "", "Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "", "Brave Search API Key": "",
"By {{name}}": "", "By {{name}}": "",
"Bypass Embedding and Retrieval": "", "Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "", "Code Interpreter": "",
"Code Interpreter Engine": "", "Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "", "Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "", "Collection": "",
"Color": "", "Color": "",
"ComfyUI": "", "ComfyUI": "",
@ -208,7 +213,7 @@
"Confirm your new password": "", "Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "", "Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "", "Connections": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "", "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "", "Contact Admin for WebUI Access": "",
"Content": "", "Content": "",
"Content Extraction Engine": "", "Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "", "Continue with Email": "",
"Continue with LDAP": "", "Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "", "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "", "Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "", "Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "", "Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "", "Copied": "",
"Copied shared chat URL to clipboard!": "", "Copied shared chat URL to clipboard!": "",
"Copied to clipboard": "", "Copied to clipboard": "",
@@ -245,6 +250,7 @@
 "Created At": "",
 "Created by": "",
 "CSV Import": "",
+"Ctrl+Enter to Send": "",
 "Current Model": "",
 "Current Password": "",
 "Custom": "",
@@ -358,7 +364,7 @@
 "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
 "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
 "Enable Message Rating": "",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
 "Enable New Sign Ups": "",
 "Enabled": "",
 "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "",
@@ -375,6 +381,7 @@
 "Enter CFG Scale (e.g. 7.0)": "",
 "Enter Chunk Overlap": "",
 "Enter Chunk Size": "",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
 "Enter description": "",
 "Enter Document Intelligence Endpoint": "",
 "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
 "Enter Jupyter Token": "",
 "Enter Jupyter URL": "",
 "Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
 "Enter language codes": "",
 "Enter Model ID": "",
 "Enter model tag (e.g. {{modelTag}})": "",
 "Enter Mojeek Search API Key": "",
 "Enter Number of Steps (e.g. 50)": "",
+"Enter Perplexity API Key": "",
 "Enter proxy URL (e.g. https://user:password@host:port)": "",
 "Enter reasoning effort": "",
 "Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
 "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
 "Enter Tika Server URL": "",
 "Enter timeout in seconds": "",
+"Enter to Send": "",
 "Enter Top K": "",
 "Enter URL (e.g. http://127.0.0.1:7860/)": "",
 "Enter URL (e.g. http://localhost:11434)": "",
@@ -440,9 +450,13 @@
 "Example: mail": "",
 "Example: ou=users,dc=foo,dc=example": "",
 "Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
 "Exclude": "",
 "Execute code for analysis": "",
+"Expand": "",
 "Experimental": "",
+"Explain": "",
+"Explain this section to me in more detail": "",
 "Explore the cosmos": "",
 "Export": "",
 "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
 "Include": "",
 "Include `--api-auth` flag when running stable-diffusion-webui": "",
 "Include `--api` flag when running stable-diffusion-webui": "",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
 "Info": "",
 "Input commands": "",
 "Install from Github URL": "",
@@ -624,6 +638,7 @@
 "Local": "",
 "Local Models": "",
 "Location access not allowed": "",
+"Logit Bias": "",
 "Lost": "",
 "LTR": "",
 "Made by Open WebUI Community": "",
@@ -764,6 +779,7 @@
 "Permission denied when accessing microphone": "",
 "Permission denied when accessing microphone: {{error}}": "",
 "Permissions": "",
+"Perplexity API Key": "",
 "Personalization": "",
 "Pin": "",
 "Pinned": "",
@@ -809,7 +825,7 @@
 "Reasoning Effort": "",
 "Record voice": "",
 "Redirecting you to Open WebUI Community": "",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
 "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
 "References from": "",
 "Refused when it shouldn't have": "",
@@ -918,11 +934,11 @@
 "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
 "Set Voice": "",
 "Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
+"Sets the size of the context window used to generate the next token.": "",
 "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
 "Settings": "",
 "Settings saved successfully!": "",
@@ -964,7 +980,7 @@
 "System Prompt": "",
 "Tags Generation": "",
 "Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
 "Talk to model": "",
 "Tap to interrupt": "",
 "Tasks": "",
@@ -979,7 +995,7 @@
 "Thanks for your feedback!": "",
 "The Application Account DN you bind with for search": "",
 "The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
 "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
 "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
 "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
 "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
 "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
 "The score should be a value between 0.0 (0%) and 1.0 (100%).": "",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
 "Theme": "",
 "Thinking...": "",
 "This action cannot be undone. Do you wish to continue?": "",
 "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "",
 "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
 "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
 "This response was generated by \"{{model}}\"": "",
 "This will delete": "",
@@ -1132,7 +1148,7 @@
 "Why?": "",
 "Widescreen Mode": "",
 "Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
 "Workspace": "",
 "Workspace Permissions": "",
 "Write": "",
@@ -1142,6 +1158,7 @@
 "Write your model template content here": "",
 "Yesterday": "",
 "You": "",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
 "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
 "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
 "You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
 "(e.g. `sh webui.sh --api`)": "(p.ej. `sh webui.sh --api`)",
 "(latest)": "(latest)",
 "{{ models }}": "{{ models }}",
+"{{COUNT}} hidden lines": "",
 "{{COUNT}} Replies": "{{COUNT}} Respuestas",
 "{{user}}'s Chats": "Chats de {{user}}",
 "{{webUIName}} Backend Required": "{{webUIName}} Servidor Requerido",
@@ -51,6 +52,7 @@
 "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Admins tienen acceso a todas las herramientas en todo momento; los usuarios necesitan herramientas asignadas por modelo en el espacio de trabajo.",
 "Advanced Parameters": "Parámetros Avanzados",
 "Advanced Params": "Parámetros avanzados",
+"All": "",
 "All Documents": "Todos los Documentos",
 "All models deleted successfully": "Todos los modelos han sido borrados",
 "Allow Chat Controls": "Permitir Control de Chats",
@@ -64,7 +66,7 @@
 "Allow Voice Interruption in Call": "Permitir interrupción de voz en llamada",
 "Allowed Endpoints": "Endpoints permitidos",
 "Already have an account?": "¿Ya tienes una cuenta?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Alternativa a top_p, y busca asegurar un equilibrio entre calidad y variedad. El parámetro p representa la probabilidad mínima para que un token sea considerado, en relación con la probabilidad del token más probable. Por ejemplo, con p=0.05 y el token más probable con una probabilidad de 0.9, los logits con un valor menor a 0.045 son filtrados. (Predeterminado: 0.0)",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
 "Always": "Siempre",
 "Amazing": "Sorprendente",
 "an assistant": "un asistente",
@@ -93,6 +95,7 @@
 "Are you sure?": "¿Está seguro?",
 "Arena Models": "Arena de Modelos",
 "Artifacts": "Artefactos",
+"Ask": "",
 "Ask a question": "Haz una pregunta",
 "Assistant": "Asistente",
 "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
 "Bing Search V7 Endpoint": "Endpoint de Bing Search V7",
 "Bing Search V7 Subscription Key": "Clave de suscripción de Bing Search V7",
 "Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
 "Brave Search API Key": "Clave de API de Brave Search",
 "By {{name}}": "Por {{name}}",
 "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
 "Code Interpreter": "Interprete de Código",
 "Code Interpreter Engine": "",
 "Code Interpreter Prompt Template": "",
+"Collapse": "",
 "Collection": "Colección",
 "Color": "Color",
 "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
 "Confirm your new password": "Confirmar tu nueva contraseña",
 "Connect to your own OpenAI compatible API endpoints.": "",
 "Connections": "Conexiones",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": " Restringe el esfuerzo en la razonamiento para los modelos de razonamiento. Solo aplicable a los modelos de razonamiento de proveedores específicos que admiten el esfuerzo de razonamiento. (Por defecto: medio)",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
 "Contact Admin for WebUI Access": "Contacta el administrador para obtener acceso al WebUI",
 "Content": "Contenido",
 "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "Continuar con email",
 "Continue with LDAP": "Continuar con LDAP",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Controlar como el texto del mensaje se divide para las solicitudes de TTS. 'Punctuation' divide en oraciones, 'paragraphs' divide en párrafos y 'none' mantiene el mensaje como una sola cadena.",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "Controles",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": " Controlar el equilibrio entre la coherencia y la diversidad de la salida. Un valor más bajo resultará en un texto más enfocado y coherente. (Por defecto: 5.0)",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "Copiado",
 "Copied shared chat URL to clipboard!": "¡URL de chat compartido copiado al portapapeles!",
 "Copied to clipboard": "Copiado al portapapeles",
@@ -245,6 +250,7 @@
 "Created At": "Creado en",
 "Created by": "Creado por",
 "CSV Import": "Importa un CSV",
+"Ctrl+Enter to Send": "",
 "Current Model": "Modelo Actual",
 "Current Password": "Contraseña Actual",
 "Custom": "Personalizado",
@@ -358,7 +364,7 @@
 "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Habilitar bloqueo de memoria (mlock) para evitar que los datos del modelo se intercambien fuera de la RAM. Esta opción bloquea el conjunto de páginas de trabajo del modelo en la RAM, asegurando que no se intercambiarán fuera del disco. Esto puede ayudar a mantener el rendimiento evitando fallos de página y asegurando un acceso rápido a los datos.",
 "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Habilitar asignación de memoria (mmap) para cargar datos del modelo. Esta opción permite al sistema usar el almacenamiento en disco como una extensión de la RAM al tratar los archivos en disco como si estuvieran en la RAM. Esto puede mejorar el rendimiento del modelo permitiendo un acceso más rápido a los datos. Sin embargo, puede no funcionar correctamente con todos los sistemas y puede consumir una cantidad significativa de espacio en disco.",
 "Enable Message Rating": "Habilitar la calificación de los mensajes",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Habilitar muestreo Mirostat para controlar la perplejidad. (Predeterminado: 0, 0 = Deshabilitado, 1 = Mirostat, 2 = Mirostat 2.0)",
+"Enable Mirostat sampling for controlling perplexity.": "",
 "Enable New Sign Ups": "Habilitar Nuevos Registros",
 "Enabled": "Activado",
 "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Asegúrese de que su archivo CSV incluya 4 columnas en este orden: Nombre, Correo Electrónico, Contraseña, Rol.",
@@ -375,6 +381,7 @@
 "Enter CFG Scale (e.g. 7.0)": "Ingresa la escala de CFG (p.ej., 7.0)",
 "Enter Chunk Overlap": "Ingresar superposición de fragmentos",
 "Enter Chunk Size": "Ingrese el tamaño del fragmento",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
 "Enter description": "Ingrese la descripción",
 "Enter Document Intelligence Endpoint": "",
 "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
 "Enter Jupyter Token": "",
 "Enter Jupyter URL": "",
 "Enter Kagi Search API Key": "Ingrese la clave API de Kagi Search",
+"Enter Key Behavior": "",
 "Enter language codes": "Ingrese códigos de idioma",
 "Enter Model ID": "Ingresa el ID del modelo",
 "Enter model tag (e.g. {{modelTag}})": "Ingrese la etiqueta del modelo (p.ej. {{modelTag}})",
 "Enter Mojeek Search API Key": "Ingrese la clave API de Mojeek Search",
 "Enter Number of Steps (e.g. 50)": "Ingrese el número de pasos (p.ej., 50)",
+"Enter Perplexity API Key": "",
 "Enter proxy URL (e.g. https://user:password@host:port)": "Ingrese la URL del proxy (p.ej. https://user:password@host:port)",
 "Enter reasoning effort": "Ingrese el esfuerzo de razonamiento",
 "Enter Sampler (e.g. Euler a)": "Ingrese el sampler (p.ej., Euler a)",
@@ -417,6 +426,7 @@
 "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Ingrese la URL pública de su WebUI. Esta URL se utilizará para generar enlaces en las notificaciones.",
 "Enter Tika Server URL": "Ingrese la URL del servidor Tika",
 "Enter timeout in seconds": "",
+"Enter to Send": "",
 "Enter Top K": "Ingrese el Top K",
 "Enter URL (e.g. http://127.0.0.1:7860/)": "Ingrese la URL (p.ej., http://127.0.0.1:7860/)",
 "Enter URL (e.g. http://localhost:11434)": "Ingrese la URL (p.ej., http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "Ejemplo: correo",
"Example: ou=users,dc=foo,dc=example": "Ejemplo: ou=usuarios,dc=foo,dc=ejemplo",
"Example: sAMAccountName or uid or userPrincipalName": "Ejemplo: sAMAccountName o uid o userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Excluir",
"Execute code for analysis": "Ejecutar código para análisis",
"Expand": "",
"Experimental": "Experimental",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Explora el cosmos",
"Export": "Exportar",
"Export All Archived Chats": "Exportar todos los chats archivados",
@@ -566,7 +580,7 @@
"Include": "Incluir",
"Include `--api-auth` flag when running stable-diffusion-webui": "Incluir el indicador `--api-auth` al ejecutar stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Incluir el indicador `--api` al ejecutar stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Influencia en qué medida el algoritmo responde rápidamente a la retroalimentación del texto generado. Una tasa de aprendizaje más baja resultará en ajustes más lentos, mientras que una tasa de aprendizaje más alta hará que el algoritmo sea más receptivo. (Predeterminado: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Información",
"Input commands": "Ingresar comandos",
"Install from Github URL": "Instalar desde la URL de Github",
@@ -624,6 +638,7 @@
"Local": "Local",
"Local Models": "Modelos locales",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Perdido",
"LTR": "LTR",
"Made by Open WebUI Community": "Hecho por la comunidad de OpenWebUI",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Permiso denegado al acceder al micrófono",
"Permission denied when accessing microphone: {{error}}": "Permiso denegado al acceder al micrófono: {{error}}",
"Permissions": "Permisos",
"Perplexity API Key": "",
"Personalization": "Personalización",
"Pin": "Fijar",
"Pinned": "Fijado",
@@ -809,7 +825,7 @@
"Reasoning Effort": "Esfuerzo de razonamiento",
"Record voice": "Grabar voz",
"Redirecting you to Open WebUI Community": "Redireccionándote a la comunidad OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Reduce la probabilidad de generar tonterías. Un valor más alto (p.ej. 100) dará respuestas más diversas, mientras que un valor más bajo (p.ej. 10) será más conservador. (Predeterminado: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Referirse a usted mismo como \"Usuario\" (por ejemplo, \"El usuario está aprendiendo Español\")",
"References from": "Referencias de",
"Refused when it shouldn't have": "Rechazado cuando no debería",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Establece el número de hilos de trabajo utilizados para el cálculo. Esta opción controla cuántos hilos se utilizan para procesar las solicitudes entrantes simultáneamente. Aumentar este valor puede mejorar el rendimiento bajo cargas de trabajo de alta concurrencia, pero también puede consumir más recursos de CPU.",
"Set Voice": "Establecer la voz",
"Set whisper model": "Establecer modelo de whisper",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Establece cuán lejos atrás debe mirar el modelo para evitar la repetición. (Predeterminado: 64, 0 = deshabilitado, -1 = num_ctx)",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Establece la semilla de número aleatorio a usar para la generación. Establecer esto en un número específico hará que el modelo genere el mismo texto para el mismo prompt. (Predeterminado: aleatorio)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Establece el tamaño de la ventana de contexto utilizada para generar el siguiente token. (Predeterminado: 2048)",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Establece las secuencias de parada a usar. Cuando se encuentre este patrón, el LLM dejará de generar texto y devolverá. Se pueden establecer varios patrones de parada especificando múltiples parámetros de parada separados en un archivo de modelo.",
"Settings": "Configuración",
"Settings saved successfully!": "¡Configuración guardada con éxito!",
@@ -964,7 +980,7 @@
"System Prompt": "Prompt del sistema",
"Tags Generation": "Generación de etiquetas",
"Tags Generation Prompt": "Prompt de generación de etiquetas",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "El muestreo libre de cola se utiliza para reducir el impacto de los tokens menos probables en la salida. Un valor más alto (p.ej., 2.0) reducirá el impacto más, mientras que un valor de 1.0 deshabilitará esta configuración. (predeterminado: 1)",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Toca para interrumpir",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "¡Gracias por tu retroalimentación!",
"The Application Account DN you bind with for search": "La cuenta de aplicación DN que vincula para la búsqueda",
"The base to search for users": "La base para buscar usuarios",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Los desarrolladores de este plugin son apasionados voluntarios de la comunidad. Si encuentras este plugin útil, por favor considere contribuir a su desarrollo.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "El tablero de líderes de evaluación se basa en el sistema de clasificación Elo y se actualiza en tiempo real.",
"The LDAP attribute that maps to the mail that users use to sign in.": "El atributo LDAP que se asigna al correo que los usuarios utilizan para iniciar sesión.",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "El tamaño máximo del archivo en MB. Si el tamaño del archivo supera este límite, el archivo no se subirá.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "El número máximo de archivos que se pueden utilizar a la vez en chat. Si este límite es superado, los archivos no se subirán.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "La puntuación debe ser un valor entre 0.0 (0%) y 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "La temperatura del modelo. Aumentar la temperatura hará que el modelo responda de manera más creativa. (Predeterminado: 0.8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "Pensando...",
"This action cannot be undone. Do you wish to continue?": "Esta acción no se puede deshacer. ¿Desea continuar?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Esto garantiza que sus valiosas conversaciones se guarden de forma segura en su base de datos en el backend. ¡Gracias!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Esta es una característica experimental que puede no funcionar como se esperaba y está sujeto a cambios en cualquier momento.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Esta opción controla cuántos tokens se conservan al actualizar el contexto. Por ejemplo, si se establece en 2, se conservarán los últimos 2 tokens del contexto de la conversación. Conservar el contexto puede ayudar a mantener la continuidad de una conversación, pero puede reducir la capacidad de responder a nuevos temas. (Predeterminado: 24)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Esta opción eliminará todos los archivos existentes en la colección y los reemplazará con nuevos archivos subidos.",
"This response was generated by \"{{model}}\"": "Esta respuesta fue generada por \"{{model}}\"",
"This will delete": "Esto eliminará",
@@ -1132,7 +1148,7 @@
"Why?": "¿Por qué?",
"Widescreen Mode": "Modo de pantalla ancha",
"Won": "Ganado",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Funciona junto con top-k. Un valor más alto (p.ej., 0.95) dará como resultado un texto más diverso, mientras que un valor más bajo (p.ej., 0.5) generará un texto más enfocado y conservador. (Predeterminado: 0.9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Espacio de trabajo",
"Workspace Permissions": "Permisos del espacio de trabajo",
"Write": "Escribir",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "Escribe el contenido de tu plantilla de modelo aquí",
"Yesterday": "Ayer",
"You": "Usted",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Solo puede chatear con un máximo de {{maxCount}} archivo(s) a la vez.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Puede personalizar sus interacciones con LLMs añadiendo memorias a través del botón 'Gestionar' debajo, haciendo que sean más útiles y personalizados para usted.",
"You cannot upload an empty file.": "No puede subir un archivo vacío.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(adib. `sh webui.sh --api`)",
"(latest)": "(azkena)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}-ren Txatak",
"{{webUIName}} Backend Required": "{{webUIName}} Backend-a Beharrezkoa",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administratzaileek tresna guztietarako sarbidea dute beti; erabiltzaileek lan-eremuan eredu bakoitzeko esleituak behar dituzte tresnak.",
"Advanced Parameters": "Parametro Aurreratuak",
"Advanced Params": "Parametro Aurreratuak",
"All": "",
"All Documents": "Dokumentu Guztiak",
"All models deleted successfully": "Eredu guztiak ongi ezabatu dira",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Baimendu Ahots Etena Deietan",
"Allowed Endpoints": "",
"Already have an account?": "Baduzu kontu bat?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "top_p-ren alternatiba, kalitate eta aniztasunaren arteko oreka bermatzea du helburu. p parametroak token bat kontuan hartzeko gutxieneko probabilitatea adierazten du, token probableenaren probabilitatearen arabera. Adibidez, p=0.05 balioarekin eta token probableenaren probabilitatea 0.9 denean, 0.045 baino balio txikiagoko logit-ak baztertzen dira. (Lehenetsia: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "Harrigarria",
"an assistant": "laguntzaile bat",
@@ -93,6 +95,7 @@
"Are you sure?": "Ziur zaude?",
"Arena Models": "Arena Ereduak",
"Artifacts": "Artefaktuak",
"Ask": "",
"Ask a question": "Egin galdera bat",
"Assistant": "Laguntzailea",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Bing Bilaketa V7 Endpointua",
"Bing Search V7 Subscription Key": "Bing Bilaketa V7 Harpidetza Gakoa",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Bilaketa API Gakoa",
"By {{name}}": "{{name}}-k",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Bilduma",
"Color": "Kolorea",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Konexioak",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Jarri harremanetan Administratzailearekin WebUI Sarbiderako",
"Content": "Edukia",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Jarraitu Posta Elektronikoarekin",
"Continue with LDAP": "Jarraitu LDAP-rekin",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Kontrolatu nola banatzen den mezuaren testua TTS eskaeretarako. 'Puntuazioa'-k esaldietan banatzen du, 'paragrafoak'-k paragrafoetan, eta 'bat ere ez'-ek mezua kate bakar gisa mantentzen du.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Kontrolak",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Irteeraren koherentzia eta aniztasunaren arteko oreka kontrolatzen du. Balio txikiagoak testu zentratuagoa eta koherenteagoa emango du. (Lehenetsia: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Kopiatuta",
"Copied shared chat URL to clipboard!": "Partekatutako txataren URLa arbelera kopiatu da!",
"Copied to clipboard": "Arbelera kopiatuta",
@@ -245,6 +250,7 @@
"Created At": "Sortze Data",
"Created by": "Sortzailea",
"CSV Import": "CSV Inportazioa",
"Ctrl+Enter to Send": "",
"Current Model": "Uneko Eredua",
"Current Password": "Uneko Pasahitza",
"Custom": "Pertsonalizatua",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Gaitu Memoria Blokeatzea (mlock) ereduaren datuak RAM memoriatik kanpo ez trukatzeko. Aukera honek ereduaren lan-orri multzoa RAMean blokatzen du, diskora ez direla trukatuko ziurtatuz. Honek errendimendua mantentzen lagun dezake, orri-hutsegiteak saihestuz eta datuen sarbide azkarra bermatuz.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Gaitu Memoria Mapaketa (mmap) ereduaren datuak kargatzeko. Aukera honek sistemari disko-biltegiratzea RAM memoriaren luzapen gisa erabiltzea ahalbidetzen dio, diskoko fitxategiak RAMean baleude bezala tratatuz. Honek ereduaren errendimendua hobe dezake, datuen sarbide azkarragoa ahalbidetuz. Hala ere, baliteke sistema guztietan behar bezala ez funtzionatzea eta disko-espazio handia kontsumitu dezake.",
"Enable Message Rating": "Gaitu Mezuen Balorazioa",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Gaitu Mirostat laginketa nahasmena kontrolatzeko. (Lehenetsia: 0, 0 = Desgaituta, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Gaitu Izena Emate Berriak",
"Enabled": "Gaituta",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Ziurtatu zure CSV fitxategiak 4 zutabe dituela ordena honetan: Izena, Posta elektronikoa, Pasahitza, Rola.",
@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "Sartu CFG Eskala (adib. 7.0)", "Enter CFG Scale (e.g. 7.0)": "Sartu CFG Eskala (adib. 7.0)",
"Enter Chunk Overlap": "Sartu Zatien Gainjartzea (chunk overlap)", "Enter Chunk Overlap": "Sartu Zatien Gainjartzea (chunk overlap)",
"Enter Chunk Size": "Sartu Zati Tamaina", "Enter Chunk Size": "Sartu Zati Tamaina",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Sartu deskribapena", "Enter description": "Sartu deskribapena",
"Enter Document Intelligence Endpoint": "", "Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "", "Enter Document Intelligence Key": "",
@ -389,11 +396,13 @@
"Enter Jupyter Token": "", "Enter Jupyter Token": "",
"Enter Jupyter URL": "", "Enter Jupyter URL": "",
"Enter Kagi Search API Key": "", "Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Sartu hizkuntza kodeak", "Enter language codes": "Sartu hizkuntza kodeak",
"Enter Model ID": "Sartu Eredu IDa", "Enter Model ID": "Sartu Eredu IDa",
"Enter model tag (e.g. {{modelTag}})": "Sartu eredu etiketa (adib. {{modelTag}})", "Enter model tag (e.g. {{modelTag}})": "Sartu eredu etiketa (adib. {{modelTag}})",
"Enter Mojeek Search API Key": "Sartu Mojeek Bilaketa API Gakoa", "Enter Mojeek Search API Key": "Sartu Mojeek Bilaketa API Gakoa",
"Enter Number of Steps (e.g. 50)": "Sartu Urrats Kopurua (adib. 50)", "Enter Number of Steps (e.g. 50)": "Sartu Urrats Kopurua (adib. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "", "Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "", "Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Sartu Sampler-a (adib. Euler a)", "Enter Sampler (e.g. Euler a)": "Sartu Sampler-a (adib. Euler a)",
@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "", "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Sartu Tika Zerbitzari URLa", "Enter Tika Server URL": "Sartu Tika Zerbitzari URLa",
"Enter timeout in seconds": "", "Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Sartu Top K", "Enter Top K": "Sartu Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Sartu URLa (adib. http://127.0.0.1:7860/)", "Enter URL (e.g. http://127.0.0.1:7860/)": "Sartu URLa (adib. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Sartu URLa (adib. http://localhost:11434)", "Enter URL (e.g. http://localhost:11434)": "Sartu URLa (adib. http://localhost:11434)",
@ -440,9 +450,13 @@
"Example: mail": "", "Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "Adibidea: ou=users,dc=foo,dc=example", "Example: ou=users,dc=foo,dc=example": "Adibidea: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Adibidea: sAMAccountName edo uid edo userPrincipalName", "Example: sAMAccountName or uid or userPrincipalName": "Adibidea: sAMAccountName edo uid edo userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Baztertu", "Exclude": "Baztertu",
"Execute code for analysis": "", "Execute code for analysis": "",
"Expand": "",
"Experimental": "Esperimentala", "Experimental": "Esperimentala",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Esploratu kosmosa", "Explore the cosmos": "Esploratu kosmosa",
"Export": "Esportatu", "Export": "Esportatu",
"Export All Archived Chats": "Esportatu Artxibatutako Txat Guztiak", "Export All Archived Chats": "Esportatu Artxibatutako Txat Guztiak",
@ -566,7 +580,7 @@
"Include": "Sartu", "Include": "Sartu",
"Include `--api-auth` flag when running stable-diffusion-webui": "Sartu `--api-auth` bandera stable-diffusion-webui exekutatzean", "Include `--api-auth` flag when running stable-diffusion-webui": "Sartu `--api-auth` bandera stable-diffusion-webui exekutatzean",
"Include `--api` flag when running stable-diffusion-webui": "Sartu `--api` bandera stable-diffusion-webui exekutatzean", "Include `--api` flag when running stable-diffusion-webui": "Sartu `--api` bandera stable-diffusion-webui exekutatzean",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Algoritmoak sortutako testutik jasotako feedbackari erantzuteko abiadura zehazten du. Ikasketa-tasa baxuago batek doikuntza motelagoak eragingo ditu, eta ikasketa-tasa altuago batek algoritmoaren erantzuna bizkorragoa egingo du. (Lehenetsia: 0.1)", "Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Informazioa", "Info": "Informazioa",
"Input commands": "Sartu komandoak", "Input commands": "Sartu komandoak",
"Install from Github URL": "Instalatu Github URLtik", "Install from Github URL": "Instalatu Github URLtik",
@ -624,6 +638,7 @@
"Local": "Lokala", "Local": "Lokala",
"Local Models": "Modelo lokalak", "Local Models": "Modelo lokalak",
"Location access not allowed": "", "Location access not allowed": "",
"Logit Bias": "",
"Lost": "Galduta", "Lost": "Galduta",
"LTR": "LTR", "LTR": "LTR",
"Made by Open WebUI Community": "OpenWebUI Komunitateak egina", "Made by Open WebUI Community": "OpenWebUI Komunitateak egina",
@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Baimena ukatu da mikrofonoa atzitzean", "Permission denied when accessing microphone": "Baimena ukatu da mikrofonoa atzitzean",
"Permission denied when accessing microphone: {{error}}": "Baimena ukatu da mikrofonoa atzitzean: {{error}}", "Permission denied when accessing microphone: {{error}}": "Baimena ukatu da mikrofonoa atzitzean: {{error}}",
"Permissions": "Baimenak", "Permissions": "Baimenak",
"Perplexity API Key": "",
"Personalization": "Pertsonalizazioa", "Personalization": "Pertsonalizazioa",
"Pin": "Ainguratu", "Pin": "Ainguratu",
"Pinned": "Ainguratuta", "Pinned": "Ainguratuta",
@ -809,7 +825,7 @@
"Reasoning Effort": "", "Reasoning Effort": "",
"Record voice": "Grabatu ahotsa", "Record voice": "Grabatu ahotsa",
"Redirecting you to Open WebUI Community": "OpenWebUI Komunitatera berbideratzen", "Redirecting you to Open WebUI Community": "OpenWebUI Komunitatera berbideratzen",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Zentzugabekeriak sortzeko probabilitatea murrizten du. Balio altuago batek (adib. 100) erantzun anitzagoak emango ditu, balio baxuago batek (adib. 10) kontserbadoreagoa izango den bitartean. (Lehenetsia: 40)", "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Egin erreferentzia zure buruari \"Erabiltzaile\" gisa (adib., \"Erabiltzailea gaztelania ikasten ari da\")", "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Egin erreferentzia zure buruari \"Erabiltzaile\" gisa (adib., \"Erabiltzailea gaztelania ikasten ari da\")",
"References from": "Erreferentziak hemendik", "References from": "Erreferentziak hemendik",
"Refused when it shouldn't have": "Ukatu duenean ukatu behar ez zuenean", "Refused when it shouldn't have": "Ukatu duenean ukatu behar ez zuenean",
@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Ezarri kalkulurako erabilitako langile harien kopurua. Aukera honek kontrolatzen du zenbat hari erabiltzen diren sarrerako eskaerak aldi berean prozesatzeko. Balio hau handitzeak errendimendua hobetu dezake konkurrentzia altuko lan-kargetan, baina CPU baliabide gehiago kontsumitu ditzake.", "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Ezarri kalkulurako erabilitako langile harien kopurua. Aukera honek kontrolatzen du zenbat hari erabiltzen diren sarrerako eskaerak aldi berean prozesatzeko. Balio hau handitzeak errendimendua hobetu dezake konkurrentzia altuko lan-kargetan, baina CPU baliabide gehiago kontsumitu ditzake.",
"Set Voice": "Ezarri ahotsa", "Set Voice": "Ezarri ahotsa",
"Set whisper model": "Ezarri whisper modeloa", "Set whisper model": "Ezarri whisper modeloa",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "", "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "", "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Ezartzen du modeloak zenbat atzera begiratu behar duen errepikapenak saihesteko. (Lehenetsia: 64, 0 = desgaituta, -1 = num_ctx)", "Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Ezartzen du sorkuntzarako erabiliko den ausazko zenbakien hazia. Hau zenbaki zehatz batera ezartzeak modeloak testu bera sortzea eragingo du prompt bererako. (Lehenetsia: ausazkoa)", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Ezartzen du hurrengo tokena sortzeko erabilitako testuinguru leihoaren tamaina. (Lehenetsia: 2048)", "Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Ezartzen ditu erabiliko diren gelditzeko sekuentziak. Patroi hau aurkitzen denean, LLMak testua sortzeari utziko dio eta itzuli egingo da. Gelditzeko patroi anitz ezar daitezke modelfile batean gelditzeko parametro anitz zehaztuz.", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Ezartzen ditu erabiliko diren gelditzeko sekuentziak. Patroi hau aurkitzen denean, LLMak testua sortzeari utziko dio eta itzuli egingo da. Gelditzeko patroi anitz ezar daitezke modelfile batean gelditzeko parametro anitz zehaztuz.",
"Settings": "Ezarpenak", "Settings": "Ezarpenak",
"Settings saved successfully!": "Ezarpenak ongi gorde dira!", "Settings saved successfully!": "Ezarpenak ongi gorde dira!",
@ -964,7 +980,7 @@
"System Prompt": "Sistema prompta", "System Prompt": "Sistema prompta",
"Tags Generation": "", "Tags Generation": "",
"Tags Generation Prompt": "Etiketa sortzeko prompta", "Tags Generation Prompt": "Etiketa sortzeko prompta",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Isats-libre laginketa erabiltzen da irteran probabilitate txikiagoko tokenen eragina murrizteko. Balio altuago batek (adib., 2.0) eragina gehiago murriztuko du, 1.0 balioak ezarpen hau desgaitzen duen bitartean. (lehenetsia: 1)", "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "", "Talk to model": "",
"Tap to interrupt": "Ukitu eteteko", "Tap to interrupt": "Ukitu eteteko",
"Tasks": "", "Tasks": "",
@ -979,7 +995,7 @@
"Thanks for your feedback!": "Eskerrik asko zure iritzia emateagatik!", "Thanks for your feedback!": "Eskerrik asko zure iritzia emateagatik!",
"The Application Account DN you bind with for search": "Bilaketarako lotzen duzun aplikazio kontuaren DN-a", "The Application Account DN you bind with for search": "Bilaketarako lotzen duzun aplikazio kontuaren DN-a",
"The base to search for users": "Erabiltzaileak bilatzeko oinarria", "The base to search for users": "Erabiltzaileak bilatzeko oinarria",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Sorta tamainak zehazten du zenbat testu eskaera prozesatzen diren batera aldi berean. Sorta tamaina handiago batek modeloaren errendimendua eta abiadura handitu ditzake, baina memoria gehiago behar du. (Lehenetsia: 512)", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Plugin honen atzean dauden garatzaileak komunitateko boluntario sutsuak dira. Plugin hau baliagarria iruditzen bazaizu, mesedez kontuan hartu bere garapenean laguntzea.", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Plugin honen atzean dauden garatzaileak komunitateko boluntario sutsuak dira. Plugin hau baliagarria iruditzen bazaizu, mesedez kontuan hartu bere garapenean laguntzea.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Ebaluazio sailkapena Elo sailkapen sisteman oinarritzen da eta denbora errealean eguneratzen da.", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Ebaluazio sailkapena Elo sailkapen sisteman oinarritzen da eta denbora errealean eguneratzen da.",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Fitxategiaren gehienezko tamaina MB-tan. Fitxategiaren tamainak muga hau gainditzen badu, fitxategia ez da kargatuko.", "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Fitxategiaren gehienezko tamaina MB-tan. Fitxategiaren tamainak muga hau gainditzen badu, fitxategia ez da kargatuko.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Txatean aldi berean erabili daitezkeen fitxategien gehienezko kopurua. Fitxategi kopuruak muga hau gainditzen badu, fitxategiak ez dira kargatuko.", "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Txatean aldi berean erabili daitezkeen fitxategien gehienezko kopurua. Fitxategi kopuruak muga hau gainditzen badu, fitxategiak ez dira kargatuko.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Puntuazioa 0.0 (0%) eta 1.0 (100%) arteko balio bat izan behar da.", "The score should be a value between 0.0 (0%) and 1.0 (100%).": "Puntuazioa 0.0 (0%) eta 1.0 (100%) arteko balio bat izan behar da.",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Modeloaren tenperatura. Tenperatura handitzeak modeloaren erantzunak sortzaileagoak izatea eragingo du. (Lehenetsia: 0.8)", "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Gaia", "Theme": "Gaia",
"Thinking...": "Pentsatzen...", "Thinking...": "Pentsatzen...",
"This action cannot be undone. Do you wish to continue?": "Ekintza hau ezin da desegin. Jarraitu nahi duzu?", "This action cannot be undone. Do you wish to continue?": "Ekintza hau ezin da desegin. Jarraitu nahi duzu?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Honek zure elkarrizketa baliotsuak modu seguruan zure backend datu-basean gordeko direla ziurtatzen du. Eskerrik asko!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Honek zure elkarrizketa baliotsuak modu seguruan zure backend datu-basean gordeko direla ziurtatzen du. Eskerrik asko!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Hau funtzionalitate esperimental bat da, baliteke espero bezala ez funtzionatzea eta edozein unetan aldaketak izatea.", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Hau funtzionalitate esperimental bat da, baliteke espero bezala ez funtzionatzea eta edozein unetan aldaketak izatea.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Aukera honek kontrolatzen du zenbat token mantentzen diren testuingurua freskatzean. Adibidez, 2-ra ezarrita badago, elkarrizketaren testuinguruko azken 2 tokenak mantenduko dira. Testuingurua mantentzeak elkarrizketaren jarraitutasuna mantentzen lagun dezake, baina gai berriei erantzuteko gaitasuna murriztu dezake. (Lehenetsia: 24)", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Aukera honek ereduak bere erantzunean sor dezakeen token kopuru maximoa ezartzen du. Muga hau handitzeak ereduari erantzun luzeagoak emateko aukera ematen dio, baina eduki ez-erabilgarri edo ez-egokia sortzeko probabilitatea ere handitu dezake. (Lehenetsia: 128)", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Aukera honek bilduman dauden fitxategi guztiak ezabatuko ditu eta berriki kargatutako fitxategiekin ordezkatuko ditu.", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "Aukera honek bilduman dauden fitxategi guztiak ezabatuko ditu eta berriki kargatutako fitxategiekin ordezkatuko ditu.",
"This response was generated by \"{{model}}\"": "Erantzun hau \"{{model}}\" modeloak sortu du", "This response was generated by \"{{model}}\"": "Erantzun hau \"{{model}}\" modeloak sortu du",
"This will delete": "Honek ezabatuko du", "This will delete": "Honek ezabatuko du",
@ -1132,7 +1148,7 @@
"Why?": "Zergatik?", "Why?": "Zergatik?",
"Widescreen Mode": "Pantaila zabaleko modua", "Widescreen Mode": "Pantaila zabaleko modua",
"Won": "Irabazi du", "Won": "Irabazi du",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Top-k-rekin batera lan egiten du. Balio altuago batek (adib., 0.95) testu anitzagoa sortuko du, balio baxuago batek (adib., 0.5) testu fokatu eta kontserbadoreagoa sortuko duen bitartean. (Lehenetsia: 0.9)", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Lan-eremua", "Workspace": "Lan-eremua",
"Workspace Permissions": "Lan-eremuaren baimenak", "Workspace Permissions": "Lan-eremuaren baimenak",
"Write": "", "Write": "",
@ -1142,6 +1158,7 @@
"Write your model template content here": "Idatzi hemen zure modelo txantiloi edukia", "Write your model template content here": "Idatzi hemen zure modelo txantiloi edukia",
"Yesterday": "Atzo", "Yesterday": "Atzo",
"You": "Zu", "You": "Zu",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Gehienez {{maxCount}} fitxategirekin txateatu dezakezu aldi berean.", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Gehienez {{maxCount}} fitxategirekin txateatu dezakezu aldi berean.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "LLMekin dituzun interakzioak pertsonalizatu ditzakezu memoriak gehituz beheko 'Kudeatu' botoiaren bidez, lagungarriagoak eta zuretzat egokituagoak eginez.", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "LLMekin dituzun interakzioak pertsonalizatu ditzakezu memoriak gehituz beheko 'Kudeatu' botoiaren bidez, lagungarriagoak eta zuretzat egokituagoak eginez.",
"You cannot upload an empty file.": "Ezin duzu fitxategi huts bat kargatu.", "You cannot upload an empty file.": "Ezin duzu fitxategi huts bat kargatu.",

View file

@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(e.g. `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(e.g. `sh webui.sh --api`)",
"(latest)": "(آخرین)", "(latest)": "(آخرین)",
"{{ models }}": "{{ models }}", "{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "", "{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} گفتگوهای", "{{user}}'s Chats": "{{user}} گفتگوهای",
"{{webUIName}} Backend Required": "بکند {{webUIName}} نیاز است.", "{{webUIName}} Backend Required": "بکند {{webUIName}} نیاز است.",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "پارامترهای پیشرفته", "Advanced Parameters": "پارامترهای پیشرفته",
"Advanced Params": "پارام\u200cهای پیشرفته", "Advanced Params": "پارام\u200cهای پیشرفته",
"All": "",
"All Documents": "همهٔ سند\u200cها", "All Documents": "همهٔ سند\u200cها",
"All models deleted successfully": "", "All models deleted successfully": "",
"Allow Chat Controls": "", "Allow Chat Controls": "",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "", "Allow Voice Interruption in Call": "",
"Allowed Endpoints": "", "Allowed Endpoints": "",
"Already have an account?": "از قبل حساب کاربری دارید؟", "Already have an account?": "از قبل حساب کاربری دارید؟",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "", "Always": "",
"Amazing": "", "Amazing": "",
"an assistant": "یک دستیار", "an assistant": "یک دستیار",
@ -93,6 +95,7 @@
"Are you sure?": "مطمئنید؟", "Are you sure?": "مطمئنید؟",
"Arena Models": "", "Arena Models": "",
"Artifacts": "", "Artifacts": "",
"Ask": "",
"Ask a question": "سوالی بپرسید", "Ask a question": "سوالی بپرسید",
"Assistant": "دستیار", "Assistant": "دستیار",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "", "Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "", "Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "", "Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "کلید API جستجوی شجاع", "Brave Search API Key": "کلید API جستجوی شجاع",
"By {{name}}": "", "By {{name}}": "",
"Bypass Embedding and Retrieval": "", "Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "", "Code Interpreter": "",
"Code Interpreter Engine": "", "Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "", "Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "مجموعه", "Collection": "مجموعه",
"Color": "", "Color": "",
"ComfyUI": "کومیوآی", "ComfyUI": "کومیوآی",
@ -208,7 +213,7 @@
"Confirm your new password": "", "Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "", "Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "ارتباطات", "Connections": "ارتباطات",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "", "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "برای دسترسی به WebUI با مدیر تماس بگیرید", "Contact Admin for WebUI Access": "برای دسترسی به WebUI با مدیر تماس بگیرید",
"Content": "محتوا", "Content": "محتوا",
"Content Extraction Engine": "", "Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "", "Continue with Email": "",
"Continue with LDAP": "", "Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "", "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "", "Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "کنترل\u200cها", "Controls": "کنترل\u200cها",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "", "Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "کپی شد", "Copied": "کپی شد",
"Copied shared chat URL to clipboard!": "URL چت به کلیپ بورد کپی شد!", "Copied shared chat URL to clipboard!": "URL چت به کلیپ بورد کپی شد!",
"Copied to clipboard": "به بریده\u200cدان کپی\u200cشد", "Copied to clipboard": "به بریده\u200cدان کپی\u200cشد",
@ -245,6 +250,7 @@
"Created At": "ایجاد شده در", "Created At": "ایجاد شده در",
"Created by": "ایجاد شده توسط", "Created by": "ایجاد شده توسط",
"CSV Import": "درون\u200cریزی CSV", "CSV Import": "درون\u200cریزی CSV",
"Ctrl+Enter to Send": "",
"Current Model": "مدل فعلی", "Current Model": "مدل فعلی",
"Current Password": "رمز عبور فعلی", "Current Password": "رمز عبور فعلی",
"Custom": "دلخواه", "Custom": "دلخواه",
@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "", "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "", "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "", "Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "", "Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "فعال کردن ثبت نام\u200cهای جدید", "Enable New Sign Ups": "فعال کردن ثبت نام\u200cهای جدید",
"Enabled": "", "Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "اطمینان حاصل کنید که فایل CSV شما شامل چهار ستون در این ترتیب است: نام، ایمیل، رمز عبور، نقش.", "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "اطمینان حاصل کنید که فایل CSV شما شامل چهار ستون در این ترتیب است: نام، ایمیل، رمز عبور، نقش.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "مقدار Chunk Overlap را وارد کنید",
"Enter Chunk Size": "مقدار Chunk Size را وارد کنید",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "کد زبان را وارد کنید",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "تگ مدل را وارد کنید (مثلا {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "تعداد گام ها را وارد کنید (مثال: 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "مقدار Top K را وارد کنید",
"Enter URL (e.g. http://127.0.0.1:7860/)": "مقدار URL را وارد کنید (مثال http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "مقدار URL را وارد کنید (مثال http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "آزمایشی",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "برون\u200cریزی",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "شامل",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "فلگ `--api` را هنکام اجرای stable-diffusion-webui استفاده کنید.",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "اطلاعات",
"Input commands": "ورودی دستورات",
"Install from Github URL": "نصب از ادرس Github",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "ساخته شده توسط OpenWebUI Community",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "هنگام دسترسی به میکروفون، اجازه داده نشد: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "شخصی سازی",
"Pin": "",
"Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "ضبط صدا",
"Redirecting you to Open WebUI Community": "در حال هدایت به OpenWebUI Community",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "رد شده زمانی که باید نباشد",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "تنظیم صدا",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "تنظیمات",
"Settings saved successfully!": "تنظیمات با موفقیت ذخیره شد!",
@@ -964,7 +980,7 @@
"System Prompt": "پرامپت سیستم",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "با تشکر از بازخورد شما!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "امتیاز باید یک مقدار بین 0.0 (0%) و 1.0 (100%) باشد.",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "پوسته",
"Thinking...": "در حال فکر...",
"This action cannot be undone. Do you wish to continue?": "این اقدام قابل بازگردانی نیست. برای ادامه اطمینان دارید؟",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "این تضمین می کند که مکالمات ارزشمند شما به طور ایمن در پایگاه داده بکند ذخیره می شود. تشکر!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1148,7 @@
"Why?": "",
"Widescreen Mode": "حالت صفحهٔ عریض",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "محیط کار",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "",
"Yesterday": "دیروز",
"You": "شما",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "شما در هر زمان نهایتا می\u200cتوانید با {{maxCount}} پرونده گفتگو کنید.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(esim. `sh webui.sh --api`)",
"(latest)": "(uusin)",
"{{ models }}": "{{ mallit }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} vastausta",
"{{user}}'s Chats": "{{user}}:n keskustelut",
"{{webUIName}} Backend Required": "{{webUIName}}-backend vaaditaan",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Ylläpitäjillä on pääsy kaikkiin työkaluihin koko ajan; käyttäjät tarvitsevat työkaluja mallille määritettynä työtilassa.",
"Advanced Parameters": "Edistyneet parametrit",
"Advanced Params": "Edistyneet parametrit",
"All": "",
"All Documents": "Kaikki asiakirjat",
"All models deleted successfully": "Kaikki mallit poistettu onnistuneesti",
"Allow Chat Controls": "Salli keskustelujen hallinta",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Salli äänen keskeytys puhelussa",
"Allowed Endpoints": "Hyväksytyt päätepisteet",
"Already have an account?": "Onko sinulla jo tili?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Vaihtoehto top_p:lle, jolla pyritään varmistamaan laadun ja monipuolisuuden tasapaino. Parametri p edustaa pienintä todennäköisyyttä, jolla token otetaan huomioon suhteessa todennäköisimpään tokeniin. Esimerkiksi p=0.05 ja todennäköisin token todennäköisyydellä 0.9, arvoltaan alle 0.045 olevat logit suodatetaan pois. (Oletus: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "Aina",
"Amazing": "Hämmästyttävä",
"an assistant": "avustaja",
@@ -93,6 +95,7 @@
"Are you sure?": "Oletko varma?",
"Arena Models": "Arena-mallit",
"Artifacts": "Artefaktit",
"Ask": "",
"Ask a question": "Kysyä kysymys",
"Assistant": "Avustaja",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Bing Search V7 -päätepisteen osoite",
"Bing Search V7 Subscription Key": "Bing Search V7 -tilauskäyttäjäavain",
"Bocha Search API Key": "Bocha Search API -avain",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API -avain",
"By {{name}}": "Tekijä {{name}}",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "Ohjelmatulkki",
"Code Interpreter Engine": "Ohjelmatulkin moottori",
"Code Interpreter Prompt Template": "Ohjelmatulkin kehotemalli",
"Collapse": "",
"Collection": "Kokoelma",
"Color": "Väri",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "Vahvista uusi salasanasi",
"Connect to your own OpenAI compatible API endpoints.": "Yhdistä oma OpenAI yhteensopiva API päätepiste.",
"Connections": "Yhteydet",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Ota yhteyttä ylläpitäjään WebUI-käyttöä varten",
"Content": "Sisältö",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Jatka sähköpostilla",
"Continue with LDAP": "Jatka LDAP:illa",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Säädä, miten viestin teksti jaetaan puhesynteesipyyntöjä varten. 'Välimerkit' jakaa lauseisiin, 'kappaleet' jakaa kappaleisiin ja 'ei mitään' pitää viestin yhtenä merkkijonona.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Ohjaimet",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Säätelee tulosteen yhtenäisyyden ja monimuotoisuuden välistä tasapainoa. Alhaisempi arvo tuottaa keskittyneempää ja yhtenäisempää tekstiä. (Oletus: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Kopioitu",
"Copied shared chat URL to clipboard!": "Jaettu keskustelulinkki kopioitu leikepöydälle!",
"Copied to clipboard": "Kopioitu leikepöydälle",
@@ -245,6 +250,7 @@
"Created At": "Luotu",
"Created by": "Luonut",
"CSV Import": "CSV-tuonti",
"Ctrl+Enter to Send": "",
"Current Model": "Nykyinen malli",
"Current Password": "Nykyinen salasana",
"Custom": "Mukautettu",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Ota Memory Locking (mlock) käyttöön estääksesi mallidatan vaihtamisen pois RAM-muistista. Tämä lukitsee mallin työsivut RAM-muistiin, varmistaen että niitä ei vaihdeta levylle. Tämä voi parantaa suorituskykyä välttämällä sivuvikoja ja varmistamalla nopean tietojen käytön.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Ota Memory Mapping (mmap) käyttöön ladataksesi mallidataa. Tämä vaihtoehto sallii järjestelmän käyttää levytilaa RAM-laajennuksena käsittelemällä levytiedostoja kuin ne olisivat RAM-muistissa. Tämä voi parantaa mallin suorituskykyä sallimalla nopeamman tietojen käytön. Kuitenkin se ei välttämättä toimi oikein kaikissa järjestelmissä ja voi kuluttaa huomattavasti levytilaa.",
"Enable Message Rating": "Ota viestiarviointi käyttöön",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Ota Mirostat-näytteenotto käyttöön hallinnan monimerkityksellisyydelle. (Oletus: 0, 0 = Ei käytössä, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Salli uudet rekisteröitymiset",
"Enabled": "Käytössä",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Varmista, että CSV-tiedostossasi on 4 saraketta tässä järjestyksessä: Nimi, Sähköposti, Salasana, Rooli.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "Kirjoita CFG-mitta (esim. 7.0)",
"Enter Chunk Overlap": "Syötä osien päällekkäisyys",
"Enter Chunk Size": "Syötä osien koko",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Kirjoita kuvaus",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "Kirjoita Juypyter token",
"Enter Jupyter URL": "Kirjoita Jupyter verkko-osoite",
"Enter Kagi Search API Key": "Kirjoita Kagi Search API -avain",
"Enter Key Behavior": "",
"Enter language codes": "Kirjoita kielikoodit",
"Enter Model ID": "Kirjoita mallitunnus",
"Enter model tag (e.g. {{modelTag}})": "Kirjoita mallitagi (esim. {{modelTag}})",
"Enter Mojeek Search API Key": "Kirjoita Mojeek Search API -avain",
"Enter Number of Steps (e.g. 50)": "Kirjoita askelten määrä (esim. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Kirjoita välityspalvelimen verkko-osoite (esim. https://käyttäjä:salasana@host:portti)",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Kirjoita näytteistäjä (esim. Euler a)",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Kirjoita julkinen WebUI verkko-osoitteesi. Verkko-osoitetta käytetään osoitteiden luontiin ilmoituksissa.",
"Enter Tika Server URL": "Kirjoita Tika Server URL",
"Enter timeout in seconds": "Aseta aikakatkaisu sekunneissa",
"Enter to Send": "",
"Enter Top K": "Kirjoita Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Kirjoita verkko-osoite (esim. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Kirjoita verkko-osoite (esim. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "Esimerkki: posti",
"Example: ou=users,dc=foo,dc=example": "Esimerkki: ou=käyttäjät,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Esimerkki: sAMAccountName tai uid tai userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Jätä pois",
"Execute code for analysis": "Suorita koodi analysointia varten",
"Expand": "",
"Experimental": "Kokeellinen",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Tutki avaruutta",
"Export": "Vie",
"Export All Archived Chats": "Vie kaikki arkistoidut keskustelut",
@@ -566,7 +580,7 @@
"Include": "Sisällytä",
"Include `--api-auth` flag when running stable-diffusion-webui": "Sisällytä `--api-auth`-lippu ajettaessa stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Sisällytä `--api`-lippu ajettaessa stable-diffusion-webui",
- "Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Vaikuttaa siihen, kuinka nopeasti algoritmi reagoi tuotetusta tekstistä saatuun palautteeseen. Alhaisempi oppimisaste johtaa hitaampiin säätöihin, kun taas korkeampi oppimisaste tekee algoritmista reaktiivisemman. (Oletus: 0.1)",
+ "Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Tiedot",
"Input commands": "Syötekäskyt",
"Install from Github URL": "Asenna Github-URL:stä",
@@ -624,6 +638,7 @@
"Local": "Paikallinen",
"Local Models": "Paikalliset mallit",
"Location access not allowed": "",
+ "Logit Bias": "",
"Lost": "Mennyt",
"LTR": "LTR",
"Made by Open WebUI Community": "Tehnyt OpenWebUI-yhteisö",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Käyttöoikeus evätty mikrofonille",
"Permission denied when accessing microphone: {{error}}": "Käyttöoikeus evätty mikrofonille: {{error}}",
"Permissions": "Käyttöoikeudet",
+ "Perplexity API Key": "",
"Personalization": "Personointi",
"Pin": "Kiinnitä",
"Pinned": "Kiinnitetty",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "Nauhoita ääntä",
"Redirecting you to Open WebUI Community": "Ohjataan sinut OpenWebUI-yhteisöön",
- "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Vähentää merkityksetöntä sisältöä tuottavan todennäköisyyttä. Korkeampi arvo (esim. 100) antaa monipuolisempia vastauksia, kun taas alhaisempi arvo (esim. 10) on konservatiivisempi. (Oletus: 40)",
+ "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Viittaa itseen \"Käyttäjänä\" (esim. \"Käyttäjä opiskelee espanjaa\")",
"References from": "Viitteet lähteistä",
"Refused when it shouldn't have": "Kieltäytyi, vaikka ei olisi pitänyt",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Aseta työntekijäsäikeiden määrä laskentaa varten. Tämä asetus kontrolloi, kuinka monta säiettä käytetään saapuvien pyyntöjen rinnakkaiseen käsittelyyn. Arvon kasvattaminen voi parantaa suorituskykyä suurissa samanaikaisissa työkuormissa, mutta voi myös kuluttaa enemmän keskussuorittimen resursseja.",
"Set Voice": "Aseta puheääni",
"Set whisper model": "Aseta whisper-malli",
- "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+ "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
- "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+ "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
- "Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Määrittää, kuinka kauas taaksepäin malli katsoo välttääkseen toistoa. (Oletus: 64, 0 = pois käytöstä, -1 = num_ctx)",
+ "Sets how far back for the model to look back to prevent repetition.": "",
- "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Määrittää satunnaislukujen siemenen käytettäväksi generoinnissa. Tämän asettaminen tiettyyn numeroon saa mallin tuottamaan saman tekstin samalle kehoteelle. (Oletus: satunnainen)",
+ "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
- "Sets the size of the context window used to generate the next token. (Default: 2048)": "Määrittää kontekstiikkunan koon, jota käytetään seuraavan tokenin tuottamiseen. (Oletus: 2048)",
+ "Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Määrittää käytettävät lopetussekvenssit. Kun tämä kuvio havaitaan, LLM lopettaa tekstin tuottamisen ja palauttaa. Useita lopetuskuvioita voidaan asettaa määrittämällä useita erillisiä lopetusparametreja mallitiedostoon.",
"Settings": "Asetukset",
"Settings saved successfully!": "Asetukset tallennettu onnistuneesti!",
@@ -964,7 +980,7 @@
"System Prompt": "Järjestelmäkehote",
"Tags Generation": "Tagien luonti",
"Tags Generation Prompt": "Tagien luontikehote",
- "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Tail-free-otanta käytetään vähentämään vähemmän todennäköisten tokenien vaikutusta tulokseen. Korkeampi arvo (esim. 2,0) vähentää vaikutusta enemmän, kun taas arvo 1,0 poistaa tämän asetuksen käytöstä. (oletus: 1)",
+ "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Napauta keskeyttääksesi",
"Tasks": "Tehtävät",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Kiitos palautteestasi!",
"The Application Account DN you bind with for search": "Hakua varten sidottu sovelluksen käyttäjätilin DN",
"The base to search for users": "Käyttäjien haun perusta",
- "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Erän koko määrittää, kuinka monta tekstipyyntöä käsitellään yhdessä kerralla. Suurempi erän koko voi parantaa mallin suorituskykyä ja nopeutta, mutta se vaatii myös enemmän muistia. (Oletus: 512)",
+ "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Tämän lisäosan takana olevat kehittäjät ovat intohimoisia vapaaehtoisyhteisöstä. Jos koet tämän lisäosan hyödylliseksi, harkitse sen kehittämisen tukemista.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Arviointitulosluettelo perustuu Elo-luokitusjärjestelmään ja päivittyy reaaliajassa.",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Enimmäistiedostokoko megatavuissa. Jos tiedoston koko ylittää tämän rajan, tiedostoa ei ladata.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Suurin sallittu tiedostojen määrä käytettäväksi kerralla chatissa. Jos tiedostojen määrä ylittää tämän rajan, niitä ei ladata.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Pisteytyksen tulee olla arvo välillä 0,0 (0 %) ja 1,0 (100 %).",
- "The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Mallin lämpötila. Lämpötilan nostaminen saa mallin vastaamaan luovemmin. (Oletus: 0,8)",
+ "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Teema",
"Thinking...": "Ajattelee...",
"This action cannot be undone. Do you wish to continue?": "Tätä toimintoa ei voi peruuttaa. Haluatko jatkaa?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Tämä varmistaa, että arvokkaat keskustelusi tallennetaan turvallisesti backend-tietokantaasi. Kiitos!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Tämä on kokeellinen ominaisuus, se ei välttämättä toimi odotetulla tavalla ja se voi muuttua milloin tahansa.",
- "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Tämä asetus kontrolloi, kuinka monta tokenia säilytetään päivittäessä kontekstia. Esimerkiksi, jos asetetaan arvoksi 2, säilytetään viimeiset 2 keskustelukontekstin tokenia. Kontekstin säilyttäminen voi auttaa ylläpitämään keskustelun jatkuvuutta, mutta se voi vähentää kykyä vastata uusiin aiheisiin. (Oletus: 24)",
+ "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
- "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Tämä asetus määrittää mallin vastauksen enimmäistokenmäärän. Tämän rajan nostaminen mahdollistaa mallin antavan pidempiä vastauksia, mutta se voi myös lisätä epähyödyllisen tai epärelevantin sisällön todennäköisyyttä. (Oletus: 128)",
+ "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Tämä vaihtoehto poistaa kaikki kokoelman nykyiset tiedostot ja korvaa ne uusilla ladatuilla tiedostoilla.",
"This response was generated by \"{{model}}\"": "Tämän vastauksen tuotti \"{{model}}\"",
"This will delete": "Tämä poistaa",
@@ -1132,7 +1148,7 @@
"Why?": "Miksi?",
"Widescreen Mode": "Laajakuvatila",
"Won": "Voitti",
- "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Toimii yhdessä top-k:n kanssa. Korkeampi arvo (esim. 0,95) tuottaa monipuolisempaa tekstiä, kun taas alhaisempi arvo (esim. 0,5) tuottaa keskittyneempää ja konservatiivisempaa tekstiä. (Oletus: 0,9)",
+ "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Työtila",
"Workspace Permissions": "Työtilan käyttöoikeudet",
"Write": "Kirjoita",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "Kirjoita mallisi mallinnesisältö tähän",
"Yesterday": "Eilen",
"You": "Sinä",
+ "You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Voit keskustella enintään {{maxCount}} tiedoston kanssa kerralla.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Voit personoida vuorovaikutustasi LLM-ohjelmien kanssa lisäämällä muistoja 'Hallitse'-painikkeen kautta, jolloin ne ovat hyödyllisempiä ja räätälöityjä sinua varten.",
"You cannot upload an empty file.": "Et voi ladata tyhjää tiedostoa.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(par exemple `sh webui.sh --api`)",
"(latest)": "(dernier)",
"{{ models }}": "{{ modèles }}",
+ "{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "Discussions de {{user}}",
"{{webUIName}} Backend Required": "Backend {{webUIName}} requis",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Les administrateurs ont accès à tous les outils en tout temps ; les utilisateurs ont besoin d'outils affectés par modèle dans l'espace de travail.",
"Advanced Parameters": "Paramètres avancés",
"Advanced Params": "Paramètres avancés",
+ "All": "",
"All Documents": "Tous les documents",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Autoriser l'interruption vocale pendant un appel",
"Allowed Endpoints": "",
"Already have an account?": "Avez-vous déjà un compte ?",
- "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+ "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "un assistant",
@@ -93,6 +95,7 @@
"Are you sure?": "Êtes-vous certain ?",
"Arena Models": "",
"Artifacts": "",
+ "Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
+ "Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Clé API Brave Search",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
+ "Collapse": "",
"Collection": "Collection",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Connexions",
- "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+ "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Contacter l'administrateur pour l'accès à l'interface Web",
"Content": "Contenu",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Contrôle comment le texte des messages est divisé pour les demandes de TTS. 'Ponctuation' divise en phrases, 'paragraphes' divise en paragraphes et 'aucun' garde le message comme une seule chaîne.",
- "Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+ "Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Contrôles",
- "Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+ "Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "URL du chat copiée dans le presse-papiers\u00a0!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "Créé le",
"Created by": "Créé par",
"CSV Import": "Import CSV",
+ "Ctrl+Enter to Send": "",
"Current Model": "Modèle actuel amélioré",
"Current Password": "Mot de passe actuel",
"Custom": "Sur mesure",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
- "Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+ "Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Activer les nouvelles inscriptions",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Vérifiez que votre fichier CSV comprenne les 4 colonnes dans cet ordre : Name, Email, Password, Role.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Entrez le chevauchement de chunk",
"Enter Chunk Size": "Entrez la taille de bloc",
+ "Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
+ "Enter Key Behavior": "",
"Enter language codes": "Entrez les codes de langue",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Entrez l'étiquette du modèle (par ex. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Entrez le nombre de pas (par ex. 50)",
+ "Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
+ "Enter to Send": "",
"Enter Top K": "Entrez les Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Entrez l'URL (par ex. {http://127.0.0.1:7860/})",
"Enter URL (e.g. http://localhost:11434)": "Entrez l'URL (par ex. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
+ "Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
+ "Expand": "",
"Experimental": "Expérimental",
+ "Explain": "",
+ "Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Exportation",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "Inclure le drapeau `--api-auth` lors de l'exécution de stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Inclure le drapeau `--api` lorsque vous exécutez stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Entrez les commandes",
"Install from Github URL": "Installer depuis l'URL GitHub",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "Modèles locaux",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "Réalisé par la communauté OpenWebUI",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Autorisation refusée lors de l'accès au micro",
"Permission denied when accessing microphone: {{error}}": "Permission refusée lors de l'accès au microphone : {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Personnalisation",
"Pin": "Épingler",
"Pinned": "Épinglé",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "Enregistrer la voix",
"Redirecting you to Open WebUI Community": "Redirection vers la communauté OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Désignez-vous comme « Utilisateur » (par ex. « L'utilisateur apprend l'espagnol »)",
"References from": "",
"Refused when it shouldn't have": "Refusé alors qu'il n'aurait pas dû l'être",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Définir la voix",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Paramètres",
"Settings saved successfully!": "Paramètres enregistrés avec succès !",
@@ -964,7 +980,7 @@
"System Prompt": "Prompt du système",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Appuyez pour interrompre",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Merci pour vos commentaires !",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Le score doit être une valeur comprise entre 0,0 (0\u00a0%) et 1,0 (100\u00a0%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Thème",
"Thinking...": "En train de réfléchir...",
"This action cannot be undone. Do you wish to continue?": "Cette action ne peut pas être annulée. Souhaitez-vous continuer ?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Cela garantit que vos conversations précieuses soient sauvegardées en toute sécurité dans votre base de données backend. Merci !",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Il s'agit d'une fonctionnalité expérimentale, elle peut ne pas fonctionner comme prévu et est sujette à modification à tout moment.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "Cela supprimera",
@@ -1132,7 +1148,7 @@
"Why?": "",
"Widescreen Mode": "Mode Grand Écran",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Espace de travail",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "",
"Yesterday": "Hier",
"You": "Vous",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Vous pouvez personnaliser vos interactions avec les LLM en ajoutant des souvenirs via le bouton 'Gérer' ci-dessous, ce qui les rendra plus utiles et adaptés à vos besoins.",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(par exemple `sh webui.sh --api`)",
"(latest)": "(dernière version)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} réponses",
"{{user}}'s Chats": "Conversations de {{user}}",
"{{webUIName}} Backend Required": "Backend {{webUIName}} requis",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Les administrateurs ont accès à tous les outils en permanence ; les utilisateurs doivent se voir attribuer des outils pour chaque modèle dans l'espace de travail.",
"Advanced Parameters": "Paramètres avancés",
"Advanced Params": "Paramètres avancés",
"All": "",
"All Documents": "Tous les documents",
"All models deleted successfully": "Tous les modèles ont été supprimés avec succès",
"Allow Chat Controls": "Autoriser les contrôles de chat",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Autoriser l'interruption vocale pendant un appel",
"Allowed Endpoints": "Points de terminaison autorisés",
"Already have an account?": "Avez-vous déjà un compte ?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "Incroyable",
"an assistant": "un assistant",
@@ -93,6 +95,7 @@
"Are you sure?": "Êtes-vous certain ?",
"Arena Models": "Modèles d'arène",
"Artifacts": "Artéfacts",
"Ask": "",
"Ask a question": "Posez votre question",
"Assistant": "Assistant",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Point de terminaison Bing Search V7",
"Bing Search V7 Subscription Key": "Clé d'abonnement Bing Search V7",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Clé API Brave Search",
"By {{name}}": "Par {{name}}",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Collection",
"Color": "Couleur",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "Confirmer votre nouveau mot de passe",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Connexions",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Contacter l'administrateur pour obtenir l'accès à WebUI",
"Content": "Contenu",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Continuer avec l'email",
"Continue with LDAP": "Continuer avec LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Contrôle la façon dont le texte des messages est divisé pour les demandes de Text-to-Speech. « ponctuation » divise en phrases, « paragraphes » divise en paragraphes et « aucun » garde le message en tant que chaîne de texte unique.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Contrôles",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Copié",
"Copied shared chat URL to clipboard!": "URL du chat copié dans le presse-papiers !",
"Copied to clipboard": "Copié dans le presse-papiers",
@@ -245,6 +250,7 @@
"Created At": "Créé le",
"Created by": "Créé par",
"CSV Import": "Import CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Modèle actuel",
"Current Password": "Mot de passe actuel",
"Custom": "Sur mesure",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Activer le verrouillage de la mémoire (mlock) pour empêcher les données du modèle d'être échangées de la RAM. Cette option verrouille l'ensemble de pages de travail du modèle en RAM, garantissant qu'elles ne seront pas échangées vers le disque. Cela peut aider à maintenir les performances en évitant les défauts de page et en assurant un accès rapide aux données.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Activer le mappage de la mémoire (mmap) pour charger les données du modèle. Cette option permet au système d'utiliser le stockage disque comme une extension de la RAM en traitant les fichiers disque comme s'ils étaient en RAM. Cela peut améliorer les performances du modèle en permettant un accès plus rapide aux données. Cependant, cela peut ne pas fonctionner correctement avec tous les systèmes et peut consommer une quantité significative d'espace disque.",
"Enable Message Rating": "Activer l'évaluation des messages",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Activer les nouvelles inscriptions",
"Enabled": "Activé",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Vérifiez que votre fichier CSV comprenne les 4 colonnes dans cet ordre : Name, Email, Password, Role.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "Entrez l'échelle CFG (par ex. 7.0)",
"Enter Chunk Overlap": "Entrez le chevauchement des chunks",
"Enter Chunk Size": "Entrez la taille des chunks",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Entrez la description",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "Entrez la clé API Kagi Search",
"Enter Key Behavior": "",
"Enter language codes": "Entrez les codes de langue",
"Enter Model ID": "Entrez l'ID du modèle",
"Enter model tag (e.g. {{modelTag}})": "Entrez le tag du modèle (par ex. {{modelTag}})",
"Enter Mojeek Search API Key": "Entrez la clé API Mojeek",
"Enter Number of Steps (e.g. 50)": "Entrez le nombre d'étapes (par ex. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Entrez l'URL du proxy (par ex. https://user:password@host:port)",
"Enter reasoning effort": "Entrez l'effort de raisonnement",
"Enter Sampler (e.g. Euler a)": "Entrez le sampler (par ex. Euler a)",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Entrez l'URL publique de votre WebUI. Cette URL sera utilisée pour générer des liens dans les notifications.",
"Enter Tika Server URL": "Entrez l'URL du serveur Tika",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Entrez les Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Entrez l'URL (par ex. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Entrez l'URL (par ex. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "Exemple: mail",
"Example: ou=users,dc=foo,dc=example": "Exemple: ou=utilisateurs,dc=foo,dc=exemple",
"Example: sAMAccountName or uid or userPrincipalName": "Exemple: sAMAccountName ou uid ou userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Exclure",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Expérimental",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Explorer le cosmos",
"Export": "Exportation",
"Export All Archived Chats": "Exporter toutes les conversations archivées",
@@ -566,7 +580,7 @@
"Include": "Inclure",
"Include `--api-auth` flag when running stable-diffusion-webui": "Inclure le drapeau `--api-auth` lors de l'exécution de stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Inclure le drapeau `--api` lorsque vous exécutez stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Commandes d'entrée",
"Install from Github URL": "Installer depuis une URL GitHub",
@@ -624,6 +638,7 @@
"Local": "Local",
"Local Models": "Modèles locaux",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Perdu",
"LTR": "LTR",
"Made by Open WebUI Community": "Réalisé par la communauté OpenWebUI",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Accès au microphone refusé",
"Permission denied when accessing microphone: {{error}}": "Accès au microphone refusé : {{error}}",
"Permissions": "Permissions",
"Perplexity API Key": "",
"Personalization": "Personnalisation",
"Pin": "Épingler",
"Pinned": "Épinglé",
@@ -809,7 +825,7 @@
"Reasoning Effort": "Effort de raisonnement",
"Record voice": "Enregistrer la voix",
"Redirecting you to Open WebUI Community": "Redirection vers la communauté OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Réduit la probabilité de générer des non-sens. Une valeur plus élevée (par exemple 100) donnera des réponses plus diversifiées, tandis qu'une valeur plus basse (par exemple 10) sera plus conservatrice. (Par défaut : 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Désignez-vous comme « Utilisateur » (par ex. « L'utilisateur apprend l'espagnol »)",
"References from": "Références de",
"Refused when it shouldn't have": "Refusé alors qu'il n'aurait pas dû l'être",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Définir le nombre de threads de travail utilisés pour le calcul. Cette option contrôle combien de threads sont utilisés pour traiter les demandes entrantes simultanément. L'augmentation de cette valeur peut améliorer les performances sous de fortes charges de travail concurrentes mais peut également consommer plus de ressources CPU.",
"Set Voice": "Choisir la voix",
"Set whisper model": "Choisir le modèle Whisper",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Définit la profondeur de recherche du modèle pour prévenir les répétitions. (Par défaut : 64, 0 = désactivé, -1 = num_ctx)",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Définit la graine de nombre aléatoire à utiliser pour la génération. La définition de cette valeur à un nombre spécifique fera que le modèle générera le même texte pour le même prompt. (Par défaut : aléatoire)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Définit la taille de la fenêtre contextuelle utilisée pour générer le prochain token. (Par défaut : 2048)",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Définit les séquences d'arrêt à utiliser. Lorsque ce motif est rencontré, le LLM cessera de générer du texte et retournera. Plusieurs motifs d'arrêt peuvent être définis en spécifiant plusieurs paramètres d'arrêt distincts dans un fichier modèle.",
"Settings": "Paramètres",
"Settings saved successfully!": "Paramètres enregistrés avec succès !",
@@ -964,7 +980,7 @@
"System Prompt": "Prompt système",
"Tags Generation": "Génération de tags",
"Tags Generation Prompt": "Prompt de génération de tags",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "L'échantillonnage sans queue est utilisé pour réduire l'impact des tokens moins probables dans la sortie. Une valeur plus élevée (par exemple 2.0) réduira davantage l'impact, tandis qu'une valeur de 1.0 désactive ce paramètre. (par défaut : 1)",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "Parler au modèle",
"Tap to interrupt": "Appuyez pour interrompre",
"Tasks": "Tâches",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Merci pour vos commentaires !",
"The Application Account DN you bind with for search": "Le DN du compte de l'application avec lequel vous vous liez pour la recherche",
"The base to search for users": "La base pour rechercher des utilisateurs",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "La taille de lot détermine combien de demandes de texte sont traitées ensemble en une fois. Une taille de lot plus grande peut augmenter les performances et la vitesse du modèle, mais elle nécessite également plus de mémoire. (Par défaut : 512)",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Les développeurs de ce plugin sont des bénévoles passionnés issus de la communauté. Si vous trouvez ce plugin utile, merci de contribuer à son développement.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Le classement d'évaluation est basé sur le système de notation Elo et est mis à jour en temps réel.",
"The LDAP attribute that maps to the mail that users use to sign in.": "L'attribut LDAP qui correspond à l'adresse e-mail que les utilisateurs utilisent pour se connecter.",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "La taille maximale du fichier en Mo. Si la taille du fichier dépasse cette limite, le fichier ne sera pas téléchargé.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Le nombre maximal de fichiers pouvant être utilisés en même temps dans la conversation. Si le nombre de fichiers dépasse cette limite, les fichiers ne seront pas téléchargés.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Le score doit être une valeur comprise entre 0,0 (0%) et 1,0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "La température du modèle. Augmenter la température rendra le modèle plus créatif dans ses réponses. (Par défaut : 0.8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Thème",
"Thinking...": "En train de réfléchir...",
"This action cannot be undone. Do you wish to continue?": "Cette action ne peut pas être annulée. Souhaitez-vous continuer ?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Cela garantit que vos conversations précieuses soient sauvegardées en toute sécurité dans votre base de données backend. Merci !",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Il s'agit d'une fonctionnalité expérimentale, elle peut ne pas fonctionner comme prévu et est sujette à modification à tout moment.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Cette option contrôle combien de tokens sont conservés lors du rafraîchissement du contexte. Par exemple, si ce paramètre est défini à 2, les 2 derniers tokens du contexte de conversation seront conservés. Préserver le contexte peut aider à maintenir la continuité d'une conversation, mais cela peut réduire la capacité à répondre à de nouveaux sujets. (Par défaut : 24)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Cette option définit le nombre maximum de tokens que le modèle peut générer dans sa réponse. Augmenter cette limite permet au modèle de fournir des réponses plus longues, mais cela peut également augmenter la probabilité de générer du contenu inutile ou non pertinent. (Par défaut : 128)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Cette option supprimera tous les fichiers existants dans la collection et les remplacera par les fichiers nouvellement téléchargés.",
"This response was generated by \"{{model}}\"": "Cette réponse a été générée par \"{{model}}\"",
"This will delete": "Cela supprimera",
@@ -1132,7 +1148,7 @@
"Why?": "Pourquoi ?",
"Widescreen Mode": "Mode grand écran",
"Won": "Victoires",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Fonctionne avec le top-k. Une valeur plus élevée (par ex. 0.95) donnera un texte plus diversifié, tandis qu'une valeur plus basse (par ex. 0.5) générera un texte plus concentré et conservateur. (Par défaut : 0.9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Espace de travail",
"Workspace Permissions": "Autorisations de l'espace de travail",
"Write": "Écrire",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "Écrivez ici le contenu de votre modèle",
"Yesterday": "Hier",
"You": "Vous",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Vous ne pouvez discuter qu'avec un maximum de {{maxCount}} fichier(s) à la fois.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Vous pouvez personnaliser vos interactions avec les LLM en ajoutant des mémoires à l'aide du bouton « Gérer » ci-dessous, ce qui les rendra plus utiles et mieux adaptées à vos besoins.",
"You cannot upload an empty file.": "Vous ne pouvez pas envoyer un fichier vide.",

View file

@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(למשל `sh webui.sh --api`)",
"(latest)": "(האחרון)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "צ'אטים של {{user}}",
"{{webUIName}} Backend Required": "נדרש Backend של {{webUIName}}",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "פרמטרים מתקדמים",
"Advanced Params": "פרמטרים מתקדמים",
"All": "",
"All Documents": "כל המסמכים",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "כבר יש לך חשבון?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "עוזר",
@@ -93,6 +95,7 @@
"Are you sure?": "האם אתה בטוח?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "מפתח API של חיפוש אמיץ",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "אוסף",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "חיבורים",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "תוכן",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "העתקת כתובת URL של צ'אט משותף ללוח!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "נוצר ב",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "המודל הנוכחי",
"Current Password": "הסיסמה הנוכחית",
"Custom": "מותאם אישית",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "אפשר הרשמות חדשות",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "ודא שקובץ ה-CSV שלך כולל 4 עמודות בסדר הבא: שם, דוא\"ל, סיסמה, תפקיד.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "הזן חפיפת נתונים",
"Enter Chunk Size": "הזן גודל נתונים",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "הזן קודי שפה",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "הזן תג מודל (למשל {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "הזן מספר שלבים (למשל 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "הזן Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "הזן כתובת URL (למשל http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "הזן כתובת URL (למשל http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "ניסיוני",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "ייצא",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "כלול את הדגל `--api` בעת הרצת stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "מידע",
"Input commands": "פקודות קלט",
"Install from Github URL": "התקן מכתובת URL של Github",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "נוצר על ידי קהילת OpenWebUI",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "ההרשאה נדחתה בעת גישה למיקרופון: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "תאור",
"Pin": "",
"Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "הקלט קול",
"Redirecting you to Open WebUI Community": "מפנה אותך לקהילת OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "נדחה כאשר לא היה צריך",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "הגדר קול",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "הגדרות",
"Settings saved successfully!": "ההגדרות נשמרו בהצלחה!",
@@ -964,7 +980,7 @@
"System Prompt": "תגובת מערכת",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "תודה על המשוב שלך!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "ציון צריך להיות ערך בין 0.0 (0%) ל-1.0 (100%)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "", "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "נושא", "Theme": "נושא",
"Thinking...": "", "Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "", "This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "פעולה זו מבטיחה שהשיחות בעלות הערך שלך יישמרו באופן מאובטח במסד הנתונים העורפי שלך. תודה!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "פעולה זו מבטיחה שהשיחות בעלות הערך שלך יישמרו באופן מאובטח במסד הנתונים העורפי שלך. תודה!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
"This will delete": "", "This will delete": "",
@ -1132,7 +1148,7 @@
"Why?": "", "Why?": "",
"Widescreen Mode": "", "Widescreen Mode": "",
"Won": "", "Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "סביבה", "Workspace": "סביבה",
"Workspace Permissions": "", "Workspace Permissions": "",
"Write": "", "Write": "",
@ -1142,6 +1158,7 @@
"Write your model template content here": "", "Write your model template content here": "",
"Yesterday": "אתמול", "Yesterday": "אתמול",
"You": "אתה", "You": "אתה",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "", "You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(e.g. `sh webui.sh --api`)",
"(latest)": "(latest)",
"{{ models }}": "{{ मॉडल }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} की चैट",
"{{webUIName}} Backend Required": "{{webUIName}} बैकएंड आवश्यक",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "उन्नत पैरामीटर",
"Advanced Params": "उन्नत परम",
"All": "",
"All Documents": "सभी डॉक्यूमेंट्स",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "क्या आपके पास पहले से एक खाता मौजूद है?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "एक सहायक",
@@ -93,6 +95,7 @@
"Are you sure?": "क्या आपको यकीन है?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave सर्च एपीआई कुंजी",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "संग्रह",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "सम्बन्ध",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "सामग्री",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "साझा चैट URL को क्लिपबोर्ड पर कॉपी किया गया!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "किस समय बनाया गया",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "वर्तमान मॉडल",
"Current Password": "वर्तमान पासवर्ड",
"Custom": "कस्टम संस्करण",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "नए साइन अप सक्रिय करें",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "सुनिश्चित करें कि आपकी CSV फ़ाइल में इस क्रम में 4 कॉलम शामिल हैं: नाम, ईमेल, पासवर्ड, भूमिका।",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "चंक ओवरलैप दर्ज करें",
"Enter Chunk Size": "खंड आकार दर्ज करें",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "भाषा कोड दर्ज करें",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Model tag दर्ज करें (उदा. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "चरणों की संख्या दर्ज करें (उदा. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "शीर्ष K दर्ज करें",
"Enter URL (e.g. http://127.0.0.1:7860/)": "यूआरएल दर्ज करें (उदा. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "यूआरएल दर्ज करें (उदा. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "प्रयोगात्मक",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "निर्यातित माल",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "stable-diffusion-webui चलाते समय `--api` ध्वज शामिल करें",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "सूचना-विषयक",
"Input commands": "इनपुट क命",
"Install from Github URL": "Github URL से इंस्टॉल करें",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "OpenWebUI समुदाय द्वारा निर्मित",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "माइक्रोफ़ोन तक पहुँचने पर अनुमति अस्वीकृत: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "पेरसनलाइज़मेंट",
"Pin": "",
"Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "आवाज रिकॉर्ड करना",
"Redirecting you to Open WebUI Community": "आपको OpenWebUI समुदाय पर पुनर्निर्देशित किया जा रहा है",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "जब ऐसा नहीं होना चाहिए था तो मना कर दिया",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "आवाज सेट करें",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "सेटिंग्स",
"Settings saved successfully!": "सेटिंग्स सफलतापूर्वक सहेजी गईं!",
@@ -964,7 +980,7 @@
"System Prompt": "सिस्टम प्रॉम्प्ट",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "आपकी प्रतिक्रिया के लिए धन्यवाद!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "स्कोर का मान 0.0 (0%) और 1.0 (100%) के बीच होना चाहिए।",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "थीम",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "यह सुनिश्चित करता है कि आपकी मूल्यवान बातचीत आपके बैकएंड डेटाबेस में सुरक्षित रूप से सहेजी गई है। धन्यवाद!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
"This will delete": "", "This will delete": "",
@@ -1132,7 +1148,7 @@
"Why?": "", "Why?": "",
"Widescreen Mode": "", "Widescreen Mode": "",
"Won": "", "Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "वर्कस्पेस", "Workspace": "वर्कस्पेस",
"Workspace Permissions": "", "Workspace Permissions": "",
"Write": "", "Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "", "Write your model template content here": "",
"Yesterday": "कल", "Yesterday": "कल",
"You": "आप", "You": "आप",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "", "You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(npr. `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(npr. `sh webui.sh --api`)",
"(latest)": "(najnovije)", "(latest)": "(najnovije)",
"{{ models }}": "{{ modeli }}", "{{ models }}": "{{ modeli }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "", "{{COUNT}} Replies": "",
"{{user}}'s Chats": "Razgovori korisnika {{user}}", "{{user}}'s Chats": "Razgovori korisnika {{user}}",
"{{webUIName}} Backend Required": "{{webUIName}} Backend je potreban", "{{webUIName}} Backend Required": "{{webUIName}} Backend je potreban",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "Napredni parametri", "Advanced Parameters": "Napredni parametri",
"Advanced Params": "Napredni parametri", "Advanced Params": "Napredni parametri",
"All": "",
"All Documents": "Svi dokumenti", "All Documents": "Svi dokumenti",
"All models deleted successfully": "", "All models deleted successfully": "",
"Allow Chat Controls": "", "Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "", "Allow Voice Interruption in Call": "",
"Allowed Endpoints": "", "Allowed Endpoints": "",
"Already have an account?": "Već imate račun?", "Already have an account?": "Već imate račun?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "", "Always": "",
"Amazing": "", "Amazing": "",
"an assistant": "asistent", "an assistant": "asistent",
@@ -93,6 +95,7 @@
"Are you sure?": "Jeste li sigurni?", "Are you sure?": "Jeste li sigurni?",
"Arena Models": "", "Arena Models": "",
"Artifacts": "", "Artifacts": "",
"Ask": "",
"Ask a question": "", "Ask a question": "",
"Assistant": "", "Assistant": "",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "", "Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "", "Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "", "Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave tražilica - API ključ", "Brave Search API Key": "Brave tražilica - API ključ",
"By {{name}}": "", "By {{name}}": "",
"Bypass Embedding and Retrieval": "", "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "", "Code Interpreter": "",
"Code Interpreter Engine": "", "Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "", "Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Kolekcija", "Collection": "Kolekcija",
"Color": "", "Color": "",
"ComfyUI": "ComfyUI", "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "", "Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "", "Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Povezivanja", "Connections": "Povezivanja",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "", "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Kontaktirajte admina za WebUI pristup", "Contact Admin for WebUI Access": "Kontaktirajte admina za WebUI pristup",
"Content": "Sadržaj", "Content": "Sadržaj",
"Content Extraction Engine": "", "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "", "Continue with Email": "",
"Continue with LDAP": "", "Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "", "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "", "Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "", "Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "", "Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "", "Copied": "",
"Copied shared chat URL to clipboard!": "URL dijeljenog razgovora kopiran u međuspremnik!", "Copied shared chat URL to clipboard!": "URL dijeljenog razgovora kopiran u međuspremnik!",
"Copied to clipboard": "", "Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "Stvoreno", "Created At": "Stvoreno",
"Created by": "", "Created by": "",
"CSV Import": "", "CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "Trenutni model", "Current Model": "Trenutni model",
"Current Password": "Trenutna lozinka", "Current Password": "Trenutna lozinka",
"Custom": "Prilagođeno", "Custom": "Prilagođeno",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "", "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "", "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "", "Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "", "Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Omogući nove prijave", "Enable New Sign Ups": "Omogući nove prijave",
"Enabled": "", "Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Provjerite da vaša CSV datoteka uključuje 4 stupca u ovom redoslijedu: Name, Email, Password, Role.", "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Provjerite da vaša CSV datoteka uključuje 4 stupca u ovom redoslijedu: Name, Email, Password, Role.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "", "Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Unesite preklapanje dijelova", "Enter Chunk Overlap": "Unesite preklapanje dijelova",
"Enter Chunk Size": "Unesite veličinu dijela", "Enter Chunk Size": "Unesite veličinu dijela",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "", "Enter description": "",
"Enter Document Intelligence Endpoint": "", "Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "", "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "", "Enter Jupyter Token": "",
"Enter Jupyter URL": "", "Enter Jupyter URL": "",
"Enter Kagi Search API Key": "", "Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Unesite kodove jezika", "Enter language codes": "Unesite kodove jezika",
"Enter Model ID": "", "Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Unesite oznaku modela (npr. {{modelTag}})", "Enter model tag (e.g. {{modelTag}})": "Unesite oznaku modela (npr. {{modelTag}})",
"Enter Mojeek Search API Key": "", "Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Unesite broj koraka (npr. 50)", "Enter Number of Steps (e.g. 50)": "Unesite broj koraka (npr. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "", "Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "", "Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "", "Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "", "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "", "Enter Tika Server URL": "",
"Enter timeout in seconds": "", "Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Unesite Top K", "Enter Top K": "Unesite Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Unesite URL (npr. http://127.0.0.1:7860/)", "Enter URL (e.g. http://127.0.0.1:7860/)": "Unesite URL (npr. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Unesite URL (npr. http://localhost:11434)", "Enter URL (e.g. http://localhost:11434)": "Unesite URL (npr. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "", "Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "", "Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "", "Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "", "Exclude": "",
"Execute code for analysis": "", "Execute code for analysis": "",
"Expand": "",
"Experimental": "Eksperimentalno", "Experimental": "Eksperimentalno",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "", "Explore the cosmos": "",
"Export": "Izvoz", "Export": "Izvoz",
"Export All Archived Chats": "", "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "", "Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "", "Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "Uključite zastavicu `--api` prilikom pokretanja stable-diffusion-webui", "Include `--api` flag when running stable-diffusion-webui": "Uključite zastavicu `--api` prilikom pokretanja stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "", "Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Informacije", "Info": "Informacije",
"Input commands": "Unos naredbi", "Input commands": "Unos naredbi",
"Install from Github URL": "Instaliraj s Github URL-a", "Install from Github URL": "Instaliraj s Github URL-a",
@@ -624,6 +638,7 @@
"Local": "", "Local": "",
"Local Models": "Lokalni modeli", "Local Models": "Lokalni modeli",
"Location access not allowed": "", "Location access not allowed": "",
"Logit Bias": "",
"Lost": "", "Lost": "",
"LTR": "LTR", "LTR": "LTR",
"Made by Open WebUI Community": "Izradio OpenWebUI Community", "Made by Open WebUI Community": "Izradio OpenWebUI Community",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Dopuštenje je odbijeno prilikom pristupa mikrofonu", "Permission denied when accessing microphone": "Dopuštenje je odbijeno prilikom pristupa mikrofonu",
"Permission denied when accessing microphone: {{error}}": "Pristup mikrofonu odbijen: {{error}}", "Permission denied when accessing microphone: {{error}}": "Pristup mikrofonu odbijen: {{error}}",
"Permissions": "", "Permissions": "",
"Perplexity API Key": "",
"Personalization": "Prilagodba", "Personalization": "Prilagodba",
"Pin": "", "Pin": "",
"Pinned": "", "Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "", "Reasoning Effort": "",
"Record voice": "Snimanje glasa", "Record voice": "Snimanje glasa",
"Redirecting you to Open WebUI Community": "Preusmjeravanje na OpenWebUI zajednicu", "Redirecting you to Open WebUI Community": "Preusmjeravanje na OpenWebUI zajednicu",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "", "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Nazivajte se \"Korisnik\" (npr. \"Korisnik uči španjolski\")", "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Nazivajte se \"Korisnik\" (npr. \"Korisnik uči španjolski\")",
"References from": "", "References from": "",
"Refused when it shouldn't have": "Odbijen kada nije trebao biti", "Refused when it shouldn't have": "Odbijen kada nije trebao biti",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "", "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Postavi glas", "Set Voice": "Postavi glas",
"Set whisper model": "", "Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "", "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "", "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "", "Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "", "Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Postavke", "Settings": "Postavke",
"Settings saved successfully!": "Postavke su uspješno spremljene!", "Settings saved successfully!": "Postavke su uspješno spremljene!",
@@ -964,7 +980,7 @@
"System Prompt": "Sistemski prompt", "System Prompt": "Sistemski prompt",
"Tags Generation": "", "Tags Generation": "",
"Tags Generation Prompt": "", "Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "", "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "", "Talk to model": "",
"Tap to interrupt": "", "Tap to interrupt": "",
"Tasks": "", "Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Hvala na povratnim informacijama!", "Thanks for your feedback!": "Hvala na povratnim informacijama!",
"The Application Account DN you bind with for search": "", "The Application Account DN you bind with for search": "",
"The base to search for users": "", "The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "", "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "", "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Ocjena treba biti vrijednost između 0,0 (0%) i 1,0 (100%).", "The score should be a value between 0.0 (0%) and 1.0 (100%).": "Ocjena treba biti vrijednost između 0,0 (0%) i 1,0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "", "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema", "Theme": "Tema",
"Thinking...": "Razmišljam", "Thinking...": "Razmišljam",
"This action cannot be undone. Do you wish to continue?": "", "This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ovo osigurava da su vaši vrijedni razgovori sigurno spremljeni u bazu podataka. Hvala vam!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ovo osigurava da su vaši vrijedni razgovori sigurno spremljeni u bazu podataka. Hvala vam!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Ovo je eksperimentalna značajka, možda neće funkcionirati prema očekivanjima i podložna je promjenama u bilo kojem trenutku.", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Ovo je eksperimentalna značajka, možda neće funkcionirati prema očekivanjima i podložna je promjenama u bilo kojem trenutku.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
"This will delete": "", "This will delete": "",
@@ -1132,7 +1148,7 @@
"Why?": "", "Why?": "",
"Widescreen Mode": "Mod širokog zaslona", "Widescreen Mode": "Mod širokog zaslona",
"Won": "", "Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Radna ploča", "Workspace": "Radna ploča",
"Workspace Permissions": "", "Workspace Permissions": "",
"Write": "", "Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "", "Write your model template content here": "",
"Yesterday": "Jučer", "Yesterday": "Jučer",
"You": "Vi", "You": "Vi",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Možete personalizirati svoje interakcije s LLM-ima dodavanjem uspomena putem gumba 'Upravljanje' u nastavku, čineći ih korisnijima i prilagođenijima vama.", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Možete personalizirati svoje interakcije s LLM-ima dodavanjem uspomena putem gumba 'Upravljanje' u nastavku, čineći ih korisnijima i prilagođenijima vama.",
"You cannot upload an empty file.": "", "You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(pl. `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(pl. `sh webui.sh --api`)",
"(latest)": "(legújabb)", "(latest)": "(legújabb)",
"{{ models }}": "{{ modellek }}", "{{ models }}": "{{ modellek }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "", "{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} beszélgetései", "{{user}}'s Chats": "{{user}} beszélgetései",
"{{webUIName}} Backend Required": "{{webUIName}} Backend szükséges", "{{webUIName}} Backend Required": "{{webUIName}} Backend szükséges",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Az adminok mindig hozzáférnek minden eszközhöz; a felhasználóknak modellenként kell eszközöket hozzárendelni a munkaterületen.", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Az adminok mindig hozzáférnek minden eszközhöz; a felhasználóknak modellenként kell eszközöket hozzárendelni a munkaterületen.",
"Advanced Parameters": "Haladó paraméterek", "Advanced Parameters": "Haladó paraméterek",
"Advanced Params": "Haladó paraméterek", "Advanced Params": "Haladó paraméterek",
"All": "",
"All Documents": "Minden dokumentum", "All Documents": "Minden dokumentum",
"All models deleted successfully": "", "All models deleted successfully": "",
"Allow Chat Controls": "", "Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Hang megszakítás engedélyezése hívás közben", "Allow Voice Interruption in Call": "Hang megszakítás engedélyezése hívás közben",
"Allowed Endpoints": "", "Allowed Endpoints": "",
"Already have an account?": "Már van fiókod?", "Already have an account?": "Már van fiókod?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "", "Always": "",
"Amazing": "", "Amazing": "",
"an assistant": "egy asszisztens", "an assistant": "egy asszisztens",
@@ -93,6 +95,7 @@
"Are you sure?": "Biztos vagy benne?", "Are you sure?": "Biztos vagy benne?",
"Arena Models": "Arena modellek", "Arena Models": "Arena modellek",
"Artifacts": "Műtermékek", "Artifacts": "Műtermékek",
"Ask": "",
"Ask a question": "Kérdezz valamit", "Ask a question": "Kérdezz valamit",
"Assistant": "Asszisztens", "Assistant": "Asszisztens",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
 "Bing Search V7 Endpoint": "",
 "Bing Search V7 Subscription Key": "",
 "Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
 "Brave Search API Key": "Brave Search API kulcs",
 "By {{name}}": "",
 "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
 "Code Interpreter": "",
 "Code Interpreter Engine": "",
 "Code Interpreter Prompt Template": "",
+"Collapse": "",
 "Collection": "Gyűjtemény",
 "Color": "",
 "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
 "Confirm your new password": "",
 "Connect to your own OpenAI compatible API endpoints.": "",
 "Connections": "Kapcsolatok",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
 "Contact Admin for WebUI Access": "Lépj kapcsolatba az adminnal a WebUI hozzáférésért",
 "Content": "Tartalom",
 "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "",
 "Continue with LDAP": "",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Szabályozd, hogyan legyen felosztva az üzenet szövege a TTS kérésekhez. A 'Központozás' mondatokra bontja, a 'Bekezdések' bekezdésekre bontja, a 'Nincs' pedig egyetlen szövegként kezeli az üzenetet.",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "Vezérlők",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "Másolva",
 "Copied shared chat URL to clipboard!": "Megosztott beszélgetés URL másolva a vágólapra!",
 "Copied to clipboard": "Vágólapra másolva",
@@ -245,6 +250,7 @@
 "Created At": "Létrehozva",
 "Created by": "Létrehozta",
 "CSV Import": "CSV importálás",
+"Ctrl+Enter to Send": "",
 "Current Model": "Jelenlegi modell",
 "Current Password": "Jelenlegi jelszó",
 "Custom": "Egyéni",
@@ -358,7 +364,7 @@
 "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
 "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
 "Enable Message Rating": "Üzenet értékelés engedélyezése",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
 "Enable New Sign Ups": "Új regisztrációk engedélyezése",
 "Enabled": "Engedélyezve",
 "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Győződj meg róla, hogy a CSV fájl tartalmazza ezt a 4 oszlopot ebben a sorrendben: Név, Email, Jelszó, Szerep.",
@@ -375,6 +381,7 @@
 "Enter CFG Scale (e.g. 7.0)": "Add meg a CFG skálát (pl. 7.0)",
 "Enter Chunk Overlap": "Add meg a darab átfedést",
 "Enter Chunk Size": "Add meg a darab méretet",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
 "Enter description": "Add meg a leírást",
 "Enter Document Intelligence Endpoint": "",
 "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
 "Enter Jupyter Token": "",
 "Enter Jupyter URL": "",
 "Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
 "Enter language codes": "Add meg a nyelvi kódokat",
 "Enter Model ID": "Add meg a modell azonosítót",
 "Enter model tag (e.g. {{modelTag}})": "Add meg a modell címkét (pl. {{modelTag}})",
 "Enter Mojeek Search API Key": "",
 "Enter Number of Steps (e.g. 50)": "Add meg a lépések számát (pl. 50)",
+"Enter Perplexity API Key": "",
 "Enter proxy URL (e.g. https://user:password@host:port)": "",
 "Enter reasoning effort": "",
 "Enter Sampler (e.g. Euler a)": "Add meg a mintavételezőt (pl. Euler a)",
@@ -417,6 +426,7 @@
 "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
 "Enter Tika Server URL": "Add meg a Tika szerver URL-t",
 "Enter timeout in seconds": "",
+"Enter to Send": "",
 "Enter Top K": "Add meg a Top K értéket",
 "Enter URL (e.g. http://127.0.0.1:7860/)": "Add meg az URL-t (pl. http://127.0.0.1:7860/)",
 "Enter URL (e.g. http://localhost:11434)": "Add meg az URL-t (pl. http://localhost:11434)",
@@ -440,9 +450,13 @@
 "Example: mail": "",
 "Example: ou=users,dc=foo,dc=example": "",
 "Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
 "Exclude": "Kizárás",
 "Execute code for analysis": "",
+"Expand": "",
 "Experimental": "Kísérleti",
+"Explain": "",
+"Explain this section to me in more detail": "",
 "Explore the cosmos": "",
 "Export": "Exportálás",
 "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
 "Include": "Tartalmaz",
 "Include `--api-auth` flag when running stable-diffusion-webui": "Add hozzá a `--api-auth` kapcsolót a stable-diffusion-webui futtatásakor",
 "Include `--api` flag when running stable-diffusion-webui": "Add hozzá a `--api` kapcsolót a stable-diffusion-webui futtatásakor",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
 "Info": "Információ",
 "Input commands": "Beviteli parancsok",
 "Install from Github URL": "Telepítés Github URL-ről",
@@ -624,6 +638,7 @@
 "Local": "",
 "Local Models": "Helyi modellek",
 "Location access not allowed": "",
+"Logit Bias": "",
 "Lost": "Elveszett",
 "LTR": "LTR",
 "Made by Open WebUI Community": "Az OpenWebUI közösség által készítve",
@@ -764,6 +779,7 @@
 "Permission denied when accessing microphone": "Hozzáférés megtagadva a mikrofonhoz",
 "Permission denied when accessing microphone: {{error}}": "Hozzáférés megtagadva a mikrofonhoz: {{error}}",
 "Permissions": "",
+"Perplexity API Key": "",
 "Personalization": "Személyre szabás",
 "Pin": "Rögzítés",
 "Pinned": "Rögzítve",
@@ -809,7 +825,7 @@
 "Reasoning Effort": "",
 "Record voice": "Hang rögzítése",
 "Redirecting you to Open WebUI Community": "Átirányítás az OpenWebUI közösséghez",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
 "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Hivatkozzon magára \"Felhasználó\"-ként (pl. \"A Felhasználó spanyolul tanul\")",
 "References from": "Hivatkozások innen",
 "Refused when it shouldn't have": "Elutasítva, amikor nem kellett volna",
@@ -918,11 +934,11 @@
 "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
 "Set Voice": "Hang beállítása",
 "Set whisper model": "Whisper modell beállítása",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
+"Sets the size of the context window used to generate the next token.": "",
 "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
 "Settings": "Beállítások",
 "Settings saved successfully!": "Beállítások sikeresen mentve!",
@@ -964,7 +980,7 @@
 "System Prompt": "Rendszer prompt",
 "Tags Generation": "",
 "Tags Generation Prompt": "Címke generálási prompt",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
 "Talk to model": "",
 "Tap to interrupt": "Koppintson a megszakításhoz",
 "Tasks": "",
@@ -979,7 +995,7 @@
 "Thanks for your feedback!": "Köszönjük a visszajelzést!",
 "The Application Account DN you bind with for search": "",
 "The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
 "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "A bővítmény fejlesztői lelkes önkéntesek a közösségből. Ha hasznosnak találja ezt a bővítményt, kérjük, fontolja meg a fejlesztéséhez való hozzájárulást.",
 "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Az értékelési ranglista az Elo értékelési rendszeren alapul és valós időben frissül.",
 "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
 "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "A maximális fájlméret MB-ban. Ha a fájlméret meghaladja ezt a limitet, a fájl nem lesz feltöltve.",
 "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "A chatben egyszerre használható fájlok maximális száma. Ha a fájlok száma meghaladja ezt a limitet, a fájlok nem lesznek feltöltve.",
 "The score should be a value between 0.0 (0%) and 1.0 (100%).": "A pontszámnak 0,0 (0%) és 1,0 (100%) közötti értéknek kell lennie.",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
 "Theme": "Téma",
 "Thinking...": "Gondolkodik...",
 "This action cannot be undone. Do you wish to continue?": "Ez a művelet nem vonható vissza. Szeretné folytatni?",
 "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ez biztosítja, hogy értékes beszélgetései biztonságosan mentésre kerüljenek a backend adatbázisban. Köszönjük!",
 "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Ez egy kísérleti funkció, lehet, hogy nem a várt módon működik és bármikor változhat.",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
 "This option will delete all existing files in the collection and replace them with newly uploaded files.": "Ez az opció törli az összes meglévő fájlt a gyűjteményben és lecseréli őket az újonnan feltöltött fájlokkal.",
 "This response was generated by \"{{model}}\"": "Ezt a választ a \"{{model}}\" generálta",
 "This will delete": "Ez törölni fogja",
@@ -1132,7 +1148,7 @@
 "Why?": "",
 "Widescreen Mode": "Szélesvásznú mód",
 "Won": "Nyert",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
 "Workspace": "Munkaterület",
 "Workspace Permissions": "",
 "Write": "",
@@ -1142,6 +1158,7 @@
 "Write your model template content here": "",
 "Yesterday": "Tegnap",
 "You": "Ön",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
 "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Egyszerre maximum {{maxCount}} fájllal tud csevegni.",
 "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Az LLM-ekkel való interakcióit személyre szabhatja emlékek hozzáadásával a lenti 'Kezelés' gomb segítségével, így azok még hasznosabbak és személyre szabottabbak lesznek.",
 "You cannot upload an empty file.": "Nem tölthet fel üres fájlt.",
@@ -5,6 +5,7 @@
 "(e.g. `sh webui.sh --api`)": "(contoh: `sh webui.sh --api`)",
 "(latest)": "(terbaru)",
 "{{ models }}": "{{ models }}",
+"{{COUNT}} hidden lines": "",
 "{{COUNT}} Replies": "",
 "{{user}}'s Chats": "Obrolan {{user}}",
 "{{webUIName}} Backend Required": "{{webUIName}} Diperlukan Backend",
@@ -51,6 +52,7 @@
 "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Admin memiliki akses ke semua alat setiap saat; pengguna memerlukan alat yang ditetapkan per model di ruang kerja.",
 "Advanced Parameters": "Parameter Lanjutan",
 "Advanced Params": "Parameter Lanjutan",
+"All": "",
 "All Documents": "Semua Dokumen",
 "All models deleted successfully": "",
 "Allow Chat Controls": "",
@@ -64,7 +66,7 @@
 "Allow Voice Interruption in Call": "Izinkan Gangguan Suara dalam Panggilan",
 "Allowed Endpoints": "",
 "Already have an account?": "Sudah memiliki akun?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
 "Always": "",
 "Amazing": "",
 "an assistant": "asisten",
@@ -93,6 +95,7 @@
 "Are you sure?": "Apakah Anda yakin?",
 "Arena Models": "",
 "Artifacts": "",
+"Ask": "",
 "Ask a question": "",
 "Assistant": "",
 "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
 "Bing Search V7 Endpoint": "",
 "Bing Search V7 Subscription Key": "",
 "Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
 "Brave Search API Key": "Kunci API Pencarian Berani",
 "By {{name}}": "",
 "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
 "Code Interpreter": "",
 "Code Interpreter Engine": "",
 "Code Interpreter Prompt Template": "",
+"Collapse": "",
 "Collection": "Koleksi",
 "Color": "",
 "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
 "Confirm your new password": "",
 "Connect to your own OpenAI compatible API endpoints.": "",
 "Connections": "Koneksi",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
 "Contact Admin for WebUI Access": "Hubungi Admin untuk Akses WebUI",
 "Content": "Konten",
 "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "",
 "Continue with LDAP": "",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "",
 "Copied shared chat URL to clipboard!": "Menyalin URL obrolan bersama ke papan klip!",
 "Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "Dibuat di",
"Created by": "Dibuat oleh",
"CSV Import": "Impor CSV",
+"Ctrl+Enter to Send": "",
"Current Model": "Model Saat Ini",
"Current Password": "Kata Sandi Saat Ini",
"Custom": "Kustom",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Aktifkan Pendaftaran Baru",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Pastikan file CSV Anda menyertakan 4 kolom dengan urutan sebagai berikut: Nama, Email, Kata Sandi, Peran.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Masukkan Tumpang Tindih Chunk",
"Enter Chunk Size": "Masukkan Ukuran Potongan",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
"Enter language codes": "Masukkan kode bahasa",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Masukkan tag model (misalnya {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Masukkan Jumlah Langkah (mis. 50)",
+"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
+"Enter to Send": "",
"Enter Top K": "Masukkan Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Masukkan URL (mis. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Masukkan URL (mis. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
+"Expand": "",
"Experimental": "Percobaan",
+"Explain": "",
+"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Ekspor",
"Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "Sertakan bendera `--api-auth` saat menjalankan stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Sertakan bendera `--api` saat menjalankan stable-diffusion-webui",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Perintah masukan",
"Install from Github URL": "Instal dari URL Github",
@@ -624,6 +638,7 @@
"Local": "",
"Local Models": "Model Lokal",
"Location access not allowed": "",
+"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "Dibuat oleh Komunitas OpenWebUI",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Izin ditolak saat mengakses mikrofon",
"Permission denied when accessing microphone: {{error}}": "Izin ditolak saat mengakses mikrofon: {{error}}",
"Permissions": "",
+"Perplexity API Key": "",
"Personalization": "Personalisasi",
"Pin": "",
"Pinned": "",
@@ -809,7 +825,7 @@
"Reasoning Effort": "",
"Record voice": "Rekam suara",
"Redirecting you to Open WebUI Community": "Mengarahkan Anda ke Komunitas OpenWebUI",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Merujuk diri Anda sebagai \"Pengguna\" (misalnya, \"Pengguna sedang belajar bahasa Spanyol\")",
"References from": "",
"Refused when it shouldn't have": "Menolak ketika seharusnya tidak",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Mengatur Suara",
"Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
+"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Pengaturan",
"Settings saved successfully!": "Pengaturan berhasil disimpan!",
@@ -964,7 +980,7 @@
"System Prompt": "Permintaan Sistem",
"Tags Generation": "",
"Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Ketuk untuk menyela",
"Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Terima kasih atas umpan balik Anda!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Nilai yang diberikan haruslah nilai antara 0,0 (0%) dan 1,0 (100%).",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "Berpikir",
"This action cannot be undone. Do you wish to continue?": "Tindakan ini tidak dapat dibatalkan. Apakah Anda ingin melanjutkan?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ini akan memastikan bahwa percakapan Anda yang berharga disimpan dengan aman ke basis data backend. Terima kasih!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Ini adalah fitur eksperimental, mungkin tidak berfungsi seperti yang diharapkan dan dapat berubah sewaktu-waktu.",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "Ini akan menghapus",
@@ -1132,7 +1148,7 @@
"Why?": "",
"Widescreen Mode": "Mode Layar Lebar",
"Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Ruang Kerja",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "",
"Yesterday": "Kemarin",
"You": "Anda",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Anda dapat mempersonalisasi interaksi Anda dengan LLM dengan menambahkan kenangan melalui tombol 'Kelola' di bawah ini, sehingga lebih bermanfaat dan disesuaikan untuk Anda.",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(m.sh. `sh webui.sh --api`)",
"(latest)": "(is déanaí)",
"{{ models }}": "{{ models }}",
+"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} Freagra",
"{{user}}'s Chats": "Comhráite {{user}}",
"{{webUIName}} Backend Required": "{{webUIName}} Ceoldeireadh Riachtanach",
@@ -13,7 +14,7 @@
"A task model is used when performing tasks such as generating titles for chats and web search queries": "Úsáidtear múnla tasc agus tascanna á ndéanamh agat mar theidil a ghiniúint do chomhráite agus ceisteanna cuardaigh gréasáin",
"a user": "úsáideoir",
"About": "Maidir",
-"Accept autocomplete generation / Jump to prompt variable": "",
+"Accept autocomplete generation / Jump to prompt variable": "Glac giniúint uathchríochnaithe / Léim chun athróg a spreagadh",
"Access": "Rochtain",
"Access Control": "Rialaithe Rochtana",
"Accessible to all users": "Inrochtana do gach úsáideoir",
@@ -21,7 +22,7 @@
"Account Activation Pending": "Gníomhachtaithe Cuntas",
"Accurate information": "Faisnéis chruinn",
"Actions": "Gníomhartha",
-"Activate": "",
+"Activate": "Gníomhachtaigh",
"Activate this command by typing \"/{{COMMAND}}\" to chat input.": "Gníomhachtaigh an t-ordú seo trí \"/{{COMMAND}}\" a chlóscríobh chun ionchur comhrá a dhéanamh.",
"Active Users": "Úsáideoirí Gníomhacha",
"Add": "Cuir",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Tá rochtain ag riarthóirí ar gach uirlis i gcónaí; teastaíonn ó úsáideoirí uirlisí a shanntar in aghaidh an mhúnla sa spás oibre.",
"Advanced Parameters": "Paraiméadair Casta",
"Advanced Params": "Paraiméid Casta",
+"All": "",
"All Documents": "Gach Doiciméad",
"All models deleted successfully": "Scriosadh na múnlaí go léir go rathúil",
"Allow Chat Controls": "Ceadaigh Rialuithe Comhrá",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Ceadaigh Briseadh Guth i nGlao",
"Allowed Endpoints": "Críochphointí Ceadaithe",
"Already have an account?": "Tá cuntas agat cheana féin?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Rogha eile seachas an top_p, agus tá sé mar aidhm aige cothromaíocht cáilíochta agus éagsúlachta a chinntiú. Léiríonn an paraiméadar p an dóchúlacht íosta go mbreithneofar comhartha, i gcoibhneas le dóchúlacht an chomhartha is dóichí. Mar shampla, le p=0.05 agus dóchúlacht 0.9 ag an comhartha is dóichí, déantar logits le luach níos lú ná 0.045 a scagadh amach. (Réamhshocrú: 0.0)",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "I gcónaí",
"Amazing": "Iontach",
"an assistant": "cúntóir",
@@ -86,23 +88,24 @@
"Archive All Chats": "Cartlann Gach Comhrá",
"Archived Chats": "Comhráite Cartlann",
"archived-chat-export": "gcartlann-comhrá-onnmhairiú",
-"Are you sure you want to clear all memories? This action cannot be undone.": "",
+"Are you sure you want to clear all memories? This action cannot be undone.": "An bhfuil tú cinnte gur mhaith leat na cuimhní go léir a ghlanadh? Ní féidir an gníomh seo a chealú.",
"Are you sure you want to delete this channel?": "An bhfuil tú cinnte gur mhaith leat an cainéal seo a scriosadh?",
"Are you sure you want to delete this message?": "An bhfuil tú cinnte gur mhaith leat an teachtaireacht seo a scriosadh?",
"Are you sure you want to unarchive all archived chats?": "An bhfuil tú cinnte gur mhaith leat gach comhrá cartlainne a dhíchartlannú?",
"Are you sure?": "An bhfuil tú cinnte?",
"Arena Models": "Múnlaí Airéine",
"Artifacts": "Déantáin",
+"Ask": "",
"Ask a question": "Cuir ceist",
"Assistant": "Cúntóir",
-"Attach file from knowledge": "",
+"Attach file from knowledge": "Ceangail comhad ó eolas",
"Attention to detail": "Aird ar mhionsonraí",
"Attribute for Mail": "Tréith don Phost",
"Attribute for Username": "Tréith don Ainm Úsáideora",
"Audio": "Fuaim",
"August": "Lúnasa",
"Authenticate": "Fíordheimhnigh",
-"Authentication": "",
+"Authentication": "Fíordheimhniú",
"Auto-Copy Response to Clipboard": "Freagra AutoCopy go Gearrthaisce",
"Auto-playback response": "Freagra uathsheinm",
"Autocomplete Generation": "Giniúint Uathchríochnaithe",
@@ -126,12 +129,13 @@
"Beta": "Béite",
"Bing Search V7 Endpoint": "Cuardach Bing V7 Críochphointe",
"Bing Search V7 Subscription Key": "Eochair Síntiúis Bing Cuardach V7",
-"Bocha Search API Key": "",
+"Bocha Search API Key": "Eochair API Cuardach Bocha",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Eochair API Cuardaigh Brave",
"By {{name}}": "Le {{name}}",
-"Bypass Embedding and Retrieval": "",
+"Bypass Embedding and Retrieval": "Seachbhóthar Leabú agus Aisghabháil",
"Bypass SSL verification for Websites": "Seachbhachtar fíorú SSL do Láithreáin",
-"Calendar": "",
+"Calendar": "Féilire",
"Call": "Glaoigh",
"Call feature is not supported when using Web STT engine": "Ní thacaítear le gné glaonna agus inneall Web STT á úsáid",
"Camera": "Ceamara",
@@ -163,14 +167,14 @@
"Ciphers": "Cipéirí",
"Citation": "Lua",
"Clear memory": "Cuimhne ghlan",
-"Clear Memory": "",
+"Clear Memory": "Glan Cuimhne",
"click here": "cliceáil anseo",
"Click here for filter guides.": "Cliceáil anseo le haghaidh treoracha scagaire.",
"Click here for help.": "Cliceáil anseo le haghaidh cabhair.",
"Click here to": "Cliceáil anseo chun",
"Click here to download user import template file.": "Cliceáil anseo chun an comhad iompórtála úsáideora a íoslódáil.",
"Click here to learn more about faster-whisper and see the available models.": "Cliceáil anseo chun níos mó a fhoghlaim faoi cogar níos tapúla agus na múnlaí atá ar fáil a fheiceáil.",
-"Click here to see available models.": "",
+"Click here to see available models.": "Cliceáil anseo chun na samhlacha atá ar fáil a fheiceáil.",
"Click here to select": "Cliceáil anseo chun roghnú",
"Click here to select a csv file.": "Cliceáil anseo chun comhad csv a roghnú.",
"Click here to select a py file.": "Cliceáil anseo chun comhad py a roghnú.",
@@ -183,13 +187,14 @@
"Clone of {{TITLE}}": "Clón de {{TITLE}}",
"Close": "Dún",
"Code execution": "Cód a fhorghníomhú",
-"Code Execution": "",
-"Code Execution Engine": "",
-"Code Execution Timeout": "",
+"Code Execution": "Forghníomhú Cóid",
+"Code Execution Engine": "Inneall Forghníomhaithe Cóid",
+"Code Execution Timeout": "Teorainn Ama Forghníomhaithe Cóid",
"Code formatted successfully": "Cód formáidithe go rathúil",
"Code Interpreter": "Ateangaire Cód",
-"Code Interpreter Engine": "",
-"Code Interpreter Prompt Template": "",
+"Code Interpreter Engine": "Inneall Ateangaire Cóid",
+"Code Interpreter Prompt Template": "Teimpléad Pras Ateangaire Cód",
+"Collapse": "",
"Collection": "Bailiúchán",
"Color": "Dath",
"ComfyUI": "ComfyUI",
@ -206,21 +211,21 @@
"Confirm Password": "Deimhnigh Pasfhocal",
"Confirm your action": "Deimhnigh do ghníomh",
"Confirm your new password": "Deimhnigh do phasfhocal nua",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connect to your own OpenAI compatible API endpoints.": "Ceangail le do chríochphointí API atá comhoiriúnach le OpenAI.",
"Connections": "Naisc",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Srianann iarracht ar réasúnaíocht a dhéanamh ar shamhlacha réasúnaíochta. Ní bhaineann ach le samhlacha réasúnaíochta ó sholáthraithe sonracha a thacaíonn le hiarracht réasúnaíochta. (Réamhshocrú: meánach)",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Déan teagmháil le Riarachán le haghaidh Rochtana WebUI",
"Content": "Ábhar",
"Content Extraction Engine": "",
"Content Extraction Engine": "Inneall Eastóscadh Ábhar",
"Context Length": "Fad Comhthéacs",
"Continue Response": "Leanúint ar aghaidh",
"Continue with {{provider}}": "Lean ar aghaidh le {{provider}}",
"Continue with Email": "Lean ar aghaidh le Ríomhphost",
"Continue with LDAP": "Lean ar aghaidh le LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Rialú conas a roinntear téacs teachtaireachta d'iarratais TTS. Roinneann 'poncaíocht' ina abairtí, scoilteann 'míreanna' i míreanna, agus coinníonn 'aon' an teachtaireacht mar shreang amháin.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "Rialú a dhéanamh ar athrá seichimh chomharthaí sa téacs ginte. Cuirfidh luach níos airde (m.sh., 1.5) pionós níos láidre ar athrá, agus beidh luach níos ísle (m.sh., 1.1) níos boige. Ag 1, tá sé díchumasaithe.",
"Controls": "Rialuithe",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Rialaíonn sé an chothromaíocht idir comhleanúnachas agus éagsúlacht an aschuir. Beidh téacs níos dírithe agus níos soiléire mar thoradh ar luach níos ísle. (Réamhshocrú: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Cóipeáladh",
"Copied shared chat URL to clipboard!": "Cóipeáladh URL an chomhrá roinnte chuig an ngearrthaisce!",
"Copied to clipboard": "Cóipeáilte chuig an ngearrthaisce",
@@ -230,7 +235,7 @@
"Copy Link": "Cóipeáil Nasc",
"Copy to clipboard": "Cóipeáil chuig an ngearrthaisce",
"Copying to clipboard was successful!": "D'éirigh le cóipeáil chuig an ngearrthaisce!",
"CORS must be properly configured by the provider to allow requests from Open WebUI.": "",
"CORS must be properly configured by the provider to allow requests from Open WebUI.": "Ní mór don soláthraí CORS a chumrú i gceart chun iarratais ó Open WebUI a cheadú.",
"Create": "Cruthaigh",
"Create a knowledge base": "Cruthaigh bonn eolais",
"Create a model": "Cruthaigh múnla",
@@ -245,10 +250,11 @@
"Created At": "Cruthaithe Ag",
"Created by": "Cruthaithe ag",
"CSV Import": "Iompórtáil CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Múnla Reatha",
"Current Password": "Pasfhocal Reatha",
"Custom": "Saincheaptha",
"Danger Zone": "",
"Danger Zone": "Crios Contúirte",
"Dark": "Dorcha",
"Database": "Bunachar Sonraí",
"December": "Nollaig",
@@ -275,7 +281,7 @@
"Delete folder?": "Scrios fillteán?",
"Delete function?": "Scrios feidhm?",
"Delete Message": "Scrios Teachtaireacht",
"Delete message?": "",
"Delete message?": "Scrios teachtaireacht?",
"Delete prompt?": "Scrios leid?",
"delete this link": "scrios an nasc seo",
"Delete tool?": "Uirlis a scriosadh?",
@@ -286,15 +292,15 @@
"Describe your knowledge base and objectives": "Déan cur síos ar do bhunachar eolais agus do chuspóirí",
"Description": "Cur síos",
"Didn't fully follow instructions": "Níor lean sé treoracha go hiomlán",
"Direct Connections": "",
"Direct Connections": "Naisc Dhíreacha",
"Direct Connections allow users to connect to their own OpenAI compatible API endpoints.": "",
"Direct Connections allow users to connect to their own OpenAI compatible API endpoints.": "Ligeann Naisc Dhíreacha d'úsáideoirí ceangal lena gcríochphointí API féin atá comhoiriúnach le OpenAI.",
"Direct Connections settings updated": "",
"Direct Connections settings updated": "Nuashonraíodh socruithe Naisc Dhíreacha",
"Disabled": "Díchumasaithe",
"Discover a function": "Faigh amach feidhm",
"Discover a model": "Faigh amach múnla",
"Discover a prompt": "Faigh amach leid",
"Discover a tool": "Faigh amach uirlis",
"Discover how to use Open WebUI and seek support from the community.": "",
"Discover how to use Open WebUI and seek support from the community.": "Faigh amach conas Open WebUI a úsáid agus lorg tacaíocht ón bpobal.",
"Discover wonders": "Faigh amach iontais",
"Discover, download, and explore custom functions": "Faigh amach, íoslódáil agus iniúchadh feidhmeanna saincheaptha",
"Discover, download, and explore custom prompts": "Leideanna saincheaptha a fháil amach, a íoslódáil agus a iniúchadh",
@@ -309,26 +315,26 @@
"Do not install functions from sources you do not fully trust.": "Ná suiteáil feidhmeanna ó fhoinsí nach bhfuil muinín iomlán agat.",
"Do not install tools from sources you do not fully trust.": "Ná suiteáil uirlisí ó fhoinsí nach bhfuil muinín iomlán agat.",
"Document": "Doiciméad",
"Document Intelligence": "",
"Document Intelligence": "Faisnéise Doiciméad",
"Document Intelligence endpoint and key required.": "",
"Document Intelligence endpoint and key required.": "Críochphointe Faisnéise Doiciméad agus eochair ag teastáil.",
"Documentation": "Doiciméadú",
"Documents": "Doiciméid",
"does not make any external connections, and your data stays securely on your locally hosted server.": "ní dhéanann sé aon naisc sheachtracha, agus fanann do chuid sonraí go slán ar do fhreastalaí a óstáiltear go háitiúil.",
"Domain Filter List": "",
"Domain Filter List": "Liosta Scagairí Fearainn",
"Don't have an account?": "Níl cuntas agat?",
"don't install random functions from sources you don't trust.": "ná suiteáil feidhmeanna randamacha ó fhoinsí nach bhfuil muinín agat.",
"don't install random tools from sources you don't trust.": "ná suiteáil uirlisí randamacha ó fhoinsí nach bhfuil muinín agat.",
"Don't like the style": "Ní thaitníonn an stíl liom",
"Done": "Déanta",
"Download": "Íoslódáil",
"Download as SVG": "",
"Download as SVG": "Íoslódáil mar SVG",
"Download canceled": "Íoslódáil cealaithe",
"Download Database": "Íoslódáil Bunachair",
"Drag and drop a file to upload or select a file to view": "Tarraing agus scaoil comhad le huaslódáil nó roghnaigh comhad le féachaint air",
"Draw": "Tarraing",
"Drop any files here to add to the conversation": "Scaoil aon chomhaid anseo le cur leis an gcomhrá",
"e.g. '30s','10m'. Valid time units are 's', 'm', 'h'.": "m.sh. '30s', '10m'. Is iad aonaid ama bailí ná 's', 'm', 'h'.",
"e.g. 60": "",
"e.g. 60": "m.sh. 60",
"e.g. A filter to remove profanity from text": "m.sh. Scagaire chun drochchaint a bhaint as téacs",
"e.g. My Filter": "m.sh. Mo Scagaire",
"e.g. My Tools": "m.sh. Mo Uirlisí",
@@ -346,19 +352,19 @@
"ElevenLabs": "Eleven Labs",
"Email": "Ríomhphost",
"Embark on adventures": "Dul ar eachtraí",
"Embedding": "",
"Embedding": "Leabú",
"Embedding Batch Size": "Méid Baisc a ionchorprú",
"Embedding Batch Size": "Méid Baisc Leabaithe",
"Embedding Model": "Múnla Leabaithe",
"Embedding Model Engine": "Inneall Múnla Ionchorprú",
"Embedding Model Engine": "Inneall Múnla Leabaithe",
"Embedding model set to \"{{embedding_model}}\"": "Samhail leabaithe atá socraithe go \"{{embedding_model}}\"",
"Embedding model set to \"{{embedding_model}}\"": "Múnla leabaithe socraithe go \"{{embedding_model}}\"",
"Enable API Key": "Cumasaigh Eochair API",
"Enable autocomplete generation for chat messages": "Cumasaigh giniúint uathchríochnaithe le haghaidh teachtaireachtaí comhrá",
"Enable Code Interpreter": "",
"Enable Code Interpreter": "Cumasaigh Ateangaire Cóid",
"Enable Community Sharing": "Cumasaigh Comhroinnt Pobail",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Cumasaigh Glasáil Cuimhne (mlock) chun sonraí samhaltaithe a chosc ó RAM. Glasálann an rogha seo sraith oibre leathanaigh an mhúnla isteach i RAM, ag cinntiú nach ndéanfar iad a mhalartú go diosca. Is féidir leis seo cabhrú le feidhmíocht a choinneáil trí lochtanna leathanaigh a sheachaint agus rochtain tapa ar shonraí a chinntiú.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Cumasaigh Mapáil Cuimhne (mmap) chun sonraí samhla a lódáil. Ligeann an rogha seo don chóras stóráil diosca a úsáid mar leathnú ar RAM trí chomhaid diosca a chóireáil amhail is dá mba i RAM iad. Is féidir leis seo feidhmíocht na samhla a fheabhsú trí rochtain níos tapúla ar shonraí a cheadú. Mar sin féin, d'fhéadfadh sé nach n-oibreoidh sé i gceart le gach córas agus féadfaidh sé méid suntasach spáis diosca a ithe.",
"Enable Message Rating": "Cumasaigh Rátáil Teachtaireachta",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Cumasaigh sampláil Mirostat chun seachrán a rialú. (Réamhshocrú: 0, 0 = Díchumasaithe, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Cumasaigh Clárúcháin Nua",
"Enabled": "Cumasaithe",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Déan cinnte go bhfuil 4 cholún san ord seo i do chomhad CSV: Ainm, Ríomhphost, Pasfhocal, Ról.",
@@ -369,31 +375,34 @@
"Enter Application DN Password": "Iontráil Feidhmchlár DN Pasfhocal",
"Enter Bing Search V7 Endpoint": "Cuir isteach Cuardach Bing V7 Críochphointe",
"Enter Bing Search V7 Subscription Key": "Cuir isteach Eochair Síntiúis Bing Cuardach V7",
"Enter Bocha Search API Key": "",
"Enter Bocha Search API Key": "Cuir isteach Eochair API Bocha Cuardach",
"Enter Brave Search API Key": "Cuir isteach Eochair API Brave Cuardach",
"Enter certificate path": "Cuir isteach cosán an teastais",
"Enter CFG Scale (e.g. 7.0)": "Cuir isteach Scála CFG (m.sh. 7.0)",
"Enter Chunk Overlap": "Cuir isteach Chunk Forluí",
"Enter Chunk Size": "Cuir isteach Méid an Chunc",
"Enter Chunk Size": "Cuir isteach Méid an Smután",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Iontráil cur síos",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Endpoint": "Iontráil Críochphointe Faisnéise Doiciméid",
"Enter Document Intelligence Key": "",
"Enter Document Intelligence Key": "Iontráil Eochair Faisnéise Doiciméid",
"Enter domains separated by commas (e.g., example.com,site.org)": "",
"Enter domains separated by commas (e.g., example.com,site.org)": "Cuir isteach fearainn atá scartha le camóga (m.sh., example.com,site.org)",
"Enter Exa API Key": "Cuir isteach Eochair Exa API",
"Enter Github Raw URL": "Cuir isteach URL Github Raw",
"Enter Google PSE API Key": "Cuir isteach Eochair API Google PSE",
"Enter Google PSE Engine Id": "Cuir isteach ID Inneall Google PSE",
"Enter Image Size (e.g. 512x512)": "Iontráil Méid Íomhá (m.sh. 512x512)",
"Enter Jina API Key": "Cuir isteach Eochair API Jina",
"Enter Jupyter Password": "",
"Enter Jupyter Password": "Cuir isteach Pasfhocal Jupyter",
"Enter Jupyter Token": "",
"Enter Jupyter Token": "Cuir isteach Comhartha Jupyter",
"Enter Jupyter URL": "",
"Enter Jupyter URL": "Cuir isteach URL Jupyter",
"Enter Kagi Search API Key": "Cuir isteach Eochair Kagi Search API",
"Enter Kagi Search API Key": "Cuir isteach Eochair API Cuardach Kagi",
"Enter Key Behavior": "",
"Enter language codes": "Cuir isteach cóid teanga",
"Enter Model ID": "Iontráil ID Mhúnla",
"Enter model tag (e.g. {{modelTag}})": "Cuir isteach chlib samhail (m.sh. {{modelTag}})",
"Enter Mojeek Search API Key": "Cuir isteach Eochair API Cuardach Mojeek",
"Enter Number of Steps (e.g. 50)": "Iontráil Líon na gCéimeanna (m.sh. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Cuir isteach URL seachfhreastalaí (m.sh. https://user:password@host:port)",
"Enter reasoning effort": "Cuir isteach iarracht réasúnaíochta",
"Enter Sampler (e.g. Euler a)": "Cuir isteach Sampler (m.sh. Euler a)",
@@ -403,8 +412,8 @@
"Enter SearchApi Engine": "Cuir isteach Inneall SearchAPI",
"Enter Searxng Query URL": "Cuir isteach URL Ceist Searxng",
"Enter Seed": "Cuir isteach Síl",
"Enter SerpApi API Key": "",
"Enter SerpApi API Key": "Cuir isteach Eochair API SerpApi",
"Enter SerpApi Engine": "",
"Enter SerpApi Engine": "Cuir isteach Inneall SerpApi",
"Enter Serper API Key": "Cuir isteach Eochair API Serper",
"Enter Serply API Key": "Cuir isteach Eochair API Serply",
"Enter Serpstack API Key": "Cuir isteach Eochair API Serpstack",
@@ -416,7 +425,8 @@
"Enter Tavily API Key": "Cuir isteach eochair API Tavily",
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Cuir isteach URL poiblí do WebUI. Bainfear úsáid as an URL seo chun naisc a ghiniúint sna fógraí.",
"Enter Tika Server URL": "Cuir isteach URL freastalaí Tika",
"Enter timeout in seconds": "",
"Enter timeout in seconds": "Cuir isteach an t-am istigh i soicindí",
"Enter to Send": "",
"Enter Top K": "Cuir isteach Barr K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Iontráil URL (m.sh. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Iontráil URL (m.sh. http://localhost:11434)",
@@ -440,9 +450,13 @@
"Example: mail": "Sampla: ríomhphost",
"Example: ou=users,dc=foo,dc=example": "Sampla: ou=úsáideoirí,dc=foo,dc=sampla",
"Example: sAMAccountName or uid or userPrincipalName": "Sampla: sAMAccountName nó uid nó userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Eisigh",
"Execute code for analysis": "Rith cód le haghaidh anailíse",
"Expand": "",
"Experimental": "Turgnamhach",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Déan iniúchadh ar an cosmos",
"Export": "Easpórtáil",
"Export All Archived Chats": "Easpórtáil Gach Comhrá Cartlainne",
@@ -464,7 +478,7 @@
"Failed to save models configuration": "Theip ar chumraíocht na múnlaí a shábháil",
"Failed to update settings": "Theip ar shocruithe a nuashonrú",
"Failed to upload file.": "Theip ar uaslódáil an chomhaid.",
"Features": "",
"Features": "Gnéithe",
"Features Permissions": "Ceadanna Gnéithe",
"February": "Feabhra",
"Feedback History": "Stair Aiseolais",
@@ -494,7 +508,7 @@
"Form": "Foirm",
"Format your variables using brackets like this:": "Formáidigh na hathróga ag baint úsáide as lúibíní mar seo:",
"Frequency Penalty": "Pionós Minicíochta",
"Full Context Mode": "",
"Full Context Mode": "Mód Comhthéacs Iomlán",
"Function": "Feidhm",
"Function Calling": "Glaonna Feidhme",
"Function created successfully": "Cruthaíodh feidhm go rathúil",
@@ -509,13 +523,13 @@
"Functions allow arbitrary code execution": "Ligeann feidhmeanna forghníomhú cód treallach",
"Functions allow arbitrary code execution.": "Ceadaíonn feidhmeanna forghníomhú cód treallach.",
"Functions imported successfully": "D'allmhairíodh feidhmeanna go rathúil",
"Gemini": "",
"Gemini": "Gemini",
"Gemini API Config": "",
"Gemini API Config": "Cumraíocht Gemini API",
"Gemini API Key is required.": "",
"Gemini API Key is required.": "Tá Eochair Gemini API ag teastáil.",
"General": "Ginearálta",
"Generate an image": "Gin íomhá",
"Generate Image": "Ginigh Íomhá",
"Generate prompt pair": "",
"Generate prompt pair": "Gin péire leide",
"Generating search query": "Giniúint ceist cuardaigh",
"Get started": "Cuir tús leis",
"Get started with {{WEBUI_NAME}}": "Cuir tús le {{WEBUI_NAME}}",
@@ -538,7 +552,7 @@
"Hex Color": "Dath Heics",
"Hex Color - Leave empty for default color": "Dath Heics - Fág folamh don dath réamhshocraithe",
"Hide": "Folaigh",
"Home": "",
"Home": "Baile",
"Host": "Óstach",
"How can I help you today?": "Conas is féidir liom cabhrú leat inniu?",
"How would you rate this response?": "Cad é mar a mheasfá an freagra seo?",
@@ -566,12 +580,12 @@
"Include": "Cuir san áireamh",
"Include `--api-auth` flag when running stable-diffusion-webui": "Cuir bratach `--api-auth` san áireamh agus stable-diffusion-webui á rith",
"Include `--api` flag when running stable-diffusion-webui": "Cuir bratach `--api` san áireamh agus stable-diffusion-webui á rith",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Bíonn tionchar aige ar chomh tapa agus a fhreagraíonn an t-algartam d'aiseolas ón téacs ginte. Beidh coigeartuithe níos moille mar thoradh ar ráta foghlama níos ísle, agus déanfaidh ráta foghlama níos airde an t-algartam níos freagraí. (Réamhshocrú: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Eolas",
"Input commands": "Orduithe ionchuir",
"Install from Github URL": "Suiteáil ó Github URL",
"Instant Auto-Send After Voice Transcription": "Seoladh Uathoibríoch Láithreach Tar éis Tras-scríobh Gutha",
"Integration": "",
"Integration": "Comhtháthú",
"Interface": "Comhéadan",
"Invalid file format.": "Formáid comhaid neamhbhailí.",
"Invalid Tag": "Clib neamhbhailí",
@@ -583,8 +597,8 @@
"JSON Preview": "Réamhamharc JSON",
"July": "Lúil",
"June": "Meitheamh",
"Jupyter Auth": "",
"Jupyter Auth": "Fíordheimhniú Jupyter",
"Jupyter URL": "",
"Jupyter URL": "URL Jupyter",
"JWT Expiration": "Éag JWT",
"JWT Token": "Comhartha JWT",
"Kagi Search API Key": "Eochair API Chuardaigh Kagi",
@@ -597,8 +611,8 @@
"Knowledge deleted successfully.": "D'éirigh leis an eolas a scriosadh.",
"Knowledge reset successfully.": "D'éirigh le hathshocrú eolais.",
"Knowledge updated successfully": "D'éirigh leis an eolas a nuashonrú",
"Kokoro.js (Browser)": "",
"Kokoro.js (Browser)": "Kokoro.js (Brabhsálaí)",
"Kokoro.js Dtype": "",
"Kokoro.js Dtype": "Kokoro.js Dtype",
"Label": "Lipéad",
"Landing Page Mode": "Mód Leathanach Tuirlingthe",
"Language": "Teanga",
@@ -613,24 +627,25 @@
"Leave empty to include all models from \"{{URL}}/models\" endpoint": "Fág folamh chun gach múnla ón gcríochphointe \"{{URL}}/models\" a chur san áireamh",
"Leave empty to include all models or select specific models": "Fág folamh chun gach múnla a chur san áireamh nó roghnaigh múnlaí sonracha",
"Leave empty to use the default prompt, or enter a custom prompt": "Fág folamh chun an leid réamhshocraithe a úsáid, nó cuir isteach leid saincheaptha",
"Leave model field empty to use the default model.": "",
"Leave model field empty to use the default model.": "Fág réimse an mhúnla folamh chun an múnla réamhshocraithe a úsáid.",
"License": "",
"License": "Ceadúnas",
"Light": "Solas",
"Listening...": "Éisteacht...",
"Llama.cpp": "Llama.cpp",
"LLMs can make mistakes. Verify important information.": "Is féidir le LLManna botúin a dhéanamh. Fíoraigh faisnéis thábhachtach.",
"Loader": "",
"Loader": "Lódóir",
"Loading Kokoro.js...": "",
"Loading Kokoro.js...": "Kokoro.js á lódáil...",
"Local": "Áitiúil",
"Local Models": "Múnlaí Áitiúla",
"Location access not allowed": "",
"Location access not allowed": "Ní cheadaítear rochtain suímh",
"Logit Bias": "",
"Lost": "Cailleadh",
"LTR": "LTR",
"Made by Open WebUI Community": "Déanta ag Open WebUI Community",
"Make sure to enclose them with": "Déan cinnte iad a cheangal le",
"Make sure to export a workflow.json file as API format from ComfyUI.": "Déan cinnte comhad workflow.json a onnmhairiú mar fhormáid API ó ComfyUI.",
"Manage": "Bainistiú",
"Manage Direct Connections": "",
"Manage Direct Connections": "Bainistigh Naisc Dhíreacha",
"Manage Models": "Samhlacha a bhainistiú",
"Manage Ollama": "Bainistigh Ollama",
"Manage Ollama API Connections": "Bainistigh Naisc API Ollama",
@@ -697,7 +712,7 @@
"No HTML, CSS, or JavaScript content found.": "Níor aimsíodh aon ábhar HTML, CSS nó JavaScript.", "No HTML, CSS, or JavaScript content found.": "Níor aimsíodh aon ábhar HTML, CSS nó JavaScript.",
"No inference engine with management support found": "Níor aimsíodh aon inneall tátail le tacaíocht bhainistíochta", "No inference engine with management support found": "Níor aimsíodh aon inneall tátail le tacaíocht bhainistíochta",
"No knowledge found": "Níor aimsíodh aon eolas", "No knowledge found": "Níor aimsíodh aon eolas",
"No memories to clear": "", "No memories to clear": "Gan cuimhní cinn a ghlanadh",
"No model IDs": "Gan IDanna múnla", "No model IDs": "Gan IDanna múnla",
"No models found": "Níor aimsíodh aon mhúnlaí", "No models found": "Níor aimsíodh aon mhúnlaí",
"No models selected": "Níor roghnaíodh aon mhúnlaí", "No models selected": "Níor roghnaíodh aon mhúnlaí",
@@ -727,7 +742,7 @@
"Ollama API settings updated": "Nuashonraíodh socruithe Olama API", "Ollama API settings updated": "Nuashonraíodh socruithe Olama API",
"Ollama Version": "Leagan Ollama", "Ollama Version": "Leagan Ollama",
"On": "Ar", "On": "Ar",
"OneDrive": "", "OneDrive": "OneDrive",
"Only alphanumeric characters and hyphens are allowed": "Ní cheadaítear ach carachtair alfa-uimhriúla agus fleiscíní", "Only alphanumeric characters and hyphens are allowed": "Ní cheadaítear ach carachtair alfa-uimhriúla agus fleiscíní",
"Only alphanumeric characters and hyphens are allowed in the command string.": "Ní cheadaítear ach carachtair alfauméireacha agus braithíní sa sreangán ordaithe.", "Only alphanumeric characters and hyphens are allowed in the command string.": "Ní cheadaítear ach carachtair alfauméireacha agus braithíní sa sreangán ordaithe.",
"Only collections can be edited, create a new knowledge base to edit/add documents.": "Ní féidir ach bailiúcháin a chur in eagar, bonn eolais nua a chruthú chun doiciméid a chur in eagar/a chur leis.", "Only collections can be edited, create a new knowledge base to edit/add documents.": "Ní féidir ach bailiúcháin a chur in eagar, bonn eolais nua a chruthú chun doiciméid a chur in eagar/a chur leis.",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Cead diúltaithe agus tú ag rochtain ar", "Permission denied when accessing microphone": "Cead diúltaithe agus tú ag rochtain ar",
"Permission denied when accessing microphone: {{error}}": "Cead diúltaithe agus tú ag teacht ar mhicreafón: {{error}}", "Permission denied when accessing microphone: {{error}}": "Cead diúltaithe agus tú ag teacht ar mhicreafón: {{error}}",
"Permissions": "Ceadanna", "Permissions": "Ceadanna",
"Perplexity API Key": "Eochair API Perplexity",
"Personalization": "Pearsantú", "Personalization": "Pearsantú",
"Pin": "Bioráin", "Pin": "Bioráin",
"Pinned": "Pinneáilte", "Pinned": "Pinneáilte",
@@ -776,7 +792,7 @@
"Plain text (.txt)": "Téacs simplí (.txt)", "Plain text (.txt)": "Téacs simplí (.txt)",
"Playground": "Clós súgartha", "Playground": "Clós súgartha",
"Please carefully review the following warnings:": "Déan athbhreithniú cúramach ar na rabhaidh seo a leanas le do thoil:", "Please carefully review the following warnings:": "Déan athbhreithniú cúramach ar na rabhaidh seo a leanas le do thoil:",
"Please do not close the settings page while loading the model.": "", "Please do not close the settings page while loading the model.": "Ná dún leathanach na socruithe agus an tsamhail á luchtú.",
"Please enter a prompt": "Cuir isteach leid", "Please enter a prompt": "Cuir isteach leid",
"Please fill in all fields.": "Líon isteach gach réimse le do thoil.", "Please fill in all fields.": "Líon isteach gach réimse le do thoil.",
"Please select a model first.": "Roghnaigh munla ar dtús le do thoil.", "Please select a model first.": "Roghnaigh munla ar dtús le do thoil.",
@@ -786,7 +802,7 @@
"Positive attitude": "Dearcadh dearfach", "Positive attitude": "Dearcadh dearfach",
"Prefix ID": "Aitheantas Réimír", "Prefix ID": "Aitheantas Réimír",
"Prefix ID is used to avoid conflicts with other connections by adding a prefix to the model IDs - leave empty to disable": "Úsáidtear Aitheantas Réimír chun coinbhleachtaí le naisc eile a sheachaint trí réimír a chur le haitheantas na samhla - fág folamh le díchumasú", "Prefix ID is used to avoid conflicts with other connections by adding a prefix to the model IDs - leave empty to disable": "Úsáidtear Aitheantas Réimír chun coinbhleachtaí le naisc eile a sheachaint trí réimír a chur le haitheantas na samhla - fág folamh le díchumasú",
"Presence Penalty": "", "Presence Penalty": "Pionós Láithreacht",
"Previous 30 days": "30 lá roimhe seo", "Previous 30 days": "30 lá roimhe seo",
"Previous 7 days": "7 lá roimhe seo", "Previous 7 days": "7 lá roimhe seo",
"Profile Image": "Íomhá Próifíl", "Profile Image": "Íomhá Próifíl",
@@ -809,7 +825,7 @@
"Reasoning Effort": "Iarracht Réasúnúcháin", "Reasoning Effort": "Iarracht Réasúnúcháin",
"Record voice": "Taifead guth", "Record voice": "Taifead guth",
"Redirecting you to Open WebUI Community": "Tú a atreorú chuig OpenWebUI Community", "Redirecting you to Open WebUI Community": "Tú a atreorú chuig OpenWebUI Community",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Laghdaíonn sé an dóchúlacht go giniúint nonsense. Tabharfaidh luach níos airde (m.sh. 100) freagraí níos éagsúla, agus beidh luach níos ísle (m.sh. 10) níos coimeádaí. (Réamhshocrú: 40)", "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "Laghdaíonn sé an dóchúlacht go giniúint nonsense. Tabharfaidh luach níos airde (m.sh. 100) freagraí níos éagsúla, agus beidh luach níos ísle (m.sh. 10) níos coimeádaí.",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Tagairt duit féin mar \"Úsáideoir\" (m.sh., \"Tá an úsáideoir ag foghlaim Spáinnis\")", "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Tagairt duit féin mar \"Úsáideoir\" (m.sh., \"Tá an úsáideoir ag foghlaim Spáinnis\")",
"References from": "Tagairtí ó", "References from": "Tagairtí ó",
"Refused when it shouldn't have": "Diúltaíodh nuair nár chóir dó", "Refused when it shouldn't have": "Diúltaíodh nuair nár chóir dó",
@@ -821,7 +837,7 @@
"Rename": "Athainmnigh", "Rename": "Athainmnigh",
"Reorder Models": "Múnlaí Athordú", "Reorder Models": "Múnlaí Athordú",
"Repeat Last N": "Déan an N deireanach arís", "Repeat Last N": "Déan an N deireanach arís",
"Repeat Penalty (Ollama)": "", "Repeat Penalty (Ollama)": "Pionós Athrá (Ollama)",
"Reply in Thread": "Freagra i Snáithe", "Reply in Thread": "Freagra i Snáithe",
"Request Mode": "Mód Iarratais", "Request Mode": "Mód Iarratais",
"Reranking Model": "Múnla Athrangú", "Reranking Model": "Múnla Athrangú",
@@ -835,13 +851,13 @@
"Response notifications cannot be activated as the website permissions have been denied. Please visit your browser settings to grant the necessary access.": "Ní féidir fógraí freagartha a ghníomhachtú toisc gur diúltaíodh ceadanna an tsuímh Ghréasáin. Tabhair cuairt ar do shocruithe brabhsálaí chun an rochtain riachtanach a dheonú.", "Response notifications cannot be activated as the website permissions have been denied. Please visit your browser settings to grant the necessary access.": "Ní féidir fógraí freagartha a ghníomhachtú toisc gur diúltaíodh ceadanna an tsuímh Ghréasáin. Tabhair cuairt ar do shocruithe brabhsálaí chun an rochtain riachtanach a dheonú.",
"Response splitting": "Scoilt freagartha", "Response splitting": "Scoilt freagartha",
"Result": "Toradh", "Result": "Toradh",
"Retrieval": "", "Retrieval": "Aisghabháil",
"Retrieval Query Generation": "Aisghabháil Giniúint Ceist", "Retrieval Query Generation": "Aisghabháil Giniúint Ceist",
"Rich Text Input for Chat": "Ionchur Saibhir Téacs don Chomhrá", "Rich Text Input for Chat": "Ionchur Saibhir Téacs don Chomhrá",
"RK": "RK", "RK": "RK",
"Role": "Ról", "Role": "Ról",
"Rosé Pine": "Pine Rosé", "Rosé Pine": "Péine Rosé",
"Rosé Pine Dawn": "Rose Pine Dawn", "Rosé Pine Dawn": "Rosé Péine Breacadh an lae",
"RTL": "RTL", "RTL": "RTL",
"Run": "Rith", "Run": "Rith",
"Running": "Ag rith", "Running": "Ag rith",
@@ -885,7 +901,7 @@
"Select a pipeline": "Roghnaigh píblíne", "Select a pipeline": "Roghnaigh píblíne",
"Select a pipeline url": "Roghnaigh url píblíne", "Select a pipeline url": "Roghnaigh url píblíne",
"Select a tool": "Roghnaigh uirlis", "Select a tool": "Roghnaigh uirlis",
"Select an auth method": "", "Select an auth method": "Roghnaigh modh fíordheimhnithe",
"Select an Ollama instance": "Roghnaigh sampla Olama", "Select an Ollama instance": "Roghnaigh sampla Olama",
"Select Engine": "Roghnaigh Inneall", "Select Engine": "Roghnaigh Inneall",
"Select Knowledge": "Roghnaigh Eolais", "Select Knowledge": "Roghnaigh Eolais",
@@ -897,8 +913,8 @@
"Send message": "Seol teachtaireacht", "Send message": "Seol teachtaireacht",
"Sends `stream_options: { include_usage: true }` in the request.\nSupported providers will return token usage information in the response when set.": "Seolann `stream_options: { include_usage: true }` san iarratas.\nTabharfaidh soláthraithe a fhaigheann tacaíocht faisnéis úsáide chomharthaí ar ais sa fhreagra nuair a bheidh sé socraithe.", "Sends `stream_options: { include_usage: true }` in the request.\nSupported providers will return token usage information in the response when set.": "Seolann `stream_options: { include_usage: true }` san iarratas.\nTabharfaidh soláthraithe a fhaigheann tacaíocht faisnéis úsáide chomharthaí ar ais sa fhreagra nuair a bheidh sé socraithe.",
"September": "Meán Fómhair", "September": "Meán Fómhair",
"SerpApi API Key": "", "SerpApi API Key": "Eochair API SerpApi",
"SerpApi Engine": "", "SerpApi Engine": "Inneall SerpApi",
"Serper API Key": "Serper API Eochair", "Serper API Key": "Serper API Eochair",
"Serply API Key": "Eochair API Serply", "Serply API Key": "Eochair API Serply",
"Serpstack API Key": "Eochair API Serpstack", "Serpstack API Key": "Eochair API Serpstack",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Socraigh líon na snáitheanna oibrithe a úsáidtear le haghaidh ríomh. Rialaíonn an rogha seo cé mhéad snáithe a úsáidtear chun iarratais a thagann isteach a phróiseáil i gcomhthráth. D'fhéadfadh méadú ar an luach seo feidhmíocht a fheabhsú faoi ualaí oibre comhairgeadra ard ach féadfaidh sé níos mó acmhainní LAP a úsáid freisin.", "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Socraigh líon na snáitheanna oibrithe a úsáidtear le haghaidh ríomh. Rialaíonn an rogha seo cé mhéad snáithe a úsáidtear chun iarratais a thagann isteach a phróiseáil i gcomhthráth. D'fhéadfadh méadú ar an luach seo feidhmíocht a fheabhsú faoi ualaí oibre comhairgeadra ard ach féadfaidh sé níos mó acmhainní LAP a úsáid freisin.",
"Set Voice": "Socraigh Guth", "Set Voice": "Socraigh Guth",
"Set whisper model": "Socraigh múnla cogar", "Set whisper model": "Socraigh múnla cogar",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "", "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "", "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Socraíonn sé cé chomh fada siar is atá an tsamhail le breathnú siar chun athrá a chosc. (Réamhshocrú: 64, 0 = díchumasaithe, -1 = num_ctx)", "Sets how far back for the model to look back to prevent repetition.": "Socraíonn sé cé chomh fada siar is atá an tsamhail le breathnú siar chun athrá a chosc.",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Socraíonn sé an síol uimhir randamach a úsáid le haghaidh giniúna. Má shocraítear é seo ar uimhir shainiúil, ginfidh an tsamhail an téacs céanna don leid céanna. (Réamhshocrú: randamach)", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "Socraíonn sé an síol uimhir randamach a úsáid le haghaidh giniúna. Má shocraítear é seo ar uimhir shainiúil, ginfidh an tsamhail an téacs céanna don leid céanna.",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Socraíonn sé méid na fuinneoige comhthéacs a úsáidtear chun an chéad chomhartha eile a ghiniúint. (Réamhshocrú: 2048)", "Sets the size of the context window used to generate the next token.": "Socraíonn sé méid na fuinneoige comhthéacs a úsáidtear chun an chéad chomhartha eile a ghiniúint.",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Socraíonn sé na stadanna le húsáid. Nuair a thagtar ar an bpatrún seo, stopfaidh an LLM ag giniúint téacs agus ag filleadh. Is féidir patrúin stad iolracha a shocrú trí pharaiméadair stadanna iolracha a shonrú i gcomhad samhail.", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Socraíonn sé na stadanna le húsáid. Nuair a thagtar ar an bpatrún seo, stopfaidh an LLM ag giniúint téacs agus ag filleadh. Is féidir patrúin stad iolracha a shocrú trí pharaiméadair stadanna iolracha a shonrú i gcomhad samhail.",
"Settings": "Socruithe", "Settings": "Socruithe",
"Settings saved successfully!": "Socruithe sábhálta go rathúil!", "Settings saved successfully!": "Socruithe sábhálta go rathúil!",
@@ -964,10 +980,10 @@
"System Prompt": "Córas Leid", "System Prompt": "Córas Leid",
"Tags Generation": "Giniúint Clibeanna", "Tags Generation": "Giniúint Clibeanna",
"Tags Generation Prompt": "Clibeanna Giniúint Leid", "Tags Generation Prompt": "Clibeanna Giniúint Leid",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Úsáidtear sampláil saor ó eireabaill chun tionchar na n-chomharthaí ón aschur nach bhfuil chomh dóchúil céanna a laghdú. Laghdóidh luach níos airde (m.sh., 2.0) an tionchar níos mó, agus díchumasaíonn luach 1.0 an socrú seo. (réamhshocraithe: 1)", "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "Úsáidtear sampláil saor ó eireabaill chun tionchar na gcomharthaí ón aschur nach bhfuil chomh dóchúil céanna a laghdú. Laghdóidh luach níos airde (m.sh., 2.0) an tionchar níos mó, agus díchumasaíonn luach 1.0 an socrú seo.",
"Talk to model": "", "Talk to model": "Labhair le múnla",
"Tap to interrupt": "Tapáil chun cur isteach", "Tap to interrupt": "Tapáil chun cur isteach",
"Tasks": "", "Tasks": "Tascanna",
"Tavily API Key": "Eochair API Tavily", "Tavily API Key": "Eochair API Tavily",
"Tell us more:": "Inis dúinn níos mó:", "Tell us more:": "Inis dúinn níos mó:",
"Temperature": "Teocht", "Temperature": "Teocht",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Go raibh maith agat as do chuid aiseolas!", "Thanks for your feedback!": "Go raibh maith agat as do chuid aiseolas!",
"The Application Account DN you bind with for search": "An Cuntas Feidhmchláir DN a nascann tú leis le haghaidh cuardaigh", "The Application Account DN you bind with for search": "An Cuntas Feidhmchláir DN a nascann tú leis le haghaidh cuardaigh",
"The base to search for users": "An bonn chun cuardach a dhéanamh ar úsáideoirí", "The base to search for users": "An bonn chun cuardach a dhéanamh ar úsáideoirí",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Cinneann méid an bhaisc cé mhéad iarratas téacs a phróiseáiltear le chéile ag an am céanna. Is féidir le méid baisc níos airde feidhmíocht agus luas an mhúnla a mhéadú, ach éilíonn sé níos mó cuimhne freisin. (Réamhshocrú: 512)", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "Cinneann méid an bhaisc cé mhéad iarratas téacs a phróiseáiltear le chéile ag an am céanna. Is féidir le méid baisc níos airde feidhmíocht agus luas an mhúnla a mhéadú, ach éilíonn sé níos mó cuimhne freisin.",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Is deonacha paiseanta ón bpobal iad na forbróirí taobh thiar den bhreiseán seo. Má aimsíonn an breiseán seo cabhrach leat, smaoinigh ar rannchuidiú lena fhorbairt.", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Is deonacha paiseanta ón bpobal iad na forbróirí taobh thiar den bhreiseán seo. Má aimsíonn an breiseán seo cabhrach leat, smaoinigh ar rannchuidiú lena fhorbairt.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Tá an clár ceannairí meastóireachta bunaithe ar chóras rátála Elo agus déantar é a nuashonrú i bhfíor-am.", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Tá an clár ceannairí meastóireachta bunaithe ar chóras rátála Elo agus déantar é a nuashonrú i bhfíor-am.",
"The LDAP attribute that maps to the mail that users use to sign in.": "An tréith LDAP a mhapálann don ríomhphost a úsáideann úsáideoirí chun síniú isteach.", "The LDAP attribute that maps to the mail that users use to sign in.": "An tréith LDAP a mhapálann don ríomhphost a úsáideann úsáideoirí chun síniú isteach.",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Uasmhéid an chomhaid i MB. Má sháraíonn méid an chomhaid an teorainn seo, ní uaslódófar an comhad.", "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Uasmhéid an chomhaid i MB. Má sháraíonn méid an chomhaid an teorainn seo, ní uaslódófar an comhad.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "An líon uasta na gcomhaid is féidir a úsáid ag an am céanna i gcomhrá. Má sháraíonn líon na gcomhaid an teorainn seo, ní uaslódófar na comhaid.", "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "An líon uasta na gcomhaid is féidir a úsáid ag an am céanna i gcomhrá. Má sháraíonn líon na gcomhaid an teorainn seo, ní uaslódófar na comhaid.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Ba chóir go mbeadh an scór ina luach idir 0.0 (0%) agus 1.0 (100%).", "The score should be a value between 0.0 (0%) and 1.0 (100%).": "Ba chóir go mbeadh an scór ina luach idir 0.0 (0%) agus 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Teocht an mhúnla. Déanfaidh méadú ar an teocht an freagra múnla níos cruthaithí. (Réamhshocrú: 0.8)", "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "Teocht an mhúnla. Déanfaidh méadú ar an teocht an freagra múnla níos cruthaithí.",
"Theme": "Téama", "Theme": "Téama",
"Thinking...": "Ag smaoineamh...", "Thinking...": "Ag smaoineamh...",
"This action cannot be undone. Do you wish to continue?": "Ní féidir an gníomh seo a chur ar ais. Ar mhaith leat leanúint ar aghaidh?", "This action cannot be undone. Do you wish to continue?": "Ní féidir an gníomh seo a chur ar ais. Ar mhaith leat leanúint ar aghaidh?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Cinntíonn sé seo go sábhálfar do chomhráite luachmhara go daingean i do bhunachar sonraí cúltaca Go raibh maith agat!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Cinntíonn sé seo go sábhálfar do chomhráite luachmhara go daingean i do bhunachar sonraí cúltaca Go raibh maith agat!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Is gné turgnamhach í seo, b'fhéidir nach bhfeidhmeoidh sé mar a bhíothas ag súil leis agus tá sé faoi réir athraithe ag am ar bith.", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Is gné turgnamhach í seo, b'fhéidir nach bhfeidhmeoidh sé mar a bhíothas ag súil leis agus tá sé faoi réir athraithe ag am ar bith.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Rialaíonn an rogha seo cé mhéad comhartha a chaomhnaítear agus an comhthéacs á athnuachan. Mar shampla, má shocraítear go 2 é, coinneofar an 2 chomhartha dheireanacha de chomhthéacs an chomhrá. Is féidir le comhthéacs a chaomhnú cabhrú le leanúnachas comhrá a choinneáil, ach d'fhéadfadh sé laghdú a dhéanamh ar an gcumas freagairt do thopaicí nua. (Réamhshocrú: 24)", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "Rialaíonn an rogha seo cé mhéad comhartha a chaomhnaítear agus an comhthéacs á athnuachan. Mar shampla, má shocraítear go 2 é, coinneofar an 2 chomhartha dheireanacha de chomhthéacs an chomhrá. Is féidir le comhthéacs a chaomhnú cabhrú le leanúnachas comhrá a choinneáil, ach d'fhéadfadh sé laghdú a dhéanamh ar an gcumas freagairt do thopaicí nua.",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Socraíonn an rogha seo an t-uaslíon comharthaí is féidir leis an tsamhail a ghiniúint ina fhreagra. Tríd an teorainn seo a mhéadú is féidir leis an tsamhail freagraí níos faide a sholáthar, ach d'fhéadfadh go méadódh sé an dóchúlacht go nginfear ábhar neamhchabhrach nó nach mbaineann le hábhar. (Réamhshocrú: 128)", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "Socraíonn an rogha seo an t-uaslíon comharthaí is féidir leis an tsamhail a ghiniúint ina fhreagra. Tríd an teorainn seo a mhéadú is féidir leis an tsamhail freagraí níos faide a sholáthar, ach d'fhéadfadh go méadódh sé an dóchúlacht go nginfear ábhar neamhchabhrach nó nach mbaineann le hábhar.",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Scriosfaidh an rogha seo gach comhad atá sa bhailiúchán agus cuirfear comhaid nua-uaslódála ina n-ionad.", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "Scriosfaidh an rogha seo gach comhad atá sa bhailiúchán agus cuirfear comhaid nua-uaslódála ina n-ionad.",
"This response was generated by \"{{model}}\"": "Gin an freagra seo ag \"{{model}}\"", "This response was generated by \"{{model}}\"": "Gin an freagra seo ag \"{{model}}\"",
"This will delete": "Scriosfaidh sé seo", "This will delete": "Scriosfaidh sé seo",
@@ -1005,7 +1021,7 @@
"This will reset the knowledge base and sync all files. Do you wish to continue?": "Déanfaidh sé seo an bonn eolais a athshocrú agus gach comhad a shioncronú. Ar mhaith leat leanúint ar aghaidh?", "This will reset the knowledge base and sync all files. Do you wish to continue?": "Déanfaidh sé seo an bonn eolais a athshocrú agus gach comhad a shioncronú. Ar mhaith leat leanúint ar aghaidh?",
"Thorough explanation": "Míniú críochnúil", "Thorough explanation": "Míniú críochnúil",
"Thought for {{DURATION}}": "Smaoineamh ar {{DURATION}}", "Thought for {{DURATION}}": "Smaoineamh ar {{DURATION}}",
"Thought for {{DURATION}} seconds": "", "Thought for {{DURATION}} seconds": "Smaoineamh ar feadh {{DURATION}} soicind",
"Tika": "Tika", "Tika": "Tika",
"Tika Server URL required.": "Teastaíonn URL Freastalaí Tika.", "Tika Server URL required.": "Teastaíonn URL Freastalaí Tika.",
"Tiktoken": "Tictoken", "Tiktoken": "Tictoken",
@@ -1014,7 +1030,7 @@
"Title (e.g. Tell me a fun fact)": "Teideal (m.sh. inis dom fíric spraíúil)", "Title (e.g. Tell me a fun fact)": "Teideal (m.sh. inis dom fíric spraíúil)",
"Title Auto-Generation": "Teideal Auto-Generation", "Title Auto-Generation": "Teideal Auto-Generation",
"Title cannot be an empty string.": "Ní féidir leis an teideal a bheith ina teaghrán folamh.", "Title cannot be an empty string.": "Ní féidir leis an teideal a bheith ina teaghrán folamh.",
"Title Generation": "", "Title Generation": "Giniúint Teidil",
"Title Generation Prompt": "Leid Giniúint Teideal", "Title Generation Prompt": "Leid Giniúint Teideal",
"TLS": "TLS", "TLS": "TLS",
"To access the available model names for downloading,": "Chun teacht ar na hainmneacha múnla atá ar fáil le híoslódáil,", "To access the available model names for downloading,": "Chun teacht ar na hainmneacha múnla atá ar fáil le híoslódáil,",
@@ -1050,7 +1066,7 @@
"Top P": "Barr P", "Top P": "Barr P",
"Transformers": "Claochladáin", "Transformers": "Claochladáin",
"Trouble accessing Ollama?": "Deacracht teacht ar Ollama?", "Trouble accessing Ollama?": "Deacracht teacht ar Ollama?",
"Trust Proxy Environment": "", "Trust Proxy Environment": "Timpeallacht Iontaobhais do Phróicís",
"TTS Model": "TTS Múnla", "TTS Model": "TTS Múnla",
"TTS Settings": "Socruithe TTS", "TTS Settings": "Socruithe TTS",
"TTS Voice": "Guth TTS", "TTS Voice": "Guth TTS",
@@ -1072,14 +1088,14 @@
"Updated": "Nuashonraithe", "Updated": "Nuashonraithe",
"Updated at": "Nuashonraithe ag", "Updated at": "Nuashonraithe ag",
"Updated At": "Nuashonraithe Ag", "Updated At": "Nuashonraithe Ag",
"Upgrade to a licensed plan for enhanced capabilities, including custom theming and branding, and dedicated support.": "", "Upgrade to a licensed plan for enhanced capabilities, including custom theming and branding, and dedicated support.": "Uasghrádú go dtí plean ceadúnaithe le haghaidh cumais fheabhsaithe, lena n-áirítear téamaí saincheaptha agus brandáil, agus tacaíocht thiomanta.",
"Upload": "Uaslódáil", "Upload": "Uaslódáil",
"Upload a GGUF model": "Uaslódáil múnla GGUF", "Upload a GGUF model": "Uaslódáil múnla GGUF",
"Upload directory": "Uaslódáil eolaire", "Upload directory": "Uaslódáil eolaire",
"Upload files": "Uaslódáil comhaid", "Upload files": "Uaslódáil comhaid",
"Upload Files": "Uaslódáil Comhaid", "Upload Files": "Uaslódáil Comhaid",
"Upload Pipeline": "Uaslódáil píblíne", "Upload Pipeline": "Uaslódáil píblíne",
"Upload Progress": "Uaslódáil an Dul", "Upload Progress": "Dul Chun Cinn an Uaslódála",
"URL": "URL", "URL": "URL",
"URL Mode": "Mód URL", "URL Mode": "Mód URL",
"Use '#' in the prompt input to load and include your knowledge.": "Úsáid '#' san ionchur leid chun do chuid eolais a lódáil agus a chur san áireamh.", "Use '#' in the prompt input to load and include your knowledge.": "Úsáid '#' san ionchur leid chun do chuid eolais a lódáil agus a chur san áireamh.",
@@ -1111,7 +1127,7 @@
"Warning:": "Rabhadh:", "Warning:": "Rabhadh:",
"Warning: Enabling this will allow users to upload arbitrary code on the server.": "Rabhadh: Cuirfidh sé seo ar chumas úsáideoirí cód treallach a uaslódáil ar an bhfreastalaí.", "Warning: Enabling this will allow users to upload arbitrary code on the server.": "Rabhadh: Cuirfidh sé seo ar chumas úsáideoirí cód treallach a uaslódáil ar an bhfreastalaí.",
"Warning: If you update or change your embedding model, you will need to re-import all documents.": "Rabhadh: Má nuashonraíonn tú nó má athraíonn tú do mhúnla leabaithe, beidh ort gach doiciméad a athiompórtáil.", "Warning: If you update or change your embedding model, you will need to re-import all documents.": "Rabhadh: Má nuashonraíonn tú nó má athraíonn tú do mhúnla leabaithe, beidh ort gach doiciméad a athiompórtáil.",
"Warning: Jupyter execution enables arbitrary code execution, posing severe security risks—proceed with extreme caution.": "", "Warning: Jupyter execution enables arbitrary code execution, posing severe security risks—proceed with extreme caution.": "Rabhadh: Trí fhorghníomhú Jupyter is féidir cód a fhorghníomhú go treallach, rud a chruthaíonn mór-rioscaí slándála - bí fíorchúramach.",
"Web": "Gréasán", "Web": "Gréasán",
"Web API": "API Gréasáin", "Web API": "API Gréasáin",
"Web Search": "Cuardach Gréasáin", "Web Search": "Cuardach Gréasáin",
@@ -1132,7 +1148,7 @@
"Why?": "Cén fáth?", "Why?": "Cén fáth?",
"Widescreen Mode": "Mód Leathanscáileán", "Widescreen Mode": "Mód Leathanscáileán",
"Won": "Bhuaigh", "Won": "Bhuaigh",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Oibríonn sé le barr-k. Beidh téacs níos éagsúla mar thoradh ar luach níos airde (m.sh., 0.95), agus ginfidh luach níos ísle (m.sh., 0.5) téacs níos dírithe agus níos coimeádaí. (Réamhshocrú: 0.9)", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "Oibríonn sé le barr-k. Beidh téacs níos éagsúla mar thoradh ar luach níos airde (m.sh., 0.95), agus ginfidh luach níos ísle (m.sh., 0.5) téacs níos dírithe agus níos coimeádaí.",
"Workspace": "Spás oibre", "Workspace": "Spás oibre",
"Workspace Permissions": "Ceadanna Spás Oibre", "Workspace Permissions": "Ceadanna Spás Oibre",
"Write": "Scríobh", "Write": "Scríobh",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "Scríobh do mhúnla ábhar teimpléad anseo", "Write your model template content here": "Scríobh do mhúnla ábhar teimpléad anseo",
"Yesterday": "Inné", "Yesterday": "Inné",
"You": "Tú", "You": "Tú",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Ní féidir leat comhrá a dhéanamh ach le comhad {{maxCount}} ar a mhéad ag an am.", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Ní féidir leat comhrá a dhéanamh ach le comhad {{maxCount}} ar a mhéad ag an am.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Is féidir leat do chuid idirghníomhaíochtaí le LLManna a phearsantú ach cuimhní cinn a chur leis tríd an gcnaipe 'Bainistigh' thíos, rud a fhágann go mbeidh siad níos cabhrach agus níos oiriúnaí duit.", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Is féidir leat do chuid idirghníomhaíochtaí le LLManna a phearsantú ach cuimhní cinn a chur leis tríd an gcnaipe 'Bainistigh' thíos, rud a fhágann go mbeidh siad níos cabhrach agus níos oiriúnaí duit.",
"You cannot upload an empty file.": "Ní féidir leat comhad folamh a uaslódáil.", "You cannot upload an empty file.": "Ní féidir leat comhad folamh a uaslódáil.",
@ -1155,6 +1172,6 @@
"Your account status is currently pending activation.": "Tá stádas do chuntais ar feitheamh faoi ghníomhachtú.", "Your account status is currently pending activation.": "Tá stádas do chuntais ar feitheamh faoi ghníomhachtú.",
"Your entire contribution will go directly to the plugin developer; Open WebUI does not take any percentage. However, the chosen funding platform might have its own fees.": "Rachaidh do ranníocaíocht iomlán go díreach chuig an bhforbróir breiseán; Ní ghlacann Open WebUI aon chéatadán. Mar sin féin, d'fhéadfadh a tháillí féin a bheith ag an ardán maoinithe roghnaithe.", "Your entire contribution will go directly to the plugin developer; Open WebUI does not take any percentage. However, the chosen funding platform might have its own fees.": "Rachaidh do ranníocaíocht iomlán go díreach chuig an bhforbróir breiseán; Ní ghlacann Open WebUI aon chéatadán. Mar sin féin, d'fhéadfadh a tháillí féin a bheith ag an ardán maoinithe roghnaithe.",
"Youtube": "Youtube", "Youtube": "Youtube",
"Youtube Language": "", "Youtube Language": "Teanga Youtube",
"Youtube Proxy URL": "" "Youtube Proxy URL": "URL Seachfhreastalaí YouTube"
} }


@@ -5,6 +5,7 @@
 "(e.g. `sh webui.sh --api`)": "(p.e. `sh webui.sh --api`)",
 "(latest)": "(ultima)",
 "{{ models }}": "{{ modelli }}",
+"{{COUNT}} hidden lines": "",
 "{{COUNT}} Replies": "",
 "{{user}}'s Chats": "{{user}} Chat",
 "{{webUIName}} Backend Required": "{{webUIName}} Backend richiesto",
@@ -51,6 +52,7 @@
 "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
 "Advanced Parameters": "Parametri avanzati",
 "Advanced Params": "Parametri avanzati",
+"All": "",
 "All Documents": "Tutti i documenti",
 "All models deleted successfully": "",
 "Allow Chat Controls": "",
@@ -64,7 +66,7 @@
 "Allow Voice Interruption in Call": "",
 "Allowed Endpoints": "",
 "Already have an account?": "Hai già un account?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
 "Always": "",
 "Amazing": "",
 "an assistant": "un assistente",
@@ -93,6 +95,7 @@
 "Are you sure?": "Sei sicuro?",
 "Arena Models": "",
 "Artifacts": "",
+"Ask": "",
 "Ask a question": "",
 "Assistant": "",
 "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
 "Bing Search V7 Endpoint": "",
 "Bing Search V7 Subscription Key": "",
 "Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
 "Brave Search API Key": "Chiave API di ricerca Brave",
 "By {{name}}": "",
 "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
 "Code Interpreter": "",
 "Code Interpreter Engine": "",
 "Code Interpreter Prompt Template": "",
+"Collapse": "",
 "Collection": "Collezione",
 "Color": "",
 "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
 "Confirm your new password": "",
 "Connect to your own OpenAI compatible API endpoints.": "",
 "Connections": "Connessioni",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
 "Contact Admin for WebUI Access": "",
 "Content": "Contenuto",
 "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "",
 "Continue with LDAP": "",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "",
 "Copied shared chat URL to clipboard!": "URL della chat condivisa copiato negli appunti!",
 "Copied to clipboard": "",
@@ -245,6 +250,7 @@
 "Created At": "Creato il",
 "Created by": "",
 "CSV Import": "",
+"Ctrl+Enter to Send": "",
 "Current Model": "Modello corrente",
 "Current Password": "Password corrente",
 "Custom": "Personalizzato",
@@ -358,7 +364,7 @@
 "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
 "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
 "Enable Message Rating": "",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
 "Enable New Sign Ups": "Abilita nuove iscrizioni",
 "Enabled": "",
 "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Assicurati che il tuo file CSV includa 4 colonne in questo ordine: Nome, Email, Password, Ruolo.",
@@ -375,6 +381,7 @@
 "Enter CFG Scale (e.g. 7.0)": "",
 "Enter Chunk Overlap": "Inserisci la sovrapposizione chunk",
 "Enter Chunk Size": "Inserisci la dimensione chunk",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
 "Enter description": "",
 "Enter Document Intelligence Endpoint": "",
 "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
 "Enter Jupyter Token": "",
 "Enter Jupyter URL": "",
 "Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
 "Enter language codes": "Inserisci i codici lingua",
 "Enter Model ID": "",
 "Enter model tag (e.g. {{modelTag}})": "Inserisci il tag del modello (ad esempio {{modelTag}})",
 "Enter Mojeek Search API Key": "",
 "Enter Number of Steps (e.g. 50)": "Inserisci il numero di passaggi (ad esempio 50)",
+"Enter Perplexity API Key": "",
 "Enter proxy URL (e.g. https://user:password@host:port)": "",
 "Enter reasoning effort": "",
 "Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
 "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
 "Enter Tika Server URL": "",
 "Enter timeout in seconds": "",
+"Enter to Send": "",
 "Enter Top K": "Inserisci Top K",
 "Enter URL (e.g. http://127.0.0.1:7860/)": "Inserisci URL (ad esempio http://127.0.0.1:7860/)",
 "Enter URL (e.g. http://localhost:11434)": "Inserisci URL (ad esempio http://localhost:11434)",
@@ -440,9 +450,13 @@
 "Example: mail": "",
 "Example: ou=users,dc=foo,dc=example": "",
 "Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
 "Exclude": "",
 "Execute code for analysis": "",
+"Expand": "",
 "Experimental": "Sperimentale",
+"Explain": "",
+"Explain this section to me in more detail": "",
 "Explore the cosmos": "",
 "Export": "Esportazione",
 "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
 "Include": "",
 "Include `--api-auth` flag when running stable-diffusion-webui": "",
 "Include `--api` flag when running stable-diffusion-webui": "Includi il flag `--api` quando esegui stable-diffusion-webui",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
 "Info": "Informazioni",
 "Input commands": "Comandi di input",
 "Install from Github URL": "Eseguire l'installazione dall'URL di Github",
@@ -624,6 +638,7 @@
 "Local": "",
 "Local Models": "",
 "Location access not allowed": "",
+"Logit Bias": "",
 "Lost": "",
 "LTR": "LTR",
 "Made by Open WebUI Community": "Realizzato dalla comunità OpenWebUI",
@@ -764,6 +779,7 @@
 "Permission denied when accessing microphone": "",
 "Permission denied when accessing microphone: {{error}}": "Autorizzazione negata durante l'accesso al microfono: {{error}}",
 "Permissions": "",
+"Perplexity API Key": "",
 "Personalization": "Personalizzazione",
 "Pin": "",
 "Pinned": "",
@@ -809,7 +825,7 @@
 "Reasoning Effort": "",
 "Record voice": "Registra voce",
 "Redirecting you to Open WebUI Community": "Reindirizzamento alla comunità OpenWebUI",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
 "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
 "References from": "",
 "Refused when it shouldn't have": "Rifiutato quando non avrebbe dovuto",
@@ -918,11 +934,11 @@
 "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
 "Set Voice": "Imposta voce",
 "Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets the size of the context window used to generate the next token.": "",
 "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
 "Settings": "Impostazioni",
 "Settings saved successfully!": "Impostazioni salvate con successo!",
@@ -964,7 +980,7 @@
 "System Prompt": "Prompt di sistema",
 "Tags Generation": "",
 "Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
 "Talk to model": "",
 "Tap to interrupt": "",
 "Tasks": "",
@@ -979,7 +995,7 @@
 "Thanks for your feedback!": "Grazie per il tuo feedback!",
 "The Application Account DN you bind with for search": "",
 "The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
 "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
 "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
 "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
 "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
 "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
 "The score should be a value between 0.0 (0%) and 1.0 (100%).": "Il punteggio dovrebbe essere un valore compreso tra 0.0 (0%) e 1.0 (100%).",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
 "Theme": "Tema",
 "Thinking...": "",
 "This action cannot be undone. Do you wish to continue?": "",
 "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ciò garantisce che le tue preziose conversazioni siano salvate in modo sicuro nel tuo database backend. Grazie!",
 "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
 "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
 "This response was generated by \"{{model}}\"": "",
 "This will delete": "",
@@ -1132,7 +1148,7 @@
 "Why?": "",
 "Widescreen Mode": "",
 "Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
 "Workspace": "Area di lavoro",
 "Workspace Permissions": "",
 "Write": "",
@@ -1142,6 +1158,7 @@
 "Write your model template content here": "",
 "Yesterday": "Ieri",
 "You": "Tu",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
 "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
 "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
 "You cannot upload an empty file.": "",


@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(例: `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(例: `sh webui.sh --api`)",
"(latest)": "(最新)", "(latest)": "(最新)",
"{{ models }}": "{{ モデル }}", "{{ models }}": "{{ モデル }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "", "{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} のチャット", "{{user}}'s Chats": "{{user}} のチャット",
"{{webUIName}} Backend Required": "{{webUIName}} バックエンドが必要です", "{{webUIName}} Backend Required": "{{webUIName}} バックエンドが必要です",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "管理者は全てのツールにアクセス出来ます。ユーザーはワークスペースのモデル毎に割り当てて下さい。", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "管理者は全てのツールにアクセス出来ます。ユーザーはワークスペースのモデル毎に割り当てて下さい。",
"Advanced Parameters": "詳細パラメーター", "Advanced Parameters": "詳細パラメーター",
"Advanced Params": "高度なパラメータ", "Advanced Params": "高度なパラメータ",
"All": "",
"All Documents": "全てのドキュメント", "All Documents": "全てのドキュメント",
"All models deleted successfully": "", "All models deleted successfully": "",
"Allow Chat Controls": "", "Allow Chat Controls": "",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "通話中に音声の割り込みを許可", "Allow Voice Interruption in Call": "通話中に音声の割り込みを許可",
"Allowed Endpoints": "", "Allowed Endpoints": "",
"Already have an account?": "すでにアカウントをお持ちですか?", "Already have an account?": "すでにアカウントをお持ちですか?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "", "Always": "",
"Amazing": "", "Amazing": "",
"an assistant": "アシスタント", "an assistant": "アシスタント",
@ -93,6 +95,7 @@
"Are you sure?": "よろしいですか?", "Are you sure?": "よろしいですか?",
"Arena Models": "", "Arena Models": "",
"Artifacts": "", "Artifacts": "",
"Ask": "",
"Ask a question": "質問して下さい。", "Ask a question": "質問して下さい。",
"Assistant": "", "Assistant": "",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "", "Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "", "Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "", "Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search APIキー", "Brave Search API Key": "Brave Search APIキー",
"By {{name}}": "", "By {{name}}": "",
"Bypass Embedding and Retrieval": "", "Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "", "Code Interpreter": "",
"Code Interpreter Engine": "", "Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "", "Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "コレクション", "Collection": "コレクション",
"Color": "", "Color": "",
"ComfyUI": "ComfyUI", "ComfyUI": "ComfyUI",
@ -208,7 +213,7 @@
"Confirm your new password": "", "Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "", "Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "接続", "Connections": "接続",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "", "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "WEBUIへの接続について管理者に問い合わせ下さい。", "Contact Admin for WebUI Access": "WEBUIへの接続について管理者に問い合わせ下さい。",
"Content": "コンテンツ", "Content": "コンテンツ",
"Content Extraction Engine": "", "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "",
 "Continue with LDAP": "",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "コントロール",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "コピー",
 "Copied shared chat URL to clipboard!": "共有チャットURLをクリップボードにコピーしました!",
 "Copied to clipboard": "クリップボードにコピーしました。",
@@ -245,6 +250,7 @@
 "Created At": "作成日時",
 "Created by": "",
 "CSV Import": "CSVインポート",
+"Ctrl+Enter to Send": "",
 "Current Model": "現在のモデル",
 "Current Password": "現在のパスワード",
 "Custom": "カスタム",
@@ -358,7 +364,7 @@
 "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
 "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
 "Enable Message Rating": "メッセージ評価を有効にする",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
 "Enable New Sign Ups": "新規登録を有効にする",
 "Enabled": "有効",
 "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "CSVファイルに4つの列が含まれていることを確認してください: Name, Email, Password, Role.",
@@ -375,6 +381,7 @@
 "Enter CFG Scale (e.g. 7.0)": "CFGスケースを入力してください (例: 7.0)",
 "Enter Chunk Overlap": "チャンクオーバーラップを入力してください",
 "Enter Chunk Size": "チャンクサイズを入力してください",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
 "Enter description": "",
 "Enter Document Intelligence Endpoint": "",
 "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
 "Enter Jupyter Token": "",
 "Enter Jupyter URL": "",
 "Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
 "Enter language codes": "言語コードを入力してください",
 "Enter Model ID": "モデルIDを入力してください。",
 "Enter model tag (e.g. {{modelTag}})": "モデルタグを入力してください (例: {{modelTag}})",
 "Enter Mojeek Search API Key": "",
 "Enter Number of Steps (e.g. 50)": "ステップ数を入力してください (例: 50)",
+"Enter Perplexity API Key": "",
 "Enter proxy URL (e.g. https://user:password@host:port)": "",
 "Enter reasoning effort": "",
 "Enter Sampler (e.g. Euler a)": "サンプラーを入力してください(e.g. Euler a)。",
@@ -417,6 +426,7 @@
 "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
 "Enter Tika Server URL": "Tika Server URLを入力してください。",
 "Enter timeout in seconds": "",
+"Enter to Send": "",
 "Enter Top K": "トップ K を入力してください",
 "Enter URL (e.g. http://127.0.0.1:7860/)": "URL を入力してください (例: http://127.0.0.1:7860/)",
 "Enter URL (e.g. http://localhost:11434)": "URL を入力してください (例: http://localhost:11434)",
@@ -440,9 +450,13 @@
 "Example: mail": "",
 "Example: ou=users,dc=foo,dc=example": "",
 "Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
 "Exclude": "",
 "Execute code for analysis": "",
+"Expand": "",
 "Experimental": "実験的",
+"Explain": "",
+"Explain this section to me in more detail": "",
 "Explore the cosmos": "",
 "Export": "エクスポート",
 "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
 "Include": "",
 "Include `--api-auth` flag when running stable-diffusion-webui": "",
 "Include `--api` flag when running stable-diffusion-webui": "stable-diffusion-webuiを実行する際に`--api`フラグを含める",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
 "Info": "情報",
 "Input commands": "入力コマンド",
 "Install from Github URL": "Github URLからインストール",
@@ -624,6 +638,7 @@
 "Local": "",
 "Local Models": "ローカルモデル",
 "Location access not allowed": "",
+"Logit Bias": "",
 "Lost": "",
 "LTR": "LTR",
 "Made by Open WebUI Community": "OpenWebUI コミュニティによって作成",
@@ -764,6 +779,7 @@
 "Permission denied when accessing microphone": "",
 "Permission denied when accessing microphone: {{error}}": "マイクへのアクセス時に権限が拒否されました: {{error}}",
 "Permissions": "",
+"Perplexity API Key": "",
 "Personalization": "個人化",
 "Pin": "",
 "Pinned": "",
@@ -809,7 +825,7 @@
 "Reasoning Effort": "",
 "Record voice": "音声を録音",
 "Redirecting you to Open WebUI Community": "OpenWebUI コミュニティにリダイレクトしています",
-"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
+"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
 "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
 "References from": "",
 "Refused when it shouldn't have": "拒否すべきでないのに拒否した",
@@ -918,11 +934,11 @@
 "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
 "Set Voice": "音声を設定",
 "Set whisper model": "",
-"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
+"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
+"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
-"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
+"Sets how far back for the model to look back to prevent repetition.": "",
-"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
+"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
-"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
+"Sets the size of the context window used to generate the next token.": "",
 "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
 "Settings": "設定",
 "Settings saved successfully!": "設定が正常に保存されました!",
@@ -964,7 +980,7 @@
 "System Prompt": "システムプロンプト",
 "Tags Generation": "",
 "Tags Generation Prompt": "",
-"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
+"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
 "Talk to model": "",
 "Tap to interrupt": "",
 "Tasks": "",
@@ -979,7 +995,7 @@
 "Thanks for your feedback!": "ご意見ありがとうございます!",
 "The Application Account DN you bind with for search": "",
 "The base to search for users": "",
-"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
+"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
 "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
 "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
 "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
 "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
 "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
 "The score should be a value between 0.0 (0%) and 1.0 (100%).": "スコアは0.0(0%)から1.0(100%)の間の値にしてください。",
-"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
+"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
 "Theme": "テーマ",
 "Thinking...": "思考中...",
 "This action cannot be undone. Do you wish to continue?": "このアクションは取り消し不可です。続けますか?",
 "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "これは、貴重な会話がバックエンドデータベースに安全に保存されることを保証します。ありがとうございます!",
 "This is an experimental feature, it may not function as expected and is subject to change at any time.": "実験的機能であり正常動作しない場合があります。",
-"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
+"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
-"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
+"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
 "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
 "This response was generated by \"{{model}}\"": "",
 "This will delete": "",
@@ -1132,7 +1148,7 @@
 "Why?": "",
 "Widescreen Mode": "ワイドスクリーンモード",
 "Won": "",
-"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
+"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
 "Workspace": "ワークスペース",
 "Workspace Permissions": "",
 "Write": "",
@@ -1142,6 +1158,7 @@
 "Write your model template content here": "",
 "Yesterday": "昨日",
 "You": "あなた",
+"You are currently using a trial license. Please contact support to upgrade your license.": "",
 "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
 "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
 "You cannot upload an empty file.": "",

View file

@@ -5,6 +5,7 @@
 "(e.g. `sh webui.sh --api`)": "(მაგ: `sh webui.sh --api`)",
 "(latest)": "(უახლესი)",
 "{{ models }}": "{{ მოდელები }}",
+"{{COUNT}} hidden lines": "",
 "{{COUNT}} Replies": "{{COUNT}} პასუხი",
 "{{user}}'s Chats": "{{user}}-ის ჩათები",
 "{{webUIName}} Backend Required": "{{webUIName}} საჭიროა უკანაბოლო",
@@ -51,6 +52,7 @@
 "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
 "Advanced Parameters": "დამატებითი პარამეტრები",
 "Advanced Params": "დამატებითი პარამეტრები",
+"All": "",
 "All Documents": "ყველა დოკუმენტი",
 "All models deleted successfully": "ყველა მოდელი წარმატებით წაიშალა",
 "Allow Chat Controls": "ჩატის კონტროლის ელემენტების დაშვება",
@@ -64,7 +66,7 @@
 "Allow Voice Interruption in Call": "",
 "Allowed Endpoints": "დაშვებული ბოლოწერტილები",
 "Already have an account?": "უკვე გაქვთ ანგარიში?",
-"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
+"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
 "Always": "ყოველთვის",
 "Amazing": "გადასარევია",
 "an assistant": "დამხმარე",
@@ -93,6 +95,7 @@
 "Are you sure?": "დარწმუნებული ბრძანდებით?",
 "Arena Models": "არენის მოდელები",
 "Artifacts": "არტეფაქტები",
+"Ask": "",
 "Ask a question": "კითხვის დასმა",
 "Assistant": "დამხმარე",
 "Attach file from knowledge": "ფაილის მიმაგრება",
@@ -127,6 +130,7 @@
 "Bing Search V7 Endpoint": "",
 "Bing Search V7 Subscription Key": "",
 "Bocha Search API Key": "",
+"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
 "Brave Search API Key": "Brave Search API-ის გასაღები",
 "By {{name}}": "ავტორი {{name}}",
 "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
 "Code Interpreter": "",
 "Code Interpreter Engine": "",
 "Code Interpreter Prompt Template": "",
+"Collapse": "",
 "Collection": "კოლექცია",
 "Color": "ფერი",
 "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
 "Confirm your new password": "",
 "Connect to your own OpenAI compatible API endpoints.": "",
 "Connections": "კავშირები",
-"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
+"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
 "Contact Admin for WebUI Access": "",
 "Content": "შემცველობა",
 "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
 "Continue with Email": "",
 "Continue with LDAP": "",
 "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
-"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
+"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
 "Controls": "მმართველები",
-"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
+"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
 "Copied": "დაკოპირდა",
 "Copied shared chat URL to clipboard!": "გაზიარებული ჩატის ბმული დაკოპირდა ბუფერში!",
 "Copied to clipboard": "დაკოპირდა გაცვლის ბაფერში",
@@ -245,6 +250,7 @@
 "Created At": "შექმნის დრო",
 "Created by": "ავტორი",
 "CSV Import": "CSV-ის შემოტანა",
+"Ctrl+Enter to Send": "",
 "Current Model": "მიმდინარე მოდელი",
 "Current Password": "მიმდინარე პაროლი",
 "Custom": "ხელით",
@@ -358,7 +364,7 @@
 "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
 "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
 "Enable Message Rating": "",
-"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
+"Enable Mirostat sampling for controlling perplexity.": "",
 "Enable New Sign Ups": "ახალი რეგისტრაციების ჩართვა",
 "Enabled": "ჩართულია",
 "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "დარწმუნდით, რომ თქვენი CSV-ფაილი შეიცავს 4 ველს ამ მიმდევრობით: სახელი, ელფოსტა, პაროლი, როლი.",
@@ -375,6 +381,7 @@
 "Enter CFG Scale (e.g. 7.0)": "",
 "Enter Chunk Overlap": "შეიყვანეთ ფრაგმენტის გადაფარვა",
 "Enter Chunk Size": "შეიყვანე ფრაგმენტის ზომა",
+"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
 "Enter description": "შეიყვანეთ აღწერა",
 "Enter Document Intelligence Endpoint": "",
 "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
 "Enter Jupyter Token": "",
 "Enter Jupyter URL": "",
 "Enter Kagi Search API Key": "",
+"Enter Key Behavior": "",
 "Enter language codes": "შეიყვანეთ ენის კოდები",
 "Enter Model ID": "",
 "Enter model tag (e.g. {{modelTag}})": "შეიყვანეთ მოდელის ჭდე (მაგ: {{modelTag}})",
 "Enter Mojeek Search API Key": "",
 "Enter Number of Steps (e.g. 50)": "შეიყვანეთ ნაბიჯების რაოდენობა (მაგ. 50)",
+"Enter Perplexity API Key": "",
 "Enter proxy URL (e.g. https://user:password@host:port)": "",
 "Enter reasoning effort": "",
 "Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
 "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
 "Enter Tika Server URL": "",
 "Enter timeout in seconds": "",
+"Enter to Send": "",
 "Enter Top K": "შეიყვანეთ Top K",
 "Enter URL (e.g. http://127.0.0.1:7860/)": "შეიყვანეთ ბმული (მაგ: http://127.0.0.1:7860/)",
 "Enter URL (e.g. http://localhost:11434)": "შეიყვანეთ ბმული (მაგ: http://localhost:11434)",
@@ -440,9 +450,13 @@
 "Example: mail": "",
 "Example: ou=users,dc=foo,dc=example": "",
 "Example: sAMAccountName or uid or userPrincipalName": "",
+"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
 "Exclude": "გამორიცხვა",
 "Execute code for analysis": "",
+"Expand": "",
 "Experimental": "ექსპერიმენტული",
+"Explain": "",
+"Explain this section to me in more detail": "",
 "Explore the cosmos": "",
 "Export": "გატანა",
 "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
 "Include": "ჩართვა",
 "Include `--api-auth` flag when running stable-diffusion-webui": "",
 "Include `--api` flag when running stable-diffusion-webui": "`--api` ალმის ჩასმა stable-diffusion-webui-ის გამოყენებისას",
-"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
+"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
 "Info": "ინფორმაცია",
 "Input commands": "შეიყვანეთ ბრძანებები",
 "Install from Github URL": "დაყენება Github-ის ბმულიდან",
@@ -624,6 +638,7 @@
 "Local": "ლოკალური",
 "Local Models": "ლოკალური მოდელები",
 "Location access not allowed": "",
+"Logit Bias": "",
 "Lost": "წაგება",
 "LTR": "LTR",
 "Made by Open WebUI Community": "შექმნილია OpenWebUI საზოგადოების მიერ",
@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "", "Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "ნებართვა უარყოფილია მიკროფონზე წვდომისას: {{error}}", "Permission denied when accessing microphone: {{error}}": "ნებართვა უარყოფილია მიკროფონზე წვდომისას: {{error}}",
"Permissions": "ნებართვები", "Permissions": "ნებართვები",
"Perplexity API Key": "",
"Personalization": "პერსონალიზაცია", "Personalization": "პერსონალიზაცია",
"Pin": "მიმაგრება", "Pin": "მიმაგრება",
"Pinned": "მიმაგრებულია", "Pinned": "მიმაგრებულია",
@ -809,7 +825,7 @@
"Reasoning Effort": "", "Reasoning Effort": "",
"Record voice": "ხმის ჩაწერა", "Record voice": "ხმის ჩაწერა",
"Redirecting you to Open WebUI Community": "მიმდინარეობს გადამისამართება OpenWebUI-ის საზოგადოების საიტზე", "Redirecting you to Open WebUI Community": "მიმდინარეობს გადამისამართება OpenWebUI-ის საზოგადოების საიტზე",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "", "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "", "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "", "References from": "",
"Refused when it shouldn't have": "უარა, როგორც უნდა იყოს", "Refused when it shouldn't have": "უარა, როგორც უნდა იყოს",
@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "", "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "ხმის დაყენება", "Set Voice": "ხმის დაყენება",
"Set whisper model": "", "Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "", "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "", "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "", "Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "", "Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "მორგება", "Settings": "მორგება",
"Settings saved successfully!": "პარამეტრები შენახვა წარმატებულია!", "Settings saved successfully!": "პარამეტრები შენახვა წარმატებულია!",
@ -964,7 +980,7 @@
"System Prompt": "სისტემური მოთხოვნა", "System Prompt": "სისტემური მოთხოვნა",
"Tags Generation": "", "Tags Generation": "",
"Tags Generation Prompt": "", "Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "", "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "", "Talk to model": "",
"Tap to interrupt": "", "Tap to interrupt": "",
"Tasks": "ამოცანები", "Tasks": "ამოცანები",
@ -979,7 +995,7 @@
"Thanks for your feedback!": "მადლობა გამოხმაურებისთვის!", "Thanks for your feedback!": "მადლობა გამოხმაურებისთვის!",
"The Application Account DN you bind with for search": "", "The Application Account DN you bind with for search": "",
"The base to search for users": "", "The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "", "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "", "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "რეიტინგი უნდა იყოს მნიშვნელობ შუალედიდან 0.0 (0%) - 1.0 (100%).", "The score should be a value between 0.0 (0%) and 1.0 (100%).": "რეიტინგი უნდა იყოს მნიშვნელობ შუალედიდან 0.0 (0%) - 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "", "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "თემა", "Theme": "თემა",
"Thinking...": "", "Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "", "This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "ეს უზრუნველყოფს, რომ თქვენი ღირებული საუბრები უსაფრთხოდ შეინახება თქვენს უკანაბოლო მონაცემთა ბაზაში. მადლობა!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "ეს უზრუნველყოფს, რომ თქვენი ღირებული საუბრები უსაფრთხოდ შეინახება თქვენს უკანაბოლო მონაცემთა ბაზაში. მადლობა!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
"This will delete": "", "This will delete": "",
@ -1132,7 +1148,7 @@
"Why?": "რატომ?", "Why?": "რატომ?",
"Widescreen Mode": "", "Widescreen Mode": "",
"Won": "ვონი", "Won": "ვონი",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "სამუშაო სივრცე", "Workspace": "სამუშაო სივრცე",
"Workspace Permissions": "", "Workspace Permissions": "",
"Write": "ჩაწერა", "Write": "ჩაწერა",
@ -1142,6 +1158,7 @@
"Write your model template content here": "", "Write your model template content here": "",
"Yesterday": "გუშინ", "Yesterday": "გუშინ",
"You": "თქვენ", "You": "თქვენ",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "", "You cannot upload an empty file.": "",


@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(예: `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(예: `sh webui.sh --api`)",
"(latest)": "(최근)", "(latest)": "(최근)",
"{{ models }}": "{{ models }}", "{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "", "{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}의 채팅", "{{user}}'s Chats": "{{user}}의 채팅",
"{{webUIName}} Backend Required": "{{webUIName}} 백엔드가 필요합니다.", "{{webUIName}} Backend Required": "{{webUIName}} 백엔드가 필요합니다.",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "관리자는 항상 모든 도구에 접근할 수 있지만, 사용자는 워크스페이스에서 모델마다 도구를 할당받아야 합니다.", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "관리자는 항상 모든 도구에 접근할 수 있지만, 사용자는 워크스페이스에서 모델마다 도구를 할당받아야 합니다.",
"Advanced Parameters": "고급 매개변수", "Advanced Parameters": "고급 매개변수",
"Advanced Params": "고급 매개변수", "Advanced Params": "고급 매개변수",
"All": "",
"All Documents": "모든 문서", "All Documents": "모든 문서",
"All models deleted successfully": "성공적으로 모든 모델이 삭제되었습니다", "All models deleted successfully": "성공적으로 모든 모델이 삭제되었습니다",
"Allow Chat Controls": "채팅 제어 허용", "Allow Chat Controls": "채팅 제어 허용",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "음성 기능에서 음성 방해 허용", "Allow Voice Interruption in Call": "음성 기능에서 음성 방해 허용",
"Allowed Endpoints": "", "Allowed Endpoints": "",
"Already have an account?": "이미 계정이 있으신가요?", "Already have an account?": "이미 계정이 있으신가요?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "", "Always": "",
"Amazing": "놀라움", "Amazing": "놀라움",
"an assistant": "어시스턴트", "an assistant": "어시스턴트",
@ -93,6 +95,7 @@
"Are you sure?": "확실합니까?", "Are you sure?": "확실합니까?",
"Arena Models": "아레나 모델", "Arena Models": "아레나 모델",
"Artifacts": "아티팩트", "Artifacts": "아티팩트",
"Ask": "",
"Ask a question": "질문하기", "Ask a question": "질문하기",
"Assistant": "어시스턴트", "Assistant": "어시스턴트",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Bing Search V7 엔드포인트", "Bing Search V7 Endpoint": "Bing Search V7 엔드포인트",
"Bing Search V7 Subscription Key": "Bing Search V7 구독 키", "Bing Search V7 Subscription Key": "Bing Search V7 구독 키",
"Bocha Search API Key": "", "Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API 키", "Brave Search API Key": "Brave Search API 키",
"By {{name}}": "", "By {{name}}": "",
"Bypass Embedding and Retrieval": "", "Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "", "Code Interpreter": "",
"Code Interpreter Engine": "", "Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "", "Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "컬렉션", "Collection": "컬렉션",
"Color": "", "Color": "",
"ComfyUI": "ComfyUI", "ComfyUI": "ComfyUI",
@ -208,7 +213,7 @@
"Confirm your new password": "새로운 비밀번호를 한 번 더 입력해 주세요", "Confirm your new password": "새로운 비밀번호를 한 번 더 입력해 주세요",
"Connect to your own OpenAI compatible API endpoints.": "", "Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "연결", "Connections": "연결",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "", "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "WebUI 접속을 위해서는 관리자에게 연락에 연락하십시오", "Contact Admin for WebUI Access": "WebUI 접속을 위해서는 관리자에게 연락에 연락하십시오",
"Content": "내용", "Content": "내용",
"Content Extraction Engine": "", "Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "", "Continue with Email": "",
"Continue with LDAP": "", "Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "TTS 요청에 메시지가 어떻게 나뉘어지는지 제어하십시오. '문장 부호'는 문장으로 나뉘고, '문단'은 문단으로 나뉘고, '없음'은 메세지를 하나의 문자열로 인식합니다.", "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "TTS 요청에 메시지가 어떻게 나뉘어지는지 제어하십시오. '문장 부호'는 문장으로 나뉘고, '문단'은 문단으로 나뉘고, '없음'은 메세지를 하나의 문자열로 인식합니다.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "", "Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "제어", "Controls": "제어",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "", "Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "복사됨", "Copied": "복사됨",
"Copied shared chat URL to clipboard!": "채팅 공유 URL이 클립보드에 복사되었습니다!", "Copied shared chat URL to clipboard!": "채팅 공유 URL이 클립보드에 복사되었습니다!",
"Copied to clipboard": "클립보드에 복사되었습니다", "Copied to clipboard": "클립보드에 복사되었습니다",
@ -245,6 +250,7 @@
"Created At": "생성일", "Created At": "생성일",
"Created by": "생성자", "Created by": "생성자",
"CSV Import": "CSV 가져오기", "CSV Import": "CSV 가져오기",
"Ctrl+Enter to Send": "",
"Current Model": "현재 모델", "Current Model": "현재 모델",
"Current Password": "현재 비밀번호", "Current Password": "현재 비밀번호",
"Custom": "사용자 정의", "Custom": "사용자 정의",
@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "", "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "", "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "메시지 평가 활성화", "Enable Message Rating": "메시지 평가 활성화",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "", "Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "새 회원가입 활성화", "Enable New Sign Ups": "새 회원가입 활성화",
"Enabled": "활성화됨", "Enabled": "활성화됨",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "CSV 파일에 이름, 이메일, 비밀번호, 역할 4개의 열이 순서대로 포함되어 있는지 확인하세요.", "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "CSV 파일에 이름, 이메일, 비밀번호, 역할 4개의 열이 순서대로 포함되어 있는지 확인하세요.",
@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "CFG Scale 입력 (예: 7.0)", "Enter CFG Scale (e.g. 7.0)": "CFG Scale 입력 (예: 7.0)",
"Enter Chunk Overlap": "청크 오버랩 입력", "Enter Chunk Overlap": "청크 오버랩 입력",
"Enter Chunk Size": "청크 크기 입력", "Enter Chunk Size": "청크 크기 입력",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "설명 입력", "Enter description": "설명 입력",
"Enter Document Intelligence Endpoint": "", "Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "", "Enter Document Intelligence Key": "",
@ -389,11 +396,13 @@
"Enter Jupyter Token": "", "Enter Jupyter Token": "",
"Enter Jupyter URL": "", "Enter Jupyter URL": "",
"Enter Kagi Search API Key": "Kagi Search API 키 입력", "Enter Kagi Search API Key": "Kagi Search API 키 입력",
"Enter Key Behavior": "",
"Enter language codes": "언어 코드 입력", "Enter language codes": "언어 코드 입력",
"Enter Model ID": "모델 ID 입력", "Enter Model ID": "모델 ID 입력",
"Enter model tag (e.g. {{modelTag}})": "모델 태그 입력(예: {{modelTag}})", "Enter model tag (e.g. {{modelTag}})": "모델 태그 입력(예: {{modelTag}})",
"Enter Mojeek Search API Key": "Mojeek Search API 키 입력", "Enter Mojeek Search API Key": "Mojeek Search API 키 입력",
"Enter Number of Steps (e.g. 50)": "단계 수 입력(예: 50)", "Enter Number of Steps (e.g. 50)": "단계 수 입력(예: 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "프록시 URL 입력(예: https://user:password@host:port)", "Enter proxy URL (e.g. https://user:password@host:port)": "프록시 URL 입력(예: https://user:password@host:port)",
"Enter reasoning effort": "", "Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "샘플러 입력 (예: 오일러 a(Euler a))", "Enter Sampler (e.g. Euler a)": "샘플러 입력 (예: 오일러 a(Euler a))",
@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "WebUI의 공개 URL을 입력해 주세요. 이 URL은 알림에서 링크를 생성하는 데 사용합니다.", "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "WebUI의 공개 URL을 입력해 주세요. 이 URL은 알림에서 링크를 생성하는 데 사용합니다.",
"Enter Tika Server URL": "Tika 서버 URL 입력", "Enter Tika Server URL": "Tika 서버 URL 입력",
"Enter timeout in seconds": "", "Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Top K 입력", "Enter Top K": "Top K 입력",
"Enter URL (e.g. http://127.0.0.1:7860/)": "URL 입력(예: http://127.0.0.1:7860/)", "Enter URL (e.g. http://127.0.0.1:7860/)": "URL 입력(예: http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "URL 입력(예: http://localhost:11434)", "Enter URL (e.g. http://localhost:11434)": "URL 입력(예: http://localhost:11434)",
@ -440,9 +450,13 @@
"Example: mail": "", "Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "", "Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "", "Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "미포함", "Exclude": "미포함",
"Execute code for analysis": "", "Execute code for analysis": "",
"Expand": "",
"Experimental": "실험적", "Experimental": "실험적",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "", "Explore the cosmos": "",
"Export": "내보내기", "Export": "내보내기",
"Export All Archived Chats": "", "Export All Archived Chats": "",
@ -566,7 +580,7 @@
"Include": "포함", "Include": "포함",
"Include `--api-auth` flag when running stable-diffusion-webui": "stable-diffusion-webui를 실행 시 `--api-auth` 플래그를 포함하세요", "Include `--api-auth` flag when running stable-diffusion-webui": "stable-diffusion-webui를 실행 시 `--api-auth` 플래그를 포함하세요",
"Include `--api` flag when running stable-diffusion-webui": "stable-diffusion-webui를 실행 시 `--api` 플래그를 포함하세요", "Include `--api` flag when running stable-diffusion-webui": "stable-diffusion-webui를 실행 시 `--api` 플래그를 포함하세요",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "", "Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "정보", "Info": "정보",
"Input commands": "명령어 입력", "Input commands": "명령어 입력",
"Install from Github URL": "Github URL에서 설치", "Install from Github URL": "Github URL에서 설치",
@ -624,6 +638,7 @@
"Local": "", "Local": "",
"Local Models": "로컬 모델", "Local Models": "로컬 모델",
"Location access not allowed": "", "Location access not allowed": "",
"Logit Bias": "",
"Lost": "패배", "Lost": "패배",
"LTR": "LTR", "LTR": "LTR",
"Made by Open WebUI Community": "OpenWebUI 커뮤니티에 의해 개발됨", "Made by Open WebUI Community": "OpenWebUI 커뮤니티에 의해 개발됨",
@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "마이크 접근 권한이 거부되었습니다.", "Permission denied when accessing microphone": "마이크 접근 권한이 거부되었습니다.",
"Permission denied when accessing microphone: {{error}}": "마이크 접근 권환이 거부되었습니다: {{error}}", "Permission denied when accessing microphone: {{error}}": "마이크 접근 권환이 거부되었습니다: {{error}}",
"Permissions": "권한", "Permissions": "권한",
"Perplexity API Key": "",
"Personalization": "개인화", "Personalization": "개인화",
"Pin": "고정", "Pin": "고정",
"Pinned": "고정됨", "Pinned": "고정됨",
@ -809,7 +825,7 @@
"Reasoning Effort": "", "Reasoning Effort": "",
"Record voice": "음성 녹음", "Record voice": "음성 녹음",
"Redirecting you to Open WebUI Community": "OpenWebUI 커뮤니티로 리디렉션 중", "Redirecting you to Open WebUI Community": "OpenWebUI 커뮤니티로 리디렉션 중",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "", "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "스스로를 \"사용자\" 라고 지칭하세요. (예: \"사용자는 영어를 배우고 있습니다\")", "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "스스로를 \"사용자\" 라고 지칭하세요. (예: \"사용자는 영어를 배우고 있습니다\")",
"References from": "출처", "References from": "출처",
"Refused when it shouldn't have": "허용되지 않았지만 허용되어야 합니다.", "Refused when it shouldn't have": "허용되지 않았지만 허용되어야 합니다.",
@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "", "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "음성 설정", "Set Voice": "음성 설정",
"Set whisper model": "자막 생성기 모델 설정", "Set whisper model": "자막 생성기 모델 설정",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "", "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "", "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "", "Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "생성을 위한 무작위 숫자 시드를 설정합니다. 이 값을 특정 숫자로 설정하면 동일한 프롬프트에 대해 동일한 텍스트를 생성합니다. (기본값: 무작위)", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "", "Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "중단 시퀀스를 설정합니다. 이 패턴이 발생하면 LLM은 텍스트 생성을 중단하고 반환합니다. 여러 중단 패턴은 모델 파일에서 여러 개의 별도 중단 매개변수를 지정하여 설정할 수 있습니다.", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "중단 시퀀스를 설정합니다. 이 패턴이 발생하면 LLM은 텍스트 생성을 중단하고 반환합니다. 여러 중단 패턴은 모델 파일에서 여러 개의 별도 중단 매개변수를 지정하여 설정할 수 있습니다.",
"Settings": "설정", "Settings": "설정",
"Settings saved successfully!": "설정이 성공적으로 저장되었습니다!", "Settings saved successfully!": "설정이 성공적으로 저장되었습니다!",
@ -964,7 +980,7 @@
"System Prompt": "시스템 프롬프트", "System Prompt": "시스템 프롬프트",
"Tags Generation": "태그 생성", "Tags Generation": "태그 생성",
"Tags Generation Prompt": "태그 생성 프롬프트", "Tags Generation Prompt": "태그 생성 프롬프트",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "", "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "", "Talk to model": "",
"Tap to interrupt": "탭하여 중단", "Tap to interrupt": "탭하여 중단",
"Tasks": "", "Tasks": "",
@ -979,7 +995,7 @@
"Thanks for your feedback!": "피드백 감사합니다!", "Thanks for your feedback!": "피드백 감사합니다!",
"The Application Account DN you bind with for search": "", "The Application Account DN you bind with for search": "",
"The base to search for users": "", "The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "이 플러그인 뒤에 있는 개발자는 커뮤니티에서 활동하는 단순한 열정적인 일반인들입니다. 만약 플러그인이 도움 되었다면, 플러그인 개발에 기여를 고려해주세요!", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "이 플러그인 뒤에 있는 개발자는 커뮤니티에서 활동하는 단순한 열정적인 일반인들입니다. 만약 플러그인이 도움 되었다면, 플러그인 개발에 기여를 고려해주세요!",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "평가 리더보드는 Elo 평가 시스템을 기반으로 하고 실시간으로 업데이트됩니다", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "평가 리더보드는 Elo 평가 시스템을 기반으로 하고 실시간으로 업데이트됩니다",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "최대 파일 크기(MB). 만약 파일 크기가 한도를 초과할 시, 파일은 업로드되지 않습니다", "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "최대 파일 크기(MB). 만약 파일 크기가 한도를 초과할 시, 파일은 업로드되지 않습니다",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "하나의 채팅에서는 사용가능한 최대 파일 수가 있습니다. 만약 파일 수가 한도를 초과할 시, 파일은 업로드되지 않습니다.", "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "하나의 채팅에서는 사용가능한 최대 파일 수가 있습니다. 만약 파일 수가 한도를 초과할 시, 파일은 업로드되지 않습니다.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "점수는 0.0(0%)에서 1.0(100%) 사이의 값이어야 합니다.", "The score should be a value between 0.0 (0%) and 1.0 (100%).": "점수는 0.0(0%)에서 1.0(100%) 사이의 값이어야 합니다.",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "모델의 온도. 온도를 높이면 모델이 더 창의적으로 답변합니다. (기본값: 0.8)", "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "테마", "Theme": "테마",
"Thinking...": "생각 중...", "Thinking...": "생각 중...",
"This action cannot be undone. Do you wish to continue?": "이 액션은 되돌릴 수 없습니다. 계속 하시겠습니까?", "This action cannot be undone. Do you wish to continue?": "이 액션은 되돌릴 수 없습니다. 계속 하시겠습니까?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "이렇게 하면 소중한 대화 내용이 백엔드 데이터베이스에 안전하게 저장됩니다. 감사합니다!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "이렇게 하면 소중한 대화 내용이 백엔드 데이터베이스에 안전하게 저장됩니다. 감사합니다!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "이것은 실험적 기능으로, 예상대로 작동하지 않을 수 있으며 언제든지 변경될 수 있습니다.", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "이것은 실험적 기능으로, 예상대로 작동하지 않을 수 있으며 언제든지 변경될 수 있습니다.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "이 행동은 컬렉션에 존재하는 모든 파일을 삭제하고 새로 업로드된 파일들로 대체됩니다", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "이 행동은 컬렉션에 존재하는 모든 파일을 삭제하고 새로 업로드된 파일들로 대체됩니다",
"This response was generated by \"{{model}}\"": "\"{{model}}\"이 생성한 응답입니다", "This response was generated by \"{{model}}\"": "\"{{model}}\"이 생성한 응답입니다",
"This will delete": "이것은 다음을 삭제합니다.", "This will delete": "이것은 다음을 삭제합니다.",
@@ -1132,7 +1148,7 @@
"Why?": "이유는?", "Why?": "이유는?",
"Widescreen Mode": "와이드스크린 모드", "Widescreen Mode": "와이드스크린 모드",
"Won": "승리", "Won": "승리",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "워크스페이스", "Workspace": "워크스페이스",
"Workspace Permissions": "워크스페이스 권한", "Workspace Permissions": "워크스페이스 권한",
"Write": "", "Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "여기에 모델 템플릿 내용을 입력하세요", "Write your model template content here": "여기에 모델 템플릿 내용을 입력하세요",
"Yesterday": "어제", "Yesterday": "어제",
"You": "당신", "You": "당신",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "동시에 최대 {{maxCount}} 파일과만 대화할 수 있습니다 ", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "동시에 최대 {{maxCount}} 파일과만 대화할 수 있습니다 ",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "아래 '관리' 버튼으로 메모리를 추가하여 LLM들과의 상호작용을 개인화할 수 있습니다. 이를 통해 더 유용하고 맞춤화된 경험을 제공합니다.", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "아래 '관리' 버튼으로 메모리를 추가하여 LLM들과의 상호작용을 개인화할 수 있습니다. 이를 통해 더 유용하고 맞춤화된 경험을 제공합니다.",
"You cannot upload an empty file.": "빈 파일을 업로드 할 수 없습니다", "You cannot upload an empty file.": "빈 파일을 업로드 할 수 없습니다",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(pvz. `sh webui.sh --api`)", "(e.g. `sh webui.sh --api`)": "(pvz. `sh webui.sh --api`)",
"(latest)": "(naujausias)", "(latest)": "(naujausias)",
"{{ models }}": "{{ models }}", "{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "", "{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} susirašinėjimai", "{{user}}'s Chats": "{{user}} susirašinėjimai",
"{{webUIName}} Backend Required": "{{webUIName}} būtinas serveris", "{{webUIName}} Backend Required": "{{webUIName}} būtinas serveris",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administratoriai visada turi visus įrankius. Naudotojai turi tuėti prieigą prie dokumentų per modelių nuostatas", "Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administratoriai visada turi visus įrankius. Naudotojai turi tuėti prieigą prie dokumentų per modelių nuostatas",
"Advanced Parameters": "Pažengę nustatymai", "Advanced Parameters": "Pažengę nustatymai",
"Advanced Params": "Pažengę nustatymai", "Advanced Params": "Pažengę nustatymai",
"All": "",
"All Documents": "Visi dokumentai", "All Documents": "Visi dokumentai",
"All models deleted successfully": "", "All models deleted successfully": "",
"Allow Chat Controls": "", "Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Leisti pertraukimą skambučio metu", "Allow Voice Interruption in Call": "Leisti pertraukimą skambučio metu",
"Allowed Endpoints": "", "Allowed Endpoints": "",
"Already have an account?": "Ar jau turite paskyrą?", "Already have an account?": "Ar jau turite paskyrą?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "", "Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "", "Always": "",
"Amazing": "", "Amazing": "",
"an assistant": "assistentas", "an assistant": "assistentas",
@@ -93,6 +95,7 @@
"Are you sure?": "Are esate tikri?", "Are you sure?": "Are esate tikri?",
"Arena Models": "", "Arena Models": "",
"Artifacts": "", "Artifacts": "",
"Ask": "",
"Ask a question": "", "Ask a question": "",
"Assistant": "", "Assistant": "",
"Attach file from knowledge": "", "Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "", "Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "", "Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "", "Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API raktas", "Brave Search API Key": "Brave Search API raktas",
"By {{name}}": "", "By {{name}}": "",
"Bypass Embedding and Retrieval": "", "Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "", "Code Interpreter": "",
"Code Interpreter Engine": "", "Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "", "Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Kolekcija", "Collection": "Kolekcija",
"Color": "", "Color": "",
"ComfyUI": "ComfyUI", "ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "", "Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "", "Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Ryšiai", "Connections": "Ryšiai",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "", "Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Susisiekite su administratoriumi dėl prieigos", "Contact Admin for WebUI Access": "Susisiekite su administratoriumi dėl prieigos",
"Content": "Turinys", "Content": "Turinys",
"Content Extraction Engine": "", "Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "", "Continue with Email": "",
"Continue with LDAP": "", "Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "", "Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "", "Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Valdymas", "Controls": "Valdymas",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "", "Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Nukopijuota", "Copied": "Nukopijuota",
"Copied shared chat URL to clipboard!": "Nukopijavote pokalbio nuorodą", "Copied shared chat URL to clipboard!": "Nukopijavote pokalbio nuorodą",
"Copied to clipboard": "", "Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "Sukurta", "Created At": "Sukurta",
"Created by": "Sukurta", "Created by": "Sukurta",
"CSV Import": "CSV importavimas", "CSV Import": "CSV importavimas",
"Ctrl+Enter to Send": "",
"Current Model": "Dabartinis modelis", "Current Model": "Dabartinis modelis",
"Current Password": "Esamas slaptažodis", "Current Password": "Esamas slaptažodis",
"Custom": "Personalizuota", "Custom": "Personalizuota",
@@ -358,7 +364,7 @@
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "", "Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "", "Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "", "Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "", "Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Aktyvuoti naujas registracijas", "Enable New Sign Ups": "Aktyvuoti naujas registracijas",
"Enabled": "Leisti", "Enabled": "Leisti",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Įsitikinkite, kad CSV failas turi 4 kolonas šiuo eiliškumu: Name, Email, Password, Role.", "Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Įsitikinkite, kad CSV failas turi 4 kolonas šiuo eiliškumu: Name, Email, Password, Role.",
@@ -375,6 +381,7 @@
"Enter CFG Scale (e.g. 7.0)": "", "Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Įveskite blokų persidengimą", "Enter Chunk Overlap": "Įveskite blokų persidengimą",
"Enter Chunk Size": "Įveskite blokų dydį", "Enter Chunk Size": "Įveskite blokų dydį",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "", "Enter description": "",
"Enter Document Intelligence Endpoint": "", "Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "", "Enter Document Intelligence Key": "",
@@ -389,11 +396,13 @@
"Enter Jupyter Token": "", "Enter Jupyter Token": "",
"Enter Jupyter URL": "", "Enter Jupyter URL": "",
"Enter Kagi Search API Key": "", "Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Įveskite kalbos kodus", "Enter language codes": "Įveskite kalbos kodus",
"Enter Model ID": "", "Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Įveskite modelio žymą (pvz. {{modelTag}})", "Enter model tag (e.g. {{modelTag}})": "Įveskite modelio žymą (pvz. {{modelTag}})",
"Enter Mojeek Search API Key": "", "Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Įveskite žingsnių kiekį (pvz. 50)", "Enter Number of Steps (e.g. 50)": "Įveskite žingsnių kiekį (pvz. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "", "Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "", "Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "", "Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +426,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "", "Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Įveskite Tika serverio nuorodą", "Enter Tika Server URL": "Įveskite Tika serverio nuorodą",
"Enter timeout in seconds": "", "Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Įveskite Top K", "Enter Top K": "Įveskite Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Įveskite nuorodą (pvz. http://127.0.0.1:7860/)", "Enter URL (e.g. http://127.0.0.1:7860/)": "Įveskite nuorodą (pvz. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Įveskite nuorododą (pvz. http://localhost:11434", "Enter URL (e.g. http://localhost:11434)": "Įveskite nuorododą (pvz. http://localhost:11434",
@@ -440,9 +450,13 @@
"Example: mail": "", "Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "", "Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "", "Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "", "Exclude": "",
"Execute code for analysis": "", "Execute code for analysis": "",
"Expand": "",
"Experimental": "Eksperimentinis", "Experimental": "Eksperimentinis",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "", "Explore the cosmos": "",
"Export": "Eksportuoti", "Export": "Eksportuoti",
"Export All Archived Chats": "", "Export All Archived Chats": "",
@@ -566,7 +580,7 @@
"Include": "", "Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "Įtraukti `--api-auth` flag when running stable-diffusion-webui", "Include `--api-auth` flag when running stable-diffusion-webui": "Įtraukti `--api-auth` flag when running stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Pridėti `--api` kai vykdomas stable-diffusion-webui", "Include `--api` flag when running stable-diffusion-webui": "Pridėti `--api` kai vykdomas stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "", "Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Informacija", "Info": "Informacija",
"Input commands": "Įvesties komandos", "Input commands": "Įvesties komandos",
"Install from Github URL": "Instaliuoti Github nuorodą", "Install from Github URL": "Instaliuoti Github nuorodą",
@@ -624,6 +638,7 @@
"Local": "", "Local": "",
"Local Models": "Lokalūs modeliai", "Local Models": "Lokalūs modeliai",
"Location access not allowed": "", "Location access not allowed": "",
"Logit Bias": "",
"Lost": "", "Lost": "",
"LTR": "LTR", "LTR": "LTR",
"Made by Open WebUI Community": "Sukurta OpenWebUI bendruomenės", "Made by Open WebUI Community": "Sukurta OpenWebUI bendruomenės",
@@ -764,6 +779,7 @@
"Permission denied when accessing microphone": "Mikrofono leidimas atmestas", "Permission denied when accessing microphone": "Mikrofono leidimas atmestas",
"Permission denied when accessing microphone: {{error}}": "Leidimas naudoti mikrofoną atmestas: {{error}}", "Permission denied when accessing microphone: {{error}}": "Leidimas naudoti mikrofoną atmestas: {{error}}",
"Permissions": "", "Permissions": "",
"Perplexity API Key": "",
"Personalization": "Personalizacija", "Personalization": "Personalizacija",
"Pin": "Smeigtukas", "Pin": "Smeigtukas",
"Pinned": "Įsmeigta", "Pinned": "Įsmeigta",
@@ -809,7 +825,7 @@
"Reasoning Effort": "", "Reasoning Effort": "",
"Record voice": "Įrašyti balsą", "Record voice": "Įrašyti balsą",
"Redirecting you to Open WebUI Community": "Perkeliam Jus į OpenWebUI bendruomenę", "Redirecting you to Open WebUI Community": "Perkeliam Jus į OpenWebUI bendruomenę",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "", "Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Vadinkite save Naudotoju (pvz. Naudotojas mokosi prancūzų kalbos)", "Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Vadinkite save Naudotoju (pvz. Naudotojas mokosi prancūzų kalbos)",
"References from": "", "References from": "",
"Refused when it shouldn't have": "Atmesta kai neturėtų būti atmesta", "Refused when it shouldn't have": "Atmesta kai neturėtų būti atmesta",
@@ -918,11 +934,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "", "Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Numatyti balsą", "Set Voice": "Numatyti balsą",
"Set whisper model": "", "Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "", "Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "", "Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "", "Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "", "Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Nustatymai", "Settings": "Nustatymai",
"Settings saved successfully!": "Parametrai sėkmingai išsaugoti!", "Settings saved successfully!": "Parametrai sėkmingai išsaugoti!",
@@ -964,7 +980,7 @@
"System Prompt": "Sistemos užklausa", "System Prompt": "Sistemos užklausa",
"Tags Generation": "", "Tags Generation": "",
"Tags Generation Prompt": "", "Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "", "Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "", "Talk to model": "",
"Tap to interrupt": "Paspauskite norėdami pertraukti", "Tap to interrupt": "Paspauskite norėdami pertraukti",
"Tasks": "", "Tasks": "",
@@ -979,7 +995,7 @@
"Thanks for your feedback!": "Ačiū už atsiliepimus", "Thanks for your feedback!": "Ačiū už atsiliepimus",
"The Application Account DN you bind with for search": "", "The Application Account DN you bind with for search": "",
"The base to search for users": "", "The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "", "The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Šis modulis kuriamas savanorių. Palaikykite jų darbus finansiškai arba prisidėdami kodu.", "The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Šis modulis kuriamas savanorių. Palaikykite jų darbus finansiškai arba prisidėdami kodu.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "", "The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "", "The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1004,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "", "The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "", "The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Rezultatas turėtų būti tarp 0.0 (0%) ir 1.0 (100%)", "The score should be a value between 0.0 (0%) and 1.0 (100%).": "Rezultatas turėtų būti tarp 0.0 (0%) ir 1.0 (100%)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "", "The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema", "Theme": "Tema",
"Thinking...": "Mąsto...", "Thinking...": "Mąsto...",
"This action cannot be undone. Do you wish to continue?": "Šis veiksmas negali būti atšauktas. Ar norite tęsti?", "This action cannot be undone. Do you wish to continue?": "Šis veiksmas negali būti atšauktas. Ar norite tęsti?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Tai užtikrina, kad Jūsų pokalbiai saugiai saugojami duomenų bazėje. Ačiū!", "This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Tai užtikrina, kad Jūsų pokalbiai saugiai saugojami duomenų bazėje. Ačiū!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Tai eksperimentinė funkcija ir gali veikti nevisada.", "This is an experimental feature, it may not function as expected and is subject to change at any time.": "Tai eksperimentinė funkcija ir gali veikti nevisada.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "", "This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "", "This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "", "This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "", "This response was generated by \"{{model}}\"": "",
"This will delete": "Tai ištrins", "This will delete": "Tai ištrins",
@@ -1132,7 +1148,7 @@
"Why?": "", "Why?": "",
"Widescreen Mode": "Plataus ekrano rėžimas", "Widescreen Mode": "Plataus ekrano rėžimas",
"Won": "", "Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "", "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Nuostatos", "Workspace": "Nuostatos",
"Workspace Permissions": "", "Workspace Permissions": "",
"Write": "", "Write": "",
@@ -1142,6 +1158,7 @@
"Write your model template content here": "", "Write your model template content here": "",
"Yesterday": "Vakar", "Yesterday": "Vakar",
"You": "Jūs", "You": "Jūs",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "", "You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Galite pagerinti modelių darbą suteikdami jiems atminties funkcionalumą.", "You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Galite pagerinti modelių darbą suteikdami jiems atminties funkcionalumą.",
"You cannot upload an empty file.": "", "You cannot upload an empty file.": "",
