diff --git a/docs/features/plugin/functions/actions.md b/docs/features/plugin/functions/actions.md deleted file mode 100644 index 56d74574..00000000 --- a/docs/features/plugin/functions/actions.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -sidebar_position: 6 -title: "Actions" ---- - -# Actions -Action functions allow you to write custom buttons to the message toolbar for end users to interact -with. This feature enables more interactive messaging, enabling users to grant permission before a -task is performed, generate visualizations of structured data, download an audio snippet of chats, -and many other use cases. - -A scaffold of Action code can be found [in the community section](https://openwebui.com/f/hub/custom_action/). - -An example of a graph visualization Action can be seen in the video below. - -

- - Graph Visualization Action - -

diff --git a/docs/features/plugin/functions/index.mdx b/docs/features/plugin/functions/index.mdx deleted file mode 100644 index 5afd5519..00000000 --- a/docs/features/plugin/functions/index.mdx +++ /dev/null @@ -1,363 +0,0 @@ ---- -sidebar_position: 1 -title: "Functions" ---- - -## What are Functions? -Functions are modular operations that allow users to enhance the capabilities of the AI by embedding specific logic or actions directly into workflows. Unlike tools, which operate as external utilities, functions run natively within the OpenWebUI environment and handle tasks such as data processing, visualization, and interactive messaging. Functions are lightweight and designed to execute efficiently on the same server as the WebUI, enabling quick responses without the need for external dependencies. - -## How can I use Functions? -Functions can be used, [once installed](#how-to-install-functions), by assigning them to an LLM or enabling them globally. Some function types will always be enabled globally, such as manifolds. To assign a function to a model, you simply need to navigate to Workspace => Models. Here you can select the model for which you’d like to enable any Functinos. - -Once you click the pencil icon to edit the model settings, scroll down to the Functions section and check any Functions you wish to enable. Once done you must click save. - -You also have the ability to enable Functions globally for ALL models. In order to do this, navigate to Workspace => Functions and click the "..." menu. Once the menu opens, simply enable the "Global" switch and your function will be enabled for every model in your OpenWebUI instance. -## How to install Functions -The Functions import process is quite simple. 
You will have two options: - -### Download and import manually -Navigate to the community site: https://openwebui.com/functions/ -1) Click on the Function you wish to import -2) Click the blue “Get” button in the top right-hand corner of the page -3) Click “Download as JSON export” -4) You can now upload the Funtion into OpenWebUI by navigating to Workspace => Functions and clicking “Import Functions - -### Import via your OpenWebUI URL -1) Navigate to the community site: https://openwebui.com/functions/ -2) Click on the Function you wish to import -3) Click the blue “Get” button in the top right-hand corner of the page -4) Enter the IP address of your OpenWebUI instance and click “Import to WebUI” which will automatically open your instance and allow you to import the Function. - -Note: You can install your own Function and other Functions not tracked on the community site using the manual import method. Please do not import Functions you do not understand or are not from a trustworthy source. Running unknown code is ALWAYS a risk. - -## What are the support types of functions -### Filter -Filters are used to manipulate the user input and/or the LLM output to add, remove, format, or otherwise adjust the content of the body object. - -Filters have a few main components: - -#### Inlet Function -The inlet is user to pre-process a user input before it is send to the LLM for processing. - -#### Outlet Function -The outlet is used to post-process the output from the LLM. It is important to note that when you perform actions such as stripping/replacing content, this will happen after the output is rendered to the UI. - -
-Example - -``` -class Filter: - # Define and Valves - class Valves(BaseModel): - priority: int = Field( - default=0, description="Priority level for the filter operations." - ) - test_valve: int = Field( - default=4, description="A valve controlling a numberical value" - ) - pass - - # Define any UserValves - class UserValves(BaseModel): - test_user_valve: bool = Field( - default=False, description="A user valve controlling a True/False (on/off) switch" - ) - pass - - def __init__(self): - self.valves = self.Valves() - pass - - def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict: - print(f"inlet:{__name__}") - print(f"inlet:body:{body}") - print(f"inlet:user:{__user__}") - - # Pre-processing logic here - - return body - - def outlet(self, body: dict, __user__: Optional[dict] = None) -> dict: - print(f"outlet:{__name__}") - print(f"outlet:body:{body}") - print(f"outlet:user:{__user__}") - - # Post-processing logic here - - return body -``` -
- -### Action -Actions are used to create a button in the Message UI (the small buttons found directly underneath individual chat messages). - -Actions have a single main component called an action function. This component takes an object defining the type of action and the data being processed. - -
-Example - -``` -async def action( - self, - body: dict, - __user__=None, - __event_emitter__=None, - __event_call__=None, - ) -> Optional[dict]: - print(f"action:{__name__}") - - response = await __event_call__( - { - "type": "input", - "data": { - "title": "write a message", - "message": "here write a message to append", - "placeholder": "enter your message", - }, - } - ) - print(response) -``` -
- -#### Pipes - -#### Pipe -A Pipe is used to create a "Model" with custom logic and processing. A Pipe will always show up as it's own singular model in the OpenWebUI interface and will, much like a filter - -A Pipe has a single main component called a pipe function. This component encapsulates all of the primary logic that the Pipe will perform. - -
-Example - -``` -class Pipe: - class Valves(BaseModel): - RANDOM_CONFIG_OPTION: str = Field(default="") - - def __init__(self): - self.type = "pipe" - self.id = "blah" - self.name = "Testing" - self.valves = self.Valves( - **{"RANDOM_CONFIG_OPTION": os.getenv("RANDOM_CONFIG_OPTION", "")} - ) - pass - - def get_provider_models(self): - return [ - {"id": "model_id_1", "name": "model_1"}, - {"id": "model_id_2", "name": "model_2"}, - {"id": "model_id_3", "name": "model_3"}, - ] - - def pipe(self, body: dict) -> Union[str, Generator, Iterator]: - # Logic goes here - return body -``` -
- -#### Manifold -A Manifold is used to create a collection of Pipes. If a Pipe creates a singular "Model", a Manifold creates a set of "Models." Manifolds are typically used to create integrations with other providers. - -A Manifold has two main components: - -##### Pipes Function -This is used to simply initiate a dictionary to hold all of the Pipes created by the manifold - -##### Pipe Function -As referenced above, this component encapsulates all of the primary logic that the Pipe will perform. - - -
-Example - -``` -class Pipe: - class Valves(BaseModel): - PROVIDER_API_KEY: str = Field(default="") - - def __init__(self): - self.type = "manifold" - self.id = "blah" - self.name = "Testing" - self.valves = self.Valves( - **{"PROVIDER_API_KEY": os.getenv("PROVIDER_API_KEY", "")} - ) - pass - - def get_provider_models(self): - return [ - {"id": "model_id_1", "name": "model_1"}, - {"id": "model_id_2", "name": "model_2"}, - {"id": "model_id_3", "name": "model_3"}, - ] - - def pipes(self) -> List[dict]: - return self.get_provider_models() - - def pipe(self, body: dict) -> Union[str, Generator, Iterator]: - # Logic goes here - return body -``` -
- -Note: To differentiate between a Pipe and a Manifold you will need to specify the type in def init: -``` -def __init__(self): - self.type = "pipe" - self.id = "blah" - self.name = "Testing" - pass -``` - -or - -``` -def __init__(self): - self.type = "manifold" - self.id = "blah" - self.name = "Testing/" - pass -``` - -## Shared Function Components - -### Valves and UserValves - (optional, but HIGHLY encouraged) - -Valves and UserValves are used to allow users to provide dyanmic details such as an API key or a configuration option. These will create a fillable field or a bool switch in the GUI menu for the given function. - -Valves are configurable by admins alone and UserValves are configurable by any users. - -
-Example - -``` -# Define and Valves - class Valves(BaseModel): - priority: int = Field( - default=0, description="Priority level for the filter operations." - ) - test_valve: int = Field( - default=4, description="A valve controlling a numberical value" - ) - pass - - # Define any UserValves - class UserValves(BaseModel): - test_user_valve: bool = Field( - default=False, description="A user valve controlling a True/False (on/off) switch" - ) - pass - - def __init__(self): - self.valves = self.Valves() - pass -``` -
- -### Event Emitters -Event Emitters are used to add additional information to the chat interface. Similarly to Filter Outlets, Event Emitters are capable of appending content to the chat. Unlike Filter Outlets, they are not capable of stripping information. Additionally, emitters can be activated at any stage during the function. - -There are two different types of Event Emitters: - -#### Status -This is used to add statuses to a message while it is performing steps. These can be done at any stage during the Function. These statuses appear right above the message content. These are very useful for Functions that delay the LLM response or process large amounts of information. This allows you to inform users what is being processed in real-time. - -``` -await __event_emitter__( - { - "type": "status", # We set the type here - "data": {"description": "Message that shows up in the chat", "done": False}, - # Note done is False here indicating we are still emitting statuses - } - ) -``` - -
-Example - -``` -async def test_function( - self, prompt: str, __user__: dict, __event_emitter__=None - ) -> str: - """ - This is a demo - - :param test: this is a test parameter - """ - - await __event_emitter__( - { - "type": "status", # We set the type here - "data": {"description": "Message that shows up in the chat", "done": False}, - # Note done is False here indicating we are still emitting statuses - } - ) - - # Do some other logic here - await __event_emitter__( - { - "type": "status", - "data": {"description": "Completed a task message", "done": True}, - # Note done is True here indicating we are done emitting statuses - } - ) - - except Exception as e: - await __event_emitter__( - { - "type": "status", - "data": {"description": f"An error occured: {e}", "done": True}, - } - ) - - return f"Tell the user: {e}" -``` -
- -#### Message -This type is used to append a message to the LLM at any stage in the Function. This means that you can append messages, embed images, and even render web pages before, or after, or during the LLM response. - -``` -await __event_emitter__( - { - "type": "message", # We set the type here - "data": {"content": "This message will be appended to the chat."}, - # Note that with message types we do NOT have to set a done condition - } - ) -``` - -
-Example - -``` -async def test_function( - self, prompt: str, __user__: dict, __event_emitter__=None - ) -> str: - """ - This is a demo - - :param test: this is a test parameter - """ - - await __event_emitter__( - { - "type": "message", # We set the type here - "data": {"content": "This message will be appended to the chat."}, - # Note that with message types we do NOT have to set a done condition - } - ) - - except Exception as e: - await __event_emitter__( - { - "type": "status", - "data": {"description": f"An error occured: {e}", "done": True}, - } - ) - - return f"Tell the user: {e}" -``` -
diff --git a/docs/features/plugin/tools/index.mdx b/docs/features/plugin/tools/index.mdx deleted file mode 100644 index 340da23d..00000000 --- a/docs/features/plugin/tools/index.mdx +++ /dev/null @@ -1,187 +0,0 @@ ---- -sidebar_position: 0 -title: "Tools" ---- - -## What are Tools? -Tools are python scripts that are provided to an LLM at the time of the request. Tools allow LLMs to perform actions and receive additional context as a result. Generally speaking, your LLM of choice will need to support function calling for tools to be reliably utilized. - -Tools enable many use cases for chats, including web search, web scraping, and API interactions within the chat. - -Many Tools are available to use on the [Community Website](https://openwebui.com/tools) and can easily be imported into your Open WebUI instance. - -## How can I use Tools? -[Once installed](#how-to-install-tools), Tools can be used by assigning them to any LLM that supports function calling and then enabling that Tool. To assign a Tool to a model, you need to navigate to Workspace => Models. Here you can select the model for which you’d like to enable any Tools. - -Once you click the pencil icon to edit the model settings, scroll down to the Tools section and check any Tools you wish to enable. Once done you must click save. - -Now that Tools are enabled for the model, you can click the “+” icon when chatting with an LLM to use various Tools. Please keep in mind that enabling a Tool does not force it to be used. It means the LLM will be provided the option to call this Tool. - -Lastly, we do provide a filter function on the community site that allows LLMs to autoselect Tools without you needing to enable them in the “+” icon menu: https://openwebui.com/f/hub/autotool_filter/ - -Please note: when using the AutoTool Filter, you will still need to take the steps above to enable the Tools per model. - -## How to install Tools -The Tools import process is quite simple. 
You will have two options: - -### Download and import manually -Navigate to the community site: https://openwebui.com/tools/ -1) Click on the Tool you wish to import -2) Click the blue “Get” button in the top right-hand corner of the page -3) Click “Download as JSON export” -4) You can now upload the Tool into OpenWebUI by navigating to Workspace => Tools and clicking “Import Tools” - -### Import via your OpenWebUI URL -1) Navigate to the community site: https://openwebui.com/tools/ -2) Click on the Tool you wish to import -3) Click the blue “Get” button in the top right-hand corner of the page -4) Enter the IP address of your OpenWebUI instance and click “Import to WebUI” which will automatically open your instance and allow you to import the Tool. - -Note: You can install your own Tools and other Tools not tracked on the community site using the manual import method. Please do not import Tools you do not understand or are not from a trustworthy source. Running unknown code is ALWAYS a risk. - -## What sorts of things can Tools do? -Tools enable diverse use cases for interactive conversations by providing a wide range of functionality such as: - -- [**Web Search**](https://openwebui.com/t/constliakos/web_search/): Perform live web searches to fetch real-time information. -- [**Image Generation**](https://openwebui.com/t/justinrahb/image_gen/): Generate images based on the user prompt -- [**External Voice Synthesis**](https://openwebui.com/t/justinrahb/elevenlabs_tts/): Make API requests within the chat to integrate external voice synthesis service ElevenLabs and generate audio based on the LLM output. - -## Important Tools Components -### Valves and UserValves - (optional, but HIGHLY encouraged) - -Valves and UserValves are used to allow users to provide dyanmic details such as an API key or a configuration option. These will create a fillable field or a bool switch in the GUI menu for the given Tool. 
- -Valves are configurable by admins alone and UserValves are configurable by any users. - -
-Example - -``` -# Define and Valves - class Valves(BaseModel): - priority: int = Field( - default=0, description="Priority level for the filter operations." - ) - test_valve: int = Field( - default=4, description="A valve controlling a numberical value" - ) - pass - - # Define any UserValves - class UserValves(BaseModel): - test_user_valve: bool = Field( - default=False, description="A user valve controlling a True/False (on/off) switch" - ) - pass - - def __init__(self): - self.valves = self.Valves() - pass -``` -
- -### Event Emitters -Event Emitters are used to add additional information to the chat interface. Similarly to Filter Outlets, Event Emitters are capable of appending content to the chat. Unlike Filter Outlets, they are not capable of stripping information. Additionally, emitters can be activated at any stage during the Tool. - -There are two different types of Event Emitters: - -#### Status -This is used to add statuses to a message while it is performing steps. These can be done at any stage during the Tool. These statuses appear right above the message content. These are very useful for Tools that delay the LLM response or process large amounts of information. This allows you to inform users what is being processed in real-time. - -``` -await __event_emitter__( - { - "type": "status", # We set the type here - "data": {"description": "Message that shows up in the chat", "done": False}, - # Note done is False here indicating we are still emitting statuses - } - ) -``` - -
-Example - -``` -async def test_function( - self, prompt: str, __user__: dict, __event_emitter__=None - ) -> str: - """ - This is a demo - - :param test: this is a test parameter - """ - - await __event_emitter__( - { - "type": "status", # We set the type here - "data": {"description": "Message that shows up in the chat", "done": False}, - # Note done is False here indicating we are still emitting statuses - } - ) - - # Do some other logic here - await __event_emitter__( - { - "type": "status", - "data": {"description": "Completed a task message", "done": True}, - # Note done is True here indicating we are done emitting statuses - } - ) - - except Exception as e: - await __event_emitter__( - { - "type": "status", - "data": {"description": f"An error occured: {e}", "done": True}, - } - ) - - return f"Tell the user: {e}" -``` -
- -#### Message -This type is used to append a message to the LLM at any stage in the Tool. This means that you can append messages, embed images, and even render web pages before, or after, or during the LLM response. - -``` -await __event_emitter__( - { - "type": "message", # We set the type here - "data": {"content": "This message will be appended to the chat."}, - # Note that with message types we do NOT have to set a done condition - } - ) -``` - -
-Example - -``` -async def test_function( - self, prompt: str, __user__: dict, __event_emitter__=None - ) -> str: - """ - This is a demo - - :param test: this is a test parameter - """ - - await __event_emitter__( - { - "type": "message", # We set the type here - "data": {"content": "This message will be appended to the chat."}, - # Note that with message types we do NOT have to set a done condition - } - ) - - except Exception as e: - await __event_emitter__( - { - "type": "status", - "data": {"description": f"An error occured: {e}", "done": True}, - } - ) - - return f"Tell the user: {e}" -``` -
diff --git a/docs/features/url-params.md b/docs/features/url-params.md index 812d7fbe..e446af41 100644 --- a/docs/features/url-params.md +++ b/docs/features/url-params.md @@ -44,7 +44,7 @@ The following table lists the available URL parameters, their function, and exam ### 4. **Tool Selection** -- **Description**: The `tools` or `tool-ids` parameters specify which [tools](../plugin/tools/index.mdx) to activate within the chat. +- **Description**: The `tools` or `tool-ids` parameters specify which [tools](./workspace/plugins/tools/index.mdx) to activate within the chat. - **How to Set**: Provide a comma-separated list of tool IDs as the parameter’s value. - **Example**: `/?tools=tool1,tool2` or `/?tool-ids=tool1,tool2` - **Behavior**: Each tool ID is matched and activated within the session for user interaction. diff --git a/docs/features/workspace/Plugins/functions/actions.md b/docs/features/workspace/Plugins/functions/actions.md new file mode 100644 index 00000000..6e9b0302 --- /dev/null +++ b/docs/features/workspace/Plugins/functions/actions.md @@ -0,0 +1,82 @@ +--- +sidebar_position: 5 +title: "💬 Actions " +--- + +# 💬 Actions + +Ever wanted a 🔘 button that lets you quickly do something with the 🤖 AI’s response? That’s where **Actions** come in. Actions in Open WebUI are mini-interactive elements you can attach to individual chat messages, making interactions smoother and more efficient. ⚡ + +## TL;DR +- **Actions** are 🛠️ buttons or interactive elements you can add to chat messages. +- They allow users to interact with messages—such as ✅ confirming, 📝 adding notes, or 🔄 triggering additional responses. + +### What Are Actions? 🤔 +Actions allow you to place buttons right below any chat message, making it super easy for users to respond to prompts, confirm information, or trigger a new task based on the conversation. + +### How Do Actions Work? 
⚙️ +Actions are created with a primary component, the **action function**, which defines what happens when the button is clicked. For instance, an action might open a small text input where users can add feedback or perform a secondary task. + +### Examples of Actions: +1. **Confirm Action** ✅ – Users click to confirm an instruction or agreement. +2. **Add Feedback** 📝 – Opens a text box to input additional information. +3. **Quick Reply** ⚡ – Buttons for fast responses like “Yes” 👍 or “No” 👎. + +By making interactions intuitive, Actions create a better user experience within Open WebUI, helping users stay engaged and making workflows faster and easier to manage. + +Some practical use cases include: +- Granting permission before a specific task is performed +- Generating visualizations of structured data 📊 +- Downloading audio snippets of chats 🎧 +- Enabling other interactive use cases for a richer messaging experience + +## 💻 Getting Started with Actions +To get started with action functions, check out the [community functions](https://openwebui.com). [This guide](index.mdx#how-to-install-functions) provides a foundation for setting up an action. + +## 📊 Example: Graph Visualization Action + +For example, a graph visualization Action can enrich user interactions by enabling real-time data visuals. Check out the example below to see it in action: + +

+ + Graph Visualization Action + +

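The action-function flow demonstrated above can also be traced in plain Python. The sketch below is a hedged illustration rather than the official scaffold: the event payload shapes follow the community example, while the `fake_event_call` / `fake_event_emitter` stubs and the returned `{"appended": ...}` dict are invented here purely so the flow can run outside Open WebUI.

```python
import asyncio
from typing import Optional


class Action:
    # Illustrative Action class; real Open WebUI Actions use the same method
    # signature, but this body is simplified for demonstration.
    async def action(
        self,
        body: dict,
        __user__=None,
        __event_emitter__=None,
        __event_call__=None,
    ) -> Optional[dict]:
        # Ask the user for input through the UI (an "input" event).
        response = await __event_call__(
            {
                "type": "input",
                "data": {
                    "title": "Write a message",
                    "message": "Write a message to append",
                    "placeholder": "Enter your message",
                },
            }
        )
        # Append the collected text to the chat as a "message" event.
        await __event_emitter__(
            {"type": "message", "data": {"content": f"You wrote: {response}"}}
        )
        return {"appended": response}


# Outside Open WebUI, stub the UI callbacks so the flow can be traced.
events = []


async def fake_event_call(event: dict) -> str:
    return "hello"  # pretend the user typed "hello" into the input box


async def fake_event_emitter(event: dict) -> None:
    events.append(event)


result = asyncio.run(
    Action().action(
        {}, __event_call__=fake_event_call, __event_emitter__=fake_event_emitter
    )
)
print(result)  # {'appended': 'hello'}
```

Inside Open WebUI the real `__event_call__` and `__event_emitter__` are supplied by the platform; only the `action` method itself would be shipped.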
+ +Explore and experiment to make your interactions more dynamic and engaging with **Actions**! + + + +## 📝 Starting with Code + +If you’re ready to dive into writing code, start with reading this [document](../start_coding.md). You can also use the **Action code scaffold** available in [the community section](https://openwebui.com/f/hub/custom_action/). + +Actions have a single main component called an action function. This component takes an object defining the type of action and the data being processed. + +Example + +``` +async def action( + self, + body: dict, + __user__=None, + __event_emitter__=None, + __event_call__=None, + ) -> Optional[dict]: + print(f"action:{__name__}") + + response = await __event_call__( + { + "type": "input", + "data": { + "title": "write a message", + "message": "here write a message to append", + "placeholder": "enter your message", + }, + } + ) + print(response) +``` + + + diff --git a/docs/features/workspace/Plugins/functions/filter.md b/docs/features/workspace/Plugins/functions/filter.md new file mode 100644 index 00000000..45c6634e --- /dev/null +++ b/docs/features/workspace/Plugins/functions/filter.md @@ -0,0 +1,96 @@ +--- +sidebar_position: 5 +title: "🚦 Filters" +--- + +# 🚦 Filters + +When using Open WebUI, not all user messages or model outputs might be perfectly suited for every conversation—this is where **Filters** come to the rescue 🛠️. Filters let you control what goes in and out, ensuring that only the most relevant content reaches your chat or model. 🎯 + +## TL;DR +- **Filters** are used to pre-process (edit) incoming user messages or post-process (tweak) the AI’s responses. ✍️ +- Filters help you adjust content on the fly, adding flexibility for sensitive topics, formatting, or message simplification. 🔧 + +### Why Use Filters? 🤔 +Filters are ideal for adding rules to chat interactions, like removing specific keywords or reformatting text. + +### Examples of Filters: +1. 
**Profanity Filter** 🚫 – Screens and removes inappropriate words from user messages. +2. **Format Adjuster** ✨ – Automatically reformats incoming or outgoing text for consistency. +3. **Spam Blocker** 🛑 – Filters out repetitive or unwanted messages. +4. **Resize Pictures** 📷 – Shrinks images before they are passed to the model. + +By setting up Filters, you control the flow of your conversation, ensuring that interactions are smooth, clean, and always relevant! 💬 + +## 💻 Getting Started with Filters +To get started with filter functions, check out the [community functions](https://openwebui.com). [This guide](index.mdx#how-to-install-functions) provides a foundation for setting up a filter. + + + +### How Filters Work +Filters work with two main components: +1. **Inlet** – Pre-processes a user’s message before sending it to the model. +2. **Outlet** – Adjusts the model’s response after it’s generated. + +When a filter pipeline is enabled on a model or pipe, the incoming user message (the "inlet") is passed to the filter for processing. The filter performs the desired action on the message before requesting the chat completion from the LLM. Finally, the filter post-processes the outgoing LLM message (the "outlet") before it is sent to the user. + + + + + +

+ + Filter Workflow + +

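As a concrete instance of the inlet step described above, here is a minimal, self-contained sketch of a profanity filter. The blocked-word list and the `[redacted]` placeholder are invented for illustration; a real filter would follow the full `Filter` class shape (including `Valves`) shown in the example below.

```python
class ProfanityFilter:
    # Illustrative inlet-only filter: redacts blocked words from user
    # messages before they reach the model. The word list is hypothetical.
    def __init__(self):
        self.blocked = {"darn", "heck"}

    def inlet(self, body: dict, __user__=None) -> dict:
        for message in body.get("messages", []):
            if message.get("role") == "user":
                cleaned = [
                    "[redacted]"
                    if word.lower().strip(".,!?") in self.blocked
                    else word
                    for word in message["content"].split()
                ]
                message["content"] = " ".join(cleaned)
        return body


body = {"messages": [{"role": "user", "content": "Well darn, that failed!"}]}
print(ProfanityFilter().inlet(body)["messages"][0]["content"])
# Well [redacted] that failed!
```

An `outlet` method with the same shape would apply the equivalent post-processing to the model's response.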
+ + +## 📝 Starting with Code + +If you’re ready to dive into writing code, start with reading this [document](../start_coding.md). + + +Example + +``` +class Filter: + # Define and Valves + class Valves(BaseModel): + priority: int = Field( + default=0, description="Priority level for the filter operations." + ) + test_valve: int = Field( + default=4, description="A valve controlling a numberical value" + ) + pass + + # Define any UserValves + class UserValves(BaseModel): + test_user_valve: bool = Field( + default=False, description="A user valve controlling a True/False (on/off) switch" + ) + pass + + def __init__(self): + self.valves = self.Valves() + pass + + def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict: + print(f"inlet:{__name__}") + print(f"inlet:body:{body}") + print(f"inlet:user:{__user__}") + + # Pre-processing logic here + + return body + + def outlet(self, body: dict, __user__: Optional[dict] = None) -> dict: + print(f"outlet:{__name__}") + print(f"outlet:body:{body}") + print(f"outlet:user:{__user__}") + + # Post-processing logic here + + return body +``` \ No newline at end of file diff --git a/docs/features/workspace/Plugins/functions/index.mdx b/docs/features/workspace/Plugins/functions/index.mdx new file mode 100644 index 00000000..9f57d476 --- /dev/null +++ b/docs/features/workspace/Plugins/functions/index.mdx @@ -0,0 +1,53 @@ +--- +sidebar_position: 4 +title: "🔧 Functions" +--- + +# 🔧 Functions + +Just started with Open WebUI or finding yourself wondering, "What exactly are functions, and how do they fit in here?" Let’s unravel it together! Functions are designed to add flexibility and control directly within Open WebUI, allowing you to expand its functionality without the hassle of complex external setups. 🚀 + +## TL;DR +- **Functions** are like built-in enhancements for Open WebUI, letting you add custom processing, filtering, and even interactivity without needing external tools. 
🛠️ +- Functions help WebUI handle tasks like formatting, visualizations, or creating interactive elements. 📊 +- **Types of Functions** include Filters, Actions, and Pipes, each offering a unique way to enhance or control Open WebUI’s behavior. + +Using Functions is straightforward! Once you've installed one, it’s just a matter of enabling it per model or globally for all models. You’re now ready to get creative with how WebUI processes data! 🎨 + +## What are Functions? 🤔 +Think of **Functions** as ways to teach Open WebUI some new skills—not for AI model answers but for WebUI’s internal workings. Functions let you configure processes like filtering messages, formatting text, or even creating interactive chat features. + +### Why Use Functions? 🔍 +Functions are your go-to for customizing the WebUI’s behavior or adding new ways to interact with AI models. + +### Examples of Functions: +1. **Add new AI models** to your setup, such as Anthropic 🤖. +2. **Set up message filters** to weed out inappropriate content. 🚫 +3. **Create custom buttons** that perform specific actions in the chat. 🔘 + +Functions make Open WebUI more dynamic, keeping it flexible for whatever you need it to do! Dive into the next sections to learn more about each function type. + +## How can I use Functions? 💻 +Once [installed](#how-to-install-functions), Functions can be used by assigning them to an LLM or enabling them globally. Some function types, such as manifolds, are always enabled globally. To assign a function to a model, navigate to **Workspace => Models**. Here you can select the model for which you’d like to enable any Functions. + +Once you click the pencil icon ✏️ to edit the model settings, scroll down to the Functions section and check any Functions you wish to enable. Once done, click **Save** 💾. + +You can also enable Functions globally for ALL models. To do this, navigate to **Workspace => Functions** and click the "..." menu. 
Once the menu opens, simply enable the **Global** switch 🌐, and your function will be enabled for every model in your OpenWebUI instance. + +## How to install Functions 📥 +The Functions import process is quite simple. You will have two options: + +### Download and import manually 📄 +Navigate to the community site: https://openwebui.com/functions/ +1) Click on the Function you wish to import. +2) Click the blue “Get” button in the top right-hand corner of the page. +3) Click “Download as JSON export.” +4) You can now upload the Function into OpenWebUI by navigating to **Workspace => Functions** and clicking **Import Functions**. + +### Import via your OpenWebUI URL 🌐 +1) Navigate to the community site: https://openwebui.com/functions/ +2) Click on the Function you wish to import. +3) Click the blue “Get” button in the top right-hand corner of the page. +4) Enter the IP address of your OpenWebUI instance and click **Import to WebUI** which will automatically open your instance and allow you to import the Function. + +> **Note:** You can install your own Function and other Functions not tracked on the community site using the manual import method. Please do not import Functions you do not understand or are not from a trustworthy source. Running unknown code is ALWAYS a risk. ⚠️ diff --git a/docs/features/workspace/Plugins/functions/pipe.md b/docs/features/workspace/Plugins/functions/pipe.md new file mode 100644 index 00000000..d1ac27a9 --- /dev/null +++ b/docs/features/workspace/Plugins/functions/pipe.md @@ -0,0 +1,67 @@ +--- +sidebar_position: 5 +title: "🔗 Pipes" +--- + +# 🔗 Pipes + +If you’re ready to take Open WebUI to the next level, **Pipes** might just be the feature you need. Pipes let you create custom "mini-models" with specific logic that work as independent, fully-functional models in the WebUI interface. 🚀 + +## TL;DR +- **Pipes** act as standalone models within WebUI, letting you add unique logic and processing. 
⚙️ +- With Pipes, you can design specialized models or workflows directly in Open WebUI. + +### What Are Pipes? 🤔 +A Pipe is like a model that you build yourself. You get to define its logic, what it does, and how it processes messages. Pipes can appear as a unique model, enabling custom processing steps beyond the default options in WebUI. + +### How Pipes Work 🔍 +Pipes are defined by a primary component called the **pipe function**. This is where the logic lives—whether it’s to transform data, format text, or something more specialized. +
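As a minimal sketch of that idea (the class body and message handling here are illustrative, not the official scaffold), a pipe function receives the request body and returns the reply:

```python
from typing import Union, Generator, Iterator


class Pipe:
    def __init__(self):
        # The name shown for this "mini-model" in the model list.
        self.name = "Echo Demo"

    def pipe(self, body: dict) -> Union[str, Generator, Iterator]:
        # Pull the most recent user message out of the request body
        # and build the reply the user will see in the chat.
        messages = body.get("messages", [])
        last = messages[-1].get("content", "") if messages else ""
        return f"Pipe received: {last}"
```

Selecting this pipe as a model routes each chat request through `pipe`, so whatever it returns is the response shown to the user.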

+ +*(Figure: Pipe Workflow)* + +

+ + +### Examples of Pipes: +1. **Sentiment Analyzer** 💬 – A pipe that classifies the sentiment of text. +2. **Summarizer** 📄 – Automatically generates a concise summary for long inputs. +3. **Keyword Extractor** 🔑 – Identifies and highlights key terms in user messages. + +If you’re looking to add advanced functionality and custom processing to Open WebUI, Pipes are your best friend. 🛠️ They give you the freedom to create unique features that act like personal assistants in your WebUI setup! + + +## 💻 Getting Started with Pipes +To start using Pipe functions, check out the [community functions](https://openwebui.com). [This guide](index.mdx#how-to-install-functions) provides a foundation for installing and setting up a function. + + +## 📝 Starting with Code + +If you’re ready to dive into writing code, start by reading this [document](../start_coding.md). +An example: + +```python +import os +from typing import Union, Generator, Iterator + +from pydantic import BaseModel, Field + +class Pipe: + class Valves(BaseModel): + RANDOM_CONFIG_OPTION: str = Field(default="") + + def __init__(self): + self.type = "pipe" + self.id = "blah" + self.name = "Testing" + self.valves = self.Valves( + **{"RANDOM_CONFIG_OPTION": os.getenv("RANDOM_CONFIG_OPTION", "")} + ) + + def get_provider_models(self): + return [ + {"id": "model_id_1", "name": "model_1"}, + {"id": "model_id_2", "name": "model_2"}, + {"id": "model_id_3", "name": "model_3"}, + ] + + def pipe(self, body: dict) -> Union[str, Generator, Iterator]: + # Logic goes here + return body +``` \ No newline at end of file diff --git a/docs/features/plugin/index.mdx b/docs/features/workspace/Plugins/index.mdx similarity index 99% rename from docs/features/plugin/index.mdx rename to docs/features/workspace/Plugins/index.mdx index 3ec51fef..4bc3be47 100644 --- a/docs/features/plugin/index.mdx +++ b/docs/features/workspace/Plugins/index.mdx @@ -1,5 +1,5 @@ --- -sidebar_position: 2 +sidebar_position: 3 title: "🛠️ Tools & Functions" --- diff --git a/docs/features/workspace/Plugins/start_coding.md
b/docs/features/workspace/Plugins/start_coding.md new file mode 100644 index 00000000..77989350 --- /dev/null +++ b/docs/features/workspace/Plugins/start_coding.md @@ -0,0 +1,125 @@ +--- +sidebar_position: 5 +title: "🔧 Coding Functions" +--- + +# 🔧 Coding Functions + +Alright, so you're ready to dive into coding custom functions in Open WebUI! Whether you're here to enhance the WebUI experience, enable new AI capabilities, or automate tasks, this guide will make sure you know exactly how to get started. We’ll walk through the essentials of setting up a function, configuring user input options, and using real-time updates in the chat interface. + +## TL;DR + +- **Valves & UserValves** allow for dynamic configuration by admins and users, making functions adaptable. +- **Event Emitters** add real-time updates to the chat interface, letting users know the status of ongoing processes. +- Starting a new function requires defining parameters, logic, and configuration options. + +## Let's Get Coding! 🛠️ + +Open WebUI functions are like adding superpowers to your WebUI! They allow you to introduce custom interactions, gather user input, and control behavior right within the platform. + +Here’s a step-by-step guide to help you start: + +## Step 1: Define Your Function + +Every function in Open WebUI starts as a Python method. This function is where all the logic lives—whether it’s fetching data, transforming information, or communicating with an external API. + +```python +async def my_first_function(self, prompt: str, __user__: dict) -> str: + """ + This is a demo function to get started. + + :param prompt: The main text input from the user. + :param __user__: User information for personalized responses. + """ + # Function logic goes here! + return "Hello, Open WebUI!" +``` + +## Step 2: Add Dynamic Inputs with Valves and UserValves 💡 + +Functions can be configured with **Valves** and **UserValves** to make them more flexible.
+ +- **Valves**: These are admin-controlled fields that allow administrators to set parameters. +- **UserValves**: These are user-configurable options like switches or fields, letting users set details such as an API key or enabling/disabling specific features. + +### Example Usage of Valves and UserValves + +To set up these fields, define them as nested classes inside your function class and instantiate them in `__init__`: + +```python +from pydantic import BaseModel, Field + +class Filter: # the enclosing class can be any function type (Filter, Pipe, or Action) + class Valves(BaseModel): + priority: int = Field( + default=0, description="Priority level for the function." + ) + max_attempts: int = Field( + default=3, description="Maximum number of attempts allowed." + ) + + class UserValves(BaseModel): + enable_notifications: bool = Field( + default=False, description="Toggle notifications for the function." + ) + + def __init__(self): + self.valves = self.Valves() + self.user_valves = self.UserValves() +``` + +## Step 3: Keep Users Updated with Event Emitters ⏳ +:::tip +If your goal is to communicate progress or results to users in the chat, you need to use these functions. +::: + +**Event Emitters** make your functions interactive by adding status messages or updates to the chat interface. + +### Types of Event Emitters: + +- **Status**: Shows real-time updates like “Processing…” or “Almost done!” +- **Message**: Adds custom messages at any stage, even embedding images or links. + +#### Example of a Status Event Emitter + +```python +await __event_emitter__( + { + "type": "status", + "data": {"description": "Processing your request...", "done": False} + } +) +# Perform some processing +await __event_emitter__( + { + "type": "status", + "data": {"description": "Completed processing!", "done": True} + } +) +``` +### Example of a Message Event Emitter +```python +await __event_emitter__( + { + "type": "message", + "data": {"content": "Here’s your data! 📊"} + } +) +``` + +## Step 4: Handle Errors Gracefully 🚨 +To make your functions user-friendly, handle any exceptions and send a clear message back to the user.
+ +```python +try: + result = await run_main_logic() # hypothetical step that might raise +except Exception as e: + await __event_emitter__( + { + "type": "status", + "data": {"description": f"An error occurred: {e}", "done": True}, + } + ) + return f"Oops! Something went wrong: {e}" +``` + +## Putting It All Together 🎉 +With **Valves** and **UserValves**, you control input options. Event Emitters keep users informed with real-time updates. These components, combined with well-structured function logic, make Open WebUI functions versatile and user-centric. + +Now you’re all set to start coding your own custom functions! Check the specific section related to each function type for more details; the community website can also be a great source of working code! 💻🌐 \ No newline at end of file diff --git a/docs/features/workspace/Plugins/tools/index.mdx b/docs/features/workspace/Plugins/tools/index.mdx new file mode 100644 index 00000000..3515744c --- /dev/null +++ b/docs/features/workspace/Plugins/tools/index.mdx @@ -0,0 +1,68 @@ +--- +sidebar_position: 4 +title: "🛠️ Tools" +--- + +# 🛠️ Tools in Open WebUI + +Welcome to the world of **Tools** in Open WebUI! Whether you’re just starting with Open WebUI or exploring new ways to expand its capabilities, this guide will introduce you to the exciting concept of **Tools** and how they can enhance your interactions with large language models (LLMs). Let’s break down what Tools are, how they work, and why they’re easier to use than you might think. + +## 🔍 TL;DR + +- **Tools** 🛠️ extend LLM abilities by enabling real-time actions and dynamic data gathering. +- Once enabled, **Tools** can be selected for use in chats, allowing LLMs to call specific functions, such as web searches 🔎 or API integrations 🌐. +- Tools can be installed easily from the community, with options for custom tools. + +## 🤔 What are "Tools"? + +Think of **Open WebUI** as a powerful base platform for LLMs.
However, to interact with real-world data or execute dynamic tasks, the LLM needs a bit of extra functionality—this is where **Tools** come in. + +### 🔧 Tools Overview + +**Tools** are essentially **Python scripts** 🐍 that add abilities to an LLM beyond text-based responses. They enable actions like web searches, data retrieval, and integration with external APIs. + +#### 💡 Example of Using a Tool: + +Imagine you’re chatting with an LLM and want it to give you the current weather 🌦️. Normally, an LLM wouldn’t have access to live data, but with a **weather tool** enabled, the LLM can fetch and display this real-time information in the chat. + +**Tools** make this possible by allowing the LLM to **call external functions** and **retrieve relevant information** or **perform specific actions** during a conversation. + +#### 📝 Examples of Tools: + +1. **Web Search** 🔍: Get real-time answers from live web searches. +2. **Image Generation** 🖼️: Create images based on user prompts. +3. **Voice Synthesis** 🎙️: Integrate with an API like ElevenLabs to generate audio. + +## 🔑 How to Enable and Use Tools + +Once installed, **Tools** can be assigned to any LLM supporting function calling: + +1. **Navigate to Workspace => Models** in Open WebUI. +2. **Edit the Model Settings** ✏️: Click the pencil icon, scroll to the Tools section, and check the boxes next to the Tools you want to enable. +3. **Start a Chat** 💬: Now, when chatting, you can click the “+” icon to access available Tools and allow the LLM to call them if necessary. + +Enabling a Tool does not mean it’s forced to be used; it simply makes it available for use in that chat. To simplify access, the **AutoTool Filter** on the community site allows automatic Tool selection, but you still need to manually enable Tools per model. + +## 📥 Installing Tools + +You can install Tools from the community in two ways: + +1. 
**Manual Download and Import** + - Visit the community site: [https://openwebui.com/tools/](https://openwebui.com/tools/) 🌐 + - Select a Tool, download it as a JSON export 📄, and then import it in Open WebUI by going to Workspace => Tools and clicking “Import Tools.” + +2. **Direct Import via Open WebUI URL** + - Go to [https://openwebui.com/tools/](https://openwebui.com/tools/) 🌐, select the Tool you want, and enter the IP of your Open WebUI instance. Click “Import to WebUI” to automatically open your instance and begin the import. + +### ⚠️ Important: Only Import Trusted Tools +Avoid importing Tools from unknown sources, as they may introduce risks. + + + +## 📝 Starting with Code + +If you’re ready to dive into writing code, start by reading this [document](../start_coding.md). + + + + \ No newline at end of file diff --git a/docs/features/workspace/index.mdx b/docs/features/workspace/index.mdx index 10f0724f..91536c08 100644 --- a/docs/features/workspace/index.mdx +++ b/docs/features/workspace/index.mdx @@ -1,4 +1,8 @@ --- -sidebar_position: 0 +sidebar_position: 2 title: "🖥️ Workspace" ---- \ No newline at end of file +--- + +# Workspace +The **Workspace** in Open WebUI serves as the central hub for managing all key elements, including models, knowledge, functions, and tools. +This space provides an interface where you can organize, enable, and customize various models to suit your specific needs. \ No newline at end of file diff --git a/docs/features/workspace/knowledge.md b/docs/features/workspace/knowledge.md new file mode 100644 index 00000000..5d7dc811 --- /dev/null +++ b/docs/features/workspace/knowledge.md @@ -0,0 +1,54 @@ +--- +sidebar_position: 3 +title: "🧠 Knowledge" +--- + +# 🧠 Knowledge + +The **Knowledge** section of Open WebUI is like a memory bank that makes your interactions even more powerful and context-aware. Let's break down what "Knowledge" really means in Open WebUI, how it works, and why it’s incredibly helpful for enhancing your experience.
+ +## TL;DR + +- **Knowledge** is a special section in Open WebUI where you can store structured information that the system can refer to during your interactions. +- It’s like a memory system for Open WebUI that allows it to pull from saved data, making responses more personalized and contextually aware. +- You can use Knowledge directly in your chats with Open WebUI to access the stored data whenever you need it. + +Setting up Knowledge is straightforward! Simply head to the Knowledge section inside the Workspace and start adding details or data. You don’t need coding expertise or technical setup; it’s built into the core system! + +## What is the "Knowledge" Section? + +The **Knowledge section** is a storage area within Open WebUI where you can save specific pieces of information or data points. Think of it as a **reference library** that Open WebUI can use to make its responses more accurate and relevant to your needs. + +### Why is Knowledge Useful? + +Imagine you're working on a long-term project and want the system to remember certain parameters, settings, or even key notes about the project without having to remind it every time. Or perhaps you want it to remember specific personal preferences for chats and responses. The Knowledge section is where you can store this kind of **persistent information** so that Open WebUI can reference it in future conversations, creating a more **coherent, personalized experience**. + +Some examples of what you might store in Knowledge: +- Important project parameters or specific data points you’ll frequently reference. +- Custom commands, workflows, or settings you want to apply. +- Personal preferences, guidelines, or rules that Open WebUI can follow in every chat. + +### How to Use Knowledge in Chats + +Accessing stored Knowledge in your chats is easy! By simply referencing what’s saved (using '#' before the name), Open WebUI can pull in data or follow specific guidelines that you’ve set up in the Knowledge section.
+ +For example: +- When discussing a project, Open WebUI can automatically recall your specified project details. +- It can apply custom preferences to responses, like formality levels or preferred phrasing. + +To reference Knowledge in your chats, just ensure it’s saved in the Knowledge section, and Open WebUI will know when and where to bring in the relevant information! + +### Setting Up Your Knowledge Base + +1. **Navigate to the Knowledge Section**: This area is designed to be user-friendly and intuitive. +2. **Add Entries**: Input information you want Open WebUI to remember. It can be as specific or broad as you like. +3. **Save and Apply**: Once saved, the Knowledge is accessible and ready to enhance your chat interactions. + +## Summary + +- The Knowledge section is like Open WebUI's "memory bank," where you can store data that you want it to remember and use. +- **Use Knowledge to keep the system aware** of important details, ensuring a personalized chat experience. +- You can **directly reference Knowledge in chats** to bring in stored data whenever you need it, using '#' followed by the name of the knowledge entry. + + +🌟 Remember, there’s always more to discover, so dive in and make Open WebUI truly your own! diff --git a/docs/features/workspace/models.md b/docs/features/workspace/models.md index 1e03d679..284af59c 100644 --- a/docs/features/workspace/models.md +++ b/docs/features/workspace/models.md @@ -1,6 +1,6 @@ --- -sidebar_position: 16 -title: "Models" +sidebar_position: 3 +title: "🤖 Models" --- **Models** diff --git a/docs/features/workspace/prompts.md b/docs/features/workspace/prompts.md new file mode 100644 index 00000000..306a316f --- /dev/null +++ b/docs/features/workspace/prompts.md @@ -0,0 +1,69 @@ +--- +sidebar_position: 3 +title: "📜 Prompts & Presets" +--- + +# 📜 Prompts & Presets + +Imagine you’ve just started exploring Open WebUI, or maybe you've been using it but find yourself wondering what "Prompts" and "Presets" are all about. Is it just tech lingo?
Not really! Let's break it down step-by-step so you'll fully understand how they work, what makes them useful, and why using them is easier than it seems. + +## TL;DR + +- **Prompts** are the instructions you give the AI to get the kind of responses you want. +- **Presets** are predefined prompts or templates you can use repeatedly to ensure consistent interaction. +- **Custom Prompts** allow you to tailor prompts for specific needs, making interactions with the AI smoother and more efficient. + +Creating and using prompts and presets is straightforward because Open WebUI already has these capabilities built-in! All you need to do is **click and customize** to enhance your interactions. + +## What are "Prompts" and "Presets"? + +Think of **Open WebUI** as a smart tool that follows your instructions to respond in useful ways. But sometimes, you want it to understand you better or respond in a specific style. That’s where **prompts** and **presets** come into play. + +### Prompts + +**Prompts** are the way you communicate with the AI, instructing it on how to respond. They guide the AI’s responses based on your instructions. + +#### Example of a Prompt: + +Imagine you’re interacting with the AI and want it to respond in a polite, formal tone, or maybe you need a summary instead of a long answer. A well-crafted prompt can make sure the AI gives you just that. + +Some examples of prompt styles you might use: +1. **Polite request** (“Please summarize this article.”) +2. **Creative approach** (“Write a poem about autumn.”) +3. **Concise response** (“Give a one-sentence answer on this topic.”) + +By adjusting your prompts, you’re guiding the AI to respond in a way that aligns with your needs. + +### Presets + +**Presets** are pre-made prompts that make life easier by saving you from writing the same instructions repeatedly. If you often ask the AI similar questions or need it to respond in specific ways, presets are a lifesaver. 
+ +#### How Presets Work: + +Let’s say you frequently ask the AI to check your grammar or summarize articles. With a preset, you don’t have to type out the full prompt every time—just select the preset, and the AI knows exactly what to do! + +- **Presets are like shortcuts** for recurring prompts. Once you set them up, they’re always ready to go with a single click. + +Some examples of useful preset categories: +1. **Grammar Check** 📝 +2. **Summarizer** 📰 +3. **Creative Writing** ✍️ + +## How to Create Custom Prompts + +Creating a custom prompt involves deciding on the specific instructions you want to give the AI. The prompt creation feature in Open WebUI allows you to: +1. Add a title for your prompt (e.g., "Professional Email"). +2. Add a command for your prompt that can be used later to autofill the message. +3. Define the prompt content with customizable placeholders. +4. Save and reuse the prompt as needed. + +## Using Community Prompts + +Open WebUI offers a [collection of prompts](https://openwebui.com/prompts) contributed by other users. You can browse and download prompts like “Code Expert,” “Grammar Check and Rewrite,” or “Image Description Assistant” to quickly enhance your experience with minimal setup. + +## Want to Try It? 🚀 + +Dive into Open WebUI, check out the community section, and explore some presets or create a custom prompt. Experimenting with different prompts will show you how flexible and powerful Open WebUI can be! + +🌟 There’s always more to explore, so stay curious and keep personalizing your experience! diff --git a/docs/pipelines/filters.md b/docs/pipelines/filters.md deleted file mode 100644 index 0b34c503..00000000 --- a/docs/pipelines/filters.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -sidebar_position: 1 -title: "Filters" ---- - -# Filters -Filters are used to perform actions against incoming user messages and outgoing assistant (LLM) messages.
Potential actions that can be taken in a filter include sending messages to monitoring platforms (such as Langfuse or DataDog), modifying message contents, blocking toxic messages, translating messages to another language, or rate limiting messages from certain users. A list of examples is maintained in the [Pipelines repo](https://github.com/open-webui/pipelines/tree/main/examples/filters). Filters can be executed as a Function or on a Pipelines server. The general workflow can be seen in the image below. - -

- -*(Figure: Filter Workflow)* - -

- - -When a filter pipeline is enabled on a model or pipe, the incoming message from the user (or "inlet") is passed to the filter for processing. The filter performs the desired action against the message before requesting the chat completion from the LLM model. Finally, the filter performs post-processing on the outgoing LLM message (or "outlet") before it is sent to the user. \ No newline at end of file diff --git a/docs/pipelines/index.mdx b/docs/pipelines/index.mdx index a24b8654..6904575e 100644 --- a/docs/pipelines/index.mdx +++ b/docs/pipelines/index.mdx @@ -12,7 +12,7 @@ title: "⚡ Pipelines" # Pipelines: UI-Agnostic OpenAI API Plugin Framework :::tip -If your goal is simply to add support for additional providers like Anthropic or basic filters, you likely don't need Pipelines . For those cases, [Open WebUI Functions](/features/plugin/functions) are a better fit—it's built-in, much more convenient, and easier to configure. Pipelines, however, comes into play when you're dealing with computationally heavy tasks (e.g., running large models or complex logic) that you want to offload from your main Open WebUI instance for better performance and scalability. +If your goal is simply to add support for additional providers like Anthropic or basic filters, you likely don't need Pipelines. For those cases, [Open WebUI Functions](../features/workspace/plugins/functions/index.mdx) are a better fit—they're built-in, much more convenient, and easier to configure. Pipelines, however, comes into play when you're dealing with computationally heavy tasks (e.g., running large models or complex logic) that you want to offload from your main Open WebUI instance for better performance and scalability. ::: Welcome to **Pipelines**, an [Open WebUI](https://github.com/open-webui) initiative. Pipelines bring modular, customizable workflows to any UI client supporting OpenAI API specs – and much more!
Easily extend functionalities, integrate unique logic, and create dynamic workflows with just a few lines of code.