Manage your home automation LLM prompts and available LLMs, and evaluate changes with MLflow
- Enable users to manage their LLM prompts and available LLMs from Home Assistant
- Enable users to evaluate changes to their LLM prompts and LLM parameters with versioning + change management
- Centralize credential management for LLMs
Home Assistant has a convenient prompt template tool for generating LLM prompts. However, managing those prompts isn't easy: I can't tell whether a new prompt is better than the last one, change tracking is awkward, and live editing means switching back and forth between the developer tools and the prompt template tool. There's a better way! Enter MLflow, with its new PromptLab UI and integrated evaluation and tracking tools.
LLMs are excellent at reasoning over unstructured input. Anyone familiar with home automation knows the kludgy workarounds:
- presence detection like wasp-in-a-box, or
- using a combination of sensors to determine if someone is home via a weak heuristic, or
- playing around with the Bayesian sensor because it's awesome, then slowly losing your mind tweaking the priors
LLMs can reason about state changes more naturally. Can we send the states of multiple sensors to an LLM and ask it to infer a higher-level status in the home?
- (motion, time of day, device usage) -> "is AK asleep?"
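The idea above can be sketched in a few lines of Python. The entity names and the YES/NO answer format are illustrative assumptions, not part of any real integration; the actual LLM call is left out, since it would go through whatever gateway you configure.

```python
"""Sketch: flatten raw sensor states into a prompt an LLM can reason over,
then interpret its free-text reply as a boolean. Entity names are made up."""


def build_prompt(states: dict[str, str], question: str) -> str:
    """Render sensor states as a bullet list followed by a yes/no question."""
    lines = [f"- {entity}: {value}" for entity, value in sorted(states.items())]
    return (
        "Current sensor states:\n"
        + "\n".join(lines)
        + f"\n\nAnswer with YES or NO only: {question}"
    )


def parse_yes_no(reply: str) -> bool:
    """Interpret the model's reply leniently: any answer starting with YES."""
    return reply.strip().upper().startswith("YES")


# Example: (motion, time of day, device usage) -> "is AK asleep?"
states = {
    "binary_sensor.bedroom_motion": "off",
    "sensor.time_of_day": "01:30",
    "sensor.ak_phone_usage": "idle",
}
prompt = build_prompt(states, "Is AK asleep?")
```

The prompt string is what you would version and evaluate in MLflow: the template stays fixed while the sensor states are interpolated at query time.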
You've heard of Langchain but not this whole MLflow business - what's up with that? Read on for how it all fits together!
- Install the MLflow Gateway
- Install the MLflow Tracking Service
- Install the MLflow Home Integration
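The gateway is the piece that centralizes LLM credentials: it reads its routes from a YAML file, so provider API keys live in one place instead of in each prompt tool. A minimal sketch of such a config, assuming the MLflow 2.x route schema (the route name, model, and environment variable are placeholders to adapt):

```yaml
# Hypothetical config.yaml for the MLflow AI Gateway (MLflow 2.x schema).
# The route name, model name, and API-key variable are placeholders.
routes:
  - name: chat
    route_type: llm/v1/chat
    model:
      provider: openai
      name: gpt-3.5-turbo
      config:
        openai_api_key: $OPENAI_API_KEY
```

With a file like this in place, the gateway can be started with something along the lines of `mlflow gateway start --config-path config.yaml`; check the MLflow docs for the exact command in your version.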
- LLM: Large Language Model, a model that can generate text based on a prompt
- MLflow: An open source platform for the machine learning lifecycle
- Home Assistant: An open source home automation platform