From 9298fd9cfd52bb7b35aee5f9525efac32b78243b Mon Sep 17 00:00:00 2001 From: James McMahon Date: Tue, 30 Jul 2024 15:31:51 +0100 Subject: [PATCH 01/11] Update config.md --- docs/config.md | 50 +++++++++++++++++++++++++------------------------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/docs/config.md b/docs/config.md index 3ac0af52..e3cd9cd2 100644 --- a/docs/config.md +++ b/docs/config.md @@ -1,8 +1,8 @@ # Configuration file -In this section we will explain all the parts of the `config_emhass.yaml` needed to properly run EMHASS. +In this section, we will explain all the parts of the `config_emhass.yaml` needed to properly run EMHASS. -We will find three main parts on the configuration file: +We will find three main parts in the configuration file: - The parameters needed to retrieve data from Home Assistant (retrieve_hass_conf) - The parameters to define the optimization problem (optim_conf) @@ -10,13 +10,13 @@ We will find three main parts on the configuration file: ## Retrieve HASS data configuration -These are the parameters that we will need to define to retrieve data from Home Assistant. There are no optional parameters. In the case of a list, an empty list is a valid entry. +We will need to define these parameters to retrieve data from Home Assistant. There are no optional parameters. In the case of a list, an empty list is a valid entry. - `freq`: The time step to resample retrieved data from hass. This parameter is given in minutes. It should not be defined too low or you will run into memory problems when defining the Linear Programming optimization. Defaults to 30. -- `days_to_retrieve`: We will retrieve data from now and up to days_to_retrieve days. Defaults to 2. -- `var_PV`: This is the name of the photovoltaic produced power sensor in Watts from Home Assistant. For example: 'sensor.power_photovoltaics'. -- `var_load`: The name of the household power consumption sensor in Watts from Home Assistant. 
The deferrable loads that we will want to include in the optimization problem should be substracted from this sensor in HASS. For example: 'sensor.power_load_no_var_loads'
-- `load_negative`: Set this parameter to True if the retrived load variable is negative by convention. Defaults to False.
+- `days_to_retrieve`: We will retrieve data from now to days_to_retrieve days. Defaults to 2.
+- `var_PV`: This is the name of the photovoltaic produced power sensor in Watts from Home Assistant. For example: 'sensor.power_photovoltaics'.
+- `var_load`: The name of the household power consumption sensor in Watts from Home Assistant. The deferrable loads that we will want to include in the optimization problem should be subtracted from this sensor in HASS. For example: 'sensor.power_load_no_var_loads'
+- `load_negative`: Set this parameter to True if the retrieved load variable is negative by convention. Defaults to False.
- `set_zero_min`: Set this parameter to True to give special treatment for a minimum value saturation to zero for power consumption data. Values below zero are replaced by nans. Defaults to True.
- `var_replace_zero`: The list of retrieved variables that we would want to replace nans (if they exist) with zeros. For example:
- 'sensor.power_photovoltaics'
@@ -24,7 +24,7 @@ These are the parameters that we will need to define to retrieve data from Home
- 'sensor.power_photovoltaics'
- 'sensor.power_load_no_var_loads'
- `method_ts_round`: Set the method for timestamp rounding, options are: first, last and nearest.
-- `continual_publish`: set to True to save entities to .json after optimization run. Then automatically republish the saved entities *(with updated current state value)* every freq minutes. *entity data saved to data_path/entities.*
+- `continual_publish`: Set to True to save entities to .json after an optimization run. Then automatically republish the saved entities *(with updated current state value)* every freq minutes.
*entity data saved to data_path/entities.*

A second part of this section is given by some privacy-sensitive parameters that should be included in a `secrets_emhass.yaml` file alongside the `config_emhass.yaml` file.
@@ -50,10 +50,10 @@ These are the parameters needed to properly define the optimization problem.
- `def_total_hours`: The total number of hours that each deferrable load should operate. For example:
- 5
- 8
-- `def_start_timestep`: The timestep as from which each deferrable load is allowed to operate (if you don't want the deferrable load to use the whole optimization timewindow). If you specify a value of 0 (or negative), the deferrable load will be optimized as from the beginning of the complete prediction horizon window. For example:
+- `def_start_timestep`: The timestep from which each deferrable load is allowed to operate (if you don't want the deferrable load to use the whole optimization time window). If you specify a value of 0 (or negative), the deferrable load will be optimized from the beginning of the complete prediction horizon window. For example:
- 0
- 1
-- `def_end_timestep`: The timestep before which each deferrable load should operate. The deferrable load is not allowed to operate after the specified timestep. If a value of 0 (or negative) is provided, the deferrable load is allowed to operate in the complete optimization window). For example:
+- `def_end_timestep`: The timestep before which each deferrable load should operate. The deferrable load is not allowed to operate after the specified time step. If a value of 0 (or negative) is provided, the deferrable load is allowed to operate in the complete optimization window. For example:
- 0
- 3
- `treat_def_as_semi_cont`: Define if we should treat each deferrable load as a semi-continuous variable. Semi-continuous variables (`True`) are variables that must take a value that can be either their maximum or minimum/zero (for example On = Maximum load, Off = 0 W).
Non semi-continuous (which means continuous) variables (`False`) can take any values between their maximum and minimum. For example:
- True
- True
@@ -64,7 +64,7 @@ These are the parameters needed to properly define the optimization problem.
- False
- `def_start_penalty`: Set to a list of floats. For each deferrable load with a penalty `P`, each start of the deferrable load will incur an additional cost of `P * P_deferrable_nom * cost_of_electricity` at that time.
- `weather_forecast_method`: This will define the weather forecast method that will be used. The options are 'scrapper' for a scrapping method for weather forecast from clearoutside.com and 'csv' to load a CSV file. When loading a CSV file this will be directly considered as the PV power forecast in Watts. The default CSV file path that will be used is '/data/data_weather_forecast.csv'. Defaults to 'scrapper' method.
-- `load_forecast_method`: The load forecast method that will be used. The options are 'csv' to load a CSV file or 'naive' for a simple 1-day persistance model. The default CSV file path that will be used is '/data/data_load_forecast.csv'. Defaults to 'naive'.
+- `load_forecast_method`: The load forecast method that will be used. The options are 'csv' to load a CSV file or 'naive' for a simple 1-day persistence model. The default CSV file path that will be used is '/data/data_load_forecast.csv'. Defaults to 'naive'.
- `load_cost_forecast_method`: Define the method that will be used for load cost forecast. The options are 'hp_hc_periods' for peak and non-peak hours contracts and 'csv' to load custom cost from CSV file. The default CSV file path that will be used is '/data/data_load_cost_forecast.csv'.
The following parameters and definitions are only needed if load_cost_forecast_method='hp_hc_periods':
- `list_hp_periods`: Define a list of peak hour periods for load consumption from the grid. This is useful if you have a contract with peak and non-peak hours.
For example, for two peak hour periods:
@@ -77,18 +77,18 @@ The following parameters and definitions are only needed if load_cost_forecast_m
- `load_cost_hp`: The cost of the electrical energy from the grid during peak hours in €/kWh. Defaults to 0.1907.
- `load_cost_hc`: The cost of the electrical energy from the grid during non-peak hours in €/kWh. Defaults to 0.1419.
-- `prod_price_forecast_method`: Define the method that will be used for PV power production price forecast. This is the price that is payed by the utility for energy injected to the grid. The options are 'constant' for a constant fixed value or 'csv' to load custom price forecast from a CSV file. The default CSV file path that will be used is '/data/data_prod_price_forecast.csv'.
+- `prod_price_forecast_method`: Define the method that will be used for PV power production price forecast. This is the price that is paid by the utility for energy injected into the grid. The options are 'constant' for a constant fixed value or 'csv' to load custom price forecasts from a CSV file. The default CSV file path that will be used is '/data/data_prod_price_forecast.csv'.
- `prod_sell_price`: The paid price for energy injected to the grid from excess PV production in €/kWh. Defaults to 0.065. This parameter is only needed if prod_price_forecast_method='constant'.
-- `set_total_pv_sell`: Set this parameter to true to consider that all the PV power produced is injected to the grid. No direct self-consumption. The default is false, for as system with direct self-consumption.
+- `set_total_pv_sell`: Set this parameter to true to consider that all the PV power produced is injected to the grid. No direct self-consumption. The default is false, for a system with direct self-consumption.
- `lp_solver`: Set the name of the linear programming solver that will be used. Defaults to 'COIN_CMD'. The options are 'PULP_CBC_CMD', 'GLPK_CMD' and 'COIN_CMD'.
- `lp_solver_path`: Set the path to the LP solver.
Defaults to '/usr/bin/cbc'. -- `set_nocharge_from_grid`: Set this to true if you want to forbidden to charge the battery from the grid. The battery will only be charged from excess PV. -- `set_nodischarge_to_grid`: Set this to true if you want to forbidden to discharge the battery power to the grid. -- `set_battery_dynamic`: Set a power dynamic limiting condition to the battery power. This is an additional constraint on the battery dynamic in power per unit of time, which allows you to set a percentage of the battery nominal full power as the maximum power allowed for (dis)charge. +- `set_nocharge_from_grid`: Set this to true if you want to forbid charging the battery from the grid. The battery will only be charged from excess PV. +- `set_nodischarge_to_grid`: Set this to true if you want to forbid discharging battery power to the grid. +- `set_battery_dynamic`: Set a power dynamic limiting condition to the battery power. This is an additional constraint on the battery dynamic in power per unit of time, which allows you to set a percentage of the battery's nominal full power as the maximum power allowed for (dis)charge. - `battery_dynamic_max`: The maximum positive (for discharge) battery power dynamic. This is the allowed power variation (in percentage) of battery maximum power per unit of time. - `battery_dynamic_min`: The maximum negative (for charge) battery power dynamic. This is the allowed power variation (in percentage) of battery maximum power per unit of time. -- `weight_battery_discharge`: An additional weight (currency/ kWh) applied in cost function to battery usage for discharge. Defaults to 0.00 -- `weight_battery_charge`: An additional weight (currency/ kWh) applied in cost function to battery usage for charge. Defaults to 0.00 +- `weight_battery_discharge`: An additional weight (currency/ kWh) applied in the cost function to battery usage for discharging. 
Defaults to 0.00
+- `weight_battery_charge`: An additional weight (currency/kWh) applied in the cost function to battery usage for charging. Defaults to 0.00
## System configuration parameters
@@ -98,15 +98,15 @@ These are the technical parameters of the energy system of the household.
- `P_to_grid_max`: The maximum power that can be supplied to the utility grid in Watts (injection). Defaults to 9000.
We will define the technical parameters of the PV installation. For the modeling task we rely on the PVLib Python package. For more information see: [https://pvlib-python.readthedocs.io/en/stable/](https://pvlib-python.readthedocs.io/en/stable/)
-A dedicated webapp will help you search for your correct PV module and inverter names: [https://emhass-pvlib-database.streamlit.app/](https://emhass-pvlib-database.streamlit.app/)
+A dedicated web app will help you search for your correct PV module and inverter names: [https://emhass-pvlib-database.streamlit.app/](https://emhass-pvlib-database.streamlit.app/)
If your specific model is not found in these lists, then solution (1) is to pick another model as close as possible to yours in terms of the nominal power. Solution (2) would be to use SolCast and pass that data directly to emhass as a list of values from a template. Take a look at this example here: [https://emhass.readthedocs.io/en/latest/forecasts.html#example-using-solcast-forecast-amber-prices](https://emhass.readthedocs.io/en/latest/forecasts.html#example-using-solcast-forecast-amber-prices)
-- `module_model`: The PV module model. For example: 'CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M'. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example one east-facing array (azimuth=90) and one west-facing array (azimuth=270). When finding the correct model for your installation remember to replace all the special characters in the model name by '_'.
The name of the table column for your device on the webapp will already have the correct naming convention. -- `inverter_model`: The PV inverter model. For example: 'Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_'. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example one east-facing array (azimuth=90) and one west-facing array (azimuth=270). When finding the correct model for your installation remember to replace all the special characters in the model name by '_'. The name of the table column for your device on the webapp will already have the correct naming convention. -- `surface_tilt`: The tilt angle of your solar panels. Defaults to 30. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example one east-facing array (azimuth=90) and one west-facing array (azimuth=270). -- `surface_azimuth`: The azimuth of your PV installation. Defaults to 205. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example one east-facing array (azimuth=90) and one west-facing array (azimuth=270). -- `modules_per_string`: The number of modules per string. Defaults to 16. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example one east-facing array (azimuth=90) and one west-facing array (azimuth=270). +- `module_model`: The PV module model. For example: 'CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M'. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example one east-facing array (azimuth=90) and one west-facing array (azimuth=270). When finding the correct model for your installation remember to replace all the special characters in the model name with '_'. The name of the table column for your device on the webapp will already have the correct naming convention. +- `inverter_model`: The PV inverter model. 
For example: 'Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_'. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example, one east-facing array (azimuth=90) and one west-facing array (azimuth=270). When finding the correct model for your installation remember to replace all the special characters in the model name with '_'. The name of the table column for your device on the web app will already have the correct naming convention. +- `surface_tilt`: The tilt angle of your solar panels. Defaults to 30. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example, one east-facing array (azimuth=90) and one west-facing array (azimuth=270). +- `surface_azimuth`: The azimuth of your PV installation. Defaults to 205. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example, one east-facing array (azimuth=90) and one west-facing array (azimuth=270). +- `modules_per_string`: The number of modules per string. Defaults to 16. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example, one east-facing array (azimuth=90) and one west-facing array (azimuth=270). - `strings_per_inverter`: The number of used strings per inverter. Defaults to 1. This parameter can be a list of items to enable the simulation of mixed orientation systems, for example one east-facing array (azimuth=90) and one west-facing array (azimuth=270). - `inverter_is_hybrid`: Set to True to consider that the installation inverter is hybrid for PV and batteries (Default False). - `compute_curtailment`: Set to True to compute a special PV curtailment variable (Default False). @@ -118,6 +118,6 @@ If your system has a battery (set_use_battery=True), then you should define the - `eta_disch`: The discharge efficiency. Defaults to 0.95. - `eta_ch`: The charge efficiency. Defaults to 0.95. 
- `Enom`: The total capacity of the battery stack in Wh. Defaults to 5000. -- `SOCmin`: The minimun allowable battery state of charge. Defaults to 0.3. +- `SOCmin`: The minimum allowable battery state of charge. Defaults to 0.3. - `SOCmax`: The maximum allowable battery state of charge. Defaults to 0.9. - `SOCtarget`: The desired battery state of charge at the end of each optimization cycle. Defaults to 0.6. From efe5ee17828bd3dd11814ff51e93ad05ed69a543 Mon Sep 17 00:00:00 2001 From: James McMahon Date: Tue, 30 Jul 2024 15:34:38 +0100 Subject: [PATCH 02/11] Update develop.md --- docs/develop.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/develop.md b/docs/develop.md index da81ef25..b5a685a2 100644 --- a/docs/develop.md +++ b/docs/develop.md @@ -111,9 +111,9 @@ The recommended steps to run are: ### Method 3 - Docker Virtual Environment With Docker, you can test EMHASS in both standalone and add-on mode via modifying the build argument: `build_version` with values: `standalone`, `addon-pip`, `addon-git`, `addon-local`. -Since emhass-add-on is using the same docker base, this method is good to test the add-on functionality of your code. _(addon-local)_ +Since emhass-add-on uses the same docker base, this method is good to test the add-on functionality of your code. _(addon-local)_ -Depending on your choice of running standalone or addon, `docker run` will require different passed variables/arguments to function. See following examples: +Depending on your choice of running standalone or addon, `docker run` will require different passed variables/arguments to function. See the following examples: _Note: Make sure your terminal is in the root `emhass` directory before running the docker build._ @@ -193,7 +193,7 @@ docker build -t emhass/docker --build-arg build_version=addon-pip . 
docker run -it -p 5000:5000 --name emhass-container -e LAT="45.83" -e LON="6.86" -e ALT="4807.8" -e TIME_ZONE="Europe/Paris" -v $(pwd)/options.json:/app/options.json emhass/docker --url YOURHAURLHERE --key YOURHAKEYHERE
```
-To build with specific pip version, set with build arg: `build_pip_version`:
+To build with a specific pip version, set it with the build arg `build_pip_version`:
```bash
docker build -t emhass/docker --build-arg build_version=addon-pip --build-arg build_pip_version='==0.7.7' .
```
docker run... -v $(pwd)/data/heating_prediction.csv:/app/data/ ...
```
#### Issue with TARGETARCH
-If your docker build fails with an error related to `TARGETARCH`. It may be best to add your devices architecture manually:
+If your docker build fails with an error related to `TARGETARCH`, it may be best to add your device's architecture manually:
Example with armhf architecture:
```bash
@@ -257,7 +257,7 @@ _Linux:_
docker build -t emhass/docker --build-arg build_version=addon-local . && docker run --rm -it -p 5000:5000 -v $(pwd)/secrets_emhass.yaml:/app/secrets_emhass.yaml --name emhass-container emhass/docker
```
-_The example command chain rebuilds Docker image, and runs new container with newly built image. `--rm` has been added to the `docker run` to delete the container once ended to avoid manual deletion every time._
+_The example command chain rebuilds the Docker image and runs a new container with the newly built image. `--rm` has been added to the `docker run` to delete the container once ended to avoid manual deletion every time._
_This use case may not require any volume mounts (unless you use secrets_emhass.yaml) as the Docker build process will pull the latest versions of the configs as it builds._
docker build -t emhass/docker --build-arg build_version=addon-local .
docker run -it -p 5000:5000 --name emhass-container -e URL="YOURHAURLHERE" -e KEY="YOURHAKEYHERE" -e LAT="45.83" -e LON="6.86" -e ALT="4807.8" -e TIME_ZONE="Europe/Paris" emhass/docker
```
-This allows the user to set variables prior to build
+This allows the user to set variables before the build.
Linux:
```bash
@@ -356,4 +356,4 @@ User may wish to re-test with tweaked parameters such as `lp_solver`, `weather_f
## Step 3 - Pull request
Once developed, commit your code, and push to your fork.
-Then submit a pull request with your fork to the [davidusb-geek/emhass@master](https://github.com/davidusb-geek/emhass) repository. \ No newline at end of file
+Then submit a pull request with your fork to the [davidusb-geek/emhass@master](https://github.com/davidusb-geek/emhass) repository.
From a3bdf7513cfae9d4c1640aea4cd05eb4602d4329 Mon Sep 17 00:00:00 2001
From: James McMahon
Date: Tue, 30 Jul 2024 15:38:53 +0100
Subject: [PATCH 03/11] Update differences.md
---
docs/differences.md | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/docs/differences.md b/docs/differences.md
index 93cee4ad..837a22e2 100644
--- a/docs/differences.md
+++ b/docs/differences.md
@@ -1,16 +1,16 @@
# EMHASS & EMHASS-Add-on differences
-User will pass parameters into EMHASS differently, based on running *Standalone* mode or *addon* Mode.
+Users will pass parameters into EMHASS differently, based on running *Standalone* mode or *addon* mode.
This page tries to help resolve the common confusion between the two.
-_Its best to see EMHASS-Add-on as a Home Assistant Docker wrapper for EMHASS. However, because of this containerization, certain changes are made between the two modes._
+_It's best to see EMHASS-Add-on as a Home Assistant Docker wrapper for EMHASS.
However, because of this containerization, certain changes are made between the two modes._
## Configuration & parameter differences
Both EMHASS & EMHASS-Add-on utilize `config_emhass.yaml` for receiving parameters. Where they diverge is EMHASS-Add-on's additional use of `options.json`, generated by Home Assistant's `Configuration Page`.
-Any passed parameters given in `options.json` will overwrite the parameters hidden in the `config_emhass.yaml` file in EMHASS. _(this results in `config_emhass.yaml` used for parameter default fall back if certain required parameters were missing in `options.json`)_
+Any passed parameters given in `options.json` will overwrite the parameters hidden in the `config_emhass.yaml` file in EMHASS. _(this results in `config_emhass.yaml` being used as the parameter default fallback if certain required parameters are missing in `options.json`)_
The parameter naming convention has also been changed in `options.json`, designed to make it easier for the user to understand.
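As a quick illustration of the renaming, the same battery parameter appears under a different name in each file. This sketch is based on the `SOCmax` row of the association table on this page; the exact nesting of the keys is an assumption, so check it against your own files:

```yaml
# In config_emhass.yaml (EMHASS naming; nesting under plant_conf is an assumption):
plant_conf:
  SOCmax: 0.9

# The equivalent entry in options.json (EMHASS-Add-on naming) would be:
#   {"battery_maximum_state_of_charge": 0.9}
```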
-See bellow for a list of associations between the parameters from `config_emhass.yaml` and `options.json`: +See below for a list of associations between the parameters from `config_emhass.yaml` and `options.json`: *You can view the current parameter differences in the [`Utils.py`](https://github.com/davidusb-geek/emhass/blob/master/src/emhass/utils.py) file under the `build_params` function.* | config | config_emhass.yaml | options.json | options.json list dictionary key | @@ -73,7 +73,7 @@ See bellow for a list of associations between the parameters from `config_emhass | plant_conf | SOCmax | battery_maximum_state_of_charge | | | plant_conf | SOCtarget | battery_target_state_of_charge | | -Descriptions of each parameter, can be found at: +Descriptions of each parameter can be found at: - [`Configuration file`](https://emhass.readthedocs.io/en/latest/config.html) on EMHASS - [`en.yaml`](https://github.com/davidusb-geek/emhass-add-on/blob/main/emhass/translations/en.yaml) on EMHASS-Add-on @@ -87,14 +87,14 @@ Running EMHASS in standalone mode's default workflow retrieves all secret parame For users who are running EMHASS with methods other than EMHASS-Add-on, secret parameters can be passed with the use of arguments and/or environment variables. 
_(instead of `secrets_emhass.yaml`)_ Some arguments include: `--url` and `--key` -Some environment variables include: `TIME_ZONE`, `LAT` , `LON`, `ALT`, `EMHASS_URL`, `EMHASS_KEY` +Some environment variables include: `TIME_ZONE`, `LAT`, `LON`, `ALT`, `EMHASS_URL`, `EMHASS_KEY` -_Note: As of writing, EMHASS standalone will override ARG/ENV secret parameters if file is present._ +_Note: As of writing, EMHASS standalone will override ARG/ENV secret parameters if the file is present._ For more information on passing arguments and environment variables using docker, have a look at some examples from [Configuration and Installation](https://emhass.readthedocs.io/en/latest/intro.html#configuration-and-installation) and [EMHASS Development](https://emhass.readthedocs.io/en/latest/develop.html) pages. ### EMHASS-Add-on (addon mode) -By default the `URL` and `KEY` parameters have been set to `empty`/blank. This results in EMHASS calling to its Supervisor API to gain access locally. This is the easiest method, as there is no user input necessary. +By default, the `URL` and `KEY` parameters have been set to `empty`/blank. This results in EMHASS calling its Supervisor API to gain access locally. This is the easiest method, as there is no user input necessary. However, if you wish to receive/send sensor data to a different Home Assistant environment, set url and key values in the `hass_url` & `long_lived_token` hidden parameters. 
- `hass_url` example: `https://192.168.1.2:8123/` From 6de71329a5f6918f53d8743f073c9d11afcb39c1 Mon Sep 17 00:00:00 2001 From: James McMahon Date: Tue, 30 Jul 2024 15:46:28 +0100 Subject: [PATCH 04/11] Update forecasts.md --- docs/forecasts.md | 64 +++++++++++++++++++++++------------------------ 1 file changed, 32 insertions(+), 32 deletions(-) diff --git a/docs/forecasts.md b/docs/forecasts.md index fde219cd..76c60247 100644 --- a/docs/forecasts.md +++ b/docs/forecasts.md @@ -1,23 +1,23 @@ # The forecast module -EMHASS will basically need 4 forecasts to work properly: +EMHASS will need 4 forecasts to work properly: - PV power production forecast (internally based on the weather forecast and the characteristics of your PV plant). This is given in Watts. -- Load power forecast: how much power your house will demand on the next 24h. This is given in Watts. +- Load power forecast: how much power your house will demand in the next 24 hours. This is given in Watts. -- Load cost forecast: the price of the energy from the grid on the next 24h. This is given in EUR/kWh. +- Load cost forecast: the price of the energy from the grid in the next 24 hours. This is given in EUR/kWh. -- PV production selling price forecast: at what price are you selling your excess PV production on the next 24h. This is given in EUR/kWh. +- PV production selling price forecast: the price at which you will sell your excess PV production in the next 24 hours. This is given in EUR/kWh. -There are methods that are generalized to the 4 forecast needed. For all there forecasts it is possible to pass the data either as a passed list of values or by reading from a CSV file. With these methods it is then possible to use data from external forecast providers. +Some methods are generalized to the 4 forecasts needed. For all the forecasts it is possible to pass the data either as a passed list of values or by reading from a CSV file. 
With these methods, it is then possible to use data from external forecast providers.
Then there are the methods that are specific to each type of forecast; these forecasts are treated and generated internally by the EMHASS forecast class. For the weather forecast, the first method (`scrapper`) scrapes the ClearOutside webpage, which proposes detailed forecasts based on Lat/Lon locations. Another method (`solcast`) uses the Solcast PV production forecast service. A final method (`solar.forecast`) uses another external service, Solar.Forecast, for which just the nominal PV peak installed power should be provided. Search the forecast section of the documentation for examples of how to implement these different methods.
The `get_power_from_weather` method is proposed here to convert irradiance data to electrical power. The PVLib module is used to model the PV plant.
A dedicated web app will help you search for your correct PV module and inverter: [https://emhass-pvlib-database.streamlit.app/](https://emhass-pvlib-database.streamlit.app/)
The specific methods for the load forecast are a first method (`naive`) that uses a naive approach, also called persistence. It simply assumes that the forecast for a future period will be equal to the observed values in a past period. The past period is controlled using the parameter `delta_forecast`. A second method (`mlforecaster`) uses an internal custom forecasting model using machine learning. There is a section in the documentation explaining how to use this method.
```{note}
This custom machine learning model was introduced in v0.4.0. EMHASS proposes this new `mlforecaster` class with `fit`, `predict` and `tune` methods. Only the `predict` method is used here to generate new forecasts, but it is necessary to first fit a forecaster model, and it is a good idea to optimize the model hyperparameters using the `tune` method. See the dedicated section in the documentation for more help.
```
For the PV production selling price and load cost forecasts, the preferred method is a direct read from a user-provided list of values. The list should be passed as a runtime parameter during the `curl` to the EMHASS API.
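Passing such lists could look like the sketch below. The parameter names `load_cost_forecast` and `prod_price_forecast` are assumptions here; check the runtime parameters documented for your EMHASS version, and make sure the list length matches the number of time steps in your optimization window:

```shell
# Hypothetical runtime payload with one price value (EUR/kWh) per time step.
payload='{"load_cost_forecast": [0.19, 0.19, 0.14, 0.14], "prod_price_forecast": [0.065, 0.065, 0.065, 0.065]}'
echo "$payload"
# Post it with the same curl pattern used elsewhere in this documentation:
# curl -i -H 'Content-Type:application/json' -X POST -d "$payload" http://localhost:5000/action/dayahead-optim
```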
## PV power production forecast

#### scrapper

-The default method for PV power forecast is the scrapping of weather forecast data from the [https://clearoutside.com/](https://clearoutside.com/) website. This is obtained using `method=scrapper`. This site proposes detailed forecasts based on Lat/Lon locations. This method seems quite stable but as with any scrape method it will fail if any changes are made to the webpage API. The weather forecast data is then converted into PV power production using the `list_pv_module_model` and `list_pv_inverter_model` paramters defined in the configuration.
+The default method for PV power forecast is the scraping of weather forecast data from the [https://clearoutside.com/](https://clearoutside.com/) website. This is obtained using `method=scrapper`. This site proposes detailed forecasts based on Lat/Lon locations. This method seems quite stable but, as with any scraping method, it will fail if any changes are made to the webpage API. The weather forecast data is then converted into PV power production using the `list_pv_module_model` and `list_pv_inverter_model` parameters defined in the configuration.

#### solcast

-The second method uses the Solcast solar forecast service. Go to [https://solcast.com/](https://solcast.com/) and configure your system. You will need to set `method=solcast` and use two parameters `solcast_rooftop_id` and `solcast_api_key` that should be passed as parameters at runtime or provided in the configuration/secrets. The free hobbyist account will be limited to 10 API requests per day, the granularity will be 30 minutes and the forecast is updated every 6h. If needed, better performances may be obtained with paid plans: [https://solcast.com/pricing/live-and-forecast](https://solcast.com/pricing/live-and-forecast).
+The second method uses the Solcast solar forecast service. Go to [https://solcast.com/](https://solcast.com/) and configure your system. 
You will need to set `method=solcast` and use two parameters `solcast_rooftop_id` and `solcast_api_key` that should be passed as parameters at runtime or provided in the configuration/secrets. The free hobbyist account will be limited to 10 API requests per day, the granularity will be 30 minutes and the forecast will be updated every 6 hours. If needed, better performance may be obtained with paid plans: [https://solcast.com/pricing/live-and-forecast](https://solcast.com/pricing/live-and-forecast).

For example:
```yaml
@@ -58,7 +58,7 @@ curl -i -H 'Content-Type:application/json' -X POST -d {} http://localhost:5000/a
 # Then run your regular MPC call (E.g. every 5 minutes)
 curl -i -H 'Content-Type:application/json' -X POST -d {} http://localhost:5000/action/naive-mpc-optim
 ```
-EMHASS will see the saved Solcast cache and use it's data over pulling from Solcast.
+EMHASS will see the saved Solcast cache and use its data rather than pulling from Solcast.
 
 `weather_forecast_cache` can also be provided in an optimization to save the forecast results to cache:
 ```bash
@@ -66,7 +66,7 @@ EMHASS will see the saved Solcast cache and use it's data over pulling from Solc
 curl -i -H 'Content-Type:application/json' -X POST -d '{"weather_forecast_cache":true}' http://localhost:5000/action/dayahead-optim
 ```
 
-By default, if EMHASS finds a problem with the Solcast cache file, the cache will be automatically deleted. Due to the missing cache, the next optimization will run and pulling data from Solcast.
+By default, if EMHASS finds a problem with the Solcast cache file, the cache will be automatically deleted. Due to the missing cache, the next optimization will run and pull data from Solcast. 
If you wish to make sure that a certain optimization will only use the cached data, (otherwise present an error) the runtime parameter `weather_forecast_cache_only` can be used: ```bash # Run the weather forecast action 1-10 times a day @@ -79,21 +79,21 @@ curl -i -H 'Content-Type:application/json' -X POST -d '{"weather_forecast_cache_ #### solar.forecast -A third method uses the Solar.Forecast service. You will need to set `method=solar.forecast` and use just one parameter `solar_forecast_kwp` (the PV peak installed power in kW) that should be passed at runtime. This will be using the free public Solar.Forecast account with 12 API requests per hour, per IP, and 1h data resolution. As with Solcast, there are paid account services that may results in better forecasts. +A third method uses the Solar.Forecast service. You will need to set `method=solar.forecast` and use just one parameter `solar_forecast_kwp` (the PV peak installed power in kW) that should be passed at runtime. This will be using the free public Solar.Forecast account with 12 API requests per hour, per IP, and 1h data resolution. As with Solcast, there are paid account services that may result in better forecasts. -For example, for a 5 kWp installation: +For example, for a 5 kW installation: ```bash curl -i -H "Content-Type:application/json" -X POST -d '{"solar_forecast_kwp":5}' http://localhost:5000/action/dayahead-optim ``` ```{note} -If you use the Solar.Forecast or Solcast methods, or explicitly pass the PV power forecast values (see below), the list_pv_module_model and list_pv_inverter_model paramters defined in the configuration will be ignored. +If you use the Solar.Forecast or Solcast methods, or explicitly pass the PV power forecast values (see below), the list_pv_module_model and list_pv_inverter_model parameters defined in the configuration will be ignored. ``` ## Load power forecast -The default method for load forecast is a naive method, also called persistence. 
This is obtained using `method=naive`. This method simply assumes that the forecast for a future period will be equal to the observed values in a past period. The past period is controlled using parameter `delta_forecast` and the default value for this is 24h.
+The default method for load forecast is a naive method, also called persistence. This is obtained using `method=naive`. This method simply assumes that the forecast for a future period will be equal to the observed values in a past period. The past period is controlled using the parameter `delta_forecast` and the default value for this is 24h.

This is presented graphically here:

@@ -104,7 +104,7 @@ This is presented graphically here:
 New in EMHASS v0.4.0: machine learning forecast models!
 ```
 
-Starting with v0.4.0, a new forecast framework is proposed within EMHASS. It provides a more efficient way to forecast the power load consumption. It is based on the `skforecast` module that uses `scikit-learn` regression models considering auto-regression lags as features. The hyperparameter optimization is proposed using bayesian optimization from the `optuna` module. To use this change to `method=mlforecaster` in the configuration.
+Starting with v0.4.0, a new forecast framework is proposed within EMHASS. It provides a more efficient way to forecast the power load consumption. It is based on the `skforecast` module that uses `scikit-learn` regression models considering auto-regression lags as features. The hyperparameter optimization is proposed using Bayesian optimization from the `optuna` module. To use this, change to `method=mlforecaster` in the configuration.

The API provides fit, predict and tune methods. 
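Stepping back to the naive persistence method described at the start of this section, the whole idea fits in a few lines. The following is an illustrative plain-Python sketch, not EMHASS code, and the function name is an assumption:

```python
def naive_persistence_forecast(observed, horizon):
    """Naive (persistence) forecast: assume the next `horizon` steps
    simply repeat the last `horizon` observed values."""
    if len(observed) < horizon:
        raise ValueError("not enough history for the requested horizon")
    return list(observed[-horizon:])

# With a 1-hour time step and delta_forecast = 24h, tomorrow's load
# forecast is simply yesterday's observed load profile:
history = [500, 480, 450, 430] * 6 + [900, 1200, 1100, 950] * 6  # 48 hourly values
forecast = naive_persistence_forecast(history, 24)
```

The `mlforecaster` method replaces this baseline with a learned model, but the persistence forecast remains a useful reference when back-testing.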
@@ -112,9 +112,9 @@ The following is an example of a trained model using a KNN regressor:
 
 ![](./images/load_forecast_knn_optimized.svg)
 
-The naive persistance model performs very well on the 2 day test period, however is well out-performed by the KNN regressor when back-testing on the complete training set (10 months of 30 minute time step data).
+The naive persistence model performs very well on the 2-day test period; however, it is clearly outperformed by the KNN regressor when back-testing on the complete training set (10 months of 30-minute time step data).
 
-The hyperparameter tuning using bayesian optimization improves the bare KNN regressor from $R^2=0.59$ to $R^2=0.75$. The optimized number of lags is $48$.
+The hyperparameter tuning using Bayesian optimization improves the bare KNN regressor from $R^2=0.59$ to $R^2=0.75$. The optimized number of lags is $48$.
 
 See the [machine learning forecaster](mlforecaster.md) section for more details.
 
@@ -148,7 +148,7 @@ Then you will need to define the `prod_sell_price` variable to provide the corre
 
 ## Passing your own forecast data
 
-For all the needed forecasts in EMHASS two other methods allows the user to provide their own forecast value. This may be used to provide a forecast provided by a more powerful and accurate forecaster. The two methods are: `csv` and `list`.
+For all the needed forecasts in EMHASS, two other methods allow the user to provide their own forecast values. These may be used to supply forecasts produced by a more powerful and accurate external forecaster. The two methods are: `csv` and `list`.
 
 For the `csv` method you should push a csv file to the `data` folder. The CSV file should contain no header and the timestamped data should have the following format:
 
 2021-04-29 01:00:00+00:00,243.38
 ...
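A small sketch of producing a CSV in this headerless, timestamped format is shown below. This is illustrative standard-library Python, not EMHASS code; in practice you would write the result to a file in the `data` folder, and the load values here are made up:

```python
import csv
import io
from datetime import datetime, timedelta, timezone

start = datetime(2021, 4, 29, tzinfo=timezone.utc)
values = [271.20, 243.38, 230.11]  # made-up load values in Watts

buf = io.StringIO()
writer = csv.writer(buf)
for i, value in enumerate(values):
    # One row per time step: "YYYY-MM-DD HH:MM:SS+00:00,<value>", no header row
    timestamp = (start + timedelta(hours=i)).isoformat(sep=" ")
    writer.writerow([timestamp, f"{value:.2f}"])

csv_text = buf.getvalue()
```

The timestamps must be timezone-aware and spaced by your configured time step, exactly as in the sample rows above.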
-For the list method you just have to add the data as a list of values to a data dictionnary during the call to `emhass` using the `runtimeparams` option. +For the list method, you just have to add the data as a list of values to a data dictionary during the call to `emhass` using the `runtimeparams` option. -The possible dictionnary keys to pass data are: +The possible dictionary keys to pass data are: - `pv_power_forecast` for the PV power production forecast. @@ -169,12 +169,12 @@ The possible dictionnary keys to pass data are: - `prod_price_forecast` for the PV production selling price forecast. -For example if using the add-on or the standalone docker installation you can pass this data as list of values to the data dictionnary during the `curl` POST: +For example, if using the add-on or the standalone docker installation you can pass this data as a list of values to the data dictionary during the `curl` POST: ```bash curl -i -H "Content-Type: application/json" -X POST -d '{"pv_power_forecast":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}' http://localhost:5000/action/dayahead-optim ``` -You need to be careful here to send the correct amount of data on this list, the correct length. For example, if the data time step is defined to 1h and you are performing a day-ahead optimization, then this list length should be of 24 data points. +You need to be careful here to send the correct amount of data on this list, the correct length. For example, if the data time step is defined as 1 hour and you are performing a day-ahead optimization, then this list length should be 24 data points. 
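Since sending a list of the wrong length is a common mistake, the required length can be checked before issuing the `curl` call. This is an illustrative Python helper, not part of the EMHASS API, and it assumes the optimization window divides evenly by the time step:

```python
def expected_list_length(freq_minutes, horizon_hours=24):
    """Number of data points a runtime forecast list must contain to
    cover `horizon_hours` at a time step of `freq_minutes`."""
    points, remainder = divmod(horizon_hours * 60, freq_minutes)
    if remainder:
        raise ValueError("horizon is not a whole number of time steps")
    return points

# A day-ahead optimization needs 24 points at a 1-hour time step,
# and 48 points at the default 30-minute time step:
hourly = expected_list_length(60)
half_hourly = expected_list_length(30)
```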
### Example using: Solcast forecast + Amber prices

@@ -207,9 +207,9 @@ sensors:
{%- endfor %}
 {{ (values_all.all)[:48] }}
 ```
-With this you can now feed this Solcast forecast to EMHASS along with the mapping of the Amber prices.
+With this, you can now feed this Solcast forecast to EMHASS along with the mapping of the Amber prices.
 
-A MPC call may look like this for 4 deferrable loads:
+An MPC call may look like this for 4 deferrable loads:
 
 ```yaml
 post_mpc_optim_solcast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{(
@@ -317,9 +317,9 @@ shell_command:
 
 ## Now/current values in forecasts
 
-When implementing MPC applications with high optimization frequencies it can be interesting if at each MPC iteration the forecast values are updated with the real now/current values measured from live data. This is useful to improve the accuracy of the short-term forecasts. As shown in some of the references below, mixing with a persistance model make sense since this type of model performs very good at low temporal resolutions (intra-hour).
+When implementing MPC applications with high optimization frequencies it can be interesting if, at each MPC iteration, the forecast values are updated with the real now/current values measured from live data. This is useful to improve the accuracy of the short-term forecasts. As shown in some of the references below, mixing with a persistence model makes sense since this type of model performs very well at low temporal resolutions (intra-hour).
 
-A simple integration of current/now values for PV and load forecast is implemented using a mixed one-observation presistence model and the one-step-ahead forecasted values from the current passed method.
+A simple integration of current/now values for PV and load forecast is implemented using a mixed one-observation persistence model and the one-step-ahead forecasted values from the currently selected forecast method. 
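Numerically, this mixing only touches the first element of the forecast series. A minimal sketch follows (illustrative Python, not EMHASS internals; `alpha` and `beta` are the weighting coefficients, both 0.5 by default):

```python
def mix_forecast_first_step(forecast, measured_now, alpha=0.5, beta=0.5):
    """Blend the one-step-ahead forecast with the last measured value,
    leaving the rest of the forecast horizon untouched."""
    mixed = list(forecast)
    mixed[0] = alpha * forecast[0] + beta * measured_now
    return mixed

# The forecast says 1000 W for the next step but we just measured 600 W:
pv_forecast = [1000.0, 1500.0, 1800.0]
balanced = mix_forecast_first_step(pv_forecast, 600.0)                    # 0.5/0.5 mix
trust_measured = mix_forecast_first_step(pv_forecast, 600.0, 0.25, 0.75)  # favor measurement
```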
This can be represented by the following equation at time $t=k$:

@@ -327,9 +327,9 @@
 $$
 P^{mix}_{PV} = \alpha \hat{P}_{PV}(k) + \beta P_{PV}(k-1)
 $$
 
-Where $P^{mix}_{PV}$ is the mixed power forecast for PV prodduction, $\hat{P}_{PV}(k)$ is the current first element of the original forecast data, $P_{PV}(k-1)$ is the now/current value of PV production and $\alpha$ and $\beta$ are coefficients that can be fixed to reflect desired dominance of now/current values over the original forecast data or viceversa.
+Where $P^{mix}_{PV}$ is the mixed power forecast for PV production, $\hat{P}_{PV}(k)$ is the current first element of the original forecast data, $P_{PV}(k-1)$ is the now/current value of PV production, and $\alpha$ and $\beta$ are coefficients that can be fixed to reflect the desired dominance of now/current values over the original forecast data, or vice versa.
 
-The `alpha` and `beta` values can be passed in the dictionnary using the `runtimeparams` option during the call to `emhass`. If not passed they will both take the default 0.5 value. These values should be fixed following your own analysis on how much weight you want to put on measured values to be used as the persistance forecast. This will also depend on your fixed optimization time step. As a default they will be at 0.5, but if you want to give more weight to measured persistance values, then you can try lower $\alpha$ and rising $\beta$, for example: `alpha=0.25`, `beta=0.75`. After this you will need to check with the recored history if these values fits your needs.
+The `alpha` and `beta` values can be passed in the dictionary using the `runtimeparams` option during the call to `emhass`. If not passed, they will both take the default 0.5 value. These values should be fixed following your own analysis of how much weight you want to put on measured values to be used as the persistence forecast. This will also depend on your fixed optimization time step. 
As a default, they will be at 0.5, but if you want to give more weight to measured persistence values, then you can try lowering $\alpha$ and raising $\beta$, for example: `alpha=0.25`, `beta=0.75`. After this, you will need to check with the recorded history whether these values fit your needs.

## References

From d1504061b38cfe6960a2cae873c3e03d5dbe5401 Mon Sep 17 00:00:00 2001
From: James McMahon
Date: Tue, 30 Jul 2024 15:47:01 +0100
Subject: [PATCH 05/11] Update index.md

---
 docs/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/index.md b/docs/index.md
index 14972fb8..6b3a9f01 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -9,7 +9,7 @@
 ```
 
 
-Welcome to the documentation of EMHASS. With this package written in Python you will be able to implement a real Energy Management System for your household. This software was designed to be easy configurable and with a fast integration with Home Assistant:
+Welcome to the documentation of EMHASS. With this package written in Python, you will be able to implement a real Energy Management System for your household. This software was designed to be easily configurable, with fast integration with Home Assistant:
 
 To get started go ahead and look at the installation procedure and usage instructions below.

From bb16e118d7baa87e3395d80810fcdfe9662a7dbe Mon Sep 17 00:00:00 2001
From: James McMahon
Date: Tue, 30 Jul 2024 15:54:30 +0100
Subject: [PATCH 06/11] Update lpems.md

---
 docs/lpems.md | 58 +++++++++++++++++++++++++-----------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/docs/lpems.md b/docs/lpems.md
index 060bb48e..4c56eb29 100644
--- a/docs/lpems.md
+++ b/docs/lpems.md
@@ -1,28 +1,28 @@
 # An EMS based on Linear Programming
 
-In this section we present the basics of the Linear Programming (LP) approach for a household Energy Management System (EMS). 
+In this section, we present the basics of the Linear Programming (LP) approach for a household Energy Management System (EMS).

## Motivation

-Imagine that we have installed some solar panels in our house. Imagine that we have Home Assistant and that we can control (on/off) some crucial power consumptions in our home. For example the water heater, the pool pump, a dispatchable dishwasher, and so on. We can also imagine that we have installed a battery like a PowerWall, in order to maximize the PV self-consumption. With Home Assistant we also have sensors that can measure the power produced by our PV plant, the global power consumption of the house and hopefully the power consumed by the controllable loads. Home Assistant has released the Energy Dashboard where we can viusalize all these variables in somme really good looking graphics. See: [https://www.home-assistant.io/blog/2021/08/04/home-energy-management/](https://www.home-assistant.io/blog/2021/08/04/home-energy-management/)
+Imagine that we have installed some solar panels in our house. Imagine that we have Home Assistant and that we can control (on/off) some significant power consumers in our home: for example, the water heater, the pool pump, a dispatchable dishwasher, and so on. We can also imagine that we have installed a battery like a PowerWall, to maximize the PV self-consumption. With Home Assistant we also have sensors that can measure the power produced by our PV plant, the global power consumption of the house and hopefully the power consumed by the controllable loads. Home Assistant has released the Energy Dashboard where we can visualize all these variables in some good-looking graphics. See: [https://www.home-assistant.io/blog/2021/08/04/home-energy-management/](https://www.home-assistant.io/blog/2021/08/04/home-energy-management/)

-Now, how can we be certain of the good and optimal management of these devices? If we define a fixed schedule for our deferrable loads, is this the best solution? 
When we can indicate or force a charge or discharge on the battery? This is a well known academic problem for an Energy Management System.
+Now, how can we be certain of the good and optimal management of these devices? If we define a fixed schedule for our deferrable loads, is this the best solution? When can we indicate or force a charge or discharge of the battery? This is a well-known academic problem for an Energy Management System.

-The first and most basic approach could be to define some basic rules or heuristics, this is the so called rule-based approach. The rules could be some fixed schedules for the deferrable loads, or some threshold based triggering of the battery charge/discharge, and so on. The rule-based approach has the advantage of being simple to implement and robust. However, the main disadvantage is that optimality is not guaranteed.
+The first and most basic approach could be to define some basic rules or heuristics; this is the so-called rule-based approach. The rules could be some fixed schedules for the deferrable loads, some threshold-based triggering of the battery charge/discharge, and so on. The rule-based approach has the advantage of being simple to implement and robust. However, the main disadvantage is that optimality is not guaranteed.

-The goal of this work is to provide an easy to implement framework where anyone using Home Assistant can apply the best and optimal set of instructions to control the energy flow in a household. There are many ways and techniques that can be found in the literature to implement optimized EMS. In this package we are using just one of those techniques, the Linear Programming approach, that will be presented below.
+The goal of this work is to provide an easy-to-implement framework where anyone using Home Assistant can apply the best and optimal set of instructions to control the energy flow in a household. There are many techniques in the literature to implement an optimized EMS. 
In this package, we are using just one of those techniques, the Linear Programming approach, which will be presented below.

-When I was designing and testing this package in my own house I estimated a daily gain between 5% and 8% when using the optimized approach versus a rule-based one. In my house I have a 5 kWp PV installation with a contractual grid supply of 9 kVA. I have a grid contract with two tariffs for power consumption for the grid (peak and non-peak hours) and one tariff for the excess PV energy injected to the grid. I have no battery installed, but I suppose that the margin of gain would be even bigger with a battery, adding flexibility to the energy management. Of course the disadvantage is the initial capital cost of the battery stack. In my case the gain comes from the fact that the EMS is helping me to decide when to turn on my water heater and the pool pump. If we have a good clear sky day the results of the optimization will normally be to turn them on during the day where solar production is present. But if the day is going to be really clouded, then is possible that the best solution will be to turn them on during the non-peak tariff hours, for my case this is during the night from 9pm to 2am. All these decisions are made automatically by the EMS using forecasts of both the PV production and the house power consumption.
+When I was designing and testing this package in my own house I estimated a daily gain between 5% and 8% when using the optimized approach versus a rule-based one. In my house, I have a 5 kWp PV installation with a contractual grid supply of 9 kVA. I have a grid contract with two tariffs for power consumption from the grid (peak and non-peak hours) and one tariff for the excess PV energy injected into the grid. I have no battery installed, but I suppose that the margin of gain would be even bigger with a battery, adding flexibility to the energy management. 
Of course, the disadvantage is the initial capital cost of the battery stack. In my case, the gain comes from the fact that the EMS is helping me to decide when to turn on my water heater and the pool pump. If we have a clear-sky day, the results of the optimization will normally be to turn them on during the day when solar production is present. But if the day is going to be cloudy, then it is possible that the best solution will be to turn them on during the non-peak tariff hours; in my case this is during the night, from 9pm to 2am. All these decisions are made automatically by the EMS using forecasts of both the PV production and the house power consumption.

-Some other good packages and projects offer similar approaches to EMHASS. I can cite for example the good work done by my friends at the G2ELab in Grenoble, France. They have implemented the OMEGAlpes package that can also be used as an optimized EMS using LP and MILP (see: [https://gricad-gitlab.univ-grenoble-alpes.fr/omegalpes/omegalpes](https://gricad-gitlab.univ-grenoble-alpes.fr/omegalpes/omegalpes)). But here in EMHASS the first goal was to keep it simple to implement using configuration files and the second goal was that it should be easy to integrate to Home Assistant. I am sure that there will be a lot of room for optimize the code and the package implementation as this solution will be used and tested in the future.
+Some other good packages and projects offer similar approaches to EMHASS. I can cite, for example, the good work done by my friends at the G2ELab in Grenoble, France. They have implemented the OMEGAlpes package, which can also be used as an optimized EMS using LP and MILP (see: [https://gricad-gitlab.univ-grenoble-alpes.fr/omegalpes/omegalpes](https://gricad-gitlab.univ-grenoble-alpes.fr/omegalpes/omegalpes)). But here in EMHASS the first goal was to keep it simple to implement using configuration files, and the second goal was that it should be easy to integrate with Home Assistant. 
I am sure that there will be a lot of room to optimize the code and the package implementation as this solution will be used and tested in the future. I have included a list of scientific references at the bottom if you want to deep into the technical aspects of this subject. -Ok, let's start by a resumed presentation of the LP approach. +Ok, let's start with a resumed presentation of the LP approach. ## Linear programming -Linear programming is an optimization method that can be used to obtain the best solution from a given cost function using a linear modeling of a problem. Typically we can also also add linear constraints to the optimization problem. +Linear programming is an optimization method that can be used to obtain the best solution from a given cost function using linear modelling of a problem. Typically we can also add linear constraints to the optimization problem. This can be mathematically written as: @@ -34,9 +34,9 @@ $$ with $\mathbf{x}$ the variable vector that we want to find, $\mathbf{c}$ and $\mathbf{b}$ are vectors with known coefficients and $\mathbf{A}$ is a matrix with known values. Here the cost function is defined by $\mathbf{c}^\mathrm{T} \mathbf{x}$. The inequalities $A \mathbf{x} \leq \mathbf{b}$ and $\mathbf{x} \ge \mathbf{0}$ represent the convex region of feasible solutions. -We could find a mix of real and integer variables in $\mathbf{x}$, in this case the problem is referred as Mixed Integer Linear Programming (MILP). Typically this kind of problem use the branch and boud type of solvers or similars. +We could find a mix of real and integer variables in $\mathbf{x}$, in this case the problem is referred as Mixed Integer Linear Programming (MILP). Typically this kind of problem uses 'branch and bound' type solvers, or similar. -The LP has of course its set of advantages and disadvantages. 
The main advantage is the that if the problem is well posed and the region of feasible possible solutions is convex, then a solution is guaranteed and solving times are usually fast when compared to other optimization techniques (as dynamic programming for example). However we can easily fall into memory issues, larger solving times and convergence problems if the size of the problem is too high (too many equations). +The LP has, of course, its set of advantages and disadvantages. The main advantage is that if the problem is well posed and the region of feasible possible solutions is convex, then a solution is guaranteed and solving times are usually fast when compared to other optimization techniques (such as dynamic programming for example). However we can easily fall into memory issues, larger solving times and convergence problems if the size of the problem is too high (too many equations). ## Household EMS with LP @@ -48,31 +48,31 @@ Three main cost functions are proposed. #### **1/ The _profit_ cost function:** -In this case the cost function is posed to maximize the profit. The profit is defined by the revenues from selling PV power to the grid minus the cost of consumed energy from the grid. +In this case, the cost function is posed to maximize the profit. The profit is defined by the revenues from selling PV power to the grid minus the cost of consumed energy from the grid. 
This can be represented with the following objective function:

$$
\sum_{i=1}^{\Delta_{opt}/\Delta_t} -0.001*\Delta_t*(unit_{LoadCost}[i]*P_{gridPos}[i] + prod_{SellPrice}*P_{gridNeg}[i])
$$

-> For the special case of an energy contract where the totality of the PV produced energy is injected into the grid this will be:
+> For the special case of an energy contract where the totality of the PV-produced energy is injected into the grid this will be:

$$
\sum_{i=1}^{\Delta_{opt}/\Delta_t} -0.001*\Delta_t*(unit_{LoadCost}[i]*(P_{load}[i]+P_{defSum}[i]) + prod_{SellPrice}*P_{gridNeg}[i])
$$

where $\Delta_{opt}$ is the total period of optimization in hours, $\Delta_t$ is the optimization time step in hours, $unit_{LoadCost}[i]$ is the cost of the energy from the utility in EUR/kWh, $P_{load}$ is the electricity load consumption (positive defined), $P_{defSum}$ is the sum of the defined deferrable loads, $prod_{SellPrice}$ is the price of the energy sold to the utility, and $P_{gridNeg}$ is the negative component of the grid power, i.e. the power exported to the grid. All these power values are expressed in Watts.

#### **2/ The energy from the grid _cost_:**

-In this case the cost function is computed as the cost of the energy coming from the grid. The PV power injected into the grid is not valorized.
+In this case, the cost function is computed as the cost of the energy coming from the grid. The PV power injected into the grid is not valued. 
This is:

$$
\sum_{i=1}^{\Delta_{opt}/\Delta_t} -0.001*\Delta_t*unit_{LoadCost}[i]*P_{gridPos}[i]
$$

-> Again, for the special case of an energy contract where the totality of the PV produced energy is injected into the grid this will be:
+> Again, for the special case of an energy contract where the totality of the PV-produced energy is injected into the grid this will be:

$$
\sum_{i=1}^{\Delta_{opt}/\Delta_t} -0.001*\Delta_t* unit_{LoadCost}[i]*(P_{load}[i]+P_{defSum}[i])
$$

@@ -94,7 +94,7 @@ $$
 where bigM equals 1000.
-Adding this bigM factor will give more weight to the cost of grid offtake, or formulated differently: avoiding offtake through self-consumption will have strong influence on the calculated cost.
+Adding this bigM factor will give more weight to the cost of grid offtake, or formulated differently: avoiding offtake through self-consumption will have a strong influence on the calculated cost.
 
 Please note that the bigM factor is not used in the calculated cost that comes out of the optimizer results. It is only used to drive the optimizer.
 
@@ -127,8 +127,8 @@ $$
 SC[i] \leq P_{load}[i]+P_{defSum}[i]
 $$
 
-All these cost functions can be chosen by the user with the `--costfun` tag with the `emhass` command. The options are: `profit`, `cost`, `self-consumption`.
-They are all set in the LP formulation as cost function to maximize.
+All these cost functions can be chosen by the user with the `--costfun` tag with the `emhass` command. The options are: `profit`, `cost`, and `self-consumption`.
+They are all set in the LP formulation as a cost function to maximize.
 
 The problem constraints are written as follows.
 
### The main constraint: power balance

$$
P_{PV_i}-P_{defSum_i}-P_{load_i}+P_{gridNeg_i}+P_{gridPos_i}+P_{stoPos_i}+P_{stoNeg_i}=0
$$

-with $P_{PV}$ the PV power production, $P_{gridPos}$ the positive component of the grid power (from grid to household), $P_{stoPos}$ and $P_{stoNeg}$ are the positive (discharge) and negative components of the battery power (charge). 
+with $P_{PV}$ the PV power production, $P_{gridPos}$ the positive component of the grid power (from the grid to the household), and $P_{stoPos}$ and $P_{stoNeg}$ the positive (discharge) and negative (charge) components of the battery power.

-Normally the PV power production and the electricity load consumption are considered known. In the case of a day-ahead optimization these should be forecasted values. When the optimization problem is solved the others power defining the power flow are found as a result: the deferrable load power, the grid power and the battery power.
+Normally the PV power production and the electricity load consumption are considered known. In the case of a day-ahead optimization, these should be forecasted values. When the optimization problem is solved, the other power values defining the power flow are found as a result: the deferrable load power, the grid power and the battery power.

### Other constraints

-Some other special linear constraints are defined. A constraint is introduced to avoid injecting and consuming from grid at the same time, which is physically impossible. Other constraints are used to control the total time that a deferrable load will stay on and the number of start-ups.
+Some other special linear constraints are defined. A constraint is introduced to avoid injecting into and consuming from the grid at the same time, which is physically impossible. Other constraints are used to control the total time that a deferrable load will stay on and the number of start-ups.

Constraints are also used to define semi-continuous variables. Semi-continuous variables are variables that must take a value between their minimum and maximum or zero.

@@ -189,13 +189,13 @@ The following example diagram may help us understand the time frames of these op

### Perfect forecast optimization

-This is the first type of optimization task that are proposed with this package. 
In this case the main inputs, the PV power production and the house power consumption, are fixed using historical values from the past. This mean that in some way we are optimizing a system with a perfect knowledge of the future. This optimization is of course non-practical in real life. However this can be give us the best possible solution of the optimization problem that can be later used as a reference for comparison purposes. On the example diagram presented before, the perfect optimization is defined on a 5-day period. These historical values will be retrieved from the Home Assistant database.
+This is the first type of optimization task that is proposed with this package. In this case, the main inputs, the PV power production and the house power consumption, are fixed using historical values from the past. This means that in some way we are optimizing a system with a perfect knowledge of the future. This optimization is of course non-practical in real life. However, this can give us the best possible solution to the optimization problem that can be later used as a reference for comparison purposes. In the example diagram presented before, the perfect optimization is defined on a 5-day period. These historical values will be retrieved from the Home Assistant database.

### Day-ahead optimization

-In this second type of optimization task the PV power production and the house power consumption are forecasted values. This is the action that should be performed in a real case scenario and is the case that should be launched from Home Assistant to obtain an optimized energy management of future actions. This optimization is defined in the time frame of the next 24 hours.
+In this second type of optimization task, the PV power production and the house power consumption are forecasted values.
This is the action that should be performed in a real case scenario and is the case that should be launched from Home Assistant to obtain an optimized energy management plan for future actions. This optimization is defined in the time frame of the next 24 hours.

-As the optimization is bounded to forecasted values, it will also be bounded to uncertainty. The quality and accuracy of the optimization results will be inevitably linked to the quality of the forecast used for these values. The better the forecast error, the better accuracy of the optimization result.
+As the optimization is bounded to forecasted values, it will also be bounded to uncertainty. The quality and accuracy of the optimization results will be inevitably linked to the quality of the forecast used for these values. The lower the forecast error, the better the accuracy of the optimization result.

### Model Predictive Control (MPC) optimization

@@ -208,11 +208,11 @@ This type of controller performs the following actions:

- Apply the first element of the obtained optimized control variables.

- Repeat at a relatively high frequency, ex: 5 min.

-On the example diagram presented before, the MPC is performed on 6h intervals at 6h, 12h and 18h. The prediction horizon is progressively reducing during the day to keep the one-day energy optimization notion (it should not just be a fixed rolling window as, for example, you would like to know when you want to reach the desired `soc_final`). This type of optimization is used to take advantage of actualized forecast values during throughout the day. The user can of course choose higher/lower implementation intervals, keeping in mind the contraints below on the `prediction_horizon`.
+In the example diagram presented before, the MPC is performed at 6h intervals at 6h, 12h and 18h.
The prediction horizon is progressively reduced during the day to keep the one-day energy optimization notion (it should not just be a fixed rolling window as, for example, you would like to know when you want to reach the desired `soc_final`). This type of optimization is used to take advantage of actualized forecast values throughout the day. The user can of course choose higher/lower implementation intervals, keeping in mind the constraints below on the `prediction_horizon`.

When applying this controller, the following `runtimeparams` should be defined:

-- `prediction_horizon` for the MPC prediction horizon. Fix this at at least 5 times the optimization time step.
+- `prediction_horizon` for the MPC prediction horizon. Fix this to at least 5 times the optimization time step.

- `soc_init` for the initial value of the battery SOC for the current iteration of the MPC.

@@ -227,9 +227,9 @@ When applying this controller, the following `runtimeparams` should be defined:
In a practical use case, the values for `soc_init` and `soc_final` for each MPC optimization can be taken from the initial day-ahead optimization performed at the beginning of each day.

### Time windows for deferrable loads
-Since v0.7.0, the user has the possibility to limit the operation of each deferrable load to a specific timewindow, which can be smaller than the prediction horizon. This is done by means of the `def_start_timestep` and `def_end_timestep` parameters. These parameters can either be set in the configuration screen of the Home Assistant EMHASS add-on, in the config_emhass.yaml file, or provided as runtime parameters.
+Since v0.7.0, the user has the possibility to limit the operation of each deferrable load to a specific time window, which can be smaller than the prediction horizon. This is done by means of the `def_start_timestep` and `def_end_timestep` parameters.
These parameters can either be set in the configuration screen of the Home Assistant EMHASS add-on, or in the config_emhass.yaml file, or provided as runtime parameters. -Taking the example of two electric vehicle that need to charge, but which are not available during the whole prediction horizon: +Take the example of two electric vehicles that need to charge, but which are not available during the whole prediction horizon: ![image](./images/deferrable_timewindow_evexample.png) For this example, the settings could look like this: From f2c1ec9b5c1ed11bccc00b7f4e041291cfbe66e0 Mon Sep 17 00:00:00 2001 From: James McMahon Date: Tue, 30 Jul 2024 15:59:14 +0100 Subject: [PATCH 07/11] Update mlforecaster.md --- docs/mlforecaster.md | 66 ++++++++++++++++++++++---------------------- 1 file changed, 33 insertions(+), 33 deletions(-) diff --git a/docs/mlforecaster.md b/docs/mlforecaster.md index 82e52008..5933bae8 100644 --- a/docs/mlforecaster.md +++ b/docs/mlforecaster.md @@ -1,38 +1,38 @@ # The machine learning forecaster -Starting with v0.4.0, a new forecast framework is proposed within EMHASS. It provides a more efficient way to forecast the power load consumption. It is based on the `skforecast` module that uses `scikit-learn` regression models considering auto-regression lags as features. The hyperparameter optimization is proposed using bayesian optimization from the `optuna` module. +Starting with v0.4.0, a new forecast framework is proposed within EMHASS. It provides a more efficient way to forecast the power load consumption. It is based on the `skforecast` module that uses `scikit-learn` regression models considering auto-regression lags as features. The hyperparameter optimization is proposed using Bayesian optimization from the `optuna` module. This API provides three main methods: -- fit: to train a model with the passed data. This method is exposed with the `forecast-model-fit` end point. +- fit: to train a model with the passed data. 
This method is exposed with the `forecast-model-fit` endpoint. -- predict: to obtain a forecast from a pre-trained model. This method is exposed with the `forecast-model-predict` end point. +- predict: to obtain a forecast from a pre-trained model. This method is exposed with the `forecast-model-predict` endpoint. -- tune: to optimize the models hyperparameters using bayesian optimization. This method is exposed with the `forecast-model-tune` end point. +- tune: to optimize the model's hyperparameters using Bayesian optimization. This method is exposed with the `forecast-model-tune` endpoint. ## A basic model fit To train a model use the `forecast-model-fit` end point. -Some paramters can be optionally defined at runtime: +Some parameters can be optionally defined at runtime: -- `days_to_retrieve`: the total days to retrieve from Home Assistant for model training. Define this in order to retrieve as much history data as possible. +- `days_to_retrieve`: the total days to retrieve from Home Assistant for model training. Define this to retrieve as much history data as possible. ```{note} -The minimum number of `days_to_retrieve` is hard coded to 9 by default. But it is adviced to provide more data for better accuracy by modifying your Home Assistant recorder settings. +The minimum number of `days_to_retrieve` is hard coded to 9 by default. However, it is advised to provide more data for better accuracy by modifying your Home Assistant recorder settings. ``` -- `model_type`: define the type of model forecast that this will be used for. For example: `load_forecast`. This should be an unique name if you are using multiple custom forecast models. +- `model_type`: define the type of model forecast that this will be used for. For example: `load_forecast`. This should be a unique name if you are using multiple custom forecast models. - `var_model`: the name of the sensor to retrieve data from Home Assistant. Example: `sensor.power_load_no_var_loads`. 
-- `sklearn_model`: the `scikit-learn` model that will be used. For now only this options are possible: `LinearRegression`, `ElasticNet` and `KNeighborsRegressor`.
+- `sklearn_model`: the `scikit-learn` model that will be used. For now, only these options are possible: `LinearRegression`, `ElasticNet` and `KNeighborsRegressor`.

-- `num_lags`: the number of auto-regression lags to consider. A good starting point is to fix this as one day. For example if your time step is 30 minutes, then fix this to 48, if the time step is 1 hour the fix this to 24 and so on.
+- `num_lags`: the number of auto-regression lags to consider. A good starting point is to fix this at one day. For example, if your time step is 30 minutes, fix this to 48; if the time step is 1 hour, fix this to 24; and so on.

- `split_date_delta`: the delta from now to `split_date_delta` that will be used as the test period to evaluate the model.

-- `perform_backtest`: if `True` then a back testing routine is performed to evaluate the performance of the model on the complete train set.
+- `perform_backtest`: if `True` then a backtesting routine is performed to evaluate the performance of the model on the complete train set.
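To make the parameter list concrete, here is a small Python sketch that assembles these options into the JSON body that a `curl` call to the `forecast-model-fit` endpoint would carry. The values shown are illustrative only and should be adapted to your own sensors and time step:

```python
import json

# Illustrative runtime parameters for a model fit. Every key comes from the
# parameter list above; the values here are examples, not recommendations.
runtimeparams = {
    "days_to_retrieve": 240,
    "model_type": "load_forecast",
    "var_model": "sensor.power_load_no_var_loads",
    "sklearn_model": "KNeighborsRegressor",
    "num_lags": 48,  # one day of lags at a 30-minute time step
    "perform_backtest": False,
}
# This string is what the -d argument of the curl call would carry.
payload = json.dumps(runtimeparams)
print(payload)
```

The resulting payload can then be posted to `http://localhost:5000/action/forecast-model-fit` exactly like the `curl` examples shown below.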
The default values for these parameters are: ```yaml @@ -52,7 +52,7 @@ A correct `curl` call to launch a model fit can look like this: curl -i -H "Content-Type:application/json" -X POST -d '{}' http://localhost:5000/action/forecast-model-fit ``` -As an example, the following figure shows a 240 days load power data retrieved from EMHASS and that will be used for a model fit: +As an example, the following figure shows 240 days of load power data retrieved from EMHASS that will be used for a model fit: ![](./images/inputs_power_load_forecast.svg) @@ -62,12 +62,12 @@ After applying the `curl` command to fit the model the following information is 2023-02-20 22:05:23,882 - __main__ - INFO - Elapsed time: 1.2236599922180176 2023-02-20 22:05:24,612 - __main__ - INFO - Prediction R2 score: 0.2654560762747957 -As we can see the $R^2$ score for the fitted model on the 2 day test perdiod is $0.27$. -A quick prediction graph using the fitted model should be available in the webui: +As we can see the $R^2$ score for the fitted model on the 2-day test period is $0.27$. +A quick prediction graph using the fitted model should be available in the web UI: ![](./images/load_forecast_knn_bare.svg) -Visually the prediction looks quite acceptable but we need to evaluate this further. For this we can use the `"perform_backtest": True` option to perform a backtest evaluation using this syntax: +Visually the prediction looks quite acceptable but we need to evaluate this further. For this, we can use the `"perform_backtest": True` option to perform a backtest evaluation using this syntax: ``` curl -i -H "Content-Type:application/json" -X POST -d '{"perform_backtest": "True"}' http://localhost:5000/action/forecast-model-fit ``` @@ -85,7 +85,7 @@ Here is the graphic result of the backtesting routine: ## The predict method -To obtain a prediction using a previously trained model use the `forecast-model-predict` end point. 
+To obtain a prediction using a previously trained model use the `forecast-model-predict` endpoint. ``` curl -i -H "Content-Type:application/json" -X POST -d '{}' http://localhost:5000/action/forecast-model-predict ``` @@ -93,13 +93,13 @@ If needed pass the correct `model_type` like this: ```bash curl -i -H "Content-Type:application/json" -X POST -d '{"model_type": "load_forecast"}' http://localhost:5000/action/forecast-model-predict ``` -The resulting forecast DataFrame is shown in the webui. +The resulting forecast DataFrame is shown in the web UI. -It is possible to publish the predict method results to a Home Assistant sensor. By default this is desactivated but it can be activated by using runtime parameters. +It is possible to publish the predict method results to a Home Assistant sensor. By default, this is deactivated but it can be activated by using runtime parameters. The list of parameters needed to set the data publish task is: -- `model_predict_publish`: set to `True` to activate the publish action when calling the `forecast-model-predict` end point. +- `model_predict_publish`: set to `True` to activate the publish action when calling the `forecast-model-predict` endpoint. - `model_predict_entity_id`: the unique `entity_id` to be used. @@ -119,13 +119,13 @@ runtimeparams = { ## The tuning method with Bayesian hyperparameter optimization -With a previously fitted model you can use the `forecast-model-tune` end point to tune its hyperparameters. This will be using bayeasian optimization with a wrapper of `optuna` in the `skforecast` module. +With a previously fitted model, you can use the `forecast-model-tune` endpoint to tune its hyperparameters. This will be using Bayesian optimization with a wrapper of `optuna` in the `skforecast` module. You can pass the same parameter you defined during the fit step, but `var_model` has to be defined at least. 
According to the example, the syntax will be: ```bash curl -i -H "Content-Type:application/json" -X POST -d '{"var_model": "sensor.power_load_no_var_loads"}' http://localhost:5000/action/forecast-model-tune ``` -This will launch the optimization routine and optimize the internal hyperparamters of the `scikit-learn` regressor and it will find the optimal number of lags. +This will launch the optimization routine and optimize the internal hyperparameters of the `scikit-learn` regressor and it will find the optimal number of lags. The following are the logs with the results obtained after the optimization for a KNN regressor: 2023-02-20 22:06:43,112 - __main__ - INFO - Backtesting and bayesian hyperparameter optimization @@ -134,37 +134,37 @@ The following are the logs with the results obtained after the optimization for 2023-02-20 22:25:50,282 - __main__ - INFO - R2 score for naive prediction in train period (backtest): 0.22525145245617462 2023-02-20 22:25:50,284 - __main__ - INFO - R2 score for optimized prediction in train period: 0.7485208725102304 2023-02-20 22:25:50,312 - __main__ - INFO - R2 score for non-optimized prediction in test period: 0.7098996657492629 - 2023-02-20 22:25:50,337 - __main__ - INFO - R2 score for naive persistance forecast in test period: 0.8714987509894714 + 2023-02-20 22:25:50,337 - __main__ - INFO - R2 score for naive persistence forecast in test period: 0.8714987509894714 2023-02-20 22:25:50,352 - __main__ - INFO - R2 score for optimized prediction in test period: 0.7572325833767719 This is a graph comparing these results: ![](./images/load_forecast_knn_optimized.svg) -The naive persistance load forecast model performs very well on the 2 day test period with a $R^2=0.87$, however is well out-performed by the KNN regressor when back-testing on the complete training set (10 months of 30 minute time step data) with a score $R^2=0.23$. 
+The naive persistence load forecast model performs very well on the 2-day test period with an $R^2=0.87$; however, it is well outperformed by the KNN regressor when back-testing on the complete training set (10 months of 30-minute time step data), where the naive model only scores $R^2=0.23$.

-The hyperparameter tuning using bayesian optimization improves the bare KNN regressor from $R^2=0.59$ to $R^2=0.75$. The optimized number of lags is $48$.
+The hyperparameter tuning using Bayesian optimization improves the bare KNN regressor from $R^2=0.59$ to $R^2=0.75$. The optimized number of lags is $48$.

```{warning}
-The tuning routine can be computing intense. If you have problems with computation times, try to reduce the `days_to_retrieve` parameter. In the example shown, for a 240 days train period, the optimization routine took almost 20 min to finish on an amd64 Linux architecture machine with a i5 processor and 8 Gb of RAM. This is a task that should be performed once in a while, for example every week.
+The tuning routine can be computationally intensive. If you have problems with computation times, try to reduce the `days_to_retrieve` parameter. In the example shown, for a 240-day train period, the optimization routine took almost 20 min to finish on an amd64 Linux architecture machine with an i5 processor and 8 GB of RAM. This is a task that should be performed once in a while, for example, every week.
```

-## How does this works?
+## How does this work?

This machine learning forecast class is based on the `skforecast` module.

-We use the recursive autoregresive forecaster with added features.
+We use the recursive autoregressive forecaster with added features.

-I will borrow this image from the `skforecast` [documentation](https://skforecast.org/0.11.0/user_guides/autoregresive-forecaster) that help us understand the working principles of this type of model.
+I will borrow this image from the `skforecast` [documentation](https://skforecast.org/0.11.0/user_guides/autoregresive-forecaster) that helps us understand the working principles of this type of model.

![](https://skforecast.org/0.11.0/img/diagram-recursive-mutistep-forecasting.png)

-With this type of model what we do in EMHASS is to create new features based on the timestamps of the data retrieved from Home Assistant. We create new features based on the day, the hour of the day, the day of the week, the month of the year, among others.
+With this type of model, what we do in EMHASS is to create new features based on the timestamps of the data retrieved from Home Assistant. We create new features based on the day, the hour of the day, the day of the week, and the month of the year, among others.

-What is interesting is that these added features are based on the timestamps, they always known in advance and useful for generating forecasts. These are the so-called future known covariates.
+What is interesting is that, because these added features are based on the timestamps, they are always known in advance and are useful for generating forecasts. These are the so-called future known covariates.

-In the future we may test to expand using other possible known future covariates from Home Assistant, for example a known (forecasted) temperature, a scheduled presence sensor, etc.
+In the future, we may try to expand this using other possible known future covariates from Home Assistant, for example, a known (forecasted) temperature, a scheduled presence sensor, etc.

## Going further?

-This class can be gebneralized to actually forecasting any given sensor variable present in Home Assistant. It has been tested and the main initial motivation for this development was for a better load power consumption forecasting. But in reality is has been coded in a flexible way so that you can control what variable is used, how many lags, the amount of data used to train the model, etc.
+This class can be generalized to forecast any given sensor variable present in Home Assistant. It has been tested and the main initial motivation for this development was for better load power consumption forecasting. But in reality, it has been coded flexibly so that you can control what variable is used, how many lags, the amount of data used to train the model, etc. -So you can really go further and try to forecast other types of variables and possible use the results for some interesting automations in Home Assistant. If doing this, was is important is to evaluate the pertinence of the obtained forecasts. The hope is that the tools proposed here can be used for that purpose. +So you can go further and try to forecast other types of variables and possibly use the results for some interesting automations in Home Assistant. If doing this, what is important is to evaluate the pertinence of the obtained forecasts. The hope is that the tools proposed here can be used for that purpose. From b4b491e6a92739b5553c4afb4023588ce261eb05 Mon Sep 17 00:00:00 2001 From: James McMahon Date: Tue, 30 Jul 2024 16:01:40 +0100 Subject: [PATCH 08/11] Update mlregressor.md --- docs/mlregressor.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/mlregressor.md b/docs/mlregressor.md index d17548a7..620c564e 100644 --- a/docs/mlregressor.md +++ b/docs/mlregressor.md @@ -1,18 +1,18 @@ # The machine learning regressor -Starting with v0.9.0, a new framework is proposed within EMHASS. It provides a machine learning module to predict values from a csv file using different regression models. +Starting with v0.9.0, a new framework is proposed within EMHASS. It provides a machine learning module to predict values from a CSV file using different regression models. This API provides two main methods: -- **fit**: To train a model with the passed data. This method is exposed with the `regressor-model-fit` end point. 
+- **fit**: To train a model with the passed data. This method is exposed with the `regressor-model-fit` endpoint. -- **predict**: To obtain a prediction from a pre-trained model. This method is exposed with the `regressor-model-predict` end point. +- **predict**: To obtain a prediction from a pre-trained model. This method is exposed with the `regressor-model-predict` endpoint. ## A basic model fit To train a model use the `regressor-model-fit` end point. -Some paramters can be optionally defined at runtime: +Some parameters can be optionally defined at runtime: - `csv_file`: The name of the csv file containing your data. @@ -20,9 +20,9 @@ Some paramters can be optionally defined at runtime: - `target`: The target, the value that has to be predicted. -- `model_type`: Define the name of the model regressor that this will be used for. For example: `heating_hours_degreeday`. This should be an unique name if you are using multiple custom regressor models. +- `model_type`: Define the name of the model regressor that this will be used for. For example: `heating_hours_degreeday`. This should be a unique name if you are using multiple custom regressor models. -- `regression_model`: The regression model that will be used. For now only this options are possible: `LinearRegression`, `RidgeRegression`, `LassoRegression`, `RandomForestRegression`, `GradientBoostingRegression` and `AdaBoostRegression`. +- `regression_model`: The regression model that will be used. For now, only these options are possible: `LinearRegression`, `RidgeRegression`, `LassoRegression`, `RandomForestRegression`, `GradientBoostingRegression` and `AdaBoostRegression`. - `timestamp`: If defined, the column key that has to be used for timestamp. @@ -78,7 +78,7 @@ After fitting the model the following information is logged by EMHASS: ## The predict method -To obtain a prediction using a previously trained model use the `regressor-model-predict` end point. 
+To obtain a prediction using a previously trained model use the `regressor-model-predict` endpoint. The list of parameters needed to set the data publish task is: @@ -144,24 +144,24 @@ The predict method will publish the result to a Home Assistant sensor. ## Storing CSV files ### Standalone container - how to mount a .csv files in data_path folder -If running EMHASS as Standalone container, you will need to volume mount a folder to be the `data_path`, or mount a single .csv file inside `data_path` +If running EMHASS as a standalone container, you will need to volume mount a folder to be the `data_path`, or mount a single .csv file inside `data_path` Example of mounting a folder as data_path *(.csv files stored inside)* ```bash docker run -it --restart always -p 5000:5000 -e LOCAL_COSTFUN="profit" -v $(pwd)/data:/app/data -v $(pwd)/config_emhass.yaml:/app/config_emhass.yaml -v $(pwd)/secrets_emhass.yaml:/app/secrets_emhass.yaml --name DockerEMHASS ``` -Example of mounting a single csv file +Example of mounting a single CSV file ```bash docker run -it --restart always -p 5000:5000 -e LOCAL_COSTFUN="profit" -v $(pwd)/data/heating_prediction.csv:/app/data/heating_prediction.csv -v $(pwd)/config_emhass.yaml:/app/config_emhass.yaml -v $(pwd)/secrets_emhass.yaml:/app/secrets_emhass.yaml --name DockerEMHASS ``` -### Add-on - How to store data in a csv file from Home Assistant +### Add-on - How to store data in a CSV file from Home Assistant #### Change data_path -If running EMHASS-Add-On, you will likley need to change the `data_path` to a folder your Home Assistant can access. +If running EMHASS-Add-On, you will likely need to change the `data_path` to a folder your Home Assistant can access. To do this, set the `data_path` to `/share/` in the addon *Configuration* page. 
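Before storing data, it can help to see the expected file shape. The following Python sketch shows a minimal `data_path` CSV and how its columns map onto the `features` and `target` parameters of the fit call; the column names here are hypothetical and should be replaced by your own:

```python
import csv
import io

# A hypothetical data_path CSV -- substitute your own columns and file name.
sample = """timestamp,degreeday,solar,hours
2023-11-01,10.2,1.5,8.1
2023-11-02,12.8,0.9,9.4
2023-11-03,7.5,2.3,6.0
"""
rows = list(csv.DictReader(io.StringIO(sample)))
features = ["degreeday", "solar"]  # would be passed as `features`
target = "hours"                   # would be passed as `target`
X = [[float(r[c]) for c in features] for r in rows]
y = [float(r[target]) for r in rows]
print(X[0], y[0])  # [10.2, 1.5] 8.1
```

The `timestamp` column corresponds to the optional `timestamp` runtime parameter described above.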
-#### Store sensor data to csv +#### Store sensor data to CSV Notify to a file ```yaml From c6001765667b2b979b7acf0531d51a7afa9449b4 Mon Sep 17 00:00:00 2001 From: James McMahon Date: Tue, 30 Jul 2024 16:05:11 +0100 Subject: [PATCH 09/11] Update study_case.md --- docs/study_case.md | 36 ++++++++++++++++++------------------ 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/docs/study_case.md b/docs/study_case.md index 78d6df24..26de4f78 100644 --- a/docs/study_case.md +++ b/docs/study_case.md @@ -4,15 +4,15 @@ In this section example configurations are presented as study cases using real d ## First test system: a simple system with no PV and two deferrable loads -In this example we will consider a simple system with no PV installation and just two deferrable loads that we want to optimize their schedule. +In this example, we will consider a simple system with no PV installation and just two deferrable loads that we want to optimize their schedule. -For this the following parameters can be added to the `secrets.yaml` file: `solar_forecast_kwp: 0`. And also we will set the PV forecast method to `method='solar.forecast'`. This is a simple way to just set a vector with zero values on the PV forecast power, emulating the case where there is no PV installation. The other values on the configuration file are set to their default values. +For this, the following parameters can be added to the `secrets.yaml` file: `solar_forecast_kwp: 0`. Also, we will set the PV forecast method to `method='solar.forecast'`. This is a simple way to just set a vector with zero values on the PV forecast power, emulating the case where there is no PV installation. The other values on the configuration file are set to their default values. ### Day-ahead optimization -Let's performa a day-ahead optimization task on this simple system. We want to schedule our two deferrable loads. +Let's perform a day-ahead optimization task on this simple system. 
We want to schedule our two deferrable loads. -For this we use the following command (example using the legacy EMHASS Python module command line): +For this, we use the following command (for example using the legacy EMHASS Python module command line): ``` emhass --action 'dayahead-optim' --config '/home/user/emhass/config_emhass.yaml' --costfun 'profit' ``` @@ -25,19 +25,19 @@ Finally, the optimization results are: ![](./images/optim_results_defLoads_dayaheadOptim.png) -For this system the total value of the obtained cost function is -5.38 EUR. +For this system, the total value of the obtained cost function is -5.38 EUR. ## A second test system: a 5kW PV installation and two deferrable loads Let's add a 5 kWp solar production with two deferrable loads. No battery is considered for now. The configuration used is the default configuration proposed with EMHASS. -We will first consider a perfect optimization task, to obtain the optimization results with perfectly know PV production and load power values for the last week. +We will first consider a perfect optimization task, to obtain the optimization results with perfectly known PV production and load power values for the last week. ### Perfect optimization Let's perform a 7-day historical data optimization. -For this we use the following command (example using the legacy EMHASS Python module command line): +For this, we use the following command (for example using the legacy EMHASS Python module command line): ``` emhass --action 'perfect-optim' --config '/home/user/emhass/config_emhass.yaml' --costfun 'profit' ``` @@ -58,19 +58,19 @@ For this 7-day period, the total value of the cost function was -26.23 EUR. ### Day-ahead optimization -As with the simple system we will now perform a day-ahead optimization task. We use again the `dayahead-optim` action or end point. +As with the simple system, we will now perform a day-ahead optimization task. We use again the `dayahead-optim` action or endpoint. 
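When EMHASS runs as a web server (add-on or container) instead of the legacy command line, the same task is triggered over HTTP. A Python sketch of the equivalent request, built here without sending it since it assumes a live EMHASS instance on the default host and port:

```python
import json
import urllib.request

# Build (but do not send) the HTTP request equivalent to the legacy command
# above. The host and port are the EMHASS defaults; adjust to your setup.
req = urllib.request.Request(
    "http://localhost:5000/action/dayahead-optim",
    data=json.dumps({}).encode(),  # empty body: use the configuration values
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would launch the optimization on a live instance.
```

This is the same call performed by the `curl` shell commands shown later in this section.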
The optimization results are: ![](./images/optim_results_PV_defLoads_dayaheadOptim.png) -For this system the total value of the obtained cost function is -1.56 EUR. We can note the important improvement on the cost function value whenn adding a PV installation. +For this system, the total value of the obtained cost function is -1.56 EUR. We can note the important improvement in the cost function value when adding a PV installation. ## A third test system: a 5kW PV installation, a 5kWh battery and two deferrable loads -Now we will consider a complet system with PV and added batteries. To add the battery we will set `set_use_battery: true` in the `optim_conf` section of the `config_emhass.yaml` file. +Now we will consider a complete system with PV and added batteries. To add the battery we will set `set_use_battery: true` in the `optim_conf` section of the `config_emhass.yaml` file. -In this case we want to schedule our deferrable loads but also the battery charge/discharge. We use again the `dayahead-optim` action or end point. +In this case, we want to schedule our deferrable loads but also the battery charge/discharge. We use again the `dayahead-optim` action or endpoint. The optimization results are: @@ -80,13 +80,13 @@ The battery state of charge plot is shown below: ![](./images/optim_results_PV_Batt_defLoads_dayaheadOptim_SOC.png) -For this system the total value of the obtained cost function is -1.23 EUR, a substantial improvement when adding a battery. +For this system, the total value of the obtained cost function is -1.23 EUR, a substantial improvement when adding a battery. ## Configuration example to pass data at runtime -As we showed in the forecast module section, we can pass our own forecast data using lists of values passed at runtime using templates. However, it is possible to also pass other data during runtime in order to automate the energy management. 
+As we showed in the forecast module section, we can pass our own forecast data using lists of values passed at runtime using templates. However, it is possible to also pass other data during runtime to automate energy management. -For example, let's suppose that for the default configuration with two deferrable loads we want to correlate and control them to the outside temperature. This will be used to build a list of the total number of hours for each deferrable load (`def_total_hours`). In this example the first deferrable load is a water heater and the second is the pool pump. +For example, let's suppose that for the default configuration with two deferrable loads, we want to correlate and control them to the outside temperature. This will be used to build a list of the total number of hours for each deferrable load (`def_total_hours`). In this example, the first deferrable load is a water heater and the second is the pool pump. We will begin by defining a temperature sensor on a 12 hours sliding window using the filter platform for the outside temperature: ``` @@ -116,15 +116,15 @@ Then we will use a template sensor to build our list of the total number of hour {{ [3, 12] | list }} {% endif %} ``` -The values for the total number of operating hours were tuned by trial and error throughout a whole year. These values work fine for a 3000W water heater (the first value of the list) and a 750W pool pump (the second value in the list). +The values for the total number of operating hours were tuned by trial and error throughout a whole year. These values work fine for a 3000W water heater (the first value in the list) and a 750W pool pump (the second value in the list). 
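The template logic above can also be prototyped in plain Python before committing it to Home Assistant. In this sketch, only the hot-weather value `[3, 12]` comes from the template shown; the temperature thresholds and the other hour counts are hypothetical and must be tuned just like the originals:

```python
def def_total_hours(avg_outdoor_temp):
    """Return [water_heater_hours, pool_pump_hours] from the 12 h average
    outdoor temperature. The thresholds below are hypothetical examples."""
    if avg_outdoor_temp < 10.0:
        return [6, 2]   # cold weather: more water heating, little filtering
    elif avg_outdoor_temp < 20.0:
        return [4, 6]   # mild weather
    else:
        return [3, 12]  # hot weather: the values used in the template above
```

The returned list plays the role of `def_total_hours` in the runtime payload sent to the day-ahead optimization.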
-Finally my two shell commands for EMHASS will look like:
+Finally, my two shell commands for EMHASS will look like this:
```
shell_command:
  dayahead_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"def_total_hours\":{{states('sensor.list_operating_hours_of_each_deferrable_load')}}}' http://localhost:5000/action/dayahead-optim"
  publish_data: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data"
```
-The dedicated automations for these shell commands can be for example:
+The dedicated automations for these shell commands can be, for example:
```
- alias: EMHASS day-ahead optimization
  trigger:
@@ -162,6 +162,6 @@ And as a bonus, an automation can be set to relaunch the optimization task autom

The real implementation of EMHASS and its efficiency depends on the quality of the forecasted PV power production and the house load consumption.

-Here is an extract of the PV power production forecast with the default PV forecast method from EMHASS: a web scarpping of the clearoutside page based on the defined lat/lon location of the system. These are the forecast results of the GFS model compared with the real PV produced data for a 4 day period.
+Here is an extract of the PV power production forecast with the default PV forecast method from EMHASS: a web scraping of the clearoutside page based on the defined lat/lon location of the system. These are the forecast results of the GFS model compared with the real PV-produced data for a 4-day period.
![](./images/forecasted_PV_data.png) From 500ed115f72fcf789c0f70dffe7fd68d71837c34 Mon Sep 17 00:00:00 2001 From: James McMahon Date: Tue, 30 Jul 2024 16:06:51 +0100 Subject: [PATCH 10/11] Update thermal_model.md --- docs/thermal_model.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/thermal_model.md b/docs/thermal_model.md index 157aeaa9..af6030ed 100644 --- a/docs/thermal_model.md +++ b/docs/thermal_model.md @@ -1,8 +1,8 @@ # Deferrable load thermal model EMHASS supports defining a deferrable load as a thermal model. -This is useful to control thermal equipement as: heaters, air conditioners, etc. -The advantage of using this approach is that you will be able to define your desired room temperature jsut as you will do with your real equipmenet thermostat. +This is useful to control thermal equipment such as heaters, heat pumps, air conditioners, etc. +The advantage of using this approach is that you will be able to define your desired room temperature just as you will do with your real equipment thermostat. Then EMHASS will deliver the operating schedule to maintain that desired temperature while minimizing the energy bill and taking into account the forecasted outdoor temperature. A big thanks to @werdnum for proposing this model and the initial code for implementing this. @@ -23,7 +23,7 @@ In this model we can see two main configuration parameters: These parameters are defined according to the thermal characteristics of the building/house. It was reported by @werdnum, that values of $\alpha_h=5.0$ and $\gamma_c=0.1$ were reasonable in his case. -Of course these parameters should be adapted to each use case. This can be done with with history values of the deferrable load operation and the differents temperatures (indoor/outdoor). +Of course, these parameters should be adapted to each use case. This can be done with historical values of the deferrable load operation and the different temperatures (indoor/outdoor). 
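To build intuition for how the two parameters $\alpha_h$ and $\gamma_c$ interact, here is one plausible discrete-time sketch of such a thermal model. This is an illustration only, not necessarily the exact EMHASS formulation: each step, the indoor temperature rises with the applied heating power (scaled by a heating-rate parameter) and decays toward the outdoor temperature (at a cooling-constant rate).

```python
def next_indoor_temp(t_in, t_out, p_heat, p_nominal,
                     alpha_h=5.0, gamma_c=0.1, dt=0.5):
    """One step of a sketched deferrable-load thermal model (illustrative form,
    not necessarily the exact EMHASS equation).
    t_in/t_out: indoor/outdoor temperatures (deg C)
    p_heat: heating power applied this step (W), p_nominal: nominal power (W)
    dt: time step in hours (0.5 h for the default 30-minute freq)."""
    heating = alpha_h * dt * (p_heat / p_nominal)   # temperature gain from heating
    cooling = gamma_c * dt * (t_in - t_out)         # decay toward outdoor temperature
    return t_in + heating - cooling

# With the heater off, the room drifts toward the outdoor temperature;
# at nominal power, it warms up.
print(next_indoor_temp(20.0, 10.0, 0.0, 1000.0))     # → 19.5
print(next_indoor_temp(20.0, 10.0, 1000.0, 1000.0))  # → 22.0
```

The optimizer's job is then to choose the on/off schedule of `p_heat` over the horizon so the simulated temperature tracks the desired temperatures at minimum cost.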
The following diagram tries to represent an example behavior of this model:

@@ -36,7 +36,7 @@ To implement this model we need to provide a configuration for the discussed par

We will control this by using data passed at runtime. The first step will be to define a new entry `def_load_config`, which will be used as a dictionary to store any needed special configuration for each deferrable load.

-For example if we have just **two** deferrable loads and the **second** load is a **thermal load** then we will define `def_load_config` as for example:
+For example, if we have just **two** deferrable loads and the **second** load is a **thermal load**, then we will define `def_load_config` as:
```
'def_load_config': {
    {},
@@ -55,7 +55,7 @@ Here the `desired_temperatures` is a list of float values for each time step.

Now we also need to define the other needed input, the `outdoor_temperature_forecast`, which is a list of float values. The list of floats for `desired_temperatures` and the list in `outdoor_temperature_forecast` should have proper lengths; if using MPC, the length should be at least equal to the prediction horizon. Here is an example modified from a working example provided by @werdnum to pass all the needed data at runtime.
-This example is given for the following configuration: just one deferrable load (a thermal load), no PV, no battery, an MPC application, pre-defined heating intervals times.
+This example is given for the following configuration: just one deferrable load (a thermal load), no PV, no battery, an MPC application, and pre-defined heating interval times.
``` rest_command: From e246a8710c200084b3263f41bc5717ba115b0b57 Mon Sep 17 00:00:00 2001 From: James McMahon Date: Tue, 30 Jul 2024 16:16:40 +0100 Subject: [PATCH 11/11] Update README.md --- README.md | 114 +++++++++++++++++++++++++++--------------------------- 1 file changed, 57 insertions(+), 57 deletions(-) diff --git a/README.md b/README.md index ee3bb114..d5a1a830 100644 --- a/README.md +++ b/README.md @@ -73,7 +73,7 @@ Home Assistant provides a platform for the automation of household devices based One of the main benefits of integrating EMHASS and Home Assistant is the ability to customize and tailor the energy management solution to the specific needs and preferences of each household. With EMHASS, households can define their energy management objectives and constraints, such as maximizing self-consumption or minimizing energy costs, and the system will generate an optimization plan accordingly. Home Assistant provides a platform for the automation of devices based on the optimization plan, allowing households to create a fully customized and optimized energy management solution. -Overall, the integration of EMHASS and Home Assistant offers a comprehensive energy management solution that provides significant cost savings, increased energy efficiency, and greater sustainability for households. By leveraging advanced energy management features and automation capabilities, households can achieve their energy management objectives while enjoying the benefits of a more efficient and sustainable energy usage, including optimized EV charging schedules. +Overall, the integration of EMHASS and Home Assistant offers a comprehensive energy management solution that provides significant cost savings, increased energy efficiency, and greater sustainability for households. 
By leveraging advanced energy management features and automation capabilities, households can achieve their energy management objectives while enjoying the benefits of more efficient and sustainable energy usage, including optimized EV charging schedules.

The package flow can be graphically represented as follows:

@@ -81,21 +81,21 @@ The package flow can be graphically represented as follows:

## Configuration and Installation

-The package is meant to be highly configurable with an object oriented modular approach and a main configuration file defined by the user.
-EMHASS was designed to be integrated with Home Assistant, hence it's name.
+The package is meant to be highly configurable with an object-oriented modular approach and a main configuration file defined by the user.
+EMHASS was designed to be integrated with Home Assistant, hence its name.
Installation instructions and example Home Assistant automation configurations are given below.

You must follow these steps to make EMHASS work properly:

-1) Define all the parameters in the configuration file according to your installation method. For the add-on method you need to use the configuration pane directly on the add-on page. For other installation methods it should be needed to set the variables using the `config_emhass.yaml` file. See below for details on the installation methods. See the description for each parameter in the **configuration** section. If you have a PV installation then this dedicated webapp can be useful to find your inverter and solar panel models: [https://emhass-pvlib-database.streamlit.app/](https://emhass-pvlib-database.streamlit.app/)
+1) Define all the parameters in the configuration file according to your installation method. For the add-on method, you need to use the configuration pane directly on the add-on page. For other installation methods, you will need to set the variables using the `config_emhass.yaml` file. See below for details on the installation methods.
See the description for each parameter in the **configuration** section. If you have a PV installation, then this dedicated web app can be useful for finding your inverter and solar panel models: [https://emhass-pvlib-database.streamlit.app/](https://emhass-pvlib-database.streamlit.app/)

-2) You most notably will need to define the main data entering EMHASS. This will be the `sensor.power_photovoltaics` for the name of the your hass variable containing the PV produced power and the variable `sensor.power_load_no_var_loads` for the load power of your household excluding the power of the deferrable loads that you want to optimize.
+2) You will most notably need to define the main data entering EMHASS. This will be the `sensor.power_photovoltaics` for the name of your hass variable containing the PV produced power and the variable `sensor.power_load_no_var_loads` for the load power of your household excluding the power of the deferrable loads that you want to optimize.

-3) Launch the actual optimization and check the results. This can be done manually using the buttons in the web ui or with a `curl` command like this: `curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/dayahead-optim`.
+3) Launch the actual optimization and check the results. This can be done manually using the buttons in the web UI or with a `curl` command like this: `curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/dayahead-optim`.

-4) If you’re satisfied with the optimization results then you can set the optimization and data publish task commands in an automation. You can read more about this on the **usage** section below.
+4) If you’re satisfied with the optimization results, then you can set the optimization and data publish task commands in an automation. You can read more about this in the **usage** section below.

-5) The final step is to link the deferrable loads variables to real switchs on your installation.
An example code for this using automations and the shell command integration is presented below in the **usage** section.
+5) The final step is to link the deferrable load variables to real switches on your installation. An example code for this using automations and the shell command integration is presented below in the **usage** section.

A more detailed workflow is given below:

@@ -103,7 +103,7 @@

### Method 1) The EMHASS add-on for Home Assistant OS and supervised users

-For Home Assistant OS and HA Supervised users, I've developed an add-on that will help you use EMHASS. The add-on is more user friendly as the configuration can be modified directly in the add-on options pane and as with the standalone docker it exposes a web ui that can be used to inspect the optimization results and manually trigger a new optimization.
+For Home Assistant OS and HA Supervised users, I've developed an add-on that will help you use EMHASS. The add-on is more user-friendly, as the configuration can be modified directly in the add-on options pane and, as with the standalone docker, it exposes a web UI that can be used to inspect the optimization results and manually trigger a new optimization.
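The `curl` commands used in the workflow steps above can also be scripted. Here is a minimal Python sketch that builds the equivalent POST request for an EMHASS action endpoint; it assumes a default instance listening on `localhost:5000`, as in the docs' examples.

```python
import json
import urllib.request

def emhass_action(action, payload=None, host="http://localhost:5000"):
    """Build the POST request for an EMHASS action endpoint.
    Actions mirror the documented endpoints, e.g. 'dayahead-optim' or 'publish-data'."""
    data = json.dumps(payload or {}).encode()
    return urllib.request.Request(
        f"{host}/action/{action}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = emhass_action("dayahead-optim")
# urllib.request.urlopen(req)  # uncomment to send against a running EMHASS instance
print(req.full_url)  # → http://localhost:5000/action/dayahead-optim
```

This is just the scripted counterpart of `curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/dayahead-optim`.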
You can find the add-on with the installation instructions here: [https://github.com/davidusb-geek/emhass-add-on](https://github.com/davidusb-geek/emhass-add-on) @@ -126,11 +126,11 @@ Then load the image in the .tar file: ```bash docker load -i .tar ``` -Finally check your image tag with `docker images` and launch the docker itself: +Finally, check your image tag with `docker images` and launch the docker itself: ```bash docker run -it --restart always -p 5000:5000 -e LOCAL_COSTFUN="profit" -v $(pwd)/config_emhass.yaml:/app/config_emhass.yaml -v $(pwd)/secrets_emhass.yaml:/app/secrets_emhass.yaml --name DockerEMHASS ``` - - If you wish to keep a local, persistent copy of the EMHASS generated data, create a local folder on your device, then mount said folder inside the container. + - If you wish to keep a local, persistent copy of the EMHASS-generated data, create a local folder on your device, then mount said folder inside the container. ```bash mkdir -p $(pwd)/data #linux: create data folder on local device @@ -170,9 +170,9 @@ python3 -m pip install --upgrade emhass ### Method 1) Add-on and docker standalone -If using the add-on or the standalone docker installation, it exposes a simple webserver on port 5000. You can access it directly using your brower, ex: http://localhost:5000. +If using the add-on or the standalone docker installation, it exposes a simple webserver on port 5000. You can access it directly using your browser, ex: http://localhost:5000. -With this web server you can perform RESTful POST commands on multiple ENDPOINTS with prefix `action/*`: +With this web server, you can perform RESTful POST commands on multiple ENDPOINTS with the prefix `action/*`: - A POST call to `action/perfect-optim` to perform a perfect optimization task on the historical data. - A POST call to `action/dayahead-optim` to perform a day-ahead optimization task of your home energy. 
@@ -180,7 +180,7 @@ With this web server you can perform RESTful POST commands on multiple ENDPOINTS - A POST call to `action/publish-data` to publish the optimization results data for the current timestamp. - A POST call to `action/forecast-model-fit` to train a machine learning forecaster model with the passed data (see the [dedicated section](https://emhass.readthedocs.io/en/latest/mlforecaster.html) for more help). - A POST call to `action/forecast-model-predict` to obtain a forecast from a pre-trained machine learning forecaster model (see the [dedicated section](https://emhass.readthedocs.io/en/latest/mlforecaster.html) for more help). -- A POST call to `action/forecast-model-tune` to optimize the machine learning forecaster models hyperparameters using bayesian optimization (see the [dedicated section](https://emhass.readthedocs.io/en/latest/mlforecaster.html) for more help). +- A POST call to `action/forecast-model-tune` to optimize the machine learning forecaster models hyperparameters using Bayesian optimization (see the [dedicated section](https://emhass.readthedocs.io/en/latest/mlforecaster.html) for more help). A `curl` command can then be used to launch an optimization task like this: `curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/dayahead-optim`. @@ -188,8 +188,8 @@ A `curl` command can then be used to launch an optimization task like this: `cur To run a command simply use the `emhass` CLI command followed by the needed arguments. The available arguments are: -- `--action`: That is used to set the desired action, options are: `perfect-optim`, `dayahead-optim`, `naive-mpc-optim`, `publish-data`, `forecast-model-fit`, `forecast-model-predict` and `forecast-model-tune`. 
-- `--config`: Define path to the config.yaml file (including the yaml file itself) +- `--action`: This is used to set the desired action, options are: `perfect-optim`, `dayahead-optim`, `naive-mpc-optim`, `publish-data`, `forecast-model-fit`, `forecast-model-predict` and `forecast-model-tune`. +- `--config`: Define the path to the config.yaml file (including the yaml file itself) - `--costfun`: Define the type of cost function, this is optional and the options are: `profit` (default), `cost`, `self-consumption` - `--log2file`: Define if we should log to a file or not, this is optional and the options are: `True` or `False` (default) - `--params`: Configuration as JSON. @@ -205,9 +205,9 @@ Before running any valuable command you need to modify the `config_emhass.yaml` ## Home Assistant integration -To integrate with home assistant we will need to define some shell commands in the `configuration.yaml` file and some basic automations in the `automations.yaml` file. -In the next few paragraphs we are going to consider the `dayahead-optim` optimization strategy, which is also the first that was implemented, and we will also cover how to publish the results. -Then additional optimization strategies were developed, that can be used in combination with/replace the `dayahead-optim` strategy, such as MPC, or to expland the funcitonalities such as the Machine Learning method to predict your hosehold consumption. Each of them has some specificities and features and will be considered in dedicated sections. +To integrate with Home Assistant we will need to define some shell commands in the `configuration.yaml` file and some basic automations in the `automations.yaml` file. +In the next few paragraphs, we are going to consider the `dayahead-optim` optimization strategy, which is also the first that was implemented, and we will also cover how to publish the results. 
+Then additional optimization strategies were developed that can be used in combination with, or as a replacement for, the `dayahead-optim` strategy, such as MPC, or to expand the functionalities, such as the Machine Learning method to predict your household consumption. Each of them has some specificities and features and will be considered in dedicated sections.

### Dayahead Optimization - Method 1) Add-on and docker standalone

@@ -263,9 +263,9 @@ In `automations.yaml`:
    action:
    - service: shell_command.publish_data
```
-In these automation's the day-ahead optimization is performed once a day, everyday at 5:30am, and the data *(output of automation)* is published every 5 minutes.
+In these automations, the day-ahead optimization is performed once a day, every day at 5:30am, and the data *(output of automation)* is published every 5 minutes.

-#### Option 2, EMHASS automate publish
+#### Option 2, EMHASS automated publish

In `automations.yaml`:
```yaml
@@ -282,15 +282,15 @@ in configuration page/`config_emhass.yaml`
"method_ts_round": "first"
"continual_publish": true
```
-In this automation the day-ahead optimization is performed once a day, everyday at 5:30am.
-If the `freq` parameter is set to `30` *(default)* in the configuration, the results of the day-ahead optimization will generate 48 values *(for each entity)*, a value for each 30 minutes in a day *(i.e. 24 hrs x 2)*.
+In this automation, the day-ahead optimization is performed once a day, every day at 5:30am.
+If the `freq` parameter is set to `30` *(default)* in the configuration, the results of the day-ahead optimization will generate 48 values *(for each entity)*, a value for every 30 minutes in a day *(i.e. 24 hrs x 2)*.

-Setting the parameter `continual_publish` to `true` in the configuration page, will allow EMHASS to store the optimization results as entities/sensors into seperate json files.
`continual_publish` will periodically (every `freq` amount of minutes) run a publish, and publish the optimization results of each generated entities/sensors to Home Assistant. The current state of the sensor/entity being updated every time publish runs, selecting one of the 48 stored values, by comparing the stored values timestamps, the current timestamp and [`"method_ts_round": "first"`](#the-publish-data-specificities) to select the optimal stored value for the current state.
+Setting the parameter `continual_publish` to `true` in the configuration page will allow EMHASS to store the optimization results as entities/sensors in separate json files. `continual_publish` will then periodically (every `freq` minutes) run a publish, pushing the optimization results of each generated entity/sensor to Home Assistant. The current state of the sensor/entity is updated every time publish runs, selecting one of the 48 stored values by comparing the stored values' timestamps with the current timestamp and using [`"method_ts_round": "first"`](#the-publish-data-specificities) to select the optimal stored value for the current state.

-option 1 and 2 are very similar, however option 2 (`continual_publish`) will require a cpu thread to constantly be run inside of EMHASS, lowering efficiency. The reason why you may pick one over the other is explained in more detail bellow in [continual_publish](#continual_publish-emhass-automation).
+Options 1 and 2 are very similar; however, option 2 (`continual_publish`) will require a CPU thread to run constantly inside EMHASS, lowering efficiency. The reason why you may pick one over the other is explained in more detail below in [continual_publish](#continual_publish-emhass-automation).

-Lastly, we can link a EMHASS published entities/sensor's current state to a Home Assistant entity on/off switch, controlling a desired controllable load.
-For example, imagine that I want to control my water heater.
+Lastly, we can link an EMHASS published entity/sensor's current state to a Home Assistant entity on/off switch, controlling a desired controllable load. For example, imagine that I want to control my water heater.
I can use a published `deferrable` EMHASS entity to control my water heater's desired behavior. In this case, we could use an automation like the one below to turn the desired water heater on and off:

on:
```yaml
@@ -322,13 +322,13 @@ automation:
    - service: homeassistant.turn_off
      entity_id: switch.water_heater_switch
```
-The result of these automation's will turn on and off the Home Assistant entity `switch.water_heater_switch` using the current state from the EMHASS entity `sensor.p_deferrable0`. `sensor.p_deferrable0` being the entity generated from the EMHASS day-ahead optimization and published by examples above. The `sensor.p_deferrable0` entity current state being updated every 30 minutes (or `freq` minutes) via a automated publish option 1 or 2. *(selecting one of the 48 stored data values)*
+These automations will turn on and off the Home Assistant entity `switch.water_heater_switch` using the current state from the EMHASS entity `sensor.p_deferrable0`, which is the entity generated from the EMHASS day-ahead optimization and published by the examples above. The `sensor.p_deferrable0` entity's current state is updated every 30 minutes (or `freq` minutes) via automated publish option 1 or 2 *(selecting one of the 48 stored data values)*.

## The publish-data specificities

-`publish-data` (which is either run manually, or automatically via `continual_publish` or Home Assistant automation), will push the optimization results to Home Assistant for each deferrable load defined in the configuration.
For example if you have defined two deferrable loads, then the command will publish `sensor.p_deferrable0` and `sensor.p_deferrable1` to Home Assistant. When the `dayahead-optim` is launched, after the optimization, either entity json files or a csv file will be saved on disk. The `publish-data` command will load the latest csv/json files to look for the closest timestamp that match the current time using the `datetime.now()` method in Python. This means that if EMHASS is configured for 30min time step optimizations, the csv/json will be saved with timestamps 00:00, 00:30, 01:00, 01:30, ... and so on. If the current time is 00:05, and parameter `method_ts_round` is set to `nearest` in the configuration, then the closest timestamp of the optimization results that will be published is 00:00.
+`publish-data` (which is either run manually or automatically via `continual_publish` or Home Assistant automation) will push the optimization results to Home Assistant for each deferrable load defined in the configuration. For example, if you have defined two deferrable loads, then the command will publish `sensor.p_deferrable0` and `sensor.p_deferrable1` to Home Assistant. When the `dayahead-optim` is launched, after the optimization, either entity json files or a csv file will be saved on disk. The `publish-data` command will load the latest csv/json files to look for the closest timestamp that matches the current time using the `datetime.now()` method in Python. This means that if EMHASS is configured for 30-minute time step optimizations, the csv/json will be saved with timestamps 00:00, 00:30, 01:00, 01:30, ... and so on. If the current time is 00:05, and parameter `method_ts_round` is set to `nearest` in the configuration, then the closest timestamp of the optimization results that will be published is 00:00.
If the current time is 00:25, then the closest timestamp of the optimization results that will be published is 00:30. -The `publish-data` command will also publish PV and load forecast data on sensors `p_pv_forecast` and `p_load_forecast`. If using a battery, then the battery optimized power and the SOC will be published on sensors `p_batt_forecast` and `soc_batt_forecast`. On these sensors the future values are passed as nested attributes. +The `publish-data` command will also publish PV and load forecast data on sensors `p_pv_forecast` and `p_load_forecast`. If using a battery, then the battery-optimized power and the SOC will be published on sensors `p_batt_forecast` and `soc_batt_forecast`. On these sensors, the future values are passed as nested attributes. If you run publish manually *(or via a Home Assistant Automation)*, it is possible to provide custom sensor names for all the data exported by the `publish-data` command. For this, when using the `publish-data` endpoint we can just add some runtime parameters as dictionaries like this: ```yaml @@ -343,7 +343,7 @@ If you provide the `custom_deferrable_forecast_id` then the passed data should b shell_command: publish_data: "curl -i -H \"Content-Type:application/json\" -X POST -d '{\"custom_deferrable_forecast_id\": [{\"entity_id\": \"sensor.p_deferrable0\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Deferrable Load 0\"},{\"entity_id\": \"sensor.p_deferrable1\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Deferrable Load 1\"}]}' http://localhost:5000/action/publish-data" ``` -And you should be careful that the list of dictionaries has the correct length, which is the number of defined deferrable loads. +You should be careful that the list of dictionaries has the correct length, which is the number of defined deferrable loads. 
### Computed variables and published data @@ -354,7 +354,7 @@ Below you can find a list of the variables resulting from EMHASS computation, sh | P_PV | Forecasted power generation from your solar panels (Watts). This helps you predict how much solar energy you will produce during the forecast period. | sensor.p_pv_forecast | | P_Load | Forecasted household power consumption (Watts). This gives you an idea of how much energy your appliances are expected to use. | sensor.p_load_forecast | | P_deferrableX
[X = 0, 1, 2, ...] | Forecasted power consumption of deferrable loads (Watts). Deferrable loads are appliances that can be managed by EMHASS. EMHASS helps you optimise energy usage by prioritising solar self-consumption and minimizing reliance on the grid or by taking advantage of supply and feed-in tariff volatility. You can have multiple deferrable loads and you use this sensor in HA to control these loads via a smart switch or other IoT means at your disposal. | sensor.p_deferrableX |
-| P_grid_pos | Forecasted power imported from the grid (Watts). This indicates the amount of energy you are expected to draw from the grid when your solar production is insufficient to meet your needs or it is advantagous to consume from the grid. | - |
+| P_grid_pos | Forecasted power imported from the grid (Watts). This indicates the amount of energy you are expected to draw from the grid when your solar production is insufficient to meet your needs or it is advantageous to consume from the grid. | - |
| P_grid_neg | Forecasted power exported to the grid (Watts). This indicates the amount of excess solar energy you are expected to send back to the grid during the forecast period. | - |
| P_batt | Forecasted (dis)charge power load (Watts) for the battery (if installed). If negative, it indicates the battery is charging; if positive, that the battery is discharging. | sensor.p_batt_forecast |
| P_grid | Forecasted net power flow between your home and the grid (Watts). This is calculated as P_grid_pos - P_grid_neg. A positive value indicates net import, while a negative value indicates net export. | sensor.p_grid_forecast |

@@ -368,34 +368,34 @@ Below you can find a list of the variables resulting from EMHASS computation, sh

## Passing your own data

-In EMHASS we have basically 4 forecasts to deal with:
+In EMHASS we have 4 forecasts to deal with:

- PV power production forecast (internally based on the weather forecast and the characteristics of your PV plant). This is given in Watts.
-- Load power forecast: how much power your house will demand on the next 24h. This is given in Watts. +- Load power forecast: how much power your house will demand in the next 24 hours. This is given in Watts. -- Load cost forecast: the price of the energy from the grid on the next 24h. This is given in EUR/kWh. +- Load cost forecast: the price of the energy from the grid in the next 24 hours. This is given in EUR/kWh. -- PV production selling price forecast: at what price are you selling your excess PV production on the next 24h. This is given in EUR/kWh. +- PV production selling price forecast: at what price are you selling your excess PV production in the next 24 hours. This is given in EUR/kWh. -The sensor containing the load data should be specified in parameter `var_load` in the configuration file. As we want to optimize the household energies, when need to forecast the load power consumption. The default method for this is a naive approach using 1-day persistence. The load data variable should not contain the data from the deferrable loads themselves. For example, lets say that you set your deferrable load to be the washing machine. The variable that you should enter in EMHASS will be: `var_load: 'sensor.power_load_no_var_loads'` and `sensor.power_load_no_var_loads = sensor.power_load - sensor.power_washing_machine`. This is supposing that the overall load of your house is contained in variable: `sensor.power_load`. The sensor `sensor.power_load_no_var_loads` can be easily created with a new template sensor in Home Assistant. +The sensor containing the load data should be specified in the parameter `var_load` in the configuration file. As we want to optimize household energy, we need to forecast the load power consumption. The default method for this is a naive approach using 1-day persistence. The load data variable should not contain the data from the deferrable loads themselves. 
For example, let's say that you set your deferrable load to be the washing machine. The variables that you should enter in EMHASS will be: `var_load: 'sensor.power_load_no_var_loads'` and `sensor.power_load_no_var_loads = sensor.power_load - sensor.power_washing_machine`. This is supposing that the overall load of your house is contained in the variable: `sensor.power_load`. The sensor `sensor.power_load_no_var_loads` can be easily created with a new template sensor in Home Assistant.

-If you are implementing a MPC controller, then you should also need to provide some data at the optimization runtime using the key `runtimeparams`.
+If you are implementing an MPC controller, then you will also need to provide some data at the optimization runtime using the key `runtimeparams`.

-The valid values to pass for both forecast data and MPC related data are explained below.
+The valid values to pass for both forecast data and MPC-related data are explained below.

### Alternative publish methods

-Due to the flexibility of EMHASS, multiple different approaches to publishing the optimization results have been created. Select a option that best meets your use case:
+Due to the flexibility of EMHASS, multiple different approaches to publishing the optimization results have been created. Select an option that best meets your use case:

#### publish last optimization *(manual)*

-By default, running an optimization in EMHASS will output the results into the csv file: `data_path/opt_res_latest.csv` *(overriding the existing data on that file)*. We run the publish command to publish the last optimization saved in the `opt_res_latest.csv`:
+By default, running an optimization in EMHASS will output the results into the CSV file: `data_path/opt_res_latest.csv` *(overwriting the existing data in that file)*.
We run the publish command to publish the last optimization saved in the `opt_res_latest.csv`: ```bash # RUN dayahead curl -i -H 'Content-Type:application/json' -X POST -d {} http://localhost:5000/action/dayahead-optim # Then publish the results of dayahead curl -i -H 'Content-Type:application/json' -X POST -d {} http://localhost:5000/action/publish-data ``` -*Note, the published entities from the publish-data action will not automatically update the entities current state (current state being used to check when to turn on and off appliances via Home Assistant automatons). To update the EMHASS entities state, another publish would have to be re-run later when the current time matches the next values timestamp (E.g every 30 minutes). See examples bellow for methods to automate the publish-action.* +*Note, the published entities from the publish-data action will not automatically update the entities' current state (the current state being used to check when to turn appliances on and off via Home Assistant automations). To update the EMHASS entities' state, another publish would have to be re-run later when the current time matches the next value's timestamp (e.g. every 30 minutes). See examples below for methods to automate the publish-action.* #### continual_publish *(EMHASS Automation)* As discussed in [Common for any installation method - option 2](#option-2-emhass-automate-publish), setting `continual_publish` to `true` in the configuration saves the output of the optimization into the `data_path/entities` folder *(a .json file for each sensor/entity)*. A constant loop (in `freq` minutes) will run, observe the .json files in that folder, and publish the saved files periodically (updating the current state of the entity by comparing date.now with the saved data value timestamps). @@ -412,7 +412,7 @@ This will tell continual_publish to loop every 5 minutes based on the freq passe
-*It is recommended to use the 2 other options bellow once you have a more advance understanding of EMHASS and/or Home Assistant.* +*It is recommended to use the two other options below once you have a more advanced understanding of EMHASS and/or Home Assistant.* #### Mixture of continual_publish and manual *(Home Assistant Automation for Publish)* @@ -430,7 +430,7 @@ This example saves the dayahead optimization into `data_path/entities` as .json #### Manual *(Home Assistant Automation for Publish)* -For users who wish to have full control of exactly when they will like to run a publish and have the ability to save multiple different optimizations. The `entity_save` runtime parameter has been created to save the optimization output entities to .json files whilst `continual_publish` is set to `false` in the configuration. Allowing the user to reference the saved .json files manually via a publish: +This option is for users who wish to have full control of exactly when they would like to run a publish, with the ability to save multiple different optimizations. The `entity_save` runtime parameter has been created to save the optimization output entities to .json files whilst `continual_publish` is set to `false` in the configuration, allowing the user to reference the saved .json files manually via a publish: in configuration page/`config_emhass.yaml` : ```json @@ -459,9 +459,9 @@ This action will publish the dayahead (_dh) and MPC (_mpc) optimization results ### Forecast data at runtime -It is possible to provide EMHASS with your own forecast data. For this just add the data as list of values to a data dictionary during the call to `emhass` using the `runtimeparams` option. +It is possible to provide EMHASS with your own forecast data. For this, just add the data as a list of values to a data dictionary during the call to `emhass` using the `runtimeparams` option.
-For example if using the add-on or the standalone docker installation you can pass this data as list of values to the data dictionary during the `curl` POST: +For example, if using the add-on or the standalone Docker installation, you can pass this data as a list of values to the data dictionary during the `curl` POST: ```bash curl -i -H 'Content-Type:application/json' -X POST -d '{"pv_power_forecast":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}' http://localhost:5000/action/dayahead-optim ``` @@ -482,7 +482,7 @@ The possible dictionary keys to pass data are: ### Passing other data at runtime -It is possible to also pass other data during runtime in order to automate the energy management. For example, it could be useful to dynamically update the total number of hours for each deferrable load (`def_total_hours`) using for instance a correlation with the outdoor temperature (useful for water heater for example). +It is also possible to pass other data at runtime to automate energy management. For example, it could be useful to dynamically update the total number of hours for each deferrable load (`def_total_hours`) using, for instance, a correlation with the outdoor temperature (useful for a water heater, for example). Here is the list of the other additional dictionary keys that can be passed at runtime: @@ -492,7 +492,7 @@ Here is the list of the other additional dictionary keys that can be passed at r - `def_total_hours` for the total number of hours that each deferrable load should operate. -- `def_start_timestep` for the timestep as from which each deferrable load is allowed to operate (if you don't want the deferrable load to use the whole optimization timewindow).
+- `def_start_timestep` for the timestep from which each deferrable load is allowed to operate (if you don't want the deferrable load to use the whole optimization timewindow). - `def_end_timestep` for the timestep before which each deferrable load should operate (if you don't want the deferrable load to use the whole optimization timewindow). @@ -512,19 +512,19 @@ Here is the list of the other additional dictionary keys that can be passed at r - `SOCmax` the maximum possible SOC. -- `SOCtarget` for the desired target value of initial and final SOC. +- `SOCtarget` for the desired target value of the initial and final SOC. - `Pd_max` for the maximum battery discharge power. - `Pc_max` for the maximum battery charge power. -- `publish_prefix` use this key to pass a common prefix to all published data. This will add a prefix to the sensor name but also to the forecasts attributes keys within the sensor. +- `publish_prefix` use this key to pass a common prefix to all published data. This will add a prefix to the sensor name but also to the forecast attribute keys within the sensor. ## A naive Model Predictive Controller -A MPC controller was introduced in v0.3.0. This is an informal/naive representation of a MPC controller. This can be used in combination with/as a replacement of the Dayahead Optimization. +An MPC controller was introduced in v0.3.0. This is an informal/naive representation of an MPC controller. This can be used in combination with/as a replacement for the Dayahead Optimization. -A MPC controller performs the following actions: +An MPC controller performs the following actions: - Set the prediction horizon and receding horizon parameters. - Perform an optimization on the prediction horizon. @@ -535,7 +535,7 @@ This is the receding horizon principle. When applying this controller, the following `runtimeparams` should be defined: -- `prediction_horizon` for the MPC prediction horizon. Fix this at at least 5 times the optimization time step.
+- `prediction_horizon` for the MPC prediction horizon. Set this to at least 5 times the optimization time step. - `soc_init` for the initial value of the battery SOC for the current iteration of the MPC. @@ -543,11 +543,11 @@ When applying this controller, the following `runtimeparams` should be defined: - `def_total_hours` for the list of deferrable loads functioning hours. These values can decrease as the day advances to take into account receding horizon daily energy objectives for each deferrable load. -- `def_start_timestep` for the timestep as from which each deferrable load is allowed to operate (if you don't want the deferrable load to use the whole optimization timewindow). If you specify a value of 0 (or negative), the deferrable load will be optimized as from the beginning of the complete prediction horizon window. +- `def_start_timestep` for the timestep from which each deferrable load is allowed to operate (if you don't want the deferrable load to use the whole optimization timewindow). If you specify a value of 0 (or negative), the deferrable load will be optimized from the beginning of the complete prediction horizon window. - `def_end_timestep` for the timestep before which each deferrable load should operate (if you don't want the deferrable load to use the whole optimization timewindow). If you specify a value of 0 (or negative), the deferrable load optimization window will extend up to the end of the prediction horizon window.
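As a sketch of how these MPC and deferrable-load keys combine into one `runtimeparams` payload (the key names and the `naive-mpc-optim` endpoint come from this document; the numeric values and the assumption of two deferrable loads are purely illustrative):

```shell
# Build the payload in a variable so it can be inspected before posting.
# Illustrative values: 10-step horizon, two deferrable loads (2h and 1h),
# start/end timesteps of 0 to use the whole optimization window.
PAYLOAD='{"prediction_horizon":10,"soc_init":0.5,"soc_final":0.6,"def_total_hours":[2,1],"def_start_timestep":[0,0],"def_end_timestep":[0,0]}'
echo "$PAYLOAD"
# Post it to a running EMHASS instance (uncomment to use):
# curl -i -H 'Content-Type:application/json' -X POST -d "$PAYLOAD" http://localhost:5000/action/naive-mpc-optim
```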
-A correct call for a MPC optimization should look like: +A correct call for an MPC optimization should look like this: ```bash curl -i -H 'Content-Type:application/json' -X POST -d '{"pv_power_forecast":[0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93], "prediction_horizon":10, "soc_init":0.5,"soc_final":0.6}' http://192.168.3.159:5000/action/naive-mpc-optim ``` @@ -565,11 +565,11 @@ Check the dedicated section in the documentation here: [https://emhass.readthedo ## Development -Pull request are very much accepted on this project. For development you can find some instructions here [Development](https://emhass.readthedocs.io/en/latest/develop.html). +Pull requests are very welcome on this project. For development, you can find some instructions here: [Development](https://emhass.readthedocs.io/en/latest/develop.html). ## Troubleshooting -Some problems may arise from solver related issues in the Pulp package. It was found that for arm64 architectures (ie. Raspberry Pi4, 64 bits) the default solver is not avaliable. A workaround is to use another solver. The `glpk` solver is an option. +Some problems may arise from solver-related issues in the PuLP package. It was found that for arm64 architectures (i.e. Raspberry Pi 4, 64-bit) the default solver is not available. A workaround is to use another solver. The `glpk` solver is an option. This can be controlled in the configuration file with parameters `lp_solver` and `lp_solver_path`. The options for `lp_solver` are: 'PULP_CBC_CMD', 'GLPK_CMD' and 'COIN_CMD'. If using 'COIN_CMD' as the solver you will need to provide the correct path to this solver in parameter `lp_solver_path`, ex: '/usr/bin/cbc'.
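As a hedged sketch, the corresponding entries in `config_emhass.yaml` could look like the fragment below. The solver names and the cbc path are taken from the text above, but the placement under an `optim_conf` section is an assumption about the file layout, not a confirmed schema:

```yaml
optim_conf:
  # Assumed location for the solver settings; options per the text above:
  # 'PULP_CBC_CMD', 'GLPK_CMD', 'COIN_CMD'
  lp_solver: 'GLPK_CMD'
  # Only consulted when lp_solver is 'COIN_CMD', e.g.:
  lp_solver_path: '/usr/bin/cbc'
```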