
continual_publish service #284

Merged: 10 commits merged into davidusb-geek:master on Jun 1, 2024

Conversation

@GeoDerp
Contributor

GeoDerp commented May 14, 2024

continual_publish

A new boolean parameter, continual_publish, has been added. When enabled, it will:

  • Save entity data of the optimization action results to data_path/entities
  • Automatically re-publish the entities to HA based on freq (checking for current state changes)

This pull request is a "functional" proof of concept. Feel free to comment, review, or contribute commits.
I will likely refine this post and/or the code later.

Operation Example:

  • Set continual_publish to true in the config file
  • Run MPC and dayahead
    # RUN MPC, with freq=1, prefix=mpc_
    curl -i -H 'Content-Type:application/json' -X POST -d '{"freq":1,"publish_prefix":"mpc_","pv_power_forecast":[0,70,141.22,246.18,513.5,753.27,1049.89,1797.93,1697.3,3078.93],"prediction_horizon":10,"soc_init":0.5,"soc_final":0.6}' http://localhost:5000/action/naive-mpc-optim
    # RUN dayahead, with freq=30, prefix=dh_ 
    curl -i -H 'Content-Type:application/json' -X POST -d '{"freq":30,"publish_prefix":"dh_"}' http://localhost:5000/action/dayahead-optim
    This results in:
    • Both the MPC and dayahead entity results saved in data_path/entities
      • Each generated entity has a .json file named after its entity_id (e.g. sensor.dh_p_pv_forecast)
    • The background loop frequency is now set to 1 minute, the shortest freq saved (in this case, from the MPC call)
    • Every minute, the loop will publish the results of both the MPC and dayahead entities (with updated current state) to HA (see the illustrative sketch below)
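
To make the described loop behaviour concrete, here is a minimal, illustrative Python sketch of the idea only; it is not the EMHASS implementation, and the entities directory path, the publish_to_ha helper, and the hard-coded interval are assumptions for the example:

```python
import json
import time
from pathlib import Path

# Assumed location of the saved entity files (data_path/entities in this PR).
ENTITY_PATH = Path("/app/data/entities")


def publish_to_ha(entity_id: str, data: dict) -> None:
    """Hypothetical helper standing in for the actual post to Home Assistant."""
    print(f"(re)publishing {entity_id} with {len(data)} top-level fields")


def continual_publish_loop(interval_minutes: int = 1) -> None:
    """Re-publish every saved entity file on a fixed interval."""
    while True:
        for entity_file in ENTITY_PATH.glob("*.json"):
            # The file name doubles as the entity_id, e.g. sensor.dh_p_pv_forecast.json
            data = json.loads(entity_file.read_text())
            publish_to_ha(entity_file.stem, data)
        time.sleep(interval_minutes * 60)
```

In the actual PR the loop interval is derived from the shortest freq recorded for the saved entities rather than hard-coded, as discussed further down in this thread.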

[Flow diagram: continual_publish drawio]


Notes

  • Continual optimization tasks
  • When running an optimization (such as MPC) frequently, the results of that optimization will overwrite the stored .json files; the updated results are then published to HA on the next loop iteration
  • Running an optimization action and bypassing continual_publish:
    curl -i -H 'Content-Type:application/json' -X POST -d '{"freq":30,"publish_prefix":"dh_","continual_publish":false}' http://localhost:5000/action/dayahead-optim
    Setting continual_publish to false as a runtime parameter results in the data only being saved in opt_res_latest.csv. Therefore, a separate publish action would need to be run to upload the results (see the sketch after these notes).
  • forecast_model_predict and regressor_model_predict:
    • As of now, neither predict action's published results fall under the continual_publish automated re-publish loop. This can, however, be changed if requested
  • Loop freq:
    • It may be best to set the loop freq to a fixed value of 1 minute (or under)
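
A usage note on the bypass case above (not part of the PR itself): once continual_publish has been skipped, the saved results could be uploaded with an explicit publish call. A minimal sketch using Python's requests, assuming the standard publish-data action also accepts the publish_prefix runtime parameter used in the examples above:

```python
import requests

# Manually publish the latest optimization results (opt_res_latest.csv) to HA,
# since continual_publish was set to false for the preceding dayahead-optim call.
# The publish_prefix mirrors the prefix used in the optimization request.
response = requests.post(
    "http://localhost:5000/action/publish-data",
    json={"publish_prefix": "dh_"},
    timeout=60,
)
print(response.status_code, response.reason)
```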


@davidusb-geek
Owner

Hi @GeoDerp, this seems nice!
So the publish action is done automatically.
Is this freq parameter different from the one in config.yaml?

retrieve_hass_conf:
  freq: 30 # The time step to resample retrieved data from hass in minutes

If so, then it should probably be given another name to avoid confusion, e.g. freq_publish?

A comment: this may be the opportunity to solve the issue about unique IDs:
davidusb-geek/emhass-add-on#91

@GeoDerp
Contributor Author

GeoDerp commented May 15, 2024

> Hi @GeoDerp, this seems nice! So the publish action is done automatically. Is this freq parameter different from the one in config.yaml?
>
> retrieve_hass_conf:
>   freq: 30 # The time step to resample retrieved data from hass in minutes
>
> If so, then it should probably be given another name to avoid confusion, e.g. freq_publish?
>
> A comment: this may be the opportunity to solve the issue about unique IDs: davidusb-geek/emhass-add-on#91

We can definitely set up a freq_publish parameter.
The loop frequency is initially set by the freq provided in config.yaml.
It is assumed that freq may change depending on a combination of MPC and dayahead calls (for instance, dayahead may use a 30-minute freq while MPC uses 1 minute).
Therefore the metadata logs the lowest freq used, and the loop is set to that. It's very much unnecessary, though; we could just set a freq_publish parameter, or set a fixed loop of 1 minute.
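
For illustration only (the metadata layout and the freq field name are assumptions, not the PR's actual format), the lowest-freq selection described above could look roughly like this:

```python
import json
from pathlib import Path


def shortest_saved_freq(entity_path: Path, default_minutes: int = 30) -> int:
    """Return the smallest freq, in minutes, recorded across the saved entity files."""
    freqs = []
    for entity_file in entity_path.glob("*.json"):
        metadata = json.loads(entity_file.read_text())
        if isinstance(metadata, dict) and "freq" in metadata:
            freqs.append(int(metadata["freq"]))
    # Fall back to the config freq (e.g. 30 minutes) if nothing has been saved yet.
    return min(freqs, default=default_minutes)
```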

Edit: oh sorry, yeah the publish is done automatically.

@GeoDerp
Contributor Author

GeoDerp commented May 15, 2024

I might see how difficult it would be to fix the unique_id issue

@davidusb-geek
Owner

> We can definitely set up a freq_publish parameter.

No, that's ok. I thought that it was a different parameter, but it is the same, so let's keep freq.

@GeoDerp
Contributor Author

GeoDerp commented May 15, 2024

@purcell-lab, what's your take on the pull request?

@purcell-lab
Contributor

This functionality is very welcome. In 99% of cases I almost always have a publish command following an optimisation command, as I want my automations to take action as soon as a new set of optim results is available. In your flow diagram this is covered by the publish_data function. In the case where MPC is run every 1 or 7 minutes, will it immediately publish results without awaiting the loop?

The other case is when we cross a time step boundary (e.g. minute 0/30): in many cases there is a new set of data awaiting publication from the last optim run, which should be published immediately. Is this the function of continual_publish? Does it need to run every minute, or just every time step?

@GeoDerp
Contributor Author

GeoDerp commented May 17, 2024

Sorry for my delay. I created a video explaining the progress of the PR (forgive my tiredness). I'll merge the latest changes tomorrow when I'm more awake.

[Video: continual_publish explanation]

Side note: I am thinking of creating a couple of videos explaining my development process and the EMHASS workflow (once it's finalized). Let me know if you like this video and I'll consider it.

@davidusb-geek
Owner

Wow, those kinds of videos are really genius! That type of tutorial will help a lot of people with first setups, for example. Great job! Definitely go for them if you're willing, to explain the EMHASS workflow.

For this PR, the flow diagram at the beginning was already great, and the video helps to understand it even better.
How will we deal with the default behavior?
It could be great to set continual_publish to True by default, right? But then that may somehow collide with our current automations to publish data?

@GeoDerp
Contributor Author

GeoDerp commented May 18, 2024

Just a note: the README will likely need to be improved further before the merge. I may also have missed some documentation in the README or docs that requires adjustment for this PR. All help will be appreciated.

> How will we deal with the default behavior?

I agree, I think it's best to leave continual_publish set to false by default for existing EMHASS users.
However, I have adjusted the README examples (Common for any installation method) to include the continual_publish functionality as a publish option, currently presenting both the old and the new option for the base day-ahead example. I can remove an option on request.
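
For context, the two publish options shown in the README for the base day-ahead example differ roughly as follows. This is an illustrative sketch using Python's requests rather than the README's exact curl commands, and the empty request bodies assume default runtime parameters:

```python
import requests

BASE_URL = "http://localhost:5000"  # assumed local EMHASS instance

# Old option: run the optimization, then explicitly publish the results
# (typically triggered on a schedule by a Home Assistant automation or cron).
requests.post(f"{BASE_URL}/action/dayahead-optim", json={}, timeout=300)
requests.post(f"{BASE_URL}/action/publish-data", json={}, timeout=60)

# New option: with continual_publish set to true in the configuration, the
# optimization call alone is enough; the background loop keeps re-publishing
# the saved entities to HA on its own.
requests.post(f"{BASE_URL}/action/dayahead-optim", json={}, timeout=300)
```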

@davidusb-geek, let me know if you would like unit tests generated for this PR.

@GeoDerp
Contributor Author

GeoDerp commented May 22, 2024

Thank my Dad for noticing the DateTime Issue while testing. Bug Fixed.

@davidusb-geek
Owner

Hi @GeoDerp, is this good to go?

@GeoDerp
Contributor Author

GeoDerp commented May 28, 2024

> Hi @GeoDerp, is this good to go?

The code should be good to go. I would recommend reading the README file first, just to see if you're happy with it. (I definitely put more work into the code than the README.)
I also haven't added any relevant unit tests (not sure if they're required for this integration). Happy to look into some if requested.

@davidusb-geek davidusb-geek merged commit 6ca18b4 into davidusb-geek:master Jun 1, 2024
13 checks passed