Docs: Expand Robust Search tutorial #147
I agree. It would be good to expand this a bit more.
Maybe another section with adaptive policies could even be added.
@quaquel What's your vision on (dynamic) adaptive policies (pathways) in the workbench? What's the current level of support the workbench offers? Do you like the current level of support? If not, what aspects would you like to improve (functionality, documentation, etc.)?
Conceptually, pathways and exploratory modeling are distinct. The former is a way of structuring interventions. The latter is a way of developing and using models to aid decision making. Evidently, you can use exploratory modeling to support the design of adaptation pathways. However, I wouldn't immediately know what kind of additional modeling support could be added to the workbench itself for this.
Currently, the idea is that you generate adaptive policy pathways outside the EMA Workbench (in your model, etc.) and then input those as regular policies. Maybe one interesting thing would be to have insight into whether the triggers of an adaptive policy are reached, per scenario. And then you can analyze which parameters cause a trigger to be triggered. Are there any example models that include adaptive policies and are transparent about, or analyze, whether triggers are reached?
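To make the "insight into whether triggers are reached" idea concrete, here is a minimal sketch of exposing trigger activation as an ordinary workbench outcome, so it can be analyzed per scenario like anything else. The `policy_model` function, its parameters, and the reservoir story are hypothetical illustrations; only the `ema_workbench` calls themselves are real.

```python
from ema_workbench import (Model, RealParameter, ScalarOutcome,
                           ema_logging, perform_experiments)


def policy_model(rainfall=1.0, demand_growth=0.02, trigger_level=0.8):
    """Toy model: a reservoir level drifts with demand; an adaptive
    action fires once the level drops below trigger_level."""
    level = 1.0
    trigger_reached = 0
    for _ in range(50):
        level += 0.05 * rainfall - demand_growth
        if level < trigger_level:
            trigger_reached = 1  # signpost crossed: adaptive action activates
            level += 0.3         # effect of the adaptive action
    return {"final_level": level, "trigger_reached": trigger_reached}


model = Model("adaptive", function=policy_model)
model.uncertainties = [RealParameter("rainfall", 0.5, 1.5),
                       RealParameter("demand_growth", 0.0, 0.05)]
model.levers = [RealParameter("trigger_level", 0.5, 1.0)]
model.outcomes = [ScalarOutcome("final_level"),
                  ScalarOutcome("trigger_reached")]

ema_logging.log_to_stderr(ema_logging.INFO)
experiments, outcomes = perform_experiments(model, scenarios=1000, policies=5)
```

With `trigger_reached` recorded per experiment, standard workbench analyses (feature scoring, scenario discovery) can be applied to it directly.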
From the workbench perspective, what matters are the levers. How those levers play out is up to the model itself. Levers could be adaptive actions, trigger values, or even signposts. Exactly which signpost, trigger, and policy option belong together is unknown at the workbench level. The typical way I have been doing this is through additional outcomes of interest, which might track the signpost or the activation of options. And yes, once you have that information, you can do sensitivity analysis or scenario discovery on it. Still, more often, I work the other way around. I start with a candidate policy option. Next, I identify the conditions under which it performs poorly using scenario discovery / GSA. Then, I try and test various signposts that can detect in a timely manner that you are heading in the wrong direction. Once you have a nice (set of) signpost(s), you add another option and play around with, or optimize, the trigger values. Now, go back to step one and repeat. Unfortunately, we don't have a nice toy model for this.
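Continuing the sketch above, the "identify the conditions under which it performs poorly" step could look like the following with PRIM-based scenario discovery. The 0.7 failure threshold and the dropped columns are illustrative assumptions, not part of the thread.

```python
from ema_workbench.analysis import prim

# cases of interest: experiments where the candidate policy fails,
# i.e. the final reservoir level ends up too low (threshold is arbitrary)
y = outcomes["final_level"] < 0.7

# restrict the search to the uncertainties: drop the lever and the
# bookkeeping columns the workbench adds to the experiments DataFrame
x = experiments.drop(columns=["trigger_level", "policy", "model", "scenario"])

prim_alg = prim.Prim(x, y, threshold=0.8)
box = prim_alg.find_box()
box.show_tradeoff()          # coverage/density trade-off over peeling steps
box.inspect(style="graph")   # which uncertainties define the failure box
```

The resulting box limits describe exactly the kind of conditions against which one could then design signposts and tune trigger values.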
That would be interesting to have, and to have a tutorial about. I found this concept very fascinating when doing the course myself, but I never deepened my understanding by actually exploring it with a model.
It seems like the Robust Search part of the Directed search tutorial ends somewhat abruptly. It stops after running the experiment, and I think expanding it with interpreting and analyzing the results could be useful. The whole Directed search tutorial could maybe also benefit from a general conclusion/summary.
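As a rough idea of what such an expansion could cover, the sketch below plots the trade-offs among the Pareto-approximate solutions and an epsilon-progress convergence curve. It assumes `results` and `convergence` as returned by a `robust_optimize()` call like the one the tutorial already runs; the outcome column names are illustrative.

```python
import matplotlib.pyplot as plt

from ema_workbench.analysis import parcoords

# trade-offs across the Pareto-approximate policies
outcome_cols = ["mean_final_level", "std_final_level"]
data = results[outcome_cols]
limits = parcoords.get_limits(data)
axes = parcoords.ParallelAxes(limits)
axes.plot(data)
plt.show()

# has the optimization converged? plot epsilon progress over the run
fig, ax = plt.subplots()
ax.plot(convergence.nfe, convergence.epsilon_progress)
ax.set_xlabel("number of function evaluations")
ax.set_ylabel("epsilon progress")
plt.show()
```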