Improve parameter optimisation algorithm #11
We can try the search from both ends, i.e. from one end and then the other, with the stopping criterion remaining the same: if stepping further increases the MAE, we stop and use the lowest parameter value that yields the lowest MAE. We do this from the smallest to the largest parameter value, then again from the largest to the smallest. Finally, of the two "optimal" parameter values, we select the one yielding the smaller MAE.
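A minimal sketch of this two-sided sweep, assuming a hypothetical `mae_for(value)` helper that fits the model and returns its MAE for a single parameter value (the function and variable names are illustrative, not taken from the repository):

```python
def sweep(values, mae_for):
    """Step through `values` in order; stop as soon as MAE increases
    relative to the previous step, and return the lowest-MAE pair seen."""
    best_value = values[0]
    best_mae = prev_mae = mae_for(values[0])
    for v in values[1:]:
        mae = mae_for(v)
        if mae > prev_mae:            # stepping further increased MAE: stop
            break
        if mae < best_mae:            # on ties, keep the earlier value
            best_value, best_mae = v, mae
        prev_mae = mae
    return best_value, best_mae

def two_sided_sweep(values, mae_for):
    """Sweep smallest-to-largest, then largest-to-smallest, and keep
    whichever direction ends with the smaller MAE."""
    ascending = sweep(sorted(values), mae_for)
    descending = sweep(sorted(values, reverse=True), mae_for)
    return min(ascending, descending, key=lambda r: r[1])
```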
The previous proposal does not fully appreciate the complexity of the parameter spaces! The trend in MAE across one parameter is unlikely to be the same across all levels of all the other parameters, which means the current simplistic coordinate-descent-like optimisation may prove suboptimal. However, we can perform a similar coordinate-descent-like optimisation starting from each corner of the 4-dimensional parameter space. Do we zigzag from each corner? Currently, we start from one corner and move along a single dimension; once the minimum MAE is found, we fix that dimension at its "optimum" value, jump to the next dimension, and so on. Can we efficiently check all 4 dimensions before each step from each corner? Also, a hopeful sidenote: maybe the current output is too granular, and as we get more information it will all smooth out and the parameter spaces will become more or less smooth and unimodal?
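A rough sketch of the corner-start idea: one coordinate pass per corner, as in the current approach, but launched from all 2^4 = 16 corners of the grid and keeping the best result. `grids` (four sorted value lists) and `mae_for(params)` are assumed, illustrative names, not the repository's actual API:

```python
from itertools import product

def single_pass_descent(grids, start, mae_for):
    """Fix each dimension in turn at the value that minimises MAE,
    holding the other dimensions at their current values."""
    current = list(start)
    for dim, grid in enumerate(grids):
        best_v, best_mae = current[dim], mae_for(tuple(current))
        for v in grid:
            candidate = list(current)
            candidate[dim] = v
            mae = mae_for(tuple(candidate))
            if mae < best_mae:
                best_v, best_mae = v, mae
        current[dim] = best_v          # lock this dimension at its "optimum"
    return tuple(current), mae_for(tuple(current))

def descent_from_every_corner(grids, mae_for):
    corners = product(*[(g[0], g[-1]) for g in grids])   # 16 corners in 4-D
    results = (single_pass_descent(grids, c, mae_for) for c in corners)
    return min(results, key=lambda r: r[1])              # lowest final MAE wins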
Tested optimisation modifications, resolving issue #11.
My optimisation revision is resulting in worse concordances and MAE! See commit 91b023b for comparison.
Testing optimisation changes here. Main changes:
The most recent merge from dev:
To address the glacial pace at which the new optimisation algorithm proceeds, we added an additional inner-loop break condition: we stop when the last 3 consecutive MAEs are the same.
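A minimal sketch of that break condition, with the same hypothetical `mae_for(value)` evaluator; the real inner loop lives in the optimisation code, so this only illustrates the plateau check:

```python
def sweep_with_plateau_break(values, mae_for):
    """Sweep one dimension, but bail out once the MAE has been
    identical for three consecutive steps."""
    history = []
    best_value, best_mae = None, float("inf")
    for v in values:
        mae = mae_for(v)
        history.append(mae)
        if mae < best_mae:
            best_value, best_mae = v, mae
        # the extra inner-loop break: three identical MAEs in a row
        if len(history) >= 3 and history[-1] == history[-2] == history[-3]:
            break
    return best_value, best_mae
```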
Still too slow!
It's starting to seem like the parameter spaces are not smooth and unimodal. See the sensitivity analysis currently running in perf.md.