An exponential cost function is the most physically motivated and easily justifiable form, but it inherently yields costs spanning many orders of magnitude, which makes PR solving very difficult and time-consuming. Our PR solving procedure depends on iteratively adding cost via PR cost updates until the path ordering eventually switches. However, if the shortest and 2nd-shortest paths differ in cost by e.g. 10,000 (where the shortest path has an unsolved nested PR and the 2nd-shortest does not), and each iteration only increases the cost of the shortest path by e.g. 10, then it will take 1,000 iterations for the 2nd-shortest path to become the shortest and thus for the principal PR in question to be solved.
Perhaps PR solving could be accelerated in this case by checking whether a PR cost has increased by the same amount, call it dx, for e.g. 5 or 10 consecutive iterations. If so, we could start increasing it by 10*dx for some number of iterations instead, and then keep growing dx by some factor after each waiting period until a path ordering switch finally occurs, solving the principal PR.
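To make the idea concrete, here is a minimal sketch of the escalation scheme on a toy two-path model. The function name, parameters, and the flat per-iteration cost model are all hypothetical stand-ins for the real PR cost-update machinery; it only illustrates how escalating dx turns a linear search into a roughly geometric one.

```python
def solve_pr_with_escalation(cost_a, cost_b, dx=10.0, window=5, factor=10.0):
    """Toy model: cost_a is the shortest path (carrying the unsolved nested
    PR) and cost_b is the 2nd-shortest. Each iteration adds dx to cost_a;
    whenever dx has been applied unchanged for `window` consecutive
    iterations, dx is escalated by `factor`. Returns the number of
    iterations until the path ordering switches (cost_a > cost_b)."""
    iterations = 0
    unchanged = 0  # consecutive iterations at the current dx
    while cost_a <= cost_b:
        cost_a += dx
        iterations += 1
        unchanged += 1
        if unchanged >= window:  # dx has stalled at the same value
            dx *= factor         # escalate the increment
            unchanged = 0
    return iterations

# With a cost gap of 10,000 and a base increment of 10:
accelerated = solve_pr_with_escalation(0.0, 10_000.0)           # escalation on
linear = solve_pr_with_escalation(0.0, 10_000.0, factor=1.0)    # plain updates
print(accelerated, linear)
```

Under these assumptions the escalated version crosses the 10,000 gap in a handful of iterations, versus roughly a thousand for the fixed-dx baseline, at the price of overshooting the crossover point by up to the final (large) dx.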
This is definitely a half-baked idea, and I can easily imagine corner cases where it would run into problems, but it should frame the general issue: how do we accelerate PR solving with physically motivated exponential cost functions, rather than simply using cost functions that bring costs closer together, which makes PR solving easier but is something of an easy way out and is not physically justified.