Function to generate objective and objective gradient evaluator functions #115
Conversation
Okay, based on the other PR, I would say that this is a fine function for quite a few cases. However, we optimally want to define the objective expression as a continuous function, possibly with an integral.

import numpy as np
import sympy as sm
import sympy.physics.mechanics as me

x, v, f = me.dynamicsymbols("x v f")
t = me.dynamicsymbols._t
m = sm.symbols("m")
integration_method = "backward euler"
node_times = np.linspace(0, 2, 101)  # duration=2.0; N=101; interval_value=0.02
obj_expr = m**2 + sm.Integral(f**2, (t, 0.5, 1.5))
obj, obj_grad = create_objective_function(
    obj_expr, (x, v), (f,), (m,), integration_method, node_times)
# Should give something like:
# free[303] is just the value of m
# free[228] is f.subs(t, 0.52) (we are using backward euler), therefore it is not f.subs(t, 0.5)
# free[277] is f.subs(t, 1.50)
def obj(free):
    return free[303] ** 2 + np.sum(free[228:278] ** 2)

def obj_grad(free):
    return np.hstack((np.zeros(101), np.zeros(101), np.zeros(26),
                      2 * free[228:278], np.zeros(25), 2 * free[303]))

I am not yet fully sure whether this is the ideal method to construct obj_grad, but it is the version that looks most like what this PR currently tries to do.
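The index bookkeeping in the comment above can be checked with a short numpy sketch. The free-vector layout assumed here (all x values, then all v values, then all f values, then m) follows the indices quoted in the comment:

```python
import numpy as np

N = 101                              # number of nodes
node_times = np.linspace(0, 2, N)    # h = 0.02

# Assumed free-vector layout: x (N values), v (N values), f (N values), m.
f_start = 2 * N                      # f occupies free[202:303]
m_index = 3 * N                      # m sits at free[303]

# Backward Euler weights the nodes strictly after 0.5 up to and including
# 1.5, i.e. t = 0.52, 0.54, ..., 1.50 (a small tolerance guards against
# floating point noise in linspace).
mask = (node_times > 0.5 + 1e-9) & (node_times <= 1.5 + 1e-9)
f_indices = f_start + np.nonzero(mask)[0]

print(f_indices[0], f_indices[-1], m_index)  # matches free[228:278] and free[303]
```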
A temporary simplification would be to just ignore the bounds of the integral for now, i.e. integrate over the entire duration.
Here is a fun objective that will give quite a few problems, even when trying to keep things simple.

import numpy as np
import sympy as sm
import sympy.physics.mechanics as me

x, v, f = me.dynamicsymbols("x v f")
t = me.dynamicsymbols._t
m, k, c = sm.symbols("m k c")
integration_method = "backward euler"
node_times = np.linspace(0, 2, 11)
states = sm.ImmutableDenseMatrix([x, v])
inputs = sm.ImmutableMatrix([f])
params = sm.ImmutableMatrix([c, k, m])
n, q, r = states.shape[0], inputs.shape[0], params.shape[0]
obj = m**2 + sm.Integral(f**2, (t,)) + sm.Integral(c**2 * f, (t,)) + sm.Integral(k**2, (t,))
obj_grad = sm.ImmutableMatrix([obj]).jacobian(states.col_join(inputs).col_join(params))
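Running this objective through sympy (a quick standalone check, not part of the PR's code) shows why it is tricky: differentiation happens under the integral sign, so the gradient entries for f, c, and k stay wrapped in Integral objects, while the entry for m contains the plain term 2*m:

```python
import sympy as sm
import sympy.physics.mechanics as me

x, v, f = me.dynamicsymbols("x v f")
t = me.dynamicsymbols._t
m, k, c = sm.symbols("m k c")

states = sm.ImmutableDenseMatrix([x, v])
inputs = sm.ImmutableMatrix([f])
params = sm.ImmutableMatrix([c, k, m])

obj = (m**2 + sm.Integral(f**2, (t,)) + sm.Integral(c**2 * f, (t,))
       + sm.Integral(k**2, (t,)))

# 1x6 row vector: derivatives wrt x, v, f, c, k, m in that order.
obj_grad = sm.ImmutableMatrix([obj]).jacobian(
    states.col_join(inputs).col_join(params))
print(obj_grad)
```

So any numerical evaluator has to handle a mix of integrated and non-integrated terms in the same gradient vector.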
@moorepants I think I figured out a method to combine it all. It is not the most beautiful function I have ever written, but it seems to do the job, as the tests show.
#116 fixes the test failures (pytest 8 just came out with a deprecation). You can merge master.
|
# Compute analytical gradient of the objective function
objective_grad = sm.ImmutableMatrix([objective]).jacobian(
    states[:] + inputs[:] + params[:])
Just as a note for me to see what this looks like:
import sympy as sm
a, t = sm.symbols('a, t')
f, g = sm.Function('f')(t), sm.Function('g')(t)
expr = f**2 + a*sm.sin(g)
int_expr = sm.Integral(expr, t)
sm.ImmutableMatrix([int_expr]).jacobian([a, f, g])
Matrix([[Integral(sin(g(t)), t), Integral(2*f(t), t), Integral(a*cos(g(t)), t)]])
obj_grad_time_expr_eval = lambdify_function(
    objective_grad[:n + q], np.hstack((0, np.ones(N - 1))), False)
obj_grad_param_expr_eval = lambdify_function(
    objective_grad[n + q:], np.hstack((0, np.ones(N - 1))), True)
I don't follow these lines perfectly. What is the reason to force the sum or not?
Yeah, it all feels a bit like a cheat.
By multiplying with an array I achieve two goals:
- I make sure that we always get the correct shape: both sm.Integral(f(t), t) and sm.Integral(m, t) should work. In the latter case m is translated to an array of m values.
- It allows setting weights for nodes. In backward Euler you have a weight of 0 for the first node.
Whether to take the sum depends on whether we'd like to execute the actual integration or keep the items separate.
When computing the gradient with respect to a state, we want the gradient for each of the nodes separately, so we should not take the sum and just make sure that we apply the right weight using the multiplication array.
However, suppose we want the gradient of sm.Integral(f(t)**2 * m**2, t) with respect to m, i.e. sm.Integral(2 * f(t)**2 * m, t); then we do need to properly integrate it by taking the sum.
You happy with this? I think it is good to go in if you do.
I think that it is good for now. Unless you're planning to add a lot of advanced algorithms soon, I really don't see the need to make it more advanced, and the internals may change anyway. The only thing is that the keyword arguments expect fixed time stepping.
Ok, let's merge. Nice addition!
This PR proposes a utility function to allow one to specify an analytic expression as the objective function.
It is a relatively simple function with one major limitation, namely that one cannot combine time-dependent and time-independent variables in the objective function.
Additionally, I noticed while opening this PR that #31 aims to implement a similar utility in an entirely different way. Will have to look into that one to see if it would be a better solution.