Replies: 1 comment

cc @dme65
Hi BoTorch community,
I have had some good success using SEBO with Ax via the Ax Developer API. I have generally tested it with the L1_norm penalty and all zeros as the target point. I am interested, though, in using a weighted L1 norm as the penalty.
For example, some but not all parameters may be useful/appropriate to sparsify, and it would be nice to be able to take the rest out of the equation.
As another example, if the "input cost" of each parameter is known a priori, a user could create a weight tensor reflecting it, and the acquisition function would take it into account.
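Purely for illustration, a weight tensor like the following is what I have in mind (the values and the `input_costs` name are made up by me, not anything in Ax):

```python
import torch

# Hypothetical a-priori "input costs" for a d=4 problem; a weight of 0
# would take that parameter out of the sparsity penalty entirely.
input_costs = torch.tensor([1.0, 0.5, 2.0, 0.0])
weights = input_costs / input_costs.max()  # scale to [0, 1]; just one possible convention
```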
One question would be: is there some reason this wouldn't work? As far as I can tell from the source code,
```python
def L1_norm_func(X: Tensor, init_point: Tensor) -> Tensor:
    r"""L1_norm takes a `batch_shape x n x d`-dim input tensor `X`
    to a `batch_shape x n x 1`-dimensional L1 norm tensor. To be used
    for constructing a GenericDeterministicModel.
    """
    return torch.linalg.norm((X - init_point), ord=1, dim=-1, keepdim=True)
```
https://ax.dev/api/_modules/ax/models/torch/botorch_modular/sebo.html#L1_norm_func
the "init_point" also referred to as target_point is a d-dimensional tensor which is used to subtract from a point in the space, before sum(abs(X)). Similarly or instead, it could take a weight-tensor to multiply with, before sum(abs(X)).
I am working in a normalized [0,1]^d design space and can do without a target_point, which would make it a bit easier to formulate the weights.
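Concretely, this is the kind of modification I imagine (my own sketch, not Ax code; `weighted_L1_norm_func` and `weights` are names I made up):

```python
import torch
from torch import Tensor


def weighted_L1_norm_func(X: Tensor, init_point: Tensor, weights: Tensor) -> Tensor:
    r"""Weighted variant of L1_norm_func: maps a `batch_shape x n x d`-dim
    input tensor `X` to a `batch_shape x n x 1`-dim tensor of weighted L1
    norms, i.e. sum_j weights[j] * |X[..., j] - init_point[j]|. Here
    `weights` is a non-negative d-dim tensor; a weight of 0 removes that
    parameter from the penalty. Intended for a GenericDeterministicModel,
    like the original.
    """
    return (weights * (X - init_point).abs()).sum(dim=-1, keepdim=True)
```

In my case init_point would just be torch.zeros(d), and since GenericDeterministicModel expects a callable of X alone, the extra arguments could be bound with functools.partial beforehand.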
I am more unsure about the L0_norm penalty, as I understand that a differentiable relaxation is necessary there and that the weights would have to be binary.
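For reference, a rough sketch of what a weighted relaxation might look like, assuming the exponential smoothing 1 - exp(-(x - t)^2 / a) that SEBO-style homotopy tightens as a -> 0 (the function name and the `weights` argument are mine):

```python
import torch
from torch import Tensor


def weighted_L0_relaxation(
    X: Tensor, init_point: Tensor, weights: Tensor, a: float
) -> Tensor:
    r"""Smoothed, weighted stand-in for the L0 norm:
    sum_j weights[j] * (1 - exp(-(X[..., j] - init_point[j])**2 / a)).
    As a -> 0, each term approaches the indicator of X[..., j] differing
    from the target, so the sum approaches a weighted count of
    non-sparsified parameters.
    """
    return (weights * (1 - torch.exp(-((X - init_point) ** 2) / a))).sum(
        dim=-1, keepdim=True
    )
```

Mechanically this would accept non-binary weights too; whether that interacts sensibly with the homotopy schedule is part of what I am unsure about.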
Another question, though: would there be an easier approach to implementing a modified weighted_L1_norm function than copying the module as a whole and changing the function?
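For example, I was wondering whether simply rebinding the module-level function would be viable (a sketch under the assumption that sebo.py looks L1_norm_func up from its own module globals at call time; `my_weights` is made up):

```python
import torch
import ax.models.torch.botorch_modular.sebo as sebo_module

# Hypothetical weights for a d=3 problem.
my_weights = torch.tensor([1.0, 0.0, 0.5])


def patched_L1_norm_func(X, init_point):
    # Same signature as the original, so callers inside sebo.py are unaffected.
    return (my_weights * (X - init_point).abs()).sum(dim=-1, keepdim=True)


# Rebind before the SEBO acquisition is constructed.
sebo_module.L1_norm_func = patched_L1_norm_func
```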