Advent of Rework #1: Breaking things
* Rename Problem to AbstractManoptProblem
* Introduce an AbstractManifoldObjective
* Rename solve to solve!
kellertuer committed Dec 1, 2022
1 parent 40ec550 commit a836bb9
Showing 68 changed files with 414 additions and 330 deletions.
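
In user-facing code, the most visible of these breaking changes is the renamed evaluation type. A minimal runnable sketch (the manifold, cost, and subgradient below are assumptions chosen for illustration; only the `subgradient_method!` call and the `evaluation=` keyword mirror `benchmarks/benchmark_subgradient.jl` in this diff):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [1.0, 0.0, 0.0]
f(M, p) = distance(M, p, q)          # cost: Riemannian distance to q
function ∂f!(M, X, p)                # in-place (sub)gradient of that distance
    log!(M, X, p, q)
    X .*= -1 / max(distance(M, p, q), eps())
    return X
end
p0 = [0.0, 1.0, 0.0]

# Manopt 0.3.x:  subgradient_method!(M, f, ∂f!, p0; evaluation=MutatingEvaluation())
# Manopt 0.4.0:
subgradient_method!(M, f, ∂f!, p0; evaluation=InplaceEvaluation())
```

The `Problem` → `AbstractManoptProblem` and `solve` → `solve!` renames, in contrast, mostly surface in the solver framework and documentation files further down.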
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -39,7 +39,7 @@ Even providing a single new method is a good contribution.

A main contribution you can provide is another algorithm that is not yet included in the
package.
-An algorithm is always based on a concrete type of a [`Problem`](https://manoptjl.org/stable/plans/index.html#Problems-1) storing the main information of the task and a concrete type of an [`Option`](https://manoptjl.org/stable/plans/index.html#Options-1from) storing all information that needs to be known to the solver in general. The actual algorithm is split into an initialization phase, see [`initialize_solver!`](https://manoptjl.org/stable/solvers/index.html#Manopt.initialize_solver!), and the implementation of the `i`th step of the solver itself, see before the iterative procedure, see [`step_solver!`](https://manoptjl.org/stable/solvers/index.html#Manopt.step_solver!).
+An algorithm is always based on a concrete type of a [`AbstractManoptProblem`](https://manoptjl.org/stable/plans/index.html#AbstractManoptProblems-1) storing the main information of the task and a concrete type of an [`Option`](https://manoptjl.org/stable/plans/index.html#Options-1from) storing all information that needs to be known to the solver in general. The actual algorithm is split into an initialization phase, see [`initialize_solver!`](https://manoptjl.org/stable/solvers/index.html#Manopt.initialize_solver!), and the implementation of the `i`th step of the solver itself, see before the iterative procedure, see [`step_solver!`](https://manoptjl.org/stable/solvers/index.html#Manopt.step_solver!).
For these two functions, it would be great if a new algorithm uses functions from the [`ManifoldsBase.jl`](https://juliamanifolds.github.io/Manifolds.jl/latest/interface.html) interface as generically as possible. For example, if possible use [`retract!(M,q,p,X)`](https://juliamanifolds.github.io/Manifolds.jl/latest/interface.html#ManifoldsBase.retract!-Tuple{AbstractManifold,Any,Any,Any}) in favor of [`exp!(M,q,p,X)`](https://juliamanifolds.github.io/Manifolds.jl/latest/interface.html#ManifoldsBase.exp!-Tuple{AbstractManifold,Any,Any,Any}) to perform a step starting in `p` in direction `X` (in place of `q`), since the exponential map might be too expensive to evaluate or might not be available on a certain manifold. See [Retractions and inverse retractions](https://juliamanifolds.github.io/Manifolds.jl/latest/interface.html#Retractions-and-inverse-Retractions) for more details.
Further, if possible, prefer [`retract!(M,q,p,X)`](https://juliamanifolds.github.io/Manifolds.jl/latest/interface.html#ManifoldsBase.retract!-Tuple{AbstractManifold,Any,Any,Any}) in favor of [`retract(M,p,X)`](https://juliamanifolds.github.io/Manifolds.jl/latest/interface.html#ManifoldsBase.retract-Tuple{AbstractManifold,Any,Any}), since a computation in place of a suitable variable `q` reduces memory allocations.
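
For example, a solver step along a tangent vector, written against the generic interface, might look as follows; a minimal sketch assuming the `Sphere` from `Manifolds.jl` purely for illustration:

```julia
using Manifolds  # re-exports the ManifoldsBase.jl interface

M = Sphere(2)
p = [1.0, 0.0, 0.0]     # current iterate
X = [0.0, 0.1, 0.0]     # tangent vector at p, e.g. a step direction
q = similar(p)

retract!(M, q, p, X)     # generic retraction, writes the new point into q
# exp!(M, q, p, X)       # exact exponential map: possibly more expensive or unavailable
```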

@@ -58,9 +58,9 @@ We run [`JuliaFormatter.jl`](https://github.com/domluna/JuliaFormatter.jl) on th

We also follow a few internal conventions:

-- It is preferred that the `Problem`'s struct contains information about the general structure of the problem.
+- It is preferred that the `AbstractManoptProblem`'s struct contains information about the general structure of the problem.
- Any implemented function should be accompanied by its mathematical formulae if a closed form exists.
-- Problem and option structures are stored within the `plan/` folder and sorted by properties of the problem and/or solver at hand.
+- AbstractManoptProblem and option structures are stored within the `plan/` folder and sorted by properties of the problem and/or solver at hand.
- Within the source code of one algorithm, the high level interface should be first, then the initialization, then the step.
- Otherwise an alphabetical order is preferable.
- The above implies that the mutating variant of a function follows the non-mutating variant.
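
Spelled out, the last two conventions mean a hypothetical helper pair would be laid out like this (the names are purely illustrative):

```julia
using ManifoldsBase

# non-mutating variant first: allocates and returns the new point
my_step(M::AbstractManifold, p, X) = retract(M, p, X)

# ...directly followed by its mutating variant, which writes the result into q
my_step!(M::AbstractManifold, q, p, X) = retract!(M, q, p, X)
```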
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "Manopt"
uuid = "0fc0a36d-df90-57f3-8f93-d78a9fc72bb5"
authors = ["Ronny Bergmann <[email protected]>"]
version = "0.3.47"
version = "0.4.0"

[deps]
ColorSchemes = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
4 changes: 2 additions & 2 deletions benchmarks/benchmark_subgradient.jl
@@ -23,8 +23,8 @@ function ∂f!(M, X, y)
return X
end
x2 = copy(x0)
-subgradient_method!(M, f, ∂f!, x2; evaluation=MutatingEvaluation())
-@btime subgradient_method!($M, $f, $∂f!, x3; evaluation=$(MutatingEvaluation())) setup = (
+subgradient_method!(M, f, ∂f!, x2; evaluation=InplaceEvaluation())
+@btime subgradient_method!($M, $f, $∂f!, x3; evaluation=$(InplaceEvaluation())) setup = (
x3 = deepcopy($x0)
)

8 changes: 4 additions & 4 deletions benchmarks/benchmark_trust_regions.jl
@@ -15,20 +15,20 @@ x_opt = trust_regions(M, cost, rgrad, rhess, x; max_trust_region_radius=8.0)
h = RHess(M, A, p)
g = RGrad(M, A)
x_opt2 = trust_regions(
-M, cost, g, h, x; max_trust_region_radius=8.0, evaluation=MutatingEvaluation()
+M, cost, g, h, x; max_trust_region_radius=8.0, evaluation=InplaceEvaluation()
)

@btime trust_regions(
-$M, $cost, $g, $h, x2; max_trust_region_radius=8.0, evaluation=$(MutatingEvaluation())
+$M, $cost, $g, $h, x2; max_trust_region_radius=8.0, evaluation=$(InplaceEvaluation())
) setup = (x2 = deepcopy($x))

x3 = deepcopy(x)
trust_regions!(
-M, cost, g, h, x3; max_trust_region_radius=8.0, evaluation=MutatingEvaluation()
+M, cost, g, h, x3; max_trust_region_radius=8.0, evaluation=InplaceEvaluation()
)

@btime trust_regions!(
-$M, $cost, $g, $h, x3; max_trust_region_radius=8.0, evaluation=$(MutatingEvaluation())
+$M, $cost, $g, $h, x3; max_trust_region_radius=8.0, evaluation=$(InplaceEvaluation())
) setup = (x3 = deepcopy($x))

@test distance(M, x_opt, x_opt2) ≈ 0
2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -69,7 +69,7 @@ Several functions are available, implemented on an arbitrary manifold, [cost fun

### Optimization Algorithms (Solvers)

-For every optimization algorithm, a [solver](@ref SolversSection) is implemented based on a [`Problem`](@ref) that describes the problem to solve and its [`Options`](@ref) that set up the solver, store interims values. Together they
+For every optimization algorithm, a [solver](@ref SolversSection) is implemented based on a [`AbstractManoptProblem`](@ref) that describes the problem to solve and its [`Options`](@ref) that set up the solver, store interims values. Together they
form a [plan](@ref planSection).
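
In code, the high-level interface assembles this plan internally; a hedged sketch (the cost, gradient, and the commented-out constructor names are assumptions for illustration; of the identifiers below, only `gradient_descent`, `AbstractManoptProblem`, `Options`, and `solve!` appear in this commit):

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = sum(abs2, p)       # illustrative cost
grad_f(M, p) = 2 .* p        # its gradient (Euclidean = Riemannian here)
p0 = [2.0, 1.0]

# Roughly, the high-level call builds an AbstractManoptProblem plus matching
# Options and then runs solve!(problem, options) on that plan:
#     problem = ...Problem(M, f, grad_f)    # hypothetical constructor
#     options = ...Options(...)             # solver-specific, hypothetical
#     solve!(problem, options)
p_opt = gradient_descent(M, f, grad_f, p0)
```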

### Visualization
2 changes: 1 addition & 1 deletion docs/src/plans/index.md
@@ -4,7 +4,7 @@
CurrentModule = Manopt
```

-In order to start a solver, both a [Problem](@ref ProblemSection) and [Options](@ref OptionsSection) are required.
+In order to start a solver, both a [AbstractManoptProblem](@ref AbstractManoptProblemSection) and [Options](@ref OptionsSection) are required.
Together they form a __plan__.
Everything related to problems, options, and their tools in general, is explained in this
section and its subpages. The specific Options related to a certain (concrete) solver can be
4 changes: 2 additions & 2 deletions docs/src/plans/problem.md
@@ -1,4 +1,4 @@
-# [Problems](@id ProblemSection)
+# [AbstractManoptProblems](@id ProblemSection)

```@meta
CurrentModule = Manopt
Expand All @@ -19,7 +19,7 @@ allocation function `X = gradF(x)` allocates memory for its result `X`, while `g
```@docs
AbstractEvaluationType
AllocatingEvaluation
-MutatingEvaluation
+InplaceEvaluation
```
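
To make the distinction concrete, the same gradient in both flavors (manifold and cost are illustrative assumptions; the solver calls mirror those appearing elsewhere in this commit):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [1.0, 0.0, 0.0]
f(M, p) = distance(M, p, q)^2 / 2

grad_f(M, p) = -log(M, p, q)     # allocating: returns a freshly allocated tangent vector

function grad_f!(M, X, p)        # in-place: writes into X and returns X
    log!(M, X, p, q)
    X .*= -1
    return X
end

p0 = [0.0, 1.0, 0.0]
gradient_descent(M, f, grad_f, p0)                                          # AllocatingEvaluation (default)
gradient_descent!(M, f, grad_f!, copy(p0); evaluation=InplaceEvaluation())  # InplaceEvaluation
```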

## Cost based problem
4 changes: 2 additions & 2 deletions docs/src/plans/stopping_criteria.md
@@ -13,8 +13,8 @@ StoppingCriterionSet
```

Then the stopping criteria `s` might have certain internal values to check against,
-and this is done when calling them as a function `s(p::Problem, o::Options)`,
-where the [`Problem`](@ref) and the [`Options`](@ref) together represent
+and this is done when calling them as a function `s(p::AbstractManoptProblem, o::Options)`,
+where the [`AbstractManoptProblem`](@ref) and the [`Options`](@ref) together represent
the current state of the solver. The functor returns either `false` when the stopping criterion is not fulfilled or `true` otherwise.
One field all criteria should have is the `s.reason`, a string giving the reason to stop, see [`get_reason`](@ref).
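
A hedged sketch of a custom criterion following this functor pattern (the type, its fields, and the exact call signature, here including the iteration number as `stop_solver!` does, are illustrative assumptions rather than the package's API):

```julia
using Manopt

# stop once a wall-clock budget is exhausted; `reason` carries the message for get_reason
mutable struct StopWhenTimeExceeded <: StoppingCriterion
    limit_seconds::Float64
    start::Float64
    reason::String
end
StopWhenTimeExceeded(limit) = StopWhenTimeExceeded(limit, time(), "")

function (c::StopWhenTimeExceeded)(::AbstractManoptProblem, ::Options, i)
    if time() - c.start > c.limit_seconds
        c.reason = "Exceeded the time budget of $(c.limit_seconds)s after $i iterations.\n"
        return true
    end
    return false
end
```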

6 changes: 3 additions & 3 deletions docs/src/solvers/index.md
@@ -5,7 +5,7 @@
CurrentModule = Manopt
```

-Solvers can be applied to [`Problem`](@ref)s with solver
+Solvers can be applied to [`AbstractManoptProblem`](@ref)s with solver
specific [`Options`](@ref).

# List of Algorithms
@@ -40,7 +40,7 @@ Note that the solvers (or their [`Options`](@ref), to be precise) can also be de
The main function a solver calls is

```@docs
-solve(p::Problem, o::Options)
+solve!(p::AbstractManoptProblem, o::Options)
```

which is a framework that you in general should not change or redefine.
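
Conceptually, and only as a rough sketch rather than the actual implementation, the framework wires the functions documented below together along these lines:

```julia
using Manopt

# rough conceptual sketch of the solve! framework, not Manopt's actual code
function sketch_solve!(p::AbstractManoptProblem, o::Options)
    initialize_solver!(p, o)
    i = 0
    while !stop_solver!(p, o, i)
        i += 1
        step_solver!(p, o, i)
    end
    return o    # callers then query get_solver_result / get_solver_return
end
```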
@@ -52,5 +52,5 @@ initialize_solver!
step_solver!
get_solver_result
get_solver_return
-stop_solver!(p::Problem, o::Options, i::Int)
+stop_solver!(p::AbstractManoptProblem, o::Options, i::Int)
```
8 changes: 4 additions & 4 deletions examples/FrankWolfeSPDMean.jl
@@ -180,7 +180,7 @@ PlutoUI.with_terminal() do
50,
],
record=[:Iteration, :Iterate, :Cost],
-evaluation=MutatingEvaluation(),
+evaluation=InplaceEvaluation(),
return_options=true,
)
end
@@ -201,7 +201,7 @@ statsF20 = @timed Frank_Wolfe_method!(
grad_weighted_mean!,
q1;
subtask=special_oracle!,
-evaluation=MutatingEvaluation(),
+evaluation=InplaceEvaluation(),
stopping_criterion=StopAfterIteration(20),
);

@@ -225,7 +225,7 @@ PlutoUI.with_terminal() do
:Stop,
1,
],
-evaluation=MutatingEvaluation(),
+evaluation=InplaceEvaluation(),
stopping_criterion=StopAfterIteration(200) | StopWhenGradientNormLess(1e-12),
return_options=true,
)
@@ -240,7 +240,7 @@ statsG = @timed gradient_descent!(
weighted_mean_cost,
grad_weighted_mean!,
q2;
-evaluation=MutatingEvaluation(),
+evaluation=InplaceEvaluation(),
stopping_criterion=StopAfterIteration(200) | StopWhenGradientNormLess(1e-12),
);

2 changes: 1 addition & 1 deletion src/Manopt.jl
@@ -185,7 +185,7 @@ export Problem,
StochasticGradientProblem,
AbstractEvaluationType,
AllocatingEvaluation,
-MutatingEvaluation
+InplaceEvaluation
#
# Options
export Options,
2 changes: 1 addition & 1 deletion src/functions/proximal_maps.jl
@@ -349,7 +349,7 @@ function prox_TV2!(
∂F!,
y;
stopping_criterion=stopping_criterion,
-evaluation=MutatingEvaluation(),
+evaluation=InplaceEvaluation(),
kwargs...,
)
return y
6 changes: 3 additions & 3 deletions src/plans/alm_plan.jl
@@ -39,7 +39,7 @@ in the keyword arguments.
[`augmented_Lagrangian_method`](@ref)
"""
mutable struct AugmentedLagrangianMethodOptions{
-P,Pr<:Problem,Op<:Options,TStopping<:StoppingCriterion
+P,Pr<:AbstractManoptProblem,Op<:Options,TStopping<:StoppingCriterion
} <: Options
x::P
sub_problem::Pr
@@ -78,7 +78,7 @@ mutable struct AugmentedLagrangianMethodOptions{
stopping_criterion::StoppingCriterion=StopAfterIteration(300) | (
StopWhenSmallerOrEqual(:ϵ, ϵ_min) & StopWhenChangeLess(1e-10)
),
-) where {P,Pr<:Problem,Op<:Options}
+) where {P,Pr<:AbstractManoptProblem,Op<:Options}
o = new{P,Pr,Op,typeof(stopping_criterion)}()
o.x = x0
o.sub_problem = sub_problem
@@ -209,7 +209,7 @@ end
# mutating vector -> we can omit a few of the ineq gradients and allocations.
function (
LG::AugmentedLagrangianGrad{
-<:ConstrainedProblem{<:MutatingEvaluation,<:VectorConstraint}
+<:ConstrainedProblem{<:InplaceEvaluation,<:VectorConstraint}
}
)(
M::AbstractManifold, X, p
22 changes: 8 additions & 14 deletions src/plans/alternating_gradient_plan.jl
@@ -1,5 +1,5 @@
@doc raw"""
-AlternatingGradientProblem <: Problem
+AlternatingGradientProblem <:AbstractManoptProblem
An alternating gradient problem consists of
* a `ProductManifold M` ``=\mathcal M = \mathcal M_1 × ⋯ × M_n``
@@ -91,30 +91,26 @@ function get_gradient!(
return X
end
function get_gradient(
-p::AlternatingGradientProblem{MutatingEvaluation,<:AbstractManifold,TC,<:Function}, x
+p::AlternatingGradientProblem{InplaceEvaluation,<:AbstractManifold,TC,<:Function}, x
) where {TC}
Y = zero_vector(p.M, x)
return p.gradient!!(p.M, Y, x)
end
function get_gradient(
-p::AlternatingGradientProblem{
-MutatingEvaluation,<:AbstractManifold,TC,<:AbstractVector
-},
+p::AlternatingGradientProblem{InplaceEvaluation,<:AbstractManifold,TC,<:AbstractVector},
x,
) where {TC}
Y = zero_vector(p.M, x)
get_gradient!(p, Y, x)
return Y
end
function get_gradient!(
-p::AlternatingGradientProblem{MutatingEvaluation,<:AbstractManifold,TC,<:Function}, X, x
+p::AlternatingGradientProblem{InplaceEvaluation,<:AbstractManifold,TC,<:Function}, X, x
) where {TC}
return p.gradient!!(p.M, X, x)
end
function get_gradient!(
-p::AlternatingGradientProblem{
-MutatingEvaluation,<:AbstractManifold,TC,<:AbstractVector
-},
+p::AlternatingGradientProblem{InplaceEvaluation,<:AbstractManifold,TC,<:AbstractVector},
X,
x,
) where {TC}
@@ -167,14 +163,14 @@ function get_gradient!(
return X
end
function get_gradient(
-p::AlternatingGradientProblem{MutatingEvaluation,<:AbstractManifold,TC}, k, x
+p::AlternatingGradientProblem{InplaceEvaluation,<:AbstractManifold,TC}, k, x
) where {TC}
X = zero_vector(p.M[k], x[p.M, k])
get_gradient!(p, X, k, x)
return X
end
function get_gradient!(
-p::AlternatingGradientProblem{MutatingEvaluation,<:AbstractManifold,TC,<:Function},
+p::AlternatingGradientProblem{InplaceEvaluation,<:AbstractManifold,TC,<:Function},
X,
k,
x,
@@ -186,9 +182,7 @@ function get_gradient!(
return X
end
function get_gradient!(
-p::AlternatingGradientProblem{
-MutatingEvaluation,<:AbstractManifold,TC,<:AbstractVector
-},
+p::AlternatingGradientProblem{InplaceEvaluation,<:AbstractManifold,TC,<:AbstractVector},
X,
k,
x,