
A curve trait for general interoperation #80

Merged (22 commits) on Aug 3, 2024

Conversation

mweatherley
Contributor

@mweatherley mweatherley commented Apr 11, 2024

RENDERED

This RFC describes a trait API for general curves within the Bevy ecosystem, abstracting over their low-level implementations.

It has a partial implementation in this draft PR.

Integration with bevy_animation has a proof-of-concept prototype in this draft PR.

Integration with bevy_math's cubic splines has a proof-of-concept prototype in this draft PR.

@jnhyatt

jnhyatt commented Apr 12, 2024

The proposal looks great so far. Some thoughts:

  1. Since these curves are general purpose rather than specifically targeting animation, I see a lot of value in being able to express curves over any arbitrary range, e.g. -1.0..=1.0, f32::NEG_INFINITY..=0.0, 100.0..=200.0. Instead of storing a duration, what about a domain: RangeInclusive<f32>? (See the sketch just below this list.)
  2. I may have missed it in the RFC, but how are ranges sampled outside of bounds? For a range with duration 1.0, what happens when you sample -1 or 2? Is it defined by extrapolating using the derivative or clamped to the nearest value? This is something that comes up a lot in animation curves.
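A minimal sketch of what suggestion (1) could look like, using a hypothetical `Curve` trait; this is illustrative only and not the RFC's actual API:

```rust
use std::ops::RangeInclusive;

/// Hypothetical trait: a curve is a function from a parameter `t` to values of type `T`,
/// defined over an arbitrary domain rather than a duration starting at zero.
pub trait Curve<T> {
    /// The parameter values over which this curve is defined,
    /// e.g. `-1.0..=1.0` or `f32::NEG_INFINITY..=0.0`.
    fn domain(&self) -> RangeInclusive<f32>;

    /// Sample the curve at the parameter value `t`.
    fn sample(&self, t: f32) -> T;
}
```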

@NthTensor

Might have to have option/result samples. I think the general domain is worth considering.

@NthTensor

Also do we want this to be general enough to capture parametric surfaces? Because I think we can do this if T is not Ord (a Vec2 for example).

@mweatherley
Contributor Author

mweatherley commented Apr 12, 2024

The proposal looks great so far. Some thoughts:

1. Since these curves are general purpose rather than specifically targeting animation, I see a lot of value in being able to express curves over any arbitrary range, e.g. `-1.0..=1.0`, `f32::NEG_INFINITY..=0.0`, `100.0..=200.0`. Instead of storing a duration, what about a `domain: RangeInclusive<f32>`?

2. I may have missed it in the RFC, but how are ranges sampled outside of bounds? For a range with duration 1.0, what happens when you sample -1 or 2? Is it defined by extrapolating using the derivative or clamped to the nearest value? This is something that comes up a lot in animation curves.

Ah, I'm glad you brought this up. The basic form of Curve<T> itself is actually based on this animation draft RFC, so it has inherited some properties which might not be totally appropriate in a general setting. I think that your suggestion in (1) is quite reasonable. My main qualm is actually that Rust's Range types are not Copy, which is insanely annoying in situations like this where we really have no interest in actually using them as iterators (and this is a burden we pass on to the consumer).

As for (2) — good question. Once again, the animation RFC I linked suggested that sampling would always return a value, but that the way that out-of-bounds samples might be formed would be left implementation-specific. Perhaps @james7132 might chime in with some insight there. I think that it's tempting to believe that returning an error or None when the sample point is out of range is just enforcing invariants, but I wouldn't be surprised if there is a compelling rationale to the contrary. For curves defined algebraically, extrapolating outside of the given bounds also generally makes sense anyway (e.g. Hermite interpolation between two points), so I might be in favor of that kind of thing regardless.

@mweatherley
Contributor Author

Also do we want this to be general enough to capture parametric surfaces? Because I think we can do this if T is not Ord (a Vec2 for example).

I guess I'm a little confused by this, since there are no Ord constraints anywhere as far as I'm aware. I imagine that what you have in mind is encoding a two-parameter surface as a one-parameter family of curves; I would think that the main barrier there is the constraint that you need T: Curve<Point> where T itself is actually Interpolable in some reasonable fashion. But perhaps this is actually less challenging than I imagined when I started writing this response; e.g. a variant of FunctionCurve<T> (the thing holding a function as a wrapper to make a Curve<T>) could probably do the trick if you store a weight field (and use this to pass on the interpolation to the results of evaluation).

@jnhyatt

jnhyatt commented Apr 12, 2024

Rust's Range types are not Copy

I had no idea, that's pretty unfortunate.

What if sampling outside was done by means of a wrapper? For example a Curve that wraps another but has a domain over all reals? Sampling could look like: Clamp(my_curve).sample(...) or Hermite(my_curve).sample(...)
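A minimal sketch of the wrapper idea, building on the hypothetical `Curve` trait sketched earlier; `Clamp` here extends the domain to all reals by clamping the parameter before sampling the inner curve:

```rust
use std::ops::RangeInclusive;

/// Hypothetical wrapper: a curve defined on all of `f32` that samples the inner
/// curve at the nearest in-domain parameter value.
struct Clamp<C>(C);

impl<T, C: Curve<T>> Curve<T> for Clamp<C> {
    fn domain(&self) -> RangeInclusive<f32> {
        f32::NEG_INFINITY..=f32::INFINITY
    }

    fn sample(&self, t: f32) -> T {
        let domain = self.0.domain();
        // Clamp the parameter into the inner curve's domain, then sample.
        self.0.sample(t.clamp(*domain.start(), *domain.end()))
    }
}
```

A `Hermite` wrapper would be structurally the same, but would extrapolate using derivative information at the nearest endpoint instead of clamping.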

@mweatherley
Contributor Author

A random thought on the range business: we could implement our own Copy version of ranges (say, T), and then use impl Into<T> or impl TryInto<T> together with a corresponding implementation for certain Bevy ranges in order to still allow for the use of the ..= syntax.

What if sampling outside was done by means of a wrapper? For example a Curve that wraps another but has a domain over all reals? Sampling could look like: Clamp(my_curve).sample(...) or Hermite(my_curve).sample(...)

I think something like that could work, but it would still be a little annoying to have to unwrap its sampling output all the time despite knowing it would always succeed.

@jnhyatt

jnhyatt commented Apr 12, 2024

I think something like that could work, but it would still be a little annoying to have to unwrap its sampling output all the time despite knowing it would always succeed.

I was assuming sample would just return a T, not an Option/Result. If it won't always succeed, we could embed information for extrapolating in the result's Err variant instead, then maybe provide some convenience method for extrapolating:

```rust
fn clamp<T>(x: Result<T, OutOfBounds<T>>) -> T {
    match x {
        Ok(x) => x,
        Err(out_of_bounds) => todo!(), // `out_of_bounds` has all the info we need to extrapolate out-of-bounds sampling
    }
}

let curve = function_curve((0.0..=1.0), |x| x);
let out_of_bounds_sample = clamp(curve.sample(2.0)); // returns 2.0
```

@alice-i-cecile
Member

Rust's Range types are not Copy

FYI, this is slated to change over either the 2024 or 2027 edition boundary: https://github.com/pitaj/rfcs/blob/new-range/text/3550-new-range.md

@mweatherley
Contributor Author

mweatherley commented Apr 13, 2024

Rust's Range types are not Copy

FYI, this is slated to change over either the 2024 or 2027 edition boundary: https://github.com/pitaj/rfcs/blob/new-range/text/3550-new-range.md

That's good to know! Honestly, I tinkered around and found that RangeInclusive is kind of poor anyway (doesn't enforce any invariants, lacks virtually any useful methods other than contains...), so I'm leaning towards making our own Interval type with a TryFrom<RangeInclusive<f32>> implementation regardless.

(This wouldn't be very heavy, but would enforce non-emptiness and include methods like length, is_finite, and intersect, and it would be Copy.)
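A minimal sketch of what such an `Interval` type might look like; the names and exact invariants here are illustrative, not the final bevy_math API:

```rust
use std::ops::RangeInclusive;

/// A non-empty closed interval, possibly unbounded in either direction.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Interval {
    start: f32,
    end: f32,
}

#[derive(Debug)]
pub struct InvalidIntervalError;

impl Interval {
    /// Create a new interval, enforcing non-emptiness and rejecting NaN endpoints.
    pub fn new(start: f32, end: f32) -> Result<Self, InvalidIntervalError> {
        if start.is_nan() || end.is_nan() || start >= end {
            Err(InvalidIntervalError)
        } else {
            Ok(Self { start, end })
        }
    }

    pub fn start(self) -> f32 {
        self.start
    }

    pub fn end(self) -> f32 {
        self.end
    }

    pub fn length(self) -> f32 {
        self.end - self.start
    }

    pub fn is_finite(self) -> bool {
        self.length().is_finite()
    }

    pub fn contains(self, t: f32) -> bool {
        self.start <= t && t <= self.end
    }

    /// Clamp a parameter value into this interval.
    pub fn clamp(self, t: f32) -> f32 {
        t.clamp(self.start, self.end)
    }
}

impl TryFrom<RangeInclusive<f32>> for Interval {
    type Error = InvalidIntervalError;
    fn try_from(range: RangeInclusive<f32>) -> Result<Self, Self::Error> {
        Interval::new(*range.start(), *range.end())
    }
}
```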

@mweatherley
Contributor Author

mweatherley commented Apr 13, 2024

Okay, here again to explain a couple changes:

Firstly, I added Interval, which is a lightweight type representing a non-empty closed interval that is possibly infinite in either direction. This has three main benefits:

  • Its invariants are enforced at the type level, which allows many cases of input sanitization to be removed from the API.
  • It allows the provision of a large number of useful methods; these turned out to be more numerous than I expected while refactoring, so there are a number of those now.
  • It is Copy.

Secondly, with the addition of sample_checked and sample_clamped, I now have a belief about the suite of sample functions that I am willing to defend: While we might alter what is named what here, I think that this is what the API should actually look like. My reasoning is mainly as follows:

  • The existence of zero-overhead (i.e. fully unchecked) sampling in the trait definition is non-negotiable, since sampling is one of the main paths in applications and some may be performance-sensitive.
  • I think that sample_checked and sample_clamped cover almost all use-cases to handle sampling outside of the domain.

Outside of this, I don't really care whether sample_checked gets renamed to sample and sample becomes sample_unchecked or something.
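For concreteness, a sketch of the sampling suite being described, updating the earlier RangeInclusive-based sketch to use the `Interval` sketched above and assuming the unchecked method is the one implementors must provide; the exact names were still in flux at this point:

```rust
pub trait Curve<T> {
    /// The interval over which this curve is defined.
    fn domain(&self) -> Interval;

    /// Zero-overhead sampling; behavior outside `domain` is implementation-defined.
    fn sample(&self, t: f32) -> T;

    /// Sampling that refuses parameters outside the domain.
    fn sample_checked(&self, t: f32) -> Option<T> {
        self.domain().contains(t).then(|| self.sample(t))
    }

    /// Sampling that clamps the parameter into the domain first.
    fn sample_clamped(&self, t: f32) -> T {
        let t = self.domain().clamp(t);
        self.sample(t)
    }
}
```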

@NthTensor

Read through the updates. Looks really good.

The interval type makes sense, and I agree Range doesn't seem like quite the right fit. Among other things, allowing curves over arbitrary intervals could simplify thinking about segments of splines as curves in their own right. I think we may want to investigate functions for sticking/appending curves together based on their domains or splitting them apart.

I do worry that requiring an interval domain will cause problems if we want to work with derivatives. It would be nice to be able to represent the first and second derivatives as "functional curves" where they can be determined analytically (e.g. when working with splines), but most Bezier splines aren't generally C2 (and some aren't even C1). I suppose a tangent curve could be a Curve<Option<Vec3>>, but this seems a bit clunky and generally at odds with the direction of sample_unchecked. This is a problem for moving frames as well.

One possible solution might be to allow curves over arbitrary measurable sets (or otherwise generalize Interval) but I'm not sure what that would look like and I'd rather be able to work with derivatives without that sort of theoretical equipment.

Setting derivatives aside, I agree with your points about domain checking. My preference would be for sample_unchecked as the main implementation point, and then having the trait provide checked wrappers sample and sample_clamped. New users (who may be prone to bounds errors) will probably reach for sample first. Implementers will know not to write redundant bounds checking because of the function name.

By the way, I can think of two other traits that also provide interpolation: Mix, Animatable. You've suggested adding a blanket implementation for VectorSpace and we could do the same here, but I wonder if we might want to simply generalize Mix and then have all the others extend it. It seems like there is pretty much one correct way to interpolate any given type, so the only reason to have multiple versions in different traits is name ergonomics.


Also, please disregard my earlier comment about Ord. I tossed this off in a hurry and confused it with something else I was thinking about in connection with the RFC (I was playing with sample(t: D) -> R where D: Field, R: Interpolable).

@mweatherley
Contributor Author

The interval type makes sense, and I agree Range doesn't seem like quite the right fit. Among other things, allowing curves over arbitrary intervals could simplify thinking about segments of splines as curves in their own right. I think we may want to investigate functions for sticking/appending curves together based on their domains or splitting them apart.

Yeah, this could be a good idea; I am a bit wary of scope creep at this point, since this seems like something that can be adjudicated outside of the RFC proper, but I agree that it's definitely worth investigating and seriously considering things in this direction.

I do worry that requiring an interval domain will cause problems if we want to work with derivatives. It would be nice to be able to represent the first and second derivatives as "functional curves" where they can be determined analytically (e.g. when working with splines), but most Bezier splines aren't generally C2 (and some aren't even C1). I suppose a tangent curve could be a Curve<Option<Vec3>>, but this seems a bit clunky and generally at odds with the direction of sample_unchecked. This is a problem for moving frames as well.

My thoughts on this have yet to materialize into concrete implementation details, but as far as I can tell, we are mostly in the clear; the main thing I would like to do with our spline code in order to adapt it to this interface is to differentiate classes of curves a bit more at the level of types. All of the spline constructions that we currently support, for instance, are globally C1; in my ideal world, this would mean that they naturally produce something that looks like Curve<(Vec3, Vec3)>, packaging the points and derivatives together (or some struct does the same thing).

Then, for instance, for the B-spline construction, the produced type would actually be slightly different from the previous one; in addition to implementing the Curve trait for the C1 data, it would also implement Curve<C2Data> or whatever. (Both of these concrete return types could also provide access to the underlying Bézier segments as well, not through the Curve interface itself, and those would have as much information as you could wish for regardless.)

I am unsure on the finer points of what that type-level differentiation would look like (kind of next on my list to investigate), but I guess my angle here is this: most of the things you would want to do with spline output are only going to care about the Curve interface anyway and not the concrete return types; for instance, most things you would dream of doing with curve geometry really only care that you start with a Curve<C1Data> (positions and derivatives), and both of the concrete return types would get you there.

So, to be brief, my vision for actual implementation involves reifying the quality of spline constructions more strongly at the type level; I hope that makes sense.
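Illustrative only: one way of "packaging the points and derivatives together" as described above, using a hypothetical `C1Data` struct (not the RFC's API); `Vec3` is bevy_math's vector type.

```rust
use bevy_math::Vec3;

/// Hypothetical output type for a globally C1 curve: a position together with
/// its first derivative at the same parameter value.
#[derive(Clone, Copy)]
pub struct C1Data {
    pub position: Vec3,
    pub velocity: Vec3,
}
```

A globally C1 spline construction would then naturally produce an `impl Curve<C1Data>`, and a consumer that only needs positions could project it down with something like `spline.map(|data| data.position)`.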

@NthTensor

NthTensor commented Apr 14, 2024

I am a bit wary of scope creep at this point, since this seems like something that can be adjudicated outside of the RFC proper

Quite right. There's very little I would add to the proposal at this point, and I'm not pushing for any of this to be added to the RFC. But I do think it's worth noting future work items here as well as evaluating the sorts of things the RFC lets us build. Please let me know if you think I'm derailing the discussion, that's not my intention.

All of the spline constructions that we currently support, for instance, are globally C1.

Bezier splines are only C1 within curve segments, and may or may not have smooth transitions between segments depending on the position of the control points. I don't think we can or should assume all our curves will be C1.

Part of the problem is that "tangents" can mean like four different things depending on the underlying curve: "Functional curves" are generally going to be either C1 or piecewise C1, which is the difference between a T and an Option<T>. Sampled curves can have two sources of approximate tangents: Finite differences, or samples from a derivative.

As I see it:

  • We need to be able to specify continuity information as part of a parameter type bound,
  • Depending on the bound, the tangents/acceleration may or may not be guaranteed to exist,
  • And we need to be able to sample both position+tangent and position+tangent+acceleration as single functions for efficiency purposes.

Then, for instance, for the B-spline construction, the produced type would actually be slightly different from the previous one; in addition to implementing the Curve trait for the C1 data, it would also implement Curve<C2Data> or whatever.

The idea of multiple implementations of Curve is interesting, but I don't love the fully qualified syntax required to disambiguate between them.

```rust
fn foo<C>(curve: C)
where C: Curve<C1<Vec3>> + Curve<C2<Vec3>>
{
    let (pos, acc) = <C as Curve<C1<Vec3>>>::sample(&curve, t);
    let vel = <C as Curve<C2<Vec3>>>::sample(&curve, t);
}
```

Would something like the following work?

```rust
// These all have blanket implementations of Curve and other Cn/Pn traits
trait C2C1Curve<T> { ... } // C2 + C1: Curve<(T, T, T)>
trait P2C1Curve<T> { ... } // Piecewise C2 + C1: Curve<(T, T, Option<T>)>
trait P2P1Curve<T> { ... } // Piecewise C2 + Piecewise C1: Curve<(T, Option<T>, Option<T>)>
trait C1Curve<T> { ... } // C1: Curve<(T, T)>
trait P1Curve<T> { ... } // Piecewise C1: Curve<(T, Option<T>)>

fn foo(curve: impl P2C1Curve<Vec3>) {
    if let (pos, vel, Some(acc)) = curve.sample(t) {
        ...
    }
    let acc = curve.sample_acc(t);
    let pos = curve.sample_pos(t);
}
```

New curves which provide tangents/acceleration would implement the strongest trait they can. The types are a bit cumbersome but I think it would avoid the qualified syntax.

We don't have to spec anything out as part of the RFC, but I'd like a better idea of what this would look like to make sure we aren't locking ourselves out of anything in the future.

@mweatherley
Contributor Author

Please let me know if you think I'm derailing the discussion, that's not my intention.

You're quite fine. :)

Bezier splines are only C1 within curve segments, and may or may not have smooth transitions between segments depending on the position of the control points. I don't think we can or should assume all our curves will be C1.

It seems I actually misspoke, since the cubic NURBS construction is only C0 (in full generality). What I was trying to get at was really that only the CubicCurve type itself really says 'C0' for the Bézier constructions; I'm not saying we shouldn't support C0 data, by any means! What I'm really trying to say is that my view is that we should do some differentiation at the level of the curve constructors (the first thing that came to mind was genericity with respect to marker types of some kind, which it seems you have also thought about).

Part of the problem is that "tangents" can mean like four different things depending on the underlying curve: "Functional curves" are generally going to be either C1 or piecewise C1, which is the difference between a T and an Option<T>.

Yeah, I agree here. My main thing is that I would prefer for, e.g., Option<Vec3> not to appear as a curve output for a "maybe tangent" directly — rather, I would prefer a system where we give access to as much output as possible that we know is valid globally, while also providing type-level tools (rather than through Curve per se) for them to access data that might not be 'valid' (e.g. continuous across the curve) because of type constraints alone.

For instance, with how things are currently set up, something like CubicCurve might only have a Curve<Vec3> implementation on its own, but we could provide access to

  • the curve segments, which would be able to give the derivative data individually, and/or
  • a "promotion" procedure, wherein we give the derivative data globally, with the user-level understanding that it may not be globally continuous.

Sampled curves can have two sources of approximate tangents: Finite differences, or samples from a derivative.

This is true, but I suppose I see "finite differences" as a "promotion" procedure, something like:

fn numerical_derivative(position_curve: SampleCurve<Position>) -> SampleCurve<PositionAndVelocity>

hence impl Curve<Position> -> impl Curve<PositionAndVelocity>, if that makes sense; so they are at least the same kind of Curve.

When you start with a Curve<PositionAndVelocity> to begin with, you can map down to just a Curve<Position> and then resample and call numerical_derivative, and you should get something similar to what you would get if you just called resample to begin with (at least if numerical_derivative is any good), so this all sort of makes sense to me.
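A rough sketch of finite differences as such a "promotion" from positions to positions-plus-velocities; the `Curve` trait here is the hypothetical version sketched earlier, `function_curve` is assumed to be a constructor taking an `Interval` and a closure in the spirit of the RFC's `function_curve`, and `Vec3` is bevy_math's vector type:

```rust
use bevy_math::Vec3;

/// Promote a position curve to a (position, velocity) curve using central differences.
/// A real implementation would also handle the domain endpoints with one-sided differences.
fn numerical_derivative(curve: impl Curve<Vec3>, epsilon: f32) -> impl Curve<(Vec3, Vec3)> {
    let domain = curve.domain();
    function_curve(domain, move |t| {
        let position = curve.sample(t);
        // Central difference approximation of the derivative at `t`.
        let velocity = (curve.sample(t + epsilon) - curve.sample(t - epsilon)) / (2.0 * epsilon);
        (position, velocity)
    })
}
```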

As I see it:

* We need to be able to specify continuity information as part of a parameter type bound,

* Depending on the bound, the tangents/acceleration may or may not be guaranteed to exist,

* And we need to be able to sample both position+tangent and position+tangent+acceleration as single functions for efficiency purposes.

Agreed on all counts.

The idea of multiple implementations of Curve is interesting, but I don't love the fully qualified syntax required to disambiguate between them.

Having sat with it a little, I don't like it much either; I think this is a situation where explicit methods producing Curve outputs for certain kinds of data would be better.

Would something like the following work?

```rust
// These all have blanket implementations of Curve and other Cn/Pn traits
trait C2C1Curve<T> { ... } // C2 + C1: Curve<(T, T, T)>
trait P2C1Curve<T> { ... } // Piecewise C2 + C1: Curve<(T, T, Option<T>)>
trait P2P1Curve<T> { ... } // Piecewise C2 + Piecewise C1: Curve<(T, Option<T>, Option<T>)>
trait C1Curve<T> { ... } // C1: Curve<(T, T)>
trait P1Curve<T> { ... } // Piecewise C1: Curve<(T, Option<T>)>

fn foo<C>(curve: C)
where C: P2C1Curve<Vec3>
{
    if let (pos, vel, Some(acc)) = curve.sample(t) {
        ...
    }
    let acc = curve.sample_acc(t);
    let pos = curve.sample_pos(t);
}
```

New curves which provide tangents/acceleration would implement the strongest trait they can. The types are a bit cumbersome but I think it would avoid the qualified syntax.

We don't have to spec anything out as part of the RFC, but I'd like a better idea of what this would look like to make sure we aren't locking ourselves out of anything in the future.

I think this is reasonable, although I am a little wary of actually putting Option types in curve return values, because I'm skeptical that we can make interpolation work well for them. (I wrote more on this in the earlier portion of this reply.)

I think what I'd like to do now is to sit down when I have time and actually prototype this (at least to the point where we can convince ourselves we won't be stepping on our own toes); I think we are mostly on the same page, though.

@NthTensor

NthTensor commented Apr 14, 2024

I think this is reasonable, although I am a little wary of actually putting Option types in curve return values, because I'm skeptical that we can make interpolation work well for them.

It seems to me like that is an issue with interpolation. We shouldn't try to interpolate across a discontinuity, and if we want to represent derivatives directly using curves we can only assume they are piecewise continuous. Maybe the concrete curve representations need to know about their discontinuities and treat them as boundaries for interpolation.

Does that make sense? I'll try to expand on this more when I have time.

@mweatherley
Contributor Author

mweatherley commented Apr 15, 2024

It seems to me like that is an issue with interpolation. We shouldn't try to interpolate across a discontinuity, and if we want to represent derivatives directly using curves we can only assume they are piecewise continuous. Maybe the concrete curve representations need to know about their discontinuities and treat them as boundaries for interpolation.

Does that make sense? I'll try to expand on this more when I have time.

It does make sense, and I am curious where this line of inquiry leads — especially in the matter of how such a thing would be distinct from a vector of curves.

@alice-i-cecile (Member) left a comment

I really like the API, and see the need for this. I've left some further comments on areas that I feel could be improved.

While I fully agree with the need for precise mathematical language when talking about this domain, I think we can and should do a better job making this approachable, by sprinkling in more tangible examples, explaining concepts in simple language first and so on :)

@alice-i-cecile (Member) left a comment

Remarkably clear and thought through. You've sold me on the value of this API, and the abstractions chosen seem natural and powerful. We will need so many examples to make this tangible and useful to non-mathematicians, but that's fine.

Before I approve, there are a couple of straightforward edits to be made. More importantly, I want to make sure that we record why f32 is used as the base numerical type for t, rather than making this generic. I agree with that decision, but it's an important design consideration that should be documented as it has come up repeatedly in other contexts.

@mweatherley
Contributor Author

This has now been substantially rewritten. The goals of this rewrite were as follows:

  1. Confine type-inferred interpolation to a more mathematical core for which it is well-scoped and well-behaved. The imposition of type-inferred interpolation was too opinionated, so the API has been updated to better support explicitly-provided interpolation instead. If bevy_animation wants to use general type-inferred interpolation in its internals, its interoperation with Curve can be built on top of SampleCurve and UnevenSampleCurve (which now use explicit interpolators) rather than using some intrinsic notion of interpolation owned by bevy_math.
  2. Absorb some changes that were already being made in early review of the code as part of the working group. Part of this was interpolation-related (it had already been confined to resampling methods rather than the trait itself), but there were some other small changes too; e.g. I no longer think it's a good idea to default to non-lazy map behavior for sample-interpolated curves — the non-laziness should just be explicit instead.

I think that this ends up pushing the Curve API to be more flexible and less closely married to the original machinations of bevy_animation (ironically, spinning off interpolation from Animatable was kind of what spurred this in the first place), but I think the changes are for the better.
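As a concrete illustration of the explicit-interpolation direction, here is a stripped-down sketch of what a `SampleCurve` with a caller-provided interpolator could look like; the field layout, constructor, and error handling are assumptions (the real constructor validates its inputs), and it assumes at least two evenly-spaced samples over a finite domain:

```rust
pub struct SampleCurve<T, I> {
    domain: Interval,
    samples: Vec<T>,
    interpolation: I,
}

impl<T, I> SampleCurve<T, I>
where
    I: Fn(&T, &T, f32) -> T,
{
    pub fn new(domain: Interval, samples: Vec<T>, interpolation: I) -> Self {
        Self { domain, samples, interpolation }
    }

    /// Sample by locating the two stored values bracketing `t` and blending them
    /// with the explicitly provided interpolation function.
    pub fn sample(&self, t: f32) -> T {
        let segments = self.samples.len() - 1;
        // Map `t` into "segment space": which pair of samples, and how far between them.
        let s = (t - self.domain.start()) / self.domain.length() * segments as f32;
        let index = (s.floor() as usize).min(segments - 1);
        (self.interpolation)(&self.samples[index], &self.samples[index + 1], s - index as f32)
    }
}
```

Usage would then leave the blending choice entirely to the caller, e.g. `SampleCurve::new(domain, positions, |a, b, t| a.lerp(*b, t))`.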

github-merge-queue bot pushed a commit to bevyengine/bevy that referenced this pull request Jun 10, 2024
# Objective

Partially address #13408 

Rework of #13613

Unify the very nice forms of interpolation specifically present in
`bevy_math` under a shared trait upon which further behavior can be
based.

The ideas in this PR were prompted by [Lerp smoothing is broken by Freya
Holmer](https://www.youtube.com/watch?v=LSNQuFEDOyQ).

## Solution

There is a new trait `StableInterpolate` in `bevy_math::common_traits`
which enshrines a quite-specific notion of interpolation with a lot of
guarantees:
```rust
/// A type with a natural interpolation that provides strong subdivision guarantees.
///
/// Although the only required method is `interpolate_stable`, many things are expected of it:
///
/// 1. The notion of interpolation should follow naturally from the semantics of the type, so
///    that inferring the interpolation mode from the type alone is sensible.
///
/// 2. The interpolation recovers something equivalent to the starting value at `t = 0.0`
///    and likewise with the ending value at `t = 1.0`.
///
/// 3. Importantly, the interpolation must be *subdivision-stable*: for any interpolation curve
///    between two (unnamed) values and any parameter-value pairs `(t0, p)` and `(t1, q)`, the
///    interpolation curve between `p` and `q` must be the *linear* reparametrization of the original
///    interpolation curve restricted to the interval `[t0, t1]`.
///
/// The last of these conditions is very strong and indicates something like constant speed. It
/// is called "subdivision stability" because it guarantees that breaking up the interpolation
/// into segments and joining them back together has no effect.
///
/// Here is a diagram depicting it:
/// ```text
/// top curve = u.interpolate_stable(v, t)
///
///              t0 => p   t1 => q    
///   |-------------|---------|-------------|
/// 0 => u         /           \          1 => v
///              /               \
///            /                   \
///          /        linear         \
///        /     reparametrization     \
///      /   t = t0 * (1 - s) + t1 * s   \
///    /                                   \
///   |-------------------------------------|
/// 0 => p                                1 => q
///
/// bottom curve = p.interpolate_stable(q, s)
/// ```
///
/// Note that some common forms of interpolation do not satisfy this criterion. For example,
/// [`Quat::lerp`] and [`Rot2::nlerp`] are not subdivision-stable.
///
/// Furthermore, this is not to be used as a general trait for abstract interpolation.
/// Consumers rely on the strong guarantees in order for behavior based on this trait to be
/// well-behaved.
///
/// [`Quat::lerp`]: crate::Quat::lerp
/// [`Rot2::nlerp`]: crate::Rot2::nlerp
pub trait StableInterpolate: Clone {
    /// Interpolate between this value and the `other` given value using the parameter `t`.
    /// Note that the parameter `t` is not necessarily clamped to lie between `0` and `1`.
    /// When `t = 0.0`, `self` is recovered, while `other` is recovered at `t = 1.0`,
    /// with intermediate values lying between the two.
    fn interpolate_stable(&self, other: &Self, t: f32) -> Self;
}
```

This trait has a blanket implementation over `NormedVectorSpace`, where
`lerp` is used, along with implementations for `Rot2`, `Quat`, and the
direction types using variants of `slerp`. Other areas may choose to
implement this trait in order to hook into its functionality, but the
stringent requirements must actually be met.

This trait bears no direct relationship with `bevy_animation`'s
`Animatable` trait, although they may choose to use `interpolate_stable`
in their trait implementations if they wish, as both traits involve
type-inferred interpolations of the same kind. `StableInterpolate` is
not a supertrait of `Animatable` for a couple reasons:
1. Notions of interpolation in animation are generally going to be much
more general than those allowed under these constraints.
2. Laying out these generalized interpolation notions is the domain of
`bevy_animation` rather than of `bevy_math`. (Consider also that
inferring interpolation from types is not universally desirable.)

Similarly, this is not implemented on `bevy_color`'s color types,
although their current mixing behavior does meet the conditions of the
trait.

As an aside, the subdivision-stability condition is of interest
specifically for the [Curve
RFC](bevyengine/rfcs#80), where it also ensures
a kind of stability for subsampling.

Importantly, this trait ensures that the "smooth following" behavior
defined in this PR behaves predictably:
```rust
    /// Smoothly nudge this value towards the `target` at a given decay rate. The `decay_rate`
    /// parameter controls how fast the distance between `self` and `target` decays relative to
    /// the units of `delta`; the intended usage is for `decay_rate` to generally remain fixed,
    /// while `delta` is something like `delta_time` from an updating system. This produces a
    /// smooth following of the target that is independent of framerate.
    ///
    /// More specifically, when this is called repeatedly, the result is that the distance between
    /// `self` and a fixed `target` attenuates exponentially, with the rate of this exponential
    /// decay given by `decay_rate`.
    ///
    /// For example, at `decay_rate = 0.0`, this has no effect.
    /// At `decay_rate = f32::INFINITY`, `self` immediately snaps to `target`.
    /// In general, higher rates mean that `self` moves more quickly towards `target`.
    ///
    /// # Example
    /// ```
    /// # use bevy_math::{Vec3, StableInterpolate};
    /// # let delta_time: f32 = 1.0 / 60.0;
    /// let mut object_position: Vec3 = Vec3::ZERO;
    /// let target_position: Vec3 = Vec3::new(2.0, 3.0, 5.0);
    /// // Decay rate of ln(10) => after 1 second, remaining distance is 1/10th
    /// let decay_rate = f32::ln(10.0);
    /// // Calling this repeatedly will move `object_position` towards `target_position`:
    /// object_position.smooth_nudge(&target_position, decay_rate, delta_time);
    /// ```
    fn smooth_nudge(&mut self, target: &Self, decay_rate: f32, delta: f32) {
        self.interpolate_stable_assign(target, 1.0 - f32::exp(-decay_rate * delta));
    }
```

As the documentation indicates, the intention is for this to be called
in game update systems, and `delta` would be something like
`Time::delta_seconds` in Bevy, allowing positions, orientations, and so
on to smoothly follow a target. A new example, `smooth_follow`,
demonstrates a basic implementation of this, with a sphere smoothly
following a sharply moving target:


https://github.com/bevyengine/bevy/assets/2975848/7124b28b-6361-47e3-acf7-d1578ebd0347


## Testing

Tested by running the example with various parameters.
@mweatherley
Contributor Author

Okay. After refactoring the draft library, I ended up with some shared interpolation interfaces which seem pretty useful for implementors. (I used them in the bevy_animation rewrite, for example.) I ended up adding a section to the RFC that describes them as a result; I think having something like that is pretty important to lower the implementation barrier if trait-dispatched interpolation is off the table.

I would say that now I'm reasonably happy with this in terms of completeness (with the new approach in mind), at least until someone changes my mind. :)

@Meyermagic

This RFC seems mainly focused on animation, but I'd like to drop a link to https://www.forrestthewoods.com/blog/tech_of_planetary_annihilation_chrono_cam/ , a blog post which discusses using a time series Curve type as the primary means of communicating state from the server to the client in a multiplayer game. In this approach the client state is a big collection of Curves which get updated according to new data from the server (which runs the actual simulation and tracks all the data persistently) and client-side predictions which can override stale server data.

Perhaps a similar approach could work well with Bevy in the future, it seems very ECS-like (though I ended up using a different approach in my project).


/// Create an [`Interval`] by intersecting this interval with another. Returns an error if the
/// intersection would be empty (hence an invalid interval).
pub fn intersect(self, other: Interval) -> Result<Interval, InvalidIntervalError> { //... }
Member

Follow-up: I think we want a union method too, but that will need a new error arm for non-contiguous intervals.

Contributor Author

Yeah, that sounds useful.
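A sketch of the suggested `union`, building on the `Interval` sketch from earlier and using a dedicated error for the non-contiguous case (the real API might instead add an arm to an existing error type):

```rust
#[derive(Debug)]
pub enum IntervalUnionError {
    /// The two intervals neither overlap nor touch, so their union is not an interval.
    NonContiguous,
}

impl Interval {
    pub fn union(self, other: Interval) -> Result<Interval, IntervalUnionError> {
        if self.end < other.start || other.end < self.start {
            Err(IntervalUnionError::NonContiguous)
        } else {
            // Non-empty by construction, since both inputs are non-empty intervals.
            Ok(Interval {
                start: self.start.min(other.start),
                end: self.end.max(other.end),
            })
        }
    }
}
```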

The `Curve::sample` method is not intrinsically constrained by the curve's `domain` interval. Instead,
implementors of `Curve<T>` are free to determine how samples drawn from outside the `domain` will behave.
However, variants of `sample` (as well as other important methods) use the `domain` explicitly:
Member

Suggestion: I think this section / API will be clearer if we explicitly provide an extrapolate method.


I was inclined to agree and have the invariant extrapolate(x) == Some(sample(x)), but this would be bad for readability of code, because then users will use extrapolate to sample the curve. At that point, you're better off having sample return Option, but that is cumbersome for users. Furthermore, a sample(x) that panics outside its domain will cause issues with floating point rounding errors at the boundary of the domain.

That begs the question: are there use cases for curves where the curve is undefined for some value of x (outside its domain)? If not, we can enforce that curves are defined for all finite values of f32. Curves for which this requirement is a problem can perhaps use clamping to satisfy this constraint.


### Other ways of making curves

The curve-creation functions `constant_curve` and `function_curve` that we have been using in examples are in fact
Member

Suggestion: I think that linear_curve is another valuable convenience API of this sort.

Contributor Author

Definitely a good idea. I think that "common sense" constructors for doing things like joining a couple points by a curve (and variations on that) are on the short list as far as library functions go.
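A sketch of the suggested `linear_curve` convenience constructor, joining two values linearly over a domain; `Interval`, `function_curve`, and the `VectorSpace` bound are borrowed from earlier sketches and discussion, so the exact signature is an assumption:

```rust
pub fn linear_curve<T: VectorSpace>(domain: Interval, start: T, end: T) -> impl Curve<T> {
    function_curve(domain, move |t| {
        // Remap `t` from `domain` onto [0, 1], then interpolate linearly.
        let s = (t - domain.start()) / domain.length();
        start * (1.0 - s) + end * s
    })
}
```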

@alice-i-cecile (Member) left a comment

Excellent work: this is exactly what I hoped to see from both the RFC process and working groups. I'm particularly grateful for the careful thinking around object-safety: this is a key requirement, and very easy to accidentally overlook in a hard-to-fix way.

I've left a few small suggestions, some functional, and some merely copy-editing. None of them are blocking, and I'm personally confident that this is ready for implementation. I'd like @mockersf's sign-off on this before marking this as fully blessed, but I don't think there's serious risk at this stage.

When developing and shipping this feature, the main challenge is going to be teaching this feature to less mathematical users. I think it's valuable and correct to use the standard, rigorous terms, but this is something that e.g. nalgebra has struggled with in practice. To help alleviate that, I'd like to see a glossary of terms that we can aggressively link to for things like "domain", "preimage", "interpolation" and so on: anything beyond Grade 8 math. Pairing that with extensive practical examples / doc tests for why these APIs might be useful like you've done in this RFC gives me confidence that we can make these powerful tools broadly accessible.

@bcmpinc left a comment

This is my first time commenting on an RFC. I hope my comments are useful and not too harsh. Feel free to ignore any of my comments that you disagree with.

pub fn new(start: f32, end: f32) -> Result<Self, InvalidIntervalError> { //... }

/// Get the start of this interval.
pub fn start(self) -> f32 { //... }

Rename start to begin. It's better to not mix the logical pairs begin/end (which are positions) and start/stop (which are actions). For reference: CppCon 2017: Kate Gregory “Naming is Hard: Let's Do Better”.

Member

Maybe "beginning"? We're a bit more verbose usually.

Contributor Author

I named them start and end because that's what Range does. I don't really feel strongly about it though.


In C++ they use the begin/end pair. In Rust they unfortunately chose start/end. I had not realized that when I wrote this. In that case staying consistent with Rust is probably better. Unless you want to have an animation type that has both start/stop methods to control whether the animation is active, and begin/end methods to access the begin and end time of the animation.

P.S. Even in the documentation for Range they use begin/end at some point: https://doc.rust-lang.org/core/ops/struct.Range.html#impl-SliceIndex%3Cstr%3E-for-Range%3Cusize%3E.


/// Get the linear map which maps this curve onto the `other` one. Returns an error if either
/// interval is infinite.
pub fn linear_map_to(self, other: Self) -> Result<impl Fn(f32) -> f32, InfiniteIntervalError> { //... }

I understand this is used to implement Curve::reparametrize_linear. Is there a reason this should be exposed in the public API?

The comment here refers to the interval as a "curve", that should be "interval".

For the documentation: I'd write something like, "Returns a linear function f such that f(self.begin) == other.begin and f(self.end) == other.end." This makes clear that f takes values within the domain of self.

I would suggest to have its return type be Curve<f32> rather than Fn(f32) such that its type can carry continuity information that can be used to determine the resulting continuity when it is chained with another curve.

Contributor Author

I understand this is used to implement Curve::reparametrize_linear. Is there a reason this should be exposed in the public API?

Probably not!

The comment here refers to the interval as a "curve", that should be "interval".

👍

For the documentation: I'd write something like, "Returns a linear function f such that f(self.begin) == other.begin and f(self.end) == other.end." This makes clear that f takes values within the domain of self.

I would suggest to have its return type be Curve<f32> rather than Fn(f32) such that its type can carry continuity information that can be used to determine the resulting continuity when it is chained with another curve.

That's probably a good idea!


The `Interval` type also implements `TryFrom<RangeInclusive>`, which may be desirable if you want to use
the `start..=end` syntax. One of the primary benefits of `Interval` (in addition to these methods) is
that it is `Copy`, so it is easy to take intervals and throw them around.

Add here that Interval enforces that the interval is non-empty. I would like to mention here that Range is iterable, while Interval should not be.

Contributor Author

Sure. Hopefully one day Range won't be iterable either (but IntoIterator), and paradise will truly be ours.

pub fn spaced_points(
    self,
    points: usize,
) -> Result<impl Iterator<Item = f32>, SpacedPointsError> {

Imho corner cases should be avoided if there are sensible defaults. Corner case handling is very prone to introducing bugs.
For points = 0, this can return an empty iterator; and
for points = 1 this can return an iterator with only the start value.

For example: this is useful for playing an animation for 1 frame (Godot uses this for their RESET animation track).

Contributor Author

That probably makes sense for this, yeah. I was just modeling this after the resample method, which is probably also going to change.


Often, one will want to define a curve using some kind of interpolation over discrete data or, conversely,
extract lists of samples that approximate a curve, or convert a curve to one which has been discretized.

For the first of these, there is the type `SampleCurve<T, I>`, whose constructor looks like this:

I'm not sure about the name of SampleCurve<T,I>. It does not really convey what it does. It could also be called a PiecewiseCurve<T,I> or InterpolatedCurve<T,I>. But those names come with their own issues.

Considering that I interpolates between two samples and without information about derivatives, the interpolation options are very limited (e.g. step, nearest, linear and perhaps some smoothstep). Because of that, I'd suggest to have the name of SampleCurve<T,I> be an implementation detail and only expose specific curve types e.g. PiecewiseConstantCurve for I=step, PiecewiseLinearCurve for I=lerp.

A constant curve is still a perfectly valid curve for when only 1 sample is provided. If this can be implemented with negligible overhead, this is preferred over introducing a corner case where a sensible default is available.

Scope creep / things that we may want to support in a future version:

It makes sense to have a generic class for piecewise/interpolated curves as there's a lot of common code between these types of curves. There's two ways in which SampleCurve can be extended to support things like BezierCurve.

  1. Allow the interpolant to access more than 2 samples. In this case we have to define how end points are dealt with.
  2. Allow samples to include derivative information. (i.e. samples: impl Into<Vec<Sample<T>>>, where Sample indicates what derivative information is available. Then implement impl<T> From<T> for Sample<T> for ease of use; a sketch of this follows below.)

Both are useful when resampling curves, while maintaining continuity properties.
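A sketch of the `Sample<T>` idea from point (2): samples may optionally carry derivative information, with a `From<T>` impl so plain values keep working unchanged. Names are illustrative only.

```rust
pub enum Sample<T> {
    /// A plain value with no derivative information.
    Value(T),
    /// A value together with its first derivative at that point.
    WithDerivative(T, T),
}

impl<T> From<T> for Sample<T> {
    fn from(value: T) -> Self {
        Sample::Value(value)
    }
}
```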

Contributor Author

I'm not sure about the name of SampleCurve<T,I>. It does not really convey what it does. It could also be called a PiecewiseCurve<T,I> or InterpolatedCurve<T,I>. But those names come with their own issues.

Yeah, at one point I called this "SampleInterpolatedCurve", which is what I kind of want to convey, but it's also too long :P

Considering that I interpolates between two samples and without information about derivatives, the interpolation options are very limited (e.g. step, nearest, linear and perhaps some smoothstep). Because of that, I'd suggest to have the name of SampleCurve<T,I> be an implementation detail and only expose specific curve types e.g. PiecewiseConstantCurve for I=step, PiecewiseLinearCurve for I=lerp.

Well, in principle these can use any information from the sample values at each point, which could make them considerably more complex. On the other hand, the purpose of making this just take an Fn is explicitly to move the choice of interpolation modes "upward" to consumers in general (for example, to make interpolation guided by an enum or similar).

A constant curve is still a perfectly valid curve for when only 1 sample is provided. If this can be implemented with negligible overhead, this is preferred over introducing a corner case where a sensible default is available.

My thoughts on this right now are that we should probably use the number of segments instead of samples, which seems more intuitive to me. That also has the side-effect of leaving only the error case of zero segments, which seems kind of unavoidable regardless (if only it were ergonomic for users to provide a NonZeroUsize....).

Scope creep / things that we may want to support in a future version:

It makes sense to have a generic class for piecewise/interpolated curves as there's a lot of common code between these types of curves.

Yeah; perhaps this could be a part of the story for the analogue of FromIterator/collect at some point.

  1. Allow samples to include derivative information. (i.e. samples: impl Into<Vec<Sample<T>>>, where Sample indicates what derivative information is available. Then implement impl<T> From<T> for Sample<T> for ease of use.)

Yeah, there is some ongoing work on including derivative information as well; I think some general form of Hermite approximation for differentiable curves would be quite useful.

```rust
/// Extract an iterator over evenly-spaced samples from this curve. If `samples` is less than 2
/// or if this curve has unbounded domain, then an error is returned instead.
fn samples(&self, samples: usize) -> Result<impl Iterator<Item = T>, ResamplingError> { //... }
```

This should probably take an Into<Iterator>.

Minor scope creep:

For T : NormedVectorSpace it is useful to have a convenience function fn distance(points: impl Iterator<T>) that takes and returns an iterator that computes the cumulative distance. This can be used to remap a Curve into an approximately constant velocity Curve, which is a common use case for curves.
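A sketch of the suggested cumulative-distance helper, assuming `NormedVectorSpace` exposes a `distance` method (as bevy_math's does); the name and exact signature are illustrative:

```rust
fn cumulative_distance<T: NormedVectorSpace>(
    points: impl IntoIterator<Item = T>,
) -> impl Iterator<Item = f32> {
    let mut previous: Option<T> = None;
    let mut total = 0.0;
    points.into_iter().map(move |point| {
        if let Some(prev) = previous {
            // Accumulate the distance from the preceding point.
            total += prev.distance(point);
        }
        previous = Some(point);
        total
    })
}
```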

Contributor Author

This should probably take an Into<Iterator>.

I wrote in the document about why this isn't the case: basically, you can just .map over your iterator if that's what you want to do, and the behavior of requested samples that aren't within the curve domain seems like it might be sensitive to the problem at hand. The benefit of having this method as-is is that it's just really simple: for example, if I want to take my curve and get a vector of points to render into a linestrip with gizmos, I can just do something like this without giving it much thought:

gizmos.linestrip(my_curve.samples(100));

As for the distance function: this is a short-term goal, but I think it would be best addressed when we start trying to do geometry with curves :)

/// A total of `samples` samples are used, although at least two samples are required to produce
/// well-formed output. If fewer than two samples are provided, or if this curve has an unbounded
/// domain, then a [`ResamplingError`] is returned.
fn resample<I>(

This should be a static member function of the destination curve type. I.e. defined as fn resample(impl Curve<T>, samples:usize) -> Self. For example, a user might want to resample the curve as a bezier curve. This API forces a specific output type.

Sidenote: I thought that maybe samples can be a Vec in case the sampling should not happen uniformly over t, However, that functionality is already available through remapping.

Contributor Author

Yeah, in the interest of "minimality" this is as it is presently, but I think there is a reasonable chance that we end up trying to mirror the situation of FromIterator/collect with resampled curve types in the future; I just wanted for things to remain relatively simple for the time being.

```rust
{ //... }
```

This story has a parallel in `UnevenSampleCurve`, which behaves more like keyframes in that the samples need not be evenly spaced:

With a curve that maps timestamps to equidistant points (e.g. the SampleCurve.inverted() I suggested earlier) and curve chaining, this can replace UnevenSampleCurve entirely.

/// with outputs of the same type. The domain of the other curve is translated so that its start
/// coincides with where this curve ends. A [`CompositionError`] is returned if this curve's domain
/// doesn't have a finite right endpoint or if `other`'s domain doesn't have a finite left endpoint.
fn compose<C>(self, other: C) -> Result<impl Curve<T>, CompositionError> { //... }

I feel like this function needs a bit more thought. What is the behavior if self.end() != other.begin()?

My suggestion: ignore the domains and have the user specify where the breakpoint is.

What is the resulting type of composing curves that themselves are composed of curves?

My suggestion: have that be PiecewiseCurve<impl Curve<T>>. This allows composing a variable number of curves at runtime.

I want to point out that there is some similarity between piecewise composed curves and SampleCurve<T,I>. Especially as type I is very similar to a curve. However implementing SampleCurve as a PiecewiseCurve would cause storage overhead.

Contributor Author

I feel like this function needs a bit more thought. What is the behavior if self.end() != other.begin()?

Right now, it just translates the second curve so that its start coincides with the end of the first. If you mean in terms of values — there are no guarantees: a Curve isn't necessarily continuous, and in general this operator will combine two continuous curves into a discontinuous one.

What is the resulting type of composing curves that themselves are composed of curves?

My suggestion: have that be PiecewiseCurve<impl Curve<T>>. This allows composing a variable number of curves at runtime.

Yeah, there is probably more to be done here; the goal of the minimal version of end-to-end composition in the API is to work with arbitrary curves, so the output type contains those of the input curves as generic parameters. The story for longer sequences of curves is a little thornier, since we don't really have HLists. One idea is to do what you're saying with piecewise constructions, but this only works if every curve in the sequence has exactly the same type. In its most general form, there probably isn't any way to get around using Box<dyn Curve<T>> as the curve type in such a thing presently.
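A sketch of the "piecewise over boxed curves" idea mentioned above, assuming the hypothetical Interval-based `Curve` trait sketched earlier (which is object-safe); this is illustrative, not a proposed API:

```rust
pub struct ChainCurve<T> {
    /// Segments whose domains are assumed to already tile the overall domain,
    /// listed in increasing parameter order. Must be non-empty.
    segments: Vec<Box<dyn Curve<T>>>,
}

impl<T> ChainCurve<T> {
    pub fn sample(&self, t: f32) -> T {
        // Dispatch to the last segment whose domain starts at or before `t`,
        // falling back to the first segment for parameters before the chain begins.
        let segment = self
            .segments
            .iter()
            .rev()
            .find(|segment| segment.domain().start() <= t)
            .unwrap_or(&self.segments[0]);
        segment.sample(t)
    }
}
```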


For the generic case, dynamic dispatch is practically unavoidable. Even a Composed<Curve, Curve> has a function call stuck behind a branch that the compiler probably can't do much with.

I'm not sure whether the PiecewiseCurve I suggest here can actually be implemented in rust. If I'm correct, you cannot have:

fn chain(self, other: Curve<T>) -> PiecewiseCurve<Curve<T>>; in the generic case; and
fn chain(self, other: PiecewiseCurve<Curve<T>>) -> PiecewiseCurve<Curve<T>>; as specialization.

This may be possible with either negative trait bounds or specialization, but both are still unstable features and might still not provide the needed flexibility.

Contributor Author

You can do something like that with the caveat that you use an opaque impl Curve<T> return type in the trait definition (and do the specialization on self rather than other). I have personally gone back and forth with these sorts of optimizations, since they basically get destroyed the moment you do anything other than call the same method twice.

@alice-i-cecile merged commit ea2d3d9 into bevyengine:main on Aug 3, 2024