
[RFC] figure out story around linear queries #41710

Open
nikomatsakis opened this issue May 2, 2017 · 6 comments
Labels
A-MIR Area: Mid-level IR (MIR) - https://blog.rust-lang.org/2016/04/19/MIR.html T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.

Comments

@nikomatsakis
Contributor

nikomatsakis commented May 2, 2017

PR #41625, in an effort to rationalize the MIR pipeline, introduced queries that yield a Steal type. Cribbing from a rather long and overwrought comment that I wrote, the idea is to model linear queries as ordinary queries that return a &'tcx Steal<D>, which supports two operations ("borrow" and "steal"). Once a value is stolen, any further operation is a bug!(). This means that all parts of the compiler which invoke a query with a steal result must be aware of any other parts that might use that same query and coordinate with them. (This is a performance optimization: that is, we could make "stolen" queries simply regenerate the resulting value, and everything would work, but we don't want to be regenerating the results of these queries multiple times, so instead we make this error a bug!.)
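For concreteness, here is a minimal self-contained sketch of what such a Steal-like wrapper could look like. This is only an illustration of the semantics described above, not the exact type in rustc (which reports the error via bug! rather than a panic):

use std::cell::{Ref, RefCell};

/// Minimal sketch: the value can be borrowed any number of times until it
/// is stolen; any access after the steal is a hard error.
pub struct Steal<T> {
    value: RefCell<Option<T>>,
}

impl<T> Steal<T> {
    pub fn new(value: T) -> Self {
        Steal { value: RefCell::new(Some(value)) }
    }

    /// Borrow the value; panics if it has already been stolen.
    pub fn borrow(&self) -> Ref<'_, T> {
        Ref::map(self.value.borrow(), |v| {
            v.as_ref().expect("attempted to read stolen value")
        })
    }

    /// Take ownership of the value; panics on a second steal.
    pub fn steal(&self) -> T {
        self.value
            .borrow_mut()
            .take()
            .expect("attempted to steal value twice")
    }
}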

In all the MIR cases, at least, there really isn't much to intercoordinate. For the most part, each MIR optimization is just used by one other query: the next optimization in the sequence. (In the current PR, each optimization pass has its own query, but if we convert to just having a query-per-suite, then this would apply at the level of suites.) Other parts of the compiler should use one of the queries (e.g., optimized_mir()) that do not return a Steal result.

However, there are a few exceptions. One example is const qualification. This is a lightweight scan of the contents of a const item, and it needs to take place relatively early in the pipeline (before optimizations are applied and so forth). The way I handled this is (a) to have it borrow() from the steal and then (b) to call force() before we steal the relevant MIR, so that we know it has executed. If we forgot to add the force() call, then the result would depend on the order in which queries were issued; that is, if one requested const-qualif(D) first, it would execute successfully, but if one requested optimized_mir(D) first and const-qualif(D) afterwards, one would get a bug! because const-qualif would be trying to read stolen data. (However, if compilation succeeds, we are always assured of a consistent result.)
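In code, this "force before steal" pattern looks roughly like the following sketch. The query and pass names here are illustrative placeholders, not the exact ones from the PR:

fn provide_optimized_mir(tcx: TyCtxt, def_id: DefId) -> &Mir {
    // Force the reader first: this computes and caches const-qualif
    // while the unoptimized MIR is still available to borrow()...
    tcx.const_qualif(def_id);

    // ...and only then steal the MIR so the optimization passes can
    // mutate it in place without cloning.
    let mut mir = tcx.mir_validated(def_id).steal();
    run_optimization_passes(tcx, &mut mir); // placeholder for the pass suite
    tcx.alloc_mir(mir)
}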

The complete set of examples that I am aware of where we have a linear query that also needs to be accessed is as follows:

  • Const qualification, as just covered, needs to inspect the "const" MIR before validation, at least for const items.
  • Const evaluation will, for const fn, need to save a copy of the IR before validation and optimization that can be used by miri (indeed, optimization may want to execute miri to evaluate constant expressions, perhaps even some appearing within the const fn itself).
  • Borrow checking needs to access the MIR result after "validation" but before "optimization".

The current solution is simple and may indeed be "good enough" -- it depends a bit (in my mind) on how often we introduce linear queries and how many things depend on them. However, there have been some alternative proposals for how to handle it. The purpose of this issue is to describe those proposals and try to settle on which is best.

Rather than continuing with concrete examples, in the descriptions that follow, I will abstract out the pattern as follows. Consider a "stealable" query A that needs to be read by query B (e.g., borrow checking) but consumed by query C (e.g., optimization).

For each system, I will describe how it works, and then discuss some of the tradeoffs.

Current system: "stealable queries"

How it works: As described in the run-up, you have a query that yields a &'tcx Steal<T>. This can be either borrowed or stolen. An attempt to borrow after a steal (or a second attempt to steal) will result in a bug! call.

How you can mess it up: You can forget to call force() on a potential consumer when stealing. In the case of our example, this would mean that query C fails to force query B.

What happens if you mess it up: Compilation may succeed with some query execution orders (e.g., if B executes first) but fail with a bug! in others (e.g., if C executes first).

Other downsides: entanglement. It's kind of a drag that the stealer (C) must be aware of the reader (B).

Proposed alternative: "linear queries" with "tuple providers"

How it works: @eddyb proposed an alternative in which linear queries can only be used once. In this scheme, once you execute a query once, any further attempt to request the same query is a bug!. So we can't have a query A that is read by B and consumed by C, as I had, since both B and C must consume, and that would violate linearity. To support this scenario, then, eddyb wanted to introduce "tuple" providers (name is mine). Basically, when setting up the Providers struct, I can use a bit of magic to knit together a function that processes the result from A and produces a (B, C) tuple. This magic would then divide the tuple and store the B into the B map and store the C into the C map in the right spots. This might look something like this:

fn produce_bc(tcx: TyCtxt, def_id: DefId) -> (B, C) {
    let a = tcx.A(def_id); // consumes A
    ... // returns (B, C)
}

// this method will initialize `providers.b` and `providers.c`
// with generated glue functions:
providers.tuple().b().c().func(produce_bc);
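For illustration, the glue that such a registration might generate for the B side could look roughly like this. The names and the caching hook are purely hypothetical:

// Hypothetical generated glue: run the combined provider once, cache the
// C half so a later request for C never re-runs the work, return the B half.
fn b_glue(tcx: TyCtxt, def_id: DefId) -> B {
    let (b, c) = produce_bc(tcx, def_id);
    tcx.cache_query_result_c(def_id, c); // hypothetical framework hook
    b
}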

How you can mess it up: You could fail to use the tuple system, and instead write queries B and C independently.

What happens if you mess it up: In any execution in which both queries B and C are used, no matter which order they are used in, compilation will ICE. This is thus mildly more robust than the stealable scheme.

Other downsides: entanglement. It's kind of a drag that we must produce B and C from one function.

Other downsides: imprecise dependencies. It's possible that producing B and C use different sets of inputs. But since there is one function that is doing both bits of work, we won't be able to tell them apart in the query system, and hence they will wind up with imprecise dependencies.

Other advantages: splitting. This technique allows you to take the result of query A and split it into pieces without cloning. This is not possible with the other alternatives.

Proposed alternative: "linear queries" with "mapping providers"

In my original comment, I described a variant of tuple providers. I want to describe here a variant of that idea that I've been thinking about since. The idea is to make linear queries part of the query framework, as in the "tuple providers" proposal. However, when you define a linear query, you don't use it in the same way as non-linear queries. That is, if you have a linear query A, you can't do tcx.A(def_id) as I showed earlier. Instead, when creating the provider struct, we use some magic functions to "connect" derived queries to a linear query. This connection can be in one of two modes ("read" or "consume"). For example:

fn produce_b(tcx: TyCtxt, def_id: DefId, a: &A) -> B {
    //                                   ^^^^^
    // Note this extra argument: this will be the value of
    // the `A` query, borrowed.
    ...
}

fn produce_c(tcx: TyCtxt, def_id: DefId, a: A) -> C {
    //                                   ^^^^
    // Note this extra argument: since this query is on
    // "consume" mode, it gets ownership of the `A` value.
    ...
}

// Register the B query as something that *reads* linear
// query A:
providers.read_a().to_produce_b_with(produce_b);

// Register the C query as something that *consumes* linear
// query A:
providers.consume_a().to_produce_c_from(produce_c);

(These read_a() and to_produce_b_with() methods would be auto-generated
by the macro.)

The basic idea here is that we are telling the providers struct two things:

  • which linear queries we access (and whether we read or consume them)
  • which query we are answering when we do that

This allows the framework to do more intelligent routing of results. In particular, if there is more than one consume query for a linear query, we can detect that at provider creation time -- i.e., before we even process any input. Moreover, we can ensure that before the "consume" query for a given linear value runs, we force all of the "read queries". (In this case, that means that if you request C, we will force the query B, so that it can read before C executes.) (This last part is why the provider functions have to know what query you are producing.)
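To make the routing concrete, the glue the framework would generate for the "consume" registration above might conceptually look like this (all names are hypothetical):

// Hypothetical generated glue for the consume registration of C:
fn c_glue(tcx: TyCtxt, def_id: DefId) -> C {
    // First force every query registered as a *reader* of A (here, B),
    // so it can borrow A before A is consumed...
    tcx.B(def_id);

    // ...then take ownership of the linear value A and hand it to the
    // consume provider.
    let a: A = tcx.take_linear_a(def_id); // hypothetical framework hook
    produce_c(tcx, def_id, a)
}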

How you can mess it up: You cannot mess this up, since you can't access linear queries using the normal methods. Even if you tried to register two "consume" queries for the same linear value, that would fail when creating the provider struct (so such a PR could never land, for example, and no test could ever pass).

What happens if you mess it up: You cannot, unless I missed something. =)

Other advantages: no entanglement. This is the only proposal that avoids the need for queries B and C to know about one another. Simply by registering the queries, you help the framework sort things out and ensure that the appropriate force calls happen.

Other proposals?

I'll try to update this header if more ideas come up.

cc @rust-lang/compiler @matthewhammer

@eddyb
Member

eddyb commented May 2, 2017

IMO, not requiring inputs to be declared up-front is a plus; that's why I didn't even mention the scheme you came up with, but rather dismissed it because of the constraints I had.

That is, I wanted a system that could maintain the current flexibility in that queries are never declared (not even when setting up the providers, although I like that it is possible), but rather arise from imperative code.

Perhaps more than correctness, I was focused on performance and somewhat on ergonomics, assuming we could use no interior mutability in the values themselves (i.e. no Steal), and that we wanted several "versions" of the values (to split queries apart so we don't, say, optimize the MIR just as we've built it, to hash the MIR for "firewalling", etc.).

The immediate result is the basic linear query idea, where you can only invoke the query once, and you own the value instead of a clone being made. But then if several other queries want to access the same value, you have no way to share it. You can tuple up those other queries into a single query, but not if you only want some of the results to be linear too.

So the constraints more or less involve a bunch of related analyses/transformations, where both the inputs and the outputs contain values you don't want to duplicate, and also values you do want to duplicate (e.g. MIR const-qualification results).
In the context of #41625, that would be something like (MirConstQualif, &Steal<Mir>).
But I didn't want to trust random interior mutability containers (as mentioned above), so you either end up with a tuple element special-casing, or don't group them in the declaration.
If they're not grouped in the declaration, you can still get the desired result by having the provider come up with the grouping so the query engine always does both queries at once.

All of that said, the only element of correctness was reducing the "trusted abstraction base", specifically the use of interior mutability outside of the implementation of the query engine.

@michaelwoerister
Member

I'm still not sure whether we need or should have linear queries at all. They complicate matters like caching to disk in the background (e.g. unoptimized MIR is queued for serialization on another thread, then another query steals it), and I'm not sure they are worth the trouble. As far as I can tell, the MIR transformation pipeline is the only case where this is ever going to be used. Can't the optimization pass just make its own copy of the unoptimized MIR at the beginning and then modify that in place?

@eddyb
Member

eddyb commented Jun 13, 2017

I don't think background caching is realistic, because of HashMaps that can change concurrently, unless serialization were context-free - do you think that's plausible?

@michaelwoerister
Member

What maps are you referring to here? I would imagine the cache to be mainly DepNode -> Mir and DepNode -> TypeCheckTables tables. Once one has an Arc<Mir> or Arc<TypeCheckTables> in hand, it should be read-only and thus fine to send to another thread, right? (The 'tcx bound might complicate things a little, but not very much.)

@eddyb
Member

eddyb commented Jun 13, 2017

@michaelwoerister I mean that serialization might require concurrent map lookups for some leaves.
But if the keys are all integers and pointers, you could duplicate/move everything serialization needs and remove the need for invoking any queries during the serialization process.

FWIW, you don't even need Arc: if you intern everything and the background thread is valid for less than 'tcx (via rayon or whatever), then &'tcx Mir<'tcx> and &'tcx TypeckTables<'tcx> would work.

@michaelwoerister
Member

@eddyb Yeah, that's a good point. I kind of assumed that anything reachable would already be interned and not need any lookups. That might not be true...
