Efficient execution of F* tactics
Plan based on discussions with Nik and Jonathan (Redmond, 20/03/2017)
In the current incarnation of the tactic engine, user tactics are executed by embedding the application of tactics into the VC term and invoking the F* normalizer. Since in practice this strategy is often too slow, we would like to have a way of using compiled tactics (initially via OCaml, but in principle this could also be extended to F#), either pre-compiled (e.g. as a library of tactics) or compiled at the same time as the F* program which uses them and dynamically linked. Roughly, these are the implementation steps:
We can use a union-find data structure to keep track of metavariable instantiations as part of tactic execution and revert them when a tactic fails. The current implementation of union-find in the F* compiler doesn't allow it to be shared soundly between the multiple goals of a proof state. The first step would be to implement a persistent union-find (as described here).
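To make the idea concrete, here is a minimal sketch of a persistent union-find in OCaml. It is not the compiler's actual code: for simplicity it stores parent pointers in an immutable map rather than the rerooting persistent arrays described in the paper, and it only handles integer node ids.

```ocaml
(* Minimal persistent union-find sketch: parents live in an immutable map,
   so every version is a value that can be kept or discarded freely. *)
module IntMap = Map.Make (Int)

type puf = {
  parent : int IntMap.t;   (* missing key = the node is its own root *)
  rank   : int IntMap.t;
}

let empty = { parent = IntMap.empty; rank = IntMap.empty }

(* Return the representative of x; no path compression, so no mutation. *)
let rec find uf x =
  match IntMap.find_opt x uf.parent with
  | None -> x
  | Some p -> find uf p

(* union returns a *new* version; the old one stays valid, which is what
   lets a failing tactic simply drop its version to roll back. *)
let union uf x y =
  let rx = find uf x and ry = find uf y in
  if rx = ry then uf
  else
    let rank r = Option.value ~default:0 (IntMap.find_opt r uf.rank) in
    let kx = rank rx and ky = rank ry in
    if kx < ky then { uf with parent = IntMap.add rx ry uf.parent }
    else if ky < kx then { uf with parent = IntMap.add ry rx uf.parent }
    else { parent = IntMap.add ry rx uf.parent;
           rank = IntMap.add rx (kx + 1) uf.rank }
```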
In order to allow the rest of the F* compiler to use this implementation, we would also need a second interface to it that duplicates the current one (which exposes methods for creating and rolling back transactions).
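A hypothetical shape for that transactional interface, layered over a persistent union-find, might be the following; the names (`new_transaction`, `rollback`, `commit`) are illustrative rather than the compiler's actual ones.

```ocaml
(* Hypothetical transactional wrapper: a transaction is just the version
   of the persistent union-find that was current when it was opened. *)
module Transactional (P : sig type t val empty : t end) = struct
  type tx = P.t                                (* a saved version *)

  let state = ref P.empty                      (* the current version *)

  let new_transaction () : tx = !state         (* snapshot the current version *)
  let rollback (saved : tx) = state := saved   (* reinstate the snapshot *)
  let commit (_ : tx) = ()                     (* keep whatever is current now *)
end
```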
The persistent union-find has been implemented separately in both F# and OCaml. It also replaces the previous union-find in the rest of the compiler, via a transactional interface. The implementation passes all the regression tests, but there might be some questions left about performance. I've had trouble bootstrapping the compiler on my machine (VD), but it seems to work in CI. When testing a few examples, runtime seems similar to the old implementation, but I think it's possible array allocations are hogging the (limited) RAM on my machine.
Since tactics are written in the `Tac` effect, a pre-requisite for extracting tactics is extraction of user-defined effects (#886). This raises the question of how to extract code which combines primitive and user-defined effects (we can take the join, but not reify the result), since we don't want to extract the model of primitive effects.
This is largely implemented and already merged into master. One remaining issue is figuring out when a tactic is extractable, since we don't want to allow full generality (details here).
Once tactics can be extracted, they can be compiled ahead of time into a `.cmxs`. A corresponding `.fsti` would expose the tactics to the user, but the actual code would be dynamically loaded when compiling the F* program which uses them. It is critical to make sure that the loaded compiled tactics behave in the same way as the original F* tactics.
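The loading side could be as simple as the following OCaml sketch; the file name and error handling are only illustrative, and a compiled tactic module is assumed to register itself from its top-level initializers when loaded.

```ocaml
(* Minimal sketch of dynamically loading a compiled tactic plugin.
   Dynlink.loadfile runs the .cmxs's top-level code, which is where a
   compiled tactic module would be expected to register its tactics. *)
let load_tactic_plugin (path : string) : (unit, string) result =
  try
    Dynlink.loadfile path;
    Ok ()
  with Dynlink.Error e -> Error (Dynlink.error_message e)

let () =
  match load_tactic_plugin "MyTactics.cmxs" with
  | Ok () -> print_endline "tactic plugin loaded"
  | Error msg -> prerr_endline ("failed to load tactic plugin: " ^ msg)
```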
When invoking a tactic, the user would have to specify if the tactic has been pre-compiled (e.g. `assert Phi by native t`).
[On branch vdum_extract_dm4f_effects] We can compile extracted tactics and dynamically load the `.cmxs` using the `--load` option as a primitive step in the normalizer. At compile time, when verifying an F* module which uses a user-defined tactic, the tactic is first looked up in the list of dynamically loaded tactics. If it is found, its body is replaced with two declarations: one assuming the reified version of the tactic (reification occurs at tactic extraction time), and the other replacing the original tactic's definition with a call to `TAC?.reflect`.
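The lookup described above could be organized around a simple registry keyed by the tactic's fully qualified name, along the lines of the sketch below; the type and function names are invented for illustration and are not the compiler's actual API.

```ocaml
(* Illustrative registry of natively compiled tactics. Loaded plugins
   register themselves here; the typechecker consults the table when it
   encounters a user-defined tactic. *)
type native_tactic = {
  name : string;   (* fully qualified name, e.g. "MyTactics.my_tac" *)
  arity : int;     (* number of arguments the compiled tactic expects *)
}

let registry : (string, native_tactic) Hashtbl.t = Hashtbl.create 16

(* Called from a plugin's top-level initializer when its .cmxs is loaded. *)
let register_native_tactic (t : native_tactic) : unit =
  Hashtbl.replace registry t.name t

(* Called while verifying a module that mentions a user-defined tactic. *)
let lookup_native_tactic (name : string) : native_tactic option =
  Hashtbl.find_opt registry name
```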
- At tactic extraction time, automatically generate a call to `FStar.Native.register_tactic`, providing the appropriate embedding/de-embedding pairs. Currently we are manually adding a handwritten invocation to the extracted OCaml tactic. (A rough sketch of such a generated call follows this list.)
- Calling tactics from tactics; e.g. a user-defined tactic using a standard tactic (like `apply`, `exact`, etc.) or even another (possibly dynamically loaded) user-defined tactic. For standard library tactics, we could rely on their existing ML implementation (I've tried this but I've been having some technical problems, to do with the fact that these tactics are defined using the `tac` type in `FStar.Tactics.Basic`, used by the interpreter, whereas in user code tactics are expected to have type `FStar.Tactics.tactic`; I'm not sure how to reconcile this). However, to support tactics calling user tactics, we would need a way to plug into the tactics interpreter when making calls to these tactics.
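For the first item, the generated registration call might roughly take the following shape; all names here (`embed_int`, `unembed_int`, `register_tactic`, the `term` type) are placeholders standing in for F*'s actual embedding machinery.

```ocaml
(* Placeholder stand-ins for the real F* term type and int embeddings. *)
type term = Term_placeholder

let unembed_int (_ : term) : int option = Some 0   (* placeholder de-embedding *)
let embed_int (_ : int) : term = Term_placeholder  (* placeholder embedding *)

(* The extracted tactic itself operates on native OCaml values. *)
let my_tac (n : int) : int = n + 1

(* Placeholder for the registration entry point. *)
let register_tactic (name : string) (interp : term list -> term option) : unit =
  ignore (name, interp)

(* What the auto-generated registration would roughly provide: a wrapper
   that de-embeds the argument, runs the native tactic, and re-embeds
   the result. *)
let () =
  register_tactic "MyTactics.my_tac" (fun args ->
    match args with
    | [ a ] -> Option.map (fun n -> embed_int (my_tac n)) (unembed_int a)
    | _ -> None)
```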
Taking this one step further, user tactics could be extracted, compiled and linked against as part of the F* compilation process. We don't want to do this for all tactics, since for small proof states, the overhead of doing so might be greater than the gains in speed over using the normalizer. The user could manually request a tactic to be compiled, if appropriate (again via a keyword such as `native`).
Not yet started.
JP: some notes on the various approaches for automating tactic compilation and loading.
Things on the radar:
- need to have ML types in the environment for all the previous modules and previous definitions... perhaps even their definitions? are we now going to thread the extraction environment through type-checking?
- extraction calls the normalizer; are there any adverse interactions with type-checking?
- how to have a dependency between two "F# projects"; a function reference in typechecker/ that is later filled out by extraction/ reveals the main entry point?
In any case, there will be numerous checks to be performed, such as:
- checking that the `ocamlopt` we find is of the same version we (the F* executable) are compiled with;
- checking that the internal compiler modules that the tactic is compiled against are at the same version that we (the F* executable) are running with; this can be done à la `ocamlobjinfo`;
- checking that the `FSTAR_HOME` we have (or find) is the one that we're running out of; etc.
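As an illustration of the first check, one could compare the version the running F* binary was built with against what the `ocamlopt` found on the path reports; `run_ocamlopt_version` below is a helper invented for this sketch.

```ocaml
(* Sketch of the ocamlopt version check: Sys.ocaml_version is the version
   the running binary was compiled with; compare it with what the ocamlopt
   on the path reports. *)
let run_ocamlopt_version () : string option =
  let ic = Unix.open_process_in "ocamlopt -version" in
  let version = try Some (String.trim (input_line ic)) with End_of_file -> None in
  ignore (Unix.close_process_in ic);
  version

let check_ocamlopt () : (unit, string) result =
  match run_ocamlopt_version () with
  | Some v when v = Sys.ocaml_version -> Ok ()
  | Some v ->
      Error (Printf.sprintf "ocamlopt is version %s but F* was built with %s"
               v Sys.ocaml_version)
  | None -> Error "could not run ocamlopt -version"
```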
Assuming we know how to call the extraction in a re-entrant manner, I see various approaches.
- Approach 1. Write out a `.ml` file to disk, call `ocamlopt`, and `Dynlink` the resulting file. This is the simplest approach, the easiest to debug, and the one that minimizes risk. Pros: simple, easily debuggable, results can be naturally cached as `.cmxs`. Cons: potential cost of forking external processes & disk IO. (A rough sketch of this approach follows the list.)
- Approach 2. Do the same thing, but instead of calling `ocamlopt`, just link in the `compiler-libs` and do the same thing by poking at the various functions from the OCaml compiler. Potentially faster to drive the OCaml compiler this way, but we still write out `.cmxs`'s to disk. Not sure this is worth the trouble. Cons: it may be the case that these functions from `compiler-libs` change over compiler versions.
- Approach 3. Do something à la ocamltopnat, i.e. directly write out object code in an executable section of memory, `Obj.magic` the code pointer, and voilà, you have a JIT-like compilation scheme. This is probably difficult (need to know compiler internals), risky (segfaults), hard to debug, likely to vary across OCaml versions, does not offer a natural way of caching the results, and may even need some C stubs which will complicate our already-difficult build story.
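Here is a sketch of Approach 1, assuming the extracted OCaml code is already available as a string; the flags and paths (include directories for the F* compiler libraries, etc.) would need to be filled in for a real build.

```ocaml
(* Approach 1 sketch: write the extracted code to disk, call ocamlopt to
   produce a .cmxs, then Dynlink it into the running F* process. *)
let compile_and_load ~(extracted_ml : string) ~(basename : string) =
  let ml = basename ^ ".ml" and cmxs = basename ^ ".cmxs" in
  (* 1. write the extracted code out to disk *)
  let oc = open_out ml in
  output_string oc extracted_ml;
  close_out oc;
  (* 2. call ocamlopt; -shared produces a dynamically loadable .cmxs *)
  let cmd = Printf.sprintf "ocamlopt -shared -o %s %s" cmxs ml in
  if Sys.command cmd <> 0 then Error ("compilation failed: " ^ cmd)
  else begin
    (* 3. dynamically link the result into the running process *)
    try Dynlink.loadfile cmxs; Ok ()
    with Dynlink.Error e -> Error (Dynlink.error_message e)
  end
```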