[MLIR/Frontend] C++ compiler driver improvements, ability to compile textual IR #216
Conversation
* Add MHLO as C++ dependency to quantum-opt
* Configure parsing from C++
* Implement lowering to LLVM IR with custom metadata handling for Enzyme
* Fix bugs with memory ownership
* Clean up C calling convention
* Canonicalize ._mlir attribute in frontend
* python formatting
* Update tests with canonical IR
* Update canonicalized lit tests
* Add MHLO as a dependency to quantum dialect build in CI
* Formatting + typo in workflow
* CI: Attempt to re-checkout MHLO during dialect build
* CI: Attempt to use cached MHLO
* CI: Add _mlir_libs to mocked modules for doc build, fix logic issue with MHLO source caching
* CI: mock specific CAPI library
* CI: typo in docs configuration
* Switch mlir_canonicalize to generic pass runner, reorganize driver files
* Clean up, rename LLVMTarget to avoid confusion with core LLVMTarget
* Fix error message for mlir_run_pipeline
* Update mlir/CMakeLists.txt (Co-authored-by: Ali Asadi <[email protected]>)
* Update copyright year (Co-authored-by: Ali Asadi <[email protected]>)
* Update mlir/lib/Catalyst/Driver/Pipelines.cpp (Co-authored-by: Ali Asadi <[email protected]>)
* Update copyright year (Co-authored-by: Ali Asadi <[email protected]>)
* Update copyright year (Co-authored-by: Ali Asadi <[email protected]>)
* Add #pragma once (Co-authored-by: Ali Asadi <[email protected]>)
* Add #pragma once (Co-authored-by: Ali Asadi <[email protected]>)
* Move MHLO passes to top-level CMake variable, documentation

---------

Co-authored-by: Ali Asadi <[email protected]>
@pengmai @erick-xanadu This turns the compiler into a "programmable black box" model. The questions are:
I think both models are equivalent. The first one also had effects (printing to a file, preserved in the file system), and the Spec was just the default pipeline. We only gave users the ability to define their own pipeline, which I think should also be possible in the compiler driver, but I haven't investigated.
Can you elaborate on this? For the scope of this PR, I think we can limit the user's ability to define their own pass pipelines if it is getting in the way, while we think about which passes are useful to us. In GCC, there is no option for the user to add passes (beyond enabling passes that are disabled by default) without recompiling the compiler. Similarly, if the user wants to change the order of transformations, they need to recompile the compiler. I don't think this would be too bad, but I agree that it would take away some of the dynamism we are accustomed to.
What do you mean by "in any combination"? I think the Compiler Driver already prints all the IR.
Yes, this is true. I didn't mention effects in the current design because I am assuming that users can control them quite precisely via the Python API (so we are not responsible for much of that).
No, I don't think so, unless the Spec is as powerful as the Python we are using now. But I do think we might not actually need all of this power, so I would like to know what we expect from the pipeline configurations.
The idea I have in mind is to allow users to call something like:

```python
filename, llvm_ir, *inferred_data = compile_asm(
    ir, workspace_name, module_name, ...,
    pipelines={
        'mhloToCorePasses': ["func.func(chlo-legalize-to-hlo)", ..., "convert-to-signless"],
        'quantumCompilationPasses': ["this-pass", "that-pass", ...],
        ...
    })
```
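For illustration only, here is a minimal sketch of how such named pass lists could be flattened into the textual pipeline strings that MLIR's pass manager accepts. The helper name `build_pipeline_spec` and the flattening scheme are my assumptions, not the actual `compile_asm` implementation:

```python
# Hypothetical helper (not part of the actual frontend): flatten named pass
# lists into the textual form accepted by MLIR's pass manager,
# e.g. "builtin.module(pass1,pass2)".
def build_pipeline_spec(pipelines):
    """Map each pipeline name to a single textual pass-pipeline string."""
    return {
        name: "builtin.module({})".format(",".join(passes))
        for name, passes in pipelines.items()
    }

spec = build_pipeline_spec({
    "mhloToCorePasses": ["func.func(chlo-legalize-to-hlo)", "convert-to-signless"],
    "quantumCompilationPasses": ["this-pass", "that-pass"],
})
print(spec["mhloToCorePasses"])
# builtin.module(func.func(chlo-legalize-to-hlo),convert-to-signless)
```

Hardcoding the pipelines into C++ (the other option discussed below) would still need these names so the frontend can refer to individual stages.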
I can imagine users calling it this way. Yes, another option we have is to just hardcode the pipelines into C++, but we would still need to specify their names to be able to refer to them in
Hmm, probably yes, if I understand you correctly. In the tests we run the full pipeline but only want the result of a single intermediate stage.
Yes, the user could have any function whatsoever, but the intended use case is mostly as a way to specify command-line arguments to
Yeah, something like that would be great! EDIT: I also wouldn't be opposed to essentially having 'mhloToCorePasses' :
I don't fully understand the notion of order here, but I think the main point is that the user did not specify that there would be an output. I think both options (having the human-readable IR for these stages available vs. printing it only on demand) have their use cases. Accessing the
We can dump it to a file and not print it. In the tests, all human-readable IRs are dumped to a file (and, as you pointed out elsewhere, read into a dictionary), but they are only printed when the user requests them on stdout. I think we can keep that behaviour.
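A minimal sketch of this dump-then-read-on-demand behaviour. The file layout, extension, and function names here are hypothetical illustrations, not the driver's actual scheme:

```python
import os
import tempfile

def dump_stage(workspace, stage, ir):
    """Write the human-readable IR for one stage to a file; return its path."""
    path = os.path.join(workspace, f"{stage}.mlir")
    with open(path, "w") as f:
        f.write(ir)
    return path

def collect_stages(workspace):
    """Read all dumped stage files back into a dict keyed by stage name."""
    stages = {}
    for fname in sorted(os.listdir(workspace)):
        if fname.endswith(".mlir"):
            with open(os.path.join(workspace, fname)) as f:
                stages[fname.rsplit(".", 1)[0]] = f.read()
    return stages

with tempfile.TemporaryDirectory() as ws:
    dump_stage(ws, "mhloToCorePasses", "module {}")
    stages = collect_stages(ws)
    # Only printed when the user explicitly asks for it:
    print(stages["mhloToCorePasses"])  # module {}
```

The point of the design is that the dump to disk always happens, while printing to stdout stays opt-in.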
Co-authored-by: David Ittah <[email protected]>
Force-pushed from 5bf7d9e to a352ba8
🥳
Context: The previous version of the C++ compiler driver was a work-in-progress implementation that only went down to the LLVM IR module. It was also missing important debugging features that exist in the current subprocess driver.
Description of the Change: Improves the C++ compiler driver and adds the ability to run @qjit on a string containing textual IR (MLIR at any level, or LLVM IR) and get it to run from Python.
Benefits:
Improved compilation time
Progress
[sc-41430]
[sc-41704]
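To illustrate the "textual IR at any level" feature, here is a hypothetical sketch of distinguishing MLIR text from LLVM IR text. This is my own crude heuristic for illustration; the actual driver would hand the input to the appropriate parser rather than sniff strings:

```python
def guess_ir_level(text):
    """Crudely classify textual IR as MLIR or LLVM IR (illustrative only;
    a real driver parses the input instead of pattern-matching it)."""
    stripped = text.lstrip()
    # LLVM IR modules commonly open with "; ModuleID", target/datalayout
    # lines, or define/declare, whereas MLIR modules use `module`/`func.func`.
    if stripped.startswith(("; ModuleID", "target ", "define ", "declare ")):
        return "llvm-ir"
    return "mlir"

print(guess_ir_level("module { func.func @f() { return } }"))  # mlir
print(guess_ir_level("define i32 @main() { ret i32 0 }"))      # llvm-ir
```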