Rewrite Extension section for type info #101
Conversation
- name: measure
  description: "measure a qubit"
  # We're going to implement this using ops defined in the "Quantum" resource
  resource_reqs: [Quantum]
I think the resource requirements should go back in. We can have operations within certain resources which require other resources. E.g. the `MatMul` example could have an extra requirement on some `LinAlg` resource.
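For instance, a purely illustrative sketch - `MatMul` and `LinAlg` are hypothetical names from this thread, and the field syntax follows the `measure` example above:

```yaml
- name: MatMul
  description: "multiply two matrices"
  # Hypothetical: this op uses types/ops from the LinAlg resource,
  # so it carries an extra requirement beyond its own resource
  resource_reqs: [LinAlg]
```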
I move that this requirement is either (a) use of types defined in `LinAlg`, i.e. the `complex_matrix` type mentioned in this comment; or (b) a property of the implementation, i.e. the Rust/binary code (for which sets of resources `try_lower` will succeed or fail), rather than the interface.

That is, at present the YAML does not specify when `try_lower` will succeed or fail.
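A minimal sketch of option (b) - illustrative names and types only, not the real hugr API - where success or failure is decided by the binary code, given the set of available resources, rather than by anything in the YAML:

```rust
// A hedged sketch of the fallible lowering interface: try_lower returns
// an equivalent program (a String stands in for a Hugr fragment here)
// using only ops from the provided resources, or None if it cannot.
pub fn try_lower(op: &str, available_resources: &[&str]) -> Option<String> {
    // Hypothetical rule, baked into binary code rather than the YAML:
    // "measure" lowers only when the Quantum resource is available.
    if op == "measure" && available_resources.contains(&"Quantum") {
        Some(format!("lowered({op})"))
    } else {
        None
    }
}

fn main() {
    assert_eq!(
        try_lower("measure", &["Quantum"]),
        Some("lowered(measure)".to_string())
    );
    assert_eq!(try_lower("measure", &["Classical"]), None);
}
```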
the [serialized definition data](#serialization): in this way subsequent tooling which does recognise the operation will receive it faithfully.
<!-- Should we preserve some of this language about downcasting?
It might be worthwhile, but not in its current form. The key point (imo) is that we're shelling out to some custom Rust-implemented method and, because we know the type signature, we know how we should interpret the things that we get back. I think this is true of all 3 kinds of operations?
Yes, I think so - key point vs. the previous text is that how the operation type is computed (2 ways above) is independent of how (tools figure out) the function of the operation (including whether or not an implementation of `try_lower`, or a Hugr, is provided).
But I think the language about downcasting is rather implementation-specific (e.g. in Rust would we use `Any` and `downcast_ref`??) so probably doesn't belong in this doc.
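For reference, a hedged sketch of the Rust mechanism this comment alludes to (`std::any::Any` plus `downcast_ref`); the `CustomOp` payload type here is hypothetical:

```rust
use std::any::Any;

// Hypothetical extension-defined payload behind an opaque operation.
pub struct CustomOp {
    pub name: String,
}

// Generic tooling holds the payload type-erased as `&dyn Any`; only an
// extension that knows the concrete type can downcast it back.
pub fn describe(op: &dyn Any) -> Option<&str> {
    op.downcast_ref::<CustomOp>().map(|c| c.name.as_str())
}

fn main() {
    let op = CustomOp { name: "measure".to_string() };
    assert_eq!(describe(&op), Some("measure"));
    assert_eq!(describe(&42u32), None); // not a CustomOp: downcast fails
}
```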
Force-pushed from e28850f to 8b58246.
specification/hugr.md (outdated):
(these may include statically-known values e.g. U64), but does not specify how the operation type is computed from those. Instead the resource/extension provider implements a Rust Trait providing a function `compute_type` that takes the type arguments (for a particular instance i.e. operation node) and returns the type. Here it is useful to serialize the resulting (computed) type so the operation can be treated opaquely by tools that do not have the extension's Rust code available.
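A minimal sketch of such a `compute_type` function, with illustrative names only (a hypothetical `MatMul` whose type arguments are matrix dimensions):

```rust
// Stand-in for a statically-known type argument (e.g. a U64 value).
pub type TypeArg = u64;

// Hypothetical MatMul: type arguments are the dimensions (m, k, n) of an
// (m x k) * (k x n) multiply; the computed result type is (m x n).
pub fn matmul_compute_type(args: &[TypeArg]) -> Result<(TypeArg, TypeArg), String> {
    match args {
        [m, _k, n] => Ok((*m, *n)),
        _ => Err("MatMul expects 3 type arguments".to_string()),
    }
}

fn main() {
    // The computed type could then be serialized alongside the node, so
    // tools without this binary code can treat the op opaquely.
    assert_eq!(matmul_compute_type(&[2, 3, 4]), Ok((2, 4)));
}
```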
I don't think we need to refer to Rust specifically. The key point is that extension code must self-register with the tooling interpreting the (HUGR+YAML) to say "I will compute the type for this operation if you give me the type parameters"
Yes, I've added a few points about registration. The point I want to make is that the YAML can be passed around with the HUGR but in general this is not possible for the binary part of the extension.
Ok, so have been thinking a bit about this. There are several issues here, and it'll help to be consistent/precise about type parameters (the declaration that a type scheme is dependent upon a type variable T, say) vs type arguments (the value, i.e. type, provided for T in a particular instantiation of that type scheme, i.e. a particular node in the Hugr).

The rough idea is - the YAML declares some "type parameters" and also some "type scheme". The format of the latter needs to be understood only by the "YAML Interpreter" - I am uncertain as to whether we just have one YAML interpreter or whether you name the "interpreter" (i.e. the type scheme parser/rules) you want. Naming YAML-interpreters initially felt like overkill but might help resolve these questions of how expressive the YAML-interpreter should be and whether it supports dependent types.

A particular node in the HUGR carries actual+fixed+static+const values, i.e. type arguments, which correspond to the type parameters. The "YAML Interpreter" is then invoked with those type arguments to compute the node's type.
So, the type parameters are really just part of the type scheme; and the only thing that really cares about the type scheme is the "YAML interpreter" itself. This means
Ok, I am heading in a direction but not got to the endpoint yet. I think the idea of "multiple YAML interpreters" (i.e. code that computes signature, given type-scheme specified in the YAML) is the same idea as "providing operation-specific binary code for `compute_signature`". There should be one registry of code (where we can look up binary

And then finally the per-node values of the type arguments (conforming to the type-param decls being point 1.)
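A hedged sketch of that single registry, with illustrative names (`build_registry`, `array.concat`): both a YAML interpreter and op-specific binary code would register the same kind of callable, mapping per-node type arguments to a signature:

```rust
use std::collections::HashMap;

pub type TypeArg = u64;
// Code that computes a signature from type arguments. This one shape
// covers both a generic "YAML interpreter" (parameterised by a type
// scheme) and op-specific binary compute_signature implementations.
pub type SigFn = fn(&[TypeArg]) -> Result<Vec<TypeArg>, String>;

pub fn build_registry() -> HashMap<&'static str, SigFn> {
    let mut reg: HashMap<&'static str, SigFn> = HashMap::new();
    // Hypothetical op: concatenating arrays of lengths n and m yields
    // an array of length n + m.
    reg.insert("array.concat", |args: &[TypeArg]| match args {
        [n, m] => Ok(vec![n + m]),
        _ => Err("expected two array lengths".to_string()),
    });
    reg
}

fn main() {
    let registry = build_registry();
    // A particular node in the Hugr carries the actual type arguments:
    assert_eq!(registry["array.concat"](&[3, 4]).unwrap(), vec![7]);
}
```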
That is now enacted. Still need to add examples of qualified imports. The commented-out bits about downcasting and some others are still up for grabs. Anyone want to make a joint PR, or a followup?
@@ -665,68 +674,103 @@ See [Type System](#type-system) for more on Resources.
# may need some top level data, e.g. namespace?

# Import other header files to use their custom types
# TODO: allow qualified, and maybe locally-scoped
@croyzor I'm a bit unclear how this works. We've imported Quantum, and we say `Q` is defined there - but, as an alias? Or an `Opaque`? Those are, as far as I am aware, the only two ways for an extension to define new types. Maybe alias works, or maybe `Q` is not a good example.

Elsewhere (below) I've been writing `Opaque(complex_matrix,...)` but maybe that should just be `complex_matrix`?? And that should be imported from another resource - in which case, let's do a qualified import for that....
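One possible shape for such a qualified import (purely illustrative syntax - nothing here is settled):

```yaml
imports:
  - resource: Quantum      # unqualified: Q usable directly
  - resource: LinAlg
    qualified: true        # types referenced as LinAlg.complex_matrix
```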
I like this a lot! Especially the clear separation between type and semantics. I'm not sure I entirely follow the resource req/opaque discussion but it seems like that could be a follow-on thing to address. @mark-koch you may want to take a look too.
Closes #22

Co-authored-by: Craig Roy <[email protected]>
Co-authored-by: Mark Koch <[email protected]>
This is intended to enact what we've discussed a couple of times about extensions, unifying some concepts. It may need more work on phrasing and some details, e.g. I haven't learnt about `serde` yet, and there's that bit about downcasting... Also there is some uncertainty about terminology. I've been using "operation-definition" for what we at one point called "OpFactory", but I've also removed the previous term "OpDef" (along with CustomOp - I think these terms just live in the code) so it is tempting to use that. Thoughts?
I've deleted a fair bit of the old text that seemed to be talking about implementation, I'm happy to negotiate which bits of that to bring back and how.
- `call`s to functions in other HUGRs
- `resource_reqs`, as they seemed to be superseded by the existing/previous description of "a fallible interface for returning an equivalent program made of operations from some provided set of `Resources`."