CIP-31: Reference inputs #159
Conversation
CIP-0031/README.md
Outdated
- Referenced outputs are _not_ removed from the UTXO set if the transaction validates.
- Reference inputs _are_ visible to scripts.

Finally, a transaction must _spend_ at least one output.[^2]
Even if you did somehow "allow" this, wouldn't this be impossible since you need to pay the fee somehow? I.e. you could maybe reword this, if you think that it is relevant.
Implicit coin (reward withdrawals + deposit reclaims) can be used to pay the fee, so you could allow it
You could potentially rely on a mechanism like this. Nonetheless, transactions today are already required to spend at least one UTXO, even if e.g. they could cover the fee with withdrawals. We simply don't change that restriction here.
I think we need a feature like this, but I have a suggestion for an alternate implementation. My concern is that we may regret losing the "can only be read once" affine nature of UTXOs. I think this is a core feature of the UTXO model that we should be hesitant to dispense with. We could implement something that looks more like a Reader Monad than a reference input. To do that we would need to introduce the concept of a constrained (or content-addressable) input and defer some construction of the transaction to the node. All we need to say is that we want any output matching:

```haskell
data TxOut = TxOut {
    txOutAddress   :: Address,
    txOutValue     :: Value,
    txOutDatumHash :: Maybe DatumHash
}
```

and, so long as the transaction reproduces an identical `TxOut`, subsequent transactions can keep reading it. If the transaction does not reproduce exactly this output, then further reading transactions will fail in the node's transaction-construction phase, since they cannot find a matching input. This could be done efficiently by introducing a new content-addressable index. This approach would have some advantages.
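For illustration only, the content-addressable key could be a hash of the output's content rather than of the transaction that created it. The hash function, encoding, and names below are assumptions made for this sketch, not part of the proposal:

```haskell
-- Sketch only: the key identifies an output purely by its content
-- (address, value, datum hash), not by the transaction that produced it.
import qualified Crypto.Hash.SHA256 as SHA256   -- cryptohash-sha256 package
import qualified Data.ByteString as BS

newtype TxOutHash = TxOutHash BS.ByteString
  deriving (Eq, Ord, Show)

-- The three fields are assumed to already be serialised to bytes;
-- a real design would fix a canonical encoding (e.g. CBOR).
contentHash :: BS.ByteString -> BS.ByteString -> BS.ByteString -> TxOutHash
contentHash addrBytes valueBytes datumHashBytes =
  TxOutHash (SHA256.hash (BS.concat [addrBytes, valueBytes, datumHashBytes]))
```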
While it's nice to keep some of the UTXO's current properties, I see three problems with this proposal:
This is an interesting idea, but it loses another property that we care about: that the transaction specifies exactly what it does, and is self-contained. For example, losing that threatens the determinism of script execution (I think we could probably keep it working, but we'd have to think carefully about it). You could imagine implementing something like this as a layer 2 solution, perhaps, where "partial" transactions are elaborated into "fully-specified" transactions in order to be included in the base settlement layer. But I feel less positive about it as a feature of layer 1. Plus, this "constrained script inputs" feature is much more powerful than just reference inputs. It feels odd to implement it just to do referencing. Perhaps there are other use cases that make it compelling, but otherwise it feels like we're adding a powerful feature somewhat blindly, which might lead to unexpected outcomes.
Notably, this means you need to implement more of the logic in the script that locks the output: it has to insist that you produce a matching output when you reference it.
It's not even clear if "controlled referencing" is desirable. And if we did do it, we'd probably want to use a different script than the script that controls spending: taking the oracle example, you don't want the same script to control who can use the data and who can reference it. (Well, maybe you could make it work with a clever script and redeemers.) At any rate, I think the design there is less clear, and your proposal actually gets rid of uncontrolled referencing, which I think is definitely useful. Anyway, I think this is definitely an interesting idea and I'd encourage you to write it up if you're keen.
I am not sure what you mean. Perhaps there is some misunderstanding. I propose we add the ability to look up TxOutRefs on the ledger by the hash of the TxOut triple (Address,Value,Datum). This would be an extension to the current functionality so there is no danger that it will interfere with how anything currently works. This would be an opt-in feature since this requires storing additional data on the ledger. i.e. you make a Tx with an output and flag that it should be stored such that it can be looked up by its TxOutHash. On non-determinism - that's the whole point. We get non-determinism w.r.t. TxOutRef but maintain determinism w.r.t. TxOut content - we can still verify up front that the Tx will validate if a TxOut can be found with the specified TxOutHash. The lookup for a TxOutHash will be a similar cost to the lookup for a TxOutRef. If this step is successful validation proceeds as normal otherwise we fail as normal.
Why do you believe this? I state the opposite to be true. :)
I am not sure what you mean by this. I am not proposing we change how validation works and I agree we should keep validation deterministic.
Perhaps constrained inputs is the wrong name for this. That's a more powerful bag of magic that I agree we shouldn't look into for layer 1. I'm suggesting we could specify inputs and outputs to be TxOuts that already exist on the ledger with exactly matching Address, Value, Datum, identified by a hash of this triple. The node can complete this Tx closure by computing the TxOutRefs as required. This keeps validation deterministic and allows us just enough non-determinism to get reference-input like behaviour without having reference inputs. I'm not sure whether this would be more work to implement than reference inputs. I feel like it may be considered less powerful though since we would lose uncontrolled referencing. My concern is that losing the affine read-once nature of UTXOs might make the computation model more complex and thus harder to audit and prove properties for - though I have no evidence for this, it's just a feeling.
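A minimal sketch of that closure-completion step, assuming a content-addressed index maintained alongside the UTxO set (all names and types here are invented for illustration):

```haskell
-- Sketch only: a content-addressed index mapping the hash of an output's
-- content to its concrete location on the ledger.
import qualified Data.Map.Strict as Map

newtype TxOutHash = TxOutHash String
  deriving (Eq, Ord, Show)

data TxOutRef = TxOutRef String Integer   -- (transaction id, output index)
  deriving (Eq, Show)

type ContentIndex = Map.Map TxOutHash TxOutRef

-- The node resolves each content-addressed input to a concrete TxOutRef;
-- if no matching output exists, the transaction's closure cannot be completed.
resolveByContent :: ContentIndex -> TxOutHash -> Maybe TxOutRef
resolveByContent index h = Map.lookup h index
```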
I'll put some markdown together. 👍
I think that "content-addressed UTXOs" is a good idea, because the location/transaction of the UTXO doesn't matter; the only thing that matters is its content. I don't see why determinism would be lost if you have to specify the exact content of the UTXO, but if you support only specifying it partially, e.g. a UTXO with X token, without caring about the datum, that is a lot harder to make deterministic. However, this doesn't replace the need for a reference input, and it will not work well at all with CIP-33. The two extensions to the ledger aren't mutually exclusive as they don't conflict, so I don't think it's a good reason not to add this feature to the ledger.
@L-as I think CIP-33 could work with "content-addressed UTXOs". We would just need a way of saying "this is the hash of the script, it should be in this UTXO" and have a mechanism for pulling it out of the "content-addressed UTXO". I am in agreement that these two ideas could coexist in the system. Read-only access to the ledger would be very useful. Additionally - perhaps content-addressed UTXOs are more powerful than I initially estimated - consider a UTXO that represents a DEX as a content-addressed UTXO supporting an order book with execution/matching decided by the node. Users could place an order relative to a content-addressed UTXO that they speculate will come into existence and hope that a node will be able to engineer a sequence that supports their transaction. The node could be performing some application specific arbitrage operation to enable the sequencing.
FWIW my belief is that yes, this would be much more work to implement. Reference inputs requires a tag on inputs, some small changes to the ledger rules, and some small changes to the transaction context. "Constrained inputs" requires changes to how nodes construct blocks, changes to the transaction format, new kinds of entity, possibly a content-addressed lookup store... lots of things.
- The spending conditions on referenced outputs are _not_ checked, nor are the witnesses required to be present.
  - i.e. validators are not required to pass (nor are the scripts themselves or redeemers required to be present at all), and signatures are not required for pubkey outputs.
- Referenced outputs are _not_ removed from the UTXO set if the transaction validates.
- Reference inputs _are_ visible to scripts.
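Read operationally, the quoted rules amount to something like the following sketch (simplified, hypothetical types; not the actual ledger rules): both kinds of input are resolved against the UTxO set and made visible to scripts, but only spent inputs change it.

```haskell
-- Sketch only: reference inputs do not participate in the UTxO update.
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

applyTx
  :: Ord txin
  => Set.Set txin         -- spent inputs
  -> Set.Set txin         -- reference inputs (resolved and shown to scripts, left untouched)
  -> Map.Map txin txout   -- outputs created by the transaction
  -> Map.Map txin txout   -- UTxO set before the transaction
  -> Map.Map txin txout   -- UTxO set after the transaction
applyTx spent _referenced produced utxo =
  (utxo `Map.withoutKeys` spent) `Map.union` produced
```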
Together with CIP-33, reference inputs would then do two things: Be part of the context passed to scripts, and if they contain a reference script, that would get added to the witness set. This combination is a bit arbitrary. I worry about a future where we suddenly need to have different types of reference inputs because we need to be able to reference some of the things but not others.
> Together with CIP-33, reference inputs would then do two things: Be part of the context passed to scripts, and if they contain a reference script, that would get added to the witness set. This combination is a bit arbitrary. I worry about a future where we suddenly need to have different types of reference inputs because we need to be able to reference some of the things but not others.
If a transaction's witnesses map contains additional (script hash, script source) pairs from the reference inputs, does it really make a difference? All of the transaction's inputs were intentionally locked by specific script hashes, so they would never unintentionally refer to these additional scripts. In other words, I don't see how including extra/unused scripts in the witnesses would ever change the transaction validation result.
> Be part of the context passed to scripts, and if they contain a reference script, that would get added to the witness set.
My intention was that with CIP-33 any input that corresponds to an output with a reference script would see it added to the witness set. So I claim that reference inputs still only do one (conceptual) thing: they let you look at all the information in an output. It seems reasonable to me that looking at the information in an output that contains a reference script should let you use the reference script as a witness.
Rephrasing your worry, though, what you're suggesting is that we might want to e.g. restrict the information that a reference input lets us look at. I can't see a reason for that, but maybe there is one.
I've updated the text with some clarifications and a small discussion about controlling referencing.
This could potentially be an entire additional address, since the conditions might be any of the normal spending conditions (public key or script witnessing).

However, this would make outputs substantially bigger and more complicated.
I think it's cleaner to add an optional "reference validator script" field rather than having "check inputs". How much overhead will an empty field add to the serialisation of a UTXO?
I also prefer the optional "reference validator script" field; it does seem cleaner. To be really explicit about the behavior: if the reference validation script is absent, it would be the same as validation passing. I like the model of having separate validators for the two semantic actions of referencing vs spending.
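For concreteness, the output shape being discussed might look roughly like the sketch below; the field name and placeholder types are invented for illustration and are not what the CIP specifies.

```haskell
-- Sketch only: an output with an optional, separate condition for referencing.
newtype Address    = Address    String
newtype Value      = Value      [(String, Integer)]
newtype DatumHash  = DatumHash  String
newtype ScriptHash = ScriptHash String

data TxOut = TxOut
  { txOutAddress            :: Address
  , txOutValue              :: Value
  , txOutDatumHash          :: Maybe DatumHash
  , txOutReferenceValidator :: Maybe ScriptHash
    -- ^ if present, this script must approve merely referencing the output;
    --   if absent, referencing behaves as though the check had passed
  }
```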
Dear community, I need some input. It is unclear to me whether controlled referencing (as defined in the CIP text currently) is a key feature for people or not. Please react to this comment with your feelings:
Reference inputs without controlled referencing are useful in applications where the referenced data is not monetized (i.e. no compensation is expected for the data originator), because the dApp is referencing its own intrinsic data. Examples:
Referencing conditions (i.e. different conditions for referencing than spending) are useful in applications where the referenced data itself is being monetized, because the data comes from outside of the dApps that use it. In other words, the data provider does not derive sufficient incentive from the benefits that users gain from using the data, and must be compensated separately in order to provide the data. Examples:
The main benefit that referencing conditions provide is the ability for a user to prove on-chain that she met the data provider's terms for the data that she has used. For example, upon referencing a UTXO with referencing conditions, the user can mint an NFT that will witness to subsequent transactions that the referencing conditions have been met in this transaction. Such NFT witnesses would allow:
On the other hand, the usefulness of "check inputs" controlled referencing (i.e. referencing allowed if spending conditions are met) varies by context:
@GeorgeFlerovsky thank you for that useful summary. I agree with most of what you wrote. A couple of things. A key use case for reference inputs is to support CIP-33 (reference scripts). I think those would be pretty useful with just reference inputs, although there are perhaps interesting opportunities with controlled referencing too. On check inputs:
That's not necessarily true, as I hinted in the text. You can use the redeemer to control it:
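For instance, as a plain-Haskell sketch (constructor and helper names invented for illustration, not actual Plutus code):

```haskell
-- Sketch only: the redeemer tells the validator which action is intended,
-- and the validator applies different conditions for each.
data Action = Spending | Referencing

data Datum = Datum                                               -- placeholder
newtype ScriptContext = ScriptContext { signedByOwner :: Bool }  -- placeholder

validate :: Datum -> Action -> ScriptContext -> Bool
validate _ Spending    ctx = signedByOwner ctx  -- spending requires the owner's signature
validate _ Referencing _   = True               -- referencing is allowed unconditionally
```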
So you can encode different referencing and spending conditions into a single validator. (Aside: this makes me realise that it is not true that check inputs give a proof that you could spend the output... the example above gives exactly a case where you could "check" an output but not spend it!)
Cool! In that case, would it be accurate to say that the following two are equivalent?
In other words, if you squint your eyes, then the "referencing conditions" scheme looks like the redeemer-based approach described above. If they are equivalent, then perhaps we don't need the additional/optional "referencing conditions" field. Of course, there might be some overhead involved with combining spending and referencing logic into a single validator.
I think that is an interesting suggestion, but I am wondering if these two are not equivalent in terms of side effects. I believe one reason for the separate fields is for the transaction author to specify the intended semantics to the interpreter: whether the input participates in the transaction balance check and whether the UTXO gets marked spent. Which makes me wonder if this idiom would get used very much in practice, unless for some reason you wanted to handle both cases with only one script. I believe you could combine them if you added a flag to the Context and it was visible to the interpreter. The current proposal seems very economical, but perhaps not as explicit. Please correct me if I got any of the semantics wrong; I'm not super confident in my understanding of how this all works.
I'm for a separate reference-input field in the context, not for mixing them with ordinary inputs. Also, could they not be packed into a set at the node level, so that it's possible to index them, as is possible for outputs? That would be great to have. Fixing that for inputs would also be great to have, but that's a different story.
Thanks for the input, everyone. Given the timelines on which we'd like to do this work, the lack of design consensus, and the lack of anyone saying that the lack of controlled referencing makes this CIP worthless for them, I'm going to leave it out of scope for this CIP. We can revisit it in future. With that said, I think this is ready to be merged as Draft. I will revisit it and update it to Active once the implementation has progressed and e.g. the CDDL is pinned down.
Yep, that's the current proposal.
That's definitely out of scope. For this proposal I think the interface should be the same as for normal inputs, and if we change them we should change them both. Perhaps you should write a CIP :)
This CIP will be ground-breaking for Cardano oracles, which is what I'm working on. There's no point continuing with the current architecture restrictions (having to spend transaction outputs to read data + only one script can read per block) if CIP-31 is nearby. I realize there are many factors at play, but it sounds like IOG is fast-tracking this CIP, is that right @michaelpj? I'm trying to determine whether I can expect this CIP feature in the next fork/chain update. AFAIK those are scheduled for February and then June, right?
No one is fast-tracking any CIP 😊! Plus, there's a clear separation between CIPs (which are proposals of possible solutions) and actual implementations. While IOG is seemingly working on implementing CIP-0031, CIP-0032 and CIP-0033, they are still following the same process as other CIPs, going through multiple rounds of reviews and validations by editors and the community 👍
Well, to be clear: I would love for IOG to fast-track this particular CIP. It can't come on-chain soon enough IMHO. I say "fast track" because John Woods, Director of Cardano Architecture at IOG, has publicly mentioned this CIP and its two related CIPs twice now in Cardano 360 updates. That's how I found out about them. I understand and respect that all CIPs have to go through the same review and editorial process, but I also realize that the individuals involved can choose to prioritize that work for whatever reason (i.e. "fast track").
This isn't the place to discuss timelines.
Forgive my ignorance @michaelpj, I am brand new to the CIP process. I am an interested party in this CIP; my oracle project will benefit greatly from it. I am trying to plan my own Cardano development activities accordingly. I currently have no sense of whether to expect to see this CIP live in 1 month or 1 year. Please direct me to the correct forum to ask about timelines. Thank you.
@peterVG there would be others willing to discuss the timeline(s) & other advocacy in this forum category if you create a thread there: https://forum.cardano.org/c/developers/cips/122. If & when these forum discussion threads generate insight into a CIP itself, sometimes the authors will also include them in the CIP.
CIP-0031/README.md
Outdated
This is actually a very important feature.
Since anyone can lock an output with any address, addresses are not that useful for identifying _particular_ outputs on chain, and instead we usually rely on looking for particular tokens in the value locked by the output.
Hence, if a script is interested in referring to the data attached to a _particular_ output, it will likely want to look at the value is locked in the output.
Fix: look at the value that is locked in the output
After discussion with the ledger team, there is a preference for sticking to the principle of never silently omitting information. For this reason instead of just omitting reference inputs when creating the transaction context for old scripts, we ban that occurrence. Also, optional datums -> extra datums
I updated the proposal to follow the principle that we should never silently omit information. That means that instead of silently omitting reference inputs when creating the context for old scripts, we will fail, in phase 1, any transaction which spends from an old script and includes reference inputs.
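As a rough sketch of what that phase-1 rule amounts to (hypothetical names and simplified types, not the actual ledger implementation):

```haskell
-- Sketch only: a transaction fails phase-1 validation if it carries reference
-- inputs while also running any script from the old language version.
data ScriptLanguage = PlutusV1 | PlutusV2   -- "PlutusV2" stands for the new version that understands reference inputs
  deriving (Eq, Show)

referenceInputsAllowed
  :: [refIn]            -- the transaction's reference inputs
  -> [ScriptLanguage]   -- languages of the scripts the transaction runs
  -> Bool
referenceInputsAllowed refIns langs =
  null refIns || PlutusV1 `notElem` langs
```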
See also #161 (comment).
This was discussed at Editor meeting 38 (see notes); it is assumed to be on hold until further development (do flag if ready to review again).
It's ready for review.
Reviewed the addition about forbidding usage of reference inputs in conjunction with old Plutus scripts (i.e. V1). This makes sense; limiting foot-guns when building a DeFi ecosystem is definitely worthwhile.
Hi @michaelpj, could you please add a justification for why this should be a phase-1 failure? Thank you! M.
I'm sorry, I don't understand what you're asking for. Everything is phase 1 by default, only actually running scripts is phase 2.
@michaelpj in the last editor meeting, if you recall, someone in the chat brought up the question of why the ledger should fail during phase-1 validation when presented with reference inputs and a PlutusV1 script. So the idea was to provide a rationale for that, but I see that the rationale is actually already there:
Thus, happy to proceed with that one as discussed 👍
Question @michaelpj: Can a transaction reference a UTXO if it's consumed in the same block by another transaction?
Yes, blocks are essentially irrelevant to what's going on in the UTxO set. As long as the transaction that references an output comes before the one that spends it, everything works, even if they are in the same block. It also works in reverse: an output can be created and referenced in the same block as well. But if you send transactions that depend on each other in such short succession that they might end up in the same block, there's some chance that they might not be in the order you've sent them, which would mean that only one of them would actually end up on the chain (and the other one would need to be submitted again).
I think there's one corner that isn't specified here, which is what happens if you try and both reference and spend an input in the same transaction. That's more of a corner case, however, and I don't think it matters terribly much, so we can just pick one in the spec. |
This CIP proposes adding "reference inputs" in the style of Ergo's "data inputs".