[WIP]feat: Protocol to support contract-contract interaction #473
base: master
Conversation
Thank you @ltzmaxwell. Can you add a pure .gno test too?
Thank you for your contribution. Can you help us understand what we are trying to achieve with innerMsg, std.Result and std.Response?
Spoiler: I didn't review your implementation yet, but will do soon. @piux2, one of the exciting points with contract-contract messages is if it can create a new
Another point could be to support a new kind of update proxy, where you can dynamically change the "import path" by calling a
My idea may not have much to do with this issue, but I thought about adding something like res := std.Call("gno.land/r/demo/user", "GetUser", "username"). In some exceptional cases, it seems that dynamic imports are needed in contracts rather than static imports (e.g. a GRC20 token management app?).
I was thinking:
var nft grc721.IGRC721 = std.Import("gno.land/r/demo/nft")
nft.TransferFrom(...)
I also thought of
The idiomatic way to call between contracts is to import the realm package and call its methods to access the package state stored in the package variables. The caller in the call path can be accessed through std.GetCallerAt(i).
Fundamentally, the Gno VM is modeled differently than the EVM. In Solidity, call(txdata) takes txdata as bytecode; the EVM stores and executes that bytecode, and the Solidity contract itself is not stored or executed on chain. We should avoid low-level .call() whenever possible when executing another contract's function, as it bypasses type checking, function existence checks, and argument packing. The GNO VM is a Go-syntax interpreter: it does not store and execute bytecode. It stores elements of parsed .gno files as types, objects, and blocknodes, and the VM executes these parsed elements (types, values, expressions, variables, and such). In GNO, the real "raw txdata" in the VM is these parsed objects encoded in amino + proto.
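As an illustration of the idiomatic static-import pattern described above, here is a minimal Gno-style sketch; the realm path gno.land/r/demo/counter and its Incr function are hypothetical, not part of this PR.
package myrealm

import "gno.land/r/demo/counter" // hypothetical realm, imported statically

// Bump calls into the imported realm; the callee's package variables
// are read and updated as part of the same transaction.
func Bump() int {
    return counter.Incr(1)
}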
This is worth some discussion to lay out the pros, cons, and trade-offs. One valid use case for dynamic importing is the proxy contract upgrade. However, the proxy contract upgrade is a controversial practice: basically, it trades off security, trust, and complexity for a certain flexibility. What other use cases do we have for dynamic package loading?
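For reference, a proxy-upgrade pattern built on a dynamic-call primitive could look roughly like the sketch below. std.Call is the primitive proposed elsewhere in this thread, not an existing API, and the paths and function names are assumptions.
package proxy

import "std"

// hypothetical path of the current implementation realm
var target = "gno.land/r/demo/token_v1"

// SetTarget lets an authorized account point the proxy at a new implementation.
func SetTarget(newPath string) {
    // access-control checks omitted for brevity
    target = newPath
}

// Transfer forwards the call to whichever implementation is currently
// configured, using the hypothetical dynamic-call primitive.
func Transfer(to string, amount string) string {
    return std.Call(target, "Transfer", to, amount)
}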
import "std" | ||
|
||
func main(){ | ||
// define msg here, marshal it and print to stdout, to trigger a msg call |
I think how the system works here could be better documented somewhere.
Thank you for the PR! A minimal use-case example may be helpful for me and others.
Thanks guys, for your questions and discussions; they have given me a lot of inspiration, much of which I did not expect when I started this work. Really appreciate it!

Motivation

First, I was looking for a way to dynamically call another contract from a host contract. Then I realized that a messaging mechanism might be a better alternative, so some experimentation was done.

General Flow

This is fairly straightforward, while the challenge is to keep the forward & backward flow atomic, consistent, and robust.

InnerMsg

InnerMsg is a wrapper of std.Msg (for now only MsgCall is supported), with an additional field.

std.Result

InnerMsg is wrapped in std.Result, along with some other fields like data, event, log, etc. std.Result is defined in stdlibs and converted to a Go type for further handling: to return an sdk.Result if no InnerMsgs are returned, or to feed the dispatcher.

Dispatcher

The dispatcher works as a coordinator: it converts an innerMsg to a standard MsgCall, feeds the VM message handler, then does a ...

Test & Simulate

Since the message mechanism involves some preprocessing, such as message validation and dispatching, the tests should include this as well. A simulator is a tool to simulate a MsgCall for its entire lifecycle. The way we write tests may change a bit; what we need to do is: ...

Potential problems and possibilities

There may be some potential problems, like infinite loops, permission issues, etc., but I think they would be solvable. Some other exciting possibilities may also arise: since (dynamic) interoperability is provided, some idioms and paradigms of development work will change as a result, bringing more surprises and possibilities.

Responding to concerns

Does it bypass type checking? Msgs are injected into the VM keeper rather than going through a low-level call; it's standalone MsgCalls chained together, so I think it is not bypassing the typecheck. Are all args strings? Yes. InnerMsg is converted to a std.Msg before the loop begins.
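To make the dispatcher role above more concrete, here is a minimal Go sketch of what such a dispatch loop could look like. The type names and shapes (InnerMsg, Result, the handler signature) are assumptions based on this description, not the actual implementation.
package dispatcher

import "errors"

// InnerMsg mirrors the fields the VM keeper needs to build a MsgCall
// on behalf of the calling contract (assumed shape).
type InnerMsg struct {
    Caller, PkgPath, Func string
    Args                  []string
}

// Result is the assumed Go-side counterpart of std.Result.
type Result struct {
    Data      string
    InnerMsgs []InnerMsg
}

// callFn stands in for the VM keeper's MsgCall handler.
type callFn func(caller, pkgPath, fn string, args []string) (Result, error)

// Dispatch keeps converting returned InnerMsgs into standard calls and
// feeding them back to the handler until no inner messages remain.
// Any error aborts the whole flow, keeping it atomic.
func Dispatch(handle callFn, res Result) (Result, error) {
    for len(res.InnerMsgs) > 0 {
        inner := res.InnerMsgs[0]
        next, err := handle(inner.Caller, inner.PkgPath, inner.Func, inner.Args)
        if err != nil {
            return Result{}, errors.New("inner call failed: " + err.Error())
        }
        res = next
    }
    return res, nil
}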
hi @moul, are these potentially related? (Line 62 in 76d23ef)
This comment was not based on a planned feature, but was an invitation to discuss whether something could make sense. Maybe we can imagine something different to test your workflow, like having new testing helpers/drivers. Maybe we can also define a shareable format made for running a suite of TXs against a localnet or a production network. I'll keep thinking.
We need more input on this one. My current feeling is that we should have at least two methods to make inter-contract calls:
These contracts are not individual services when they call each other or in the callback flow. The entire call or callback flow is atomic, since the GNO VM puts each function called by the contracts on the stack first and executes them in FILO fashion. In the end, the package variable states are updated and committed to the state store. If there is any exception, the VM panics and the final state is not committed. The GNO VM even handles stack overflow if there are recursive calls; it does slow down the VM, but the GNO VM will recover, and no contract variable states are updated either.
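A small Gno-style sketch of the atomicity described above; the realm paths and functions are illustrative only.
package a

import "gno.land/r/demo/b" // hypothetical callee realm

var counter int

// Do updates this realm's state and then calls into realm b.
// If b.MightPanic panics, the whole transaction reverts and the
// increment to counter is never committed.
func Do() {
    counter++
    b.MightPanic()
}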
Not sure, but an idea came up: is it possible to build a unified model, where a standard msg could be routed to another contract, or to another chain (IBC)?
Having a way to call contracts between different chains linked by IBC is a great idea! The IBC channel is an abstraction on top of messages broadcast to the network. There is no actual point-to-point IBC message call directly between micro-services or chains. Instead, the IBC message is broadcast to the entire network of chain A like many other messages; the relayer scans the state of the local chain, sees the message that carries the IBC information for chain B, and then relays it to the destination chain B. In other words, IBC is an abstraction modeled like a TCP connection, but the underlying implementation is more like a distributed database sync facilitated by IBC relayers.

High-level proposal

Due to the nature of IBC, to call contracts on a destination chain connected through IBC, we will need to define a global realm URI format included in the IBC message, for example ibc_chain_id + local realm URI. We do not need to build a proxy or anything else to facilitate the message transfer. The message is delivered as part of the IBC message as follows:
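The original snippet is not preserved in this thread; as a rough illustration of the global realm URI idea, a possible message shape is sketched below. The field names are assumptions, not a defined format.
package ibc

// A cross-chain call target combines the IBC chain id with the local
// realm path, e.g. "chain-b#gno.land/r/demo/greet".
type CrossChainCall struct {
    ChainID string   // ibc_chain_id of the destination chain
    PkgPath string   // local realm URI on that chain
    Func    string   // function to invoke
    Args    []string // stringified arguments
}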
Atomic function calls cross chainsThis will only be possible if we coordinate locking and releasing the contracts state on both chains between calls. However, there is no instant and real-time finality guaranteed cross chains. We need strong use cases to justify the complexity introduced and the trade-off between usability and functionality. Call back between contracts cross chainsApplication developers can implement either interface callback or pass a function as parameters to achieve cross-chain contract calls using the global realm URI. We must decide whether we allow callback and recursive calls between cross-chain contracts.
Thank you @piux2 for the comment, it's amazing.
Update

In the V1 implementation, an InnerMsg is introduced, which closely resembles a MsgCall. It is embedded within a VMResult, which is then returned to the VM keeper. Subsequently, the VM keeper constructs a MsgCall targeting the desired contract and initiates a callback to the calling contract. The contract-side pseudocode appears as follows:
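The snippet referenced here is not preserved in this thread; a minimal sketch of what the described V1 contract side might look like (an InnerMsg embedded in the returned result, plus a callback entry point) is given below. The exact names and shapes of std.InnerMsg, std.Result, and OnResult are assumptions.
package caller

import "std"

// Do returns an InnerMsg as part of its result; the VM keeper then
// builds a MsgCall to the target contract and, once that call has
// finished, calls back into this realm (e.g. OnResult).
func Do() std.Result {
    msg := std.InnerMsg{ // assumed shape
        PkgPath: "gno.land/r/demo/greet",
        Func:    "Greet",
        Args:    []string{"hey"},
    }
    return std.Result{InnerMsgs: []std.InnerMsg{msg}}
}

// OnResult is the callback the VM keeper invokes with the callee's result.
func OnResult(res string) {
    // handle the result of the inner call
}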
Alternatively, a more RPC-style approach can be introduced using std.Call. This method accepts requests from the contract side and sends them to the VM keeper through a channel, where they are then processed. The code looks like this:
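Again, the original snippet is missing here; based on the description and the std.Call form suggested earlier in this thread, the RPC-style variant might look roughly like the following. std.Call is the proposed primitive, not an existing API.
package caller

import "std"

// Greet performs a synchronous, RPC-style call into another realm and
// uses the result directly.
func Greet() string {
    res := std.Call("gno.land/r/demo/greet", "Greet", "hey")
    return "hello " + res
}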
However, improvements can be made to achieve a more idiomatic implementation, such as using client.XXX(req) instead of std.Call. Maybe introducing a protocol buffer schema and code generation to build types and bindings could be helpful, with std.Call wrapped within. Moreover, regardless of the style chosen, both have the potential to connect with IBC, since messages can flow to another contract, either within the same VM or on a different chain, as long as the necessary rules are established. So, the question is: which style is better? Not limited to this question, any suggestions, corrections, and clarifications will be helpful. Thank you!
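As an aside, the client.XXX(req) style mentioned above could be a thin generated binding over std.Call; a hypothetical sketch (the client type, realm path, and method are assumptions):
package greetclient

import "std"

// Client is a hypothetical generated binding for a greet realm.
type Client struct {
    PkgPath string
}

// Greet wraps std.Call so callers get a typed method instead of passing
// the package path and function name as raw strings.
func (c Client) Greet(name string) string {
    return std.Call(c.PkgPath, "Greet", name)
}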
stdlibs/stdlibs.go
Outdated
@@ -141,6 +141,61 @@ func InjectPackage(store gno.Store, pn *gno.PackageNode) {
            m.PushValue(res0)
        },
    )
    // TODO: IBC handshake process
    // this is called manually; maybe there should be some modifier for the contract, like `IBC`,
    // so the IBC channel is initialized during deployment and can be used more efficiently
The IBC channel should be chain-managed, with a consistent queue creation method initiated by the contract or the chain.
stdlibs/stdlibs.go
Outdated
    // )
    // m.PushValue(res0)

    go SendData(msg)
Do you plan to have something like this?
go SendData(msg)
errChan := make(chan error)
go SendData(msg, errChan)
recvChan := make(chan string)
select {
case recv := <-recvChan: // ...
case <-time.After(timeout): // ...
}
Otherwise, I'm curious what your plan is.
We have two ways to achieve this. One is using a channel to send and receive the msg, with a timeout set properly; this is more like a synchronous call, blocking until the call is finished.
Another way is using a callback, where the VM keeper initiates a callback to the caller after the target call is finished; this is more like an asynchronous call, but brings complexity compared to the first one.
Both have pros and cons; what do you think?
I'm considering eliminating the goroutine since we could have events in a queue without waiting.
Can you begin implementation as per your preference, and we can discuss it later? Perhaps I can propose an alternative solution.
A prototype; here is a brief introduction:
Contract:
The caller contract is something like this: it first encodes the parameters, invokes std.Await (exact name to be determined; it is named await since it's a synchronous call), and waits for the result.
package hello

import "std"

func Hello() string {
    chainID := std.GetChainID()
    call := std.EncodeMsgCall(chainID, "gno.land/r/demo/x/calls/await/greet", "Greet", []string{"hey"})
    r, err := std.Await(call)
    println("done ")
    println("r is :", r)
    println("errMsg is: ", err)
    return "hello" + r
}
The encoder is very rough, just a simple stringification like:
package std

// TODO: a better encoder
func EncodeMsgCall(ChainID, PkgPath, Fn string, args []string) string {
    var as string
    for _, a := range args {
        as += a
    }
    return ChainID + "#" + PkgPath + "#" + Fn + "#" + as
}
Note: the ChainID is used temporarily to identify whether this is an in-VM call or an IBC call.
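Note also that concatenating the args without a separator makes decoding ambiguous; one possible improvement (a sketch, not part of this PR) is to join and split on the same delimiter:
package std

import "strings"

// EncodeMsgCall joins the fields with "#" so DecodeMsg can split them
// back unambiguously (assuming "#" never appears inside an argument).
func EncodeMsgCall(chainID, pkgPath, fn string, args []string) string {
    return chainID + "#" + pkgPath + "#" + fn + "#" + strings.Join(args, ",")
}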
Stdlib:
The stringified call is decoded into a GnoMsg, which mostly resembles MsgCall but has an additional ChainID field and a Response field, a chan string used as a callback to retrieve the result.
pn.DefineNative("Await",
gno.Flds( // params
"call", "string",
),
gno.Flds( // results
"bz", "string",
"err", "string",
),
func(m *gno.Machine) {
// println("std.Send")
arg0 := m.LastBlock().GetParams1()
call := arg0.TV.GetString()
// println("call: ", call)
gnoMsg, err := vmh.DecodeMsg(call)
if err != nil {
panic("parameter invalid")
}
resQueue := make(chan string, 1)
gnoMsg.Response = resQueue
// send msg
vmkeeper.DispatchInternalMsg(gnoMsg)
// XXX: how this determined, since calls will accumulate
// should have an estimation like gas estimation?
timeout := 3 * time.Second
println("block, waiting for result...")
// TODO: err handling
var result string
select {
case result = <-resQueue:
println("callback in recvMsg: ", result)
case <-time.After(timeout):
panic("time out")
// case err = <- errQueue:
}
// TODO: return err to contract
var errMsg string
if err != nil {
errMsg = err.Error()
}
res := gno.Go2GnoValue(
m.Alloc,
m.Store,
reflect.ValueOf(result),
)
m.PushValue(res)
m.PushValue(typedString(gno.StringValue(errMsg)))
},
)
VMKeeper:
An eventLoop routine is always listening for new events from contracts and then handles them.
func (vmk *VMKeeper) startEventLoop() {
    for {
        select {
        case msg := <-vmk.internalMsgQueue:
            go vmk.HandleMsg(msg)
        }
    }
}
The handler calls the callee contract in-VM or through IBC. We don't have IBC yet, so a channel is used to simulate the IBC loop.
func (vmk *VMKeeper) HandleMsg(msg vmh.GnoMsg) {
    println("------HandleMsg, routine spawned ------")
    // prepare call
    msgCall, isLocal, response, err := vmk.preprocessMessage(msg)
    if err != nil {
        panic(err.Error())
    }
    // do the call
    if isLocal {
        println("in VM call")
        println("msgCall: ", msgCall.Caller.String(), msgCall.PkgPath, msgCall.Func, msgCall.Args[0])
        r, err := vmk.Call(vmk.ctx, msgCall)
        println("call finished, res: ", r)
        // have a return
        if err == nil {
            response <- r
        }
    } else { // IBC call
        println("IBC call")
        // send IBC packet, waiting for OnRecv
        vmk.ibcResponseQueue = response
        vmk.SendIBCMsg(vmk.ctx, msgCall)
    }
}
So it's a basic prototype; I wonder if it's the right path?
To be determined and done:
- Sync or Async? If it's a cascading call, like a->b->c, the return value would be useful, so a synchronous call is needed; but if it's something with no strong dependencies, an async call with events emitted would be enough.
- The proper name;
- the std.GetXXXCaller related work. This may relate to some security issues; should we get the whole call graph to solve the impersonation issues? (see the sketch after this list)
- gas management;
- a better encoder and decoder?
- more use cases and tests.
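Regarding the std.GetXXXCaller item above, a guard in the callee could look roughly like the sketch below; std.GetCallerAt(i) is the accessor mentioned earlier in this thread, the address is a placeholder, and whether the index semantics stay meaningful with chained inner calls is exactly the open question.
package greet

import "std"

// admin is a placeholder; in practice this would be the realm or account
// allowed to trigger Greet.
var admin = std.Address("g1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

// Greet rejects calls that do not come from the expected caller. With
// chained inner calls, the keeper has to preserve the whole call path
// so this kind of check cannot be impersonated.
func Greet(msg string) string {
    if std.GetCallerAt(2) != admin {
        panic("unauthorized caller")
    }
    return "hi " + msg
}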
pkgs/sdk/vm/vm_wrapper.go
Outdated
    "github.com/gnolang/gno/stdlibs"
)

type Wrapper struct {
Is there a more specific name we can use?
A CallContext?
Is there a better word than wrapper or context?
I'm not sure this structure is really making the program easier to understand.
I do agree that we want some kind of refactoring but I think it would be better to revert the Wrapper stuff here and to make a separate PR later with perhaps a different refactor.
yes, absolutely, I'm splitting it out to another PR, #718.
@ltzmaxwell do you think this PR needs a bit of modification now that #667 has been merged?
Description