
clientv3: Allow watcher to close channels while waiting for new client #6651

Closed
wants to merge 1 commit

Conversation


@yudai yudai commented Oct 14, 2016

Once watchGrpcStream.run() gets a connection error, it does not do anything
until it gets a new watch client, because newWatchClient(), called in the main
loop, blocks.
In this state, watch response channels returned by the Watcher won't be
closed even if the cancel function of the context given to the Watcher
has been called.
The cancel request will only be executed after a connection to the server has
been re-established.
This commit allows the watcher to run cancel tasks while waiting for a new
client, so that users can cancel watch tasks at any time.


Hi,
I think the current behavior makes sense when you can assume that the connection to the etcd cluster is stable enough and recovers quickly after trouble. However, in situations where a connection outage can last longer, the current behavior can lead to resource leaks or goroutines blocking unexpectedly.

Here's some code to reproduce the situation:

package main

import (
        "fmt"
        "os"
        "os/signal"
        "sync"
        "syscall"
        "time"

        "github.com/coreos/etcd/clientv3"
        "golang.org/x/net/context"
)

func main() {
        c, err := clientv3.New(clientv3.Config{
                Endpoints:   []string{"localhost:2379"},
                DialTimeout: time.Second,
        })
        if err != nil {
                fmt.Println("failed to connect")
                return
        }
        fmt.Println("connected")

        ctx, cancel := context.WithCancel(context.Background())
        wch := c.Watch(ctx, "/")
        fmt.Println("watching")

        var wg sync.WaitGroup
        wg.Add(1)
        go func() {
                defer wg.Done()
                for wresp := range wch {
                        fmt.Println(wresp)
                        // do something
                }
        }()

        // shutdown ETCD here

        waitSignal(cancel)

        // you will wait for the goroutine to be done forever after hitting Ctrl-C if the server is gone

        wg.Wait()
        fmt.Println("done")
}

func waitSignal(cancel func()) {
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
        <-sigs
        cancel()
}

@yudai yudai force-pushed the cancel_watch_on_outage branch 2 times, most recently from 8f27c23 to 70bcd8b Compare October 14, 2016 08:41

xiang90 commented Oct 14, 2016

Can we add a test to illustrate the problem?

@yudai yudai force-pushed the cancel_watch_on_outage branch from 70bcd8b to 1e575fb Compare October 14, 2016 23:10

yudai commented Oct 14, 2016

Added a test to reproduce the problem. With the current code, the test times out because cancel() doesn't close the wch channel at this point.
If you call cli.Close() after cancel(), it closes the channel and makes the test pass; however, we cannot assume users always close clients after canceling a watch request.

@yudai yudai force-pushed the cancel_watch_on_outage branch 4 times, most recently from c7d9ecb to eb63d56 Compare October 18, 2016 00:03

yudai commented Oct 18, 2016

Finally fixed all issues...!


xiang90 commented Oct 19, 2016

@yudai Can we change

func (w *watchGrpcStream) openWatchClient() (ws pb.Watch_WatchClient, err error) 

to

func (w *watchGrpcStream) openWatchClient(ctx context.Context) (ws pb.Watch_WatchClient, err error) 

And make openWatchClient respect the passed-in context?
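
For illustration, here is a minimal, self-contained sketch of the pattern being suggested: the reconnect retry loop checks the caller's context on every iteration instead of blocking until the server comes back. The openWithRetry name and the dial callback are stand-ins for illustration, not the actual etcd clientv3 code.

package main

import (
        "context"
        "errors"
        "fmt"
        "time"
)

// openWithRetry keeps retrying dial until it succeeds or the caller's
// context is cancelled. dial stands in for the real gRPC Watch call.
func openWithRetry(ctx context.Context, dial func(context.Context) error) error {
        for {
                select {
                case <-ctx.Done():
                        return ctx.Err() // stop retrying once the context is cancelled
                default:
                }
                if err := dial(ctx); err == nil {
                        return nil
                }
                time.Sleep(100 * time.Millisecond) // back off before the next attempt
        }
}

func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
                time.Sleep(300 * time.Millisecond)
                cancel()
        }()
        err := openWithRetry(ctx, func(context.Context) error {
                return errors.New("server unreachable")
        })
        fmt.Println(err) // prints "context canceled" once the caller gives up
}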


yudai commented Oct 19, 2016

@xiang90 Got it. Is it OK to pass w.ctx from callers for now?

@yudai yudai force-pushed the cancel_watch_on_outage branch from eb63d56 to 9a9894d Compare October 19, 2016 23:04

xiang90 commented Oct 19, 2016

@yudai

After reading the code again, I think there is a more serious issue with how the stream handles contexts in general.

First, each stream can have multiple substreams, one created for each individual watcher.

So cancelling one watcher should not cancel the entire stream, or it will affect the other watchers on the same stream. We probably got this wrong even before your code change.

Your change makes it worse by cancelling the ctx on the main stream when a watcher cancellation happens.


xiang90 commented Oct 19, 2016

@yudai Ignore my previous reply. I was wrong. We actually mask the ctx cancellation with https://github.com/coreos/etcd/blob/master/clientv3/watch.go#L193-L202.

So yes, you cannot use the ctx on w; it won't work. We need to respect the ctx on each wr:
https://github.com/coreos/etcd/blob/master/clientv3/watch.go#L146
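
For context, the masking mentioned above can be pictured with a small standalone sketch: a wrapper context that keeps the parent's values but never propagates its cancellation. The valueOnlyContext name below is assumed for illustration and is not necessarily what clientv3 uses.

package main

import (
        "context"
        "fmt"
        "time"
)

type key struct{}

// valueOnlyContext forwards Value lookups to its parent but reports no
// deadline and never signals Done, so cancelling the parent does not
// propagate through it.
type valueOnlyContext struct{ context.Context }

func (valueOnlyContext) Deadline() (time.Time, bool) { return time.Time{}, false }
func (valueOnlyContext) Done() <-chan struct{}       { return nil }
func (valueOnlyContext) Err() error                  { return nil }

func main() {
        parent, cancel := context.WithCancel(context.WithValue(context.Background(), key{}, "token"))
        masked := valueOnlyContext{parent}
        cancel()
        // the value is still visible, but the cancellation is masked
        fmt.Println(masked.Value(key{}), masked.Err()) // prints: token <nil>
}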


yudai commented Oct 20, 2016

Let me double check.

Each watchGrpcStream has its own masked context (valCtx, w.ctx) and a cancel func (w.cancel).
https://github.com/yudai/etcd/blob/9a9894db6f7a66229c887caca3bbff873501d3da/clientv3/watch.go#L214-L216

I think, when we want to cancel watchGrpcStream.openWatchClient(), we want to throw away the instance of watchGrpcStream itself. I therefore think what watchGrpcStream.openWatchClient() should get is w.ctx. w.cancel() is called only when w no longer has any substream. If there is at least one substream, we keep trying to open a new client and don't call cancel().
https://github.com/yudai/etcd/blob/9a9894db6f7a66229c887caca3bbff873501d3da/clientv3/watch.go#L508-L511

wr.ctx is the same ctx that users give to Watch(), and it's set on the watchRequest.
https://github.com/coreos/etcd/blob/master/clientv3/watch.go#L238-L239
Then wr is consumed by watchGrpcStream.run() via reqc.
https://github.com/coreos/etcd/blob/master/clientv3/watch.go#L424
Then it's used in watchGrpcStream.serveSubstream() to break the loop there.
https://github.com/coreos/etcd/blob/master/clientv3/watch.go#L622

I'm afraid that respecting wr.ctx in watchGrpcStream.openWatchClient() is not right. I'm actually a bit confused, so I'm sorry if I'm wrong.

@yudai yudai force-pushed the cancel_watch_on_outage branch from 9a9894d to c2cad27 Compare October 20, 2016 01:19

xiang90 commented Oct 20, 2016

I think, when we want to cancel watchGrpcStream.openWatchClient()

We do not want to cancel that. For each stream, we can have multiple watchers, each created by a separate w.Watch() call. Cancelling that would affect all watchers on this stream. We only want to cancel the substream we created for the watcher with that ctx.


xiang90 commented Oct 20, 2016

@yudai

The main stream goroutine catches all substream closings here:

https://github.com/coreos/etcd/blob/master/clientv3/watch.go#L496-L498

During its initialization, it won't catch any closing event, so the cancellation won't work. I think this is the root problem that we need to solve. /cc @heyitsanthony


for {
        select {
        case opc := <-clientOpenC:
Contributor

@yudai OK. Now I understand this better. We move part of the logic into serveWatchClient to solve race conditions?

                wc = opc.wc
                closeErr = opc.err
        case ws := <-w.closingc:
                w.closeSubstream(ws)
Contributor

can we make this a func? this is the same as line 529-534.

Contributor Author

It could make the code a bit complicated, because closing is a local variable tied to this loop, and in the if block there is a return to break the loop.
If moving the closing variable to the struct makes sense, I'll do that.

@@ -671,7 +694,7 @@ func (w *watchGrpcStream) joinSubstreams() {
 }
 
 // openWatchClient retries opening a watchclient until retryConnection fails
-func (w *watchGrpcStream) openWatchClient() (ws pb.Watch_WatchClient, err error) {
+func (w *watchGrpcStream) openWatchClient(ctx context.Context) (ws pb.Watch_WatchClient, err error) {
Contributor

i was wrong. this change is actually not necessary.

@@ -543,21 +575,41 @@ func (w *watchGrpcStream) dispatchEvent(pbresp *pb.WatchResponse) bool {

// serveWatchClient forwards messages from the grpc stream to run()
func (w *watchGrpcStream) serveWatchClient(wc pb.Watch_WatchClient) {
Contributor

do not modify serveWatchClient? add a new func for line 578->594?

Contributor

Also, in the initialization case, we do not even need lines 578-594.

Contributor

@heyitsanthony left a comment

This patch is really complicated for what it does.

time.Sleep(time.Second * 3)
clus.Members[0].Terminate(t)
time.Sleep(time.Second * 3)
donec := make(chan struct{})
Contributor

the donec is unneeded synchronization:

go cancel()
select {
...
}
if err := cli.Close(); err != nil {
...

@heyitsanthony

The way to go is probably to rearrange the newWatchClient code to accept cancels while the watch goroutines are down:

func (w *watchGrpcStream) newWatchClient() (pb.Watch_WatchClient, error) {                                                                                    
        // mark all substreams as resuming                                                                                                                    
        if len(w.substreams)+len(w.resuming) > 0 {                                                                                                            
                close(w.resumec)                                                                                                                              
                w.resumec = make(chan struct{})                                                                                                               
                w.joinSubstreams()                                                                                                                            
                for _, ws := range w.substreams {
                        ws.id = -1
                        w.resuming = append(w.resuming, ws)                                                                                                   
                }       
        }               
        w.substreams = make(map[int64]*watcherStream)                                                                                                         

        // accept cancels while reconnecting                                                                                                                  
        donec := make(chan struct{})                                                                                                                          
        var wg sync.WaitGroup
        wg.Add(len(w.resuming))                                                                                                                               
        for i := range w.resuming {                                                                                                                           
                ws := w.resuming[i]                                                                                                                           
                go func() {                                                                                                                                   
                        defer wg.Done()                                                                                                                       
                        select {
                        case <-ws.initReq.ctx.Done():                                                                                                         
                                ws.closing = true                                                                                                             
                                close(ws.outc)                                                                                                                
                        case <-donec:
                        }
                }()                                                                                                                                           
        }

        // connect to grpc stream                                                                                                                             
        wc, err := w.openWatchClient()

        // clean up cancel waiters                                                                                                                            
        close(donec)                                                                                                                                          
        wg.Wait()                                                                                                                                             

        for _, ws := range w.resuming {
                if ws == nil || ws.closing {                                                                                                                  
                        continue                                                                                                                              
                }                                                                                                                                             
                ws.donec = make(chan struct{})                                                                                                                
                go w.serveSubstream(ws, w.resumec)                                                                                                            
        }               
        if err != nil {
                return nil, v3rpc.Error(err)
        }
        // receive data from new grpc stream
        go w.serveWatchClient(wc)
        return wc, nil
}

(it still fails the test but with a goroutine leak because of an unrelated issue)

@yudai
Copy link
Contributor Author

yudai commented Oct 20, 2016

@heyitsanthony I tested your code and found that a goroutine leak, probably the same one you saw, happens at
https://github.com/coreos/etcd/blob/master/clientv3/watch.go#L408

                        select {
                        case <-ws.initReq.ctx.Done():                                                                                                         
                                ws.closing = true                                                                                                             
                                close(ws.outc)                                                                                                                
                        case <-donec:
                        }

I'm guessing this part needs the w.closingc <- ws send that the defer in serveSubstream() has. However, adding it here doesn't work because w.closingc is unbuffered.

I came up with an idea that replaces close(ws.outc) with w.closeSubstream(ws) to bypass the w.closingc handling in run(); however, this function is currently not goroutine safe (add a mutex?). I think an if len(w.substreams)+len(w.resuming) == 0 { return } in run() is also required to break the loop even without closing the client.

@yudai yudai force-pushed the cancel_watch_on_outage branch from c2cad27 to 00db101 Compare October 20, 2016 21:53
@heyitsanthony

@yudai The len(w.substreams)+len(w.resuming) stuff is broken in general; I'm in the middle of debugging it. The resume path here shouldn't need to send w.closingc <- ws; it already knows how to dispose of the channel...

@yudai yudai force-pushed the cancel_watch_on_outage branch from 00db101 to 0261d3c Compare October 20, 2016 21:54

yudai commented Oct 20, 2016

I updated the code to restore newWatchClient(). I just renamed the function to setupWatchClient, because the openWatchClient()/newWatchClient() pair looks confusing.


yudai commented Oct 20, 2016

@heyitsanthony
Oh, it's broken...
Here's the commit I tested based on your code.
yudai@8c7c3af

Currently I'm getting

=== RUN   TestWatchCancelWithNoConnection
2016-10-20 15:06:09.419815 I | integration: launching node6316200646778702967 (unix://localhost:node6316200646778702967.sock.bridge)
2016-10-20 15:06:09.421858 I | etcdserver: name = node6316200646778702967
2016-10-20 15:06:09.421930 I | etcdserver: data dir = /tmp/etcd866721465
2016-10-20 15:06:09.422007 I | etcdserver: member dir = /tmp/etcd866721465/member
2016-10-20 15:06:09.422071 I | etcdserver: heartbeat = 10ms
2016-10-20 15:06:09.422130 I | etcdserver: election = 100ms
2016-10-20 15:06:09.422186 I | etcdserver: snapshot count = 0
2016-10-20 15:06:09.422264 I | etcdserver: advertise client URLs = unix://127.0.0.1:21002.13341.sock
2016-10-20 15:06:09.422345 I | etcdserver: initial advertise peer URLs = unix://127.0.0.1:21001.13341.sock
2016-10-20 15:06:09.422436 I | etcdserver: initial cluster = node6316200646778702967=unix://127.0.0.1:21001.13341.sock
2016-10-20 15:06:09.425302 I | etcdserver: starting member 92b6d632644f5b7a in cluster 94f6eed5e01d70c9
2016-10-20 15:06:09.425432 I | raft: 92b6d632644f5b7a became follower at term 0
2016-10-20 15:06:09.425535 I | raft: newRaft 92b6d632644f5b7a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2016-10-20 15:06:09.425613 I | raft: 92b6d632644f5b7a became follower at term 1
2016-10-20 15:06:09.431198 I | etcdserver: set snapshot count to default 10000
2016-10-20 15:06:09.431280 I | etcdserver: starting server... [version: 3.1.0-rc.0+git, cluster version: to_be_decided]
2016-10-20 15:06:09.433396 I | etcdserver/membership: added member 92b6d632644f5b7a [unix://127.0.0.1:21001.13341.sock] to cluster 94f6eed5e01d70c9
2016-10-20 15:06:09.436851 I | integration: launched node6316200646778702967 (unix://localhost:node6316200646778702967.sock.bridge)
2016-10-20 15:06:09.446299 I | raft: 92b6d632644f5b7a is starting a new election at term 1
2016-10-20 15:06:09.446386 I | raft: 92b6d632644f5b7a became candidate at term 2
2016-10-20 15:06:09.446477 I | raft: 92b6d632644f5b7a received vote from 92b6d632644f5b7a at term 2
2016-10-20 15:06:09.446586 I | raft: 92b6d632644f5b7a became leader at term 2
2016-10-20 15:06:09.446676 I | raft: raft.node: 92b6d632644f5b7a elected leader 92b6d632644f5b7a at term 2
2016-10-20 15:06:09.447538 I | etcdserver: setting up the initial cluster version to 3.1
2016-10-20 15:06:09.447677 I | etcdserver: published {Name:node6316200646778702967 ClientURLs:[unix://127.0.0.1:21002.13341.sock]} to cluster 94f6eed5e01d70c9
2016-10-20 15:06:09.447981 N | etcdserver/membership: set the initial cluster version to 3.1
2016-10-20 15:06:09.448266 I | etcdserver/api: enabled capabilities for version 3.1
2016-10-20 15:06:12.460784 I | integration: terminating node6316200646778702967 (unix://localhost:node6316200646778702967.sock.bridge)
2016-10-20 15:06:12.461211 I | etcdserver/api/v3rpc: transport: http2Client.notifyError got notified that the client transport was broken EOF.
2016-10-20 15:06:12.462021 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix localhost:node6316200646778702967.sock.bridge: connect: no such file or directory"; Reconnecting to {localhost:node6316200646778702967.sock.bridge <nil>}
2016-10-20 15:06:12.464081 I | integration: terminated node6316200646778702967 (unix://localhost:node6316200646778702967.sock.bridge)
2016-10-20 15:06:13.462126 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix localhost:node6316200646778702967.sock.bridge: connect: no such file or directory"; Reconnecting to {localhost:node6316200646778702967.sock.bridge <nil>}
2016-10-20 15:06:15.244513 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix localhost:node6316200646778702967.sock.bridge: connect: no such file or directory"; Reconnecting to {localhost:node6316200646778702967.sock.bridge <nil>}
2016-10-20 15:06:15.464863 I | etcdserver/api/v3rpc: grpc: addrConn.transportMonitor exits due to: context canceled
--- PASS: TestWatchCancelWithNoConnection (6.05s)
PASS
Too many goroutines running after all test(s).
1 instances of:
context.propagateCancel.func1(...)
        /home/yudai/.gvm/gos/go1.7.1/src/context/context.go:262 +0x188
created by context.propagateCancel
        /home/yudai/.gvm/gos/go1.7.1/src/context/context.go:267 +0x281
1 instances of:
github.com/coreos/etcd/clientv3.(*watchGrpcStream).run.func1(...)
        /home/yudai/tmp/etcddev/src/github.com/coreos/etcd/clientv3/watch.go:408 +0x307
github.com/coreos/etcd/clientv3.(*watchGrpcStream).run(...)
        /home/yudai/tmp/etcddev/src/github.com/coreos/etcd/clientv3/watch.go:488 +0x1c85
created by github.com/coreos/etcd/clientv3.(*watcher).newWatcherGrpcStream
        /home/yudai/tmp/etcddev/src/github.com/coreos/etcd/clientv3/watch.go:222 +0x62c

@heyitsanthony

@yudai #6636 should fix that goroutine leak


yudai commented Oct 20, 2016

@heyitsanthony Thanks. I'll wait for the PR to be merged.


xiang90 commented Oct 21, 2016

@yudai FYI: #6636 is merged.


yudai commented Oct 21, 2016

@xiang90 Thanks.

I rebased the branch onto master and am now getting a hang.
https://github.com/yudai/etcd/commits/cancel_watch_newWatchClient

It seems that when you cancel a substream while the watcher has no connection, client.Close() hangs. I'm looking into it.

@yudai yudai force-pushed the cancel_watch_on_outage branch from 0261d3c to 9e1f951 Compare October 21, 2016 20:58

xiang90 commented Oct 26, 2016

@yudai Any progress on this? Or anything we can help with to move this forward?

@yudai yudai force-pushed the cancel_watch_on_outage branch from 9e1f951 to 0e61165 Compare October 27, 2016 01:55

yudai commented Oct 27, 2016

@xiang90 I spent some time trying to fix the issues I found in the branch:
https://github.com/yudai/etcd/commits/cancel_watch_newWatchClient

However, I still have not found a good way to fix them while keeping the function clean.
We need to cancel w.openWatchClient() in that func when all substreams have been canceled; however, to achieve that, I assume we need to add a somewhat complicated mechanism to the func.
I assume the loop in run() is supposed to be the place to handle events and manage concurrency, so I'm wondering whether it's OK to make newWatchClient() larger and more complicated. I guess it would make the code harder to read and maintain.

I personally think it would be better to manage events/concurrency in a single place in run(), just like my patch in this PR.

@heyitsanthony

@yudai Can you please update this PR with the newWatchClient changes? run is not the place to handle cancellation while re-establishing a new watch client; it's meant to forward events to the watch stream goroutines, but those goroutines are down during reconnect.

@yudai yudai force-pushed the cancel_watch_on_outage branch 2 times, most recently from 1435d36 to 338405d Compare October 28, 2016 08:05

yudai commented Oct 28, 2016

@heyitsanthony I updated the PR 👍

I added a bit of additional code to break the loop from newWatchClient().

The patch still needs some care to pass TestWatchOverlapDropConnContextCancel: closing ws.outc conflicts with closeSubstream().

@yudai yudai force-pushed the cancel_watch_on_outage branch from 338405d to 1adb7e1 Compare October 28, 2016 09:18
        if err != nil {
                return nil, v3rpc.Error(err)
        }
func (w *watchGrpcStream) newWatchClient() (ws pb.Watch_WatchClient, err error) {
Contributor

@heyitsanthony This function is pretty fat right now. I feel we should rename it to newOrResumeWatchClient, or we can separate the new and resume cases.

Contributor

The new/resume cases should be identical; I think the way to trim it is to have a new function like
func (wgs *watchGrpcStream) waitOnResumeCancels() (chan struct{}, *sync.WaitGroup)
that returns donec and the waitgroup for all the resume-on-cancellation goroutine work.

Contributor

@heyitsanthony Does the new case need to go through the resume path? It can know that there is nothing pending (pending + resuming == 0) and can skip all the resume logic. I think the reasoning behind it is that we simply don't have to care, since new and resume can be identical anyway.

My worry is that this func becomes fat and affects readability. And yes, I hope we can group the resuming logic in the func in a better way, probably by moving it to a new function.

Contributor

I think a new stream should go through the same path as a resume; there's no significant difference between creating a new stream and resuming a stream (aside from the resume having its initial revision updated). We used to have that distinction but it made reasoning about the resume path complicated and error-prone.

The function can be trimmed down by splitting out the goroutine phase in newWatchClient that waits for the cancels into a separate function (e.g., waitOnResumeCancels) and having newWatchClient call that.
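
A rough sketch of that split, reusing the field names from the newWatchClient snippet earlier in this thread (illustrative only, not the merged implementation):

// waitOnResumeCancels spawns one goroutine per resuming substream that
// marks the substream as closing and closes its output channel if the
// watcher's context is cancelled while the stream is still reconnecting.
// The caller closes the returned donec and waits on the WaitGroup once
// the reconnect attempt finishes.
func (w *watchGrpcStream) waitOnResumeCancels() (chan struct{}, *sync.WaitGroup) {
        donec := make(chan struct{})
        wg := &sync.WaitGroup{}
        wg.Add(len(w.resuming))
        for i := range w.resuming {
                ws := w.resuming[i]
                go func() {
                        defer wg.Done()
                        select {
                        case <-ws.initReq.ctx.Done():
                                ws.closing = true
                                close(ws.outc)
                        case <-donec:
                        }
                }()
        }
        return donec, wg
}

newWatchClient would then call donec, wg := w.waitOnResumeCancels() before opening the gRPC stream, and close(donec); wg.Wait() afterwards, as in that snippet.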

Contributor

OK. fine with me. /cc @yudai


xiang90 commented Nov 3, 2016

@heyitsanthony @yudai Can we move this forward?

@heyitsanthony

@xiang90 I can clobber this PR with one that's good to merge if you don't want to wait on @yudai


yudai commented Nov 4, 2016

Sorry for the late reply. I was out for a trip.

I'm totally fine with leaving this issue to @heyitsanthony. The ideal fix would require some larger changes, so I think it would be great if a maintainer could handle it 👍

@heyitsanthony

Fixed by #6816


yudai commented Nov 8, 2016

@heyitsanthony @xiang90 Thank you very much for the fix!
