
aggfuncs: implement Count with new aggregation framework #7009

Merged
merged 5 commits into pingcap:master from implement-count on Jul 12, 2018

Conversation

Xuanwo
Contributor

@Xuanwo Xuanwo commented Jul 7, 2018

What have you changed? (mandatory)

implement Count with new aggregation framework

More information can be found at #6952

What is the type of the changes (mandatory)?

  • Improvement (non-breaking change which is an improvement to an existing feature)

How has this PR been tested (mandatory)?

  • unit test
  • explain test

Does this PR affect documentation (docs/docs-cn) update? (optional)

No

Refer to a related PR or issue link (optional)

To #6952

@sre-bot
Contributor

sre-bot commented Jul 7, 2018

Hi contributor, thanks for your PR.

This patch needs to be approved by one of the admins. They should reply with "/ok-to-test" to accept this PR for automatic testing.

@Xuanwo Xuanwo force-pushed the implement-count branch from be98e7c to e1b4a14 Compare July 7, 2018 09:03

func (e *countOriginal) UpdatePartialResult(sctx sessionctx.Context, rowsInGroup []chunk.Row, pr PartialResult) error {
p := (*partialResult4Count)(pr)
p.count += int64(len(rowsInGroup))
Member
@zz-jason zz-jason Jul 7, 2018

We should ignore the NULL values in a group and only count the non-NULL values.
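For reference, a minimal sketch of what NULL-skipping could look like in UpdatePartialResult, assuming the argument is an integer column so EvalInt applies (the per-type split is discussed further below; this is illustrative, not the final code in the PR):

func (e *countOriginal) UpdatePartialResult(sctx sessionctx.Context, rowsInGroup []chunk.Row, pr PartialResult) error {
	p := (*partialResult4Count)(pr)
	for _, row := range rowsInGroup {
		// Evaluate the argument only to learn whether it is NULL; NULL values are not counted.
		_, isNull, err := e.args[0].EvalInt(sctx, row)
		if err != nil {
			return errors.Trace(err)
		}
		if isNull {
			continue
		}
		p.count++
	}
	return nil
}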

Contributor Author

Got it.

Contributor Author

As far as I know, in countOriginal the length of e.args is 0, so I can't call e.args[0].EvalInt(sctx, row) while ranging over rowsInGroup.

Member
@zz-jason zz-jason Jul 7, 2018

@Xuanwo

  1. COUNT is special: it can take multiple parameters.
  2. You can only call EvalInt when the type of the input parameter is integer.

Contributor Author
@Xuanwo Xuanwo Jul 7, 2018

I'm confused by expression.Expression.

  1. The comment says that args stores the input arguments for an aggregate function. Where do the input arguments come from? From the user's SQL?
  2. In the AVG aggfuncs, I found the use of e.args[0] and e.args[1]. It looks like e.args[0] represents count and e.args[1] represents sum. When and where are these values determined?

I tried to trace the calling tree but failed to answer these questions.

Member
@zz-jason zz-jason Jul 7, 2018

We execute the aggregate function inside an aggregate operator, the input arguments are from the child of that aggregate operator. For example:

TiDB(localhost:4000) > explain select count(distinct a, b) from t;
+----------------------+------+----------------------------------------------+-------+
| id                   | task | operator info                                | count |
+----------------------+------+----------------------------------------------+-------+
| StreamAgg_8          | root | funcs:count(distinct test.t.a, test.t.b)     | 1.00  |
| └─TableReader_15     | root | data:TableScan_14                            | 3.00  |
|   └─TableScan_14     | cop  | table:t, range:[-inf,+inf], keep order:false | 3.00  |
+----------------------+------+----------------------------------------------+-------+
3 rows in set (0.00 sec)

In this case, we execute the count(distinct a, b) in the aggregate operator StreamAgg_8, and its input arguments come from the child operator, namely TableReader_15.

"In the AVG aggfuncs, I found the use of e.args[0] and e.args[1]. It looks like e.args[0] represents count and e.args[1]represents sum", this kind of AVG function handles the partial result of another aggregate operator, for example:

TiDB(localhost:4000) > explain select avg(a) from t;
+------------------------+------+----------------------------------------------+-------+
| id                     | task | operator info                                | count |
+------------------------+------+----------------------------------------------+-------+
| StreamAgg_16           | root | funcs:avg(col_0, col_1)                      | 1.00  |
| └─TableReader_17       | root | data:StreamAgg_8                             | 1.00  |
|   └─StreamAgg_8        | cop  | funcs:avg(test.t.a)                          | 1.00  |
|     └─TableScan_15     | cop  | table:t, range:[-inf,+inf], keep order:false | 3.00  |
+------------------------+------+----------------------------------------------+-------+
4 rows in set (0.00 sec)

Here we have two aggregate operators, StreamAgg_8 belongs to a coprocessor task and is executed on the TiKV server, StreamAgg_16 belongs to a root task and is executed on the TiDB server.

StreamAgg_8 handles the original data of table t and produces a partial result, which is further read by TableReader_17 and StreamAgg_16. In the partial result of StreamAgg_8, count comes first and sum comes second. This is determined by the planner; see this for more detail.

StreamAgg_16 handles the partial result of StreamAgg_8: it reads count from the first parameter and sum from the second.

Contributor Author

Thank you for the wonderful explanation!

baseAggFunc
}

type partialResult4Count struct {
Member

type partialResult4Count = int64 is better
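A sketch of what the suggested alias changes, mirroring the snippet above (NULL handling from the earlier thread omitted for brevity; illustrative only):

type partialResult4Count = int64

func (e *countOriginal) UpdatePartialResult(sctx sessionctx.Context, rowsInGroup []chunk.Row, pr PartialResult) error {
	// pr is converted the same way as before, but the partial result is now a bare int64.
	p := (*partialResult4Count)(pr)
	*p += int64(len(rowsInGroup))
	return nil
}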

Contributor Author

I will fix it.

// Build count functions which consume the original data and update their
// partial results.
case aggregation.CompleteMode, aggregation.Partial1Mode:
return &countOriginal{baseCount{base}}
Member

This implementation has not properly handled the distinct property. We should implement other AggFuncs for COUNT to count the distinct and non-NULL values in a data group.

Contributor Author

Something like countOriginalWithDistinct?

Member

yes
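A rough shape for such a function, keeping a per-group set of encoded argument values next to the counter (type and field names here are illustrative; the thread below converges on something similar):

// countOriginalWithDistinct reuses baseCount from this PR.
type countOriginalWithDistinct struct {
	baseCount
}

type partialResult4CountWithDistinct struct {
	count int64
	// valSet records the encoded argument tuples already seen in this group,
	// so duplicated (or NULL-containing) tuples are not counted again.
	valSet map[string]struct{}
}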

Member

@Xuanwo Maybe what you need is HashChunkRow() defined in util/codec/codec.go?

Contributor Author

Looks cool, let me check that.

@Xuanwo Xuanwo force-pushed the implement-count branch 4 times, most recently from 2d2fc4e to e07c870 Compare July 7, 2018 18:40
@Xuanwo
Contributor Author

Xuanwo commented Jul 7, 2018

@zz-jason Please take a look again; I'm not sure whether I'm on the right track.

I'm currently failing this test:

FAIL: aggregate_test.go:24: testSuite.TestAggregation

aggregate_test.go:264:
    result.Check(testkit.Rows("2 1"))
/home/xuanwo/Code/go/src/github.com/pingcap/tidb/util/testkit/testkit.go:48:
    res.c.Assert(got, check.Equals, need, res.comment)
... obtained string = "[[1 1]]"
... expected string = "[[2 1]]"
... sql:select count(distinct c), count(distinct a,b) from t, args:[]

I believe the problem is e07c870#diff-0558a5726173797efd5bac024b5a3cadR114. How can I get the right column index from an Expression?

@Xuanwo Xuanwo force-pushed the implement-count branch from e07c870 to 708ae5c Compare July 8, 2018 04:34
@Xuanwo
Contributor Author

Xuanwo commented Jul 8, 2018

After reading https://www.pingcap.com/blog-cn/tidb-source-code-reading-10/, I solved this problem. Please review again.

@Xuanwo Xuanwo force-pushed the implement-count branch from 708ae5c to dab1a01 Compare July 8, 2018 06:11
@Xuanwo
Contributor Author

Xuanwo commented Jul 9, 2018

PTAL @zz-jason

@jackysp jackysp added the contribution This PR is from a community contributor. label Jul 9, 2018
@@ -70,8 +93,8 @@ func buildAvg(aggFuncDesc *aggregation.AggFuncDesc, ordinal int) AggFunc {
case aggregation.DedupMode:
return nil // not implemented yet.

// Build avg functions which consume the original data and update their
// partial results.
// Build avg functions which consume the original data and update their
Contributor

Remove this tab, and likewise on lines 97, 112, and 113.

Contributor Author

It's a mistake introduced by GoLand; I will fix it.

p := (*partialResult4Count)(pr)

for _, row := range rowsInGroup {
_, isNull, err := e.args[0].EvalInt(sctx, row)
Contributor

We should implement countOriginal4Int / countOriginal4Real / countOriginal4String / countOriginal4Decimal / countOriginal4Datetime / countOriginal4Duration / countOriginal4Json, since calling EvalInt on other types will cause an error.
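For illustration, a real-typed variant could mirror the integer one but call EvalReal instead. This is a sketch only, assuming the partialResult4Count int64 alias suggested earlier in this thread; the exact set of countOriginal4XX implementations is whatever the PR settles on:

func (e *countOriginal4Real) UpdatePartialResult(sctx sessionctx.Context, rowsInGroup []chunk.Row, pr PartialResult) error {
	p := (*partialResult4Count)(pr)
	for _, row := range rowsInGroup {
		// EvalReal matches the float argument type, so no type mismatch error is possible here.
		_, isNull, err := e.args[0].EvalReal(sctx, row)
		if err != nil {
			return errors.Trace(err)
		}
		if isNull {
			continue
		}
		*p++
	}
	return nil
}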

Contributor Author

I will fix it.

p := (*partialResult4Count)(pr)

for _, row := range rowsInGroup {
buf, hasNull, err := encodeRow(sctx, e.args, row)
Contributor
@XuHuaiyu XuHuaiyu Jul 10, 2018

  1. This is not correct; we should not encode the whole row. We only check whether the args of the aggregation function are distinct.
  2. encodeRow is expensive; this implementation can be referred to.

Contributor Author

As discussed at #7009 (comment), count may have multiple parameters, like select count(distinct a, b) from t, so we need to evaluate the whole row to judge whether the current row should be counted.

Contributor

  1. Take select count(distinct a, b), c from t; as an example: the row will be [a, b, c], so encoding the whole row is still not correct.
  2. There is already a distinctChecker; we can implement a checkChunkRow function for it.
  3. You can implement a function like EncodeValue4ChunkRow in codec.go and invoke encodeChunkRow in it.

Contributor Author

  1. encodeRow gets the values from the row via e.args.
  2. I think checkChunkRow is what encodeRow does here.
  3. The problem with encodeChunkRow is that I can't get the correct colIdx from the expressions, and it looks like I can't import expression in the chunk package.

Contributor

  1. The allTypes is the fieldTypes of count.Args, and the colIdx is the ordinals of count.Args, e.g. for count(a, b, c) the colIdx is [0,1,2].
  2. I think we'd better change encodeRow to an evalOneRow which only evaluates the results of the expressions, and invoke EncodeValue4ChunkRow outside of it. If you take a look at EncodeValue, you can see that there are 2 parameters, comparable and hash, but encodeRow does not consider them.

Contributor Author

  1. I used to use this logic (invoking HashChunkRow in codec.go instead of encodeRow) but failed the test select count(distinct c), count(distinct a, b) from t. For the distinct c, the ordinal is 0, but the correct ordinal is 2.
  2. I will have a try.

@@ -0,0 +1,131 @@
package aggfuncs
Contributor

We do not need this file.

@Xuanwo
Contributor Author

Xuanwo commented Jul 10, 2018

    1. As discussed in aggfuncs: implement Count with new aggregation framework #7009 (comment), the problem is not solved: only in the Column struct can I get the Index to be used as the colIdx in EncodeValue.
    2. Eval will be removed to reduce the use of Datum, so I can't return []Datum here.
    3. I tried to add an EncodeChunkWithExpression in codec but ran into a cyclic import.
    4. Constants like floatFlag are not exported; I'm not sure whether you want to export them.

So I added a func arg.IsNull(sctx, row), which returns (isNull bool, err error), to allow checking for NULL without evaluating the value.

@Xuanwo Xuanwo force-pushed the implement-count branch from 86b3589 to bf37ad2 Compare July 10, 2018 14:19
continue
}

buf, err := encodeRow(sctx, e.args, row)
Member

count(distinct) is special, we can do it this way for convenience, although it sacrifices some performance:

  1. Extract a function to evaluate an argument and encode it into a provided []byte. As for the appendXXX functions (for example appendInt64), you can refer to https://github.com/pingcap/tidb/blob/master/util/chunk/column.go#L105, which is much faster than the EncodeXXX functions, and thus we can remove the implementation of distinctChecker, which is inefficient:
func (e *countOriginalWithDistinct) evalAndEncode(sctx sessionctx.Context, arg expression.Expression, row chunk.Row, encodedBytes []byte) ([]byte, bool, error) {
    switch arg.GetType().EvalType() {
    case types.ETInt:
        val, isNull, err := arg.EvalInt(sctx, row)
        if err != nil || isNull {
            return encodedBytes, isNull, errors.Trace(err)
        }
        encodedBytes = appendInt64(encodedBytes, val)
    case types.ETReal:
            ...
    }
    return encodedBytes, false, nil
}
  2. Evaluate each argument and encode it into a []byte, then use that []byte to check whether the input parameters are duplicated:
encodedBytes := make([]byte, 0, 8)
for _, row := range rowsInGroup {
    for i := 0; i < len(e.args) && !hasNull; i++  {
        encodedBytes, isNull, err = e.evalAndEncode(sctx, e.args[i], row, encodedBytes)
        if err != nil { return ... }
        if isNull { hasNull = true; break... }
    }
    if hasNull || p.exists[encodedBytes] {
        continue
    }
    p.exists[encodedBytes] = true
    p.count++
}

p := (*partialResult4Count)(pr)

for _, row := range rowsInGroup {
isNull, err := e.args[0].IsNull(sctx, row)
Member

We don't need to implement an IsNull API on the Expression interface, and we'd better not do this:

  • this API is currently only used in COUNT function
  • in the future, we are planning to change the aggregate argument from args []expression.Expression to args []*expression.Column

There are two ways to solve the NULL input issue:

  1. use a single countOriginal to handle all the input types, and:
switch e.args[i].GetType().EvalType() {
case types.ETInt:
    val, isNull, err := e.args[i].EvalInt(ctx, row)
    if err != nil {
        return errors.Trace(err)
    }
    if isNull {
        continue
    }
case types.ETReal:
        ...
}
  2. implement a countOriginal4XX for each input type:
  • countOriginal4Decimal
  • countOriginal4Int64
  • ...

I suggest the second way: it removes the branches introduced by the switch statement, which reduces CPU branch mispredictions and makes better use of the CPU pipeline, so it is more efficient than the first way.

@Xuanwo Xuanwo force-pushed the implement-count branch from 1cf1ebf to f21dbc1 Compare July 11, 2018 16:00
@Xuanwo
Contributor Author

Xuanwo commented Jul 11, 2018

@zz-jason PTAL

I use a map[string]struct{} here because we can't use []byte as a map key, and Go has some optimizations for converting []byte to string.
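For context, the set behind that comment can be a thin wrapper over map[string]struct{}; a sketch whose exist/insert method names mirror the calls that appear later in this thread (the exact definition in the PR may differ):

type stringSet map[string]struct{}

func newStringSet() stringSet {
	return make(stringSet)
}

func (s stringSet) exist(key string) bool {
	_, ok := s[key]
	return ok
}

func (s stringSet) insert(key string) {
	s[key] = struct{}{}
}

Note that Go only elides the []byte-to-string conversion in a few specific patterns (such as a direct map index m[string(b)]); when the converted string is passed to a method like exist or stored by insert, it is still allocated.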

"unsafe"

"github.com/juju/errors"

Member

remove this empty line

return encodedBytes, false, nil
}

func appendInt64(encodedBytes []byte, val int64) (_ []byte) {
Member

how about changing the return value definition from (_ []byte) to []byte?

}

func appendInt64(encodedBytes []byte, val int64) (_ []byte) {
buf := make([]byte, 8)
Member

we can pass buf as a parameter of this function and reuse it if possible, to avoid frequent object allocations.

buf := []byte{}

for _, row := range rowsInGroup {
encodedBytes := make([]byte, 0)
Member

can encodedBytes be reused for every row?
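One common way to do that is to allocate the buffer once and truncate it at the top of the loop rather than reallocating per row (a sketch, not necessarily what the PR ends up doing):

encodedBytes := make([]byte, 0, 8)
for _, row := range rowsInGroup {
	encodedBytes = encodedBytes[:0] // reuse the backing array, only reset the length
	...
}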

@zz-jason zz-jason added this to the 2.1 milestone Jul 11, 2018
@zz-jason zz-jason added type/enhancement The issue or PR belongs to an enhancement. sig/execution SIG execution labels Jul 11, 2018
@Xuanwo
Contributor Author

Xuanwo commented Jul 11, 2018

@zz-jason There is no []byte reset logic like buf[:0] in https://github.com/pingcap/tidb/blob/master/util/chunk/column.go#L105. Don't we need that?

import (
"encoding/binary"
"unsafe"
"github.com/juju/errors"
Member

We usually separate the imported packages into two sections: one for the Go standard packages and one for the third-party packages. For example: https://github.com/pingcap/tidb/blob/master/executor/aggfuncs/aggfuncs.go#L16. So here we should put "github.com/juju/errors" together with the other third-party packages and leave an empty line after the Go standard packages like "unsafe" 😄

Contributor Author

Well, I misread the comment #7009 (comment); I will fix it.

@Xuanwo Xuanwo force-pushed the implement-count branch from 21c6d3c to d7aa82d Compare July 11, 2018 16:52
"github.com/pingcap/tidb/types"
"github.com/pingcap/tidb/types/json"
"github.com/pingcap/tidb/util/chunk"
)
Member

actually, this is what I'm trying to explain:

import (
	"encoding/binary"
	"unsafe"
 
	"github.com/juju/errors"
	"github.com/pingcap/tidb/expression"
	"github.com/pingcap/tidb/sessionctx"
	"github.com/pingcap/tidb/types"
	"github.com/pingcap/tidb/types/json"
	"github.com/pingcap/tidb/util/chunk"
)

Contributor Author

Yes, I got the idea just now.

@Xuanwo Xuanwo force-pushed the implement-count branch from d7aa82d to 6444041 Compare July 11, 2018 16:56
@Xuanwo
Contributor Author

Xuanwo commented Jul 11, 2018

But the reset didn't clean the elemBuf. I think it's because the elemBuf always has a fixed size, so it doesn't need the trimming work like buf[:16] in https://github.com/pingcap/tidb/pull/7009/files#diff-0558a5726173797efd5bac024b5a3cadR366?

@zz-jason
Member

zz-jason commented Jul 11, 2018

@Xuanwo elemBuf doesn't need to be reset; we only use it to write data into a column or read data out of a column. Its content is overwritten every time we read/write an element from/to a column.

@Xuanwo
Contributor Author

Xuanwo commented Jul 11, 2018

So I don't need to reslice the buf, like the following?

func appendInt64(encodedBytes, buf []byte, val int64) []byte {
	*(*int64)(unsafe.Pointer(&buf[0])) = val
	buf = buf[:8]
	encodedBytes = append(encodedBytes, buf...)
	return encodedBytes
}

@zz-jason
Member

@Xuanwo I'm sorry, I misunderstood your last comment. buf = buf[:8] is needed here because you initially allocate buf with a length of 40, not 8, so you have to slice it to buf[:8] to make the following encodedBytes = append(encodedBytes, buf...) correct.

@Xuanwo
Contributor Author

Xuanwo commented Jul 11, 2018

OK, I'm ready for reviewing now.

@zz-jason
Member

Thanks for your great contribution, LGTM 👍

@zz-jason
Member

@XuHuaiyu @winoros PTAL

@zz-jason zz-jason added the status/LGT1 Indicates that a PR has LGTM 1. label Jul 11, 2018
@zz-jason
Member

/run-all-tests

@XuHuaiyu
Contributor

hi, @Xuanwo
The following case would fail.

drop table if exists tab0, tab1;

CREATE TABLE tab0(pk INTEGER PRIMARY KEY, col0 INTEGER, col1 FLOAT, col2 TEXT, col3 INTEGER, col4 FLOAT, col5 TEXT);
INSERT INTO tab0 VALUES(0,854,111.96,'mguub',711,966.36,'snwlo');
INSERT INTO tab0 VALUES(1,518,457.51,'hzanm',251,363.97,'xljvu');
INSERT INTO tab0 VALUES(2,640,325.31,'jempi',596,569.99,'xmtxn');
INSERT INTO tab0 VALUES(3,758,256.26,'lktfw',174,453.33,'imxxc');
INSERT INTO tab0 VALUES(4,98,727.48,'qiyfp',918,376.45,'gavyb');
INSERT INTO tab0 VALUES(5,203,268.89,'nwrqf',885,321.93,'ixrql');
INSERT INTO tab0 VALUES(6,554,593.89,'hdikx',886,12.12,'xzvvo');
INSERT INTO tab0 VALUES(7,195,720.95,'yydxj',108,45.77,'dlbem');
INSERT INTO tab0 VALUES(8,363,102.25,'kmgry',740,9.66,'cussx');
INSERT INTO tab0 VALUES(9,106,745.31,'mwyzu',598,612.17,'aftom');


CREATE TABLE tab1(pk INTEGER PRIMARY KEY, col0 INTEGER, col1 FLOAT, col2 TEXT, col3 INTEGER, col4 FLOAT, col5 TEXT);
CREATE INDEX idx_tab1_0 on tab1 (col0);
CREATE INDEX idx_tab1_1 on tab1 (col1);
CREATE INDEX idx_tab1_3 on tab1 (col3);
CREATE INDEX idx_tab1_4 on tab1 (col4);
INSERT INTO tab1 SELECT * FROM tab0;

SELECT DISTINCT COUNT( DISTINCT + col4 ) col0 FROM tab1  WHERE 1;

want 10, but got 5

@zz-jason
Member

/run-all-tests

return nil
}

func (e *countOriginalWithDistinct) evalAndEncode(
Contributor

add a comment for this function.

break
}
}
if hasNull || p.valSet.exist(string(encodedBytes)) {
Contributor

Extract string(encodedBytes) into a variable to reduce memory usage.
Benchmark:

func BenchmarkBytesToString(b *testing.B) {
	str := ""
	for i := 0; i < 100; i++ {
		str += "0"
	}
	bytes := []byte(str)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		s := newStringSet()
		s.exist(string(bytes))
		s.insert(string(bytes))
	}
}

func BenchmarkBytesToString2(b *testing.B) {
	str := ""
	for i := 0; i < 100; i++ {
		str += "0"
	}
	bytes := []byte(str)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		s := newStringSet()
		tmp := string(bytes)
		s.exist(tmp)
		s.insert(tmp)
	}
}
$ go test -test.bench=. -test.run=None -benchmem

BenchmarkBytesToString-4        10000000               175 ns/op             224 B/op          2 allocs/op
BenchmarkBytesToString2-4       10000000               120 ns/op             112 B/op          1 allocs/op
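Applied to the loop in this diff, the suggestion amounts to converting once and reusing the resulting string. A sketch against the names used above, assuming valSet exposes the exist/insert methods from the benchmark and that the loop ends with an insert and a count increment as in the earlier sketch in this thread:

	key := string(encodedBytes)
	if hasNull || p.valSet.exist(key) {
		continue
	}
	p.valSet.insert(key)
	p.count++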

@zz-jason
Member

@Xuanwo please merge master and resolve conflicts.

@Xuanwo Xuanwo force-pushed the implement-count branch from 4e51e51 to 0b50e7b Compare July 12, 2018 08:43
@Xuanwo
Contributor Author

Xuanwo commented Jul 12, 2018

@zz-jason The code has been rebased.

Contributor
@XuHuaiyu XuHuaiyu left a comment

LGTM

@zz-jason zz-jason added status/LGT2 Indicates that a PR has LGTM 2. and removed status/LGT1 Indicates that a PR has LGTM 1. labels Jul 12, 2018
@zz-jason
Member

/run-all-tests

@zz-jason zz-jason merged commit 4a7869e into pingcap:master Jul 12, 2018
@Xuanwo Xuanwo deleted the implement-count branch July 14, 2018 02:22
Labels
contribution (This PR is from a community contributor.) · sig/execution (SIG execution) · status/LGT2 (Indicates that a PR has LGTM 2.) · type/enhancement (The issue or PR belongs to an enhancement.)