
Add error section in report and the rest of the queries #9150

Merged 6 commits on Sep 7, 2023
@@ -78,6 +78,30 @@ class TableGenerator(scaleFactor: Int, complexity: Int, seed: Int, spark: SparkS
candidates(random.nextInt(candidates.length))
}

private val keyGroup2ColumnTypeCandidate: Seq[String] = Seq(
"string",
"decimal(7, 2)",
"decimal(15, 2)",
"int",
"long",
"timestamp",
"date",
"struct<num: long, desc: string>"
)

// Use one constant column type list for both c_data and d_data so their key
// columns can be compared; otherwise we would hit a column type mismatch error.
private val randomColumnTypesKeyGroup2: Seq[String] = {
  (1 to complexity).map(_ =>
    keyGroup2ColumnTypeCandidate(random.nextInt(keyGroup2ColumnTypeCandidate.length)))
}

private def expandColumnsForKeyGroup2(prefix: String): String = {
(1 to complexity).map(i => s"${prefix}_$i").zip(randomColumnTypesKeyGroup2).map {
case (colName, colType) => s"$colName $colType"
}.mkString(", ")
}

/**
a_facts: Each scale factor corresponds to 10,000 rows
- primary_a the primary key from key group 1
@@ -142,7 +166,7 @@ class TableGenerator(scaleFactor: Int, complexity: Int, seed: Int, spark: SparkS
*/
private def genCData(): DataFrame = {
val schema = "c_foreign_a long," +
-  (1 to complexity).map(i => s"c_key2_$i ${randomColumnType()}").mkString(",") + "," +
+  expandColumnsForKeyGroup2("c_key2") + "," +
"c_data_row_num_1 long," +
(1 to 5).map(i => s"c_data_$i ${randomColumnType()}").mkString(",") + "," +
(1 to 5).map(i => s"c_data_numeric_$i ${randomNumericColumnType()}").mkString(",")
@@ -166,7 +190,7 @@ class TableGenerator(scaleFactor: Int, complexity: Int, seed: Int, spark: SparkS
- 10 data columns
*/
private def genDData(): DataFrame = {
-  val schema = (1 to complexity).map(i => s"d_key2_$i ${randomColumnType()}").mkString(",") +
+  val schema = expandColumnsForKeyGroup2("d_key2") +
"," +
(1 to 10).map(i => s"d_data_$i ${randomColumnType()}").mkString(",")
val dData = dbgen.addTable("d_data", schema, dNumRows)
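Taken together, these helpers sample the key-group-2 column types once and reuse them for both tables. Below is a minimal, self-contained sketch of that idea; the `complexity` value, seed, and shortened candidate list are illustrative assumptions, not the generator's actual configuration:

```scala
import scala.util.Random

object KeyGroup2SchemaSketch extends App {
  val complexity = 4          // assumed; the real value comes from the CLI
  val random = new Random(42) // assumed fixed seed, mirroring TableGenerator's seed param
  val candidates = Seq("string", "decimal(7, 2)", "int", "long", "timestamp")

  // Sampled a single time, so c_key2_* and d_key2_* get identical types.
  val columnTypes: Seq[String] =
    (1 to complexity).map(_ => candidates(random.nextInt(candidates.length)))

  def expandColumns(prefix: String): String =
    (1 to complexity).map(i => s"${prefix}_$i").zip(columnTypes).map {
      case (name, tpe) => s"$name $tpe"
    }.mkString(", ")

  println(expandColumns("c_key2")) // e.g. c_key2_1 int, c_key2_2 string, ...
  println(expandColumns("d_key2")) // same types, so join keys always line up
}
```

Because the types are drawn a single time, the `c_key2_*` and `d_key2_*` columns always match, which the equi-join queries below depend on.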
integration_tests/ScaleTest.md (40 additions, 0 deletions)
@@ -23,6 +23,45 @@ The queries are defined in the source code. You can check the table below to see
| Query | Description | Query pattern |
|-------|-------------|----------------|
| q1 | Inner join with lots of ride along columns | SELECT a_facts.*, b_data_{1-10} FROM b_data JOIN a_facts WHERE primary_a = b_foreign_a |
| q2 | Full outer join with lots of ride along columns | SELECT a_facts.*, b_data_{1-10} FROM b_data FULL OUTER JOIN a_facts WHERE primary_a = b_foreign_a |
| q3 | Left outer join with lots of ride along columns | SELECT a_facts.*, b_data_{1-10} FROM b_data LEFT OUTER JOIN a_facts WHERE primary_a = b_foreign_a |
| q4 | Left anti-join with lots of ride along columns. | SELECT c_data.* FROM c_data LEFT ANTI JOIN a_facts WHERE primary_a = c_foreign_a |
| q5 | Left semi-join with lots of ride along columns. | SELECT c_data.* FROM c_data LEFT SEMI JOIN a_facts WHERE primary_a = c_foreign_a |
| q6 | Exploding inner large key count equi-join followed by min/max agg. | SELECT c_key2_*, COUNT(1), MIN(c_data_*), MAX(d_data_*) FROM c_data JOIN d_data WHERE c_key2_* = d_key2_* GROUP BY c_key2_* |
| q7 | Exploding full outer large key count equi-join followed by min/max agg. | SELECT c_key2_*, COUNT(1), MIN(c_data_*), MAX(d_data_*) FROM c_data FULL OUTER JOIN d_data WHERE c_key2_* = d_key2_* GROUP BY c_key2_* |
| q8 | Exploding left outer large key count equi-join followed by min/max agg. | SELECT c_key2_*, COUNT(1), MIN(c_data_*), MAX(d_data_*) FROM c_data LEFT OUTER JOIN d_data WHERE c_key2_* = d_key2_* GROUP BY c_key2_* |
| q9 | Left semi large key count equi-join followed by min/max agg. | SELECT c_key2_*, COUNT(1), MIN(c_data_*) FROM c_data LEFT SEMI JOIN d_data WHERE c_key2_* = d_key2_* GROUP BY c_key2_* |
| q10 | Left anti large key count equi-join followed by min/max agg. | SELECT c_key2_*, COUNT(1), MIN(c_data_*) FROM c_data LEFT ANTI JOIN d_data WHERE c_key2_* = d_key2_* GROUP BY c_key2_* |
| q11 | No obvious build side inner equi-join. (Shuffle partitions should be set to 10) | SELECT b_key3_*, e_data_*, b_data_* FROM b_data JOIN e_data WHERE b_key3_* = e_key3_* |
| q12 | No obvious build side full outer equi-join. (Shuffle partitions should be set to 10) | SELECT b_key3_*, e_data_*, b_data_* FROM b_data FULL OUTER JOIN e_data WHERE b_key3_* = e_key3_* |
| q13 | No obvious build side left outer equi-join. (Shuffle partitions should be set to 10) | SELECT b_key3_*, e_data_*, b_data_* FROM b_data LEFT OUTER JOIN e_data WHERE b_key3_* = e_key3_* |
| q14 | Both sides large left semi equi-join. (Shuffle partitions should be set to 10) | SELECT b_key3_*, b_data_* FROM b_data LEFT SEMI JOIN e_data WHERE b_key3_* = e_key3_* |
| q15 | Both sides large left anti equi-join. (Shuffle partitions should be set to 10) | SELECT b_key3_*, b_data_* FROM b_data LEFT ANTI JOIN e_data WHERE b_key3_* = e_key3_* |
| q16 | Extreme skew conditional AST inner join. | SELECT a_key4_1, a_data_(1-complexity/2), f_data_(1-complexity/2) FROM a_facts JOIN f_facts WHERE a_key4_1 = f_key4_1 && (a_data_low_unique_1 + f_data_low_unique_1) = 2 |
| q17 | Extreme skew conditional AST full outer join. | SELECT a_key4_1, a_data_(1-complexity/2), f_data_(1-complexity/2) FROM a_fact FULL OUTER JOIN f_fact WHERE a_key4_1 = f_key4_1 && (a_data_low_unique_1 + f_data_low_unique_1) = 2 |
| q18 | Extreme skew conditional AST left outer join. | SELECT a_key4_1, a_data_(1-complexity/2), f_data_(1-complexity/2) FROM a_fact LEFT OUTER JOIN f_fact WHERE a_key4_1 = f_key4_1 && (a_data_low_unique_1 + f_data_low_unique_1) = 2 |
| q19 | Extreme skew conditional AST left anti join. | SELECT a_key4_1, a_data_* FROM a_fact LEFT ANTI JOIN f_fact WHERE a_key4_1 = f_key4_1 && (a_data_low_unique_1 + f_data_low_unique_1) != 2 |
| q20 | Extreme skew conditional AST left semi join. | SELECT a_key4_1, a_data_* FROM a_fact LEFT SEMI JOIN f_fact WHERE a_key4_1 = f_key4_1 && (a_data_low_unique_1 + f_data_low_unique_1) = 2 |
| q21 | Extreme skew conditional NON-AST inner join. | SELECT a_key4_1, a_data_(1-complexity/2), f_data_(1-complexity/2) FROM a_fact JOIN f_fact WHERE a_key4_1 = f_key4_1 && (length(concat(a_data_low_unique_len_1, f_data_low_unique_len_1))) = 2 |
| q22 | Group by aggregation, not a lot of combining, but lots of aggregations, and CUDF does sort agg internally. | SELECT b_key3_*, complexity number of aggregations that are SUMs of 2 or more numeric data columns multiplied together or MIN/MAX of any data column FROM b_data GROUP BY b_key3_*. |
| q23 | Reduction with lots of aggregations | SELECT complexity number of aggregations that are SUMs of 2 or more numeric data columns multiplied together or MIN/MAX of any data column FROM b_data. |
| q24 | Group by aggregation with lots of combining, lots of aggs, and CUDF does hash agg internally | SELECT g_key3_*, complexity number of aggregations that are SUM/MIN/MAX/AVERAGE/COUNT of 2 or more byte columns cast to int and added, subtracted, multiplied together. FROM g_data GROUP BY g_key3_* |
| q25 | collect set group by agg | select g_key3_*, collect_set(g_data_enum_1) FROM g_data GROUP BY g_key3_* |
| q26 | collect list group by agg with some hope of succeeding. | select b_foreign_a, collect_list(b_data_1) FROM b_data GROUP BY b_foreign_a |
| q27 | Running Window with skewed partition by columns, and single order by column with small number of basic window ops (min, max, sum, count, average, row_number) | select {min(g_data_1), max(g_data_1), sum(g_data_2), count(g_data_3), average(g_data_4), row_number} over (UNBOUNDED PRECEDING TO CURRENT ROW PARTITION BY g_key3_* ORDER BY g_data_row_num_1) |
| q28 | Ranged Window with large range (lots of rows preceding and following) skewed partition by columns and single order by column with small number of basic window ops (min, max, sum, count, average) | select {min(g_data_1), max(g_data_1), sum(g_data_2), count(g_data_3), average(g_data_4)} over (RANGE BETWEEN 1000 * scale_factor PRECEDING AND 5000 * scale_factor FOLLOWING PARTITION BY g_key3_* ORDER BY g_data_row_num_1) |
| q29 | unbounded preceding and following window with skewed partition by columns, and single order by column with small number of basic window op (min, max, sum, count, average) | select {min(g_data_1), max(g_data_1), sum(g_data_2), count(g_data_3), average(g_data_4)} over (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING PARTITION BY g_key3_* ORDER BY g_data_row_num_1) |
| q30 | running window with no partition by columns and single order by column with small number of basic window ops (min, max, sum, count, average, row_number) | select {min(g_data_1), max(g_data_1), sum(g_data_2), count(g_data_3), average(g_data_4)} over (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ORDER BY g_data_row_num_1) |
| q31 | ranged window with large range (lots of rows preceding and following) no partition by columns and single order by column with small number of basic window ops (min, max, sum, count, average) | select {min(g_data_1), max(g_data_1), sum(g_data_2), count(g_data_3), average(g_data_4)} over (RANGE BETWEEN 1000 * scale_factor PRECEDING AND 5000 * scale_factor FOLLOWING ORDER BY g_data_row_num_1) |
| q32 | unbounded preceding and following window with no partition by columns and single order by column with small number of basic window ops (min, max, sum, count, average) | select {min(g_data_1), max(g_data_1), sum(g_data_2), count(g_data_3), average(g_data_4)} over (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ORDER BY g_data_row_num_1) |
| q33 | Lead/Lag window with skewed partition by columns and single order by column. | select {lag(g_data_1, 10 * scale_factor), lead(g_data_2, 10 * scale_factor)} OVER (PARTITION BY g_key3_* ORDER BY g_data_row_num_1) |
| q34 | Lead/Lag window with no partition by columns and single order by column. | select {lag(g_data_1, 10 * scale_factor), lead(g_data_2, 10 * scale_factor)} OVER (ORDER BY g_data_row_num_1) |
| q35 | Running window with complexity/2 in partition by columns and complexity/2 in order by columns. | select {min(c_data_1), max(c_data_2)} over (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW PARTITION BY c_key2_(1 to complexity/2) ORDER BY c_key2_(complexity/2 to complexity)) |
| q36 | unbounded to unbounded window with complexity/2 in partition by columns and complexity/2 in order by columns. | select {min(c_data_1), max(c_data_2)} over (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING PARTITION BY c_key2_(1 to complexity/2) ORDER BY c_key2_(complexity/2 to complexity)) |
| q37 | Running window with simple partition by and order by columns, but complexity window operations as combinations of a few input columns | select {complexity aggregations min/max of any column or SUM of two or more numeric columns multiplied together} over (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW PARTITION BY c_foreign_a ORDER BY c_data_row_num_1) |
| q38 | Ranged window with simple partition by and order by columns, but complexity window operations as combinations of a few input columns | select {complexity aggregations min/max of any column or SUM of two or more numeric columns multiplied together} over (RANGE BETWEEN 10 PRECEDING AND 10 FOLLOWING PARTITION BY c_foreign_a ORDER BY c_data_row_num_1) |
| q39 | unbounded window with simple partition by and order by columns, but complexity window operations as combinations of a few input columns | select {complexity aggregations min/max of any column or SUM of two or more numeric columns multiplied together} over (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING PARTITION BY c_foreign_a ORDER BY c_data_row_num_1) |
| q40 | COLLECT SET WINDOW (We may never really be able to do this well) | select array_sort(collect_set(f_data_low_unique_1)) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW PARTITION BY f_key4_* order by f_data_row_num_1) |
| q41 | COLLECT LIST WINDOW (We may never really be able to do this well) | select collect_list(f_data_low_unique_1) OVER (ROWS BETWEEN complexity PRECEDING and CURRENT ROW PARTITION BY f_key4_* order by f_data_row_num_1) |
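The `*` wildcards and `{...}` lists above expand according to the configured complexity. As an illustration, here is a sketch of how the q6 pattern could be expanded for complexity = 2; the column names follow the table's conventions, but this is not the harness's exact query text:

```scala
// Hypothetical expansion of the q6 pseudo query for complexity = 2.
val complexity = 2
val keys = (1 to complexity).map(i => s"c_key2_$i")
// Join each c_key2_i against the matching d_key2_i column.
val joinCond = keys.map(k => s"$k = d_${k.stripPrefix("c_")}").mkString(" AND ")
val q6 = s"""SELECT ${keys.mkString(", ")}, COUNT(1), MIN(c_data_1), MAX(d_data_1)
            |FROM c_data JOIN d_data ON $joinCond
            |GROUP BY ${keys.mkString(", ")}""".stripMargin
// Produces:
//   SELECT c_key2_1, c_key2_2, COUNT(1), MIN(c_data_1), MAX(d_data_1)
//   FROM c_data JOIN d_data ON c_key2_1 = d_key2_1 AND c_key2_2 = d_key2_2
//   GROUP BY c_key2_1, c_key2_2
```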


## Submit

@@ -44,6 +83,7 @@
```
Usage: ScaleTest [options] <scale factor> <complexity> <format> <input directory
--iterations <value>   iterations to run for each query. Default: 1
--queries <value>      Specify which queries to run, as comma-separated query names, e.g. --queries q1,q2,q3. If not specified, all queries will be run for `--iterations` rounds
--timeout <value>      timeout for each query in milliseconds. Default: 10 minutes (600000)
--dry                  Flag argument. Only print the queries without executing them
```

An example command to launch the Scale Test:
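A hypothetical invocation via spark-submit; the main class name, jar file, and argument values are illustrative assumptions based on the usage string above:

```
# main class and jar name are assumed; positional args follow the usage string above
spark-submit \
  --class com.nvidia.spark.rapids.tests.scaletest.ScaleTest \
  scale-test.jar \
  10 100 parquet /path/to/input ...
```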