[MINOR] Minor English fixes
### What changes were proposed in this pull request?

Minor English grammar and wording fixes.

### Why are the changes needed?

They're not strictly needed, but give the project a tiny bit more polish.

### Does this PR introduce _any_ user-facing change?

Yes, some user-facing error messages have been tweaked.

### How was this patch tested?

No testing beyond CI.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#45461 from nchammas/minor-wording-tweaks.

Authored-by: Nicholas Chammas <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
nchammas authored and zhengruifeng committed Mar 12, 2024
1 parent e778ce6 commit e186581
Showing 4 changed files with 9 additions and 9 deletions.
8 changes: 4 additions & 4 deletions common/utils/src/main/resources/error/error-classes.json
@@ -40,7 +40,7 @@
"AMBIGUOUS_COLUMN_REFERENCE" : {
"message" : [
"Column <name> is ambiguous. It's because you joined several DataFrame together, and some of these DataFrames are the same.",
-"This column points to one of the DataFrame but Spark is unable to figure out which one.",
+"This column points to one of the DataFrames but Spark is unable to figure out which one.",
"Please alias the DataFrames with different names via `DataFrame.alias` before joining them,",
"and specify the column using qualified name, e.g. `df.alias(\"a\").join(df.alias(\"b\"), col(\"a.id\") > col(\"b.id\"))`."
],
@@ -6184,17 +6184,17 @@
},
"_LEGACY_ERROR_TEMP_2109" : {
"message" : [
-"Cannot build HashedRelation with more than 1/3 billions unique keys."
+"Cannot build HashedRelation with more than 1/3 billion unique keys."
]
},
"_LEGACY_ERROR_TEMP_2110" : {
"message" : [
-"Can not build a HashedRelation that is larger than 8G."
+"Cannot build a HashedRelation that is larger than 8G."
]
},
"_LEGACY_ERROR_TEMP_2111" : {
"message" : [
-"failed to push a row into <rowQueue>."
+"Failed to push a row into <rowQueue>."
]
},
"_LEGACY_ERROR_TEMP_2112" : {
@@ -323,11 +323,11 @@ class BlockManagerMasterEndpoint(
val isAlive = try {
driverEndpoint.askSync[Boolean](CoarseGrainedClusterMessages.IsExecutorAlive(executorId))
} catch {
-// ignore the non-fatal error from driverEndpoint since the caller doesn't really
-// care about the return result of removing blocks. And so we could avoid breaking
+// Ignore the non-fatal error from driverEndpoint since the caller doesn't really
+// care about the return result of removing blocks. That way we avoid breaking
// down the whole application.
case NonFatal(e) =>
-logError(s"Fail to know the executor $executorId is alive or not.", e)
+logError(s"Cannot determine whether executor $executorId is alive or not.", e)
false
}
if (!isAlive) {
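The corrected comment and log message above describe a deliberate pattern: a failure while asking the driver whether an executor is alive is logged and treated as "not alive" rather than allowed to break down the whole application. A rough Python analogue of that pattern, with hypothetical names (`ask`, `is_executor_alive`, `flaky_ask`) standing in for `driverEndpoint.askSync` and its callers:

```python
# Defensive liveness check: swallow non-fatal errors from the remote
# endpoint and default to "not alive", mirroring the Scala hunk above.
import logging

def is_executor_alive(ask, executor_id):
    # `ask` stands in for driverEndpoint.askSync; any exception it raises
    # is logged and treated as "unknown", so we conservatively return False.
    try:
        return bool(ask(executor_id))
    except Exception as e:  # plays the role of NonFatal(e) in the Scala code
        logging.error("Cannot determine whether executor %s is alive or not.",
                      executor_id, exc_info=e)
        return False

def flaky_ask(executor_id):
    raise TimeoutError("driver endpoint unreachable")

print(is_executor_alive(lambda _id: True, "exec-1"))  # True
print(is_executor_alive(flaky_ask, "exec-2"))         # False
```

The caller only uses the result to decide whether to clean up blocks, so degrading to `False` on error is safe, which is why the comment says the caller "doesn't really care about the return result."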
2 changes: 1 addition & 1 deletion docs/sql-error-conditions.md
@@ -71,7 +71,7 @@ Column or field `<name>` is ambiguous and has `<n>` matches.
[SQLSTATE: 42702](sql-error-conditions-sqlstates.html#class-42-syntax-error-or-access-rule-violation)

Column `<name>` is ambiguous. It's because you joined several DataFrame together, and some of these DataFrames are the same.
-This column points to one of the DataFrame but Spark is unable to figure out which one.
+This column points to one of the DataFrames but Spark is unable to figure out which one.
Please alias the DataFrames with different names via `DataFrame.alias` before joining them,
and specify the column using qualified name, e.g. `df.alias("a").join(df.alias("b"), col("a.id") > col("b.id"))`.

@@ -102,7 +102,7 @@ class UISeleniumSuite extends SparkFunSuite with WebBrowser {
test("SPARK-44801: Analyzer failure shall show the query in failed table") {
spark = creatSparkSessionWithUI

-intercept[Exception](spark.sql("SELECT * FROM I_AM_A_INVISIBLE_TABLE").isEmpty)
+intercept[Exception](spark.sql("SELECT * FROM I_AM_AN_INVISIBLE_TABLE").isEmpty)
eventually(timeout(10.seconds), interval(100.milliseconds)) {
val sd = findErrorMessageOnSQLUI()
assert(sd.size === 1, "Analyze fail shall show the query in failed table")
