
[SPARK-26936][SQL] Fix bug of insert overwrite local dir can not create temporary path in local staging directory #23841

Closed · wants to merge 2 commits
@@ -86,20 +86,21 @@ case class InsertIntoHiveDirCommand(
     val jobConf = new JobConf(hadoopConf)

     val targetPath = new Path(storage.locationUri.get)
-    val writeToPath =
+    val qualifiedPath = FileUtils.makeQualified(targetPath, hadoopConf)
+    val (writeToPath: Path, fs: FileSystem) =
       if (isLocal) {
         val localFileSystem = FileSystem.getLocal(jobConf)
-        localFileSystem.makeQualified(targetPath)
+        (localFileSystem.makeQualified(targetPath), localFileSystem)
       } else {
-        val qualifiedPath = FileUtils.makeQualified(targetPath, hadoopConf)
-        val dfs = qualifiedPath.getFileSystem(jobConf)
-        if (!dfs.exists(qualifiedPath)) {
-          dfs.mkdirs(qualifiedPath.getParent)
-        }
-        qualifiedPath
+        val dfs = qualifiedPath.getFileSystem(hadoopConf)
+        (qualifiedPath, dfs)
       }
+    if (!fs.exists(writeToPath)) {
+      fs.mkdirs(writeToPath)
+    }

-    val tmpPath = getExternalTmpPath(sparkSession, hadoopConf, writeToPath)
+    // The temporary path must be a HDFS path, not a local path.
+    val tmpPath = getExternalTmpPath(sparkSession, hadoopConf, qualifiedPath)
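Beyond resolving the filesystem, the hunk above also changes directory creation: the old code created only the parent of the qualified path, and only on the non-local branch, whereas the fix ensures the write target itself exists on both branches. A minimal self-contained sketch of that ensure-exists step, using `java.nio` as a stand-in for Hadoop's `FileSystem` API (`ensureWriteToPath` is a hypothetical helper, not Spark code):

```scala
import java.nio.file.{Files, Path}

// Hypothetical helper mirroring the fixed behaviour: make sure the write
// target itself exists before staging output, for local and remote alike.
// Files.createDirectories plays the role of Hadoop's fs.mkdirs(writeToPath).
def ensureWriteToPath(writeToPath: Path): Path = {
  if (!Files.exists(writeToPath)) {
    Files.createDirectories(writeToPath)
  }
  writeToPath
}
```

The helper is idempotent, matching the diff's `if (!fs.exists(writeToPath)) fs.mkdirs(writeToPath)` guard.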
Review comment (Member): In case of inserts from non-hive tables, we still need to use a non-local path?

Reply (@beliefer, Contributor and author, Feb 26, 2019): If target path is local, we still need to use a non-local path.
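The exchange above pins down the invariant behind the fix: the staging (temporary) directory always lives on the default, distributed filesystem, while only the final destination may be local. A tiny Scala model of that routing rule (the `FsKind`, `WritePlan`, and `planWrite` names are illustrative only, not Spark's API):

```scala
// Illustrative model only: these types and this helper do not exist in Spark.
sealed trait FsKind
case object LocalFs extends FsKind
case object DistributedFs extends FsKind

final case class WritePlan(stagingFs: FsKind, targetFs: FsKind)

def planWrite(isLocal: Boolean): WritePlan = {
  // The temporary path must be an HDFS path, not a local path,
  // no matter where the final output ends up.
  WritePlan(stagingFs = DistributedFs,
            targetFs = if (isLocal) LocalFs else DistributedFs)
}
```

This is exactly why the diff passes `qualifiedPath` (always non-local) rather than `writeToPath` to `getExternalTmpPath`.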

     val fileSinkConf = new org.apache.spark.sql.hive.HiveShim.ShimFileSinkDesc(
       tmpPath.toString, tableDesc, false)

@@ -111,15 +112,20 @@ case class InsertIntoHiveDirCommand(
       fileSinkConf = fileSinkConf,
       outputLocation = tmpPath.toString)

-    val fs = writeToPath.getFileSystem(hadoopConf)
     if (overwrite && fs.exists(writeToPath)) {
       fs.listStatus(writeToPath).foreach { existFile =>
         if (Option(existFile.getPath) != createdTempDir) fs.delete(existFile.getPath, true)
       }
     }

-    fs.listStatus(tmpPath).foreach {
-      tmpFile => fs.rename(tmpFile.getPath, writeToPath)
+    val dfs = tmpPath.getFileSystem(hadoopConf)
+    dfs.listStatus(tmpPath).foreach {
+      tmpFile =>
+        if (isLocal) {
+          dfs.copyToLocalFile(tmpFile.getPath, writeToPath)
+        } else {
+          dfs.rename(tmpFile.getPath, writeToPath)
+        }
     }
   } catch {
     case e: Throwable =>
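The behavioural core of the second hunk is the final move step: when the destination is local, staged files must be copied out of the distributed staging directory (`copyToLocalFile`), because a `rename` cannot cross filesystems; otherwise a same-filesystem `rename` suffices. A self-contained sketch of that branch, with `java.nio` standing in for Hadoop's `FileSystem` (`moveStagedFile` is a hypothetical helper):

```scala
import java.nio.file.{Files, Path, StandardCopyOption}

// Hypothetical stand-in: Files.copy ~ dfs.copyToLocalFile, Files.move ~ dfs.rename.
def moveStagedFile(tmpFile: Path, writeToPath: Path, isLocal: Boolean): Path = {
  val dest = writeToPath.resolve(tmpFile.getFileName)
  if (isLocal) {
    // Local target: copy across filesystems; the staged file stays behind
    // (the real code leaves the staging dir to be cleaned up separately).
    Files.copy(tmpFile, dest, StandardCopyOption.REPLACE_EXISTING)
  } else {
    // Same (distributed) filesystem: a rename is a cheap metadata operation.
    Files.move(tmpFile, dest, StandardCopyOption.REPLACE_EXISTING)
  }
}
```

The original bug was precisely that the pre-fix code used `rename` unconditionally, which fails when the source is on HDFS and the destination is a local directory.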