Suppress hdfsFixture if there are spaces in the path #30302

Merged
12 changes: 11 additions & 1 deletion plugins/repository-hdfs/build.gradle
@@ -230,6 +230,11 @@ if (Os.isFamily(Os.FAMILY_WINDOWS)) {
fixtureSupported = true
}

boolean legalPath = rootProject.rootDir.toString().contains(" ") == false
Member
I'm not sure a regular space in the path is what was causing the original issue. The issue was because a space character was HTTP encoded to %20 (and then encoded again to %2520). The % sign with numbers trips up string formatting in Java, which is what caused the exception.

Perhaps it might make sense to try formatting the string the same way HDFS does in a try block, catch the exception and mark the path illegal in that case.
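A minimal Java sketch (not from the PR; the path below is hypothetical) of why a percent-encoded path segment trips up string formatting: the format parser reads `%2520w...` as a width followed by an unknown conversion character and throws:

```java
import java.util.IllegalFormatException;

public class FormatCheckDemo {
    public static void main(String[] args) {
        // Hypothetical workspace path containing a doubly-encoded space ("%2520").
        String encodedPath = "/var/jenkins/my%2520workspace";
        try {
            // Same shape as the HDFS thread-name format: path + "-%d".
            String.format(encodedPath + "-%d", 0);
            System.out.println("format ok");
        } catch (IllegalFormatException e) {
            // "%2520w" parses as a bogus format specifier, so we land here.
            System.out.println("bad path: " + e);
        }
    }
}
```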

Member

The offending code in HDFS does:

ThreadFactory workerFactory = new ThreadFactoryBuilder()
        .setDaemon(true)
        .setNameFormat("FsVolumeImplWorker-" + parent.toString() + "-%d") // <----
        .build();

Which checks the name format eagerly to make sure it works with:

public ThreadFactoryBuilder setNameFormat(String nameFormat) {
    String.format(nameFormat, 0); // fail fast if the format is bad or null
    this.nameFormat = nameFormat;
    return this;
}

So perhaps we could try formatting the string the exact same way HDFS does to ensure max compatibility in this regard.

try {
    // See o.a.h.h.s.d.f.i.FsVolumeImpl#initializeCacheExecutor:150
    String.format(rootProject.rootDir.toString() + '-%d', 0);
} catch (IllegalFormatException ife) {
    legalPath = false;
}

Contributor Author

Ok, the URL encoding hasn't happened by this point, so looking for a literal space finds the issue here:

$ pwd 
/var/lib/jenkins/workspace/elastic+elasticsearch+6.x+multijob-unix-compatibility/os/ubuntu && virtual
$ pwd | xxd
00000000: 2f76 6172 2f6c 6962 2f6a 656e 6b69 6e73  /var/lib/jenkins
00000010: 2f77 6f72 6b73 7061 6365 2f65 6c61 7374  /workspace/elast
00000020: 6963 2b65 6c61 7374 6963 7365 6172 6368  ic+elasticsearch
00000030: 2b36 2e78 2b6d 756c 7469 6a6f 622d 756e  +6.x+multijob-un
00000040: 6978 2d63 6f6d 7061 7469 6269 6c69 7479  ix-compatibility
00000050: 2f6f 732f 7562 756e 7475 2026 2620 7669  /os/ubuntu && vi
00000060: 7274 7561 6c0a                           rtual.

However, from the code you've shown it looks like any path that contains anything that needs URL-encoding is going to cause issues here - % for instance - so I agree that trying to catch an IllegalFormatException seems like it'll be more faithful. Can you dig out how the URL-encoding is happening (e.g. does it use java.net.URLEncoder (twice?!) or is it more complicated)?
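As a hedged illustration of how the double encoding could arise (an assumption for demonstration, not necessarily what Hadoop actually does): `java.net.URI`'s multi-argument constructors always quote the `%` character, so passing an already-encoded path through one turns `%20` into `%2520`:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class DoubleEncodeDemo {
    public static void main(String[] args) throws URISyntaxException {
        // The multi-argument URI constructors quote illegal characters,
        // and always quote '%' itself, even in an already-encoded string.
        String once = new URI("file", null, "/work space", null).getRawPath();  // "/work%20space"
        String twice = new URI("file", null, once, null).getRawPath();          // "/work%2520space"
        System.out.println(once + " -> " + twice);
    }
}
```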

Member

That's a good question. I'll take a look, but I wouldn't be surprised if it's something deep in Hadoop or far away from where it's breaking. If it's not encoded at this point, the space check should be fine enough. Thanks!

if (legalPath == false) {
  fixtureSupported = false
}

// Always ignore HA integration tests in the normal integration test runner, they are included below as
// part of their own HA-specific integration test tasks.
integTestRunner.exclude('**/Ha*TestSuiteIT.class')
@@ -248,7 +253,12 @@ if (fixtureSupported) {
// Only include the HA integration tests for the HA test task
integTestHaRunner.patternSet.setIncludes(['**/Ha*TestSuiteIT.class'])
} else {
logger.warn("hdfsFixture unsupported, please set HADOOP_HOME and put HADOOP_HOME\\bin in PATH")
  if (legalPath) {
    logger.warn("hdfsFixture unsupported, please set HADOOP_HOME and put HADOOP_HOME\\bin in PATH")
  } else {
    logger.warn("hdfsFixture unsupported since there are spaces in the path: '" + rootProject.rootDir.toString() + "'")
  }

// The normal integration test runner will just test that the plugin loads
integTestRunner.systemProperty 'tests.rest.suite', 'hdfs_repository/10_basic'
// HA fixture is unsupported. Don't run them.