Remove "nodes/0" folder prefix from data path #42489

Merged · 9 commits · May 28, 2019
10 changes: 5 additions & 5 deletions docs/reference/commands/shard-tool.asciidoc
@@ -51,14 +51,14 @@ $ bin/elasticsearch-shard remove-corrupted-data --index twitter --shard-id 0
Please make a complete backup of your index before using this tool.


Opening Lucene index at /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/
Opening Lucene index at /var/lib/elasticsearchdata/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/

>> Lucene index is corrupted at /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/
>> Lucene index is corrupted at /var/lib/elasticsearchdata/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/

Opening translog at /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/
Opening translog at /var/lib/elasticsearchdata/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/


>> Translog is clean at /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/
>> Translog is clean at /var/lib/elasticsearchdata/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/


Corrupted Lucene index segments found - 32 documents will be lost.
@@ -93,7 +93,7 @@ POST /_cluster/reroute

You must accept the possibility of data loss by changing parameter `accept_data_loss` to `true`.

Deleted corrupt marker corrupted_FzTSBSuxT7i3Tls_TgwEag from /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/
Deleted corrupt marker corrupted_FzTSBSuxT7i3Tls_TgwEag from /var/lib/elasticsearchdata/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/

--------------------------------------------------

17 changes: 17 additions & 0 deletions docs/reference/migration/migrate_8_0/node.asciidoc
@@ -14,3 +14,20 @@
The `node.max_local_storage_nodes` setting was deprecated in 7.x and
has been removed in 8.0. Nodes should be run on separate data paths
to ensure that each node is consistently assigned to the same data path.

[float]
==== Change of data folder layout

While data was previously stored in `$DATA_DIR/nodes/$nodeOrdinal`, it is
now, with the removal of the `node.max_local_storage_nodes` setting, stored
directly in `$DATA_DIR`. On startup, Elasticsearch checks whether there is
data in the old location and automatically moves it to the new location.
This automatic migration only works if `$nodeOrdinal` is 0, i.e., if multiple
node instances have not previously run on the same data path (a scenario that
required `node.max_local_storage_nodes` to be explicitly configured). In
cases where the automatic migration cannot be performed because of ambiguity
between multiple `$nodeOrdinal` subfolders, either adjust the data path (the
`path.data` setting) of each node instance to point to one of the
`$nodeOrdinal` subfolders, or, preferably, manually move the contents of each
`$nodeOrdinal` subfolder into a separate new folder and then set `path.data`
of each node instance to one of these folders.
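The ordinal rule described above can be sketched in isolation. The class and method names below are hypothetical (the real check lives inside `NodeEnvironment`, where it also inspects the filesystem); this only shows the decision: automatic migration is possible only when the sole legacy `nodes/<ordinal>` subfolder is `0`.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LegacyLayoutCheck {
    // A legacy node-ordinal folder name consists solely of digits,
    // mirroring the digit check used during the data folder upgrade.
    static boolean isOrdinalFolder(String name) {
        return !name.isEmpty() && name.chars().allMatch(Character::isDigit);
    }

    // Auto-migration is only possible when the only ordinal present is 0;
    // any other ordinal implies node.max_local_storage_nodes was in use.
    static boolean canAutoUpgrade(List<String> folderNames) {
        List<Integer> ordinals = folderNames.stream()
            .filter(LegacyLayoutCheck::isOrdinalFolder)
            .map(Integer::parseInt)
            .sorted()
            .collect(Collectors.toList());
        return ordinals.isEmpty() || ordinals.equals(Arrays.asList(0));
    }

    public static void main(String[] args) {
        System.out.println(canAutoUpgrade(Arrays.asList("0", "node.lock")));  // true
        System.out.println(canAutoUpgrade(Arrays.asList("0", "1")));          // false
    }
}
```

A folder such as `node.lock` is ignored by the digit filter, while a second ordinal folder (`1`) blocks the upgrade, matching the error path described in the breaking-changes note.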
@@ -51,18 +51,19 @@ public void testMissingWritePermission() throws IOException {
Settings build = Settings.builder()
.put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString())
.putList(Environment.PATH_DATA_SETTING.getKey(), tempPaths).build();
IOException exception = expectThrows(IOException.class, () -> {
IllegalStateException exception = expectThrows(IllegalStateException.class, () -> {
new NodeEnvironment(build, TestEnvironment.newEnvironment(build));
});
assertTrue(exception.getMessage(), exception.getMessage().startsWith(path.toString()));
assertTrue(exception.getCause().getCause().getMessage(),
exception.getCause().getCause().getMessage().startsWith(path.toString()));
}
}

public void testMissingWritePermissionOnIndex() throws IOException {
assumeTrue("posix filesystem", isPosix);
final String[] tempPaths = tmpPaths();
Path path = PathUtils.get(randomFrom(tempPaths));
Path fooIndex = path.resolve("nodes").resolve("0").resolve(NodeEnvironment.INDICES_FOLDER)
Path fooIndex = path.resolve(NodeEnvironment.INDICES_FOLDER)
.resolve("foo");
Files.createDirectories(fooIndex);
try (PosixPermissionsResetter attr = new PosixPermissionsResetter(fooIndex)) {
@@ -82,7 +83,7 @@ public void testMissingWritePermissionOnShard() throws IOException {
assumeTrue("posix filesystem", isPosix);
final String[] tempPaths = tmpPaths();
Path path = PathUtils.get(randomFrom(tempPaths));
Path fooIndex = path.resolve("nodes").resolve("0").resolve(NodeEnvironment.INDICES_FOLDER)
Path fooIndex = path.resolve(NodeEnvironment.INDICES_FOLDER)
.resolve("foo");
Path fooShard = fooIndex.resolve("0");
Path fooShardIndex = fooShard.resolve("index");
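The test changes above drop `resolve("nodes").resolve("0")` from the resolved paths. A small illustrative sketch (a hypothetical helper class, not part of Elasticsearch) contrasting the pre-8.0 and 8.0+ shard path layouts:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathLayout {
    // Pre-8.0 layout: $DATA_DIR/nodes/0/indices/<index-uuid>/<shard-id>
    static Path legacyShardPath(Path dataDir, String indexUuid, int shardId) {
        return dataDir.resolve("nodes").resolve("0")
            .resolve("indices").resolve(indexUuid)
            .resolve(Integer.toString(shardId));
    }

    // 8.0+ layout: $DATA_DIR/indices/<index-uuid>/<shard-id>
    static Path shardPath(Path dataDir, String indexUuid, int shardId) {
        return dataDir.resolve("indices").resolve(indexUuid)
            .resolve(Integer.toString(shardId));
    }

    public static void main(String[] args) {
        Path data = Paths.get("/var/lib/elasticsearch");
        System.out.println(legacyShardPath(data, "P45vf_YQRhqjfwLMUvSqDw", 0));
        System.out.println(shardPath(data, "P45vf_YQRhqjfwLMUvSqDw", 0));
    }
}
```

This is the same difference visible in the shard-tool documentation diff earlier in the PR, where `nodes/0` disappears from the printed paths.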
167 changes: 147 additions & 20 deletions server/src/main/java/org/elasticsearch/env/NodeEnvironment.java
@@ -35,6 +35,7 @@
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.CheckedFunction;
import org.elasticsearch.common.CheckedRunnable;
import org.elasticsearch.common.Randomness;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.UUIDs;
@@ -45,6 +46,7 @@
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.core.internal.io.IOUtils;
import org.elasticsearch.gateway.MetaDataStateFormat;
@@ -81,6 +83,7 @@
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;
@@ -90,9 +93,9 @@
*/
public final class NodeEnvironment implements Closeable {
public static class NodePath {
/* ${data.paths}/nodes/0 */
/* ${data.paths} */
public final Path path;
/* ${data.paths}/nodes/0/indices */
/* ${data.paths}/indices */
public final Path indicesPath;
/** Cached FileStore from path */
public final FileStore fileStore;
@@ -115,15 +118,15 @@ public NodePath(Path path) throws IOException {

/**
* Resolves the given shards directory against this NodePath
* ${data.paths}/nodes/{node.id}/indices/{index.uuid}/{shard.id}
* ${data.paths}/indices/{index.uuid}/{shard.id}
*/
public Path resolve(ShardId shardId) {
return resolve(shardId.getIndex()).resolve(Integer.toString(shardId.id()));
}

/**
* Resolves index directory against this NodePath
* ${data.paths}/nodes/{node.id}/indices/{index.uuid}
* ${data.paths}/indices/{index.uuid}
*/
public Path resolve(Index index) {
return resolve(index.getUUID());
@@ -170,7 +173,6 @@ public String toString() {
public static final Setting<Boolean> ENABLE_LUCENE_SEGMENT_INFOS_TRACE_SETTING =
Setting.boolSetting("node.enable_lucene_segment_infos_trace", false, Property.NodeScope);

public static final String NODES_FOLDER = "nodes";
public static final String INDICES_FOLDER = "indices";
public static final String NODE_LOCK_FILENAME = "node.lock";

@@ -179,20 +181,28 @@ public static class NodeLock implements Releasable {
private final Lock[] locks;
private final NodePath[] nodePaths;


public NodeLock(final Logger logger,
final Environment environment,
final CheckedFunction<Path, Boolean, IOException> pathFunction) throws IOException {
this(logger, environment, pathFunction, Function.identity());
}

/**
* Tries to acquire a node lock for a node id, throws {@code IOException} if it is unable to acquire it
* @param pathFunction function to check node path before attempt of acquiring a node lock
*/
public NodeLock(final Logger logger,
final Environment environment,
final CheckedFunction<Path, Boolean, IOException> pathFunction) throws IOException {
final CheckedFunction<Path, Boolean, IOException> pathFunction,
final Function<Path, Path> subPathMapping) throws IOException {
nodePaths = new NodePath[environment.dataFiles().length];
locks = new Lock[nodePaths.length];
try {
final Path[] dataPaths = environment.dataFiles();
for (int dirIndex = 0; dirIndex < dataPaths.length; dirIndex++) {
Path dataDir = dataPaths[dirIndex];
Path dir = resolveNodePath(dataDir);
Path dir = subPathMapping.apply(dataDir);
if (pathFunction.apply(dir) == false) {
continue;
}
@@ -247,7 +257,7 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce
sharedDataPath = environment.sharedDataFile();

for (Path path : environment.dataFiles()) {
Files.createDirectories(resolveNodePath(path));
Files.createDirectories(path);
}

final NodeLock nodeLock;
@@ -264,7 +274,6 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce

this.locks = nodeLock.locks;
this.nodePaths = nodeLock.nodePaths;
this.nodeMetaData = loadOrCreateNodeMetaData(settings, logger, nodePaths);

logger.debug("using node location {}", Arrays.toString(nodePaths));

@@ -278,6 +287,10 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce
ensureAtomicMoveSupported(nodePaths);
}

if (upgradeLegacyNodeFolders(logger, settings, environment, nodeLock)) {
assertCanWrite();
}

if (DiscoveryNode.isDataNode(settings) == false) {
if (DiscoveryNode.isMasterNode(settings) == false) {
ensureNoIndexMetaData(nodePaths);
@@ -286,6 +299,8 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce
ensureNoShardData(nodePaths);
}

this.nodeMetaData = loadOrCreateNodeMetaData(settings, logger, nodePaths);

success = true;
} finally {
if (success == false) {
@@ -295,13 +310,125 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce
}

/**
* Resolve a specific nodes/{node.id} path for the specified path and node lock id.
*
* @param path the path
* @return the resolved path
* Upgrades all data paths that have been written to by an older ES version to the 8.0+ compatible folder layout,
* removing the "nodes/${lockId}" folder prefix
*/
public static Path resolveNodePath(final Path path) {
return path.resolve(NODES_FOLDER).resolve("0");
private static boolean upgradeLegacyNodeFolders(Logger logger, Settings settings, Environment environment,
NodeLock nodeLock) throws IOException {
boolean upgradeNeeded = false;

// check if we can do an auto-upgrade
for (Path path : environment.dataFiles()) {
final Path nodesFolderPath = path.resolve("nodes");
if (Files.isDirectory(nodesFolderPath)) {
final List<Integer> nodeLockIds = new ArrayList<>();

try (DirectoryStream<Path> stream = Files.newDirectoryStream(nodesFolderPath)) {
for (Path nodeLockIdPath : stream) {
String fileName = nodeLockIdPath.getFileName().toString();
if (Files.isDirectory(nodeLockIdPath) && fileName.chars().allMatch(Character::isDigit)) {
int nodeLockId = Integer.parseInt(fileName);
nodeLockIds.add(nodeLockId);
}
}
}

if (nodeLockIds.isEmpty() == false) {
upgradeNeeded = true;

if (nodeLockIds.equals(Arrays.asList(0)) == false) {
throw new IllegalStateException("data path " + nodesFolderPath + " cannot be upgraded automatically because it " +
"contains data from nodes with ordinals " + nodeLockIds + ", due to previous use of the now obsolete " +
"[node.max_local_storage_nodes] setting. Please check the breaking changes docs for the current version of " +
"Elasticsearch to find an upgrade path");
}
}
}
}

if (upgradeNeeded == false) {
logger.trace("data folder upgrade not required");
return false;
}

logger.info("upgrading legacy data folders: {}", Arrays.toString(environment.dataFiles()));

// acquire locks on legacy path for duration of upgrade (to ensure there is no older ES version running on this path)
final NodeLock legacyNodeLock;
try {
legacyNodeLock = new NodeLock(logger, environment, dir -> true, path -> path.resolve("nodes").resolve("0"));
} catch (IOException e) {
final String message = String.format(
Locale.ROOT,
"failed to obtain legacy node locks, tried %s;" +
" maybe these locations are not writable or multiple nodes were started on the same data path?",
Arrays.toString(environment.dataFiles()));
throw new IllegalStateException(message, e);
}

// move contents from legacy path to new path
assert nodeLock.getNodePaths().length == legacyNodeLock.getNodePaths().length;
try {
final List<CheckedRunnable<IOException>> upgradeActions = new ArrayList<>();
for (int i = 0; i < legacyNodeLock.getNodePaths().length; i++) {
final NodePath legacyNodePath = legacyNodeLock.getNodePaths()[i];
final NodePath nodePath = nodeLock.getNodePaths()[i];

// determine folders to move and check that there are no extra files/folders
final Set<String> folderNames = new HashSet<>();

try (DirectoryStream<Path> stream = Files.newDirectoryStream(legacyNodePath.path)) {
for (Path subFolderPath : stream) {
final String fileName = subFolderPath.getFileName().toString();
if (FileSystemUtils.isDesktopServicesStore(subFolderPath)) {
// ignore
} else if (FileSystemUtils.isAccessibleDirectory(subFolderPath, logger)) {
if (fileName.equals(INDICES_FOLDER) == false && // indices folder
fileName.equals(MetaDataStateFormat.STATE_DIR_NAME) == false) { // global metadata & node state folder
throw new IllegalStateException("unexpected folder encountered during data folder upgrade: " +
subFolderPath);
}
final Path targetSubFolderPath = nodePath.path.resolve(fileName);
if (Files.exists(targetSubFolderPath)) {
throw new IllegalStateException("target folder already exists during data folder upgrade: " +
targetSubFolderPath);
}
folderNames.add(fileName);
} else if (fileName.equals(NODE_LOCK_FILENAME) == false &&
fileName.equals(TEMP_FILE_NAME) == false) {
throw new IllegalStateException("unexpected file/folder encountered during data folder upgrade: " +
subFolderPath);
}
}
}

assert Sets.difference(Sets.newHashSet(INDICES_FOLDER, MetaDataStateFormat.STATE_DIR_NAME), folderNames).isEmpty() :
"expected indices and/or state dir folder but was " + folderNames;

upgradeActions.add(() -> {
for (String folderName : folderNames) {
final Path sourceSubFolderPath = legacyNodePath.path.resolve(folderName);
final Path targetSubFolderPath = nodePath.path.resolve(folderName);
Files.move(sourceSubFolderPath, targetSubFolderPath, StandardCopyOption.ATOMIC_MOVE);
logger.info("data folder upgrade: moved from [{}] to [{}]", sourceSubFolderPath, targetSubFolderPath);
}
IOUtils.fsync(nodePath.path, true);
});
}
// now do the actual upgrade. start by upgrading the node metadata file before moving anything, since a downgrade in an
// intermediate state would be pretty disastrous
loadOrCreateNodeMetaData(settings, logger, legacyNodeLock.getNodePaths());
for (CheckedRunnable<IOException> upgradeAction : upgradeActions) {
upgradeAction.run();
}
} finally {
legacyNodeLock.close();
}

// upgrade successfully completed, remove legacy nodes folders
IOUtils.rm(Stream.of(environment.dataFiles()).map(path -> path.resolve("nodes")).toArray(Path[]::new));

return true;
}
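The upgrade routine above can be illustrated with a simplified, hypothetical sketch that performs only the core move: relocating the `indices` and `_state` subfolders from `$DATA_DIR/nodes/0` up into `$DATA_DIR` via an atomic rename, then removing the empty legacy tree. It deliberately omits the node locking, node-metadata upgrade, fsync, and stray-file checks the real `upgradeLegacyNodeFolders` performs.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FolderUpgradeSketch {
    // Moves the legacy subfolders from $DATA_DIR/nodes/0 directly into
    // $DATA_DIR. ATOMIC_MOVE is a plain rename on the same filesystem, so
    // each folder is either fully in the old place or fully in the new one.
    static void upgrade(Path dataDir) throws IOException {
        Path legacy = dataDir.resolve("nodes").resolve("0");
        for (String folder : new String[] {"indices", "_state"}) {
            Path source = legacy.resolve(folder);
            if (Files.isDirectory(source)) {
                Path target = dataDir.resolve(folder);
                if (Files.exists(target)) {
                    throw new IllegalStateException("target already exists: " + target);
                }
                Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
            }
        }
        Files.deleteIfExists(legacy);                   // now-empty nodes/0
        Files.deleteIfExists(dataDir.resolve("nodes")); // now-empty nodes
    }

    public static void main(String[] args) throws IOException {
        Path dataDir = Files.createTempDirectory("esdata");
        Files.createDirectories(dataDir.resolve("nodes").resolve("0")
            .resolve("indices").resolve("uuid1").resolve("0"));
        upgrade(dataDir);
        System.out.println(Files.isDirectory(
            dataDir.resolve("indices").resolve("uuid1").resolve("0"))); // true
        System.out.println(Files.exists(dataDir.resolve("nodes")));     // false
    }
}
```

Pre-checking that the target does not exist, as the real code also does, keeps a re-run after a partial failure from silently clobbering already-moved data.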

private void maybeLogPathDetails() throws IOException {
Expand Down Expand Up @@ -801,14 +928,14 @@ public Path[] availableShardPaths(ShardId shardId) {
}

/**
* Returns all folder names in ${data.paths}/nodes/{node.id}/indices folder
* Returns all folder names in ${data.paths}/indices folder
*/
public Set<String> availableIndexFolders() throws IOException {
return availableIndexFolders(p -> false);
}

/**
* Returns folder names in ${data.paths}/nodes/{node.id}/indices folder that don't match the given predicate.
* Returns folder names in ${data.paths}/indices folder that don't match the given predicate.
* @param excludeIndexPathIdsPredicate folder names to exclude
*/
public Set<String> availableIndexFolders(Predicate<String> excludeIndexPathIdsPredicate) throws IOException {
@@ -825,7 +952,7 @@ public Set<String> availableIndexFolders(Predicate<String> excludeIndexPathIdsPr
}

/**
* Return all directory names in the nodes/{node.id}/indices directory for the given node path.
* Return all directory names in the indices directory for the given node path.
*
* @param nodePath the path
* @return all directories that could be indices for the given node path.
@@ -836,7 +963,7 @@ public Set<String> availableIndexFoldersForPath(final NodePath nodePath) throws
}

/**
* Return directory names in the nodes/{node.id}/indices directory for the given node path that don't match the given predicate.
* Return directory names in the indices directory for the given node path that don't match the given predicate.
*
* @param nodePath the path
* @param excludeIndexPathIdsPredicate folder names to exclude
@@ -865,7 +992,7 @@ public Set<String> availableIndexFoldersForPath(final NodePath nodePath, Predica
}

/**
* Resolves all existing paths to <code>indexFolderName</code> in ${data.paths}/nodes/{node.id}/indices
* Resolves all existing paths to <code>indexFolderName</code> in ${data.paths}/indices
*/
public Path[] resolveIndexFolder(String indexFolderName) {
if (nodePaths == null || locks == null) {