
Commit

Fix comment syntax errors
LYCJeff committed Dec 13, 2022
1 parent 320eca2 commit 560fb5b
Showing 2 changed files with 10 additions and 10 deletions.
FanOutOneBlockAsyncDFSOutput.java
@@ -84,7 +84,7 @@
 * An asynchronous HDFS output stream implementation which fans out data to datanodes and only
 * supports writing a file with one block.
* <p>
- * Use the createOutput method in {@link FanOutOneBlockAsyncDFSOutputHelper} to create. The mainly
+ * Use the createOutput method in {@link FanOutOneBlockAsyncDFSOutputHelper} to create. The main
 * usage of this class is implementing WAL, so we only expose a few HDFS configurations in the
 * method. And we place it here under the io package because we want to make it independent of the
 * WAL implementation, thus easier to move it to the HDFS project eventually.
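For orientation, a minimal usage sketch of the class this javadoc describes, driven through the AsyncFSOutput interface. The createOutput parameter list is assumed from HBase 2.x sources (later versions add parameters such as a stream monitor), so treat it as a sketch rather than the canonical signature:

```java
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
// HBase uses its shaded Netty; adjust to plain io.netty if building standalone.
import org.apache.hbase.thirdparty.io.netty.channel.Channel;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;

// Assumes this method lives in the same package as the two classes it references.
static void writeOneBlockFile(DistributedFileSystem dfs, Path path, EventLoopGroup group,
    Class<? extends Channel> channelClass, byte[] payload) throws Exception {
  // Assumed signature; check FanOutOneBlockAsyncDFSOutputHelper in your HBase version.
  FanOutOneBlockAsyncDFSOutput out = FanOutOneBlockAsyncDFSOutputHelper.createOutput(dfs, path,
    true /* overwrite */, false /* createParent */, (short) 3 /* replication */,
    64L * 1024 * 1024 /* blockSize */, group, channelClass);
  try {
    out.write(payload); // buffered client-side until flush
    CompletableFuture<Long> acked = out.flush(false); // fans the packet out to every datanode
    acked.join(); // completes once all replicas have acked
  } finally {
    out.close();
  }
}
```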
@@ -104,8 +104,8 @@
@InterfaceAudience.Private
public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {

- // The MAX_PACKET_SIZE is 16MB but it include the header size and checksum size. So here we set a
- // smaller limit for data size.
+ // The MAX_PACKET_SIZE is 16MB, but it includes the header size and checksum size. So here we set
+ // a smaller limit for data size.
private static final int MAX_DATA_LEN = 12 * 1024 * 1024;

private final Configuration conf;
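The 12MB cap leaves room inside the 16MB packet for the header and the checksums. An illustrative sketch of what the cap implies for a large flush; packetsNeeded is a hypothetical helper, not part of the class:

```java
// Ceiling division: e.g. a 30MB flush would be carried by ceil(30 / 12) = 3 packets.
static int packetsNeeded(int dataLen) {
  final int MAX_DATA_LEN = 12 * 1024 * 1024; // mirrors the constant above
  return (dataLen + MAX_DATA_LEN - 1) / MAX_DATA_LEN;
}
```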
@@ -173,7 +173,7 @@ public Callback(CompletableFuture<Long> future, long ackedLength,
private long nextPacketOffsetInBlock = 0L;

// the length of the trailing partial chunk, this is because the packet start offset must be
- // aligned with the length of checksum chunk so we need to resend the same data.
+ // aligned with the length of checksum chunk, so we need to resend the same data.
private int trailingPartialChunkLength = 0;

private long nextPacketSeqno = 0L;
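A hedged arithmetic sketch of the alignment rule described above, assuming the usual 512-byte dfs.bytes-per-checksum chunk; both helpers are illustrative, not methods of this class:

```java
// With 1300 bytes acked and 512-byte chunks, the next packet must start at
// offset 1024 (the last chunk boundary), so the trailing 276 bytes of the
// partial chunk are resent as part of the next packet.
static int trailingPartialChunkLength(long ackedLength, int bytesPerChecksum) {
  return (int) (ackedLength % bytesPerChecksum);
}

static long nextPacketOffsetInBlock(long ackedLength, int bytesPerChecksum) {
  return ackedLength - trailingPartialChunkLength(ackedLength, bytesPerChecksum);
}
```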
@@ -437,7 +437,7 @@ private void flushBuffer(CompletableFuture<Long> future, ByteBuf dataBuf,
checksumBuf.release();
headerBuf.release();

- // This method takes ownership of the dataBuf so we need release it before returning.
+ // This method takes ownership of the dataBuf, so we need to release it before returning.
dataBuf.release();
return;
}
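The release above follows Netty's reference-counting contract; a minimal sketch of the ownership rule, with consume an illustrative stand-in and HBase's shaded ByteBuf assumed:

```java
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;

// A method that takes ownership of a ByteBuf must release it on every exit
// path, including early returns like the one above, or the buffer memory leaks.
static void consume(ByteBuf dataBuf) {
  try {
    // ... read from dataBuf ...
  } finally {
    dataBuf.release(); // drops the refcount; the buffer is freed when it reaches zero
  }
}
```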
RecoverLeaseFSUtils.java
@@ -72,14 +72,14 @@ public static void recoverFileLease(FileSystem fs, Path p, Configuration conf,
* file's primary node. If all is well, it should return near immediately. But, as is common, it
* is the very primary node that has crashed and so the namenode will be stuck waiting on a socket
* timeout before it will ask another datanode to start the recovery. It does not help if we call
- * recoverLease in the meantime and in particular, subsequent to the socket timeout, a
+ * recoverLease in the meantime and in particular, after the socket timeout, a
* recoverLease invocation will cause us to start over from square one (possibly waiting on socket
* timeout against primary node). So, in the below, we do the following: 1. Call recoverLease. 2.
* If it returns true, break. 3. If it returns false, wait a few seconds and then call it again.
* 4. If it returns true, break. 5. If it returns false, wait for what we think the datanode
* socket timeout is (configurable) and then try again. 6. If it returns true, break. 7. If it
* returns false, repeat starting at step 5. above. If HDFS-4525 is available, call it every
- * second and we might be able to exit early.
+ * second, and we might be able to exit early.
*/
private static boolean recoverDFSFileLease(final DistributedFileSystem dfs, final Path p,
final Configuration conf, final CancelableProgressable reporter) throws IOException {
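A condensed sketch of the loop the numbered steps above describe, assuming DistributedFileSystem#recoverLease and the HDFS-4525 isFileClosed probe; the overall recovery timeout and the reporter callback are elided:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

static boolean recoverLeaseSketch(DistributedFileSystem dfs, Path p, long firstPauseMs,
    long subsequentPauseMs) throws Exception {
  for (int nbAttempt = 0;; nbAttempt++) {
    if (dfs.recoverLease(p)) {
      return true; // steps 2/4/6: the lease is recovered
    }
    if (nbAttempt == 0) {
      Thread.sleep(firstPauseMs); // step 3: short first pause
    } else {
      // steps 5/7: wait out subsequentPause * nbAttempt (linear backoff), probing
      // isFileClosed (HDFS-4525) each second so we can exit early.
      long deadline = System.currentTimeMillis() + subsequentPauseMs * nbAttempt;
      while (System.currentTimeMillis() < deadline) {
        if (dfs.isFileClosed(p)) {
          return true;
        }
        Thread.sleep(1000);
      }
    }
  }
}
```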
@@ -89,10 +89,10 @@ private static boolean recoverDFSFileLease(final DistributedFileSystem dfs, fina
// usually needs 10 minutes before marking the nodes as dead. So we're putting ourselves
// beyond that limit 'to be safe'.
long recoveryTimeout = conf.getInt("hbase.lease.recovery.timeout", 900000) + startWaiting;
- // This setting should be a little bit above what the cluster dfs heartbeat is set to.
+ // This setting should be a little above what the cluster dfs heartbeat is set to.
long firstPause = conf.getInt("hbase.lease.recovery.first.pause", 4000);
// This should be set to how long it'll take for us to timeout against primary datanode if it
- // is dead. We set it to 64 seconds, 4 second than the default READ_TIMEOUT in HDFS, the
+ // is dead. We set it to 64 seconds, 4 seconds more than the default READ_TIMEOUT in HDFS, the
// default value for DFS_CLIENT_SOCKET_TIMEOUT_KEY. If recovery is still failing after this
// timeout, then further recovery will use linear backoff with this base, to avoid endless
// preemptions when this value is not properly configured.
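Under these defaults the waits come out to roughly 4s, then 64s, 128s, 192s, ... until the 900s recovery timeout expires; pauseBeforeAttempt is a hypothetical helper showing the schedule, not code from the patch:

```java
// firstPause = 4000ms, subsequentPause = 64000ms (60s READ_TIMEOUT + 4s margin).
static long pauseBeforeAttempt(int nbAttempt, long firstPauseMs, long subsequentPauseMs) {
  return nbAttempt == 0 ? firstPauseMs : subsequentPauseMs * nbAttempt; // 4s, 64s, 128s, ...
}
```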
@@ -118,7 +118,7 @@ private static boolean recoverDFSFileLease(final DistributedFileSystem dfs, fina
Thread.sleep(firstPause);
} else {
// Cycle here until (subsequentPause * nbAttempt) elapses. While spinning, check
- // isFileClosed if available (should be in hadoop 2.0.5... not in hadoop 1 though.
+ // isFileClosed if available (should be in hadoop 2.0.5... not in hadoop 1 though).
long localStartWaiting = EnvironmentEdgeManager.currentTime();
while (
(EnvironmentEdgeManager.currentTime() - localStartWaiting)
