[SPARK-3359][DOCS] Make javadoc8 working for unidoc/genjavadoc compatibility in Java API documentation #16013

Closed · wants to merge 17 commits
Changes from 9 commits
2 changes: 1 addition & 1 deletion core/src/main/scala/org/apache/spark/Accumulator.scala
@@ -26,7 +26,7 @@ package org.apache.spark
*
* An accumulator is created from an initial value `v` by calling
* [[SparkContext#accumulator SparkContext.accumulator]].
* Tasks running on the cluster can then add to it using the [[Accumulable#+= +=]] operator.
* Tasks running on the cluster can then add to it using the `+=` operator.
Member Author:

I just decided to keep the original format rather than trying to make this pretty.

The original rendered as below:

  • Scala (screenshot)
  • Java (screenshot)
Member Author:

After this PR it still renders the same:

  • Scala (screenshot)
  • Java (screenshot)
* However, they cannot read its value. Only the driver program can read the accumulator's value,
* using its [[#value]] method.
*
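For reference, a minimal sketch of the accumulator usage the doc comment above describes, using the pre-2.0 `SparkContext.accumulator` API documented in this file; the app name and data are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AccumulatorSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("accumulator-sketch").setMaster("local[2]"))
    // Created from an initial value `v` on the driver.
    val acc = sc.accumulator(0)
    // Tasks running on the cluster can only add to it with `+=` ...
    sc.parallelize(1 to 10).foreach(x => acc += x)
    // ... while only the driver program can read it back via `value`.
    println(acc.value) // 55
    sc.stop()
  }
}
```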
12 changes: 6 additions & 6 deletions core/src/main/scala/org/apache/spark/SparkConf.scala
@@ -262,7 +262,7 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Seria
/**
* Get a time parameter as seconds; throws a NoSuchElementException if it's not set. If no
* suffix is provided then seconds are assumed.
* @throws NoSuchElementException
* @note Throws `NoSuchElementException`
Member:
Oh, I think this may be resolved if java.util.NoSuchElementException is imported, or if the fully qualified name is used here. I favor the latter. That would be better; does it work?

*/
def getTimeAsSeconds(key: String): Long = {
Utils.timeStringAsSeconds(get(key))
@@ -279,7 +279,7 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Seria
/**
* Get a time parameter as milliseconds; throws a NoSuchElementException if it's not set. If no
* suffix is provided then milliseconds are assumed.
* @throws NoSuchElementException
* @note Throws `NoSuchElementException`
*/
def getTimeAsMs(key: String): Long = {
Utils.timeStringAsMs(get(key))
@@ -296,7 +296,7 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Seria
/**
* Get a size parameter as bytes; throws a NoSuchElementException if it's not set. If no
* suffix is provided then bytes are assumed.
* @throws NoSuchElementException
* @note Throws `NoSuchElementException`
*/
def getSizeAsBytes(key: String): Long = {
Utils.byteStringAsBytes(get(key))
@@ -320,7 +320,7 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Seria
/**
* Get a size parameter as Kibibytes; throws a NoSuchElementException if it's not set. If no
* suffix is provided then Kibibytes are assumed.
* @throws NoSuchElementException
* @note Throws `NoSuchElementException`
*/
def getSizeAsKb(key: String): Long = {
Utils.byteStringAsKb(get(key))
@@ -337,7 +337,7 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Seria
/**
* Get a size parameter as Mebibytes; throws a NoSuchElementException if it's not set. If no
* suffix is provided then Mebibytes are assumed.
* @throws NoSuchElementException
* @note Throws `NoSuchElementException`
*/
def getSizeAsMb(key: String): Long = {
Utils.byteStringAsMb(get(key))
@@ -354,7 +354,7 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Seria
/**
* Get a size parameter as Gibibytes; throws a NoSuchElementException if it's not set. If no
* suffix is provided then Gibibytes are assumed.
* @throws NoSuchElementException
* @note Throws `NoSuchElementException`
*/
def getSizeAsGb(key: String): Long = {
Utils.byteStringAsGb(get(key))
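To illustrate the getters whose scaladoc is reworded above, a small sketch; the config keys are made up for the example, and an unset key still throws `NoSuchElementException` as the docs note:

```scala
import org.apache.spark.SparkConf

object ConfGetterSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .set("spark.example.timeout", "2m")   // hypothetical key; time value with a suffix
      .set("spark.example.buffer", "512k")  // hypothetical key; size value with a suffix

    println(conf.getTimeAsSeconds("spark.example.timeout")) // 120
    println(conf.getSizeAsKb("spark.example.buffer"))       // 512

    // conf.getTimeAsMs("spark.example.missing")  // would throw NoSuchElementException
  }
}
```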
14 changes: 7 additions & 7 deletions core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -645,7 +645,7 @@ class SparkContext(config: SparkConf) extends Logging {

/**
* Get a local property set in this thread, or null if it is missing. See
* [[org.apache.spark.SparkContext.setLocalProperty]].
* `org.apache.spark.SparkContext.setLocalProperty`.
*/
def getLocalProperty(key: String): String =
Option(localProperties.get).map(_.getProperty(key)).orNull
@@ -663,7 +663,7 @@ class SparkContext(config: SparkConf) extends Logging {
* Application programmers can use this method to group all those jobs together and give a
* group description. Once set, the Spark web UI will associate such jobs with this group.
*
* The application can also use [[org.apache.spark.SparkContext.cancelJobGroup]] to cancel all
* The application can also use `org.apache.spark.SparkContext.cancelJobGroup` to cancel all
* running jobs in this group. For example,
* {{{
* // In the main thread:
@@ -1384,7 +1384,7 @@ class SparkContext(config: SparkConf) extends Logging {
}

/**
* Create and register a [[CollectionAccumulator]], which starts with empty list and accumulates
* Create and register a `CollectionAccumulator`, which starts with empty list and accumulates
* inputs by adding them into the list.
*/
def collectionAccumulator[T]: CollectionAccumulator[T] = {
@@ -1394,7 +1394,7 @@ class SparkContext(config: SparkConf) extends Logging {
}

/**
* Create and register a [[CollectionAccumulator]], which starts with empty list and accumulates
* Create and register a `CollectionAccumulator`, which starts with empty list and accumulates
* inputs by adding them into the list.
*/
def collectionAccumulator[T](name: String): CollectionAccumulator[T] = {
@@ -2043,7 +2043,7 @@ class SparkContext(config: SparkConf) extends Logging {
}

/**
* Cancel active jobs for the specified group. See [[org.apache.spark.SparkContext.setJobGroup]]
* Cancel active jobs for the specified group. See `org.apache.spark.SparkContext.setJobGroup`
* for more information.
*/
def cancelJobGroup(groupId: String) {
@@ -2061,7 +2061,7 @@ class SparkContext(config: SparkConf) extends Logging {
* Cancel a given job if it's scheduled or running.
*
* @param jobId the job ID to cancel
* @throws InterruptedException if the cancel message cannot be sent
* @note Throws `InterruptedException` if the cancel message cannot be sent
Member:

Hm, InterruptedException is in java.lang, so I am surprised it isn't found. Does it help if you write @throws java.lang.InterruptedException? That's better if it works.

Member Author:
Sure, I will try!

Member Author:

Hm, interesting. I haven't looked into this deeper, but it seems it fails anyway:

[error] .../java/org/apache/spark/SparkContext.java:1150: error: exception not thrown: java.lang.InterruptedException
[error]    * @throws java.lang.InterruptedException if the cancel message cannot be sent
[error]              ^

Member:

Hm, so it's just complaining that it's documented as a checked exception but cannot be thrown according to the bytecode. It has a point there, but I am also kind of surprised it's an error. OK, leave it the way you have it, as it seems to be the only way that works.

*/
def cancelJob(jobId: Int) {
dagScheduler.cancelJob(jobId)
Expand All @@ -2071,7 +2071,7 @@ class SparkContext(config: SparkConf) extends Logging {
* Cancel a given stage and all jobs associated with it.
*
* @param stageId the stage ID to cancel
* @throws InterruptedException if the cancel message cannot be sent
* @note Throws `InterruptedException` if the cancel message cannot be sent
*/
def cancelStage(stageId: Int) {
dagScheduler.cancelStage(stageId)
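For context on the `setJobGroup`/`cancelJobGroup` docs touched above, a hedged usage sketch; the group id, description, and timing are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object JobGroupSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("job-group-sketch").setMaster("local[2]"))

    // In a separate thread, tag the submitted jobs with a group id.
    val worker = new Thread {
      override def run(): Unit = {
        sc.setJobGroup("myJobGroup", "slow exploratory jobs")
        sc.parallelize(1 to 100000).map { x => Thread.sleep(1); x }.count()
      }
    }
    worker.start()

    // From the main thread, cancel every running job in that group.
    // The cancelled count() then fails inside the worker thread.
    Thread.sleep(1000)
    sc.cancelJobGroup("myJobGroup")
    worker.join()
    sc.stop()
  }
}
```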
4 changes: 2 additions & 2 deletions core/src/main/scala/org/apache/spark/TaskContext.scala
@@ -164,7 +164,7 @@ abstract class TaskContext extends Serializable {

/**
* Get a local property set upstream in the driver, or null if it is missing. See also
* [[org.apache.spark.SparkContext.setLocalProperty]].
* `org.apache.spark.SparkContext.setLocalProperty`.
*/
def getLocalProperty(key: String): String

@@ -174,7 +174,7 @@ abstract class TaskContext extends Serializable {
/**
* ::DeveloperApi::
* Returns all metrics sources with the given name which are associated with the instance
* which runs the task. For more information see [[org.apache.spark.metrics.MetricsSystem!]].
* which runs the task. For more information see `org.apache.spark.metrics.MetricsSystem`.
*/
@DeveloperApi
def getMetricsSources(sourceName: String): Seq[Source]
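A sketch of the driver-side/executor-side pairing described in the `getLocalProperty` doc above; the property key and value are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext, TaskContext}

object LocalPropertySketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("local-property-sketch").setMaster("local[2]"))
    // Set upstream in the driver ...
    sc.setLocalProperty("example.tag", "nightly-run")
    // ... and read inside tasks through the TaskContext.
    sc.parallelize(1 to 4).foreach { _ =>
      val tag = TaskContext.get().getLocalProperty("example.tag")
      println(s"task sees example.tag=$tag")
    }
    sc.stop()
  }
}
```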
2 changes: 1 addition & 1 deletion core/src/main/scala/org/apache/spark/TaskEndReason.scala
@@ -65,7 +65,7 @@ sealed trait TaskFailedReason extends TaskEndReason {

/**
* :: DeveloperApi ::
* A [[org.apache.spark.scheduler.ShuffleMapTask]] that completed successfully earlier, but we
* A `org.apache.spark.scheduler.ShuffleMapTask` that completed successfully earlier, but we
* lost the executor before the stage completed. This means Spark needs to reschedule the task
* to be re-executed on a different executor.
*/
2 changes: 1 addition & 1 deletion core/src/main/scala/org/apache/spark/TestUtils.scala
@@ -186,7 +186,7 @@ private[spark] object TestUtils {


/**
* A [[SparkListener]] that detects whether spills have occurred in Spark jobs.
* A `SparkListener` that detects whether spills have occurred in Spark jobs.
*/
private class SpillListener extends SparkListener {
private val stageIdToTaskMetrics = new mutable.HashMap[Int, ArrayBuffer[TaskMetrics]]
@@ -155,7 +155,7 @@ class DoubleRDDFunctions(self: RDD[Double]) extends Logging with Serializable {
* to the right except for the last which is closed
* e.g. for the array
* [1, 10, 20, 50] the buckets are [1, 10) [10, 20) [20, 50]
* e.g 1<=x<10 , 10<=x<20, 20<=x<=50
* e.g 1&lt;=x&lt;10 , 10&lt;=x&lt;20, 20&lt;=x&lt;=50
@HyukjinKwon (Member Author), Nov 26, 2016:

This originally gave an error as below:

[error] .../java/org/apache/spark/rdd/DoubleRDDFunctions.java:73: error: malformed HTML
[error]    *  e.g 1<=x<10, 10<=x<20, 20<=x<=50
[error]            ^
[error] .../java/org/apache/spark/rdd/DoubleRDDFunctions.java:73: error: malformed HTML
[error]    *  e.g 1<=x<10, 10<=x<20, 20<=x<=50
[error]               ^
[error] .../java/org/apache/spark/rdd/DoubleRDDFunctions.java:73: error: malformed HTML
[error]    *  e.g 1<=x<10, 10<=x<20, 20<=x<=50
[error]                      ^
...

However, after fixing it as above, the escaped entities are printed as they are in the javadoc (not in the scaladoc):

(screenshot)

It seems we should find another approach to deal with this; &#60 and &#62 also do not work, and & is always converted into &amp;.

@HyukjinKwon (Member Author), Nov 26, 2016:

In the generated javadoc:

  • These throw errors

    <
    <code>.. < .. </code>
    <blockquote> ... < .. </blockquote>
    `... < ..`
    {{{... < ..}}}

    as below:

    [error] .../java/org/apache/spark/rdd/DoubleRDDFunctions.java:71: error: malformed HTML
    [error]    * < 
    
    [error] .../java/org/apache/spark/rdd/DoubleRDDFunctions.java:72: error: malformed HTML
    [error]    * <code>.. < .. </code>
    
    [error] .../java/org/apache/spark/rdd/DoubleRDDFunctions.java:73: error: malformed HTML
    [error]    * <blockquote> ... < .. </blockquote>
    
    [error] .../java/org/apache/spark/rdd/DoubleRDDFunctions.java:74: error: malformed HTML
    [error]    * <code>... < ..</code>
    
    [error] .../java/org/apache/spark/rdd/DoubleRDDFunctions.java:75: error: malformed HTML
    [error]    * <pre><code>... < ..</code></pre>
    
  • These do not print < but &lt;.

    &lt;
    <code>.. &lt; .. </code>
    <blockquote> ... &lt; .. </blockquote>
    `... &lt; ..`
    {{{... &lt; ..}}}

    as below:

    (screenshot)
  • The below one is fine

    {{{
    1<=x<10 , 10<=x<20, 20<=x<=50
    }}}
    

    but newlines are inserted as below:

    (screenshot)

@HyukjinKwon (Member Author), Nov 26, 2016:

Note to myself: it seems inlined tags such as

{@code ... < ...}

and

{@literal >}

work okay for both, but they are javadoc-specific tags. Scaladoc treats them as monospace text (like `<` or `... < ...`), and since genjavadoc does not seem to replace them, they apparently work okay. I guess we should avoid those though.

* And on the input of 1 and 50 we would have a histogram of 1, 0, 1
*
* @note If your histogram is evenly spaced (e.g. [0, 10, 20, 30]) this can be switched
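Summarizing the thread above, a sketch of the options for a literal `<` in a scaladoc comment that has to survive genjavadoc plus javadoc 8: a bare `<` fails the javadoc build, `&lt;` compiles (though the thread notes it can render literally in the javadoc output), and a `{{{ }}}` block renders the raw characters but on separate lines. The object name and stub body are hypothetical:

```scala
object DocEscapeSketch {
  /**
   * Buckets are half-open except the last one, e.g. for [1, 10, 20, 50]:
   * 1 &lt;= x &lt; 10, 10 &lt;= x &lt; 20, 20 &lt;= x &lt;= 50
   *
   * Or, as a wiki-style code block:
   * {{{
   * 1 <= x < 10, 10 <= x < 20, 20 <= x <= 50
   * }}}
   */
  def histogram(buckets: Array[Double]): Array[Long] = Array.fill(buckets.length - 1)(0L) // stub
}
```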
2 changes: 1 addition & 1 deletion core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala
@@ -96,7 +96,7 @@ private[spark] class HadoopPartition(rddId: Int, override val index: Int, s: Inp
* @param minPartitions Minimum number of HadoopRDD partitions (Hadoop Splits) to generate.
*
* @note Instantiating this class directly is not recommended, please use
* [[org.apache.spark.SparkContext.hadoopRDD()]]
* `org.apache.spark.SparkContext.hadoopRDD()`
*/
@DeveloperApi
class HadoopRDD[K, V](
6 changes: 3 additions & 3 deletions core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala
@@ -41,7 +41,7 @@ private[spark] class JdbcPartition(idx: Int, val lower: Long, val upper: Long) e
* The RDD takes care of closing the connection.
* @param sql the text of the query.
* The query must contain two ? placeholders for parameters used to partition the results.
* E.g. "select title, author from books where ? <= id and id <= ?"
* E.g. "select title, author from books where ? &lt;= id and id &lt;= ?"
* @param lowerBound the minimum value of the first placeholder
* @param upperBound the maximum value of the second placeholder
* The lower and upper bounds are inclusive.
@@ -151,7 +151,7 @@ object JdbcRDD {
* The RDD takes care of closing the connection.
* @param sql the text of the query.
* The query must contain two ? placeholders for parameters used to partition the results.
* E.g. "select title, author from books where ? <= id and id <= ?"
* E.g. "select title, author from books where ? &lt;= id and id &lt;= ?"
* @param lowerBound the minimum value of the first placeholder
* @param upperBound the maximum value of the second placeholder
* The lower and upper bounds are inclusive.
@@ -191,7 +191,7 @@ object JdbcRDD {
* The RDD takes care of closing the connection.
* @param sql the text of the query.
* The query must contain two ? placeholders for parameters used to partition the results.
* E.g. "select title, author from books where ? <= id and id <= ?"
* E.g. "select title, author from books where ? &lt;= id and id &lt;= ?"
* @param lowerBound the minimum value of the first placeholder
* @param upperBound the maximum value of the second placeholder
* The lower and upper bounds are inclusive.
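A hedged sketch of the `JdbcRDD` constructor whose parameter docs are edited above; the JDBC URL, table, bounds, and row mapper are illustrative, and the two `?` placeholders receive the inclusive lower and upper bounds:

```scala
import java.sql.{DriverManager, ResultSet}

import org.apache.spark.SparkContext
import org.apache.spark.rdd.JdbcRDD

object JdbcRDDSketch {
  def booksRdd(sc: SparkContext): JdbcRDD[(String, String)] = {
    new JdbcRDD(
      sc,
      () => DriverManager.getConnection("jdbc:h2:mem:testdb"),      // illustrative URL
      "select title, author from books where ? <= id and id <= ?",  // query from the doc
      1L,   // lowerBound: minimum value of the first placeholder
      100L, // upperBound: maximum value of the second placeholder
      3,    // numPartitions
      (rs: ResultSet) => (rs.getString("title"), rs.getString("author")))
  }
}
```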
@@ -63,7 +63,7 @@ private[spark] class NewHadoopPartition(
* @param valueClass Class of the value associated with the inputFormatClass.
*
* @note Instantiating this class directly is not recommended, please use
* [[org.apache.spark.SparkContext.newAPIHadoopRDD()]]
* `org.apache.spark.SparkContext.newAPIHadoopRDD()`
*/
@DeveloperApi
class NewHadoopRDD[K, V](
16 changes: 8 additions & 8 deletions core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
@@ -399,7 +399,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
* Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available
* <a href="http://dx.doi.org/10.1145/2452376.2452456">here</a>.
*
* The relative accuracy is approximately `1.054 / sqrt(2^p)`. Setting a nonzero `sp > p`
* The relative accuracy is approximately `1.054 / sqrt(2^p)`. Setting a nonzero (sp &gt; p)
* would trigger sparse representation of registers, which may reduce the memory consumption
* and increase accuracy when the cardinality is small.
*
@@ -492,8 +492,8 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
* each time the resulting RDD is evaluated.
*
* @note This operation may be very expensive. If you are grouping in order to perform an
* aggregation (such as a sum or average) over each key, using [[PairRDDFunctions.aggregateByKey]]
* or [[PairRDDFunctions.reduceByKey]] will provide much better performance.
* aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`
* or `PairRDDFunctions.reduceByKey` will provide much better performance.
*
* @note As currently implemented, groupByKey must be able to hold all the key-value pairs for any
* key in memory. If a key has too many values, it can result in an [[OutOfMemoryError]].
@@ -516,8 +516,8 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
* each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.
*
* @note This operation may be very expensive. If you are grouping in order to perform an
* aggregation (such as a sum or average) over each key, using [[PairRDDFunctions.aggregateByKey]]
* or [[PairRDDFunctions.reduceByKey]] will provide much better performance.
* aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`
* or `PairRDDFunctions.reduceByKey` will provide much better performance.
*
* @note As currently implemented, groupByKey must be able to hold all the key-value pairs for any
* key in memory. If a key has too many values, it can result in an [[OutOfMemoryError]].
@@ -637,8 +637,8 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
* evaluated.
*
* @note This operation may be very expensive. If you are grouping in order to perform an
* aggregation (such as a sum or average) over each key, using [[PairRDDFunctions.aggregateByKey]]
* or [[PairRDDFunctions.reduceByKey]] will provide much better performance.
* aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`
* or `PairRDDFunctions.reduceByKey` will provide much better performance.
*/
def groupByKey(): RDD[(K, Iterable[V])] = self.withScope {
groupByKey(defaultPartitioner(self))
@@ -908,7 +908,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
* Return an RDD with the pairs from `this` whose keys are not in `other`.
*
* Uses `this` partitioner/partition size, because even if `other` is huge, the resulting
* RDD will be <= us.
* RDD will be &lt;= us.
*/
def subtractByKey[W: ClassTag](other: RDD[(K, W)]): RDD[(K, V)] = self.withScope {
subtractByKey(other, self.partitioner.getOrElse(new HashPartitioner(self.partitions.length)))
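The `@note` above steers per-key aggregation toward `reduceByKey`/`aggregateByKey` instead of `groupByKey`; a small sketch of the difference, with illustrative data:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ByKeySketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("by-key-sketch").setMaster("local[2]"))
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

    // Combines values map-side before the shuffle; preferred for sums and averages.
    val sums = pairs.reduceByKey(_ + _).collect()

    // Materializes all values for each key; can blow up memory on skewed keys.
    val grouped = pairs.groupByKey().mapValues(_.sum).collect()

    println(sums.mkString(" "))    // (a,4) (b,2) -- order may vary
    println(grouped.mkString(" "))
    sc.stop()
  }
}
```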
@@ -23,7 +23,7 @@ import org.apache.spark.Partition

/**
* Enumeration to manage state transitions of an RDD through checkpointing
* [ Initialized --> checkpointing in progress --> checkpointed ].
* [ Initialized --&gt; checkpointing in progress --&gt; checkpointed ].
*/
private[spark] object CheckpointState extends Enumeration {
type CheckpointState = Value
@@ -35,14 +35,14 @@ trait PartitionCoalescer {
* @param maxPartitions the maximum number of partitions to have after coalescing
* @param parent the parent RDD whose partitions to coalesce
* @return an array of [[PartitionGroup]]s, where each element is itself an array of
* [[Partition]]s and represents a partition after coalescing is performed.
* `Partition`s and represents a partition after coalescing is performed.
*/
def coalesce(maxPartitions: Int, parent: RDD[_]): Array[PartitionGroup]
}

/**
* ::DeveloperApi::
* A group of [[Partition]]s
* A group of `Partition`s
* @param prefLoc preferred location for the partition group
*/
@DeveloperApi
@@ -20,7 +20,7 @@ package org.apache.spark.rpc.netty
import org.apache.spark.rpc.{RpcCallContext, RpcEndpoint, RpcEnv}

/**
* An [[RpcEndpoint]] for remote [[RpcEnv]]s to query if an [[RpcEndpoint]] exists.
* An [[RpcEndpoint]] for remote [[RpcEnv]]s to query if an `RpcEndpoint` exists.
*
* This is used when setting up a remote endpoint reference.
*/
@@ -35,6 +35,6 @@ private[netty] class RpcEndpointVerifier(override val rpcEnv: RpcEnv, dispatcher
private[netty] object RpcEndpointVerifier {
val NAME = "endpoint-verifier"

/** A message used to ask the remote [[RpcEndpointVerifier]] if an [[RpcEndpoint]] exists. */
/** A message used to ask the remote [[RpcEndpointVerifier]] if an `RpcEndpoint` exists. */
case class CheckExistence(name: String)
}
@@ -153,7 +153,7 @@ object InputFormatInfo {

a) For each host, count number of splits hosted on that host.
b) Decrement the currently allocated containers on that host.
c) Compute rack info for each host and update rack -> count map based on (b).
c) Compute rack info for each host and update rack -&gt; count map based on (b).
d) Allocate nodes based on (c)
e) On the allocation result, ensure that we don't allocate "too many" jobs on a single node
(even if data locality on that is very high) : this is to prevent fragility of job if a
@@ -42,7 +42,7 @@ import org.apache.spark.rdd.RDD
* @param outputId index of the task in this job (a job can launch tasks on only a subset of the
* input RDD's partitions).
* @param localProperties copy of thread-local properties set by the user on the driver side.
* @param metrics a [[TaskMetrics]] that is created at driver side and sent to executor side.
* @param metrics a `TaskMetrics` that is created at driver side and sent to executor side.
*
* The parameters below are optional:
* @param jobId id of the job this task belongs to
@@ -42,7 +42,7 @@ import org.apache.spark.shuffle.ShuffleWriter
* the type should be (RDD[_], ShuffleDependency[_, _, _]).
* @param partition partition of the RDD this task is associated with
* @param locs preferred task execution locations for locality scheduling
* @param metrics a [[TaskMetrics]] that is created at driver side and sent to executor side.
* @param metrics a `TaskMetrics` that is created at driver side and sent to executor side.
* @param localProperties copy of thread-local properties set by the user on the driver side.
*
* The parameters below are optional:
2 changes: 1 addition & 1 deletion core/src/main/scala/org/apache/spark/scheduler/Task.scala
@@ -46,7 +46,7 @@ import org.apache.spark.util._
* @param stageId id of the stage this task belongs to
* @param stageAttemptId attempt id of the stage this task belongs to
* @param partitionId index of the number in the RDD
* @param metrics a [[TaskMetrics]] that is created at driver side and sent to executor side.
* @param metrics a `TaskMetrics` that is created at driver side and sent to executor side.
* @param localProperties copy of thread-local properties set by the user on the driver side.
*
* The parameters below are optional:
@@ -23,7 +23,7 @@ import org.apache.spark.util.SerializableBuffer

/**
* Description of a task that gets passed onto executors to be executed, usually created by
* [[TaskSetManager.resourceOffer]].
* `TaskSetManager.resourceOffer`.
*/
private[spark] class TaskDescription(
val taskId: Long,
@@ -43,7 +43,7 @@ private[spark] object BlockManagerMessages {
extends ToBlockManagerSlave

/**
* Driver -> Executor message to trigger a thread dump.
* Driver -&gt; Executor message to trigger a thread dump.
*/
case object TriggerThreadDump extends ToBlockManagerSlave

@@ -247,7 +247,7 @@ final class ShuffleBlockFetcherIterator(

/**
* Fetch the local blocks while we are fetching remote blocks. This is ok because
* [[ManagedBuffer]]'s memory is allocated lazily when we create the input stream, so all we
* `ManagedBuffer`'s memory is allocated lazily when we create the input stream, so all we
* track in-memory are the ManagedBuffer references themselves.
*/
private[this] def fetchLocalBlocks() {
@@ -423,7 +423,7 @@ object ShuffleBlockFetcherIterator {
* @param address BlockManager that the block was fetched from.
* @param size estimated size of the block, used to calculate bytesInFlight.
* Note that this is NOT the exact bytes.
* @param buf [[ManagedBuffer]] for the content.
* @param buf `ManagedBuffer` for the content.
* @param isNetworkReqDone Is this the last network request for this host in this fetch request.
*/
private[storage] case class SuccessFetchResult(