
[SPARK-9023] [SQL] Efficiency improvements for UnsafeRows in Exchange #7456

Closed
wants to merge 14 commits

Conversation

@JoshRosen (Contributor)

This pull request aims to improve the performance of SQL's Exchange operator when shuffling UnsafeRows. It also makes several general efficiency improvements to Exchange.

Key changes:

  • When performing hash partitioning, the old Exchange projected the partitioning columns into a new row and then passed a (partitioningColumnRow: InternalRow, row: InternalRow) pair into the shuffle. This is very inefficient because it redundantly serializes the partitioning columns only to discard them immediately after the shuffle. After this patch's changes, Exchange shuffles (partitionId: Int, row: InternalRow) pairs instead (a minimal sketch of this pairing follows this list). This still isn't optimal, since we're still shuffling extra data that we don't need, but it's significantly more efficient than the old implementation; in the future, we may be able to optimize this further once we implement a new shuffle write interface that accepts non-key-value-pair inputs.
  • Exchange's compute() method has been significantly simplified; the new code has less duplication and thus is easier to understand.
  • When the Exchange's input operator produces UnsafeRows, Exchange will use a specialized UnsafeRowSerializer to serialize these rows. This serializer is significantly more efficient since it simply copies the UnsafeRow's underlying bytes. Note that this approach does not work for UnsafeRows that use the ObjectPool mechanism; I did not add support for this because we are planning to remove ObjectPool in the next few weeks.
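
The (partitionId, row) pairing from the first bullet can be sketched roughly as follows. This is a minimal illustration rather than the patch's exact code: PartitionIdPassthrough, pairWithPartitionId, and getPartitionKey are hypothetical names standing in for the projection and partitioner the operator actually builds.

import org.apache.spark.Partitioner
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow

object ExchangeShuffleSketch {
  // Hypothetical partitioner: the shuffle "key" is already a partition id,
  // so it is passed through unchanged.
  class PartitionIdPassthrough(override val numPartitions: Int) extends Partitioner {
    override def getPartition(key: Any): Int = key.asInstanceOf[Int]
  }

  // Hypothetical helper: compute each row's partition id once on the map side
  // and hand only (partitionId, row) to the shuffle, so the projected
  // partitioning columns are never serialized.
  def pairWithPartitionId(
      rows: RDD[InternalRow],
      part: Partitioner,
      getPartitionKey: InternalRow => InternalRow): RDD[(Int, InternalRow)] = {
    rows.mapPartitions { iter =>
      iter.map(row => (part.getPartition(getPartitionKey(row)), row.copy()))
    }
  }
}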

@SparkQA commented Jul 17, 2015

Test build #37562 has finished for PR 7456 at commit 8dd3ff2.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds the following public classes (experimental):
    • class ShuffledRowRDD(
    • class UnsafeRowSerializer(numFields: Int) extends Serializer

import org.apache.spark.unsafe.PlatformDependent


class UnsafeRowSerializer(numFields: Int) extends Serializer {
Contributor:

this is a file that deserves a little bit more comment to explain what this is for.

JoshRosen (Contributor Author):

Yep. I'm going to add comments now.

@rxin (Contributor) commented Jul 17, 2015

The current code looks pretty good to me.

@JoshRosen (Contributor Author)

I'm going to rebase this on top of #7482 to make things easier to test; will rebase again once that patch is merged.

@JoshRosen (Contributor Author)

Alright, I've updated this and it should be ready for another look. Added a very trivial test to trigger the new shuffle path and caught a bug related to UnsafeRowSerializer not being Serializable.

@JoshRosen changed the title from "[SPARK-9023] [SQL] [WIP] Efficiency improvements for UnsafeRows in Exchange" to "[SPARK-9023] [SQL] Efficiency improvements for UnsafeRows in Exchange" on Jul 18, 2015
@JoshRosen (Contributor Author)

Jenkins, retest this please.

@SparkQA commented Jul 18, 2015

Test build #37736 has finished for PR 7456 at commit 0082515.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds the following public classes (experimental):
    • case class Concat(children: Seq[Expression]) extends Expression with ImplicitCastInputTypes
    • class ShuffledRowRDD(

@SparkQA commented Jul 18, 2015

Test build #37738 has finished for PR 7456 at commit 0082515.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds the following public classes (experimental):
    • case class Concat(children: Seq[Expression]) extends Expression with ImplicitCastInputTypes
    • class ShuffledRowRDD(

@rxin (Contributor) commented Jul 18, 2015

Can you also update the pull request description?

@JoshRosen (Contributor Author)

Done; I removed the "work-in-progress" part.

@JoshRosen (Contributor Author)

Huh, looks like a legitimate test failure in SparkSqlSerializer2SortMergeShuffleSuite:

org.apache.spark.rdd.MapPartitionsRDD cannot be cast to org.apache.spark.rdd.ShuffledRDD

@SparkQA commented Jul 19, 2015

Test build #37744 has finished for PR 7456 at commit 7e75259.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds the following public classes (experimental):
    • class ShuffledRowRDD(

this
}
override def writeKey[T: ClassTag](key: T): SerializationStream = {
assert(key.isInstanceOf[Int])
Contributor:

you need to add some comment explaining why we are not doing anything when writing keys.
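
One way the requested comment might read, as a fragment of the serializer's SerializationStream implementation (the method shape follows the diff above; the rationale wording here is only a sketch):

override def writeKey[T: ClassTag](key: T): SerializationStream = {
  // The key is just the partition id: it was only needed on the map side to
  // route the record to the right shuffle partition, and the shuffle machinery
  // has already consumed it, so nothing is written to the stream for keys.
  assert(key.isInstanceOf[Int])
  this
}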

var dataRemaining: Int = row.getSizeInBytes
val baseObject = row.getBaseObject
var rowReadPosition: Long = row.getBaseOffset
while (dataRemaining > 0) {
Contributor:

probably doesn't matter in the MVP, but if we know the UnsafeRow is backed by a byte array, we don't need to do this copying, do we?

JoshRosen (Contributor Author):

A nice way to address this would be to add a writeTo method to UnsafeRow itself. That method could contain a special case to handle the case where the row is backed by an on-heap byte array.

Contributor:

+1
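
A sketch of the writeTo-style method suggested above, written here as a standalone helper over the getters shown in the diff (the follow-up commits below do add a writeToStream() method to UnsafeRow itself; the helper name and buffer handling here are illustrative):

import java.io.OutputStream

import org.apache.spark.sql.catalyst.expressions.UnsafeRow
import org.apache.spark.unsafe.PlatformDependent

object UnsafeRowWriteSketch {
  // Copy a row's backing bytes to an output stream. When the row is backed by
  // an on-heap byte array, write directly from that array; otherwise fall back
  // to chunked copies through a caller-supplied scratch buffer.
  def writeRowToStream(row: UnsafeRow, out: OutputStream, writeBuffer: Array[Byte]): Unit = {
    row.getBaseObject match {
      case bytes: Array[Byte] =>
        val offsetInArray = (row.getBaseOffset - PlatformDependent.BYTE_ARRAY_OFFSET).toInt
        out.write(bytes, offsetInArray, row.getSizeInBytes)
      case _ =>
        var dataRemaining: Int = row.getSizeInBytes
        var rowReadPosition: Long = row.getBaseOffset
        while (dataRemaining > 0) {
          val toTransfer = math.min(writeBuffer.length, dataRemaining)
          PlatformDependent.copyMemory(
            row.getBaseObject, rowReadPosition,
            writeBuffer, PlatformDependent.BYTE_ARRAY_OFFSET,
            toTransfer)
          out.write(writeBuffer, 0, toTransfer)
          rowReadPosition += toTransfer
          dataRemaining -= toTransfer
        }
    }
  }
}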

@rxin (Contributor) commented Jul 20, 2015

Looks pretty good. I'm going to merge it. Please submit a followup pr to address some of the comments on documentation and choice of buffer size.

@asfgit closed this in 79ec072 on Jul 20, 2015
asfgit pushed a commit that referenced this pull request Jul 21, 2015
…safeRows in Exchange)

This patch addresses code review feedback from #7456.

Author: Josh Rosen <[email protected]>

Closes #7551 from JoshRosen/unsafe-exchange-followup and squashes the following commits:

76dbdf8 [Josh Rosen] Add comments + more methods to UnsafeRowSerializer
3d7a1f2 [Josh Rosen] Add writeToStream() method to UnsafeRow
@JoshRosen deleted the unsafe-exchange branch on December 29, 2015 at 20:25