
[SPARK-19319][SparkR] SparkR KMeans summary returns an error when the cluster size doesn't equal k #16666

15 changes: 9 additions & 6 deletions R/pkg/R/mllib_clustering.R
@@ -225,10 +225,12 @@ setMethod("spark.kmeans", signature(data = "SparkDataFrame", formula = "formula"

 #' @param object a fitted k-means model.
 #' @return \code{summary} returns summary information of the fitted model, which is a list.
-#' The list includes the model's \code{k} (number of cluster centers),
+#' The list includes the model's \code{k} (the configured number of cluster centers),
 #' \code{coefficients} (model cluster centers),
-#' \code{size} (number of data points in each cluster), and \code{cluster}
-#' (cluster centers of the transformed data).
+#' \code{size} (number of data points in each cluster), \code{cluster}
+#' (cluster centers of the transformed data), and \code{clusterSize}
+#' (the actual number of cluster centers. When using initMode = "random",
Member: let's add is.loaded here

Contributor Author: OK, I will add it. For bisecting k-means, I haven't found a case like this. It only occurs when initMode is "random", and this behavior was due to a fix to the kmeans implementation.
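The failure mode described above — random initialization leaving some of the k requested clusters empty, so fewer than k centers survive — can be sketched outside Spark with a toy 1-D Lloyd iteration in plain NumPy. This is an illustrative assumption, not Spark's actual code: the helper name `lloyd_drop_empty` and the duplicated initial centers are invented for the sketch.

```python
import numpy as np

def lloyd_drop_empty(points, centers, iters=10):
    """Toy 1-D Lloyd iteration that drops clusters left empty.

    If random init samples the same data point twice, two centers
    coincide, one of them receives no points, and the number of
    surviving centers ends up smaller than the requested k.
    """
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        # distance of every point to every center (1-D data)
        dist = np.abs(points[:, None] - centers[None, :])
        labels = dist.argmin(axis=1)  # ties go to the lowest index
        # keep only clusters that received at least one point
        kept = [c for c in range(len(centers)) if (labels == c).any()]
        centers = np.array([points[labels == c].mean() for c in kept])
    return centers

points = np.array([0.0, 0.0, 10.0])
# "random" init happened to pick the duplicated point twice
centers = lloyd_drop_empty(points, centers=[0.0, 0.0, 10.0])
print(len(centers))  # 2 surviving centers although k = 3 was requested
```

This is why the summary has to report the surviving count (`clusterSize`) separately from the configured `k`.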

+#' \code{clusterSize} may not equal to \code{k}).
 #' @rdname spark.kmeans
 #' @export
 #' @note summary(KMeansModel) since 2.0.0
@@ -240,16 +242,17 @@ setMethod("summary", signature(object = "KMeansModel"),
     coefficients <- callJMethod(jobj, "coefficients")
     k <- callJMethod(jobj, "k")
     size <- callJMethod(jobj, "size")
-    coefficients <- t(matrix(coefficients, ncol = k))
+    clusterSize <- callJMethod(jobj, "clusterSize")
+    coefficients <- t(matrix(coefficients, ncol = clusterSize))
     colnames(coefficients) <- unlist(features)
-    rownames(coefficients) <- 1:k
+    rownames(coefficients) <- 1:clusterSize
     cluster <- if (is.loaded) {
       NULL
     } else {
       dataFrame(callJMethod(jobj, "cluster"))
     }
     list(k = k, coefficients = coefficients, size = size,
-         cluster = cluster, is.loaded = is.loaded)
+         cluster = cluster, is.loaded = is.loaded, clusterSize = clusterSize)
   })
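The `ncol` change above matters because `callJMethod(jobj, "coefficients")` hands back the centers as one flat vector, and R's `matrix()` fills it column by column: reshaping with the configured `k` instead of the actual number of surviving centers fails when the two differ. A NumPy sketch of the same column-major reshape-and-transpose trick, with the numbers invented purely for illustration:

```python
import numpy as np

# Cluster centers arrive as a single flat vector; with 3 surviving
# centers and 2 features there are 6 numbers (invented here).
flat = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
cluster_size, n_features = 3, 2

# R's t(matrix(flat, ncol = clusterSize)) fills column-major and then
# transposes; the NumPy analogue is a Fortran-order reshape plus .T
centers = flat.reshape((n_features, cluster_size), order="F").T
print(centers.shape)  # (3, 2): one row per surviving center
```

Passing `k` where `cluster_size` belongs is exactly the mismatch the original `summary` hit when initMode = "random" produced fewer centers than requested.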

# Predicted values based on a k-means model
15 changes: 11 additions & 4 deletions R/pkg/inst/tests/testthat/test_mllib_clustering.R
@@ -145,13 +145,20 @@ test_that("spark.kmeans", {
   model2 <- spark.kmeans(data = df, ~ ., k = 5, maxIter = 10,
                          initMode = "random", seed = 22222, tol = 1E-5)

-  fitted.model1 <- fitted(model1)
-  fitted.model2 <- fitted(model2)
+  summary.model1 <- summary(model1)
+  summary.model2 <- summary(model2)
+  cluster1 <- summary.model1$cluster
+  cluster2 <- summary.model2$cluster
+  clusterSize1 <- summary.model1$clusterSize
+  clusterSize2 <- summary.model2$clusterSize

   # The predicted clusters are different
-  expect_equal(sort(collect(distinct(select(fitted.model1, "prediction")))$prediction),
+  expect_equal(sort(collect(distinct(select(cluster1, "prediction")))$prediction),
                c(0, 1, 2, 3))
-  expect_equal(sort(collect(distinct(select(fitted.model2, "prediction")))$prediction),
+  expect_equal(sort(collect(distinct(select(cluster2, "prediction")))$prediction),
                c(0, 1, 2))
+  expect_equal(clusterSize1, 4)
+  expect_equal(clusterSize2, 3)
 })

test_that("spark.lda with libsvm", {
@@ -43,6 +43,8 @@ private[r] class KMeansWrapper private (

   lazy val cluster: DataFrame = kMeansModel.summary.cluster

+  lazy val clusterSize: Int = kMeansModel.clusterCenters.size
+
   def fitted(method: String): DataFrame = {
     if (method == "centers") {
       kMeansModel.summary.predictions.drop(kMeansModel.getFeaturesCol)