[Minor][DOC] Add JavaStreamingTestExample #11776

Closed · wants to merge 6 commits
Changes from 5 commits
7 changes: 7 additions & 0 deletions docs/mllib-statistics.md
@@ -544,6 +544,13 @@ provides streaming hypothesis testing.

{% include_example scala/org/apache/spark/examples/mllib/StreamingTestExample.scala %}
</div>

<div data-lang="java" markdown="1">
[`StreamingTest`](api/java/index.html#org.apache.spark.mllib.stat.test.StreamingTest)
provides streaming hypothesis testing.

{% include_example java/org/apache/spark/examples/mllib/JavaStreamingTestExample.java %}
</div>
</div>


JavaStreamingTestExample.java (new file)
@@ -0,0 +1,123 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.examples.mllib;


import org.apache.spark.Accumulator;
// $example on$
import org.apache.spark.api.java.function.VoidFunction;
Contributor
Don't think this import is actually required, as that code is after the final $example off$?

Contributor Author
@MLnick Thanks, I will remove it.
// $example off$
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
// $example on$
import org.apache.spark.mllib.stat.test.BinarySample;
Contributor
Please add // $example on$ and // $example off$ for the required imports.

Contributor Author
@MLnick OK, I will add the required imports according to your comments.
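A possible arrangement of the imports after applying that comment (a sketch of what the follow-up commit might look like, not taken from the merged code): only the classes referenced between the example markers stay inside // $example on$ / // $example off$, while VoidFunction and JavaRDD, which are used only by the timeout logic further down, move outside.

// $example on$
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.stat.test.BinarySample;
import org.apache.spark.mllib.stat.test.StreamingTest;
import org.apache.spark.mllib.stat.test.StreamingTestResult;
import org.apache.spark.streaming.api.java.JavaDStream;
// $example off$
// Used only by the timeout check, which sits outside the example markers.
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.VoidFunction;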

import org.apache.spark.mllib.stat.test.StreamingTest;
import org.apache.spark.mllib.stat.test.StreamingTestResult;
// $example off$
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.Seconds;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.util.Utils;


/**
* Perform streaming testing using Welch's 2-sample t-test on a stream of data, where the data
* stream arrives as text files in a directory. Stops when the two groups are statistically
* significant (p-value < 0.05) or after a user-specified timeout in number of batches is exceeded.
*
* The rows of the text files must be in the form `Boolean, Double`. For example:
* false, -3.92
* true, 99.32
*
* Usage:
* JavaStreamingTestExample <dataDir> <batchDuration> <numBatchesTimeout>
*
* To run on your local machine using the directory `dataDir` with 5 seconds between each batch and
* a timeout after 100 insignificant batches, call:
* $ bin/run-example mllib.JavaStreamingTestExample dataDir 5 100
*
* As you add text files to `dataDir` the significance test will continually update every
* `batchDuration` seconds until the test becomes significant (p-value < 0.05) or the number of
* batches processed exceeds `numBatchesTimeout`.
*/
public class JavaStreamingTestExample {
public static void main(String[] args) {
if (args.length != 3) {
System.err.println("Usage: JavaStreamingTestExample " +
"<dataDir> <batchDuration> <numBatchesTimeout>");
System.exit(1);
}

String dataDir = args[0];
Duration batchDuration = Seconds.apply(Long.valueOf(args[1]));
int numBatchesTimeout = Integer.valueOf(args[2]);

SparkConf conf = new SparkConf().setMaster("local").setAppName("StreamingTestExample");
JavaStreamingContext ssc = new JavaStreamingContext(conf, batchDuration);

ssc.checkpoint(Utils.createTempDir(System.getProperty("java.io.tmpdir"), "spark").toString());

// $example on$
JavaDStream<BinarySample> data = ssc.textFileStream(dataDir).map(
new Function<String, BinarySample>() {
@Override
public BinarySample call(String line) throws Exception {
String[] ts = line.split(",");
boolean label = Boolean.valueOf(ts[0]);
double value = Double.valueOf(ts[1]);
return new BinarySample(label, value);
}
});

StreamingTest streamingTest = new StreamingTest()
.setPeacePeriod(0)
.setWindowSize(0)
.setTestMethod("welch");

JavaDStream<StreamingTestResult> out = streamingTest.registerStream(data);
out.print();
// $example off$

// Stop processing if test becomes significant or we time out
final Accumulator<Integer> timeoutCounter =
ssc.sparkContext().accumulator(numBatchesTimeout);

out.foreachRDD(new VoidFunction<JavaRDD<StreamingTestResult>>() {
@Override
public void call(JavaRDD<StreamingTestResult> rdd) throws Exception {
timeoutCounter.add(-1);

long cntSignificant = rdd.filter(new Function<StreamingTestResult, Boolean>() {
@Override
public Boolean call(StreamingTestResult v) throws Exception {
Member
There are some minor style sub-optimalities here, like some indentation issues, unneeded "throws Exception", and some variances from the Scala example. No big deal, but this could have been left open for more than a couple of hours to get some eyes on it.

Contributor
Sorry Sean - I admit I made a fairly quick pass. What variances from the Scala example do you see?

Member
For example: I see why this call doesn't use "fold", but it could have been !rdd.filter(...).isEmpty(); you don't need an arg to createTempDir; timeoutCounter actually shouldn't be an accumulator here. The only one that isn't trivial is the last one.

Contributor
Utils.createTempDir uses default args, so the args are required when calling it from Java.

Member
Ah right, scratch that one. The only thing that might be an issue here is counting with the accumulator. It's all actually local, so I'm guessing it works OK anyway. It could be an AtomicInteger or anything that can be decremented and referred to inside the function, which will only ever run on the driver.

Contributor
Fair enough - if one tries to use a local variable similar to the Scala example, it won't compile since it needs a final variable; maybe this was the workaround. Agreed that AtomicInteger is cleaner. Happy to clean that up.

return v.pValue() < 0.05;
}
}).count();

if (timeoutCounter.value() <= 0 || cntSignificant > 0) {
rdd.context().stop();
}
}
});

ssc.start();
ssc.awaitTermination();
}
}
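Following the review discussion above, a follow-up cleanup might replace the Accumulator with a plain java.util.concurrent.atomic.AtomicInteger (the foreachRDD body only ever runs on the driver) and check for a significant result with isEmpty() instead of count(). A sketch of that variant, not part of this PR's commits; it assumes an added import of java.util.concurrent.atomic.AtomicInteger:

// Effectively final, so it can be captured by the anonymous VoidFunction;
// it is only ever read and decremented on the driver.
final AtomicInteger timeoutCounter = new AtomicInteger(numBatchesTimeout);

out.foreachRDD(new VoidFunction<JavaRDD<StreamingTestResult>>() {
  @Override
  public void call(JavaRDD<StreamingTestResult> rdd) {
    // Significant as soon as any result in this batch has p-value < 0.05.
    boolean anySignificant = !rdd.filter(new Function<StreamingTestResult, Boolean>() {
      @Override
      public Boolean call(StreamingTestResult v) {
        return v.pValue() < 0.05;
      }
    }).isEmpty();

    if (timeoutCounter.decrementAndGet() <= 0 || anySignificant) {
      rdd.context().stop();
    }
  }
});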