[SPARK-1522] : YARN ClientBase throws a NPE if there is no YARN Application CP #433

Closed
wants to merge 1 commit into from

Conversation

berngp
Contributor

@berngp berngp commented Apr 17, 2014

The current implementation of ClientBase.getDefaultYarnApplicationClasspath inspects
the MRJobConfig class for the field DEFAULT_YARN_APPLICATION_CLASSPATH when it should
really be looking into YarnConfiguration. If the application configuration has no
yarn.application.classpath defined, an NPE is thrown.

Additional Changes include:

  • Test Suite for ClientBase added

[ticket: SPARK-1522] : https://issues.apache.org/jira/browse/SPARK-1522

Author : [email protected]
Testing : SPARK_HADOOP_VERSION=2.3.0 SPARK_YARN=true ./sbt/sbt test
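
For context, a minimal sketch of the reflective lookup this fix is about: read yarn.application.classpath from the configuration and, if it is not set, fall back to YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH without letting a missing field surface as an NPE. The helper name and shape are illustrative, not the PR's exact code.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.yarn.conf.YarnConfiguration

// Illustrative sketch: prefer the configured yarn.application.classpath and, if absent,
// read YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH reflectively so that an
// older Hadoop build without that field yields None instead of an NPE or NoSuchField*.
def yarnAppClasspath(conf: Configuration): Option[Seq[String]] = {
  val configured = Option(conf.getStrings(YarnConfiguration.YARN_APPLICATION_CLASSPATH))
    .filter(_.nonEmpty)
    .map(_.toSeq)
  configured.orElse {
    try {
      val field = classOf[YarnConfiguration].getField("DEFAULT_YARN_APPLICATION_CLASSPATH")
      Option(field.get(null).asInstanceOf[Array[String]]).map(_.toSeq)
    } catch {
      case _: NoSuchFieldException => None
      case _: NoSuchFieldError     => None
    }
  }
}
```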

@AmplabJenkins

Can one of the admins verify this patch?

@@ -52,7 +52,7 @@ object SparkBuild extends Build {
val SCALAC_JVM_VERSION = "jvm-1.6"
val JAVAC_JVM_VERSION = "1.6"

lazy val root = Project("root", file("."), settings = rootSettings) aggregate(allProjects: _*)
Contributor

Why change this?

Contributor Author

When importing into IntelliJ IDEA as an SBT project it uses the names of the projects, and "root" lacks a bit of context. I presume the usage of the word "root" was based on the SBT multi-module example and lacks a real reason.

Contributor

So better?

lazy val spark = Project("spark", file("."), settings = rootSettings) aggregate(allProjects: _*)

@mridulm
Contributor

mridulm commented Apr 17, 2014

Most of the changes in the diff look unrelated to what is mentioned in the summary.
In addition, they introduce additional bugs.

Please clean up the diffs and include only what is required to fix the issue, without unrelated changes.

@berngp berngp changed the title [SPARK-1522] : YARN ClientBase throws a NPE if there is no YARN applicat... [SPARK-1522] : YARN ClientBase throws a NPE if there is no YARN Application CP Apr 17, 2014
@berngp
Contributor Author

berngp commented Apr 17, 2014

@mridulm reverted the changes not related to the issue.

} catch {
case err: NoSuchFieldError => null
case err: NoSuchFieldException => null
protected[yarn] def getAppClasspathForKey(key:String, conf:Configuration)

Contributor Author

Changed to private.

@andrewor14
Contributor

@berngp Thanks for doing this. I literally ran into this NPE yesterday in my own YARN cluster. It turns out I forgot to point YARN_CONF_DIR to the proper place, but running into an NPE did not leave any clue as to what the problem was (until I dug into the code, which is bad user experience). This PR is a much needed fix.

I left a couple of comments. As @tgraves mentioned, the style of this PR is inconsistent with the Spark style guide. Further, it would be good if we could remove several levels of indirection to make the code clearer.

protected[yarn] def getMRAppClasspath(conf: Configuration) =
getAppClasspathForKey("mapreduce.application.classpath", conf)(getDefaultMRApplicationClasspath)

protected[yarn] def addToAppClasspath(env: HashMap[String, String], elements : Iterable[String]) {
Contributor

Also, in Spark we try not to use protected[*], since the semantics of that aren't intuitive at all. I think these can just be private?
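
For illustration only, a small sketch of why protected[yarn] reads as unintuitive: with the package qualifier it grants access to everything under the yarn package (plus subclasses), which is broader than it looks. The class and object names below are made up.

```scala
package org.apache.spark.deploy.yarn {

  class ClientHelpers {
    protected[yarn] def appClasspathKey: String = "yarn.application.classpath" // whole yarn package + subclasses
    private def internalOnly: String = "visible only inside ClientHelpers"
  }

  object Elsewhere {
    // Any code under the yarn package can reach the protected[yarn] member:
    def key: String = new ClientHelpers().appClasspathKey
    // new ClientHelpers().internalOnly  // would not compile: private to ClientHelpers
  }
}
```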

@berngp
Contributor Author

berngp commented Apr 18, 2014

@andrewor14 thanks for the review and feedback!

@tgravescs
Contributor

Jenkins, test this please

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14236/

@berngp
Contributor Author

berngp commented Apr 18, 2014

Not convinced that the build should be failing due to hive/test taking too long, at least in Travis CI.

@andrewor14
Contributor

Don't worry about Travis for now

File.pathSeparator)
}
classPathElementsToAdd
Contributor Author

Since populateHadoopClasspath has a side effect on env, by convention I am used to returning the actual side effect that was applied. It aids testing/asserting the expected side effect for consumers of the API, and since populateHadoopClasspath is part of the public API of the ClientBase object I think it is a good idea to provide it.

Contributor Author

Better yet, @andrewor14, I could change populateHadoopClasspath to avoid the side effect and return the new env. I don't see any consumer of populateHadoopClasspath outside of ClientBase.
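
A rough sketch of that side-effect-free variant, purely for illustration (the names and the plain immutable Map stand in for whatever env representation ClientBase actually uses):

```scala
import java.io.File

// Illustrative: build and return a new environment map instead of mutating `env`,
// so callers and tests can assert on the returned value directly.
def withHadoopClasspath(
    env: Map[String, String],
    classpathEntries: Seq[String]): Map[String, String] = {
  val existing = env.get("CLASSPATH").filter(_.nonEmpty)
  val updated = (existing.toSeq ++ classpathEntries).mkString(File.pathSeparator)
  env + ("CLASSPATH" -> updated)
}
```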

Contributor

Oops sorry I accidentally deleted my comment "No need to return this"

Contributor

Yeah, I also did a quick grep for it and I didn't see any usages outside of ClientBase. I didn't realize it was a public API; it seems super strange to me that this returns the classpaths, since this is more like a setter than a getter. I think for now it's OK to leave the public API the same, as you suggest.

Contributor Author

It is strange, public APIs with side effects make me anxious.

@andrewor14
Contributor

@berngp This is looking good. I will do a quick test of this on a YARN cluster, and provided that I don't run into anything I think this is good to go.

@berngp
Contributor Author

berngp commented Apr 18, 2014

Thank you @andrewor14, and again I appreciate very much all the feedback.

}
}

def getDefaultYarnApplicationClasspath: Option[Array[String]] = Try[Array[String]] {
Contributor

By the way, I just noticed that it looks like we're changing the public API here. It used to return Array[String], but now it returns an Option. A second point is that using Try here changes the semantics a little bit: before, we propagated any exception that's not NoSuchField*, but now we swallow everything.

I think we should revert this method to what it was before, and have the caller deal with the fact that this can potentially be null.
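
For illustration, a sketch of what "have the caller deal with it" could look like if the method keeps its original Array[String] signature and may return null; the helper name is made up:

```scala
// Illustrative caller-side guard: treat a null default classpath as empty and
// leave room for a log message explaining why, instead of hitting an NPE later.
def defaultClasspathOrEmpty(raw: Array[String]): Seq[String] =
  Option(raw) match {
    case Some(entries) => entries.toSeq
    case None =>
      // a logWarning("No default application classpath found") would go here
      Seq.empty[String]
  }
```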

pwendell added a commit to pwendell/spark that referenced this pull request May 12, 2014
@berngp
Contributor Author

berngp commented May 16, 2014

@tgravescs any thoughts around this, should I just close this pull request?

@tgravescs
Contributor

@berngp sorry for the delay, I think everyone has been busy with the Spark 1.0 release. I think this should still be fixed in Spark 1.1. Let's leave this PR as is and I'll review it this week.

@tgravescs
Contributor

There is still a lot going on with Spark 1.0. I'm going to wait for that to settle down and then review this.

@tgravescs
Contributor

@berngp can you upmerge to the latest master?

@berngp
Contributor Author

berngp commented Jun 4, 2014

@tgravescs done, thank you again for following this PR.

object Fixtures {

val knownDefYarnAppCP: Option[Seq[String]] =
Some(YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH)
Contributor

DEFAULT_YARN_APPLICATION_CLASSPATH doesn't exist in hadoop 0.23, so we can't use it directly. That is why we use reflection in ClientBase.

Contributor Author

Thanks for catching this.

I will wrap the call in a Try. e.g.

Try(YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH).toOption

@tgravescs
Contributor

Mostly looks good. Fix those couple minor test issues for hadoop 0.23 and I'll commit it.

…ation specific CP

The current implementation of ClientBase.getDefaultYarnApplicationClasspath inspects
the MRJobConfig class for the field DEFAULT_YARN_APPLICATION_CLASSPATH when it should
really be looking into YarnConfiguration. If the application configuration has no
yarn.application.classpath defined, an NPE is thrown.

Public API Changes
===========================

* YARN ClientBase getDefault*ApplicationClasspath returns Option[Seq[String]]

This commit depicts how the `ClientBase` API could change the
`getDefaultYarnApplicationClasspath` and `getDefaultMRApplicationClasspath`
methods to return an `Option[Seq[String]]` while recovering from `NoSuchFieldException`.

Both methods that return the default application *classpath*, for both *YARN*
and *Map Reduce (MR)*, use reflection and, per the Java API
documentation, they can throw the following exceptions:

* Class:getField(String name):
    * NoSuchFieldException - if a field with the specified name is not found.
    * NullPointerException - if name is null
    * SecurityException - If a security manager, s, is present and any of
      the following conditions is met:
        1. Invocation of s.checkMemberAccess(this, Member.PUBLIC) denies
        access to the field.
        2. The caller's class loader is not the same as or an ancestor of
        the class loader for the current class and invocation of
        `s.checkPackageAccess()` denies access to the package of this class.

* Field:Object get(Object obj):
    * IllegalAccessException - if this Field object is enforcing Java language access
      control and the underlying field is inaccessible.
    * IllegalArgumentException - if the specified object is not an instance of the class
      or interface declaring the underlying field (or a subclass or implementor thereof).
    * NullPointerException - if the specified object is null and the field
      is an instance field.
    * ExceptionInInitializerError - if the initialization provoked
      by this method fails.

**NOTE**: The above is based on the *Java API for JDK 1.7*

An interesting thing to notice is that the official JDK documentation doesn't mention
the occurrence of `NoSuchFieldError`. This is completely acceptable
per the JDK spec. The reason is that it is an *Error*, as
described by the Java Language Specification and depicted in
the *Error* class documentation:

    An `Error` "indicates serious problems that a reasonable
    application should not try to catch."

While

    An `Exception` "indicates conditions that a reasonable
    application might want to catch."

If we actually dig deeper, according to the *JVM SE7 Specification*:

    "While Loading, Linking, and Initializing, if an error occurs during resolution
    of a symbolic reference, then an instance of
    IncompatibleClassChangeError (or a subclass) must be thrown..."

    "If an attempt by the Java Virtual Machine to resolve a symbolic reference fails
    because an error is thrown that is an instance of LinkageError (or a subclass),
    then subsequent attempts to resolve the reference always fail with the same error
    that was thrown as a result of the initial resolution attempt."

Now `NoSuchFieldError` extends `IncompatibleClassChangeError`, which in turn is a `LinkageError`,
and according to its documentation, the *LinkageError* class

    "indicates that a class has some dependency on another class;
    however, the latter class has incompatibly changed after the
    compilation of the former class."

Why is all this important, and how does it relate to a couple of lines of
code?

Well, the original approach catches the two most probable problems
you might encounter when using reflection to access a field that, if it
exists, you are almost sure will be of _public_ access, but that you are
not sure will always be there. Interestingly enough, the original
implementation addresses one of the _exceptions_ as well as a potential
_linkage error_, but as mentioned it neglects a documented _security exception_,
probably due to its unlikeliness to occur.

The fact is that if an error _bubbles_ up that the Spark YARN Client doesn't handle,
it will terminate, probably in an obscure fashion. The current call stack is
as follows.

    Client >> run >> runApp >> ClientBase.setupLaunchEnv >> populateClasspath

In my opinion it is questionable to let an exception escape this context, the _ClientBase_ object.
Such a _ClientBase_ object should fail gracefully by handling the potential
_exceptions_ and _linkage error_ while providing enough logging
to let a user know and identify what happened. Yet again, in my opinion
the implementation in this commit handles this in a better, more
resilient manner than the previous implementation, while adding logging
that will help clarify the issue in case of an _exception_.

Additional Changes include:
===========================

* Test Suite for ClientBase added
* Coding Style:
    * [Spark Style Guidelines](https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide)
    * [Scala Official Style Guidelines](http://docs.scala-lang.org/style/)
    * [Scalariform](https://github.com/mdr/scalariform)
* Code refactoring and cleanup per review by andrewor14

Ref.
    "JVM SE7 Specification" http://docs.oracle.com/javase/specs/jvms/se7/html/jvms-5.html#jvms-5.4
    "Java API for JDK 1.7" http://docs.oracle.com/javase/7/docs/api/

[ticket: SPARK-1522] : https://issues.apache.org/jira/browse/SPARK-1522

Author      : berngp
Reviewer    : andrewor14, tgravescs
Testing     : ?

def flatten(a: Option[Seq[String]], b: Option[Seq[String]]) = (a ++ b).flatten.toArray

def getFieldValue[A, B](clazz: Class[_], field: String, defaults: => B)(mapTo: A => B): B =
Contributor Author

@tgravescs tested for Hadoop 0.23 and 2.4.0. A bit silly that I made such a mistake when the main issue was actually accessing such fields through reflection within the ClientBase class.
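
For reference, a minimal sketch of a getFieldValue-style helper along the lines of the excerpt above, assuming Try-based recovery when the field does not exist; this is an approximation for illustration, not necessarily the merged implementation:

```scala
import scala.util.Try

// Illustrative: read a public static field reflectively, map it to the desired type,
// and fall back to `defaults` when the field is missing or inaccessible.
def getFieldValue[A, B](clazz: Class[_], field: String, defaults: => B)(mapTo: A => B): B =
  Try(clazz.getField(field).get(null).asInstanceOf[A]).map(mapTo).getOrElse(defaults)

// Hypothetical usage against YarnConfiguration:
// val defaultYarnCP: Option[Seq[String]] =
//   getFieldValue[Array[String], Option[Seq[String]]](
//     classOf[org.apache.hadoop.yarn.conf.YarnConfiguration],
//     "DEFAULT_YARN_APPLICATION_CLASSPATH",
//     None)(arr => Some(arr.toSeq))
```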

@berngp
Contributor Author

berngp commented Jun 9, 2014

@tgravescs I fixed the test, squashed the commit and repointed the branch. The latest commit addresses the test failure you were seeing while running it for Hadoop 0.23.

@asfgit asfgit closed this in e273447 Jun 9, 2014
@tgravescs
Contributor

Looks good. Thanks @berngp

pdeyhim pushed a commit to pdeyhim/spark-1 that referenced this pull request Jun 25, 2014
…cation CP

The current implementation of ClientBase.getDefaultYarnApplicationClasspath inspects
the MRJobConfig class for the field DEFAULT_YARN_APPLICATION_CLASSPATH when it should
really be looking into YarnConfiguration. If the application configuration has no
yarn.application.classpath defined, an NPE is thrown.

Additional Changes include:
* Test Suite for ClientBase added

[ticket: SPARK-1522] : https://issues.apache.org/jira/browse/SPARK-1522

Author      : [email protected]
Testing     : SPARK_HADOOP_VERSION=2.3.0 SPARK_YARN=true ./sbt/sbt test

Author: Bernardo Gomez Palacio <[email protected]>

Closes apache#433 from berngp/feature/SPARK-1522 and squashes the following commits:

2c2e118 [Bernardo Gomez Palacio] [SPARK-1522]: YARN ClientBase throws a NPE if there is no YARN Application specific CP
berngp added a commit to ThalesGroup/spark that referenced this pull request Jul 22, 2014
xiliu82 pushed a commit to xiliu82/spark that referenced this pull request Sep 4, 2014
andrewor14 pushed a commit to andrewor14/spark that referenced this pull request Jan 8, 2015
Updated Debian packaging
(cherry picked from commit 494d3c0)

Signed-off-by: Patrick Wendell <[email protected]>
markhamstra pushed a commit to markhamstra/spark that referenced this pull request Nov 7, 2017
This fixes local integration testing
bzhaoopenstack pushed a commit to bzhaoopenstack/spark that referenced this pull request Sep 11, 2019
arjunshroff pushed a commit to arjunshroff/spark that referenced this pull request Nov 24, 2020
Agirish pushed a commit to HPEEzmeral/apache-spark that referenced this pull request May 5, 2022
RolatZhang added a commit to RolatZhang/spark that referenced this pull request Aug 15, 2022
AL-5546 upgrade commons-io,mesos,kafka-client,protobuf-java,gson,guava version
udaynpusa pushed a commit to mapr/spark that referenced this pull request Jan 30, 2024
…cript

K8S-1077 (apache#598)

* K8S-1077 - use single k8s secret with user info

MapR [SPARK-651] Replacing joda-time-*.jar with joda-time-2.10.3.jar.

MapR [SPARK-638] Wrong permissions when creating files under directory
with GID bit set.

MapR [SPARK-627] SparkHistoryServer-2.4 is getting 403 Unauthorized home page for users(spark.ui.view.acls) via spark-submit

MapR [SPARK-639] Default headers are adding two times

MapR [SPARK-629] Spark UI for job lose CSS styles

MapR [MS-925] After upgrade to MEP 6.2 (Spark 2.4.0) can no longer
consume Kafka / MapR Streams.

MapR [SPARK-626] Update kafka dependencies for Spark 2.4.4.0 in release MEP-6.3.0

MapR [SPARK-340] Jetty web server version at Spark should be updated tp v9.4.X

MapR [SPARK-617] an't use ssl via spark beeline

MapR [SPARK-617] Can't use ssl via spark beeline

MapR [SPARK-620] Replace core dependency in Spark-2.4.4

MapR [SPARK-621] Fix multiple XML configuration initialization for (apache#575)

custom headers. Use X-XSS-Protection, X-Content-Type-Options
Content-Security-Policy and Strict-Transport-Security configuration
only in case: cluster security is enabled OR
spark.ui.security.headers.enabled set to true.

MapR [SPARK-595] Spark cannot access hs2 through zookeeper

Revert "MapR [SPARK-595] Spark cannot access hs2 through zookeeper (apache#577)"

MapR [SPARK-595] Spark cannot access hs2 through zookeeper

MapR [SPARK-620] Replace core dependency in Spark-2.4.

MapR [SPARK-619] Move absent commits from 2.4.3 branch to 2.4.4 (apache#574)

* Adding SQL API to write to kafka from Spark (apache#567)

* Branch 2.4.3 extended kafka and examples (apache#569)

* The v2 API is in its own package

- the v2 api is in a different package
- the old functionality is available in a separated package

* v2 API examples

- All the examples are using the newest API.
- I have removed the old examples since they are not relevant any more and the same functionality is shown in the new examples usin the new API.

* MapR [SPARK-619] Move absent commits from 2.4.3 branch to 2.4.4

CORE-321. Add custom http header support for jetty.

MapR [SPARK-609] Port Apache Spark-2.4.4 changes to the MapR Spark-2.4.4 branch

Adding multi table loader (apache#560)

* Adding multi table loader

- This allows us to load multiple matching tables into one Union DataFrame.

If we have the fallowing MFS structure:

```
/clients/client_1/data.table
/clients/client_2/data.table
```
we can load a union dataframe by doing `loadFromMapRDB("/clients/*/*.table")`

* Fixing the path to the reader

MapR [SPARK-588] Spark thriftserver fails when work with hive-maprdb json table

MapR [SPARK-598] Spark can't add needed properties to hive-site.xml

MAPR-SPARK-596: Change HBase compatible version for Spark 2.4.3

MapR [SPARK-592] Add possibility to use start-thriftserver.sh script with 2304 port

MapR [SPARK-584] MaprDB connector's setHintUsingIndex method doesn't work as expected

MapR [SPARK-583] MaprDB connector's loadFromMaprDB function for Java API doesn't work as expected

SPARK-579 info about ssl_trustore is added for metrics

MapR [SPARK-552] Failed to get broadcast_11_piece0 of broadcast_11

SPARK-569 Generation of SSL ceritificates for spark UI

MapR [SPARK-575] Warning messages in spark workspace after the second attempt to login to job's UI

Update zookeeper version

Adding `joinWithMapRDBTable` function (apache#529)

The related documentation of this function is here https://github.com/anicolaspp/MapRDBConnector#joinwithmaprdbtable.

The main idea is that having a dataframe (no matter how was it constructed) we can join it with a MapR-DB table. This functions looks at the join query and load only those records from MapR-DB that will join instead of loading the full table and then join in memory. In other words, we only load what we know will be joint.

Adding DataSource Reader Support (apache#525)

* Adding DataSource Reader Support

* Update SparkSessionExt.scala

* creating a package object

* Update MapRDBSpark.scala

* fully path to avoid name collition

* refactorings

MapR [SPARK-451] Spark hadoop/core dependency updates

MapR [SPARK-566] Move absent commits from 2.4.0 branch

MapR [SPARK-561] Spark 2.4.3 porting to MapR

MapR [SPARK-561] Spark 2.4.3 porting to MapR

MapR [SPARK-558] Render application UI init page if driver is not up

MapR [SPARK-541] Avoid duplication of the first unexpired record

MapR [COLD-150][K8S] Fix metrics copy

MapR [K8S-893] Hide plain text password from logs

MapR [SPARK-540] Include 'avro' artifacts

MapR [SPARK-536] PySpark streaming package for kafka-0-10 added

K8S-853: Enable spark metrics for external tenant

MapR [SPARK-531] Remove duplicating entries from classpath in ClasspathFilter

MapR [SPARK-516] Spark jobs failure using yarn mode on kerberos fixed

MapR [SPARK-462] Spark and SparkHistoryServer allow week ciphers, which can allow man in the middle attack

[SPARK-508] MapR-DB OJAI Connector for Spark isNull condition returns incorrect result

MapR [SPARK-510] nonmapr "admin" users not able to view other user logs in SHS

SPARK-460: Spark Metrics for CollectD Configuration for collecting Spark metrics

SPARK-463 MAPR_MAVEN_REPO variable for specifying mapR repository

MapR [SPARK-492] Spark 2.4.0.0 configure.sh has error messages

MapR [SPARK-515][K8S] Remove configure.sh call for k8s

MapR [SPARK-515] Move configuring spark-env.sh back to the private-pkg

MapR [SPARK-515] Move configuring spark-env.sh back to the private-pkg

MapR [SPARK-514] Recovery from checkpoint is broken

MapR [SPARK-445] Messages loss fixed by reverting [MAPR-32290] changes from kafka09 package (apache#460)

* MapR [SPARK-445] Revert "[MAPR-32290] Spark processing offsets when messages are already TTL in the first batch (apache#376)"

This reverts commit e8d59b9.

* MapR [SPARK-445] Revert "[MAPR-32290] Spark processing offsets when messages are already ttl in first batch (apache#368)"

This reverts commit b282a8b.

MapR [SPARK-445] Messages loss fixed by reverting [MAPR-32290] changes from kafka10 package

MapR [SPARK-469] Fix NPE in generated classes by reverting "[SPARK-23466][SQL] Remove redundant null checks in generated Java code by GenerateUnsafeProjection" (apache#455)

This reverts commit c5583fd.

MapR [SPARK-482] Spark streaming app fails to start by UnknownTopicOrPartitionException with checkpoint

MapR [SPARK-496] Spark HS UI doesn't work

MapR [SPARK-416] CVE-2018-1320 vulnerability in Apache Thrift

MapR [SPARK-486][K8S] Fix sasl encryption error on Kubernetes

MapR [SPARK-481] Cannot run spark configure.sh on Client node

MapR [K8S-637][K8S] Add configure.sh configuration in spark-defaults.conf for job runtime

MapR [SPARK-465] Error messages after update of spark 2.4

MapR [SPARK-465] Error messages after update of spark 2.4

MapR [SPARK-464] Can't submit spark 2.4 jobs from mapr-client

[SPARK-466] SparkR errors fixed

MapR [SPARK-456] Spark shell can't be started

SPARK-417 impersonation fixes for spark executor. Impersonation is mo… (apache#433)

* SPARK-417 impersonation fixes for spark executor. Impersonation is moved from HadoopRDD.compute() method to org.apache.spark.executor.Executor.run() method

* SPARK-363 Hive version changed to '1.2.0-mapr-spark-MEP-6.0.0'

[SPARK-449] Kafka offset commit issue fixed

MapR [SPARK-287] Move logic of creating /apps/spark folder from installer's scripts to the configure.sh

MapR [SPARK-221] Investigate possibility to move creating of the spark-env.sh from private-pkg to configure.sh

MapR [SPARK-430] PID files should be under /opt/mapr/pid

MapR [SPARK-446] Spark configure.sh doesn't start/stop Spark services

MapR [SPARK-434] Move absent commits from 2.3.2 branch (apache#425)

* MapR [SPARK-352] Spark shell fails with "NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream" if java is not available in PATH

* MapR [SPARK-350] Deprecate Spark Kafka-09 package

* MapR [SPARK-326] Investigate possibility of writing Java example for the MapRDB OJAI connector

* [SPARK-356] Merge mapr changes from kafka-09 package into the kafka-10

* SPARK-319 Fix for sparkR version check

* MapR [SPARK-349] Update OJAI client to v3 for Spark MapR-DB JSON connector

* MapR [SPARK-367] Move absent commits from 2.3.1 branch

* MapR [SPARK-137] Analyze the warning during compilation of OJAI connector

* MapR [SPARK-369] Spark 2.3.2 fails with error related to zookeeper

* [MAPR-26258] hbasecontext.HBaseDistributedScanExample fails

* [SPARK-24355] Spark external shuffle server improvement to better handle block fetch requests

* MapR [SPARK-374] Spark Hive example fails when we submit job from another(simple) cluster user

* MapR [SPARK-434] Move absent commits from 2.3.2 branch

* MapR [SPARK-434] Move absent commits from 2.3.2 branch

* MapR [SPARK-373] Unexpected behavior during job running in standalone cluster mode

* MapR [SPARK-419] Update hive-maprdb-json-handler jar for spark 2.3.2.0 and spark 2.2.1

* MapR [SPARK-396] Interface change of sendToKafka

* MapR [SPARK-357] consumer groups are prepeneded with a "service_" prefix

* MapR [SPARK-429] Changes in maprdb connector are the cause of broken backward compatibility

* MapR [SPARK-427] Update kafka in Spark-2.4.0 to the 1.1.1-mapr

* MapR [SPARK-434] Move absent commits from 2.3.2 branch

* Move absent commits from 2.3.2 branch

* MapR [SPARK-434] Move absent commits from 2.3.2 branch

* Move absent commits from 2.3.2 branch

* Move absent commits from 2.3.2 branch

MapR [SPARK-427] Update kafka in Spark-2.4.0 to the 1.1.1-mapr

MapR [SPARK-379] Spark 2.4 4-gidit version

MapR [PIC-48][K8S] Port k8s changes to 2.4.0

[PIC-48] Create user for k8s driver and executor if required

[PIC-48] Create user for k8s driver and executor if required

Revert "Remove spark.ui.filters property"

This reverts commit d8941ba36c3451cdce15d18d6c1a52991de3b971.

[SPARK-351] Copy kubernetes start scripts anyway

PIC-34: Rename default configmap name to be consistent with mapr-kubernetes

[SPARK-23668][K8S] Add config option for passing through k8s Pod.spec.imagePullSecrets (apache#355)

Pass through the `imagePullSecrets` option to the k8s pod in order to allow user to access private image registries.

See https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

Unit tests + manual testing.

Manual testing procedure:
1. Have private image registry.
2. Spark-submit application with no `spark.kubernetes.imagePullSecret` set. Do `kubectl describe pod ...`. See the error message:
```
Error syncing pod, skipping: failed to "StartContainer" for "spark-kubernetes-driver" with ErrImagePull: "rpc error: code = 2 desc = Error: Status 400 trying to pull repository ...: \"{\\n  \\\"errors\\\" : [ {\\n    \\\"status\\\" : 400,\\n    \\\"message\\\" : \\\"Unsupported docker v1 repository request for '...'\\\"\\n  } ]\\n}\""
```
3. Create secret `kubectl create secret docker-registry ...`
4. Spark-submit with `spark.kubernetes.imagePullSecret` set to the new secret. See that deployment was successful.

Author: Andrew Korzhuev <[email protected]>
Author: Andrew Korzhuev <[email protected]>

Closes apache#20811 from andrusha/spark-23668-image-pull-secrets.

[SPARK-321] Change default value of spark.mapr.ssl.secret.prefix property

[PIC-32] Spark on k8s with MapR secure cluster

Update entrypoint.sh with correct spark version (apache#340)

This PR has minor fix to correct the spark version string

[SPARK-274] Create home directory for user who submitted job

[MAPR-SPARK-230] Implement security for Spark on Kubernetes

Run Spark job with specify the username for driver and executor

Read cluster configs from configMap

Run configure.sh script from entrypoint.sh

Remove spark.kubernetes.driver.pod.commands property

Add Spark properties for executor and driver environment variables

MapR [SPARK-296] Structured Streaming memory leak

Revert "[MAPR-SPARK-210] Rename sprk-defaults.conf to spark-defaults.conf.tem…" (apache#252)

* Revert "[MAPR-SPARK-176] Fix Spark Project Catalyst unit tests (apache#251)"

This reverts commit 5de05075cd14abf8ac65046a57a5d76617818fbe.

* Revert "[MAPR-SPARK-210] Rename sprk-defaults.conf to spark-defaults.conf.template (apache#249)"

This reverts commit 1baa677d727e89db7c605ffbae9a9eba00337ad0.

[MAPR-SPARK-210] Rename sprk-defaults.conf to spark-defaults.conf.template

MapR [SPARK-379] Port Spark to 2.4.0

MapR [SPARK-341] Spark 2.3.2 porting

[MAPR-32290] Spark processing offsets when messages have already expired (TTL) in the first batch

* Bug 32263 - Seek called on unsubscribed partitions

[MSPARK-331] Remove snapshot versions of mapr dependencies from Spark-2.3.1

[MAPR-32290] Spark processing offsets when messages have already expired (TTL) in the first batch

MapR [SPARK-325] Add examples for work with the MapRDB JSON connector into the Spark project

[ATS-449] Unit test for EBF 32013 created.

MAPR-SPARK-311: Spark beeline uses default ssl truststore instead of mapr ssl truststore

Bug 32355 - Executor tab empty on Spark UI

[SPARK-318] Submitting Spark jobs from Oozie fails due to ClassNotFoundException

Bug 32014 - Spark Consumer fails with java.lang.AssertionError

Revert "[SPARK-306] Kafka clients 1.0.1 present in jars directory for Spark 2.3.1" (apache#341)

* Revert "[SPARK-306] Kafka clients 1.0.1 present in jars directory for Spark 2.3.1 (apache#335)"

This reverts commit 832411e.

Bug 32014 - Spark Consumer fails with java.lang.AssertionError (apache#326) (apache#336)

* MapR [32014] Spark Consumer fails with java.lang.AssertionError

[SPARK-306] Kafka clients 1.0.1 present in jars directory for Spark 2.3.1

DEVOPS-2768 temporarily removed curl for file downloading

[SPARK-302] Local privilege escalation

MapR [SPARK-297] Added unit test for empty value conversion

MapR [SPARK-297] Empty values are loaded as non-null

MapR [SPARK-296] Structured Streaming memory leak

2.3.1 spark 289 (apache#318)

* MapR [SPARK-289] Fix unit test for Spark-2.3.1

[SPARK-130] MapRDB connector - NPE while saving Pair RDD with 'null' values

MapR [SPARK-283] Unit tests fail during initialization of SSL properties.

[SPARK-212] SparkHiveExample fails when we run it twice

MapR [SPARK-282] Remove maprfs and hadoop jars from mapr spark package

MapR [SPARK-278] Spark submit fails for jobs with python

MapR [SPARK-279] Can't connect to spark thrift server with new spark and hive packages

MapR [SPARK-276] Update zookeeper dependency to v.3.4.11 for spark 2.3.1

MapR [SPARK-272] Use only client passwords from ssl-client.xml

MapR [SPARK-266] Spark jobs can't finish correctly when there is an error during job execution

MapR [SPARK-263] Add possibility to use keyPassword which is different from keyStorePassword

[MSPARK-31632] RM UI showing broken page for Spark jobs

MapR [SPARK-261] Use mapr-security-web for getting passwords.

MapR [SPARK-259] Spark application doesn't finish correctly

MapR [SPARK-268] Update Spark version for Warden

change project version to 2.3.1-mapr-SNAPSHOT

MapR [SPARK-256] Spark doesn't work in yarn mode

MapR [SPARK-255] Installer fresh install 610/600 secure fails to start "mapr-spark-thriftserver", "mapr-spark-historyserver"

MapR [SPARK-248] MapRDBTableScanRDD fails to convert to Scala Dataframe when using where clause

MapR [SPARK-225] Hadoop credentials provider usage for hiding passwords at spark-defaults

MapR [SPARK-214] Hive-2.1 properties can't be read from a hive-site.xml as Spark uses Hive-1.2

MapR [SPARK-216] Spark thriftserver fails when working with hive-maprdb json table

SPARK-244 (apache#278)

Provide ability to use MapR-Negotiation authentication for Spark HistoryServer

MapR [SPARK-226] Spark - pySpark Security Vulnerability

MapR [SPARK-220] SparkR fails with UDF functions bug fixed

MapR [SPARK-227] KafkaUtils.createDirectStream fails with kafka-09

MapR [SPARK-183] Spark Integration for Kafka 0.10 unit tests disabled

MapR [SPARK-182] Spark Project External Kafka Producer v09 unit tests fixed

MapR [SPARK-179] Spark Integration for Kafka 0.9 unit tests fixed

MapR [SPARK-181] Kafka 0.10 Structured Streaming unit tests fixed

[MSPARK-31305] Spark History server NOT loading applications submitted by users other than 'mapr'

MapR [SPARK-175] Fix Spark Project Streaming unit tests

[MAPR-SPARK-176] Fix Spark Project Catalyst unit tests

[MAPR-SPARK-178] Fix Spark Project Hive unit tests

MapR [SPARK-174] Spark Core unit tests fixed

Changed version for spark-kafka connector.

MapR [SPARK-202] Update MapR Spark to 2.3.0

Fixed compile time errors in tests

Change project version

[SPARK-198] Update hadoop dependency version to 2.7.0-mapr-1803 for Spark 2.2.1

MapR [SPARK-188] Couldn't connect to thrift server via spark beeline on kerberos cluster

MapR [SPARK-143] Spark History Server does not require login for secured-by-default clusters

MapR [SPARK-186] Update OJAI versions to the latest for Spark-2.2.1 OJAI Connector

MapR [SPARK-191] Incorrect work of MapR-DB Sink 'complete' and 'update' modes fixed

MapR [SPARK-170] StackOverflowException in equals method in DBMapValue

2.2.1 build fixed (apache#231)

* MapR [SPARK-164] Update Kafka version to 1.0.1-mapr in Spark Kafka Producer module

MapR [SPARK-161] Include Kafka Structured streaming jar to Spark package.

MapR [SPARK-155] Change Spark Master port from 8080

MapR [SPARK-153] Exception in spark job with configured labels on yarn-client mode

MapR [SPARK-152] Incorrect date string parsing fixed

MapR [SPARK-21] Structured Streaming MapR-DB Sink created

MapR [SPARK-135] Spark 2.2 with MapR Streams (Kafka 1.0) (apache#218)

* MapR [SPARK-135] Spark 2.2 with MapR Streams (Kafka 1.0)
Added MapR-Streams-specific EOF handling.

MapR [SPARK-143] Spark History Server does not require login for secured-by-default clusters

Disable build failure if the scalastyle check fails.

MapR [SPARK-16] Change Spark version in Warden files and configure.sh

MapR [SPARK-144] Add insertToMapRDB method for rdd for Java API

[MAPR-30536] Spark SQL queries on Map column fail after upgrade

MapR [SPARK-139] Remove "update" related APIs from connector

MapR [SPARK-140] Change the option name "tableName" to "tablePath" in the Spark/MapR-DB connectors.

MapR [SPARK-121] Spark OJAI JAVA: update functionality removed

MapR [SPARK-118] Spark OJAI Python: missed DataFrame import while moving imports in order to fix MapR [ZEP-101] interpreter issue

MapR [SPARK-118] Spark OJAI Python: move MapR DB Connector class importing in order to fix MapR [ZEP-101] interpreter issue

MapR [SPARK-117] Spark OJAI Python: Save functionality implementation

MapR [SPARK-131] Exception when trying to save JSON table with Binary _id field

Spark OJAI JAVA: load to RDD, save from RDD implementation (apache#195)

* MapR [SPARK-124] Loading to JavaRDD implemented
* MapR [SPARK-124] MapRDBJavaSparkContext constructor changed
* MapR [SPARK-124] implemented RDD[Row] saving

MapR [SPARK-118] Spark OJAI Python: Read implementation

MapR [SPARK-128] MapRDB connector - wrong handling of null fields when nullable is false

* MapR [SPARK-121] Spark OJAI JAVA: Read to Dataset functionality implementation
* Minor refactoring

MapR [SPARK-125] Default value of idFieldPath parameter is not handled

MapR [SPARK-113] Hit java.lang.UnsupportedOperationException: empty.reduceLeft during loadFromMapRDB

Spark MapR-DB connector was refactored according to Scala style
Removed code duplication

[MSPARK-107] idField information is lost in MapRDBDataFrameWriterFunctions.saveToMapRDB

configure.sh takes options to change ports

Kafka client excluded from package because correct version is located in "mapr classpath"

Changed Kafka version in Kafka producer module.

Branch spark 69 (apache#170)

* Fixing the wrong type casting of TimeStamp to OTimeStamp when read from spark dataFrame.

* SPARK-69: Problem with license when we try to read from json and write to maprdb

remove creation of the /usr/local/spark link from configure.sh. This link will be created by private-pkg

remove include-maprdb from default profiles

added profiles in maprdb pom file instead of two pom files

Fixed maprdb connector dependencies.

Fixing the wrong type casting of TimeStamp to OTimeStamp when read from spark dataFrame.

changed port for spark-thriftserver as it conflicts with hive server

changed port for spark-thriftserver as it conflicts with hive server

remove .not_configured_yet file after success

OJAI connector: fixed required Java version

[MSPARK-45] Move Spark-OJAI connector code to Spark github repo (apache#132)

* SPARK-45 Move Spark-OJAI connector code to Spark github repo

* Fixing pom versions for maprdb spark connector.

* Changes made to the connector code to be compatible with 5.2.* and 6.0 clients.

Spark 2.1.0 mapr 29106 (apache#150)

* [SPARK-20922][CORE] Add whitelist of classes that can be deserialized by the launcher.

Blindly deserializing classes using Java serialization opens the code up to
issues in other libraries, since just deserializing data from a stream may
end up executing code (think readObject()).

Since the launcher protocol is pretty self-contained, there's just a handful
of classes it legitimately needs to deserialize, and they're in just two
packages, so add a filter that throws errors if classes from any other
package show up in the stream.

This also maintains backwards compatibility (the updated launcher code can
still communicate with the backend code in older Spark releases).

Tested with new and existing unit tests.
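
A rough illustration of that kind of package filter, written here in Scala with hypothetical names; it is a sketch of the approach, not the code merged by this change:

```scala
import java.io.{InputStream, InvalidClassException, ObjectInputStream, ObjectStreamClass}

// Deserialization stream that only resolves classes from an explicit package whitelist.
class FilteredObjectInputStream(in: InputStream, allowedPackages: Seq[String])
  extends ObjectInputStream(in) {

  override def resolveClass(desc: ObjectStreamClass): Class[_] = {
    val name = desc.getName
    // Reject anything outside the whitelist before the class is loaded.
    // (A complete filter would also need to handle array and primitive descriptors.)
    if (!allowedPackages.exists(p => name.startsWith(p))) {
      throw new InvalidClassException(name, "disallowed by launcher deserialization filter")
    }
    super.resolveClass(desc)
  }
}

// Hypothetical usage: the launcher protocol only needs its own messages plus basic JDK types.
// val in = new FilteredObjectInputStream(socket.getInputStream,
//   Seq("org.apache.spark.launcher.", "java.lang."))
```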

Author: Marcelo Vanzin <[email protected]>

Closes apache#18166 from vanzin/SPARK-20922.

(cherry picked from commit 8efc6e9)
Signed-off-by: Marcelo Vanzin <[email protected]>

(cherry picked from commit 772a9b9)

* [SPARK-20922][CORE][HOTFIX] Don't use Java 8 lambdas in older branches.

Author: Marcelo Vanzin <[email protected]>

Closes apache#18178 from vanzin/SPARK-20922-hotfix.

Added security by default for historyserver

use waitForConsumerAssignment() instead of consumer.poll(0) for spark-29052

change MAPR_HADOOP_CLASSPATH in configure.sh for creating it by mapr-classpath.sh

change MAPR_HADOOP_CLASSPATH in configure.sh for creating it by mapr-classpath.sh

changes for mapr-classpath.sh

changes for mapr-classpath.sh

configure.sh changes

[SPARK-39] Classpath filter was added

Fixed impersonation when data read from MapR-DB via Spark-Hive.

added configure.sh and warden.spark-thriftserver.conf

hive-hbase-handler added to Spark jars

Fixed "Single message comes late"

28339 bug fixed

Spark streaming skipped message with zero offset from Kafka 0.9

[MSPARK-9] Initial fix for Spark unit tests

Bump dependencies after ECO-1703 release

[SPARK-33] Streaming example fixed

[MAPR-26060] Fixed case when mapr-streams make gaps in offsets

ported features from kafka 10 to kafka 9

[MAPR-26289][SPARK-2.1] Streaming general improvements (apache#93)

* Added include-kafka-09 profile to Assembly
* Set default poll timeout to 120s

Set default HBase version to 1.1.8

Changes from Kafka10 package were ported to Kafka09 package.

[MAPR-26053] Include MapR Classes to the default value of spark.sql.hive.metastore.sharedPrefixes
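
In spark-defaults.conf that typically looks like the line below; the MapR prefix here is only an illustration of the kind of class that has to be shared, not the exact list from this change:

```
spark.sql.hive.metastore.sharedPrefixes  com.mysql.jdbc,org.postgresql,com.microsoft.sqlserver,oracle.jdbc,com.mapr.fs
```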

[MAPR-25807] Spark-Warehouse path computes incorrectly

Add MapR-SASL support for Thrift Server

Adding scala library.

[MAPR-25713] Spark might try to load MapR Class Loader multiple times and fail

[MAPR-25311] Bump Spark dependencies after ECO-1611 release

[MINOR] Fix spark-jars.sh script

[MAPR-24603] Could not launch beeline shell after starting spark thrift server

fixed syntax error in V09DirectKafkaWordCount example

Spark 2.0.1 MAPR-streams Python API

[MAPR-24415] SPARK_JAVA_OPTS is deprecated

Kafka streaming producer added.

Minor fix for previous commit

Added script for MAPR-24374

Some minor changes to spark-defaults.conf

Changed default HBase version to 1.1.1 in compatibility.version

Streaming example was refactored

[MAPR-24470] HiveFromSpark test fails in yarn-cluster mode

Added MapR Repo

[MAPR-22940] Failed to connect spark beeline (after spark thrift server is started) on Kerberos cluster

[MAPR-18865] Unable to submit spark apps from Windows client

Skip maven clean task on the parent module

New: Issue with running Hive commands in Spark

This is fixed in SPARK-7819
Isolated Hive Client Loader appears to cause a "Native Library libMapRClient.4.0.2-mapr.so already loaded in another classloader" error

Spark warden.services.conf should have dependency on cldb

Remove DFS shuffle settings.

These settings are not used right now.

Copy every file in the conf directory into the distribution package.

Create spark-defaults.conf for MapR

Settings to enable DFS shuffle on MapR.

Support hbase classpath computation in util script.

Adding external conf and scripts.

Enable SPARK_HIVE mode while building.

This is needed to bundle datanucleus jars needed for hive table creation.

Build Spark on MapR.
- make-distribution.sh takes an environment variable, MVN_PROFILE_ARG, to enable profiles (see the sketch after this list)
- Added warden conf files under ext-conf.
- Updated pom.xml to use the right set of jars and version.
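
A hedged sketch of such a build invocation; the profile values are illustrative and not the exact MapR profile set:

```
# Extra Maven profiles are passed into the distribution build via MVN_PROFILE_ARG
MVN_PROFILE_ARG="-Pyarn -Phive" ./make-distribution.sh --tgz
```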

Spark Master failed to start in HA mode

Updated Apache Curator version

Added spark streaming integration with kafka 0.9 and mapr-streams

Added MapR Repo