[SQL] SPARK-1333 First draft of java API #248
Conversation
Merged build triggered.
Merged build started.
Merged build finished. All automated tests passed.
```scala
def length: Int = row.length

def get(i: Int): Any =
```
These guys should all have Scaladocs
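A sketch of what the requested Scaladocs might look like. The `Row` wrapper below is a simplified stand-in (backed by a plain `Seq[Any]` rather than the underlying Catalyst row), not the PR's actual class:

```scala
// Simplified stand-in for the Java-API Row wrapper; `row` here is a
// plain Seq[Any] instead of the wrapped Scala row.
class Row(private val row: Seq[Any]) {

  /** Returns the number of fields in this row. */
  def length: Int = row.length

  /** Returns the value at position `i`, boxed as `Any`. */
  def get(i: Int): Any = row(i)
}
```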
Merged build triggered. Build is starting -or- tests failed to complete.
@mateiz Here's a more complete version. Note that this includes the sql/hql distinction we discussed, but only for Java. I'll do the scala one in a separate PR. The docs here are also updated: http://people.apache.org/~pwendell/catalyst-docs/sql-programming-guide.html
Merged build finished. Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13618/
Merged build finished. Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13621/
* Change JavaRow => Row
* Add support for querying RDDs of JavaBeans
* Docs
* Tests
* Hive support
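The JavaBean bullet above relies on reflecting over getter/setter pairs to infer a schema. A minimal sketch of such a bean (a hypothetical `Person` class, written here in Scala via `@BeanProperty`, not taken from the PR):

```scala
import scala.beans.BeanProperty

// Hypothetical bean: a Java-style SQL API can infer a schema
// (name: String, age: Int) from the generated getName/setName
// and getAge/setAge pairs.
class Person extends Serializable {
  @BeanProperty var name: String = _
  @BeanProperty var age: Int = 0
}
```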
Merged build finished. Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13635/
Merged build finished. All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13705/
Jenkins, test this please.
Merged build finished. All automated tests passed.
Thanks Michael, I've merged this in.
WIP: Some work remains...
* [x] Hive support
* [x] Tests
* [x] Update docs

Feedback welcome!

Author: Michael Armbrust <[email protected]>

Closes apache#248 from marmbrus/javaSchemaRDD and squashes the following commits:

b393913 [Michael Armbrust] @srowen 's java style suggestions.
f531eb1 [Michael Armbrust] Address matei's comments.
33a1b1a [Michael Armbrust] Ignore JavaHiveSuite.
822f626 [Michael Armbrust] improve docs.
ab91750 [Michael Armbrust] Improve Java SQL API:
* Change JavaRow => Row
* Add support for querying RDDs of JavaBeans
* Docs
* Tests
* Hive support
0b859c8 [Michael Armbrust] First draft of java API.