Add index.md for default doc page, fix table formatting for configs #247

Merged (1 commit) on Jun 22, 2020
5 changes: 5 additions & 0 deletions docs/configs.md
@@ -26,6 +26,7 @@ scala> spark.conf.set("spark.rapids.sql.incompatibleOps.enabled", true)


## General Configuration

Name | Description | Default Value
-----|-------------|--------------
<a name="memory.gpu.allocFraction"></a>spark.rapids.memory.gpu.allocFraction|The fraction of total GPU memory that should be initially allocated for pooled memory. Extra memory will be allocated as needed, but it may result in more fragmentation.|0.9
@@ -82,6 +83,7 @@ will enable all the settings in the table below which are not enabled by default
incompatibilities.
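For context, these incompatible operations are gated behind the single flag shown in the `spark.conf.set` line earlier in this diff. Enabling it from a Spark shell session might look like the following sketch (it assumes a live `SparkSession` with the RAPIDS plugin on the classpath):

```scala
scala> // Opt in to GPU operations whose results may differ from Spark's
scala> // CPU implementations (turns on every setting listed below that is
scala> // disabled by default due to incompatibilities)
scala> spark.conf.set("spark.rapids.sql.incompatibleOps.enabled", true)
```

Individual operations can still be toggled with their own `spark.rapids.sql.expression.*` keys from the tables below.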

### Expressions

Name | Description | Default Value | Incompatibilities
-----|-------------|---------------|------------------
<a name="sql.expression.Abs"></a>spark.rapids.sql.expression.Abs|absolute value|true|None|
@@ -210,6 +212,7 @@ Name | Description | Default Value | Incompatibilities
<a name="sql.expression.NormalizeNaNAndZero"></a>spark.rapids.sql.expression.NormalizeNaNAndZero|normalize nan and zero|true|None|

### Execution

Name | Description | Default Value | Incompatibilities
-----|-------------|---------------|------------------
<a name="sql.exec.CoalesceExec"></a>spark.rapids.sql.exec.CoalesceExec|The backend for the dataframe coalesce method|true|None|
@@ -235,13 +238,15 @@ Name | Description | Default Value | Incompatibilities
<a name="sql.exec.WindowExec"></a>spark.rapids.sql.exec.WindowExec|Window-operator backend|true|None|

### Scans

Name | Description | Default Value | Incompatibilities
-----|-------------|---------------|------------------
<a name="sql.input.CSVScan"></a>spark.rapids.sql.input.CSVScan|CSV parsing|true|None|
<a name="sql.input.OrcScan"></a>spark.rapids.sql.input.OrcScan|ORC parsing|true|None|
<a name="sql.input.ParquetScan"></a>spark.rapids.sql.input.ParquetScan|Parquet parsing|true|None|

### Partitioning

Name | Description | Default Value | Incompatibilities
-----|-------------|---------------|------------------
<a name="sql.partitioning.HashPartitioning"></a>spark.rapids.sql.partitioning.HashPartitioning|Hash based partitioning|true|None|
9 changes: 5 additions & 4 deletions docs/about.md → docs/index.md
@@ -1,10 +1,11 @@
---
-layout: page
-title: About
-permalink: /about/
+layout: default
+title: Home
+nav_order: 1
+permalink: /
+description: This site serves as a collection of documentation about the RAPIDS accelerator for Apache Spark
---

As data scientists shift from using traditional analytics to leveraging AI applications that better model complex market demands, traditional CPU-based processing can no longer keep up without compromising either speed or cost. The growing adoption of AI in analytics has created the need for a new framework to process data quickly and cost efficiently with GPUs.

The RAPIDS Accelerator for Apache Spark combines the power of the <a href="github.com/rapidsai/cudf/">RAPIDS cuDF</a> library and the scale of the Spark distributed computing framework. The RAPIDS Accelerator library also has a built-in accelerated shuffle based on <a href="https://github.com/openucx/ucx/">UCX</a> that can be configured to leverage GPU-to-GPU communication and RDMA capabilities.

@@ -628,7 +628,7 @@ object RapidsConf {
|""".stripMargin)
// scalastyle:on line.size.limit

-println("\n## General Configuration")
+println("\n## General Configuration\n")
println("Name | Description | Default Value")
println("-----|-------------|--------------")
} else {
@@ -652,19 +652,19 @@ object RapidsConf {
|incompatibilities.""".stripMargin)
// scalastyle:on line.size.limit

-printToggleHeader("Expressions")
+printToggleHeader("Expressions\n")
}
GpuOverrides.expressions.values.toSeq.sortBy(_.tag.toString).foreach(_.confHelp(asTable))
if (asTable) {
-printToggleHeader("Execution")
+printToggleHeader("Execution\n")
}
GpuOverrides.execs.values.toSeq.sortBy(_.tag.toString).foreach(_.confHelp(asTable))
if (asTable) {
-printToggleHeader("Scans")
+printToggleHeader("Scans\n")
}
GpuOverrides.scans.values.toSeq.sortBy(_.tag.toString).foreach(_.confHelp(asTable))
if (asTable) {
-printToggleHeader("Partitioning")
+printToggleHeader("Partitioning\n")
}
GpuOverrides.parts.values.toSeq.sortBy(_.tag.toString).foreach(_.confHelp(asTable))
if (asTable) {
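The repeated one-line change in this file appends `"\n"` to each section header so that a blank line separates the heading from the table that follows; many markdown renderers require that blank line before they will parse a table. A minimal, self-contained sketch of the pattern (the `TableDocSketch` object and its `section` helper are hypothetical stand-ins, not the actual RapidsConf code):

```scala
object TableDocSketch {
  // Emit a markdown section heading followed by a config table. The extra
  // "\n" after the heading produces the blank line that markdown needs
  // between a heading and the table's header row -- the fix in this PR.
  def section(name: String, rows: Seq[(String, String, String)]): String = {
    val sb = new StringBuilder
    sb.append(s"### $name\n\n") // heading, then a blank line
    sb.append("Name | Description | Default Value\n")
    sb.append("-----|-------------|--------------\n")
    rows.foreach { case (n, d, v) => sb.append(s"$n|$d|$v\n") }
    sb.toString
  }

  def main(args: Array[String]): Unit = {
    print(section("Scans",
      Seq(("spark.rapids.sql.input.CSVScan", "CSV parsing", "true"))))
  }
}
```

Without the blank line, the header and separator rows render as ordinary paragraph text instead of a table, which is the formatting problem this PR's title refers to.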