This is a common library for Exasol Apache Spark-based connectors. It:

- Provides a helper class to describe JDBC column metadata
- Converts Exasol query column descriptions into a Spark schema
- Generates Exasol import and export queries for cloud storage systems (a rough sketch follows the schema example below)
Create column descriptions from a JDBC query result:
```java
import static java.sql.ResultSetMetaData.columnNoNulls;

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.types.StructType;

// ColumnDescription and SchemaConverter are provided by this library.
// `jdbcQueryResultSet` is assumed to be the result set of an already
// executed JDBC query.
final ResultSetMetaData metadata = jdbcQueryResultSet.getMetaData();
final int numberOfColumns = metadata.getColumnCount();
final List<ColumnDescription> columns = new ArrayList<>(numberOfColumns);
for (int i = 1; i <= numberOfColumns; i++) {
    columns.add(ColumnDescription.builder() //
            .name(metadata.getColumnLabel(i)) //
            .type(metadata.getColumnType(i)) //
            .precision(metadata.getPrecision(i)) //
            .scale(metadata.getScale(i)) //
            .isSigned(metadata.isSigned(i)) //
            .isNullable(metadata.isNullable(i) != columnNoNulls) //
            .build());
}
```
Generate Spark schema from column descriptions:
```java
final StructType schema = new SchemaConverter().convert(columns);
```
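The resulting `StructType` can then be handed to Spark instead of letting it infer a schema. A minimal sketch, assuming an existing `SparkSession` and a placeholder CSV path on cloud storage:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Apply the converted schema when reading data; the application name and
// the source path are placeholders for illustration only.
final SparkSession spark = SparkSession.builder().appName("example").getOrCreate();
final Dataset<Row> data = spark.read()
        .schema(schema) // StructType produced by SchemaConverter above
        .csv("s3a://my-bucket/exported-data/");
```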
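For the query generation feature, the library builds Exasol `IMPORT` and `EXPORT` statements that move data between Exasol and cloud storage. As a rough, hand-written illustration only (not the library's generator API), an import statement of this kind could look like the following; the schema, table, bucket URL, and credentials are placeholders:

```java
// Hand-written sketch of the kind of Exasol IMPORT statement a query
// generator targets; all names and credentials below are placeholders,
// not values produced by this library.
final String importQuery = "IMPORT INTO \"RETAIL\".\"SALES\"\n"
        + "FROM CSV AT 'https://my-bucket.s3.amazonaws.com'\n"
        + "USER '<AWS_ACCESS_KEY_ID>' IDENTIFIED BY '<AWS_SECRET_ACCESS_KEY>'\n"
        + "FILE 'sales/part-0000.csv' FILE 'sales/part-0001.csv';";
```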
Users are developers who include this library in their Spark connectors. Developers, in this context, are those who build or modify this library itself.