An extension to the Apache Spark framework that allows easy and fast processing of very large geospatial datasets.
Mosaic was created to simplify the implementation of scalable geospatial data pipelines by binding together common open-source geospatial libraries via Apache Spark, with a set of examples and best practices for common geospatial use cases.
Mosaic provides geospatial tools for:
- Data ingestion (WKT, WKB, GeoJSON)
- Data processing
  - Geometry and geography `ST_` operations (with ESRI or JTS)
  - Indexing (with H3 or BNG)
  - Chipping of polygons and lines over an indexing grid co-developed with Ordnance Survey and Microsoft
- Data visualization (Kepler)
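Mosaic's readers handle the WKT, WKB, and GeoJSON formats natively on Spark DataFrames. Purely for illustration, here is a minimal standard-library sketch of what the WKB binary encoding of a point contains (`parse_wkb_point` is a hypothetical helper written for this example, not part of the Mosaic API):

```python
import struct

def parse_wkb_point(wkb_hex: str) -> tuple:
    """Decode a WKB (Well-Known Binary) POINT into (x, y) coordinates."""
    raw = bytes.fromhex(wkb_hex)
    # Byte 0: byte order flag (1 = little-endian, 0 = big-endian)
    endian = "<" if raw[0] == 1 else ">"
    # Bytes 1-4: geometry type code (1 = Point)
    (geom_type,) = struct.unpack(endian + "I", raw[1:5])
    if geom_type != 1:
        raise ValueError(f"expected a Point (type 1), got type {geom_type}")
    # Bytes 5-20: two IEEE-754 doubles holding x and y
    return struct.unpack(endian + "dd", raw[5:21])

# WKB hex for POINT (1 2), little-endian
x, y = parse_wkb_point("0101000000000000000000F03F0000000000000040")
print(x, y)  # → 1.0 2.0
```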
The supported languages are Scala, Python, R, and SQL.
The Mosaic library is written in Scala to guarantee maximum performance with Spark and, when possible, it uses code generation to give an extra performance boost.
The other supported languages (Python, R and SQL) are thin wrappers around the Scala code.
Image1: Mosaic logical design.
Create a Databricks cluster running Databricks Runtime 10.0 (or later).
We recommend using Databricks Runtime 11.2 or higher with Photon enabled; this will leverage the Databricks H3 expressions when using the H3 grid system.
Check out the documentation pages.
Install databricks-mosaic as a cluster library, or run from a Databricks notebook:

```shell
%pip install databricks-mosaic
```

Then enable it with:

```python
from mosaic import enable_mosaic
enable_mosaic(spark, dbutils)
```
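Once enabled, the Mosaic functions can be used on ordinary Spark DataFrames. A minimal sketch, assuming a running Databricks cluster with Mosaic installed (the DataFrame and column names are illustrative, not from the Mosaic docs):

```python
from pyspark.sql import functions as F
from mosaic import enable_mosaic, st_geomfromwkt, st_area

enable_mosaic(spark, dbutils)

# A toy DataFrame with one unit-square polygon in WKT form
df = spark.createDataFrame(
    [("POLYGON ((0 0, 0 1, 1 1, 1 0, 0 0))",)], ["wkt"]
)

# Parse the WKT into Mosaic's internal geometry and compute its area
df = (
    df.withColumn("geom", st_geomfromwkt("wkt"))
      .withColumn("area", st_area(F.col("geom")))
)
df.show()
```

This requires a Spark session (`spark`) and `dbutils`, which are provided automatically in a Databricks notebook.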
Get the JAR from the releases page and install it as a cluster library.

Then enable it with:

```scala
import com.databricks.labs.mosaic.functions.MosaicContext
import com.databricks.labs.mosaic.H3
import com.databricks.labs.mosaic.ESRI

val mosaicContext = MosaicContext.build(H3, ESRI)
import mosaicContext.functions._
```
Get the Scala JAR and the R bindings library from the releases page. Install the JAR as a cluster library, and copy the sparkrMosaic.tar.gz to DBFS (this example uses the /FileStore location, but you can put it anywhere on DBFS):

```R
library(SparkR)
install.packages('/FileStore/sparkrMosaic.tar.gz', repos = NULL)
```
Enable the R bindings:

```R
library(sparkrMosaic)
enableMosaic()
```
Configure the Automatic SQL Registration or follow the Scala installation process and register the Mosaic SQL functions in your SparkSession from a Scala notebook cell:
```scala
%scala
import com.databricks.labs.mosaic.functions.MosaicContext
import com.databricks.labs.mosaic.H3
import com.databricks.labs.mosaic.ESRI

val mosaicContext = MosaicContext.build(H3, ESRI)
mosaicContext.register(spark)
```
| Example | Description | Links |
| --- | --- | --- |
| Quick Start | Example of performing spatial point-in-polygon joins on the NYC Taxi dataset | python, scala, R, SQL |
| Spatial KNN | Runnable notebook-based example using the Mosaic SpatialKNN model | python |
| Open Street Maps | Ingesting and processing the Open Street Maps dataset with Delta Live Tables to extract building polygons and calculate aggregation statistics over H3 indexes | python |
| STS Transfers | Detecting ship-to-ship transfers at scale by leveraging Mosaic to process AIS data | python, blog |
You can import these examples into your Databricks workspace using these instructions.
Mosaic is intended to augment the existing system and unlock its potential by integrating Spark, Delta Lake, and third-party frameworks into the Lakehouse architecture.
Image2: Mosaic ecosystem - Lakehouse integration.
Please note that all projects in the databrickslabs GitHub space are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.
Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.