Pillar manages migrations for your Cassandra data stores.
Pillar grew from a desire to automatically manage Cassandra schema as code. Managing schema as code enables automated build and deployment, a foundational practice for an organization striving to achieve Continuous Delivery.
Pillar is to Cassandra what Rails ActiveRecord migrations or Play Evolutions are to relational databases with one key difference: Pillar is completely independent from any application development framework.
- Java SE 6 runtime environment
- Cassandra 2.0 with the native CQL protocol enabled
This method requires Simple Build Tool (sbt). Building an RPM can be done via sbt-native-packager.
% sbt assembly # builds just the jar file in the target directory
% sbt rpm:packageBin # builds the RPM in the target directory
% sudo rpm -i target/pillar-<version>.noarch.rpm
The RPM installs Pillar to /opt/pillar.
Pillar is available at Maven Central under the GroupId de.kaufhof and ArtifactId pillar_2.10, pillar_2.11 or pillar_2.12. The current version of pillar is 4.1.2.
sbt:
libraryDependencies += "de.kaufhof" %% "pillar" % "4.1.0"

Gradle:
compile 'de.kaufhof:pillar_2.11:4.1.0'
Data Store : A logical grouping of environments. You will likely have one data store per application.
Environment : A context or grouping of settings for a single data store. You will likely have at least development and production environments for each data store.
Migration : A single change to a data store. Migrations have a description and a time stamp indicating the time at which they were authored. Migrations are applied in ascending order and reversed in descending order.
##### Here's the short version:
Given the configuration:
pillar.my_keyspace {
  prod {
    ...
  }
  development {
    ...
  }
}
- Write migrations, place them in conf/pillar/migrations/myapp.
- Add pillar settings to conf/application.conf.
% pillar initialize -e prod my_keyspace
% pillar migrate -e prod my_keyspace
Note: development is the default environment if nothing is specified
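Because development is the default environment, the -e flag can be dropped when working against development, for example:

% pillar initialize my_keyspace
% pillar migrate my_keyspace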
Alternatively, run the assembled jar directly:
java -cp "slf4j-simple.jar:pillar-assembly.jar" de.kaufhof.pillar.cli.App -d "path/to/migrations" -e "prod" initialize "my_keyspace"
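The migrate command is invoked the same way; only the command name at the end changes:

java -cp "slf4j-simple.jar:pillar-assembly.jar" de.kaufhof.pillar.cli.App -d "path/to/migrations" -e "prod" migrate "my_keyspace"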
Migration files contain metadata about the migration, a CQL statement used to apply the migration and, optionally, a CQL statement used to reverse the migration. Each file describes one migration. You probably want to name your files according to time stamp and description, 1370028263_creates_views_table.cql, for example. Pillar reads and parses all files in the migrations directory, regardless of file name.
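Following that naming convention, a migrations directory might look like this (an illustrative layout built from the example migrations shown below):

conf/pillar/migrations/myapp/
  1370023262_creates_events_table.cql
  1370028263_creates_views_table.cql
  1370028264_adds_user_agent_to_views_table.cql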
Pillar supports three kinds of migrations: reversible, irreversible, and reversible with a no-op down statement. Here is an example of each:
Reversible migrations have up and down properties.
-- description: creates views table
-- authoredAt: 1370028263
-- up:
CREATE TABLE views (
  id uuid PRIMARY KEY,
  url text,
  person_id int,
  viewed_at timestamp
)
-- down:
DROP TABLE views
Irreversible migrations have an up property but no down property.
-- description: creates events table
-- authoredAt: 1370023262
-- up:
CREATE TABLE events (
  batch_id text,
  occurred_at uuid,
  event_type text,
  payload blob,
  PRIMARY KEY (batch_id, occurred_at, event_type)
)
Reversible migrations with no-op down statements have an up property and an empty down property.
-- description: adds user_agent to views table
-- authoredAt: 1370028264
-- up:
ALTER TABLE views
ADD user_agent text
-- down:
Each migration may optionally specify multiple stages. Stages are executed in the order specified.
-- description: creates users and groups tables
-- authoredAt: 1469630066000
-- up:
-- stage: 1
CREATE TABLE groups (
  id uuid,
  name text,
  PRIMARY KEY (id)
)
-- stage: 2
CREATE TABLE users (
  id uuid,
  group_id uuid,
  username text,
  password text,
  PRIMARY KEY (id)
)
-- down:
-- stage: 1
DROP TABLE users
-- stage: 2
DROP TABLE groups
The Pillar command line interface expects to find migrations in conf/pillar/migrations unless overridden by the -d command-line option.
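For example, to run migrations kept outside the default location (the path below is only illustrative):

% pillar -d /opt/myapp/migrations -e prod migrate my_keyspace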
Pillar uses the Typesafe Config library for configuration. The Pillar command-line interface expects to find an application.conf file in ./conf or ./src/main/resources. The replicationStrategy and replicationFactor settings can be configured per environment; if they are omitted, SimpleStrategy with a replication factor of 3 is used by default. Given a data store called faker, the application.conf might look like the following:
pillar.faker {
  development {
    cassandra-seed-address: "127.0.0.1"
    cassandra-keyspace-name: "pillar_development"
    replicationStrategy: "SimpleStrategy"
    replicationFactor: 3
  }
}
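When initialize runs with these settings, the keyspace Pillar creates corresponds roughly to the following CQL (shown only to illustrate how the settings map onto the keyspace definition):

CREATE KEYSPACE pillar_development
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

To use NetworkTopologyStrategy instead, specify a replication factor per data center: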
pillar.faker {
  development {
    cassandra-seed-address: "127.0.0.1"
    cassandra-keyspace-name: "pillar_development"
    replicationStrategy: "NetworkTopologyStrategy"
    replicationFactor: [
      {dc1: 2},
      {dc2: 3}
    ]
  }
}
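Again for illustration, that configuration corresponds roughly to:

CREATE KEYSPACE pillar_development
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 2, 'dc2': 3};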
You can optionally add SSL options and authentication to each of the environments:
pillar.faker {
  development {
    cassandra-seed-address: "127.0.0.1"
    cassandra-keyspace-name: "pillar_development"
    auth {
      username: cassandra
      password: secret
    }
  }
  test {
    auth {
      username: cassandra
      password: secret
    }
    use-ssl: true
    ssl-options {
      # ssl with just a trust store for test environment
      trust-store-path: foobar.jks # maps to javax.net.ssl.trustStore
      trust-store-password: secret # maps to javax.net.ssl.trustStorePassword
      trust-store-type: JKS # maps to javax.net.ssl.trustStoreType
    }
  }
  production {
    auth {
      username: cassandra
      password: secret
    }
    use-ssl: true
    ssl-options {
      trust-store-path: foobar.jks # maps to javax.net.ssl.trustStore
      trust-store-password: secret # maps to javax.net.ssl.trustStorePassword
      trust-store-type: JKS # maps to javax.net.ssl.trustStoreType
      key-store-path: keystore.jks # maps to javax.net.ssl.keyStore
      key-store-password: secret # maps to javax.net.ssl.keyStorePassword
      key-store-type: JKS # maps to javax.net.ssl.keyStoreType
    }
  }
}
Reference the acceptance spec suite for details.
The package installs to /opt/pillar by default. The /opt/pillar/bin/pillar executable usage looks like this:
Usage: pillar [OPTIONS] command data-store
OPTIONS
-d directory
--migrations-directory directory The directory containing migrations
-e env
--environment env The target environment; defaults to development
-t time
--time-stamp time The migration time stamp
PARAMETERS
command migrate or initialize
data-store The target data store, as defined in application.conf
Initialize the faker datastore development environment
% pillar -e development initialize faker
Apply all migrations to the faker datastore development environment
% pillar -e development migrate faker
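The -t option passes a migration time stamp to the command, for example (the time stamp value is only illustrative):

% pillar -e development -t 1370028263 migrate faker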
You can also integrate Pillar directly into your application as a library. Reference the acceptance spec suite for details.
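As a rough sketch of what library integration can look like, the snippet below creates a Session with the Datastax Java driver and hands it to Pillar. The Registry and Migrator names and their parameter lists are assumptions drawn from the acceptance specs and may differ between Pillar versions; treat this as pseudocode and consult the acceptance spec suite for the exact API.

import java.io.File
import com.datastax.driver.core.Cluster
import de.kaufhof.pillar._

object MigrateOnStartup {
  def main(args: Array[String]): Unit = {
    // Connect with the Datastax Java driver; Pillar operates on this Session.
    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    val session = cluster.connect()

    // Assumed API: load all migration files from a directory into a registry
    // and construct a migrator from it (check the acceptance specs for the
    // exact factory methods and arguments in your Pillar version).
    val registry = Registry.fromDirectory(new File("conf/pillar/migrations/faker"))
    val migrator = Migrator(registry)

    // Assumed API: initialize creates the keyspace and the applied_migrations
    // table; migrate applies any migrations not yet recorded there.
    migrator.initialize(session, "pillar_development")
    migrator.migrate(session)

    cluster.close()
  }
}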
- Cross-compile for Scala 2.11 and 2.12
- Replace fpm with native-packager
- Added the option to specify the name of the applied_migrations table in which the migrations are stored. This is useful in a multi-module setup where services keep their own, non-shared migration tables in the same keyspace and should be deployable independently of each other.
- initialize-method split up into two methods (createKeyspace and createMigrationsTable).
- travis.yml file
- travis build status in README
- add replication strategy support ([#1](https://github.com/Galeria-Kaufhof/pillar/commit/e7429d52b21fb75a52c0756dc53abf930080a4e3))
- tweaked scaladoc for CassandraSpec
- quorum consistency ([#2](https://github.com/Galeria-Kaufhof/pillar/commit/2a956146c6ed6d3137ba59ecb3752718c03882a9))
- add replication strategy support (#9)
- small bugfixes
- Allow authentication and ssl connections (convoi)
- Small bugfixes
- change package structure to de.kaufhof (MarcoPriebe)
- Update the sbt-sonatype dependency to version 1.1 (MarcoPriebe)
- Update Scala to version 2.11.6 (MarcoPriebe)
- Update the Cassandra dependency to version 3.0.0 (MarcoPriebe)
- Update the argot dependency to version 1.0.3 (magro)
- Allow configuration of Cassandra port (fkoehler)
- Rework Migrator interface to allow passing a Session object when integrating Pillar as a library (magro, comeara)
- Clarify documentation (pvenable)
- Update Datastax Cassandra driver to version 2.0.2 (magro)
- Update Scala to version 2.10.4 (magro)
- Add cross-compilation to Scala version 2.11.1 (magro)
- Shutdown cluster in migrate & initialize (magro)
- Transition support from StreamSend to Chris O'Meara (comeara)
- Add a "destroy" method to drop a keyspace (iamsteveholmes)