Sparkplug is a very thin wrapper over the DynamoDB DocumentClient with a nicer, Promise-based interface that feels more idiomatic to JavaScript. That means less deeply nested, indecipherable JSON and fewer PascalCase'd properties.
Sparkplug isn't intended to be an ODM or a heavy abstraction over Amazon's client. It also doesn't deal with setting up table schemas programmatically, as that is best left to Terraform, CloudFormation, or configuration through tooling such as Serverless.
```js
const Sparkplug = require('sparkplug')
const plug = new Sparkplug()

plug
  .table('accounts')
  .get({ email: '[email protected]' })
  .then(({ data }) => {
    console.log(data.name)
  })
  .catch((err) => {
    // handle errors
  })
```
Sparkplug is available through the npm registry. Download and install it using `npm install`:

```shell
npm install sparkplug
```
Instances of Sparkplug can be passed configuration options. Sparkplug accepts the same config options that Amazon's DynamoDB client does, including `endpoint` and `region`.
If you're running in a context such as a Lambda function, you might not need to pass in any values at all, as they are configured automatically on AWS.
```js
const Sparkplug = require('sparkplug')

// Use default environment variables.
const plug = new Sparkplug()
```
If running locally via DynamoDB Local or Dynalite, you can use the `localhost` region along with the local endpoint.
```js
// Use a locally running DynamoDB instance.
const localPlug = new Sparkplug({
  region: 'localhost',
  endpoint: 'http://localhost:4567'
})
```
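If you don't already have a local DynamoDB running, Dynalite can be started on the matching port (this assumes the `dynalite` package is installed globally):

```shell
# Install and start Dynalite on the port used in the example above.
npm install -g dynalite
dynalite --port 4567
```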
Select which DynamoDB table to query with the `.table()` method. Database operations can be chained off of this method's return value.
```js
const plug = new Sparkplug()
const accounts = plug.table('accounts')
```
Use the `.get()` method of a `Table` to perform read operations on the database. `.get()` accepts an object with a primary key and the value to look up, and returns a native Promise object.
In this example, we query the `accounts` table for a record where `email` is `'[email protected]'`.
```js
plug
  .table('accounts')
  .get({ email: '[email protected]' })
  .then(({ data }) => {
    console.log(data.name)
  })
  .catch((err) => {
    // handle errors
  })
```
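Because `.get()` returns a native Promise, the same lookup can also be written with `async`/`await`; this is plain Promise syntax, not a separate Sparkplug API:

```js
// Equivalent lookup using async/await.
async function getAccount () {
  try {
    const { data } = await plug.table('accounts').get({ email: '[email protected]' })
    console.log(data.name)
  } catch (err) {
    // handle errors
  }
}
```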
Use the `.put()` and `.delete()` methods to create/update or delete entries, respectively. The object passed to `.put()` must include the table's primary key (in our example, `email`).
```js
plug
  .table('accounts')
  .put({
    email: '[email protected]',
    name: 'Admiral Ackbar',
    planet: 'Mon Calamari'
  })
  .then(({ data }) => {
    console.log(data.name)
  })
  .catch((err) => {
    // handle errors
  })
```
`.delete()` accepts a primary key, similarly to `.get()`.
```js
plug
  .table('accounts')
  .delete({ email: '[email protected]' })
  .then(() => {
    // perform actions after deletion
  })
  .catch((err) => {
    // handle errors
  })
```
Sparkplug supports both queries and scans.
Scans are a simple way to search on a non-primary key. Use the `.exec()` method to execute the scan and return a Promise.
```js
plug
  .table('accounts')
  .scan({ planet: 'Mon Calamari' })
  .exec()
  .then(({ data }) => {
    // `data` contains results of scan
  })
  .catch((err) => {
    // handle errors
  })
```
Queries can be used as more performant lookups on primary keys or secondary indexes.
To query a primary key, use `.query()`:
```js
const promise = plug
  .table('accounts')
  .query({ email: '[email protected]' })
  .exec()
```
To query a secondary index, chain the `.on()` method onto the query. The example below assumes you've set up a secondary index on `name`.
```js
const promise = plug
  .table('accounts')
  .query({ name: 'Admiral Ackbar' })
  .on('name')
  .exec()
```
Scans and queries can paginate through results with the `.start()` and `.limit()` methods of the `Scan` or `Query` objects.
This scan starts at the item with the given primary key and limits the response to 2 results after that key.
```js
const promise = plug
  .table('accounts')
  .scan()
  .start({ email: '[email protected]' })
  .limit(2)
  .exec()
```
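Chained together, `.start()` and `.limit()` can be used to walk a table page by page. The loop below is a sketch, and assumes that passing the last returned item's primary key to `.start()` resumes the scan from there, and that a short page signals the end of the table; neither behavior is shown in the examples above:

```js
// Hypothetical pagination loop: resume each scan from the last item's key.
async function scanAll () {
  const pageSize = 25
  let items = []
  let startKey = null
  while (true) {
    const scan = plug.table('accounts').scan().limit(pageSize)
    if (startKey) scan.start(startKey)
    const { data } = await scan.exec()
    items = items.concat(data)
    if (data.length < pageSize) break
    startKey = { email: data[data.length - 1].email }
  }
  return items
}
```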
Scans perform eventually consistent reads by default. To use strong consistency, use the `.strongRead()` method of `Scan`:
```js
const promise = plug
  .table('accounts')
  .scan({ planet: 'Mon Calamari' })
  .strongRead()
  .exec()
```
You can use Sparkplug to make batch `get`, `put`, and `delete` requests using the `.batch()` method. Batch operations accept a Sparkplug `Table` as their first parameter and either an object or an array of objects as their second.
```js
const accounts = plug.table(ACCOUNT_TABLE)
const orgs = plug.table(ORG_TABLE)

const promise = plug
  .batch()
  .put(accounts, [{
    email: '[email protected]',
    name: 'Admiral Ackbar',
    planet: 'Mon Calamari'
  }, {
    email: '[email protected]',
    name: 'Darth Vader',
    planet: 'Tatooine'
  }])
  .put(orgs, {
    name: 'Github',
    id: 45678
  })
  .exec()
```
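The example above only shows batch puts. A batch delete should take the same `(table, items)` shape, with primary-key objects as the items, as `.delete()` does on a `Table`; chaining `.delete()` off `.batch()` is an assumption based on the description above, not a documented call:

```js
// Assumed batch delete: same (table, items) signature as a batch put,
// with primary-key objects as the items.
const promise = plug
  .batch()
  .delete(accounts, [
    { email: '[email protected]' },
    { email: '[email protected]' }
  ])
  .exec()
```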
Sparkplug is open for contributions via GitHub Pull Requests!
To run tests and a coverage report against the codebase:

- clone the repository
- run `npm i` to install dependencies
- run `npm test`