This document describes breaking changes and how to upgrade. For a complete list of changes including minor and patch releases, please refer to the changelog.
This release adds hooks and drops callbacks, not-found errors and support of Node.js < 16. The guide for this release consists of two sections. One for the public API, relevant to all consumers of `abstract-level` and implementations thereof (`level`, `classic-level`, `memory-level` et cetera) and another for the private API that only implementors should have to read.
If you're upgrading from `levelup`, `abstract-leveldown` or other old modules, it's recommended to first upgrade to `abstract-level` 1.x because that version includes compatibility checks that have since been removed.
All methods that previously (also) accepted a callback now only support promises. If you were already using promises then nothing changed, except for subtle timing differences and improved performance. If you were not yet using promises, migrating should be relatively straightforward because nearly all callbacks had just two arguments (an error and a result), thus making promise function signatures predictable. The only method that had a callback with more than two arguments was `iterator.next()`. If you previously did:
iterator.next(function (err, key, value) {
// ..
})
You must now do:
const [ key, value ] = await iterator.next()
Or switch to async iterators:
for await (const [key, value] of iterator) {
// ..
}
The deprecated `iterator.end()` alias of `iterator.close()` has been removed.
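For example, where you previously called the alias, call `close()` directly (the `try/finally` is just one way to guarantee cleanup):

```js
const iterator = db.iterator()

try {
  // ..
} finally {
  // Formerly iterator.end()
  await iterator.close()
}
```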
The `db.get()` method now yields `undefined` instead of an error for non-existing entries. If you previously did:
try {
await db.get('example')
} catch (err) {
if (err.code === 'LEVEL_NOT_FOUND') {
console.log('Not found')
}
}
You must now do:
const value = await db.get('example')
if (value === undefined) {
console.log('Not found')
}
The same applies to equivalent and older `if (err.notFound)` code in the style of `levelup`.
The `ready` alias of the `open` event has been removed. If you previously did:
db.once('ready', function () {
// ..
})
You must now do:
db.once('open', function () {
// ..
})
That said, old code that uses these events would likely be better off using `db.open()`, because synchronous events don't mix well with `async/await`. You could instead do:
await db.open({ passive: true })
await db.get('example')
Or simply:
await db.get('example')
The internals of nested sublevels have been refactored for the benefit of hooks. Nested sublevels, no matter their depth, were previously all connected to the same parent database rather than forming a tree. In the following example, the `colorIndex` sublevel would previously forward its operations directly to `db`:
const indexes = db.sublevel('idx')
const colorIndex = indexes.sublevel('colors')
It will now forward its operations to `indexes`, which in turn forwards them to `db`. At each step, hooks and events are available to transform and react to data from a different perspective. This comes at a (typically small) performance cost that increases with further nested sublevels.
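As a rough sketch of what this enables, an intermediate sublevel can now observe writes made through its children. This assumes the `write` event added in this release; see the README for its exact payload:

```js
const indexes = db.sublevel('idx')
const colorIndex = indexes.sublevel('colors')

// React to data from the perspective of the intermediate sublevel
indexes.on('write', (operations) => {
  // Writes made via colorIndex pass through indexes before reaching db
  console.log(operations.length)
})

await colorIndex.put('blue', '#0000ff')
```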
To optionally negate that cost, a new feature has been added to `db.sublevel(name)`: it now also accepts a `name` that is an array. If the `indexes` sublevel is only used to organize keys and not directly interfaced with, operations on `colorIndex` can be made faster by skipping `indexes`:
const colorIndex = db.sublevel(['idx', 'colors'])
It is no longer possible to create a chained batch while the database is opening. If you previously did:
const db = new ExampleLevel()
const batch = db.batch().del('example')
await batch.write()
You must now do:
const db = new ExampleLevel()
await db.open()
const batch = db.batch().del('example')
await batch.write()
Alternatively:
const db = new ExampleLevel()
await db.batch([{ type: 'del', key: 'example' }])
As for why that last example works yet the same is not supported on a chained batch: the `put()`, `del()` and `clear()` methods of a chained batch are synchronous. This meant `abstract-level` (and `levelup` before it) had to jump through several hoops to make it work while the database is opening. Having such logic internally is fine, but the problem extended to the new hooks feature and more specifically, the `prewrite` hook that runs on `put()` and `del()`.
All private methods that previously took a callback now use a promise. For example, the function signature `_get(key, options, callback)` has changed to `async _get(key, options)`. Same as in the public API, the new function signatures are predictable and the only method that requires special attention is `iterator._next()`. For details, please see the updated README.
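As a rough sketch of that change (the `[key, value]` entry shape returned here is an assumption, mirroring the public `next()` example above; the README has the exact contract):

```js
const { AbstractIterator } = require('abstract-level')

// Previously: _next (callback) { this.nextTick(callback, null, key, value) }
class ExampleIterator extends AbstractIterator {
  async _next () {
    // Resolve to a [key, value] entry, or undefined when there are no
    // more entries
    return ['key', 'value']
  }
}
```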
Internal use of `process.nextTick` has been replaced with `queueMicrotask` (which was already used in browsers) and the polyfill for `queueMicrotask` (for older browsers) has been removed. The `db.nextTick` utility has been removed as well. These utilities are typically not even needed anymore, thanks to the use of promises. If you previously did:
class ExampleLevel extends AbstractLevel {
_get (key, options, callback) {
process.nextTick(callback, null, 'abc')
}
customMethod () {
this.nextTick(() => {
// ..
})
}
}
You must now do:
class ExampleLevel extends AbstractLevel {
async _get (key, options) {
return 'abc'
}
customMethod () {
queueMicrotask(() => {
// ..
})
}
}
Iterators now take an experimental `signal` option that is an `AbortSignal`. You can use the signal to abort an in-progress `_next()`, `_nextv()` or `_all()` call. Doing so is optional until a future semver-major release.
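From the consumer side, the signal is passed via the public iterator options. A minimal sketch; the abort is detected via the signal here rather than by a specific error code:

```js
const ac = new AbortController()
const timer = setTimeout(() => ac.abort(), 1_000)

try {
  for await (const [key, value] of db.iterator({ signal: ac.signal })) {
    // ..
  }
} catch (err) {
  if (ac.signal.aborted) {
    // The iteration was aborted rather than failed
  } else {
    throw err
  }
} finally {
  clearTimeout(timer)
}
```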
If an implementation indicates support of snapshots via `db.supports.snapshots` then the `db._get()` and `db._getMany()` methods are now required to synchronously create their snapshot, rather than asynchronously. For details, please see the README. This is a documentation-only change because the abstract test suite cannot verify it.
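Roughly speaking, and using a purely hypothetical `this._store` binding with `snapshot()`, `get()` and `close()` methods for illustration only, the snapshot now has to be created before the first `await`:

```js
const { AbstractLevel } = require('abstract-level')

class ExampleLevel extends AbstractLevel {
  async _get (key, options) {
    // Create the snapshot synchronously, i.e. before any await
    const snapshot = this._store.snapshot()

    try {
      return await this._store.get(key, { snapshot })
    } finally {
      snapshot.close()
    }
  }
}
```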
Introducing `abstract-level`: a fork of `abstract-leveldown` that removes the need for `levelup`, `encoding-down` and more. An `abstract-level` database is a complete solution that doesn't need to be wrapped. It has the same API as `level(up)` including encodings, promises and events. In addition, implementations can now choose to use Uint8Array instead of Buffer. Consumers of an implementation can use both. Sublevels are builtin.
We've put together several upgrade guides for different modules. See the FAQ to find the best upgrade guide for you. This upgrade guide describes how to replace `abstract-leveldown` with `abstract-level`. Implementations that do so can no longer be wrapped with `levelup`.
The npm package name is `abstract-level` and the main export is called `AbstractLevel` rather than `AbstractLevelDOWN`. It started using classes. Support of Node.js 10 has been dropped.
For most folks, a database that upgraded from `abstract-leveldown` to `abstract-level` can be a drop-in replacement for a `level(up)` database (with the exception of stream methods). Let's start this guide there: all methods have been enhanced to reach API parity with `levelup` and `level`.
Methods that take a callback now also support promises. They return a promise if no callback is provided, the same as `levelup`. Implementations that override public (non-underscored) methods must do the same, and any implementation should do the same for additional methods, if any.
An `abstract-level` database emits the same events as `levelup` would.
Opening and closing a database is idempotent and safe, similar to `levelup` but more precise. If `open()` and `close()` are called repeatedly, the last call dictates the final status. Callbacks are not called (or promises not resolved) until any pending state changes are done. The same goes for events. Unlike on `levelup`, it is safe to call `open()` while the status is `'closing'`: the database will wait for closing to complete and then reopen. None of these changes are likely to constitute a breaking change; they increase state consistency in edge cases.
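For example, reopening while a close is still in progress is now well-defined:

```js
await db.open()

// Calling open() while status is 'closing' waits for the close to
// complete and then reopens, rather than yielding an error
const closing = db.close()
const reopening = db.open()

await Promise.all([closing, reopening])
console.log(db.status) // 'open'
```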
The `open()` method has a new option called `passive`. If set to `true`, the call will wait for, but not initiate, opening of the database. This has a similar effect to `db.once('open', callback)`, with the added benefit that it also works if the database is already open. Implementations that wrap another database can use the `passive` option to open themselves without taking full control of the database that they wrap.
Deferred open is built-in. This means a database opens itself a tick after its constructor returns (unless `open()` was called manually). Any operations made until opening has completed are queued up in memory. When opening completes the operations are replayed. If opening failed (and this is a new behavior compared to `levelup`) the operations will yield errors. The `AbstractLevel` class has a new `defer()` method for an implementation to defer custom operations.
The initial `status` of a database is `'opening'` rather than `'new'`, which no longer exists. Wrapping a database with `deferred-leveldown` is not supported and will exhibit undefined behavior.
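For example (constructor arguments omitted, using the `ExampleLevel` placeholder from elsewhere in this guide):

```js
const db = new ExampleLevel()
console.log(db.status) // 'opening' (rather than 'new')

await db.open()
console.log(db.status) // 'open'
```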
Implementations must also accept options for `open()` in their constructor, which was previously done by `levelup`. For example, usage of the `classic-level` implementation is as follows:
const db = new ClassicLevel('./db', {
createIfMissing: false,
compression: false
})
This works by first forwarding options to the `AbstractLevel` constructor, which in turn forwards them to `open(options)`. If `open(options)` is called manually those options will be shallowly merged with options from the constructor:
// Results in { createIfMissing: false, compression: true }
await db.open({ compression: true })
A database is not "patch-safe". If some form of plugin monkey-patches a database like in the following example, it must now also take the responsibility of deferring the operation (as well as handling promises and callbacks) using `db.defer()`. I.e. this example is incomplete:
function plugin (db) {
const original = db.get
db.get = function (...args) {
original.call(this, ...args)
}
}
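A more complete sketch, handling only promise-based calls for brevity (callback-based calls and encodings would need similar treatment):

```js
function plugin (db) {
  const original = db.get

  db.get = function (...args) {
    if (this.status === 'opening') {
      // Queue the operation until opening has completed (or failed)
      return new Promise((resolve, reject) => {
        this.defer(() => {
          this.get(...args).then(resolve, reject)
        })
      })
    }

    return original.call(this, ...args)
  }
}
```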
The database constructor does not take a callback argument, unlike `levelup`. This goes for `abstract-level` as well as implementations, which is to say: implementors don't have to (and should not) support this old pattern.
Instead call `db.open()` if you wish to wait for opening (which is not necessary to use the database) or to capture an error. If that's your reason for using the callback and you previously initialized a database like so (simplified):
levelup(function (err, db) {
// ..
})
You must now do:
db.open(function (err) {
// ..
})
Or using promises:
await db.open()
On any operation, an `abstract-level` database checks if it's open. If not, it will either throw an error (if the relevant API is synchronous) or asynchronously yield an error. For example:
await db.close()
try {
db.iterator()
} catch (err) {
console.log(err.code) // LEVEL_DATABASE_NOT_OPEN
}
Errors now have a `code` property. More on that below.
This may be a breaking change downstream because it changes error messages for implementations that had their own safety checks (which will now be ineffective because `abstract-level` checks are performed first) or implicitly relied on `levelup` checks. By safety we mean mainly that yielding a JavaScript error is preferred over segmentation faults, though non-native implementations also benefit from detecting incorrect usage.
Implementations that have additional methods should add or align their own safety checks for consistency. Like so:
const ModuleError = require('module-error')
class ExampleLevel extends AbstractLevel {
// For brevity this example does not implement promises or encodings
approximateSize (start, end, callback) {
if (this.status === 'opening') {
this.defer(() => this.approximateSize(start, end, callback))
} else if (this.status !== 'open') {
this.nextTick(callback, new ModuleError('Database is not open', {
code: 'LEVEL_DATABASE_NOT_OPEN'
}))
} else {
// ..
}
}
}
The `AbstractChainedBatch` prototype has a new `length` property that, like a chained batch in `levelup`, returns the number of queued operations in the batch. Implementations should not have to make changes for this unless they monkey-patched public methods of `AbstractChainedBatch`.
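For example:

```js
const batch = db.batch().put('a', '1').put('b', '2').del('c')
console.log(batch.length) // 3

await batch.write()
```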
It was previously necessary to use `level` to get the "full experience", or similar modules like `level-mem`, `level-rocksdb` and more. These modules combined an `abstract-leveldown` implementation with `encoding-down` and `levelup`. Encodings are now built-in to `abstract-level`, using `level-transcoder` rather than `level-codec`. The main change is that logic from the existing public API has been expanded down into the storage layer.
The `level` module still has a place, for its support of both Node.js and browsers and for being the main entrypoint into the Level ecosystem. The next major version of `level` (v8.0.0) will likely simply export `classic-level` in Node.js and `browser-level` in browsers. To differentiate, the text below will refer to the old version as `level@7`.
All relevant methods including the database constructor now accept `keyEncoding` and `valueEncoding` options, the same as `level@7`. Read operations now yield strings rather than buffers by default, having the same default `'utf8'` encoding as `level@7` and friends.
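For example, on a database constructed with a default `valueEncoding` of `'json'`:

```js
await db.put('a', { x: 2 })

console.log(await db.get('a')) // { x: 2 }

// Encodings can also be set per operation
console.log(await db.get('a', { valueEncoding: 'utf8' })) // '{"x":2}'
```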
There are a few differences from `level@7` and `encoding-down`. Some breaking:
- The lesser-used `'ascii'`, `'ucs2'` and `'utf16le'` encodings are not supported
- The `'id'` encoding, which was not supported by any active `abstract-leveldown` implementation and aliased as `'none'`, has been removed
- The undocumented `encoding` option (as an alias for `valueEncoding`) is not supported.
And some non-breaking:
- The `'binary'` encoding has been renamed to `'buffer'`, with `'binary'` as an alias
- The `'utf8'` encoding previously did not touch Buffers. Now it will call `buffer.toString('utf8')` for consistency. Consumers can use the `'buffer'` encoding to avoid this conversion.
If you previously did one of the following (on a database that's defaulting to the `'utf8'` encoding):
await db.put('a', Buffer.from('x'))
await db.put('a', Buffer.from('x'), { valueEncoding: 'binary' })
Both examples will still work (assuming the buffer contains only UTF8 data) but you should now do:
await db.put('a', Buffer.from('x'), { valueEncoding: 'buffer' })
Or use the new `'view'` encoding which accepts Uint8Arrays (and therefore also Buffer):
await db.put('a', new Uint8Array(...), { valueEncoding: 'view' })
You can skip this section if you're consuming (rather than writing) an `abstract-level` implementation.
Both the public and private API of `abstract-level` are encoding-aware. This means that private methods receive `keyEncoding` and `valueEncoding` options too, instead of the `keyAsBuffer`, `valueAsBuffer` and `asBuffer` options that `abstract-leveldown` had. Implementations don't need to perform encoding or decoding themselves. In fact they can do less: the `_serializeKey()` and `_serializeValue()` methods are also gone and implementations are less likely to have to convert between strings and buffers.
For example: a call like `db.put(key, { x: 2 }, { valueEncoding: 'json' })` will encode the `{ x: 2 }` value and might forward it to the private API as `db._put(key, '{"x":2}', { valueEncoding: 'utf8' }, callback)`. Same for the key, omitted for brevity. We say "might" because it depends on the implementation, which can now declare which encodings it supports.
To first give a concrete example for `get()`, if your implementation previously did:
class ExampleLeveldown extends AbstractLevelDOWN {
_get (key, options, callback) {
if (options.asBuffer) {
this.nextTick(callback, null, Buffer.from('abc'))
} else {
this.nextTick(callback, null, 'abc')
}
}
}
You must now do (if still relevant):
class ExampleLevel extends AbstractLevel {
_get (key, options, callback) {
if (options.valueEncoding === 'buffer') {
this.nextTick(callback, null, Buffer.from('abc'))
} else {
this.nextTick(callback, null, 'abc')
}
}
}
The encoding options and data received by the private API depend on which encodings it supports. It must declare those via the manifest passed to the `AbstractLevel` constructor. See the README for details. For example, an implementation might only support storing data as Uint8Arrays, known here as the `'view'` encoding:
class ExampleLevel extends AbstractLevel {
constructor (location, options) {
super({ encodings: { view: true } }, options)
}
}
The earlier `put()` example would then result in `db._put(key, value, { valueEncoding: 'view' }, callback)` where `value` is a Uint8Array containing JSON in binary form. And the earlier `_get()` example can be simplified to:
class ExampleLevel extends AbstractLevel {
_get (key, options, callback) {
// No need to check valueEncoding as it's always 'view'
this.nextTick(callback, null, new Uint8Array(...))
}
}
Implementations can also declare support of multiple encodings; keys and values will then be encoded via the most optimal path. For example:
super({
encodings: {
view: true,
utf8: true
}
})
- The `AbstractIterator` constructor now requires an `options` argument, for encoding options (see the sketch after this list)
- The `AbstractIterator#_seek()` method got a new `options` argument, for a `keyEncoding` option
- The `db.supports.bufferKeys` property has been removed. Use `db.supports.encodings.buffer` instead.
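A sketch of the first two changes (callback-style, matching the private API of this release):

```js
const { AbstractIterator } = require('abstract-level')

class ExampleIterator extends AbstractIterator {
  constructor (db, options) {
    // The options argument is now required and carries encoding options
    super(db, options)
  }

  _seek (target, options) {
    // options.keyEncoding describes how target has been encoded
  }
}
```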
Node.js readable streams must now be created with a new standalone module called `level-read-stream`, rather than database methods like `db.createReadStream()`. Please see its upgrade guide for details.
To offer an alternative to `db.createKeyStream()` and `db.createValueStream()`, two new types of iterators have been added: `db.keys()` and `db.values()`. Their default implementations are functional but implementors may want to override them for optimal performance. The same goes for two new methods on iterators: `nextv()` and `all()`. To achieve this and honor the `limit` option, abstract iterators now count how many items they yielded, which may remove the need for implementations to do so on their own. Please see the README for details.
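For example, on the consumer side:

```js
// Iterate keys or values only
const keys = await db.keys({ gt: 'a', limit: 10 }).all()
const values = await db.values({ gt: 'a', limit: 10 }).all()

// Read entries in batches rather than one at a time
const iterator = db.iterator()
const entries = await iterator.nextv(100) // Up to 100 [key, value] entries
await iterator.close()
```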
Zero-length keys (empty strings) sort before anything else. Historically they weren't supported because they caused segmentation faults in `leveldown`. That doesn't apply to today's codebase. Implementations must now support:
await db.put('', 'example')
console.log(await db.get('')) // 'example'
for await (const [key, value] of db.iterator({ lte: '' })) {
console.log(value) // 'example'
}
Same goes for zero-length Buffer and Uint8Array keys. Zero-length keys would previously result in an error and never reach the private API.
To further improve safety and consistency, additional changes were made that make an `abstract-level` database safer to use than `abstract-leveldown` wrapped with `levelup`.
The `iterator.end()` method has been renamed to `iterator.close()`, with `end()` being an alias until a future major version. The term "close" makes it easier to differentiate between the iterator having reached its natural end (data-wise) versus closing it to clean up resources. If you previously did:
const iterator = db.iterator()
iterator.end(callback)
You should now do one of:
iterator.close(callback)
await iterator.close()
Likewise, in the private API for implementors, `_end()` has been renamed to `_close()` but without an alias. This method is no longer allowed to yield an error.
On `db.close()`, non-closed iterators are now automatically closed. This may be a breaking change but only if an implementation has (at its own risk) overridden the public `end()` method, because `close()` or `end()` is now an idempotent operation rather than yielding an `end() already called on iterator` error. If a `next()` call is in progress, closing the iterator (or database) will wait for that.
The error message `cannot call next() after end()` has been replaced with code `LEVEL_ITERATOR_NOT_OPEN`, the error `cannot call seek() after end()` has been removed in favor of a silent return, and `cannot call next() before previous next() has completed` and `cannot call seek() before next() has completed` have been replaced with code `LEVEL_ITERATOR_BUSY`.
The `next()` method no longer returns `this` (when a callback is provided).
Chained batch has a new method `close()`, which is an idempotent operation and automatically called after `write()` (for backwards compatibility) or on `db.close()`. This is to ensure batches can't be used after closing and reopening a db. If a `write()` is in progress, closing will wait for that. If `write()` is never called then `close()` must be. For example:
const batch = db.batch()
.put('abc', 'zyz')
.del('foo')
if (someCondition) {
await batch.write()
} else {
// Decided not to commit
await batch.close()
}
// In either case this will throw
batch.put('more', 'data')
These changes could be breaking for an implementation that has (at its own risk) overridden the public `write()` method. In addition, the error message `write() already called on this batch` has been replaced with code `LEVEL_BATCH_NOT_OPEN`.
An implementation can optionally override `AbstractChainedBatch#_close()` if it has resources to free and wishes to free them earlier than GC would.
The `level-errors` module, as used by `levelup` and friends, is not used or exposed by `abstract-level`. Instead, errors thrown or yielded from a database have a `code` property. See the README for details. Going forward, the semver contract will be on `code` and error messages will change without a semver-major bump.
To minimize breakage, the most used error, as yielded by `get()` when an entry is not found, has the same properties that `level-errors` added (`notFound` and `status`) in addition to code `LEVEL_NOT_FOUND`. Those properties will be removed in a future version. Implementations can still yield an error that matches `/NotFound/i.test(err)` or they can start using the code. Either way `abstract-level` will normalize the error.
If you previously did:
db.get('abc', function (err, value) {
if (err && err.notFound) {
// Handle missing entry
}
})
That will still work but it's preferred to do:
db.get('abc', function (err, value) {
if (err && err.code === 'LEVEL_NOT_FOUND') {
// Handle missing entry
}
})
Or using promises:
try {
const value = await db.get('abc')
} catch (err) {
if (err.code === 'LEVEL_NOT_FOUND') {
// Handle missing entry
}
}
The following properties and methods can no longer be accessed, as they've been removed or replaced with internal symbols:
- `AbstractIterator#_nexting`
- `AbstractIterator#_ended`
- `AbstractChainedBatch#_written`
- `AbstractChainedBatch#_checkWritten()`
- `AbstractChainedBatch#_operations`
- `AbstractLevel#_setupIteratorOptions()`
You can skip this section if you're consuming (rather than writing) an `abstract-level` implementation.
The abstract test suite of `abstract-level` has some breaking changes compared to `abstract-leveldown`:
- Options to skip tests have been removed in favor of `db.supports`
- Support of `db.clear()` and `db.getMany()` is now mandatory. The default (slow) implementation of `_clear()` has been removed.
- Added tests that `gte` and `lte` range options take precedence over `gt` and `lt` respectively. This is incompatible with `ltgt` but aligns with `subleveldown`, `level-option-wrap` and half of `leveldown`. There was no good choice.
- The `setUp` and `tearDown` functions have been removed from the test suite and `suite.common()`.
- Added ability to access manifests via `testCommon.supports`, by lazily copying it from `testCommon.factory().supports`. This requires that the manifest does not change during the lifetime of a `db`.
- Your `factory()` function must now accept an `options` argument (see the example after this list).
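For reference, a minimal setup with the new `factory(options)` signature might look as follows, assuming `tape` as the test runner and a constructor that accepts options:

```js
const test = require('tape')
const suite = require('abstract-level/test')

suite({
  test,
  factory (options) {
    // Forward options so the suite can create databases with specific options
    return new ExampleLevel(options)
  }
})
```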
Many tests were imported from `levelup`, `encoding-down`, `deferred-leveldown`, `memdown`, `level-js` and `leveldown`. They test the changes described above and improve coverage of existing behavior.
Lastly, it's recommended to revisit any custom tests of an implementation, in particular if those tests relied upon the previously loose state checking of `abstract-leveldown`; for example, making a `db.put()` call before `db.open()`. Such a test now has a different meaning. The previous meaning can typically be restored by inserting `db.once('open', ...)` or `await db.open()` logic.
This section is only relevant if you use `subleveldown` (which cannot wrap an `abstract-level` database).
Sublevels are now builtin. If you previously did:
const sub = require('subleveldown')
const example1 = sub(db, 'example1')
const example2 = sub(db, 'example2', { valueEncoding: 'json' })
You must now do:
const example1 = db.sublevel('example1')
const example2 = db.sublevel('example2', { valueEncoding: 'json' })
The key structure is equal to that of `subleveldown`. This means that an `abstract-level` sublevel can read sublevels previously created with (and populated by) `subleveldown`. There are some new features:
- `db.batch(..)` takes a `sublevel` option on operations, to atomically commit data to multiple sublevels (see the example after this list)
- Sublevels support Uint8Array in addition to Buffer
- `AbstractLevel#_sublevel()` can be overridden to add additional methods to sublevels.
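For example, using the `sublevel` option mentioned above (with `example1` from the earlier snippet):

```js
// Atomically commit data to both the parent database and a sublevel
await db.batch([
  { type: 'put', key: 'a', value: '1' },
  { type: 'put', key: 'b', value: '2', sublevel: example1 }
])
```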
To reduce function overloads, the prefix argument (`example1` above) is now required and it's called `name` here. If you previously did one of the following, resulting in an empty name:
subleveldown(db)
subleveldown(db, { separator: '@' })
You must now use an explicit empty name:
db.sublevel('')
db.sublevel('', { separator: '@' })
The string shorthand for `{ separator }` has also been removed. If you previously did:
subleveldown(db, 'example', '@')
You must now do:
db.sublevel('example', { separator: '@' })
Third, the `open` option has been removed. If you need an asynchronous open hook, feel free to open an issue to discuss restoring this API. Should it support promises? Should `abstract-level` support it on any database and not just sublevels?
Lastly, the error message `Parent database is not open` (courtesy of `subleveldown`, which had to check open state to prevent segmentation faults from underlying databases) changed to error code `LEVEL_DATABASE_NOT_OPEN` (courtesy of `abstract-level`, which does those checks on any database).
For earlier releases, before `abstract-level` was forked from `abstract-leveldown` (v7.2.0), please see the upgrade guide of `abstract-leveldown`.