💾 Universal Storage Layer
Why ❓
Typically, we choose one or more data storages based on our use cases: a filesystem, a database like Redis or Mongo, or LocalStorage for browsers. But it soon becomes troublesome to support and combine more than one of them, or to switch between them. For JavaScript library authors, this usually means they have to decide how many platforms they support and implement storage for each.
💡 The unstorage solution is a unified and powerful key-value (KV) interface that allows combining drivers that are either built-in or implemented via a super simple interface, and adds conventional features like mounting, watching, and working with metadata.
Compared to similar solutions like localforage, the unstorage core is almost 6x smaller (4.7 kB vs localforage's 28.9 kB), uses modern ESM/TypeScript/async syntax, and offers many more features to be used universally.
✔️ Works in all environments (Browser, NodeJS, and Workers)
✔️ Multiple built-in drivers (Memory, FS, LocalStorage, HTTP, Redis)
✔️ Asynchronous API
✔️ Unix-style driver mounting to combine storages
✔️ Default in-memory storage
✔️ Tree-shakable utils and tiny core
✔️ Driver-native and user-provided metadata
✔️ Native aware value serialization and deserialization
✔️ Restore initial state (hydration)
✔️ State snapshot
✔️ Driver-agnostic watcher
✔️ HTTP Storage server (CLI and programmatic)
✔️ Namespaced storage
✔️ Overlay storage (copy-on-write)
🚧 Virtual `fs` interface
🚧 Cached storage
🚧 More drivers: MongoDB, S3 and IndexedDB
Table of Contents
- Usage
- Storage Interface
  - storage.hasItem(key)
  - storage.getItem(key)
  - storage.setItem(key, value)
  - storage.removeItem(key, removeMeta = true)
  - storage.getMeta(key, nativeOnly?)
  - storage.setMeta(key)
  - storage.removeMeta(key)
  - storage.getKeys(base?)
  - storage.clear(base?)
  - storage.dispose()
  - storage.mount(mountpoint, driver)
  - storage.unmount(mountpoint, dispose = true)
  - storage.watch(callback)
  - storage.unwatch()
- Utils
- Storage Server
- Drivers
- Making custom drivers
- Contribution
- License
Usage
Install `unstorage` npm package:
yarn add unstorage
# or
npm i unstorage
import { createStorage } from 'unstorage'
const storage = createStorage(/* opts */)
await storage.getItem('foo:bar') // or storage.getItem('/foo/bar')
Options:
- `driver`: Default driver (using memory if not provided)
Storage Interface
storage.hasItem(key)
Checks if storage contains a key. Resolves to either `true` or `false`.
await storage.hasItem('foo:bar')
storage.getItem(key)
Gets the value of a key in storage. Resolves to either `string` or `null`.
await storage.getItem('foo:bar')
storage.setItem(key, value)
Adds or updates a value in the storage.
If the value is not a string, it will be stringified.
If the value is `undefined`, it is the same as calling `removeItem(key)`.
await storage.setItem('foo:bar', 'baz')
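A short sketch of the behaviors above (the key name is made up for illustration):
```js
// A non-string value is stringified before being stored
await storage.setItem('foo:config', { enabled: true })

// Setting `undefined` behaves like removing the key
await storage.setItem('foo:config', undefined) // same as storage.removeItem('foo:config')
```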
storage.removeItem(key, removeMeta = true)
Removes a value (and its meta) from storage.
await storage.removeItem('foo:bar')
storage.getMeta(key, nativeOnly?)
Gets the metadata object for a specific key.
This data is fetched from two sources:
- Driver native meta (like file creation time)
- Custom meta set by `storage.setMeta` (overrides driver native meta)
await storage.getMeta('foo:bar') // For fs driver returns an object like { mtime, atime, size }
storage.setMeta(key)
Sets custom meta for a specific key by adding a `$` suffix.
await storage.setMeta('foo:bar', { flag: 1 })
// Same as storage.setItem('foo:bar$', { flag: 1 })
storage.removeMeta(key)
Removes meta for a specific key by adding a `$` suffix.
await storage.removeMeta('foo:bar')
// Same as storage.removeItem('foo:bar$')
storage.getKeys(base?)
Gets all keys. Returns an array of strings.
Meta keys (ending with `$`) will be filtered out.
If a base is provided, only keys starting with the base will be returned, and only mounts starting with the base will be queried. Keys still have the full path.
await storage.getKeys()
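For instance, restricting the result to a base (hypothetical keys for illustration):
```js
// Assuming keys 'foo:bar' and 'baz:qux' exist
await storage.getKeys()      // ['foo:bar', 'baz:qux']
await storage.getKeys('foo') // ['foo:bar'] (keys keep their full path)
```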
storage.clear(base?)
Removes all stored key/values. If a base is provided, only mounts matching the base will be cleared.
await storage.clear()
storage.dispose()
Disposes all mounted storages to ensure there are no open handles left. Call it before exiting the process.
Note: dispose also clears in-memory data.
await storage.dispose()
storage.mount(mountpoint, driver)
By default, everything is stored in memory. We can mount additional storage space in a Unix-like fashion.
When operating with a key that starts with a mountpoint, the mounted driver will be called instead of the default storage.
import { createStorage } from 'unstorage'
import fsDriver from 'unstorage/drivers/fs'
// Create a storage container with default memory storage
const storage = createStorage({})
storage.mount('/output', fsDriver({ base: './output' }))
// Writes to ./output/test file
await storage.setItem('/output/test', 'works')
// Adds value to in-memory storage
await storage.setItem('/foo', 'bar')
storage.unmount(mountpoint, dispose = true)
Unregisters a mountpoint. Has no effect if the mountpoint is not found or is the root.
await storage.unmount('/output')
storage.watch(callback)
Starts watching on all mountpoints. If a driver does not support watching, it only emits events when `storage.*` methods are called.
const unwatch = await storage.watch((event, key) => { })
// to stop this watcher
await unwatch()
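A small sketch of a watch callback. The event names shown ('update' and 'remove') are assumptions about what the watcher emits:
```js
// Log every change across all mountpoints
const unwatch = await storage.watch((event, key) => {
  console.log(`[${event}] ${key}`) // assumption: event is 'update' or 'remove'
})

await storage.setItem('foo:bar', 'baz') // logs: [update] foo:bar
await storage.removeItem('foo:bar')     // logs: [remove] foo:bar

await unwatch()
```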
storage.unwatch()
Stops all watchers on all mountpoints.
await storage.unwatch()
Utils
Takes a snapshot of all keys in the specified base into a plain JavaScript object (string: string). The base is removed from the keys.
import { snapshot } from 'unstorage'
const data = await snapshot(storage, '/etc')
Restores a snapshot created by `snapshot()`.
await restoreSnapshot(storage, { 'foo:bar': 'baz' }, '/etc2')
Creates a namespaced instance of the main storage.
All operations are virtually prefixed. Useful to create shortcuts and limit access.
import { createStorage, prefixStorage } from 'unstorage'
const storage = createStorage()
const assetsStorage = prefixStorage(storage, 'assets')
// Same as storage.setItem('assets:x', 'hello!')
await assetsStorage.setItem('x', 'hello!')
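Reads go through the same virtual prefix, so both views point at the same underlying key (a small illustrative sketch):
```js
await assetsStorage.getItem('x')  // 'hello!'
await storage.getItem('assets:x') // 'hello!' (same underlying key)
```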
Storage Server
We can easily expose an unstorage instance through an HTTP server to allow remote connections. The request URL is mapped to the key, and the method/body are mapped to a function. See below for the supported HTTP methods and a client usage sketch.
🛡️ Security Note: The server is unprotected by default. You need to add your own authentication/security middleware, such as basic authentication. Also consider that, even with authentication, unstorage should not be exposed to untrusted users, since it has no protection against abuse (DDoS, filesystem escalation, etc.)
Programmatic usage:
import { listen } from 'listhen'
import { createStorage } from 'unstorage'
import { createStorageServer } from 'unstorage/server'
const storage = createStorage()
const storageServer = createStorageServer(storage)
// Alternatively we can use `storageServer.handle` as a middleware
await listen(storageServer.handle)
Using CLI:
npx unstorage .
Supported HTTP Methods:
- `GET`: Maps to `storage.getItem`. Returns a list of keys on the path if a value is not found.
- `HEAD`: Maps to `storage.hasItem`. Returns 404 if not found.
- `PUT`: Maps to `storage.setItem`. The value is read from the body. Returns `OK` if the operation succeeded.
- `DELETE`: Maps to `storage.removeItem`. Returns `OK` if the operation succeeded.
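For illustration, here is a hedged sketch of talking to the server with `fetch`. The listening address is an assumption (listhen prints the actual URL when it starts), and the key names are made up:
```js
// Assumption: the storage server from above is listening on http://localhost:3000
const base = 'http://localhost:3000'

// PUT maps to storage.setItem
await fetch(`${base}/foo/bar`, { method: 'PUT', body: 'baz' })

// GET maps to storage.getItem
const value = await fetch(`${base}/foo/bar`).then(r => r.text()) // 'baz'

// DELETE maps to storage.removeItem
await fetch(`${base}/foo/bar`, { method: 'DELETE' })
```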
Drivers
`fs`
Maps data to the real filesystem using a directory structure for nested keys. Supports watching using chokidar.
This driver implements meta for each key, including `mtime` (last modified time), `atime` (last access time), and `size` (file size), using `fs.stat`.
import { createStorage } from 'unstorage'
import fsDriver from 'unstorage/drivers/fs'
const storage = createStorage({
driver: fsDriver({ base: './tmp' })
})
Options:
- `base`: Base directory to isolate operations on this directory
- `ignore`: Ignore patterns for watch
- `watchOptions`: Additional chokidar options
`localStorage`
Store data in localStorage.
import { createStorage } from 'unstorage'
import localStorageDriver from 'unstorage/drivers/localstorage'
const storage = createStorage({
driver: localStorageDriver({ base: 'app:' })
})
Options:
- `base`: Add `${base}:` to all keys to avoid collision
- `localStorage`: Optionally provide a `localStorage` object
- `window`: Optionally provide a `window` object
`memory`
Keeps data in memory using Set.
By default, it is mounted at the top level, so it is unlikely you need to mount it again.
import { createStorage } from 'unstorage'
import memoryDriver from 'unstorage/drivers/memory'
const storage = createStorage({
driver: memoryDriver()
})
`overlay`
This is a special driver that creates a multi-layer overlay driver.
All write operations happen on the top-level layer, while values are read from all layers.
When removing a key, a special value `__OVERLAY_REMOVED__` will be set on the top-level layer internally.
In the example below, we create an in-memory overlay on top of fs. No changes will actually be written to the disk.
import { createStorage } from 'unstorage'
import overlay from 'unstorage/drivers/overlay'
import memory from 'unstorage/drivers/memory'
import fs from 'unstorage/drivers/fs'
const storage = createStorage({
driver: overlay({
layers: [
memory(),
fs({ base: './data' })
]
})
})
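To illustrate the copy-on-write behavior described above, here is a hedged sketch (the key name is made up for illustration):
```js
// Reads fall through to the fs layer until the memory layer shadows them
await storage.getItem('config') // contents of ./data/config, if that file exists

// Writes only touch the in-memory layer; nothing is written under ./data
await storage.setItem('config', 'overridden')
await storage.getItem('config') // 'overridden' (served from the memory layer)
```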
`http`
Use a remote HTTP/HTTPS endpoint as data storage. Supports built-in HTTP server methods.
This driver implements meta for each key, including `mtime` (last modified time) and `status` from HTTP headers, by making a `HEAD` request.
import { createStorage } from 'unstorage'
import httpDriver from 'unstorage/drivers/http'
const storage = createStorage({
driver: httpDriver({ base: 'http://cdn.com' })
})
Options:
- `base`: Base URL for requests
Supported HTTP Methods:
- `getItem`: Maps to HTTP `GET`. Returns the deserialized value if the response is OK
- `hasItem`: Maps to HTTP `HEAD`. Returns `true` if the response is OK (200)
- `setItem`: Maps to HTTP `PUT`. Sends the serialized value in the request body
- `removeItem`: Maps to HTTP `DELETE`
- `clear`: Not supported
`redis`
Store data in Redis using ioredis.
import { createStorage } from 'unstorage'
import redisDriver from 'unstorage/drivers/redis'
const storage = createStorage({
driver: redisDriver({
base: 'storage:'
})
})
Options:
- `base`: Prefix all keys with base
- `url`: (optional) connection string (see the sketch below)
See ioredis for all available options.
The `lazyConnect` option is enabled by default so that the connection happens on the first Redis operation.
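As a hedged sketch of the `url` option, assuming a local Redis instance on the default port:
```js
import { createStorage } from 'unstorage'
import redisDriver from 'unstorage/drivers/redis'

const storage = createStorage({
  driver: redisDriver({
    base: 'storage:',
    // Assumption for illustration: a local Redis on the default port
    url: 'redis://localhost:6379'
  })
})
```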
`cloudflare-kv-http`
Store data in Cloudflare KV using the Cloudflare API v4.
You need to create a KV namespace. See KV Bindings for more information.
Note: This driver uses native fetch and works universally! For use directly in a Cloudflare Worker environment, please use the `cloudflare-kv-binding` driver for best performance!
import { createStorage } from 'unstorage'
import cloudflareKVHTTPDriver from 'unstorage/drivers/cloudflare-kv-http'
// Using `apiToken`
const storage = createStorage({
driver: cloudflareKVHTTPDriver({
accountId: 'my-account-id',
namespaceId: 'my-kv-namespace-id',
apiToken: 'supersecret-api-token',
}),
})
// Using `email` and `apiKey`
const storage = createStorage({
driver: cloudflareKVHTTPDriver({
accountId: 'my-account-id',
namespaceId: 'my-kv-namespace-id',
email: '[email protected]',
apiKey: 'my-api-key',
}),
})
// Using `userServiceKey`
const storage = createStorage({
driver: cloudflareKVHTTPDriver({
accountId: 'my-account-id',
namespaceId: 'my-kv-namespace-id',
userServiceKey: 'v1.0-my-service-key',
}),
})
Options:
- `accountId`: Cloudflare account ID.
- `namespaceId`: The ID of the KV namespace to target. Note: be sure to use the namespace's ID, and not the name or binding used in a worker environment.
- `apiToken`: API Token generated from the User Profile 'API Tokens' page.
- `email`: Email address associated with your account. May be used along with `apiKey` to authenticate in place of `apiToken`.
- `apiKey`: API key generated on the "My Account" page of the Cloudflare console. May be used along with `email` to authenticate in place of `apiToken`.
- `userServiceKey`: A special Cloudflare API key good for a restricted set of endpoints. Always begins with "v1.0-", may vary in length. May be used to authenticate in place of `apiToken` or `apiKey` and `email`.
- `apiURL`: Custom API URL. Default is `https://api.cloudflare.com`.
Supported methods:
- `getItem`: Maps to Read key-value pair `GET accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/values/:key_name`
- `hasItem`: Maps to Read key-value pair `GET accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/values/:key_name`. Returns `true` if `<parsed response body>.success` is `true`.
- `setItem`: Maps to Write key-value pair `PUT accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/values/:key_name`
- `removeItem`: Maps to Delete key-value pair `DELETE accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/values/:key_name`
- `getKeys`: Maps to List a Namespace's Keys `GET accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/keys`
- `clear`: Maps to Delete key-value pair `DELETE accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/bulk`
`cloudflare-kv-binding`
Store data in Cloudflare KV and access it from worker bindings.
Note: This driver only works in a Cloudflare Worker environment! Use `cloudflare-kv-http` for other environments.
You need to create and assign a KV namespace. See KV Bindings for more information.
import { createStorage } from 'unstorage'
import cloudflareKVBindingDriver from 'unstorage/drivers/cloudflare-kv-binding'
// Using binding name to be picked from globalThis
const storage = createStorage({
driver: cloudflareKVBindingDriver({ binding: 'STORAGE' })
})
// Directly setting binding
const storage = createStorage({
driver: cloudflareKVBindingDriver({ binding: globalThis.STORAGE })
})
// Using from Durable Objects and Workers using Modules Syntax
const storage = createStorage({
driver: cloudflareKVBindingDriver({ binding: this.env.STORAGE })
})
// Using outside of Cloudflare Workers (like Node.js)
// Use cloudflare-kv-http!
Options:
- `binding`: KV binding or name of the namespace. Default is `STORAGE`.
`github`
Map files from a remote GitHub repository (readonly).
This driver fetches all possible keys once and keeps them in cache for 10 minutes. Because of the GitHub rate limit, it is highly recommended to provide a token; the limit only applies to fetching keys.
import { createStorage } from 'unstorage'
import githubDriver from 'unstorage/drivers/github'
const storage = createStorage({
driver: githubDriver({
repo: 'nuxt/framework',
branch: 'main',
dir: '/docs/content'
})
})
Options:
- `repo`: GitHub repository. Format is `username/repo` or `org/repo`. (Required!)
- `token`: GitHub API token. (Recommended!)
- `branch`: Target branch. Default is `main`
- `dir`: Use a directory as driver root.
- `ttl`: Filenames cache revalidate time. Default is `600` seconds (10 minutes)
- `apiURL`: GitHub API domain. Default is `https://api.github.com`
- `cdnURL`: GitHub RAW CDN URL. Default is `https://raw.githubusercontent.com`
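Reading works like any other driver; keys correspond to files under `dir`. A hedged sketch (exact key naming is left to the driver):
```js
// List the files that were discovered under /docs/content
const keys = await storage.getKeys()

// Fetch the raw contents of one of them
const firstDoc = await storage.getItem(keys[0])
```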
Making custom drivers
It is possible to extend unstorage by creating custom drivers.
- Keys are always normalized in the `foo:bar` convention
- Mount base is removed
- Returning a promise or a direct value is optional
- You should clean up any open watchers and handlers in `dispose`
- The value returned by `getItem` can be a serializable object or a string
- Having a `watch` method disables the default handler for the mountpoint. You are responsible for emitting events on `getItem`, `setItem` and `removeItem`.
See src/drivers for inspiration on how to implement them. Methods can return either a promise or a direct value.
Example:
import { createStorage, defineDriver } from 'unstorage'
const myStorageDriver = defineDriver((_opts) => {
return {
async hasItem (key) {},
async getItem (key) {},
async setItem(key, value) {},
async removeItem (key) {},
async getKeys() {},
async clear() {},
async dispose() {},
// async watch(callback) {}
}
})
const storage = createStorage({
driver: myStorageDriver()
})
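For a more concrete picture, here is a hedged sketch of a trivial custom driver backed by a plain `Map` (the structure is illustrative, not how the built-in memory driver is implemented):
```js
import { createStorage, defineDriver } from 'unstorage'

const mapDriver = defineDriver((_opts) => {
  // Backing store for this sketch
  const data = new Map()
  return {
    hasItem (key) { return data.has(key) },
    getItem (key) { return data.has(key) ? data.get(key) : null },
    setItem (key, value) { data.set(key, value) },
    removeItem (key) { data.delete(key) },
    getKeys () { return Array.from(data.keys()) },
    clear () { data.clear() },
    dispose () { data.clear() }
  }
})

const storage = createStorage({ driver: mapDriver() })
await storage.setItem('foo:bar', 'baz')
await storage.getItem('foo:bar') // 'baz'
```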
Contribution
- Clone the repository
- Install dependencies with `yarn install`
- Use `yarn dev` to start the jest watcher verifying changes
- Use `yarn test` before pushing to ensure all tests and lint checks are passing