mongobetween is a lightweight MongoDB connection pooler written in Golang. Its primary function is to handle a large number of incoming connections and multiplex them across a smaller connection pool to one or more MongoDB clusters.
mongobetween is used in production at Coinbase. It is currently deployed as a Docker sidecar alongside a Rails application using the Ruby Mongo driver, connecting to a number of sharded MongoDB clusters. It was designed to connect to mongos routers, which are responsible for server selection for read/write preferences (connecting directly to a replica set's mongod instances hasn't been battle tested).
mongobetween listens for incoming connections from an application and proxies any queries to the MongoDB Go driver, which is connected to a MongoDB cluster. It also intercepts any ismaster commands from the application and responds with "I'm a shard router (mongos)", without proxying. This means mongobetween appears to the application as an always-available MongoDB shard router, and any MongoDB connection issues or failovers are handled internally by the Go driver.
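As a rough sketch of that interception idea (not mongobetween's actual code), a proxy can answer an ismaster command with a handcrafted BSON reply identifying itself as a mongos; the msg: "isdbgrid" field is what drivers check to detect a shard router, and the wire-version numbers below are placeholders:

package main

import (
    "fmt"

    "go.mongodb.org/mongo-driver/bson"
)

// buildIsMasterReply sketches the kind of reply a proxy could return for an
// intercepted ismaster command. Values are illustrative, not copied from
// mongobetween; "isdbgrid" is the marker drivers use to recognize a mongos.
func buildIsMasterReply() ([]byte, error) {
    doc := bson.D{
        {Key: "ismaster", Value: true},
        {Key: "msg", Value: "isdbgrid"}, // "I'm a shard router (mongos)"
        {Key: "maxWireVersion", Value: 6}, // placeholder wire versions
        {Key: "minWireVersion", Value: 0},
        {Key: "ok", Value: 1},
    }
    return bson.Marshal(doc)
}

func main() {
    raw, err := buildIsMasterReply()
    if err != nil {
        panic(err)
    }
    fmt.Println(bson.Raw(raw).String()) // print the reply as extended JSON
}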
go install github.com/coinbase/mongobetween
Usage: mongobetween [OPTIONS] address1=uri1 [address2=uri2] ...
-loglevel string
One of: debug, info, warn, error, dpanic, panic, fatal (default "info")
-network string
One of: tcp, tcp4, tcp6, unix or unixpacket (default "tcp4")
-password string
MongoDB password
-ping
Ping downstream MongoDB before listening
-pretty
Pretty print logging
-statsd string
Statsd address (default "localhost:8125")
-unlink
Unlink existing unix sockets before listening
-username string
MongoDB username
-dynamic string
File or URL to query for dynamic configuration
-enable-sdam-metrics
Enable SDAM(Server Discovery And Monitoring) metrics
-enable-sdam-logging
Enable SDAM(Server Discovery And Monitoring) logging
TCP socket example:
mongobetween ":27016=mongodb+srv://username:[email protected]/database?maxpoolsize=10&label=cluster0"
Unix socket example:
mongobetween -network unix "/tmp/mongo.sock=mongodb+srv://username:[email protected]/database?maxpoolsize=10&label=cluster0"
Proxying multiple clusters:
mongobetween -network unix \
"/tmp/mongo1.sock=mongodb+srv://username:[email protected]/database?maxpoolsize=10&label=cluster1" \
"/tmp/mongo2.sock=mongodb+srv://username:[email protected]/database?maxpoolsize=10&label=cluster2"
The label query parameter in the connection URI is used to tag any statsd metrics or logs for that connection.
Passing a file or URL as the -dynamic argument allows somewhat dynamic configuration of mongobetween. Example supported file format:
{
  "Clusters": {
    ":12345": {
      "DisableWrites": true,
      "RedirectTo": ""
    },
    "/var/tmp/cluster1.sock": {
      "DisableWrites": false,
      "RedirectTo": "/var/tmp/cluster2.sock"
    }
  }
}
This will disable writes to the proxy served from address :12345, and redirect any traffic sent to /var/tmp/cluster1.sock to the proxy running on /var/tmp/cluster2.sock. This is useful for minimal-downtime migrations between clusters.
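If the dynamic configuration is produced by tooling rather than by hand, a small struct mirroring the JSON above keeps it consistent. This is only an illustration: the JSON field names are taken from the example, everything else is assumed:

package main

import (
    "encoding/json"
    "fmt"
    "log"
)

// ClusterConfig mirrors one entry of the "Clusters" map in the example above.
// The struct is illustrative; only the JSON field names come from the docs.
type ClusterConfig struct {
    DisableWrites bool   `json:"DisableWrites"`
    RedirectTo    string `json:"RedirectTo"`
}

type DynamicConfig struct {
    Clusters map[string]ClusterConfig `json:"Clusters"`
}

func main() {
    cfg := DynamicConfig{
        Clusters: map[string]ClusterConfig{
            ":12345":                 {DisableWrites: true},
            "/var/tmp/cluster1.sock": {RedirectTo: "/var/tmp/cluster2.sock"},
        },
    }
    out, err := json.MarshalIndent(cfg, "", "  ")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(out)) // written to the file or URL passed via -dynamic
}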
Current known missing features:
- Transaction server pinning
- Different cursors on separate servers with the same cursor ID value
mongobetween supports reporting health metrics to a local statsd sidecar, using the Datadog Go library. By default it reports to localhost:8125. The following metrics are reported:
- mongobetween.handle_message (Timing) - end-to-end time handling an incoming message from the application
- mongobetween.round_trip (Timing) - round trip time sending a request and receiving a response from MongoDB
- mongobetween.request_size (Distribution) - request size to MongoDB
- mongobetween.response_size (Distribution) - response size from MongoDB
- mongobetween.open_connections (Gauge) - number of open connections between the proxy and the application
- mongobetween.connection_opened (Counter) - connection opened with the application
- mongobetween.connection_closed (Counter) - connection closed with the application
- mongobetween.cursors (Gauge) - number of open cursors being tracked (for cursor -> server mapping)
- mongobetween.transactions (Gauge) - number of transactions being tracked (for client sessions -> server mapping)
- mongobetween.server_selection (Timing) - Go driver server selection timing
- mongobetween.checkout_connection (Timing) - Go driver connection checkout timing
- mongobetween.pool.checked_out_connections (Gauge) - number of connections checked out from the Go driver connection pool
- mongobetween.pool.open_connections (Gauge) - number of open connections from the Go driver to MongoDB
- mongobetween.pool_event.connection_closed (Counter) - Go driver connection closed
- mongobetween.pool_event.connection_pool_created (Counter) - Go driver connection pool created
- mongobetween.pool_event.connection_created (Counter) - Go driver connection created
- mongobetween.pool_event.connection_check_out_failed (Counter) - Go driver connection check out failed
- mongobetween.pool_event.connection_checked_out (Counter) - Go driver connection checked out
- mongobetween.pool_event.connection_checked_in (Counter) - Go driver connection checked in
- mongobetween.pool_event.connection_pool_cleared (Counter) - Go driver connection pool cleared
- mongobetween.pool_event.connection_pool_closed (Counter) - Go driver connection pool closed
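When no statsd or Datadog agent is running locally, a throwaway UDP listener is enough to watch these metrics during development (a sketch, assuming the default localhost:8125 target):

package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    // Listen on the default statsd address mongobetween reports to.
    addr, err := net.ResolveUDPAddr("udp", "localhost:8125")
    if err != nil {
        log.Fatal(err)
    }
    conn, err := net.ListenUDP("udp", addr)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    buf := make([]byte, 65535)
    for {
        n, _, err := conn.ReadFromUDP(buf)
        if err != nil {
            log.Fatal(err)
        }
        // Raw statsd lines, e.g. a mongobetween.round_trip timing sample.
        fmt.Print(string(buf[:n]))
    }
}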
mongobetween was built to address a connection storm issue between a high scale Rails app and MongoDB (see blog post). Due to Ruby MRI's global interpreter lock, multi-threaded web applications don't utilize multiple CPU cores. To achieve better CPU utilization, Puma is run with multiple workers (processes), each of which needs a separate MongoDB connection pool. This leads to a large number of connections to MongoDB, sometimes exceeding MongoDB's upstream connection limit of 128k connections.
mongobetween has reduced connection counts by an order of magnitude: spikes of up to 30k connections are now reduced to around 2k. It has also significantly reduced ismaster commands on the cluster, as there's only a single monitor goroutine per mongobetween process, instead of a monitor thread for each Ruby process.