Handles Safe indexing events from the Transaction Service and delivers them as HTTP webhooks. This service should be connected to the Safe Transaction Service:
- Transaction service sends events to RabbitMQ.
- Events service holds a database of services to send webhooks to; filters such as chainId or eventType can be configured for each service.
- Events service connects to RabbitMQ and subscribes to the events. When an event matches the filters for a service, a webhook is posted.
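To make the filtering idea concrete, here is a small conceptual sketch in TypeScript. It is not the service's actual implementation; the field and type names are illustrative assumptions.

```typescript
// Conceptual sketch of filter matching; field names are illustrative and do
// not mirror the events service's internal database models.
interface WebhookSubscription {
  url: string;
  chainIds?: string[];    // empty/undefined means "all chains"
  eventTypes?: string[];  // empty/undefined means "all event types"
}

interface SafeEvent {
  type: string;
  chainId: string;
  [key: string]: unknown;
}

function matchesFilters(event: SafeEvent, sub: WebhookSubscription): boolean {
  const chainOk = !sub.chainIds?.length || sub.chainIds.includes(event.chainId);
  const typeOk = !sub.eventTypes?.length || sub.eventTypes.includes(event.type);
  return chainOk && typeOk;
}
```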
Available endpoints:
- /health/ -> Check health for the service.
- /admin/ -> Admin panel to edit database models.
- /events/sse/{CHECKSUMMED_SAFE_ADDRESS} -> Server-sent events (SSE) endpoint. If SSE_AUTH_TOKEN is defined, authentication will be enabled and the header Authorization: Basic $SSE_AUTH_TOKEN must be added to the request (see the consumer sketch after this list).
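For reference, a minimal sketch of consuming the SSE endpoint from Node 18+ using the global fetch API. The base URL, Safe address, and token handling are assumptions for illustration, and the SSE parsing is deliberately naive.

```typescript
// Minimal SSE consumer sketch (Node 18+, global fetch). EVENTS_SERVICE_URL and
// the default base URL are illustrative assumptions.
const baseUrl = process.env.EVENTS_SERVICE_URL ?? 'http://localhost:3000';
const safeAddress = '0x...'; // checksummed Safe address

async function listen(): Promise<void> {
  const headers: Record<string, string> = {};
  if (process.env.SSE_AUTH_TOKEN) {
    headers['Authorization'] = `Basic ${process.env.SSE_AUTH_TOKEN}`;
  }
  const response = await fetch(`${baseUrl}/events/sse/${safeAddress}`, { headers });
  if (!response.ok || !response.body) {
    throw new Error(`SSE connection failed: ${response.status}`);
  }
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  // Each SSE message arrives as "data: <json>\n\n"; parsing is kept naive here.
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value, { stream: true }));
  }
}

listen().catch(console.error);
```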
If you want to integrate with the events service, you need to:
- Build a REST API with an endpoint that can receive application/json requests (take a look at Events Supported).
- The endpoint must respond with an HTTP 202 status and nothing in the body.
- It should respond as soon as possible, as the events service will time out after 2 seconds; if multiple timeouts are detected, the service will stop sending requests to your endpoint. So you should receive the event, return an HTTP response, and then act upon it.
- Configuring HTTP Basic Auth on your endpoint is recommended so a malicious user cannot post fake events to your service. A receiver sketch follows this list.
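To illustrate these requirements, here is a minimal sketch of a receiving endpoint using Express in TypeScript. The route path, credential handling, and handleEvent function are assumptions for illustration, not part of the events service itself.

```typescript
// Minimal webhook receiver sketch (Express). Route, credentials, and
// handleEvent() are illustrative assumptions.
import express from 'express';

const app = express();
app.use(express.json());

const expectedAuth = `Basic ${process.env.WEBHOOK_BASIC_AUTH ?? ''}`;

app.post('/safe-events', (req, res) => {
  // Reject requests without the expected Basic Auth credentials.
  if (req.headers.authorization !== expectedAuth) {
    res.sendStatus(401);
    return;
  }
  // Acknowledge immediately: the events service times out after 2 seconds.
  res.sendStatus(202);
  // Process the event asynchronously, after the response has been sent.
  setImmediate(() => handleEvent(req.body));
});

function handleEvent(event: unknown): void {
  // Placeholder: verify the data against the Safe APIs and act on it.
  console.log('Received Safe event', event);
}

app.listen(8080);
```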
Some parameters are common to every event:
- address: Safe address.
- type: Event type.
- chainId: Chain id.
{
"address": "<Ethereum checksummed address>",
"type": "NEW_CONFIRMATION",
"owner": "<Ethereum checksummed address>",
"safeTxHash": "<0x-prefixed-hex-string>",
"chainId": "<stringified-int>"
}
{
"address": "<Ethereum checksummed address>",
"type": "EXECUTED_MULTISIG_TRANSACTION",
"safeTxHash": "<0x-prefixed-hex-string>",
"failed": "true" | "false",
"txHash": "<0x-prefixed-hex-string>",
"chainId": "<stringified-int>"
}
{
"address": "<Ethereum checksummed address>",
"type": "PENDING_MULTISIG_TRANSACTION",
"safeTxHash": "<0x-prefixed-hex-string>",
"chainId": "<stringified-int>"
}
{
"address": "<Ethereum checksummed address>",
"type": "DELETED_MULTISIG_TRANSACTION",
"safeTxHash": "<0x-prefixed-hex-string>",
"chainId": "<stringified-int>"
}
{
"address": "<Ethereum checksummed address>",
"type": "INCOMING_ETHER" | "OUTGOING_ETHER",
"txHash": "<0x-prefixed-hex-string>",
"value": "<stringified-int>",
"chainId": "<stringified-int>"
}
{
"address": "<Ethereum checksummed address>",
"type": "INCOMING_TOKEN" | "OUTGOING_TOKEN",
"tokenAddress": "<Ethereum checksummed address>",
"txHash": "<0x-prefixed-hex-string>",
"value": "<stringified-int>",
"chainId": "<stringified-int>"
}
{
"address": "<Ethereum checksummed address>",
"type": "INCOMING_TOKEN" | "OUTGOING_TOKEN",
"tokenAddress": "<Ethereum checksummed address>",
"txHash": "<0x-prefixed-hex-string>",
"tokenId": "<stringified-int>",
"chainId": "<stringified-int>"
}
{
"address": "<Ethereum checksummed address>",
"type": "MESSAGE_CREATED" | "MESSAGE_CONFIRMATION",
"messageHash": "<0x-prefixed-hex-string>",
"chainId": "<stringified-int>"
}
{
"type": "REORG_DETECTED",
"blockNumber": "<int>",
"chainId": "<stringified-int>"
}
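For consumers written in TypeScript, the payloads above can be modeled roughly as the following discriminated union. This is a sketch derived from the documented examples, not a type definition shipped by the service.

```typescript
// Sketch of the webhook payloads as a TypeScript union; derived from the
// documented examples above, not an official type published by the service.
interface BaseEvent {
  address: string;  // Ethereum checksummed Safe address
  chainId: string;  // stringified int
}

type SafeWebhookEvent =
  | (BaseEvent & { type: 'NEW_CONFIRMATION'; owner: string; safeTxHash: string })
  | (BaseEvent & {
      type: 'EXECUTED_MULTISIG_TRANSACTION';
      safeTxHash: string;
      failed: 'true' | 'false';
      txHash: string;
    })
  | (BaseEvent & { type: 'PENDING_MULTISIG_TRANSACTION'; safeTxHash: string })
  | (BaseEvent & { type: 'DELETED_MULTISIG_TRANSACTION'; safeTxHash: string })
  | (BaseEvent & { type: 'INCOMING_ETHER' | 'OUTGOING_ETHER'; txHash: string; value: string })
  | (BaseEvent & {
      type: 'INCOMING_TOKEN' | 'OUTGOING_TOKEN';
      tokenAddress: string;
      txHash: string;
      value?: string;   // ERC20 transfers
      tokenId?: string; // ERC721 transfers
    })
  | (BaseEvent & { type: 'MESSAGE_CREATED' | 'MESSAGE_CONFIRMATION'; messageHash: string })
  | { type: 'REORG_DETECTED'; blockNumber: number; chainId: string };
```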
Not currently.
No, this is only meant for companies running the Safe Transaction Service. You need to develop your own endpoint, as explained in How to integrate with the service.
Indexing can take 1-2 minutes in the worst cases and less than 15 seconds in good cases.
Currently no, and be aware that you may lose a webhook, for example due to network issues. We will work on resilience patterns such as retrying, or removing an integration if the service cannot deliver webhooks for some time.
If our systems go down, messages should be stored in our queue and sending should resume when the systems are back up (unless the queue overflows because the services have been down for a while, in which case some old messages are discarded).
Yes, and we can configure the chains you want to get events from.
You get webhooks for all Safes; this currently cannot be configured.
No, we would like to keep webhook information minimal. Querying the service afterwards is fine, but we are not planning on making the webhooks the source of information for the service. The idea of webhooks is to remove the need for polling the services.
How do you handle confirmed/unconfirmed blocks and reorgs? When do you send an event: after waiting for confirmations or immediately? If a transaction is removed due to a chain reorg, would you still send the event before it is confirmed?
We don't send notifications when a reorg happens. We send the events as soon as we detect them, without waiting for confirmations, so you should always go back to the API and make sure the data is what you expect. This events feature is built for notifying, so that people don't need to HTTP-poll our API; you shouldn't take the events as a source of truth, only as a signal to come back to the API (that's why we don't send a lot of information in the events).
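As an illustration of that pattern, a consumer might re-check an executed transaction against the Transaction Service before acting on it. The host, endpoint, and response field names below are assumptions for a mainnet deployment; adjust them for your chain and infrastructure.

```typescript
// Sketch: treat the webhook as a signal and re-check the Transaction Service
// before trusting the data. Host, endpoint path, and field names are assumed.
const TX_SERVICE_URL = 'https://safe-transaction-mainnet.safe.global';

async function verifyExecuted(safeTxHash: string): Promise<boolean> {
  const response = await fetch(
    `${TX_SERVICE_URL}/api/v1/multisig-transactions/${safeTxHash}/`,
  );
  if (!response.ok) return false;
  const tx = (await response.json()) as { isExecuted?: boolean; isSuccessful?: boolean };
  return Boolean(tx.isExecuted && tx.isSuccessful);
}
```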
Node 20 LTS is required.
$ npm install
Docker Compose is required to run RabbitMQ and Postgres.
cp .env.sample .env
docker compose up -d
# development
$ npm run start
# watch mode
$ npm run start:dev
# production mode
$ npm run start:prod
Note: It's important that web is not running during tests, as it can consume messages and tests will fail.
cp .env.sample .env
Simple way:
bash ./scripts/run_tests.sh
Manual way:
docker compose down
docker compose up -d rabbitmq db db-migrations
# unit tests
npm run test
# e2e tests
npm run test:e2e
# test coverage
npm run test:cov
By default, the local dockerized migrations database will be used (the test database should not be used, as it doesn't use migrations).
To use a custom database for migrations, set MIGRATIONS_DATABASE_URL
environment variable.
Remember to add the new database entities to ./src/datasources/db/database.options.ts.
bash ./scripts/db_generate_migrations.sh RELEVANT_MIGRATION_NAME