With the near completion of Scribe (#114), we're ready to start indexing events for our analytics api. The current state of the analytics api is quite convoluted. analytics.synapseprotocol.com is currently broken on several chains and missing lots of data. You can see that code here, along with the explorer code here.
A second iteration of analytics, composed of synapse-indexer and analytics-api, requires too much complexity and is too stateful to deploy (which was part of the motivation for #114, along with issues like #153 popping up all over the place rather than in one place where they can be fixed all at once).
The finished product will be a graphql api that looks like this, implemented in go, but the first step is to replicate the indexer.
Let's walk through a few real bridging transactions and how they should be indexed. Since this is your first contribution, I'll run through some steps to get started further below:
The Indexing Process
Here we take an example from the live bridge and walk through the indexing process. Your indexer will take a yaml config file that should look something like the following. I only define two chains since those are the two used for the example. These config values will make sense as I go through the example.
The Config:
```yaml
chains:
  - id: 1 # chain id
    url: "http://127.0.0.1:8545" # rpc url
    contracts:
      # this is a list since in some cases we have multiple versions of the same contract. You'll need to define these as an enum somewhere
      - type: bridge
        # this will be sourced by the person writing the config from abi.receipt.blockNumber, e.g. this is from https://github.com/synapsecns/synapse-contracts/blob/master/deployments/mainnet/SynapseBridge.json
        start_block: 13033669
      # some contracts (really only bridgeconfig/poolconfig: an older iteration of bridge config) are only on ethereum
      - type: "bridgeconfig"
        address: "0x5217c83ca75559B1f8a8803824E5b7ac233A12a1"
        # see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/mainnet/BridgeConfigV3.json#L1100
        start_block: 14259367
      # an older version of bridge config
      - type: "bridgeconfig"
        address: "0xAE908bb4905bcA9BdE0656CC869d0F23e77875E7"
        start_block: 13949327
        # when we start using v3
        end_block: 14259367
  - id: 42161
    url: "http://127.0.0.1:8546"
    contracts:
      - type: bridge
        # see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/SynapseBridge.json#L2
        address: "0x6F4e8eBa4D337f874Ab57478AcC2Cb5BACdc19c9"
        # see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/SynapseBridge.json#L1462
        start_block: 657404
      - type: pool
        # https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/nUSDPoolV3.json#L2
        address: "0x9Dd329F5411466d9e0C488fF72519CA9fEf0cb40"
        # see: https://arbiscan.io/tx/0x500afe6cf8e927ccad7a8a2e01f7d3bfc2fa9ef3af6a55f841d71bd5b62c84d3, older deploys don't have the receipt so we pull it from the top right corner of the contract address in the explorer, arbiscan in this case
        start_block: 5152261
# url of the scribe service, should probably also be embeddable
scribe: http://scribe:1231
```
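The config comment above mentions defining the contract types as an enum. As a rough sketch of what that could look like in go (all names here are illustrative, not part of any existing package):

```go
package config

import "fmt"

// ContractType enumerates the kinds of contracts the indexer watches.
// The names here are illustrative; pick whatever fits the codebase.
type ContractType int

const (
	BridgeContract ContractType = iota
	BridgeConfigContract
	PoolContract
)

// ContractTypeFromString maps the yaml `type` field onto the enum.
func ContractTypeFromString(s string) (ContractType, error) {
	switch s {
	case "bridge":
		return BridgeContract, nil
	case "bridgeconfig":
		return BridgeConfigContract, nil
	case "pool":
		return PoolContract, nil
	default:
		return 0, fmt.Errorf("unknown contract type %q", s)
	}
}
```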
The Example:
Let's look at a live example. Here is a transaction which occurred on arbitrum. As we can see from the data, the user is bridging to ethereum.
Note: I've chosen the most complicated bridge type here; other types such as mint do not require bridgeconfig, etc.
Bridge Parsing
This transaction is going to trigger a few events that will get picked up by scribe in the contracts we watch. The first is the bridge event. This particular event triggered on the bridge is TokenRedeemAndRemove. We can see it contains the following items:
We now know that on ethereum, 0x59719d517208b306eA9c7a9FD90D6215163323Ee will receive a minimum of 5330566953 nusd (which will then be swapped for tokenIndexTo: 0 which is usdc) before 1662394851 (Monday, September 5, 2022 4:20:51 PM) on ethereum (chain id 1). If the swap can't be completed, the user will receive nusd on the other end which they can then trade for any token in the pool.
We can also look at the raw data (for most transactions this can't be used for indexing, because other contracts can call ours, but it is helpful for understanding the flow) and see the method called:
Pool Parsing:
Since it's a swapAndRedeemAndRemove, we can see exactly which methods the contract executes in L2BridgeZap:
In addition to being passed in the input, these are also emitted as a log that can be parsed by the abi we generated and then inserted.
We also have another event to index here: a swap.
We can see the raw swap data here. If we look at the swap, we can see exactly what happened. We're going to want to index this so we can calculate pool volume.
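To give a feel for the parsing step, here's a minimal sketch of decoding a raw swap log into a typed event with an abigen-generated filterer. The package path and type names (swap, SwapFlashLoanFilterer, SwapFlashLoanTokenSwap) are assumptions about what abigen will emit for your contracts, not existing code:

```go
package indexer

import (
	"github.com/ethereum/go-ethereum/core/types"

	// hypothetical abigen output package for the pool contract
	"github.com/synapsecns/sanguine/services/explorer/contracts/swap"
)

// parseSwapLog decodes a raw log into a typed TokenSwap event via the
// abigen-generated filterer. Package and type names are illustrative.
func parseSwapLog(filterer *swap.SwapFlashLoanFilterer, log types.Log) (*swap.SwapFlashLoanTokenSwap, error) {
	// ParseTokenSwap checks topic[0] against the event id and unpacks
	// the indexed and non-indexed fields into the generated struct.
	return filterer.ParseTokenSwap(log)
}
```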
The Receiving Chain
This transaction triggered a bridge that was then received at the other end. Let's take a look at the transaction here. We can see here that withdrawAndRemove was called.
One of the challenges of parsing transactions on the other end is that the pool address is never emitted directly:
We can see from the contract that in cases where the swap is not successful, we simply transfer the token (nusd in this case) to the user. Since there's nothing more to index here, we can finish up after just indexing the receiving TokenWithdrawAndRemove without any pool data.
Bridge Config
In cases where expectedOutput >= swapMinAmount (most cases), we'll also receive an event from a pool. But how do the validators know which pool to pass here? And why is the token address different from the one on the origin chain?
This is where bridgeconfig comes in. Two calls are made to BridgeConfigV3; in your case, these should be archive calls at the block_number of the transaction. First we call getTokenID(0x2913E812Cf0dcCA30FB28E6Cac3d2DCFF4497688, 42161). This is the token address in the call above and the chain id from above. This should be called on 0x5217c83ca75559B1f8a8803824E5b7ac233A12a1 rather than the other bridge config, since the current block number is greater than its start block. If this tx were between blocks 13949327 and 14259367, we'd use 0xAE908bb4905bcA9BdE0656CC869d0F23e77875E7 instead.
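As a sketch, an archive call like this in go pins the block number via bind.CallOpts. The binding name (bridgeconfig.BridgeConfigV3Caller) and its generated GetTokenID signature are assumptions based on typical abigen output:

```go
package indexer

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"

	// hypothetical abigen output package for BridgeConfigV3
	"github.com/synapsecns/sanguine/services/explorer/contracts/bridgeconfig"
)

// getTokenIDAtBlock resolves a token id from BridgeConfigV3 at a specific
// block. Pinning BlockNumber makes this an archive call, so the rpc node
// must retain historical state.
func getTokenIDAtBlock(ctx context.Context, caller *bridgeconfig.BridgeConfigV3Caller, token common.Address, chainID, blockNumber *big.Int) (string, error) {
	opts := &bind.CallOpts{
		Context:     ctx,
		BlockNumber: blockNumber, // state as of the tx's block
	}
	return caller.GetTokenID(opts, token, chainID)
}
```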
We can try this out on etherscan here. This won't be an archive call, but it's good enough for us to see what happened, since bridge config hasn't changed in the meantime. We can see the tokenID is nusd:
Now, let's figure out the token address we want to use on chainID 1 using the token id we just got:
This data corresponds to this struct, in order:
We can see here that the token address 0x1b84765de8b7566e4ceaf4d0fd3c5af52d3dde4f matches nusd on ethereum. Since this transaction is a swap, we want to query the pool config as well to see which pool we've swapped on (or attempted to swap on). Let's call getPoolConfig with the token address we received above. We can see the first argument is nusd and the second is a SwapFlashLoan contract. This is where the swap from nusd to usdc happened in our contract.
If we go back to the event logs for the tx we're inspecting here, we can see an event emitted by this contract. Our topic map will tell us this is RemoveLiquidityOne. We'll need to store this for swap analytics. We can also see the amount of tokens the user actually received this way and use that for volume calculations.
We can also see from the logs a TokenWithdrawAndRemove event. We'll want to index this.
One final thing to note: you can see the last indexed topic here is bytes32 kappa. Kappa is simply the keccak256(origin_tx_hash).
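In go, that computation is a one-liner with go-ethereum's crypto package. A sketch (note: whether the raw 32 hash bytes or the hex string is hashed should be verified against the validator implementation):

```go
package indexer

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// kappaFor computes the kappa used to correlate a destination-chain event
// with its origin transaction: keccak256(origin_tx_hash). This hashes the
// raw 32 bytes of the hash; verify against the validators whether they
// hash the bytes or the hex string.
func kappaFor(originTxHash common.Hash) common.Hash {
	return crypto.Keccak256Hash(originTxHash.Bytes())
}
```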
So in this transaction, we should've indexed the following:
- TokenRedeemAndRemove: on arbitrum
- TokenSwap: on arbitrum
- [TokenWithdrawAndRemove](https://github.com/synapsecns/synapse-contracts/blob/9e390f7c826ab09c48c3c8fe3d040226ee8b3aa0/contracts/bridge/SynapseBridge.sol#L108): on ethereum
- RemoveLiquidityOne: on ethereum

From this, we'll be able to compute a few things:
- We can compute the price of usdc and nusd against usdc.
- We can calculate the volume.
- We can calculate the fees earned.
Steps to building the service
Abigen
First, you're going to create a new service in services/explorer; next, you're going to need to generate some contracts. This readme will walk you through the process. (Note: prior to the merge of #166, you could've imported synapse-node and used its contracts. The topics file and the bridge folder generally are worth referencing.) I'd recommend adding the contracts repo as a submodule in order to abigen against them. I'd also recommend giving the contracts a versioned name, as it's quite possible we'll have to generate multiple versions in order to parse events against them. For instance, we've had several iterations of the BridgeConfig so far.
There are a few contracts you'll have to generate ABIs for in order to successfully track events from the bridge:
In general, all events from these contracts should be indexed in a standardized way (e.g. store all data in the db as structured data). Many of the bridge events are indexed here, so you should straight up be able to copy and paste the code. Ordinarily, copying and pasting code is a big no-no, but in this case, since we're deprecating synapse-node, it's fine. Crucially, you'll need the topicMap and the standardized parsing.
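For reference, the topicMap pattern is just a lookup from a log's first topic to an event type. A minimal sketch, assuming bridgeABIJSON holds the SynapseBridge abi string (the EventType names are illustrative):

```go
package indexer

import (
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

// EventType tags the bridge events we index. Illustrative only.
type EventType int

const (
	RedeemAndRemoveEvent EventType = iota
	WithdrawAndRemoveEvent
)

// buildTopicMap maps topic[0] of a log onto the event it encodes, using
// the event ids derived from the parsed contract abi.
func buildTopicMap(bridgeABIJSON string) (map[common.Hash]EventType, error) {
	parsed, err := abi.JSON(strings.NewReader(bridgeABIJSON))
	if err != nil {
		return nil, err
	}
	return map[common.Hash]EventType{
		parsed.Events["TokenRedeemAndRemove"].ID:   RedeemAndRemoveEvent,
		parsed.Events["TokenWithdrawAndRemove"].ID: WithdrawAndRemoveEvent,
	}, nil
}
```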
Config
Create a config parser for the config defined above; you should be able to use this file and the corresponding test. You'll use this to decide which contracts to index and their types.
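A minimal sketch of what the config types might look like, assuming gopkg.in/yaml.v2 and the yaml layout shown above (struct names are illustrative):

```go
package config

import (
	"os"

	"gopkg.in/yaml.v2"
)

// Config mirrors the yaml file shown earlier.
type Config struct {
	Chains []ChainConfig `yaml:"chains"`
	Scribe string        `yaml:"scribe"`
}

// ChainConfig holds the rpc url and watched contracts for one chain.
type ChainConfig struct {
	ID        int              `yaml:"id"`
	URL       string           `yaml:"url"`
	Contracts []ContractConfig `yaml:"contracts"`
}

// ContractConfig describes a single watched contract deployment.
type ContractConfig struct {
	Type       string `yaml:"type"`
	Address    string `yaml:"address"`
	StartBlock int64  `yaml:"start_block"`
	EndBlock   int64  `yaml:"end_block,omitempty"`
}

// LoadConfig reads and parses the yaml config at path.
func LoadConfig(path string) (cfg Config, err error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return cfg, err
	}
	err = yaml.Unmarshal(raw, &cfg)
	return cfg, err
}
```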
Scribe Client
Create a graphql client against scribe; @CryptoMaxPlanck should be able to walk you through this, but your goal is to be able to query continuously and index against the JSON. Your best bet here is going to be to use the raw JSON scalar and call UnmarshalJSON on the ethereum types, e.g. for logs this method. These can then be used to parse out events, like so.
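Since types.Log already implements json.Unmarshaler in go-ethereum, the decoding step could look something like this sketch (the function name and surrounding plumbing are assumptions):

```go
package indexer

import (
	"encoding/json"

	"github.com/ethereum/go-ethereum/core/types"
)

// decodeScribeLogs turns the raw JSON scalars returned by scribe into
// go-ethereum log types we can run through the topic map and filterers.
func decodeScribeLogs(rawLogs []json.RawMessage) ([]types.Log, error) {
	logs := make([]types.Log, 0, len(rawLogs))
	for _, raw := range rawLogs {
		var log types.Log
		// types.Log's UnmarshalJSON handles the hex-encoded fields.
		if err := json.Unmarshal(raw, &log); err != nil {
			return nil, err
		}
		logs = append(logs, log)
	}
	return logs, nil
}
```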
DB
I'd use DBService here for reference. You're going to want to store all these events in a format where they can easily be aggregated in real time. You'll need a tiny bit of additional data, namely the prices. I'd probably handle this with a SQL join.
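Purely as a sketch of the join idea, with entirely hypothetical table and column names (placeholder style assumes a postgres driver):

```go
package db

import "database/sql"

// volumeUSDQuery sketches the kind of real-time aggregation the schema
// should support: join indexed bridge events against a prices table.
// All table and column names here are hypothetical.
const volumeUSDQuery = `
SELECT SUM(e.amount * p.price_usd)
FROM bridge_events e
JOIN token_prices p
  ON p.token_address = e.token_address
 AND p.chain_id      = e.chain_id
WHERE e.block_time BETWEEN $1 AND $2;
`

// VolumeUSD computes bridged volume in usd over [from, to] (unix seconds).
func VolumeUSD(dbConn *sql.DB, from, to int64) (float64, error) {
	var volume sql.NullFloat64
	err := dbConn.QueryRow(volumeUSDQuery, from, to).Scan(&volume)
	return volume.Float64, err
}
```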
GraphQL server:
You should be able to straight up copy this schema. This doesn't include analytics methods, but should be a good start to the server.
The server should be run independently of the indexer.
### Description
This PR inits services/explorer, an indexer and service platform for analytics.
The specifics are as follows:
- basic contract generation via abigen (for contracts outlined in #167)
- basic config settings
- basic implementation of cli
- placeholders for db
**To Do**
- Verify Abigen process, add indexing functionality
- Integrate with scribe api
- More in [#167](#167)
### Metadata
Issue: [#167](#167)
PoIs: @trajan0x
Co-authored-by: Trajan0x <[email protected]>
Co-authored-by: Max Planck <[email protected]>