release (#313)
* Fabo/fix gas (#250)

* fix gas estimate

* linted

* fixed test

* do not keep data sources (#251)

* track failing transactions in Sentry (#249)

* correctly set the tx schema for a failing tx (#248)

* Fabo/remove per block caching as not working (#247)

* remove per block caching as not working

* fix memoized results

Co-authored-by: Ana G. <[email protected]>

* delete perblockcachedatasource (#253)

* Ana/fix balances in actionmodal (#255)

* fix action modal available balance

* include regen

* use dictionary for denomlookup

* use correct events for received txs (#257)

* enable account creation for some networks (#252)

* network update time metric added (#256)

* network update time metric added

* added missing dep

Co-authored-by: Fabian <[email protected]>

* Fix proposal deposit (#261)

* Remove denom handling from getDeposit()

* Revert undesired change

* delete package-lock.json

* localtestnet config change (#265)

* Ana/handle "address not from this network" error (#263)

* add check address function for all queries

* apply suggestions

* Ana/add fiatvalue to balances query (e-Money) (#262)

* preparation

* more preparation

* add fiatvalue field to balances query

* fix get account info

* apply suggestions

* apply one last suggestion

* suggestions+

Co-authored-by: Fabian <[email protected]>

* Ana/emoney fix expected returns with inflation and totalbacked (#243)

* fix expected returns with inflation and supply

* minor fixes. dictionary

* query exchange rates from emoney api

* fix infinite expected returns

* convert api url to const

* add eur value to totalbackedvalue. totalngm gains

* add important comment

* finish calculation

* lint

* catch errors with sentry

Co-authored-by: Fabian <[email protected]>

* readd coin conversion (#268)

* delete amount field (#274)

* Fabo/increase gas again (#271)

* increase gas again

* fixed test

* Fabo/load all txs (even if more than first page in response) (#270)

* load all txs (even if more than first page in response)

* improved handling of txs

* missing renaming

* fixed paginated load

* add pagination fix also to cosmosV0-source

Co-authored-by: iambeone <[email protected]>
Co-authored-by: Ana G. <[email protected]>

* fixing issue with multiple senders in one event (#273)

* fixing issue with multiple senders in one event

* Update lib/source/cosmosV2-source.js

Co-authored-by: Fabian <[email protected]>

* Fabo/allow signing for terra + emoney (#267)

* allow signing for terra

* readd coin conversion

* enable actions for terra

* fix correct terra testnet url

* comments and guards

* enabled more txs for emoney and fixed broadcasting

* added a catch for wrongly formatted broadcast urls

* recover default field. change some network titles (#277)

* Fabo/add network data to API (#278)

* non-destructive introduction of better address prefix wording

* added address creator to API

* adjusted test

* added ledger app to networks config

* add icon property to schema (#281)

* add icon property to schema

* fix network schema validation

Co-authored-by: Ana G. <[email protected]>

* filter out validator specific txs (#279)

* Ana/balances coinreducer good fix (#269)

* balances coinreducer good fix

* refactored fiat value logic

Co-authored-by: Fabian <[email protected]>

* Create network_integration.md

* Update network_integration.md

* Update network_integration.md

* Fabo/avoid 500 errors (#288)

* avoid using the latest query

* cleanup

* Ana/filter validator tx cross network and add txvalue reducer (#285)

* filter validators cross network

* add value reducer. necessary for multi claim txs

* add validator txs filter also for cosmosv0 source

* filter and make array only claim rewards msg value

* filter txs by whitelist

* change length in multi claim reward reducer

* add withdrawvalidators

* replace dictionary for set

* refactor transaction snippet. avoid repetition

* Ana/emoney upgrade (mergeable) (#282)

* update emoney api_url

* fix denom. add default fiat currency

* fix rpc endpoint

* fix value (my bad) (#293)

* fix value (my bad)

* trigger another ci flow

* erase space

* set correct new chain id (#294)

* restart API

* restart API

* fix pr alert (#297)

* Fabo/298 tendermint reconnect (#300)

* reconnect on tendermint disconnect

* cleanup

* comments

* Update cosmos-node-subscription.js

* Fabo/299 trigger a chain hangup error (#301)

* trigger a chain hangup error

* increase chain hangup time

* Apply suggestions from code review

* Fabo/store validator addresses (#296)

* add validator addresses to db

* linted

* ignore in local dev

* revert

* fixed fetch

* comment

* refactored db into constructor

* cleanup

* add clearTimeout to avoid reconnection hell (#306)

* add clearTimeout to avoid reconnection hell

* removed console.log

* Aleksei/luniedb replaced (#303)

* add validator addresses to db

* linted

* ignore in local dev

* revert

* fixed fetch

* comment

* refactored db into constructor

* cleanup

* replaced luniedb

* linted

Co-authored-by: Fabian <[email protected]>

* disable reconnection logic

* clear polling interval for tendermint connection

* simple api fixes (#310)

* Fabo/remove tendermint (#311)

* remove tendermint

*  fixed empty blockHeight issue

* small refactoring

* catch on fetches to get logging

* delay block updates

* add retry logic

* refactored getBlockByHeight

* remove pm2 dep

* validator profiles were returned as array (#312)

Co-authored-by: Ana G. <[email protected]>
Co-authored-by: Aleksey Rudometov <[email protected]>
Co-authored-by: Mario Pino <[email protected]>
Co-authored-by: Jordan Bibla <[email protected]>
5 people authored Feb 12, 2020
1 parent 8ed1f54 commit 406a364
Showing 13 changed files with 144 additions and 734 deletions.
4 changes: 1 addition & 3 deletions lib/apollo.js
@@ -6,7 +6,6 @@ const resolvers = require('./resolvers')
const { networkList } = require('./networks')
const NetworkContainer = require('./network-container')

const LunieDBAPI = require('./source/luniedb-source')
const config = require('../config')

const networks = networkList.map(network => new NetworkContainer(network))
@@ -19,8 +18,7 @@ function getDataSources(networks) {
})

return {
...sources,
LunieDBAPI: new LunieDBAPI()
...sources
}
}
}
118 changes: 33 additions & 85 deletions lib/block-listeners/cosmos-node-subscription.js
@@ -1,6 +1,4 @@
const _ = require('lodash')
const io = require('@pm2/io')
const Tendermint = require('../rpc/tendermint')
const {
publishBlockAdded,
publishUserTransactionAdded,
@@ -10,115 +8,65 @@ const Sentry = require('@sentry/node')
const database = require('../database')
const config = require('../../config.js')

const WAIT_FOR_BLOCK_DELAY = 5000
const POLLING_INTERVAL = 1000
const EXPECTED_MAX_BLOCK_WINDOW = 120000
// apparently the cosmos db takes a while to serve the content after a block has been updated
// if we don't do this, we run into errors as the data is not yet available
const COSMOS_DB_DELAY = 2000

let reconnectionTimeout = {}

// This class establishes an rpc connection to Tendermint.
// This class polls for new blocks
// Used for listening to events, such as new blocks.
class CosmosNodeSubscription {
constructor(network, CosmosApiClass, store) {
this.network = network
this.cosmosAPI = new CosmosApiClass(network)
this.store = store
this.lastupdate = 0
this.metric = io.metric({
name: `${this.network.id}_update`
})
const networkSchemaName = this.network.id.replace(/-/g, '_')
this.db = new database(config)(networkSchemaName)
this.chainHangup = undefined
this.height = undefined

this.connectTendermint(this.network)
this.pollForNewBlock()
}

async connectTendermint(network) {
console.log('Connecting to Tendermint on', network.rpc_url)
// Create a RPC subscription for each network that will react to new block events.
Tendermint()
.connect(network.rpc_url)
.then(connectedClient => {
console.log('Connected to Tendermint on', network.rpc_url)
connectedClient.subscribe({ query: "tm.event='NewBlock'" }, event => {
// this tracks the block times
// issue: this will only trigger if there are actually blocks I guess
if (this.lastupdate) {
const diff = Date.now() - this.lastupdate
this.metric.set(diff)
}
this.lastupdate = Date.now()

setTimeout(
() => this.newBlockHandler(event.block.header.height),
WAIT_FOR_BLOCK_DELAY
)
async pollForNewBlock() {
this.pollingTimeout = setTimeout(async () => {
const block = await this.cosmosAPI.getBlockByHeight()

// if there are no new blocks for some time, trigger an error
// TODO: show this error automatically in the UI
if (this.chainHangup) clearTimeout(this.chainHangup)
this.chainHangup = setTimeout(() => {
console.error(`Chain ${this.network.id} seems to have halted.`)
Sentry.captureException(
new Error(`Chain ${this.network.id} seems to have halted.`)
)
}, EXPECTED_MAX_BLOCK_WINDOW)
})

// on connection lost, reconnect to tendermint + Sentry error
connectedClient.ondisconnect = () => {
console.log('Lost connection to Tendermint for', network.rpc_url)
if (this.height !== block.height) {
// apparently the cosmos db takes a while to serve the content after a block has been updated
// if we don't do this, we run into errors as the data is not yet available
setTimeout(() => this.newBlockHandler(block), COSMOS_DB_DELAY)

Sentry.withScope(function(scope) {
scope.setExtra('network', network.id)
scope.setExtra('rpc_url', network.rpc_url)
Sentry.captureException(new Error(`Lost Tendermint connection`))
})
// need to clear previous timeout to avoid connection hell
clearTimeout(reconnectionTimeout[network.id])
reconnectionTimeout[network.id] = setTimeout(
() => this.connectTendermint(network),
3000
)
}
})
.catch(e => {
Sentry.withScope(function(scope) {
scope.setExtra('network', network.id)
scope.setExtra('rpc_url', network.rpc_url)
Sentry.captureException(e)
})
// the chain produced a block, so it didn't hang up
if (this.chainHangup) clearTimeout(this.chainHangup)
}

clearTimeout(reconnectionTimeout[network.id])
// if can't connect, retry
reconnectionTimeout[network.id] = setTimeout(
() => this.connectTendermint(network),
3000
)
})
this.pollForNewBlock()
}, POLLING_INTERVAL)

// if there are no new blocks for some time, trigger an error
// TODO: show this error automatically in the UI
this.chainHangup = setTimeout(() => {
console.error(`Chain ${this.network.id} seems to have halted.`)
Sentry.captureException(
new Error(`Chain ${this.network.id} seems to have halted.`)
)
}, EXPECTED_MAX_BLOCK_WINDOW)
}

// For each block event, we fetch the block information and publish a message.
// A GraphQL resolver is listening for these messages and sends the block to
// each subscribed user.
async newBlockHandler(height) {
if (height) {
Sentry.configureScope(function(scope) {
scope.setExtra('height', height)
})
}

const block = await this.cosmosAPI.getBlockByHeight({
blockHeight: height
async newBlockHandler(block) {
Sentry.configureScope(function(scope) {
scope.setExtra('height', block.height)
})
// in the case of height being undefined to query for latest
// eslint-disable-next-line require-atomic-updates
height = block.height

const validators = await this.cosmosAPI.getAllValidators(height)
const validators = await this.cosmosAPI.getAllValidators(block.height)
const validatorMap = await this.getValidatorMap(validators)
this.updateDBValidatorProfiles(validators)
this.store.update({ height, block, validators: validatorMap })
this.store.update({ height: block.height, block, validators: validatorMap })
publishBlockAdded(this.network.id, block)
// TODO remove, only for demo purposes
// publishEvent(this.network.id, 'block', '', block)
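The refactor above (Fabo/remove tendermint, #311) swaps the Tendermint websocket subscription for a plain polling loop with a chain-hangup watchdog. A minimal standalone sketch of that pattern follows; the class and callback names are illustrative, not the actual Lunie API:

```javascript
// Sketch of the polling pattern: fetch the latest block on an interval,
// report blocks with a new height, and arm a watchdog that fires if no
// new block arrives within the expected window.
class BlockPoller {
  constructor(api, { pollingInterval = 1000, maxBlockWindow = 120000 } = {}) {
    this.api = api // expected to expose async getBlockByHeight()
    this.pollingInterval = pollingInterval
    this.maxBlockWindow = maxBlockWindow
    this.height = undefined
    this.onNewBlock = () => {}
    this.onHalt = () => {}
  }

  start() {
    this.poll()
    // watchdog: if no new block arrives within the window, report a halted chain
    this.chainHangup = setTimeout(() => this.onHalt(), this.maxBlockWindow)
  }

  poll() {
    this.pollingTimeout = setTimeout(async () => {
      const block = await this.api.getBlockByHeight()
      if (this.height !== block.height) {
        this.height = block.height
        this.onNewBlock(block)
        // a block arrived, so push the hangup watchdog back
        clearTimeout(this.chainHangup)
        this.chainHangup = setTimeout(() => this.onHalt(), this.maxBlockWindow)
      }
      this.poll()
    }, this.pollingInterval)
  }

  stop() {
    // clearing both timers avoids orphaned timeouts piling up
    clearTimeout(this.pollingTimeout)
    clearTimeout(this.chainHangup)
  }
}
```

Clearing both timers in stop() mirrors the clearTimeout fix from #306 that avoided "reconnection hell" with stacked timers.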
10 changes: 6 additions & 4 deletions lib/database/helpers.js
@@ -15,7 +15,7 @@ const graphQLQuery = ({ hasura_url, hasura_admin_key }) => async query => {

if (data.errors || data.error) {
console.error('Query failed:', query)
console.error('GraphQL query failed:', data.error)
console.error('GraphQL query failed:', data.error || data.errors)
throw new Error('GraphQL query failed')
}

@@ -94,17 +94,19 @@ const read = ({ hasura_url, hasura_admin_key }) => schema => async (
filter
) => {
keys = Array.isArray(keys) ? keys : [keys]
// schema could be set or not
let schema_prefix = schema ? schema + '_' : ''

const query = `
query ${schema}_${queryName} {
${schema}_${table}${filter ? `(${filter})` : ''} {
query ${schema_prefix}${queryName} {
${schema_prefix}${table}${filter ? `(${filter})` : ''} {
${keys.join('\n')}
}
}
`

const res = await graphQLQuery({ hasura_url, hasura_admin_key })(query)
return res.data[`${schema}_${table}`]
return res.data[`${schema_prefix}${table}`]
}

module.exports = {
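The helpers.js change above makes the schema prefix optional, so tables that live outside a network schema (such as maintenance) can be read without one. A simplified sketch of the resulting query construction, reduced to pure string building with no Hasura round trip:

```javascript
// Sketch of the schema-prefixed GraphQL query builder from helpers.js.
// `schema` may be unset; in that case no prefix is applied.
function buildReadQuery(schema, queryName, table, keys, filter) {
  keys = Array.isArray(keys) ? keys : [keys]
  const schemaPrefix = schema ? schema + '_' : ''
  return `
    query ${schemaPrefix}${queryName} {
      ${schemaPrefix}${table}${filter ? `(${filter})` : ''} {
        ${keys.join('\n')}
      }
    }
  `
}
```

With a schema of `cosmos_hub` and a table of `validatorprofiles` this yields a `cosmos_hub_validatorprofiles` selection; with no schema it queries the bare table name, which is what the global maintenance read relies on.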
17 changes: 16 additions & 1 deletion lib/database/index.js
@@ -1,11 +1,26 @@
const { insert, read } = require('./helpers')
const { getValidatorsInfo, getMaintenance } = require('./methods')

function database({ hasura_url, hasura_admin_key }) {
return schema => {
const methods = {
insert: insert({ hasura_url, hasura_admin_key })(schema),
upsert: insert({ hasura_url, hasura_admin_key }, true)(schema),
read: read({ hasura_url, hasura_admin_key })(schema)
read: read({ hasura_url, hasura_admin_key })(schema),
getValidatorsInfo: getValidatorsInfo({ hasura_url, hasura_admin_key })(
schema
),
getValidatorInfoByAddress: async validatorId => {
const validatorInfo = await getValidatorsInfo({
hasura_url,
hasura_admin_key
})(schema)(validatorId)
return validatorInfo[0]
},
getMaintenance: getMaintenance({
hasura_url,
hasura_admin_key
})(schema)
}

return {
34 changes: 34 additions & 0 deletions lib/database/methods.js
@@ -0,0 +1,34 @@
const { read } = require('./helpers')

const getValidatorsInfo = ({
hasura_url,
hasura_admin_key
}) => schema => async validatorId => {
return await read({
hasura_url,
hasura_admin_key
})(schema)(
`validatorprofiles`,
`validatorprofiles`,
['operator_address', 'name', 'picture'],
validatorId ? `where: {operator_address: {_eq: "${validatorId}"}}` : false
)
}
const getMaintenance = ({
hasura_url,
hasura_admin_key
}) => schema => async () => {
return await read({
hasura_url,
hasura_admin_key
})(schema)(`maintenance`, `validatorprofiles`, [
'id',
'message',
'show',
'type'
])
}
module.exports = {
getValidatorsInfo,
getMaintenance
}
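The new methods.js uses a curried shape throughout: connection config first, then schema, then the per-call arguments. A hedged sketch of that pattern, with `fetchRows` standing in for the real Hasura-backed read() helper:

```javascript
// Sketch of the curried method shape from methods.js: config => schema => args.
// `fetchRows` is a stand-in for the read() helper, injected here so the
// currying itself can be shown (and tested) without a database.
function makeGetValidatorsInfo(fetchRows) {
  return ({ hasura_url, hasura_admin_key }) => schema => async validatorId => {
    // an optional validatorId narrows the query to one operator address
    const filter = validatorId
      ? `where: {operator_address: {_eq: "${validatorId}"}}`
      : false
    return fetchRows({ hasura_url, hasura_admin_key }, schema, 'validatorprofiles', filter)
  }
}
```

Pre-applying config and schema is what lets database/index.js hand resolvers a ready-made `getValidatorsInfo(validatorId)` without threading connection details through every call site.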
32 changes: 16 additions & 16 deletions lib/resolvers.js
@@ -4,6 +4,13 @@ const { encodeB32, decodeB32 } = require('./tools')
const { UserInputError, withFilter } = require('apollo-server')
const { formatBech32Reducer } = require('./reducers/livepeerV0-reducers')
const { networkList, networkMap } = require('./networks')
const database = require('./database')
const config = require('../config.js')

function createDBInstance(network) {
const networkSchemaName = network ? network.replace(/-/g, '_') : false
return new database(config)(networkSchemaName)
}

function remoteFetch(dataSources, networkId) {
if (dataSources[networkId]) {
@@ -26,12 +33,9 @@ function localStore(dataSources, networkId) {
}

// TODO updating the validators with the profiles should happen already in the store so we don't query the db all the time
async function getValidatorInfosMap(dataSources, networkId) {
const validatorInfo = await dataSources.LunieDBAPI.getValidatorsInfo(
networkId
)
async function getValidatorInfosMap(networkId) {
const validatorInfo = await createDBInstance(networkId).getValidatorsInfo()
const validatorInfoMap = keyBy(validatorInfo, 'operator_address')

return validatorInfoMap
}

@@ -58,7 +62,7 @@ async function validators(
if (activeOnly) {
validators = validators.filter(({ status }) => status === 'ACTIVE')
}
const validatorInfoMap = await getValidatorInfosMap(dataSources, networkId)
const validatorInfoMap = await getValidatorInfosMap(networkId)
validators = validators.map(validator =>
enrichValidator(validatorInfoMap[validator.operatorAddress], validator)
)
@@ -75,10 +79,9 @@ }
}

async function validator(_, { networkId, operatorAddress }, { dataSources }) {
const validatorInfo = await dataSources.LunieDBAPI.getValidatorInfoByAddress(
operatorAddress,
const validatorInfo = await createDBInstance(
networkId
)
).getValidatorInfoByAddress(operatorAddress)

const validator = localStore(dataSources, networkId).validators[
operatorAddress
@@ -117,7 +120,7 @@ async function delegations(
dataSources,
networkId
).getDelegationsForDelegatorAddress(delegatorAddress, validatorsDictionary)
const validatorInfoMap = await getValidatorInfosMap(dataSources, networkId)
const validatorInfoMap = await getValidatorInfosMap(networkId)

return Promise.all(
delegations.map(async delegation => ({
@@ -140,7 +143,7 @@ async function undelegations(
dataSources,
networkId
).getUndelegationsForDelegatorAddress(delegatorAddress, validatorsDictionary)
const validatorInfoMap = await getValidatorInfosMap(dataSources, networkId)
const validatorInfoMap = await getValidatorInfosMap(networkId)

return Promise.all(
undelegations.map(async undelegation => ({
@@ -216,9 +219,7 @@ const resolvers = {
block: (_, { networkId, height }, { dataSources }, { cacheControl }) => {
const maxAge = height ? 60 : 10
cacheControl.setCacheHint({ maxAge })
return remoteFetch(dataSources, networkId).getBlockByHeight({
blockHeight: height
})
return remoteFetch(dataSources, networkId).getBlockByHeight(height)
},
network: (_, { id }) => {
const network = networkMap[id]
@@ -248,8 +249,7 @@ const resolvers = {
})
return networks
},
maintenance: (_, __, { dataSources }) =>
dataSources.LunieDBAPI.getMaintenance(),
maintenance: () => createDBInstance().getMaintenance(),
balances: async (
_,
{ networkId, address, fiatCurrency },
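The resolvers.js changes replace the LunieDBAPI data source with a per-call `createDBInstance(network)`, which normalizes the network id into a Hasura schema name. That normalization step can be sketched in isolation:

```javascript
// Sketch of the schema-name normalization inside createDBInstance:
// Hasura schema names cannot contain dashes, so a network id like
// "cosmos-hub-mainnet" is mapped to "cosmos_hub_mainnet". A missing
// network id maps to false, meaning "no schema prefix" (used by the
// global maintenance query).
function toSchemaName(networkId) {
  return networkId ? networkId.replace(/-/g, '_') : false
}
```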
