
Define and setup monitoring of Key Performance Indicators (KPIs) for Bisq #3

Open
ripcurlx opened this issue Nov 15, 2017 · 4 comments

ripcurlx commented Nov 15, 2017

Learning Goal

To be able to tell whether the changes in a new version of Bisq were a success or a failure, we first have to define what these words mean. If we define success simply as an increase in volume, that increase probably has little to do with the actual changes and more to do with the current market situation in the cryptocurrency space, or simply with public holidays in our main markets. The goal of setting up KPIs is to give everyone a tool to measure the success of any change to the platform, to find problems more quickly, and to tackle issues we may not even be aware of at the moment.

This issue should be the starting point for a discussion on what we want to measure and how.

Hypothesis

If we track the following KPIs [1] and make them accessible to everyone, then everyone should be able to see whether a change to the platform was a success or a failure. These KPIs should cover different parts of the client/network without having to be changed for every feature we roll out.

[1] KPIs: To differentiate between versions of the client, we need to be able to segment every metric we collect by the client's version number. Additionally, we want to collect the data on a daily/hourly basis to be able to reason about external incidents that occur. To measure conversion rates within the client, we could collect data per onion address and store it by hashing the address with a salt.
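As a minimal sketch of how such a salted identifier could be derived (class and method names are made up for illustration, not existing Bisq code), with the salt never leaving the user's machine:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical sketch: derive a pseudonymous client identifier from the onion
// address and a salt that is generated locally and never leaves the machine.
public class ClientIdSketch {

    // Generate a random salt once and keep it only in the local data directory.
    public static byte[] newSalt() {
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // distinct_id = SHA-256(onionAddress || salt), Base64-encoded.
    // Without the salt, the onion address cannot be recovered from the id.
    public static String distinctId(String onionAddress, byte[] salt) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(onionAddress.getBytes(StandardCharsets.UTF_8));
        digest.update(salt);
        return Base64.getEncoder().encodeToString(digest.digest());
    }
}
```

Changing or regenerating the salt would rotate the identifier, which is also how an opt-out or reset could be handled.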

| Metric | Why? |
| --- | --- |
| Number of downloads per timeframe | Acquisition: To know if anyone downloaded/updated the client |
| Number of users who started the client per timeframe | Activation: Of those who downloaded this version, how many actually start the client afterwards? |
| Number of users who interacted with the client again per timeframe | Retention: Of those who started the client once, how many come back within a certain timeframe (1st, 7th, 14th day retention)? |
| Number of users who create an offer | Revenue: Of those who use the client, how many actually create an offer? |
| Number of users who write a review/give feedback | Referral: Of those who use the client, how many are so satisfied with their experience that they write a review or invite a friend? |

Metric

The quantitative metrics we are collecting could give us, for example, the following actionable insights:

  • # of downloads: How many downloads do we have on certain days or for certain versions? This could give us a hint about the success of certain marketing activities (we'll soon be able to measure conversions on our website as well, with segmentation by referral) and/or about the acceptance of certain versions of our client.
  • # of users who start the client: What percentage of users is able/willing to start the client after downloading it? Do we have Tor/Bisq-network problems? Does it take so long to connect that users drop out? ...
  • # of users who start the client more than once: What percentage of users use the client multiple times over a certain timeframe? After what time do they drop out? How many users do we lose after 1, 7, 14 days? If the retention rate is low, why is that? ...
  • # of users who create an offer: What percentage of users that use the client create an offer? Why is it that high/low? Do we have usability/trust/privacy issues?
  • # of users that recommend/review/share the client: Do users like the client/idea so much that they follow a link to a review site, leave feedback within the client, ...? What can we do to create a scalable referral source that is driven by the community? What can we do to increase social proof/trust?

Experiment

The collected metrics should be accessible via an API with a JSON response that can be used, e.g., in a Google Spreadsheet or other analytics tools.
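As a rough sketch of what such an endpoint could return (the path, port, field names and numbers below are purely illustrative, not an existing Bisq API), using only the JDK's built-in HTTP server:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of a read-only metrics endpoint serving aggregated,
// already-anonymized counts segmented by day and app version.
public class MetricsApiSketch {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/metrics/daily", exchange -> {
            // Placeholder numbers for illustration only.
            String json = "[{\"date\":\"2017-11-21\",\"app_version\":\"v0.6.0\","
                    + "\"downloads\":120,\"app_starts\":95,\"offers_created\":14}]";
            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

A spreadsheet or analytics tool could then poll such an endpoint periodically and chart the counts per day and per version.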

Result

- What happened?
- What data did we collect?
- Anything unexpected?

Next Step

- Pivot or persevere?
- Another experiment for this goal?
- Do we need to clean up?

@ripcurlx ripcurlx added the Epic label Nov 15, 2017
ripcurlx (Contributor, Author) commented:

Thanks to @ManfredKarrer and @cbeams for giving feedback upfront on the first draft of metrics!

ripcurlx (Contributor, Author) commented:

Based on this tracking/event log we would also be able to see:

  • Average Number of Trades per User per Timeframe
  • Average Revenue Per User (ARPU)
  • Average Revenue Per Paying User (ARPPU)
  • Daily Active Users (DAU)
  • Monthly Active Users (MAU)

Besides these standard KPIs, we could also think about metrics that represent the core (privacy, trust, security) of our product (credits to @cbeams and @ManfredKarrer), such as:

  • Time since last chargeback
  • $ volume traded on Bisq in total
  • Total downloads
  • Uptime

As always, if you have any questions about any of this, feel free to chat with us in the #analytics channel at https://bisq.network/slack-invites. 😄


ripcurlx commented Nov 21, 2017

After discussing with @mrosseel, I'd like to go into further detail on the events we should collect to have actionable metrics. Every event should, if possible, have a distinct identifier (e.g. distinct_id) so that it is possible to see at what point in the app users run into problems. Without a distinct_id it is not possible for us to differentiate between events sent by different users. Of course, the generation of the identifier (e.g. hashing the onion address with a salt that is only known to the user) has to be secure and protect the privacy of the user. Users should be able to opt out of this at any time or to change the salt of the identifier. In addition to the distinct_id, a timestamp has to be stored for every event to be able to monitor events over time. Last but not least, we'll need to store the app version for each event to differentiate behavior between app versions (see the sketch after the event lists below).

Events (with properties where applicable):

  • Started App
  • Created Offer {"type":"Sell|Buy","amount":"0.25"}
  • Accepted Offer {"type":"Sell|Buy", "amount":"0.25"}
  • Completed Trade {"type":"Sell|Buy", "amount":"0.25"}
  • Reviewed App {"channel":"cryptocompare.com"}
  • Submitted Feedback {"type":"positive|negative", "message":"I love Bisq!"}
  • Stopped App

The following events can be collected without connecting them to a user id while still allowing us to draw conclusions:

  • Downloaded App {"app_version":"v0.6.0"}
  • Updated App {"app_version":"v0.6.0"}
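To make this concrete, here is a minimal sketch of how such an event record could look inside the client, assuming the salted distinct_id described above (all class and field names are hypothetical, not existing Bisq code):

```java
import java.time.Instant;
import java.util.Map;

// Hypothetical event envelope: every event carries the pseudonymous distinct_id
// (or null for anonymous events like "Downloaded App"), a timestamp, the app
// version and the event-specific properties.
public class AnalyticsEvent {
    private final String distinctId;               // salted hash of the onion address, or null
    private final Instant timestamp;               // when the event occurred
    private final String appVersion;               // e.g. "v0.6.0"
    private final String name;                     // e.g. "Created Offer"
    private final Map<String, String> properties;  // event-specific key/value pairs

    public AnalyticsEvent(String distinctId, String appVersion, String name,
                          Map<String, String> properties) {
        this.distinctId = distinctId;
        this.timestamp = Instant.now();
        this.appVersion = appVersion;
        this.name = name;
        this.properties = properties;
    }
}

// Usage, e.g. when the user publishes an offer:
// new AnalyticsEvent(distinctId, "v0.6.0", "Created Offer",
//         Map.of("type", "Sell", "amount", "0.25"));
```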

Topics to discuss:

  • What tools should be used for analytics?
  • How can we prevent an attacker who is monitoring the peer-to-peer network and the communication to the analytics server from drawing conclusions?

ripcurlx (Contributor, Author) commented:

Regarding analytics tools for getting meaning out of all the data, I would use Mixpanel, as it is quite sophisticated and we would still be able to get everything we need within the free plan.
