
Create A/B Test for website changes #883

Closed
ktjnyc opened this issue Mar 3, 2021 · 4 comments

Comments

@ktjnyc
Member

ktjnyc commented Mar 3, 2021

Dependencies

Issue for new map page #868
Issue for new landing page #862

Overview

We need to do an A/B test to see which one is better. A = Landing Page. B = Map.

Action Items

  • [ ]

Resources/Instructions

Update this with Figma link
Update this with UsabilityHub link

ktjnyc changed the title from "Create A/B Test" to "Create A/B Test for website changes" on Mar 3, 2021
@fancyham
Collaborator

fancyham commented Apr 15, 2021

Hi all, I think we might be able to employ other methods to validate whether our designs are improving the site, methods that are a better fit for a team with our current resources and infrastructure.

A/B tests are usually quite a large endeavor. They’re randomized, statistically-driven tests to see if a variant performs better than a control, typically with large numbers (often thousands) of users.

To do such a test, we’d need the following:

  1. Measurable performance and success criteria
  2. A randomizer that assigns each user to version A or B and tracks measurements for each
  3. A significant amount of web traffic for a target audience

My understanding is that we don’t really have these things right now, and like I said, I think this is overkill for most of what we need. I’ll suggest an alternative next.
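
To make item 2 concrete, here's a minimal sketch of what that randomizer could look like, assuming a TypeScript front end; the `/api/ab-metrics` endpoint is hypothetical, we'd still need to build and host something to collect the numbers:

```ts
// Minimal sketch of item 2 (the "randomizer"): assign each visitor to a
// variant once, remember it, and tag measurements with it so A and B
// numbers can be compared later. Hypothetical code, not part of our repo.

type Variant = "A" | "B"; // A = landing page, B = map

function getVariant(): Variant {
  const stored = localStorage.getItem("ab-variant");
  if (stored === "A" || stored === "B") return stored;
  const variant: Variant = Math.random() < 0.5 ? "A" : "B";
  localStorage.setItem("ab-variant", variant);
  return variant;
}

// Example measurement: report time-to-success tagged with the variant.
// "/api/ab-metrics" is an assumed endpoint; nothing like it exists today.
function reportSuccess(elapsedMs: number): void {
  void fetch("/api/ab-metrics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ variant: getVariant(), elapsedMs }),
  });
}
```

Even this small piece implies backend work, storage, and enough traffic for the comparison to mean anything, which is the part we don't have.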

Please read these for more detail about A/B Tests:
https://vwo.com/ab-testing/
https://hbr.org/2017/06/a-refresher-on-ab-testing

@fancyham
Collaborator

fancyham commented Apr 15, 2021

So, there are many other ways to test the success or failure of a design — since we’re a small team, we should look at solving problems with that in mind.

Also, we can do this incrementally with individual changes instead of as a giant test.

A. For us, I’d argue that for show-stopping bugs or usability problems — just fix it right away and follow up. If we had analytics showing success rates and time-to-success, then that would be helpful here.

B. Another way: use the site and see if the new version is an obvious improvement. (We are not our exact target audience, but we are similar to many of them.) If not, then…

C. If there’s a feeling that we’re not sure if one version is better than the other or feel there’s significant risk of making the site worse, then a test may be appropriate (if we have the resources to do so and it’s a priority).

I think it’s important to check our work, so I’ll add an one way of doing a lightweight test in the next comment.

@fancyham
Collaborator

One way to do some lightweight performance testing:

1. What are our criteria for judging this landing page?

I suggest we’d want whatever performs better at:

  • Preferred by users
  • Usability testing
  • Time till getting the data they need
  • Bandwidth requirement measurements

Also, we need to ask ourselves whether we (FOLA) have some needs as well (branding, perhaps) that are currently satisfied by the landing page, and include those as part of our criteria for success.

If there are such needs, we should identify them before testing and certainly before choosing.
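
For the bandwidth criterion, a rough sketch of measuring how many bytes each version transfers, using the browser's standard Performance API (paste into the dev-tools console on each version; this is just an illustration, not something wired into the site):

```ts
// Sketch: total bytes transferred for the current page load, using the
// standard Performance API. Run on version A and on version B after each
// finishes loading, then compare the totals. Note: cross-origin resources
// may report transferSize = 0 unless they send Timing-Allow-Origin.
function pageWeightBytes(): number {
  const resources = performance.getEntriesByType(
    "resource"
  ) as PerformanceResourceTiming[];
  const nav = performance.getEntriesByType(
    "navigation"
  )[0] as PerformanceNavigationTiming | undefined;
  const resourceBytes = resources.reduce((sum, e) => sum + e.transferSize, 0);
  return resourceBytes + (nav ? nav.transferSize : 0);
}

console.log(`~${(pageWeightBytes() / 1024).toFixed(1)} KB transferred`);
```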

2. Test with a small batch of users:

  1. Give instructions: “Find a free food pantry near you using (version A URL)”
  2. Have them load the page and start a timer; measure how long it takes them to complete the task.

Repeat with version B

What were the times for each version?
Which version did the user prefer and why?
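
To keep the small-batch numbers organized, a tiny sketch like this would do (TypeScript with placeholder data; the values shown are not real measurements):

```ts
// Sketch: summarize task times and preferences from a small test batch.
// One row per tester; the numbers below are placeholders, not real results.
interface TesterResult {
  tester: string;
  secondsA: number; // time to find a pantry on version A (landing page)
  secondsB: number; // time to find a pantry on version B (map)
  preferred: "A" | "B";
}

const results: TesterResult[] = [
  { tester: "p1", secondsA: 42, secondsB: 28, preferred: "B" },
  { tester: "p2", secondsA: 35, secondsB: 31, preferred: "A" },
  // ...add a row for each tester
];

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

console.log("Mean time A:", mean(results.map(r => r.secondsA)).toFixed(1), "s");
console.log("Mean time B:", mean(results.map(r => r.secondsB)).toFixed(1), "s");
console.log(
  "Preferred B:",
  results.filter(r => r.preferred === "B").length,
  "of",
  results.length
);
```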

Usability testing is easy — in fact, several FOLA members have done this already via heuristic analysis. This can be done while watching users complete their tasks.

We can and should do it with our own small group and expand it to others. If we have access to our target food seeker users, even better. It also serves as a QA/bug test. Usability issues often show themselves with < 7 respondents.

This can be done over email, or we can even ask the internet! Just ask people to check out the sites, follow a script, and send back their results.

3. Which version performed better?

If one version performed obviously better, then our job is done. Use that one.

If both versions test similarly, then figure out what additional criteria make sense and repeat. Or do the one that feels right and move on to the next priority.

@entrotech
Member

We should discuss this in the next Team Meeting and decide on next steps. So far, Bryan and I have created some dev issues to move toward Option B and eventually Option C from his clickable prototype.
