Create A/B Test for website changes #883
Hi all, I think we might be able to employ other methods to validate whether our designs are improving the site, ones that are a better fit for a team with our current resources and infrastructure. A/B tests are usually quite a large endeavor. They're randomized, statistically driven tests to see if a variant performs better than a control, typically with large numbers (often thousands) of users. To do such a test, we'd need the following:
My understanding is that we don't really have these things right now, and like I said, I think this is overkill for most of what we need. I'll suggest an alternative next. Please read these for more detail about A/B tests:
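For context on why the "thousands of users" point matters, here is a minimal sketch, in TypeScript with entirely made-up numbers, of the kind of significance check a real A/B test boils down to. None of this exists in our codebase; it only illustrates how much traffic the math needs before it can tell two variants apart.

```typescript
// Minimal sketch of the statistics behind an A/B test, assuming we tracked
// a single conversion event (e.g. "found a food resource") per visitor.
// The visitor/conversion counts below are hypothetical placeholders.

interface VariantResult {
  visitors: number;     // users randomly assigned to this variant
  conversions: number;  // users who completed the target action
}

// Two-proportion z-test: is variant B's conversion rate significantly
// different from variant A's?
function twoProportionZTest(a: VariantResult, b: VariantResult): number {
  const pA = a.conversions / a.visitors;
  const pB = b.conversions / b.visitors;
  // Pooled proportion under the null hypothesis (no real difference)
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
  return (pB - pA) / se; // z-score; |z| > 1.96 is roughly significant at p < 0.05
}

// Made-up example: even a 2-point lift (10% -> 12%) needs about two thousand
// visitors per variant before the z-score barely clears 1.96.
const control: VariantResult = { visitors: 2000, conversions: 200 };
const variant: VariantResult = { visitors: 2000, conversions: 240 };
console.log(twoProportionZTest(control, variant).toFixed(2));
```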
So, there are many other ways to test the success or failure of a design. Since we're a small team, we should look at solving problems with that in mind. Also, we can do this incrementally, with individual changes, instead of as one giant test.

A. For show-stopping bugs or usability problems, I'd argue we should just fix them right away and follow up. If we had analytics showing success rates and time-to-success, that would be helpful here.

B. Another way: use the site ourselves and see if the new version is an obvious improvement. (We are not our exact target audience, but we are similar to many of them.) If not, then…

C. If we're not sure one version is better than the other, or we feel there's significant risk of making the site worse, then a test may be appropriate (if we have the resources to do so and it's a priority).

I think it's important to check our work, so I'll add one way of doing a lightweight test in the next comment.
One way to do some lightweight performance testing:

1. What are our criteria for judging this landing page? I suggest we'd want whichever version performs better at:
Also, we need to ask ourselves whether we (FOLA) have needs of our own (branding, perhaps) that are currently satisfied by the landing page, and include those as part of our criteria for success. If there are such needs, we should identify them before testing and certainly before choosing.

2. Test with a small batch of users:
Repeat with version B. What were the times for each version?

Usability testing is easy; in fact, several FOLA members have done this already via heuristic analysis. It can be done while watching users complete their tasks. We can and should do it with our own small group and expand it to others. If we have access to our target food seeker users, even better. It also serves as a QA/bug test. Usability issues often show themselves with fewer than 7 respondents. This can be done over email, or we can even ask the internet: just ask people to check out the sites, follow a script, and send back their results.

3. Which version performed better? If one version performed obviously better, then our job is done; use that one. If both versions test similarly, then figure out what additional criteria make sense and repeat. Or go with the one that feels right and move on to the next priority.
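As a rough illustration of steps 2 and 3, here is a small TypeScript sketch of how we could tally the task timings from each version once the sessions are done. The `Session` shape, the task name, and the numbers are all hypothetical; nothing here reflects actual test data.

```typescript
// Sketch of comparing timed tasks from the small-batch usability test,
// assuming each session records the seconds taken for the scripted task.

interface Session {
  version: "A" | "B";              // A = landing page, B = map
  taskSecondsToFindPantry: number; // hypothetical task name
}

function meanTime(sessions: Session[], version: "A" | "B"): number {
  const times = sessions
    .filter((s) => s.version === version)
    .map((s) => s.taskSecondsToFindPantry);
  return times.reduce((sum, t) => sum + t, 0) / times.length;
}

// Hypothetical results from a handful of testers per version
const sessions: Session[] = [
  { version: "A", taskSecondsToFindPantry: 42 },
  { version: "A", taskSecondsToFindPantry: 55 },
  { version: "A", taskSecondsToFindPantry: 38 },
  { version: "B", taskSecondsToFindPantry: 29 },
  { version: "B", taskSecondsToFindPantry: 31 },
  { version: "B", taskSecondsToFindPantry: 26 },
];

console.log("A mean:", meanTime(sessions, "A").toFixed(1), "s");
console.log("B mean:", meanTime(sessions, "B").toFixed(1), "s");
```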
We should discuss this in the next Team Meeting and decide on next steps. So far, Bryan and I have created some dev issues to move toward Option B and eventually Option C from his clickable prototype.
Dependencies
Issue for new map page #868
Issue for new landing page #862
Overview
We need to do an A/B test to see which version performs better. A = Landing Page. B = Map.
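If we do go ahead with the full test, one lightweight way to split traffic is to bucket each visitor once and reuse that assignment on later visits. This is only a sketch under assumptions: the localStorage key, routing, and analytics hooks below don't exist in the app today.

```typescript
// Sketch of client-side variant assignment: assign each visitor to a
// variant once, persist it, and tag analytics events with it later.

type Variant = "landing-page" | "map";

function getOrAssignVariant(): Variant {
  const key = "fola-ab-variant"; // hypothetical localStorage key
  const existing = window.localStorage.getItem(key) as Variant | null;
  if (existing === "landing-page" || existing === "map") {
    return existing; // returning visitor keeps their original bucket
  }
  const assigned: Variant = Math.random() < 0.5 ? "landing-page" : "map";
  window.localStorage.setItem(key, assigned);
  return assigned;
}

// Usage: route the visitor to the landing page or the map view based on
// `variant`, and include it in whatever analytics call we eventually add.
const variant = getOrAssignVariant();
```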
Action Items
Resources/Instructions
Update this with Figma link
Update this with UsabilityHub link