
Add a scan "score" or "grade" visualization to the results page #3690

Open
carlcamera opened this issue Apr 22, 2020 · 8 comments
Assignees
Labels
agenda msft-consider Issues being considered for contributions from Microsoft type:new-feature

Comments

@carlcamera

🚀 Feature request

Description

Add a scan "score" or "grade" to the results page. The current scan results UX is uninspiring. Everyone wants to get a good grade when they test themselves -- a score (or series of scores for each section) prominently displayed at the top of the results page would provide good user feedback and quick confirmation of improvement.

What scenarios will this solve?

  1. I run the scan.
  2. I make improvements based on feedback.
  3. I run the scan again.

-- but there is no quick, glanceable visualization of the improvement.

Why do you want/need this?

The current UI is more cumbersome to use than other built-in accessibility checkers. I have to scroll and expand sections rather than having the problem areas highlighted at a glance.

@molant
Member

molant commented Apr 22, 2020

The current scan results UX is uninspiring.

What scan results UX are you talking about exactly? I believe you mean the website, but it could also be the browser extension, the CLI, or even VS Code (which we cannot modify).
We added different default severity levels to different issues, but the website is not taking advantage of them.

Knowing which issues are more critical should help developers see progress as they make changes and keep them motivated.

@carlcamera
Author

carlcamera commented Apr 23, 2020

Here is a perfect score with webhint

[screenshot: webhint results showing a perfect score]

Now another in-browser checker

[screenshot: results from another in-browser checker]

Perfect results from webhint are meh.

Web developers are people, not robots. We like to receive positive feedback when we put our work to the test. A lot of work goes into creating a site, and I'd like some acknowledgement that the webhint checker recognizes that work. The truth is webhint only points out my failures, while the other checker acknowledges my work -- and shows ways to improve my score. This is a psychological distinction: webhint is a downer; the other checker is inspirational.

@molant
Member

molant commented Apr 23, 2020

I see our approach as similar to that of a linter. ESLint does not give you a grade; your "grade" is when no issues are detected. That said, I know not everyone has the same motivations, not everything works for everybody, and our goal is to help web developers create better websites.

I think we can find a balance between what we have and what Lighthouse (or other tools) offer; it's just that we haven't figured it out yet, so thanks for bringing this subject back.

Have you thought about what type of score you would like to see and how it could be calculated? I think there were some conversations in the past about replacing the 0 with something more visual like a ✔.

To give you a bit more background on my opinion (the rest of @webhintio/core might disagree with me), my biggest concern with scores is people believing they are "done" with a category, or focusing too much on getting a perfect score when it might not be relevant to them. And I'm worried because I've seen it happen: developers adding service workers because a tool told them to, management pushing for a PWA just to get a perfect score even though it didn't make any sense for them, etc.

@molant molant added the agenda label Apr 24, 2020
@molant
Member

molant commented Apr 24, 2020

I'm adding this to today's agenda so we can talk about it.

@hxlnt hxlnt self-assigned this Apr 24, 2020
@hxlnt hxlnt added the msft-consider Issues being considered for contributions from Microsoft label Apr 24, 2020
@hxlnt hxlnt added this to the 2005-1 milestone Apr 24, 2020
@carlcamera
Author

Thanks for considering this. Since you asked for a suggestion, here's mine. I think a way to balance a score/grade with acknowledging improvement would be to provide a donut for each severity level -- in this case, four concentric donuts. The outermost would be errors, since those would be the most obvious if any exist. The donuts continue inward, with the exposed percentage of each ring being number-of-errors/number-of-tests at that severity, of course. With each run, the donuts get greener and greener as improvements are made. Please DON'T use my palette here -- it's just something I threw together as an example. I'm sure we could find a more pleasing palette of colors.

[mockup: webhint-donut, four concentric donut rings colored by severity]
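The per-ring math above is simple enough to sketch. Below is a minimal TypeScript example of how each ring's "green" fraction could be derived; the `SeverityCount` shape and `donutRatios` name are hypothetical and do not reflect webhint's actual report types.

```typescript
// Hypothetical per-severity summary of a scan; webhint's real
// report types may differ.
type Severity = 'error' | 'warning' | 'hint' | 'information';

interface SeverityCount {
    severity: Severity;
    failed: number; // hints that reported at least one issue
    total: number;  // hints run at this severity
}

// Fraction of passing hints per severity, ordered from the
// outermost donut ring (error) to the innermost (information).
function donutRatios(
    counts: SeverityCount[]
): { severity: Severity; passRatio: number }[] {
    const ringOrder: Severity[] = ['error', 'warning', 'hint', 'information'];

    return ringOrder
        .map((severity) => counts.find((c) => c.severity === severity))
        .filter((c): c is SeverityCount => c !== undefined)
        .map(({ severity, failed, total }) => ({
            severity,
            // Avoid division by zero when no hints ran at this level.
            passRatio: total === 0 ? 1 : (total - failed) / total
        }));
}
```

With each re-scan, `passRatio` for a ring moves toward 1 as issues are fixed, which is the "gets greener" effect described above.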

@hxlnt
Member

hxlnt commented Apr 27, 2020

Thanks for the feedback, @carlcamera. I'm going to schedule some UX/design research in this area to help us investigate this further.

@hxlnt hxlnt modified the milestones: 2005-1, 2005-2 May 8, 2020
@hxlnt hxlnt modified the milestones: 2005-2, 2006-2 Jun 12, 2020
@hxlnt hxlnt removed the agenda label Jun 12, 2020
@hxlnt hxlnt modified the milestones: 2006-2, 2007-1 Jun 26, 2020
@hxlnt
Member

hxlnt commented Jul 10, 2020

Here is a proposed update for visualizing the results in the browser extension. This new version displays green checkmarks to indicate which areas have passed.

[screenshot: proposed extension results page, full height]

Happy to hear feedback and thoughts on this.

@antross
Member

antross commented Jul 21, 2020

@hxlnt This proposal looks good to me. I like the addition of the green checkmark to call out a positive state.

@carlcamera Do you think this helps address your feedback?

@hxlnt hxlnt modified the milestones: 2007-1, 2008-1 Jul 24, 2020
@antross antross removed this from the 2008-1 milestone Oct 30, 2020