
ENH: Make it clearer when Pango calls are expected to be wrong #761

Open
corneliusroemer opened this issue Mar 28, 2022 · 1 comment
Labels: docs (Documentation related issues), package: nextclade_web

corneliusroemer (Member) commented Mar 28, 2022

It doesn't seem entirely clear to users when Pango assignments are trustworthy and when they are not; see #760.

One could, for example, wrap the Pango assignment in parentheses when the quality is bad, to make the reduced confidence clear.

Or maybe the tooltip needs to be updated to give explicit hints on when to trust it and, importantly, when not to.
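The parenthesization idea could be sketched roughly like this. This is a hypothetical helper, not Nextclade's actual code: the function name, the `QcStatus` values, and the threshold for "bad" are all assumptions for illustration.

```typescript
// Hypothetical sketch: names and QC statuses are illustrative,
// not part of the real nextclade_web codebase.
type QcStatus = "good" | "mediocre" | "bad";

// Wrap the Pango lineage in parentheses when overall QC is bad,
// signalling to the user that the assignment is less trustworthy.
function formatPangoLineage(lineage: string, qc: QcStatus): string {
  return qc === "bad" ? `(${lineage})` : lineage;
}

// A low-quality sequence gets a parenthesized lineage call.
console.log(formatPangoLineage("BA.2", "bad")); // "(BA.2)"
console.log(formatPangoLineage("BA.2", "good")); // "BA.2"
```

The same condition could also drive the tooltip text, so the visual cue and the explanation stay in sync.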

AnonymousUserUse commented Oct 18, 2022

Has this issue been addressed yet, and if so, is it visible in CoV-Spectrum?

I wonder which assignment of XAP is more correct: pangoLEARN's (80 sequences with coverage >0.9) or Nextclade's (1811 sequences with coverage >0.9). I hope there is a website where I can find answers to questions like this.
