
Expand functional test coverage to prevent regressions #1004

Open
eloquence opened this issue Mar 25, 2020 · 9 comments

@eloquence
Member

(Converted from card by @redshiftzero)

We need to critically examine bugs in securedrop-client from the last ~month or so and add functional tests sufficient to cover them (to prevent regressions going forward).

@sssoleileraaa
Contributor

sssoleileraaa commented Apr 2, 2020

I think many regressions and bugs are around hard-to-see visual discrepancies. Since machines can diff images better than we humans can, I think we should prioritize tests that catch whether our code changes affect the look of the application. When we do expect the application to look different, we can take new screenshots of the changes and add them to our PRs so that these tests pass.

@kushaldas pointed out that this kind of test cannot be a functional test; rather, we would need to use an integration test (see related discussion about adding integration tests at the workstation level: freedomofpress/securedrop-workstation#524). This has something to do with needing to run on a fixed display size, but I'm not sure I understand. We'll also need to think about whether we should require screenshots taken in Qubes to make sure widgets look the same there, etc.
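To make the screenshot-diff idea concrete, here is a minimal sketch (not part of the securedrop-client codebase). It assumes the reference and candidate screenshots are available as raw same-sized RGBA byte buffers; in a real Qt test they might come from something like `QWidget.grab()`. The function names and the tolerance value are illustrative assumptions, not an existing API.

```python
# Hypothetical helpers for comparing a widget screenshot against a stored
# reference image, as raw RGBA byte buffers of identical dimensions.

def pixel_diff_ratio(reference: bytes, candidate: bytes, channels: int = 4) -> float:
    """Return the fraction of pixels that differ between two same-sized images."""
    if len(reference) != len(candidate):
        raise ValueError("screenshots must have identical dimensions")
    total = len(reference) // channels
    differing = sum(
        1
        for i in range(0, len(reference), channels)
        if reference[i:i + channels] != candidate[i:i + channels]
    )
    return differing / total if total else 0.0


def assert_looks_the_same(reference: bytes, candidate: bytes,
                          tolerance: float = 0.001) -> None:
    """Fail if more than `tolerance` of the pixels changed.

    A small nonzero tolerance allows for anti-aliasing jitter between runs,
    which is one reason a fixed display size matters for this kind of test.
    """
    ratio = pixel_diff_ratio(reference, candidate)
    assert ratio <= tolerance, f"{ratio:.2%} of pixels differ from the reference"
```

A test would update the stored reference bytes whenever a PR intentionally changes the UI, matching the "take new screenshots and add them to our PRs" workflow described above.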

@eloquence
Member Author

Just noting that our first agreed-upon goal for the 4/2-4/15 sprint is to inventory and prioritize areas where we want to increase coverage. Once we have agreement on that, we can see if we still have time to begin adding some high-priority tests.

> @kushaldas pointed out that this kind of test cannot be a functional test, rather we would need to use integration test

Is that indeed a requirement? I'd also like to understand that better. How feasible would it be to detect visual differences in CI, without Qubes, using just the Docker dev env? It would seem to require updating the expected state with each PR that changes the appearance of the client -- which could be an acceptable price to pay.

@sssoleileraaa
Contributor

Since I've seen some GUI regressions in the past few months around alignment, spacing, wrapping, and other visual side effects that can be hard to notice during development, I think it would be valuable to have some snapshots to compare against. For instance, one snapshot of the client with a populated source list and conversation view can help us detect dozens of possible regressions. It would be reassuring to know that adding, say, a wordwrap parameter to SecureQLabel or an animation to the FileWidget doesn't result in alignment issues or in text being hidden or cut off, etc. For example, see this snapshot that Conor provided: https://github.com/freedomofpress/securedrop-client.

Right now we don't have coverage for widget sizing and layout positioning, so let's add this to the list, whether we tackle it with a few screenshot tests or with many more unit and functional tests.

Another place we don't have coverage is timing: at what point does something take so long that we call it a regression? This can be handled in functional tests by carefully selecting how long we wait for client actions or for responses to requests.
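The timing idea above can be sketched as a polling helper with an explicit deadline, so that an unexpected slowdown fails the test instead of hanging it. This is an illustrative sketch, not code from the client's test suite; the helper name and the default budgets are assumptions (pytest-qt's `qtbot.waitUntil` provides a similar facility for Qt tests).

```python
import time


def wait_until(condition, timeout: float = 5.0, interval: float = 0.05) -> float:
    """Poll `condition` until it returns True; return the elapsed seconds.

    Raises TimeoutError when the deadline passes, turning "this is taking
    so long that it's a regression" into a concrete test failure.
    """
    start = time.monotonic()
    while True:
        if condition():
            return time.monotonic() - start
        if time.monotonic() - start >= timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)
```

A functional test could then assert a performance budget directly, e.g. `wait_until(lambda: source_list_is_populated(), timeout=2.0)`, failing whenever a sync takes longer than the agreed limit.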

@eloquence
Member Author

eloquence commented Apr 23, 2020

We'll only be able to spend limited time on functional tests during the 4/22-5/6 sprint given the SD 1.3.0 release, but we've agreed that if we spot regressions during Workstation/Client QA, we may opportunistically increase functional test coverage at that time, with approximately a 4 hour timebox.

@eloquence
Member Author

We didn't really get to client QA yet in the previous sprint, but the same commitment still applies -- @kushaldas will also add his recommendations for areas to prioritize to this ticket.

@sssoleileraaa
Contributor

sssoleileraaa commented May 27, 2020

Once #1082 is merged, we'll have new UI coverage where we didn't before, and some of the issues from @kushaldas's list above already have coverage. That said, we should be testing any major functionality in functional tests, and it's fine to have overlap.

Here's a little additional info on the links above:

@sssoleileraaa
Contributor

Was just thinking today about how we might want to keep an easy-to-use backup of the SecureDrop screenshot-test references for whenever we need to take a new screenshot due to a UI update.

@sssoleileraaa
Contributor

Also adding to the list:

  • We need a placeholder regression test to make sure long journalist_designations don't get cut off. This would be most easily done in a screenshot test. /cc @kushaldas
