
Initial code for functional tests. #788

Merged: 13 commits, Mar 18, 2020

Conversation

@ntoll (Contributor) commented on Feb 10, 2020:

Description

REQUESTING FEEDBACK

Fixes #381.

The work done so far on this branch adds the scaffolding for functional tests using the pytest-qt plugin (see: https://pytest-qt.readthedocs.io/en/latest/intro.html). TL;DR: it works, but my approach and coding decisions need checking/feedback from the wider dev group, and we should add more tests..!

Please take a look; I'd love feedback via the comments. Work/decisions done so far:

  • I've decided to use pytest-qt since this seems to be the obvious choice, it's well documented, and I know at least two of the core maintainers and they're good developers who I trust (i.e. this will be a good choice).
  • I've updated the Makefile to include a func-test option. This runs the functional tests found in the func_tests folder.
  • There is a problem if I add the func-test target to make check. The xvfb command fails since it's already running (i.e. both test-random and func-test use xvfb). Not sure how to fix this, but open to suggestions.
  • For the functional tests to run properly I think we'll need to carefully mock away the network based communication with the "server". I wonder about using "record and replay" plugins to help make this easier / automated / obvious for future maintainers (something like Betamax for requests but which wraps the API layer). Thoughts..?
  • In any case, make func-test works well and I've decided to put functional tests into their own folder since I don't want to pollute the unit tests (found in the tests folder) with test_* files that don't mirror the source code in the application. This can, of course, be very easily changed and is just an aesthetic decision on my part.
  • I've added a simple test that exercises the login dialog by entering just the username and then clicking the submit button. It checks that the error message has been updated. This is the "Hello, World" we can use as a template to build more complicated tests (a minimal sketch follows this list).
  • On that note, we need more functional tests. I'm not clear how we specify these. In past projects I've scripted key user journeys for such types of testing (logging in, logging out, key activities such as changing sources, posting a reply, downloading a file). These would be my suggestions to start with although I'm painfully aware that I'm speculating here. Feedback please..!
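
For illustration, a minimal sketch of that "Hello, World" test, written in the style of the snippet shown later in this thread; the import paths and the username_field / submit attribute names are my assumptions, not confirmed parts of the client's API:

    from PyQt5.QtCore import Qt

    from securedrop_client.gui.main import Window
    from securedrop_client.gui.widgets import LoginDialog


    def test_login_username_only_shows_error(qtbot, mocker):
        w = Window()
        login_dialog = LoginDialog(w)
        # Replace the error bar's set_message so we can assert it was called.
        login_dialog.error_bar.set_message = mocker.MagicMock()
        login_dialog.show()
        # Type only a username, leave the passphrase/TOTP fields empty, and submit.
        qtbot.keyClicks(login_dialog.username_field, "journalist")
        qtbot.mouseClick(login_dialog.submit, Qt.LeftButton)
        login_dialog.error_bar.set_message.assert_called_once()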

Test Plan

Make sure you have the latest dev-requirements and then:

make func-test

A single functional test exercising the login dialog should pass. You should NOT see any widgets on your screen, thanks to xvfb.

Checklist

If these changes modify code paths involving cryptography, the opening of files in VMs or network (via the RPC service) traffic, Qubes testing in the staging environment is required. For fine tuning of the graphical user interface, testing in any environment in Qubes is required. Please check as applicable:

  • I have tested these changes in the appropriate Qubes environment
  • I do not have an appropriate Qubes OS workstation set up (the reviewer will need to test these changes)
  • These changes should not need testing in Qubes

If these changes add or remove files other than client code, packaging logic (e.g., the AppArmor profile) may need to be updated. Please check as applicable:

  • I have submitted a separate PR to the packaging repo
  • No update to the packaging logic (e.g., AppArmor profile) is required for these changes
  • I don't know and would appreciate guidance

@redshiftzero (Contributor) left a comment:

so for the network mocking, one good place to look is this test data (since we'll need to take a similar approach here) in the securedrop-sdk repository: https://github.com/freedomofpress/securedrop-sdk/tree/master/data (the JSON files are for connections to the server via securedrop-proxy; the YAML files are vcr.py cassettes for direct HTTP connections). In this repo, I think we'd want to:

  1. mock calls to securedrop-sdk/sdclientapi for the purpose of running the functional tests. This makes sense for devs locally running these tests as the server behavior will be constant, but the functional tests should catch issues that are less tied to the implementation.
  2. (at a later point) run these tests as integration tests: we'd run the client's functional tests with proxy=False against the server container in a CI job. We actually have this already rigged up in CI for the SDK: https://circleci.com/gh/freedomofpress/securedrop-sdk/681. We start up the server container, populate it with a bunch of sources/submissions/replies, and run the tests against it (this is done in order to check that the latest version of the SDK works against the latest version of the API at the time of merge).

Thoughts welcome @kushaldas @zenmonkeykstop or others on this

(Inline review comments on func_tests/test_client.py and the Makefile: outdated, resolved.)
@ntoll (Contributor, Author) commented on Feb 11, 2020:

@redshiftzero ack on all your comments. I'll move the functional tests as you suggest, revise the Makefile, undo the mocks (more a habitual first stab - totally agree with your reasoning), have a go at vcr.py things and check out the description of https://github.com/freedomofpress/securedrop-client/wiki/Test-plan#basic-client-testing as a basis for the tests themselves.

@ntoll (Contributor, Author) commented on Feb 11, 2020:

So I've just discovered that registering widgets with qtbot causes crashes. It took a while to figure out what was causing the problem. As the docs state (https://pytest-qt.readthedocs.io/en/latest/tutorial.html), you don't have to add the widget to the qtbot, so I'm removing it from the code so far.

I.e. DO NOT DO THIS (my mistake):

    w = Window()
    login_dialog = LoginDialog(w)
    login_dialog.error_bar.set_message = mocker.MagicMock()
    login_dialog.show()
    qtbot.addWidget(login_dialog)  # <-- this registration is what later causes the crash

The addWidget call sets up some state which, once that specific test has finished, causes the crash to happen. Our tests work without this call, AFAICT.

@ntoll (Contributor, Author) commented on Feb 13, 2020:

I want to give an update to my work so far.

  • Bringing up the client application so that it's ready to run in a clean-state-per-test is done as a group of functions in the test_client.py. These could be refactored to somewhere else if need be, but they're convenient as they are for the time being as I'm building this out.

  • VCR-like logging/replaying of requests to a server doesn't work because, AFAICT, we make such requests on new QThreads which appear invisible to tools like VCR.py. I've spent most of today trying to work around this (since vcr.py would give us a really clean, simple and easy-to-use solution for dealing with the API). There is a way to customise how vcr patches things, but the docs are not very clear about how this works (see: https://vcrpy.readthedocs.io/en/latest/advanced.html#custom-patches) and the example given contains errors (which I worked around). Ultimately this didn't work. Another potential solution would be to patch VCR.py itself, as suggested as a "hack" from three years ago (Second thread not recorded kevin1024/vcrpy#295 (comment)); I tried this and it still didn't work. Ergo, I'm calling it a day with this approach: I see network calls to the test server, but no cassettes are getting recorded, saved or replayed, no matter the attempted work-around.

  • For the latter part of this afternoon I've been exploring another approach. As a simple test, I use a context manager to patch away sdclientapi in the securedrop_client.logic module (a sketch of this follows below). This results in SQLAlchemy errors (Error binding parameter 0 - probably unsupported type.) with the stack trace showing [parameters: (<MagicMock name='sdclientapi.API().token_journalist_uuid' id='139834656702416'>,)]. This is actually positive: the mock is working across threads (as one would expect) and the test server isn't hit with a request. The next step is to ensure the mock returns something that reflects the expected result (i.e. we're re-inventing something VCR-ish).
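
For reference, a minimal sketch of that patching approach. The module path comes from the description above; the attribute values set on the mock are purely illustrative:

    from unittest import mock


    def test_api_calls_are_mocked_across_threads():
        # Patch sdclientapi as imported by securedrop_client.logic, so API calls
        # made from worker QThreads hit the mock rather than the test server.
        with mock.patch("securedrop_client.logic.sdclientapi") as mock_api:
            # Return a plain string so SQLAlchemy gets a bindable value instead
            # of a MagicMock (the "Error binding parameter 0" failure above).
            mock_api.API.return_value.token_journalist_uuid = "abc123"
            # ... drive the login flow with qtbot here and assert the outcome ...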

I've pushed the WiP, which I'll continue with on Monday unless otherwise instructed. Feedback welcome. 👍

@ntoll (Contributor, Author) commented on Feb 13, 2020:

BTW... I expect this branch (at the moment) not to build (yet).

@ntoll (Contributor, Author) commented on Feb 17, 2020:

Good news..! I think we've a solution for functional tests.

Put simply, there are three aspects to the functional tests which interact with each other in sometimes non-obvious ways. It took several failed attempts (spikes), but this effort was needed to explore the problem space and find the solution presented in this PR.

The three aspects are:

  1. We should be able to record and replay requests to the test server. The vcrpy and pytest-vcr packages are used for this (see the configuration sketch below).
  2. We should be able to programmatically interact with the application to run the functional tests as per a test plan. The pytest-qt plugin allows us to do this.
  3. Our functional tests should be easy to write and integrate into our current testing-related workflow. To this end I've written some utility functions, documented them with comments and updated the Makefile so make check and (specifically) make test-functional will run the functional tests in the expected manner.
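
As an illustration of point 1, a sketch of how pytest-vcr might be configured for these tests; vcr_config and vcr_cassette_dir are pytest-vcr's documented fixture hooks, while the record mode and cassette directory here are assumptions based on this PR rather than the committed code:

    # tests/functional/conftest.py (sketch)
    import os

    import pytest


    @pytest.fixture(scope="module")
    def vcr_config():
        # "once" records a cassette against the live test server on the first
        # run and replays it on every subsequent run.
        return {"record_mode": "once"}


    @pytest.fixture(scope="module")
    def vcr_cassette_dir(request):
        # Keep all cassettes together under tests/functional/cassettes.
        return os.path.join(os.path.dirname(__file__), "cassettes")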

Some interesting asides that you should be aware of:

  • Test coverage doesn't work with pytest-qt: while a test may appear to pass, it is immediately followed by a crash and core dump. This is why the functional tests have their own clause in the Makefile (one without coverage). Since the unit tests should be comprehensive, and functional tests are more about replaying user journeys and core functionality, I don't think this lack of coverage is a problem.
  • The recording and replaying of vcrpy-based requests requires some effort on the part of the test author. Since the HTTP requests are always done on a new thread, you need to tell the qtbot (used to run the functional tests) to pause until a certain signal is detected; otherwise the test completes before the thread is run and nothing is detected by vcrpy. The test resumes (or fails on a timeout) once the signal is detected, and only at that point can you assert outcomes (see the sketch after this list).
  • When writing tests that require the user to log in, on the first run of the test you must make sure the TOTP is correct for the time at which the test is run. For subsequent runs this doesn't matter, since vcrpy will replay the original response from the test server.
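
A sketch of that wait-for-signal pattern; the main_window and controller fixtures, the authentication_state signal, the login signature and the credentials are illustrative assumptions rather than the client's confirmed API:

    import pytest


    @pytest.mark.vcr  # record/replay the HTTP traffic made during this test
    def test_login_as_journalist(qtbot, main_window, controller):
        # Pause until the controller signals that authentication has finished,
        # so the API call running on its QThread has time to complete (and be
        # captured by vcrpy) before we assert anything.
        with qtbot.waitSignal(controller.authentication_state, timeout=10000):
            controller.login("journalist", "correct horse battery staple", "123456")
        assert controller.is_authenticated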

Next steps: I'll remove the "draft" status from the PR since we have a working solution..! However, I'll continue to add tests based on the aforementioned test plan, since I get the feeling that further edge cases and gotchas will reveal themselves as more complicated test cases are implemented.

At the time of writing, checking for login-form errors and logging in successfully are the only tests completed.

@ntoll ntoll marked this pull request as ready for review February 17, 2020 16:56
@ntoll (Contributor, Author) commented on Feb 19, 2020:

I've been asked a couple of questions, which I'll answer here so there's a log of the answers:

What happened regarding the issues with multi-threaded support for vcrpy?

It turns out several things made this hard to resolve:

  • It turns out vcrpy is actually able to deal with multi-threading, but it couldn't in the past. While Googling I was finding relatively recent (2017) but now-incorrect descriptions of vcrpy's multi-threading support (or lack thereof), and it wasn't obvious the limitation had been fixed. Hence my wrong assumption.
  • The way pytest-qt works means you have to explicitly add calls into your test to wait for events, signals and states to change (which, upon reflection, makes sense): this is so the Qt event loop can carry on and react to whatever you've asked qtbot to do with the UI. The original version of the tests didn't do this (it assumed that changes in code were immediate), so calls to the API on different threads never got a chance to even start before the test finished and everything was garbage collected. That's why vcrpy didn't pick up any of the API calls. Rather obvious really, but when you're used to tests that don't involve an event loop (such as our unit tests) it's easy to fall into this false picture of the world.

Any theories why the crash/core dump occurs at the end of the test execution?

There were several crashes / core dumps / problems:

  • xvfb (the virtual frame buffer that allows us to run our GUI-based tests in a headless manner) would always try to create its X server on the same display. Multiple uses of the xvfb-run command would happen so close together in time that the first X server hadn't finished shutting down and releasing the display before the second X server started up. I fixed this with the -a flag, which starts the X server on the first available display number.
  • If you call qtbot.addWidget() with the widget you're testing, it appears to crash at reliably random points. Trial and error (removing lines of code) identified the qtbot.addWidget call as the problem. The crash error reports that a QThread was stopped before it had finished, and the stack trace references the queue-related code in the client. Since the pytest-qt docs say the purpose of qtbot.addWidget is to cleanly tidy up such objects, I can only assume it's not being very clean about it, or our use case doesn't fit the norm. In any case, a new Window and Controller is created for each test, network calls are isolated via vcrpy, and everything is eventually cleaned up at the end of the test run anyway, so qtbot.addWidget isn't really useful to us.
  • The most puzzling core dump was again found after trial and error removing/changing lines of code or configuration. It appears that the coverage tool interferes with either pytest-qt or vcrpy (I'm more inclined to think the former) and causes yet-another-core-dump-complaining-about-prematurely-stopped-QThreads. As I mention above, since coverage is more the domain of unit tests, and these functional tests are not completely comprehensive in scope, I believe the solution of simply removing coverage reporting is both simple and safe. No further core dumps have been observed.

I've added comments to the functions I've created in the functional tests so folks just need to read the code to see what to do. This discussion on GitHub is just so folks can discover why the tests work in the way they do.

One final "gotcha" I want to highlight is that qtbot only works with widgets that are (ultimately) children of QWidget. Some aspects of our UI do not meet this requirement: for instance, in order to log out you need to activate a QAction, which doesn't inherit from QWidget. (Aside: I tried activating the action via the QMenu instance which contains it and qtbot, but it simply doesn't work, despite a very obvious mouse click event being recorded at the correct coordinates.) In any case, I've found that programmatically calling the action's trigger method, rather than going via the qtbot object, is the only way to get this to work (a small sketch follows). The alternative is to mock such things away, but that feels like it defeats the point of functional tests exercising the whole application.
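
A minimal sketch of that workaround; the logout_action attribute name is my own illustration, not necessarily what the client calls it:

    def logout(gui):
        # QAction is not a QWidget, so qtbot.mouseClick() can't reach it.
        # Triggering the action programmatically runs the same slot the
        # menu item would invoke.
        gui.logout_action.trigger()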

Finally, everything should "just work" ™️ via make check or make test-functional.

I'll continue to add further functional tests as per the specs.

More feedback welcome..! 👍

@kushaldas (Contributor) left a comment:

This is amazingly simple.

(Inline review comments on tests/functional/test_client.py: outdated, resolved.)
@ntoll (Contributor, Author) commented on Feb 20, 2020:

Further updates:

  • Tests should be isolated and deterministic in the way they are executed, so we can guarantee the code is exercised in a predictable and clearly defined manner: if something breaks because we've changed the project code, the test code should consistently complain in the right sort of way. Sadly, this wasn't happening: there's something about the test_logout_as_journalist test which means the test that immediately follows it is guaranteed to crash. I suspect (but cannot yet prove) that some sort of state was being set in the qtbot object. My solution is to isolate the tests into their own modules and use the Makefile to run the problem test_logout module in a separate test run. This could do with further investigation, but I'd already spent a long time on it, and splitting the tests into appropriately named modules makes the code less monolithic and easier to grok IMHO. ;-)
  • I've added a good example of a fairly complicated test which requires the user to be logged in, click on a source, enter some text, press the send button and then check that the state of the conversation includes the message they just sent. Now that I've gone through this process I hope to add some utility functions so subsequent testers can easily select_first_source (for example) without having to navigate the UI widget hierarchy each time (see the helper sketch below). As more tests are added I'll work out what such convenience functions should be.
  • When the tests run via the make check or make test-functional they're run "headless" in the virtual X frame-buffer. This is very convenient, but "for fun" I've recorded a screenie of the tests running in non-headless mode so you get to see the application starting up several times, and interactions with the UI as driven by the qtbot object with data replayed via vcrpy. There's something strangely satisfying about this. I know, I know... small things please small minds. 🤡 🤦‍♂️

[Screen recording: the functional tests running in non-headless mode]
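
A sketch of what such a convenience helper might look like; the main_view and source_list attribute names are assumptions for illustration:

    from PyQt5.QtCore import Qt


    def select_first_source(gui, qtbot):
        # Find the widget rendered for the first source in the source list
        # and click it, as a user would, to open its conversation view.
        first_item = gui.main_view.source_list.item(0)
        source_widget = gui.main_view.source_list.itemWidget(first_item)
        qtbot.mouseClick(source_widget, Qt.LeftButton)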

@ntoll (Contributor, Author) commented on Feb 24, 2020:

Progress report for today. I created a bash script to further isolate the tests from each other and added a whole bunch of tests. The current coverage of functional tests to date is:

  • I can successfully log in as journalist
  • I can successfully logout as journalist
  • I can send a reply to a source from the client and see it show up in the reply box
  • I can send a document and message from the Source Interface and see it in the client
  • I can refresh and see new source messages appear in the client (same as above, but now with auto-refresh)
  • I can delete a source and their docs
  • I can star and unstar a source and see the same star-state when I log back into the client
  • I cannot send replies, delete sources, star sources, or refresh the client when in offline mode
  • I can see an error message when I try to delete or star sources in the client when in offline mode
  • I can read conversations when in offline mode

The remaining tests are offline versions of existing tests to be run as if the client is online (meaning I'll be able to re-use some of the UI driving code). It's close..! 👍

@ntoll (Contributor, Author) commented on Feb 25, 2020:

OK. I've finished adding all the test cases described in the checklist. Rebased with master. Feedback welcome. 🎉 😌 I'll demo / talk through this on Friday's call. 👍

@ntoll (Contributor, Author) commented on Mar 2, 2020:

I just timeboxed a couple of hours to try to recreate the test failures we saw in Friday's demo. Sadly, the damn things work as expected. From the tracebacks I had on Friday, the errors were basically SQLite being unable to find/read/write the file it was using as part of the client. I can only assume this is down to file permissions or something like that isolated to my dev machine (I'd had problems with it that afternoon which required me to do an update and reboot). In any case, I'm not able to recreate the failures today.

@sssoleileraaa (Contributor) commented:

just gave this a little rebase before I begin review

@sssoleileraaa force-pushed the func-tests branch 2 times, most recently from 3bf591f to 2ac8c62 on March 13, 2020 23:43
@sssoleileraaa (Contributor) left a comment:

I see a segfault when I run make test-functional. I'll provide details in a separate comment. I was able to run tests individually by running python -m pytest -v tests/functional/test_delete_source.py for example. It looks like some tests fail because they just need to be updated to include recent changes to the client. I did come across other tests that failed with unexpected errors, such as test_delete_source.py:

ERROR    securedrop_client.queue:queue.py:131 CannotOverwriteExistingCassetteException: Can't overwrite existing cassette ('/home/creviera/workspace/freedomofpress/securedrop-client/tests/functional/cassettes/test_delete_source_and_their_docs.yaml') in your current record mode ('once').
                                              No match for the request (<Request (DELETE) http://localhost:8081/api/v1/sources/fb800891-09a8-4482-94c9-b516c5960b90>) was found.
                                              Found 1 similar requests with 1 different matcher(s) :
                                              
                                              1 - (<Request (DELETE) http://localhost:8081/api/v1/sources/2145abc6-367f-4d32-807b-06c5b2a42f3c>).
                                              Matchers succeeded : ['method', 'scheme', 'host', 'port', 'query']
                                              Matchers failed :
                                              path - assertion failure :
                                              /api/v1/sources/fb800891-09a8-4482-94c9-b516c5960b90 != /api/v1/sources/2145abc6-367f-4d32-807b-06c5b2a42f3c

if command -v xvfb-run > /dev/null; then \
-	xvfb-run $$TEST_CMD ; else \
+	xvfb-run -a $$TEST_CMD ; else \
👍 allows us to run concurrent builds

@sssoleileraaa (Contributor):
Here are details on the make test-functional segfault:

(.venv) ~/workspace/freedomofpress/securedrop-client(func-tests○) » make test-functional                                                                                                  1 ↵
python -m pytest -v --random-order-bucket global tests/functional/test_login.py
==================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.5, pytest-5.2.1, py-1.7.0, pluggy-0.13.0 -- /home/creviera/workspace/freedomofpress/securedrop-client/.venv/bin/python
cachedir: .pytest_cache
Using --random-order-bucket=global
Using --random-order-seed=476474

PyQt5 5.11.3 -- Qt runtime 5.11.2 -- Qt compiled 5.11.2
rootdir: /home/creviera/workspace/freedomofpress/securedrop-client, inifile: pytest.ini
plugins: random-order-1.0.4, cov-2.8.1, xdist-1.30.0, mock-1.10.0, qt-3.3.0, vcr-1.0.2, forked-1.1.1
collected 2 items                                                                                                                                                                            

tests/functional/test_login.py::test_login_as_journalist Fatal Python error: Segmentation fault

Current thread 0x00007fe6d56e4500 (most recent call first):
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pytestqt/wait_signal.py", line 51 in wait
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pytestqt/wait_signal.py", line 151 in __exit__
  File "/home/creviera/workspace/freedomofpress/securedrop-client/tests/functional/test_login.py", line 48 in test_login_as_journalist
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/python.py", line 170 in pytest_pyfunc_call
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/python.py", line 1423 in runtest
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/runner.py", line 125 in pytest_runtest_call
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/runner.py", line 201 in <lambda>
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/runner.py", line 229 in from_call
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/runner.py", line 201 in call_runtest_hook
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/runner.py", line 176 in call_and_report
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/runner.py", line 95 in runtestprotocol
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/runner.py", line 80 in pytest_runtest_protocol
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/main.py", line 256 in pytest_runtestloop
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/main.py", line 235 in _main
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/main.py", line 191 in wrap_session
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/main.py", line 228 in pytest_cmdline_main
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/_pytest/config/__init__.py", line 90 in main
  File "/home/creviera/workspace/freedomofpress/securedrop-client/.venv/lib/python3.7/site-packages/pytest.py", line 101 in <module>
  File "/home/creviera/.pyenv/versions/3.7.5/lib/python3.7/runpy.py", line 85 in _run_code
  File "/home/creviera/.pyenv/versions/3.7.5/lib/python3.7/runpy.py", line 193 in _run_module_as_main
Segmentation fault
Makefile:50: recipe for target 'test-functional' failed
make: *** [test-functional] Error 1


@ntoll (Contributor, Author) commented on Mar 16, 2020:

@creviera OK... I've been poking at this this morning. There are two aspects to it:

  1. Getting segfaults is never (for me) a consistent thing. It's hard to know what to do when they mostly work (except when I'm trying to do a demo). :-/

  2. I think the UI has changed to the extent that some of the functional tests simply no longer work. I rebased with master and I notice the UI now reports things like a failure to star a source (where the previous version of the UI worked fine). My guess is I'll need to re-run the tests with new cassettes to record the network traffic, and re-check the code to make sure the tests are actually doing what we want given the changes in the way the UI works.

This'll take some time and probably shouldn't happen until after the changes to the client have settled down (we currently have lots of things in flight).

I also can't help but wonder about the overhead such functional tests will introduce in terms of maintaining/updating this code given such ongoing changes. I suspect that once the client is in "business as usual" mode these tests will be very helpful, both for checking that things work as expected and for encoding key user journeys through the app.

@sssoleileraaa (Contributor) commented on Mar 17, 2020:

@kushaldas what does your environment look like? I wonder what's different between your development environment and mine? Here's what I did to set up my environment:

  1. checked out the latest on this PR branch
  2. deleted my .venv and recreated it with python3.7
  3. installed requirements via pip install -r dev-requirements.txt
  4. ran make test-functional (see details about my environment in the output):

(see #788 (comment))

@sssoleileraaa (Contributor):

Looks like the problem with my dev environment was my version of xvfb, which comes with Stretch:

xvfb:
  Installed: 2:1.19.2-1+deb9u5
  Candidate: 2:1.19.2-1+deb9u5

I verified that make test-functional no longer segfaults when using Buster with this xvfb version (tested in a Qubes VM, since that's what I already had with Buster installed):

xvfb:
  Installed: (none)
  Candidate: 2:1.20.4-1

@kushaldas force-pushed the func-tests branch 3 times, most recently from 45afe1c to 9e07519 on March 18, 2020 14:53
@kushaldas dismissed their stale review on March 18, 2020 16:17:

code updated

@rmol (Contributor) commented on Mar 18, 2020:

👍 I think this can be merged after @creviera dismisses her review and it's rebased one last time.

@redshiftzero previously approved these changes on Mar 18, 2020
@redshiftzero (Contributor) left a comment:

read through the latest version of this diff, looks great! I dropped a couple of minor comments as I was reading through, but I think if @rmol and @creviera are cool with it we should merge as-is and address anything else, including my comments (unless someone finds major issues), in follow-ups

PASSWORD = "correct horse battery staple profanity oil chewy"


def get_safe_tempdir():

nit: can we use the existing tmpdir fixture in pytest here?

def wait_for_login():
    assert gui.login_dialog is None

qtbot.waitUntil(wait_for_login, timeout=10000)

nit: for readability we could put the common timeout values we're using in the tests (at least 1000, 10000) into a common place in this utils files as named constants we can reuse e.g. TIMEOUT_SOURCELIST_RENDER, TIMEOUT_LOGIN
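
For illustration, a quick sketch of that suggestion, reusing gui and qtbot from the snippet quoted above; the constant names are the reviewer's examples and the values come from the existing tests:

    # Named timeout constants (sketch), shared via the functional test utils.
    TIMEOUT_LOGIN = 10000
    TIMEOUT_SOURCELIST_RENDER = 1000

    qtbot.waitUntil(wait_for_login, timeout=TIMEOUT_LOGIN)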

@sssoleileraaa (Contributor) left a comment:

tests are all passing! lgtm:)

@sssoleileraaa sssoleileraaa merged commit 78d1bbe into freedomofpress:master Mar 18, 2020
Successfully merging this pull request may close: Add functional tests (#381).