
Implement sync logic #5

Closed
redshiftzero opened this issue Jul 30, 2018 · 5 comments

@redshiftzero
Contributor

redshiftzero commented Jul 30, 2018

Porting my suggested implementation from freedomofpress/securedrop-workstation#88 without the trash/archive workflow:

Sends requests to the /api/v1/sources/ and /api/v1/submissions/ endpoints.

For each source uuid:

If uuid does not exist in the local db, create it and save all attributes
else, if uuid does exist in the local db but not on the server, delete it locally
else (the row exists both locally and on the server), compare attributes and update the local database row with any changes

Note that deleting a source locally should also delete its submissions on disk (i.e. a cascade delete).

For each submission uuid:

If uuid does not exist in the local db, create it and save all attributes (but do NOT download the file)
else, if uuid does exist in the local db but not on the server, delete it locally
else (the row exists both locally and on the server), compare attributes and keep the local database row updated with any changes
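
A minimal sketch of how the source half of that loop might look in the storage module, with pared-down SQLAlchemy model stubs for illustration (the field names, the `update_sources` helper, and the cascade configuration are assumptions, not the final schema):

```python
# Illustrative sketch only: field names, helper names, and the cascade
# configuration are assumptions, not the final securedrop-client schema.
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class Source(Base):
    __tablename__ = "sources"
    id = Column(Integer, primary_key=True)
    uuid = Column(String, nullable=False, unique=True)
    journalist_designation = Column(String)
    # Deleting a source also deletes its submissions (cascade delete).
    submissions = relationship(
        "Submission", backref="source", cascade="all, delete-orphan"
    )


class Submission(Base):
    __tablename__ = "submissions"
    id = Column(Integer, primary_key=True)
    uuid = Column(String, nullable=False, unique=True)
    filename = Column(String)
    is_downloaded = Column(Boolean, default=False)
    source_id = Column(Integer, ForeignKey("sources.id"))


def update_sources(session: Session, remote_sources) -> None:
    """Create, update, or delete local rows so they mirror the server."""
    local_by_uuid = {s.uuid: s for s in session.query(Source).all()}

    for remote in remote_sources:
        local = local_by_uuid.pop(remote.uuid, None)
        if local is None:
            # Not in the local db: create it and save all attributes.
            session.add(
                Source(
                    uuid=remote.uuid,
                    journalist_designation=remote.journalist_designation,
                )
            )
        else:
            # Exists in both places: keep the local row updated.
            local.journalist_designation = remote.journalist_designation

    # Anything left over exists locally but not on the server: delete it
    # locally. The cascade configured above removes its submissions from the
    # local db as well; files on disk would be removed separately.
    for stale in local_by_uuid.values():
        session.delete(stale)

    session.commit()
```

Submissions would follow the same pattern, except that new rows are created with `is_downloaded=False` rather than fetching the files.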
@ntoll
Contributor

ntoll commented Sep 11, 2018

Questions:

  • The test directory is in the same directory as the application code. Any reasons why this can't be moved up a directory to keep tests and application code separate..? If not, I'll just move it and fix.
  • Obviously this will need new models via SQLAlchemy. The current single test doesn't actually appear to test anything (no asserts). I guess this is just a stub to check things work? Any reason why we're testing SQLAlchemy class definitions above and beyond ensuring they have the expected attributes? (i.e. is that all we need to test, if at all?)
  • I'm guessing (please confirm) that the logic described in the ticket should be in the storage module? How do you imagine communication with the server? I've been having plenty of fun with the securedrop-sdk, and I notice in that repository discussion about using qrexec to work via a proxy running in another VM. Any place I can find out how we're definitively going to do this..?
  • What do you mean by "if source is not archived"? I can't see any way to tell from the response from the API or SDK if this flag is set. Or is this perhaps a local archive - in which case, how is this signaled?
  • Mark things as pending deletion... why not just delete? (Just trying to get my head around the life cycle of this stuff.)

If I think of any more, I'll add them in further comments. ;-) Thanks in advance!

@redshiftzero
Contributor Author

> The test directory is in the same directory as the application code. Any reasons why this can't be moved up a directory to keep tests and application code separate..? If not, I'll just move it and fix.

Nope, feel free to git mv!

> Obviously this will need new models via SQLAlchemy. The current single test doesn't actually appear to test anything (no asserts). I guess this is just a stub to check things work? Any reason why we're testing SQLAlchemy class definitions above and beyond ensuring they have the expected attributes? (i.e. is that all we need to test, if at all.)

Just merged #9, which should have the models needed for this ticket. As for the string representation tests (a few more were merged in #9), those were just added so that the __repr__ method is covered (they just check that calling __repr__ doesn't raise an exception).
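
Those coverage tests are roughly of this shape (a sketch reusing the pared-down Source stub from the sync sketch above; the actual fixtures in #9 may differ):

```python
def test_source_repr():
    # Only checks that __repr__ can be called without raising; the field
    # names here are illustrative.
    source = Source(uuid="abc123", journalist_designation="testy mctestface")
    assert repr(source)
```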

Regarding broader testing, I think tests of the storage module for overall sync behavior - e.g. that a source deleted on the server is indeed deleted by the client - are sufficient (mocking out the SDK methods); we don't need a bunch of other model tests, since the storage tests will be exercising these models.
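
A storage-level test along those lines could look roughly like this, mocking the SDK rather than the models (update_sources, the get_sources method name, and the session fixture are assumptions carried over from the sketches above):

```python
from unittest import mock


def test_deleted_source_is_removed_locally(session):
    # One source exists locally, but the mocked SDK reports no remote
    # sources, so the sync should delete the local row. `session` is assumed
    # to be a fixture backed by an in-memory database.
    session.add(Source(uuid="gone", journalist_designation="deleted source"))
    session.commit()

    api = mock.MagicMock()
    api.get_sources.return_value = []

    update_sources(session, api.get_sources())

    assert session.query(Source).count() == 0
```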

> I'm guessing (please confirm) that the logic described in the ticket should be in the storage module?

Yep, exactly.

> How do you imagine communication with the server? I've been having plenty of fun with the securedrop-sdk

For the storage layer, we'll use the securedrop-sdk to communicate with the server, which we can import in this repo (from git, since it's not on PyPI yet).
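
(For example, until the SDK is published on PyPI it can be installed straight from GitHub with a requirements line along the lines of `git+https://github.com/freedomofpress/securedrop-sdk.git#egg=securedrop-sdk`, ideally pinned to a specific commit; the exact form shown here is illustrative.)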

> I notice in that repository discussion about using qrexec to work via a proxy running in another VM. Any place I can find out how we're definitively going to do this..?

Yep, the plan is that the SDK will handle responses/requests from/to the proxy:

[screenshot: architecture diagram, 2018-09-11]

From the perspective of working on the client, we can expect that the SDK API won't change between the scenario where it sends traffic to a SecureDrop server directly and the scenario where traffic is mediated by the Qubes proxy. We do need to make some changes to the SDK to work with the Qubes proxy, which is tracked in this ticket: freedomofpress/securedrop-sdk#16

> What do you mean by "if source is not archived"? I can't see any way to tell from the response from the API or SDK if this flag is set. Or is this perhaps a local archive - in which case, how is this signaled?
>
> Mark things as pending deletion... why not just delete? (Just trying to get my head around the life cycle of this stuff.)

Ah, so we should implement this sync functionality assuming that the server and all clients will be synchronized: if a source is deleted from the server, it is deleted from all clients, and vice versa (if a source is deleted from a client, it's deleted on the server and all the other clients).

The archiving language was in there from some discussions about an "archiving" or "trash" workflow where we would allow journalists to keep sources/submissions in their local workstation even if they have been deleted by other journalists or on the server. We may one day support such a workflow, but for this initial development (for the MVP of the client) we should delete everywhere when a source or submission is deleted anywhere (see discussion here: #18).

I've updated this ticket to reflect this simpler scenario, apologies!

@ntoll
Contributor

ntoll commented Sep 12, 2018

Ack. On it. :-)

ntoll mentioned this issue Sep 13, 2018
@ninavizz
Member

ninavizz commented Sep 19, 2018

Nina to Gherkin behavior & provide wireframe... commenting to get this on my radar

@eloquence
Member

Resolved via #27, any improvements can be tracked separately.
