qubesd gone missing during make all #378

Closed
kushaldas opened this issue Dec 16, 2019 · 6 comments
@kushaldas
Contributor

I was running make all from master; while it was in progress, the securedrop-updater also started, and I got the following error:

    Summary for sd-svs-disp-buster-template
    ------------
    Succeeded: 9
    Failed:    0
    ------------
    Total states run:     9
    Total run time:  22.206 s
    sd-proxy-buster-template: ERROR (exception Failed to connect to qubesd service: [Errno 2] No such file or directory)
    Traceback (most recent call last):
      File "/bin/qubesctl", line 100, in <module>
        sys.exit(main())
      File "/bin/qubesctl", line 87, in main
        return max(exit_code, runner.run())
      File "/usr/lib/python2.7/site-packages/qubessalt/__init__.py", line 265, in run
        if 'state.highstate' not in self.command or self.has_config(vm):
      File "/usr/lib/python2.7/site-packages/qubessalt/__init__.py", line 258, in has_config
        top = caller.function('state.show_top')
      File "/usr/lib/python2.7/site-packages/salt/client/__init__.py", line 2071, in function
        return func(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/salt/modules/state.py", line 1616, in show_top
        info=st_.opts['pillar']['_errors'])
    salt.exceptions.CommandExecutionError: Pillar failed to render. Additional info follows:

    - Failed to load ext_pillar qvm_prefs: Failed to connect to qubesd service: [Errno 2] No such file or directory
    Makefile:11: recipe for target 'all' failed
    make: *** [all] Error 1
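
For context on the error itself (an editorial aside, not part of the original report): [Errno 2] is ENOENT, i.e. qubesctl found no qubesd control socket to connect to. A quick check in dom0 when this happens:

    # dom0: is qubesd actually running? The ENOENT above means its control
    # socket was missing at the moment qubesctl tried to connect.
    systemctl status qubesd.service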
@eloquence
Member

Ah yeah, this just bit me as well. Indeed seems to be due to the updater cron job running in the middle.

@kushaldas
Contributor Author

This time sd-app could not come up because the qrexec service was not responding. I had to remove the VM and redo the steps to get it working.
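
The recovery amounted to something like the following in dom0 (a sketch assuming the standard Qubes CLI and that sd-app is the stuck VM; the exact flags are illustrative):

    # dom0: drop the broken VM, then re-run provisioning from the checkout
    qvm-shutdown --wait sd-app
    qvm-remove -f sd-app
    make all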

@conorsch
Contributor

> seems to be due to the updater cron job running in the middle

Given that the legacy updater was removed in #430, and we haven't received any more reports of this particular problem, I'm inclined to close. Also worth noting that

> Failed to load ext_pillar qvm_prefs

may be helped by #530, but that's a bit of a stretch.

@eloquence
Member

Unfortunately that hypothesis can't be the whole story, as I've encountered this again since the legacy updater removal; see #514 (comment).

@conorsch
Contributor

A new lead: the logrotate config in dom0 will bounce the qubesd service, making it briefly unavailable; see dom0:/etc/logrotate.d/qubes. The cron.daily run invokes logrotate, which is why we saw the timing correlation noted above:

> Indeed seems to be due to the updater cron job running in the middle.
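
For illustration, a stanza along these lines in dom0:/etc/logrotate.d/qubes would produce exactly this failure window (a hypothetical sketch, not the verbatim file; the log path and the postrotate restart are assumptions):

    # Hypothetical logrotate stanza; the postrotate restart is the assumed
    # culprit: while qubesd restarts, its socket disappears and any client
    # (qubesctl, the updater) fails with "No such file or directory".
    /var/log/qubes/qubesd.log {
        daily
        missingok
        rotate 7
        compress
        postrotate
            systemctl restart qubesd.service
        endscript
    }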

Next time someone observes it, run the following in dom0; that should be enough to determine whether the qubesd service was bounced as part of the logrotate maintenance:

    sudo journalctl -ab | grep -P '(logrotate|Qubes OS daemon)' -A5 -B5

See also QubesOS/qubes-issues#5004

@conorsch
Contributor

The fix for this issue has been backported to Qubes 4.0 and is now available via the stable channel. The qubesd service no longer logs to a flat file, so logrotate doesn't need to bounce it for rotation. Closing as resolved, but please re-open if you encounter the "Failed to connect to qubesd service" error again.
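
One way to verify the new behavior on an updated system (assuming journald, as in Qubes 4.0 dom0): qubesd output should now land in the journal, where nothing needs to rotate, and therefore bounce, the daemon:

    # dom0: read qubesd logs from the journal for the current boot
    sudo journalctl -u qubesd.service -b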
