
How to use mgrctl to call spacecmd? #460

Open

digdilem-work opened this issue Oct 2, 2024 · 5 comments

digdilem-work commented Oct 2, 2024

I have migrated Uyuni 2024.08 to Podman.

One issue I have is that I wish to use spacecmd to schedule various events, as I did pre-containers. I thought I could use mgrctl to do so, but there seem to be problems here.

For example:

uyuni01:~/.spacecmd # mgrctl exec -- spacecmd
(Long pause)
ERROR: Failed to connect to http://ata-oxy-uyuni01.atass.com/rpc/api

Sometimes I can get partial replies without the --, like:

ata-oxy-uyuni01:~ # mgrctl exec spacecmd help whoamitalkingto
*** No help on whoamitalkingto whoamitalkingto
ata-oxy-uyuni01:~ # mgrctl exec spacecmd whoamitalkingto
(Long pause)
ERROR: Failed to connect to http://ata-oxy-uyuni01.atass.com/rpc/api

I have tried specifying credentials and a server with

mgrctl exec -- "spacecmd -u validuser -p password -s uyuni01.fqdn"

and variations thereof, but again I am not able to get spacecmd to do anything but reply to help.

If I try "mgrctl term" to open a terminal in the container and run spacecmd there, I get:

Welcome to spacecmd, a command-line interface to Spacewalk.

Type: 'help' for a list of commands
      'help <cmd>' for command-specific help
      'quit' to quit

followed by a long pause before I get the same failure to reach the API.

Anyone able to help, please?

rjmateus (Member) commented Oct 3, 2024

Can you try mgrctl exec -it "spacecmd -h"?

The whole spacecmd command should be in quotes.
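Why the quotes (or the earlier --) matter: mgrctl parses its own flags, so an unquoted -h is liable to be claimed by mgrctl itself rather than forwarded to spacecmd. A hypothetical stand-in sketch of that behaviour (the wrapper function is an illustration, not mgrctl, since neither mgrctl nor spacecmd exists outside an Uyuni host):

```shell
# Hypothetical wrapper standing in for a flag-parsing CLI like mgrctl.
# An unquoted -h is seen as the wrapper's own flag; a quoted "spacecmd -h"
# arrives as a single opaque command string to run in the container.
wrapper() {
  if [ "$2" = "-h" ]; then
    echo "wrapper: showing wrapper help (flag swallowed)"
  else
    echo "container runs: $1"
  fi
}

wrapper spacecmd -h        # -h parsed by the wrapper itself
wrapper "spacecmd -h"      # whole command line forwarded intact
```

The same effect is achieved with --, which conventionally tells a CLI to stop parsing flags and pass everything that follows through verbatim.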

digdilem-work (Author) commented

That outputs the expected response (below), so pathing to spacecmd is not the problem.

However, any operation that requires spacecmd to talk to the API fails, because there is no network route from the container to the https:// API.

I have found a workaround for myself and this issue - I have installed spacecmd from the EPEL repos onto a different vm entirely, running Rocky 8. Spacecmd from there can talk to the Podman Uyuni's API perfectly, so I've moved my automations across to that.

So I'm sorted - but happy to help test further if required for the benefit of others.

Requested output:

uyuni01:~/.spacecmd # mgrctl exec -it "spacecmd -h"
usage: spacecmd [options] [command] [-- [command options]]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        config file to use [default: ~/.spacecmd/config]
  -u USERNAME, --username USERNAME
                        use this username to connect to the server
  -p PASSWORD, --password PASSWORD
                        use this password to connect to the server (insecure).
                        Use config instead or spacecmd will ask.
  -s SERVER, --server SERVER
                        connect to this server [default: local hostname]
  --nossl               use HTTP instead of HTTPS
  --nohistory           do not store command history
  -y, --yes             answer yes for all questions
  -q, --quiet           print only error messages
  -d, --debug           print debug messages (can be passed multiple times)

Example of a command that requires spacecmd to reach the API:

uyuni01:~/.spacecmd # mgrctl exec -it "spacecmd whoamitalkingto"
(Long pause/timeout)
ERROR: Failed to connect to http://ata-oxy-uyuni01.atass.com/rpc/api

aaannz (Contributor) commented Oct 3, 2024

spacecmd from within the container has to talk to localhost; otherwise it triggers the hairpin NAT problem.

Do you happen to have some custom configuration for spacecmd?

# mgrctl exec -i "spacecmd whoamitalkingto"
INFO: Connected to http://localhost/rpc/api as admin
localhost
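For context on why the scheme and host matter: spacecmd is a thin client over the Uyuni/Spacewalk XML-RPC API at /rpc/api. A minimal sketch of talking to that endpoint directly with Python's standard library (the in-container plain-HTTP localhost URL and the credentials are assumptions taken from this thread, not verified API documentation):

```python
import xmlrpc.client

# In-container endpoint per this thread: plain HTTP on localhost,
# which is why https://localhost/... and the external FQDN both fail.
API_URL = "http://localhost/rpc/api"

client = xmlrpc.client.ServerProxy(API_URL)

# No network traffic happens until a method is actually invoked, e.g.:
#   key = client.auth.login("validuser", "longpassword")
#   print(client.api.systemVersion())
#   client.auth.logout(key)
```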

digdilem-work (Author) commented

Aha! That was the missing piece. This is now working, but needed a couple of extra hoops jumped through.

  • Yes, I had a ~/.spacecmd/config file, as per the docs, to avoid specifying credentials in every script that uses it. (Block below.)
  • I didn't know that localhost was needed from inside the container. I don't think that is documented anywhere (if it is, forgive me) and I've always used the FQDN before.
  • Even when overriding that config file with -s localhost (and later changing the server= line within the config), this still failed, but with a new error message:

uyuni01:~# mgrctl exec -it "spacecmd -s localhost whoamitalkingto"
ERROR: Failed to connect to https://localhost/rpc/api

  • BUT! Finally, success when I added --nossl:

uyuni01:~# mgrctl exec -it "spacecmd -s localhost --nossl whoamitalkingto"
INFO: Connected to http://localhost/rpc/api as validuser
localhost

THAT is what works! spacecmd appears to default to https://, which is not served on localhost inside the container. The other spacecmd commands are now working as expected as well, so thanks for helping clear up that puzzle.

I do think better documentation could have saved some wasted time here, but appreciate it's not possible to cover everything, and I hope this issue helps other spacecmd users.

Feel free to close this, or re-assign to the docs team if they're able to add a paragraph about how to use spacecmd in containers - or adapt the existing spacecmd documentation that is now misleading.

Thank you

Sample contents of ~/.spacecmd/config

[spacecmd]
server=uyuni.fqdn
username=validuser
password=longpassword
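Putting the thread's findings together, an in-container config would presumably look like the following. This is my assumption based on the working command line above (server=localhost and a nossl option mirroring the --nossl flag); check the spacecmd documentation for the exact option names before relying on it:

```ini
[spacecmd]
server=localhost
nossl=1
username=validuser
password=longpassword
```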

aaannz (Contributor) commented Oct 3, 2024

Indeed, it's not documented and it should be, particularly for spacecmd used from inside the container environment. When spacecmd is used from anywhere else, nothing changes.
Also take a look at mgrctl api if you are interested only in API calls. The upcoming release will also have some improvements w.r.t. session handling and error logging.

There were quite a lot of changes related to internally using localhost instead of the FQDN.

However, I am afraid we will need to change this once more in the future. When new containers are introduced and the product is split up, we'll again need to change the internal hostname to something other than localhost.


3 participants