Migration guide to convert Uyuni installation from RPM to container setup? #8816
-
I'm aware of
-
And some more confusion: https://youtu.be/0eiQIDTGY-U?t=88 uses Leap 15.5 and 15.6, while the docs at https://www.uyuni-project.org/uyuni-docs/en/uyuni/installation-and-upgrade/container-deployment/uyuni/opensuse-leap-micro-deployment.html use only Leap Micro 5.5.
-
For now, we are only supporting the migration of existing servers to a new machine. The reason is that we haven't had the time to test an in-place migration yet. The migration was tested for Uyuni running on Podman and Kubernetes. Thank you for pointing out the documentation; I also think it's not clear enough, and we will work on improving it.
-
@rjmateus, got one more question on this for testing.
-
Migration is non-destructive. With migration from 4.3 to 5.0, the old machine will stay untouched. If something goes wrong in the migration, you can just revert the DNS and have the old SUMA working.
-
Does the migration script handle migrating imported/custom signed cert/key pairs for the server if they're in use? If not, do you have some guidance on how to modify settings in the container and deploy the associated CA chain certs?
Other customizations of concern would be the Salt pillars for copying GPG keys according to https://www.uyuni-project.org/uyuni-docs/en/uyuni/client-configuration/gpg-keys.html#_user_defined_gpg_keys (which currently doesn't seem to actually work for us even in an RPM installation), and generating new management GPG keys for signing repo data in the containerized model per https://www.uyuni-project.org/uyuni-docs/en/uyuni/administration/repo-metadata.html. How does that get done in the container environment? When can we expect updated documentation for doing that?
For anything that isn't supported by the migration script, I'm going to guess that we would use mgrctl term, similar to what you've indicated for the PAM setup in #8675 (which we also use), but are there any gotchas?
I'm also not clear on how updates are supposed to happen in the containerized environment. If we just update the container, but we've had to make these changes in the container overlay, won't they get lost with the container update? Having to redo all these changes manually with each new container release/update would be a big step backwards.
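Presumably the persistent state lives in podman volumes rather than in the container's overlay, which would mean volume-backed paths survive image updates, but I haven't confirmed that. A quick host-side check (the etc-rhn volume name is my guess at what mgradm creates, not something confirmed in the docs):

```
# on the container host: list the volumes created for the server container
podman volume ls
# show where one of them (e.g. etc-rhn, if that's its name) lives on the host
podman volume inspect etc-rhn --format '{{ .Mountpoint }}'
```

Anything under those mount points should persist across container updates; only changes made elsewhere in the image overlay would be lost.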
-
I think ppanon2022 makes some very valid points, all of which I share. The timescale seems very short for all of those reasons, plus the complexity of what may be quite evolved installations. Migration scripts in my experience rarely manage to resolve every single issue without fault, so it seems likely there will be a lot of testing and issues arising. That might mean several migration attempts for each user, and if they do hit problems, having to restore from backups, report the issue, and wait for the dev team to fix it. It feels unrealistic for that to happen within one month at any time of the year, but especially during the northern hemisphere's summer break.
Admission: I hadn't actually realised until now that this container change would be mandatory and that Uyuni would only be available as a container option going forwards, so perhaps I hadn't given it my full attention. But this makes me uneasy, and not every admin will have familiarity with the tools and methods going forwards. I would have expected some quite big notifications to make users aware of this, and at least a six-month transition period.
-
OK, this is actually pretty important to avoid filling the database disk. How are we supposed to be doing database backups to clear the WAL logs with the container? With the legacy install, we were setting WAL logging to minimal and doing VM-level snapshot backups that got deduped. If I try to do this with the container, PostgreSQL won't start up with the error … I mean, I guess I'm supposed to use …
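If the lost error text above is the one I'm assuming, it's PostgreSQL refusing to start because wal_level = minimal conflicts with streaming replication still being enabled. A minimal sketch of checking that inside the container, assuming the data directory is /var/lib/pgsql/data:

```
mgrctl term
# inside the container: see how WAL and replication are currently configured
grep -E '^\s*(wal_level|max_wal_senders)' /var/lib/pgsql/data/postgresql.conf
```

PostgreSQL only starts with wal_level = minimal when max_wal_senders = 0, so both settings would have to be changed together.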
-
I've got one more question. On one of our Uyuni systems, we are using an NFS volume for the /var/spacewalk package repos (and a separate virtual disk for /var/lib/pgsql). But mgr-storage-server takes two disk devices as arguments. How do we deal with the migration and that command if we still want that mapped storage to be an NFS drive? Is there something fundamental about podman that requires that to be a disk interface? Or is it safe to run that command using a small dummy disk, then move its contents into an NFS volume and replace the small disk mount with the NFS mount? We would still use a virtual local disk for the PostgreSQL database. If it does require a local disk interface, why?
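For the record, the workaround I had in mind is mounting the NFS export over the podman volume's backing directory on the host. The volume name and path below are assumptions based on podman's default storage layout, not something I've tested:

```
# assumption: the container's /var/spacewalk is backed by a podman volume
# named var-spacewalk under podman's default volume root
mount -t nfs nfs-server:/export/spacewalk \
  /var/lib/containers/storage/volumes/var-spacewalk/_data
```

(Plus a matching /etc/fstab entry so the mount survives reboots.)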
-
Um, our two openSUSE Leap Micro container hosts for Uyuni (in different sites) both appear to have spontaneously rebooted at 4:30 AM PST today. Is this typical? What would be the trigger?
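My own guess at the trigger, offered as an assumption rather than a confirmed diagnosis: Leap Micro applies transactional updates on a timer and reboots through rebootmgr, and a fixed early-morning reboot looks like a maintenance window. These host-side checks should show whether that's what happened:

```
# is the automatic transactional-update timer active, and when does it fire?
systemctl list-timers transactional-update.timer
# is rebootmgr managing reboots, and within what window?
rebootmgrctl status
rebootmgrctl get-window
```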
-
Hmm. I don't seem to be able to use spacecmd from within the container context with a config file.
uyuni-server:~ # spacecmd
Welcome to spacecmd, a command-line interface to Spacewalk.
Type: 'help' for a list of commands
'help <cmd>' for command-specific help
'quit' to quit
ERROR: Failed to connect to http://CARMD-NV-UYUNI1.sierrawireless.local/rpc/api
I do see some SELinux errors, but they are for SELinux context re-labeling, which seems to be triggered by the spacecmd run. It's possible it's misreporting the error. With no tcpdump packet capture available, it's not possible to be sure whether it's actually sending a packet or if it's very coarse-grained exception handling. It does take a long time to return the error.
/var/log/audit # audit2allow -a
#============= container_init_t ==============
#!!!! This avc is a constraint violation. You would need to modify the attributes of either the source or target types to allow this access.
#Constraint rule:
# constrain dir { create relabelfrom relabelto } ((u1 == u2 -Fail-) or (t1 == can_change_object_identity -Fail-) ); Constraint DENIED
# Possible cause is the source user (system_u) and target user (unconfined_u) are different.
# Possible cause is the source level (s0:c303,c621) and target level (s0) are different.
allow container_init_t container_file_t:dir relabelfrom;
#!!!! This avc is a constraint violation. You would need to modify the attributes of either the source or target types to allow this access.
#Constraint rule:
# constrain file { create relabelfrom relabelto } ((u1 == u2 -Fail-) or (t1 == can_change_object_identity -Fail-) ); Constraint DENIED
# Possible cause is the source user (system_u) and target user (unconfined_u) are different.
# Possible cause is the source level (s0:c303,c621) and target level (s0) are different.
Suggestions? I could add a policy to allow the relabeling, but that seems like it would significantly reduce the security of the container.
-
To use spacecmd from inside the container, please use localhost and call it with the insecure flag, to use HTTP instead of HTTPS.
-
I also hit this. Two options:
1. As Ricardo mentioned, using localhost and --nossl works. From the host:
   mgrctl exec -- spacecmd -s localhost --nossl -u ValidUser -p UserPass
2. Download spacecmd onto a different machine on your network and run commands from there. (The binaries are in most EL distros: "yum install spacecmd.noarch".) Then you can use spacecmd as you did on the RPM distro, e.g.:
   spacecmd -u ValidUser -p UserPass -s fqdn-of-original-server
In both cases you can create the old ~/.spacecmd/config file with the server, user, and password values to save entering them each time.
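For reference, a sketch of that config file. The key names match what spacecmd's debug output reports later in this thread; whether nossl wants 1 or true I haven't verified, and the values are obviously placeholders:

```
# ~/.spacecmd/config
[spacecmd]
server=localhost
username=ValidUser
password=UserPass
nossl=1
```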
-
Can you reach that endpoint with curl?
mgrctl term
Then, when inside the container:
curl http://localhost/rpc/api
should return with no error.
Paul-Andre Panon wrote:
That doesn't work for me
myuyuniserver:/home/itadmin # mgrctl exec -- spacecmd -s localhost --nossl -d
DEBUG: stdout is not a TTY, setting TERM=dumb
DEBUG: command=, return_value=False
DEBUG: Read configuration from /root/.spacecmd/config
DEBUG: Loading configuration section [spacecmd]
DEBUG: Current Configuration: {'server': 'localhost', 'username': 'swi_admin', 'password': '************', 'nossl': True}
DEBUG: Configuration section [localhost] does not exist
DEBUG: Connecting to http://localhost/rpc/api
ERROR: <class 'xmlrpc.client.ProtocolError'>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/spacecmd/misc.py", line 295, in do_login
self.api_version = self.client.api.getVersion()
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in call
return self.__send(self.__name, args)
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
verbose=self.__verbose
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1187, in single_request
dict(resp.getheaders())
xmlrpc.client.ProtocolError: <ProtocolError for localhost/rpc/api: 404 Not Found>
ERROR: Failed to connect to http://localhost/rpc/api
DEBUG: Error while connecting to the server http://localhost/rpc/api: <ProtocolError for localhost/rpc/api: 404 Not Found>
-
OK, one more question. Post-migration, we're having some problems updating some Red Hat 8 systems with Perl packages in AppStream modules. The AppStreams are enabled on the Uyuni system entry, and running dnf module list on the client shows the same modules enabled.
24 packages show as having newer versions needing to update from Uyuni, but none from the client when running a dnf update. Attempting to push the updates from Uyuni results in a code 02 result and …
The versions on the client seem older, i.e. el8.1.0 for DBD::SQLite and el8.3.0 (although one outdated package is also el8.6.0), whereas all the new packages it's trying to update to are el8.6.0. Dependencies seem OK otherwise. There are no filters on the LCM project used to create the channels assigned to this system. Looking at the AppStreams tab on the assigned LCM channel shows the expected option for those modules and nothing else. But the same is true of the sync channel used as a source.
Thus, it seems like the available package list being transmitted to the client does not include the packages in the AppStream modules that are enabled for the system, despite them showing up in the proposed updates list. The minion package version installed on the client is venv-salt-minion-3006.0-33.1.uyuni.x86_64, which seems to be the right one. I believe this used to work before the container migration but is broken now.
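A couple of client-side checks that might narrow down where the AppStream filtering breaks (the package name is just an illustrative guess at one of the affected ones):

```
# list the module streams the client itself believes are enabled
dnf module list --enabled
# ask dnf which version/repo it can actually see for an affected package
dnf repoquery --qf '%{name}-%{version}-%{release}.%{arch} (%{repoid})' perl-DBD-SQLite
```

If the el8.6.0 builds don't appear in the repoquery output at all, the repo metadata served to the client is missing them, rather than module filtering hiding them on the client side.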
-
Your post was very useful, Cedric, in solving my own issue, which I believe is related, as it's giving the same hostname error.
As a test of an affected system, I've run those commands you mention in the container, and this one:
cat /etc/rhn/rhn.conf 2>/dev/null | grep 'java.hostname' | cut -d' ' -f3 || true
produces double output:
uyuni-server:/ # cat /etc/rhn/rhn.conf 2>/dev/null | grep 'java.hostname' | cut -d' ' -f3 || true
ata-oxy-uyuni01.domain.com
ata-oxy-uyuni01.domain.com
On inspection, this file contains TWO java.hostname lines:
uyuni-server:/ # grep java.hostname /etc/rhn/rhn.conf
java.hostname = ata-oxy-uyuni01.domain.com
java.hostname = ata-oxy-uyuni01.domain.com
Removing one of these lines (commenting out doesn't fix it, obviously, since it's grepping) allowed me to run the update "mgradm update podman".
So it looks like the migration adds a valid java.hostname value to this file without checking whether one exists already.
I suspect this could be fixed within mgradm by restricting the output to a single value, i.e., changing that command to:
cat /etc/rhn/rhn.conf 2>/dev/null | grep 'java.hostname' | cut -d' ' -f3 | head -n 1 || true
Hope this helps.
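For anyone else hitting this, one way to drop the duplicate in place (assuming both lines are identical, as they were here):

```
# keep only the first java.hostname line in rhn.conf
awk '!(/^java\.hostname/ && seen++)' /etc/rhn/rhn.conf > /tmp/rhn.conf \
  && cat /tmp/rhn.conf > /etc/rhn/rhn.conf && rm /tmp/rhn.conf
```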
-
Please keep discussions focused on the original topic and start a new discussion for each new topic. Mixing multiple topics can make it hard to follow and may lead to confusion. While related discussions are common, organizing them separately ensures clarity and helps everyone stay on track. So if the original issue has been resolved, please mark this discussion as "Close as resolved".
-
With the release of 2024.05 we got two announcements that will affect existing Uyuni installations:
So what are the supported migration or upgrade paths to switch an existing installation from RPM to the new container setup? Do we have a migration guide that we can follow?