
LVMActivationController fails repeatedly with exit status 5 #9365

Closed
jfroy opened this issue Sep 24, 2024 · 7 comments · Fixed by #9422
jfroy commented Sep 24, 2024

Bug Report

Description

On a single-node cluster with Talos v1.8.0 and a Rook-Ceph cluster composed of 8 encrypted disks with one OSD per disk, Talos fails to activate the LVM volumes at boot. The controller keeps retrying and makes no progress.

In beta versions of v1.8.0, the LVM volumes also did not become active at boot, but there was no attempt to activate them either. The activation code was introduced late in the 1.8 cycle (see #9300), and this is the first time I have run a build with the new controller.

Workaround

I can run a privileged Alpine pod and issue vgchange -a y to activate all the LVM volumes. This does not let the controller make progress, but the Ceph OSDs do start and the Ceph cluster becomes healthy.
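A minimal sketch of that workaround (the pod name and the --overrides payload are my own choices for illustration; any privileged pod with the lvm2 tools works):

$ kubectl run lvm-activate --rm -it --image=alpine --restart=Never \
    --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"lvm-activate","image":"alpine","command":["sh"],"stdin":true,"tty":true,"securityContext":{"privileged":true}}]}}'
/ # apk add --no-cache lvm2      # install the LVM userspace tools inside the pod
/ # vgchange -a y                # activate every inactive LVM volume group on the host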

Logs

192.168.1.13: user: warning: [2024-09-24T02:44:35.693842796Z]: [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/nvme0n1"}
192.168.1.13: user: warning: [2024-09-24T02:44:35.772287796Z]: [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/nvme1n1"}
192.168.1.13: user: warning: [2024-09-24T02:44:35.872301796Z]: [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/nvme2n1"}
192.168.1.13: user: warning: [2024-09-24T02:44:35.976264796Z]: [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/nvme4n1"}
192.168.1.13: user: warning: [2024-09-24T02:44:36.064223796Z]: [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/nvme5n1"}
192.168.1.13: user: warning: [2024-09-24T02:44:36.188478796Z]: [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/nvme6n1"}
192.168.1.13: user: warning: [2024-09-24T02:44:36.276572796Z]: [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/nvme7n1"}
192.168.1.13: user: warning: [2024-09-24T02:44:36.364331796Z]: [talos] controller failed {"component": "controller-runtime", "controller": "block.LVMActivationController", "error": "7 errors occurred:\n\t* failed to check if LVM volume backed by device /dev/nvme0n1 needs activation: exit status 5: File descriptor 36 (socket:[18689]) leaked on lvm invocation. Parent PID 1: /sbin/init\nFile descriptor 102 (pipe:[575]) leaked on lvm invocation. Parent PID 1: /sbin/init\nFile descriptor 103 (pipe:[575]) leaked on lvm invocation. Parent PID 1: /sbin/init\n\n\t* failed to check if LVM volume backed by device /dev/nvme1n1 needs activation: exit status 5: File descriptor 36 (socket:[18689]) leaked on lvm invocation. Parent PID 1: /sbin/init\nFile descriptor 102 (pipe:[575]) leaked on lvm invocation. Parent PID 1: /sbin/init\nFile descriptor 103 (pipe:[575]) leaked on lvm invocation. Parent PID 1: /sbin/init\n\n\t* failed to check if LVM volume backed by device /dev/nvme2n1 needs activation: exit status 5: File descriptor 36 (so...

Environment

  • Talos version:
    Client:
        Tag:         v1.7.6
        SHA:         ae67123a
        Built:       
        Go version:  go1.22.5
        OS/Arch:     linux/amd64
    Server:
        NODE:        192.168.1.13
        Tag:         v1.8.0-jfroy.1
        SHA:         d941b1c8
        Built:       
        Go version:  go1.22.7
        OS/Arch:     linux/amd64
        Enabled:     RBAC
    
  • Kubernetes version: 1.31.1
  • Platform: baremetal

support.zip

jfroy commented Sep 24, 2024

There is a good chance this is a dup of #9300, but the details are different, so I filed a separate issue.

tpretz commented Sep 29, 2024

Same here on 1.8.0, with a drive used for rook-ceph; I noticed the loss of OSDs because the /dev/mapper nodes were not configured.

user: warning: [2024-09-29T09:18:42.27049804Z]: [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/sdc"}
user: warning: [2024-09-29T09:18:42.32296204Z]: [talos] controller failed {"component": "controller-runtime", "controller": "block.LVMActivationController", "error": "1 error occurred:\n\t* failed to check if LVM volume backed by device /dev/sdc needs activation: exit status 5: File descriptor 35 (socket:[1666]) leaked on lvm invocation. Parent PID 1: /sbin/init\nFile descriptor 76 (pipe:[3069]) leaked on lvm invocation. Parent PID 1: /sbin/init\nFile descriptor 77 (pipe:[3069]) leaked on lvm invocation. Parent PID 1: /sbin/init\n\n\n"}

smira commented Sep 30, 2024

I can't fully reproduce this; I'm not sure what the Ceph version and OSD setup are:

$ talosctl -n 172.20.0.5 get dv     
NODE         NAMESPACE   TYPE               ID        VERSION   TYPE        SIZE     DISCOVERED   LABEL                                    PARTITIONLABEL
172.20.0.5   runtime     DiscoveredVolume   dm-0      1         disk        11 GB    luks                                                  
172.20.0.5   runtime     DiscoveredVolume   dm-1      1         disk        11 GB    luks                                                  
172.20.0.5   runtime     DiscoveredVolume   dm-2      1         disk        11 GB    bluestore                                             
172.20.0.5   runtime     DiscoveredVolume   dm-3      1         disk        11 GB    bluestore                                             
172.20.0.5   runtime     DiscoveredVolume   loop0     1         disk        78 MB    squashfs                                              
172.20.0.5   runtime     DiscoveredVolume   nvme0n1   1         disk        11 GB    lvm2-pv      cm8gaU-jiyj-C18T-8rsh-Wvq8-eJMf-mdS4eB   
172.20.0.5   runtime     DiscoveredVolume   nvme0n2   1         disk        11 GB    lvm2-pv      Wfg1iN-NhFu-vDHx-1mck-bq95-diQ3-o04Rfq   
[    4.460633] [talos] activating LVM volume {"component": "controller-runtime", "controller": "block.LVMActivationController", "name": "ceph-72a65210-c3d5-443c-bcc4-4ba069666dc1"}
[    4.508764] [talos] checking device for LVM volume activation {"component": "controller-runtime", "controller": "block.LVMActivationController", "device": "/dev/nvme0n2"}
[    4.588751] [talos] activating LVM volume {"component": "controller-runtime", "controller": "block.LVMActivationController", "name": "ceph-bec485da-9222-4f93-a7af-97a120de6cae"}

jfroy commented Sep 30, 2024

I am using rook-ceph v1.15.2 with the cluster and operator charts. Here's the storage configuration:

      storage:
        useAllNodes: false
        useAllDevices: false
        config:
          osdsPerDevice: "1"
          encryptedDevice: "true"
        nodes:
          - name: <node>
            devices:
              - name: /dev/disk/by-id/nvme-eui.<digits>
              - name: /dev/disk/by-id/nvme-eui.<digits>
              - name: /dev/disk/by-id/nvme-eui.<digits>
              - name: /dev/disk/by-id/nvme-eui.<digits>
              - name: /dev/disk/by-id/nvme-eui.<digits>
              - name: /dev/disk/by-id/nvme-eui.<digits>
              - name: /dev/disk/by-id/nvme-eui.<digits>
              - name: /dev/disk/by-id/nvme-eui.<digits>

Here's my dv output:

NODE           NAMESPACE   TYPE               ID          VERSION   TYPE        SIZE     DISCOVERED   LABEL                                    PARTITIONLABEL
192.168.1.13   runtime     DiscoveredVolume   dm-0        1         disk        88 MB    xfs          STATE                                    
192.168.1.13   runtime     DiscoveredVolume   dm-1        1         disk        999 GB   xfs          EPHEMERAL                                
192.168.1.13   runtime     DiscoveredVolume   dm-10       1         disk        3.8 TB   bluestore                                             
192.168.1.13   runtime     DiscoveredVolume   dm-11       1         disk        3.8 TB   bluestore                                             
192.168.1.13   runtime     DiscoveredVolume   dm-12       1         disk        3.8 TB   bluestore                                             
192.168.1.13   runtime     DiscoveredVolume   dm-13       1         disk        3.8 TB   bluestore                                             
192.168.1.13   runtime     DiscoveredVolume   dm-14       1         disk        3.8 TB   bluestore                                             
192.168.1.13   runtime     DiscoveredVolume   dm-15       1         disk        3.8 TB   bluestore                                             
192.168.1.13   runtime     DiscoveredVolume   dm-16       1         disk        3.8 TB   bluestore                                             
192.168.1.13   runtime     DiscoveredVolume   dm-17       1         disk        3.8 TB   bluestore                                             
192.168.1.13   runtime     DiscoveredVolume   dm-2        1         disk        3.8 TB   luks                                                  
192.168.1.13   runtime     DiscoveredVolume   dm-3        1         disk        3.8 TB   luks                                                  
192.168.1.13   runtime     DiscoveredVolume   dm-4        1         disk        3.8 TB   luks                                                  
192.168.1.13   runtime     DiscoveredVolume   dm-5        1         disk        3.8 TB   luks                                                  
192.168.1.13   runtime     DiscoveredVolume   dm-6        1         disk        3.8 TB   luks                                                  
192.168.1.13   runtime     DiscoveredVolume   dm-7        1         disk        3.8 TB   luks                                                  
192.168.1.13   runtime     DiscoveredVolume   dm-8        1         disk        3.8 TB   luks                                                  
192.168.1.13   runtime     DiscoveredVolume   dm-9        1         disk        3.8 TB   luks                                                  
192.168.1.13   runtime     DiscoveredVolume   loop0       1         disk        148 kB   squashfs                                              
192.168.1.13   runtime     DiscoveredVolume   loop2       1         disk        2.6 MB   squashfs                                              
192.168.1.13   runtime     DiscoveredVolume   loop4       1         disk        7.4 MB   squashfs                                              
192.168.1.13   runtime     DiscoveredVolume   loop5       1         disk        244 MB   squashfs                                              
192.168.1.13   runtime     DiscoveredVolume   loop6       1         disk        6.8 MB   squashfs                                              
192.168.1.13   runtime     DiscoveredVolume   loop7       1         disk        68 MB    squashfs                                              
192.168.1.13   runtime     DiscoveredVolume   nvme0n1     1         disk        3.8 TB   lvm2-pv      MMEtjc-Beuo-7dTy-3CC4-UoAe-jLMJ-14muZ0   
192.168.1.13   runtime     DiscoveredVolume   nvme1n1     1         disk        3.8 TB   lvm2-pv      GGAzaG-X77x-2sdk-ofJs-wGbj-5oT9-lbl0NQ   
192.168.1.13   runtime     DiscoveredVolume   nvme2n1     1         disk        3.8 TB   lvm2-pv      8dCOWm-XT3k-zA1Y-a0wr-VYpL-us6I-Eq5Bu0   
192.168.1.13   runtime     DiscoveredVolume   nvme3n1     1         disk        3.8 TB   lvm2-pv      DVP4kA-Tofw-Yzzu-BobS-cajW-FKVt-LGrLLv   
192.168.1.13   runtime     DiscoveredVolume   nvme4n1     1         disk        3.8 TB   lvm2-pv      fuGJ9v-Dz1S-SKaT-ixyI-FXx6-roc3-KSl0nH   
192.168.1.13   runtime     DiscoveredVolume   nvme5n1     1         disk        3.8 TB   lvm2-pv      rNX7Od-557z-jfU0-zipb-Awn1-14ie-AnKUd2   
192.168.1.13   runtime     DiscoveredVolume   nvme6n1     1         disk        3.8 TB   lvm2-pv      UrZ4Xd-MCQV-OmBP-yGrf-nC5m-40RX-JjJ96S   
192.168.1.13   runtime     DiscoveredVolume   nvme7n1     1         disk        3.8 TB   lvm2-pv      Nlrzu0-IhRS-W5OU-kUda-xehj-eZ7a-skE9gx   
192.168.1.13   runtime     DiscoveredVolume   nvme8n1     1         disk        2.0 TB                                                         
192.168.1.13   runtime     DiscoveredVolume   nvme9n1     1         disk        1.0 TB   gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   nvme9n1p1   1         partition   1.2 GB   vfat                                                  EFI
192.168.1.13   runtime     DiscoveredVolume   nvme9n1p2   5         partition   524 kB   talosmeta                                             META
192.168.1.13   runtime     DiscoveredVolume   nvme9n1p3   1         partition   105 MB   luks                                                  STATE
192.168.1.13   runtime     DiscoveredVolume   nvme9n1p4   1         partition   999 GB   luks                                                  EPHEMERAL
192.168.1.13   runtime     DiscoveredVolume   rbd0        1         disk        107 GB   extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd1        1         disk        11 GB    extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd10       1         disk        2.1 GB   extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd11       1         disk        54 GB    extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd2        1         disk        215 GB   extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd3        1         disk        11 GB    extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd4        1         disk        1.1 GB   extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd5        1         disk        2.1 GB   extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd6        1         disk        11 GB    extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd7        1         disk        1.1 GB   extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd8        1         disk        1.1 GB   extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   rbd9        1         disk        215 GB   extfs                                                 
192.168.1.13   runtime     DiscoveredVolume   sda         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sda1        1         partition   18 TB    zfs          0000000000000000                         zfs-92fe474a100e25c7
192.168.1.13   runtime     DiscoveredVolume   sda9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdb         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdb1        1         partition   18 TB    zfs          0000000000000000                         zfs-a6f4899b27ecdbcf
192.168.1.13   runtime     DiscoveredVolume   sdb9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdc         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdc1        1         partition   18 TB    zfs          0000000000000000                         zfs-2dbf510074d818a8
192.168.1.13   runtime     DiscoveredVolume   sdc9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdd         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdd1        1         partition   18 TB    zfs          0000000000000000                         zfs-02ea63df0fb4a5c4
192.168.1.13   runtime     DiscoveredVolume   sdd9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sde         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sde1        1         partition   18 TB    zfs          0000000000000000                         zfs-04fb16276bd6922d
192.168.1.13   runtime     DiscoveredVolume   sde9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdf         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdf1        1         partition   18 TB    zfs          0000000000000000                         zfs-942d875d0b236cb6
192.168.1.13   runtime     DiscoveredVolume   sdf9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdg         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdg1        1         partition   18 TB    zfs          0000000000000000                         zfs-5a666efa6b9504f4
192.168.1.13   runtime     DiscoveredVolume   sdg9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdh         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdh1        1         partition   18 TB    zfs          0000000000000000                         zfs-43aff1ef9a01c700
192.168.1.13   runtime     DiscoveredVolume   sdh9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdi         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdi1        1         partition   18 TB    zfs          0000000000000000                         zfs-99abcaa306915738
192.168.1.13   runtime     DiscoveredVolume   sdi9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdj         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdj1        1         partition   18 TB    zfs          0000000000000000                         zfs-9094d8d719bf1b9f
192.168.1.13   runtime     DiscoveredVolume   sdj9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdk         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdk1        1         partition   18 TB    zfs          0000000000000000                         zfs-81464007a4e9f998
192.168.1.13   runtime     DiscoveredVolume   sdk9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdl         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdl1        1         partition   18 TB    zfs          0000000000000000                         zfs-5ab8efd958b80541
192.168.1.13   runtime     DiscoveredVolume   sdl9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdm         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdm1        1         partition   18 TB    zfs          0000000000000000                         zfs-d13ade567e7337ea
192.168.1.13   runtime     DiscoveredVolume   sdm9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdn         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdn1        1         partition   18 TB    zfs          0000000000000000                         zfs-63f5caec739ea1ba
192.168.1.13   runtime     DiscoveredVolume   sdn9        1         partition   67 MB                                                          
192.168.1.13   runtime     DiscoveredVolume   sdo         1         disk        18 TB    gpt                                                   
192.168.1.13   runtime     DiscoveredVolume   sdo1        1         partition   18 TB    zfs          0000000000000000                         zfs-59babe01ad624458
192.168.1.13   runtime     DiscoveredVolume   sdo9        1         partition   67 MB                                                          

jfroy commented Oct 2, 2024

@smira OK, I think I know the root cause. At some point I made a change to the node, either a firmware update or, more likely, a kernel update or configuration change, that impacted PCIe enumeration. This resulted in a different assignment of nvme device nodes to physical devices (e.g. nvme1 became nvme2 and vice versa).

lvm keeps "pvs_online" files when --cache is specified, to speed up lookups of online PVs. These files are stored in PVS_ONLINE_DIR, defined as `#define PVS_ONLINE_DIR DEFAULT_RUN_DIR "/pvs_online"`, which for Talos's lvm2 package resolves to /var/run/lvm/pvs_online.

When pvscan is asked to cache, it checks whether a "pvs_online" file already exists for the device, and if so, performs some quick validation on it. This basically boils down to checking whether the major and minor numbers of the device match what's recorded in the file for the given PVID.
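A hedged illustration of where this state lives and how the per-device check can be re-run by hand (the directory follows from the PVS_ONLINE_DIR resolution above and the file name from the log below; the listing is illustrative, not captured from the node):

$ ls /var/run/lvm/pvs_online/
MMEtjcBeuo7dTy3CC4UoAejLMJ14muZ0
$ pvscan --cache /dev/nvme2n1    # re-runs the online check for a single PV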

Because of the device reordering that happened on the node, this check fails:

pvscan[] device/online.c  Create pv online: /var/run/lvm/pvs_online/MMEtjcBeuo7dTy3CC4UoAejLMJ14muZ0 259:13 /dev/nvme2n1
pvscan[] device/online.c  pvscan[] PV /dev/nvme2n1 259:13 is duplicate for PVID MMEtjcBeuo7dTy3CC4UoAejLMJ14muZ0 on 259:7 /dev/nvme3n1.
pvscan[] pvscan.c  pvscan[] PV /dev/nvme2n1 failed to create online file.

The pvscan command then fails with exit code 5 (ECMD_FAILED).

--

I think the lvm2 design assumes that these "pvs_online" files do not survive reboot. Indeed, according to FHS 3.0, /var/run has the same semantics as /run [1] (on most modern systems it is implemented as a symlink to /run), and FHS 3.0 states that

"Files under [/run] must be cleared (removed or truncated as appropriate) at the beginning of the boot process." [2]

Unfortunately, while /run is a tmpfs on Talos and does respect these semantics, /var/run is a normal directory on the EPHEMERAL partition and thus survives reboots.

I think the right fix for this issue, and potentially many others, is to change Talos's rootfs so that /var/run is a symlink to /run, a bind mount of /run, a tmpfs mount point, or a directory explicitly cleared by machined early during boot.
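A hedged sketch of the symlink variant (this would have to happen when the rootfs is assembled or very early in machined's boot, not on a running node; shown only to make the proposal concrete):

# replace the persistent directory with a symlink so /var/run inherits the tmpfs semantics of /run
$ rm -rf /var/run
$ ln -s /run /var/run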

Footnotes

  1. https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s13.html

  2. https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s15.html

smira commented Oct 2, 2024

@jfroy I actually found the problem yesterday and never got back to the issue. It's actually a bit worse.

There are two separate issues:

  • /var/run in Talos lives on the persistent /var, while it should be a tmpfs or be cleaned up on each boot (that should be fixed anyway); this makes the lvm cache persistent and breaks activation
  • /run is in fact a tmpfs in Talos, and lvm should be reconfigured to use it, which also removes an unneeded dependency on /var (a hedged build-time sketch follows below)
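A hedged sketch of that second point (the configure flag is my assumption about how the lvm2 build in pkgs would be adjusted; the actual change may use a different mechanism):

# build lvm2 with its runtime directory on the tmpfs /run instead of /var/run
$ ./configure --with-default-run-dir=/run ...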

smira commented Oct 2, 2024

So the issue takes two Talos reboots to reproduce. E.g. if I install Rook/Ceph with encrypted drives (to trigger LVM; otherwise it creates bluestore directly on the device), everything is fine at first (as Ceph activates LVM itself on creation).

On the second reboot, everything is activated correctly, as /var/run/lvm doesn't have the entries yet, but on the third reboot the stale cache in /var/run/lvm leads to one of two outcomes:

  • lvm thinks the volume is activated already (so no errors, and no activation)
  • the lvm cache doesn't match the block devices (your case), so we see an activation error

smira self-assigned this Oct 2, 2024
smira added a commit to smira/pkgs that referenced this issue Oct 2, 2024
See siderolabs/talos#9365

This allows to break dependency on `/var` availability, and also
workaround issue with `/var/run` being persistent on Talos right now
(which is going to be fixed as well).

Signed-off-by: Andrey Smirnov <[email protected]>
jfroy pushed a commit to jfroy/siderolabs-pkgs that referenced this issue Oct 2, 2024
See siderolabs/talos#9365

This allows to break dependency on `/var` availability, and also
workaround issue with `/var/run` being persistent on Talos right now
(which is going to be fixed as well).

Signed-off-by: Andrey Smirnov <[email protected]>
smira added a commit to smira/talos that referenced this issue Oct 3, 2024
For new installs, simply symlink to `/run` (which is `tmpfs`).

For old installs, simulate by cleaning up the contents.

Fixes siderolabs#9432

Related to siderolabs#9365

Signed-off-by: Andrey Smirnov <[email protected]>
smira added a commit to smira/talos that referenced this issue Oct 3, 2024
For new installs, simply symlink to `/run` (which is `tmpfs`).

For old installs, simulate by cleaning up the contents.

Fixes siderolabs#9432

Related to siderolabs#9365

Signed-off-by: Andrey Smirnov <[email protected]>
smira added a commit to smira/pkgs that referenced this issue Oct 7, 2024
See siderolabs/talos#9365

This allows to break dependency on `/var` availability, and also
workaround issue with `/var/run` being persistent on Talos right now
(which is going to be fixed as well).

Signed-off-by: Andrey Smirnov <[email protected]>
(cherry picked from commit ae205aa)
smira added a commit to smira/talos that referenced this issue Oct 8, 2024
For new installs, simply symlink to `/run` (which is `tmpfs`).

For old installs, simulate by cleaning up the contents.

Fixes siderolabs#9432

Related to siderolabs#9365

Signed-off-by: Andrey Smirnov <[email protected]>
(cherry picked from commit f711907)