
How to confirm RBD PVC encryption is operating? #10371

Closed

jfathman opened this issue Jun 1, 2022 · 6 comments

jfathman commented Jun 1, 2022

Thank you for your excellent open source software.

I am using RBD PVC encryption with rook-ceph v1.9.2 on microk8s v1.23.3-2, running on Debian 10.11 (buster) cluster nodes.

A file path backed by an encrypted PVC works properly (write/read) for an app running in the cluster. Good.

The StorageClass (SC) runtime config indicates encryption is enabled:

jfathman@server-1:~$ kubectl get sc rook-ceph-block-encrypted -o yaml | grep -B8 -A2 -i encrypted: | grep -v csi.storage 
parameters:
  clusterID: rook-ceph
  encrypted: "true"
  encryptionKMSID: user-secret-metadata
  fsName: myfs

The PersistentVolume (PV) runtime config similarly indicates encryption is enabled:

jfathman@server-1:~$ kubectl get pv pvc-dfa4dd20-c290-4ae9-9dde-34ebc88ad9e4 -o yaml | grep -B2 -A2 encrypted:
    volumeAttributes:
      clusterID: rook-ceph
      encrypted: "true"
      encryptionKMSID: user-secret-metadata
      fsName: myfs

How can I confirm externally that encryption is operating, and perhaps what level of encryption is being used? Is there a command that can be run in the rook-ceph toolbox or on the host Linux command line?

Note that this is RBD PVC encryption, not OSD encryption.

Thanks.


Madhu-1 commented Jun 2, 2022

@jfathman if you exec into the application pod and run df -h, you will see that the mount point is a device-mapper path (i.e. the encrypted path). Alternatively, if you run lsblk in the rbdplugin pod on the node where the app is running, you will see output like the below:

$ lsblk
rbd0                            253:0    0    1G  0 disk
└─luks-pvc-8a710f4c934811e9 252:0    0 1020M  0 crypt /var/lib/kubelet/pods/9eaceaef-936c-11e9-b396-005056af3de0/volumes/kubernetes.io~csi/pvc-8a710f4c934811e9/mount
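
As an additional check (assuming cryptsetup is available in the csi-rbdplugin container or on the node, and using the mapper name from the lsblk output above), cryptsetup can report the mapping type, cipher, and key size, which also answers the "what level of encryption" part of the question:

$ cryptsetup status luks-pvc-8a710f4c934811e9

ceph-csi sets up the RBD image with LUKS via dm-crypt, so the output should show a LUKS type and cipher (typically aes-xts-plain64 with the LUKS defaults).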


jfathman commented Jun 2, 2022

@Madhu-1 Thank you for your assistance.

When I exec df -h in the application pods, this is what I see:

jfathman@server-1:~$ kubectl get pods -A -o wide | grep -e NAME -e nginx
NAMESPACE     NAME                                                 READY   STATUS      RESTARTS        AGE     IP             NODE       NOMINATED NODE   READINESS GATES
careview      nginx-deployment-686d7648c7-4rh7z                    1/1     Running     1 (5h15m ago)   46h     10.1.178.110   server-3   <none>           <none>
careview      nginx-deployment-686d7648c7-4gsbd                    1/1     Running     1 (5h15m ago)   46h     10.1.125.252   server-1   <none>           <none>
careview      nginx-deployment-686d7648c7-k5jvd                    1/1     Running     1 (5h15m ago)   46h     10.1.37.91     server-2   <none>           <none>
jfathman@server-1:~$ kubectl exec -it nginx-deployment-686d7648c7-4rh7z -n careview -- df -h
Filesystem                                                                                                                                                 Size  Used Avail Use% Mounted on
overlay                                                                                                                                                    125G   19G  100G  16% /
tmpfs                                                                                                                                                       64M     0   64M   0% /dev
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /sys/fs/cgroup
/dev/sda1                                                                                                                                                  125G   19G  100G  16% /etc/hosts
shm                                                                                                                                                         64M     0   64M   0% /dev/shm
10.152.183.122:6789,10.152.183.209:6789,10.152.183.15:6789:/volumes/csi/csi-vol-8a9a2d86-e1bb-11ec-815d-1ee9a6f0ba76/b5783a5f-3d5d-458a-9196-e04a47c6dfb1  9.5G     0  9.5G   0% /usr/share/nginx/html
tmpfs                                                                                                                                                      2.9G   12K  2.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /proc/acpi
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /sys/firmware
jfathman@server-1:~$ kubectl exec -it nginx-deployment-686d7648c7-4gsbd -n careview -- df -h                                 
Filesystem                                                                                                                                                 Size  Used Avail Use% Mounted on
overlay                                                                                                                                                    125G   24G   95G  21% /
tmpfs                                                                                                                                                       64M     0   64M   0% /dev
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /sys/fs/cgroup
/dev/sda1                                                                                                                                                  125G   24G   95G  21% /etc/hosts
shm                                                                                                                                                         64M     0   64M   0% /dev/shm
10.152.183.122:6789,10.152.183.209:6789,10.152.183.15:6789:/volumes/csi/csi-vol-8a9a2d86-e1bb-11ec-815d-1ee9a6f0ba76/b5783a5f-3d5d-458a-9196-e04a47c6dfb1  9.5G     0  9.5G   0% /usr/share/nginx/html
tmpfs                                                                                                                                                      2.9G   12K  2.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /proc/acpi
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /sys/firmware
jfathman@server-1:~$ kubectl exec -it nginx-deployment-686d7648c7-k5jvd -n careview -- df -h                                 
Filesystem                                                                                                                                                 Size  Used Avail Use% Mounted on
overlay                                                                                                                                                    125G   24G   96G  20% /
tmpfs                                                                                                                                                       64M     0   64M   0% /dev
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /sys/fs/cgroup
/dev/sda1                                                                                                                                                  125G   24G   96G  20% /etc/hosts
shm                                                                                                                                                         64M     0   64M   0% /dev/shm
10.152.183.122:6789,10.152.183.209:6789,10.152.183.15:6789:/volumes/csi/csi-vol-8a9a2d86-e1bb-11ec-815d-1ee9a6f0ba76/b5783a5f-3d5d-458a-9196-e04a47c6dfb1  9.5G     0  9.5G   0% /usr/share/nginx/html
tmpfs                                                                                                                                                      2.9G   12K  2.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /proc/acpi
tmpfs                                                                                                                                                      1.5G     0  1.5G   0% /sys/firmware

When I exec lsblk in the rbdplugin pods, this is what I see:

jfathman@server-1:~$ kubectl get pods -n rook-ceph | grep -e NAME -e rbdplugin
NAME                                                 READY   STATUS      RESTARTS        AGE
csi-rbdplugin-fmm8l                                  3/3     Running     6 (21h ago)     2d
csi-rbdplugin-w2glv                                  3/3     Running     6 (21h ago)     2d
csi-rbdplugin-4cfpn                                  3/3     Running     6 (21h ago)     2d
csi-rbdplugin-provisioner-ddd46676-7827k             6/6     Running     13 (21h ago)    2d
csi-rbdplugin-provisioner-ddd46676-7r4sn             6/6     Running     12 (21h ago)    2d

jfathman@server-1:~$ kubectl exec -it -n rook-ceph csi-rbdplugin-fmm8l -c csi-rbdplugin -- lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  55.5M  1 loop 
loop1    7:1    0 110.5M  1 loop 
loop2    7:2    0   4.7M  1 loop 
loop3    7:3    0   4.4M  1 loop 
loop4    7:4    0  55.5M  1 loop 
loop5    7:5    0    20K  1 loop 
loop6    7:6    0 209.4M  1 loop 
loop7    7:7    0  99.4M  1 loop 
sda      8:0    0   128G  0 disk 
|-sda1   8:1    0   127G  0 part /var/snap/microk8s/common/var/lib/kubelet/plugins
|-sda2   8:2    0     1K  0 part 
`-sda5   8:5    0   975M  0 part [SWAP]
sdb      8:16   0    10G  0 disk 
sr0     11:0    1  1024M  0 rom  

jfathman@server-1:~$ kubectl exec -it -n rook-ceph csi-rbdplugin-w2glv -c csi-rbdplugin -- lsblk                   
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0    20K  1 loop 
loop1    7:1    0   4.4M  1 loop 
loop2    7:2    0  99.4M  1 loop 
loop3    7:3    0  55.5M  1 loop 
loop4    7:4    0  55.5M  1 loop 
loop5    7:5    0 209.4M  1 loop 
loop6    7:6    0 110.5M  1 loop 
loop7    7:7    0   4.7M  1 loop 
sda      8:0    0   128G  0 disk 
|-sda1   8:1    0   127G  0 part /var/snap/microk8s/common/var/lib/kubelet/plugi
|-sda2   8:2    0     1K  0 part 
`-sda5   8:5    0   975M  0 part [SWAP]
sdb      8:16   0    10G  0 disk 
sr0     11:0    1  1024M  0 rom  

jfathman@server-1:~$ kubectl exec -it -n rook-ceph csi-rbdplugin-4cfpn -c csi-rbdplugin -- lsblk                   
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 209.4M  1 loop 
loop1    7:1    0    20K  1 loop 
loop2    7:2    0   4.4M  1 loop 
loop3    7:3    0  55.5M  1 loop 
loop4    7:4    0   4.7M  1 loop 
loop5    7:5    0  99.4M  1 loop 
loop6    7:6    0 110.5M  1 loop 
loop7    7:7    0  55.5M  1 loop 
sda      8:0    0   128G  0 disk 
|-sda1   8:1    0   127G  0 part /var/snap/microk8s/common/var/lib/kubelet/plugins
|-sda2   8:2    0     1K  0 part 
`-sda5   8:5    0   975M  0 part [SWAP]
sdb      8:16   0    10G  0 disk 
sr0     11:0    1  1024M  0 rom  

There is no indication of rbd0 or luks-pvc above.

Ceph is using sdb on each cluster node, as confirmed here:

jfathman@server-1:~$ kubectl exec -it rook-ceph-tools-55587cd5f9-gvg7c -n rook-ceph -- ceph device ls
DEVICE                             HOST:DEV      DAEMONS  WEAR  LIFE EXPECTANCY
VBOX_HARDDISK_VB24a9dadb-48a8235d  server-2:sdb  osd.2                         
VBOX_HARDDISK_VB7ad0ab44-23b9257a  server-1:sdb  osd.1                         
VBOX_HARDDISK_VBb0b05fae-1fbd97c4  server-3:sdb  osd.0 

jfathman@server-1:~$ kubectl exec -it rook-ceph-tools-55587cd5f9-gvg7c -n rook-ceph -- ceph osd status
ID  HOST       USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE      
 0  server-3  7680k  9.99G      0        0       0        0   exists,up  
 1  server-1  7620k  9.99G      0        0       1      105   exists,up  
 2  server-2  7428k  9.99G      0        0       0        0   exists,up  

jfathman@server-1:~$ kubectl exec -it rook-ceph-tools-55587cd5f9-gvg7c -n rook-ceph -- ceph pg dump osds
OSD_STAT  USED     AVAIL   USED_RAW  TOTAL   HB_PEERS  PG_SUM  PRIMARY_PG_SUM
2         7.3 MiB  10 GiB   7.3 MiB  10 GiB     [0,1]      65              23
0         7.5 MiB  10 GiB   7.5 MiB  10 GiB     [1,2]      65              20
1         7.4 MiB  10 GiB   7.4 MiB  10 GiB     [0,2]      65              22
sum        22 MiB  30 GiB    22 MiB  30 GiB                                  
dumped osds

Should I try configuring for OSD encryption instead of RBD PVC encryption, or is there something else to try first?

Thank you.


Madhu-1 commented Jun 3, 2022

@jfathman your application is using a CephFS PVC, not an RBD PVC. With an encrypted RBD PVC, you would see output like the below in the application pod:

[🎩︎]mrajanna@fedora rbd $]kubectl exec -it csirbd-demo-pod -- sh
# df -h
Filesystem                                                                                      Size  Used Avail Use% Mounted on
overlay                                                                                          28G  6.0G   21G  23% /
tmpfs                                                                                            64M     0   64M   0% /dev
tmpfs                                                                                           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                                                                                        28G  6.0G   21G  23% /etc/hosts
shm                                                                                              64M     0   64M   0% /dev/shm
/dev/mapper/luks-rbd-0001-0009-rook-ceph-0000000000000002-a1ffc1e6-e2f9-11ec-b4a3-4aa9403aebce  977M  2.5M  958M   1% /var/lib/www/html
tmpfs                                                                                           3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                           2.0G     0  2.0G   0% /proc/acpi
tmpfs                                                                                           2.0G     0  2.0G   0% /proc/scsi
tmpfs                                                                                           2.0G     0  2.0G   0% /sys/firmware
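
For comparison, here is a minimal sketch of what an RBD-backed encrypted StorageClass looks like (the pool name and secret names below are assumptions to adapt to your cluster); note that it references a pool rather than an fsName, which is a CephFS parameter:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block-encrypted
provisioner: rook-ceph.rbd.csi.ceph.com   # <operator-namespace>.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool                       # assumed RBD pool name
  imageFormat: "2"
  imageFeatures: layering
  encrypted: "true"
  encryptionKMSID: user-secret-metadata
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete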


jfathman commented Jun 3, 2022

@Madhu-1 Thanks, that clarifies things.

https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Ceph-CSI/ceph-csi-drivers.md

> There are three CSI drivers integrated with Rook that will enable different scenarios:
>
>   • RBD: This block storage driver is optimized for RWO pod access where only one pod may access the storage.
>   • CephFS: This file storage driver allows for RWX with one or more pods accessing the same storage.
>   • NFS (experimental)

#7032

> Madhu-1 commented on Jan 21, 2021:
> cephcsi supports RBD PV encryption with KMS (vault) and custom kubernetes secrets.
> The same Encryption Feature for CephFS will be added in the future.

ceph/ceph-csi#1563

> Madhu-1 commented on Oct 27, 2021:
> CephFS already supports fscrypt based encryption?

> jtlayton commented on Oct 27, 2021:
> No, it's still a work in progress. We're hoping to have it ready to ship for Quincy release.

We are using CephFS because we need RWX with multiple pods on separate nodes accessing the same storage.

If we switch to RBD, do we lose RWX and only have RWO per the reference above?

Is there an existing rook ceph option that supports both RWX and encryption?


Madhu-1 commented Jun 6, 2022

> @Madhu-1 Thanks, that clarifies things.
>
> https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Ceph-CSI/ceph-csi-drivers.md
>
> There are three CSI drivers integrated with Rook that will enable different scenarios:
>
>   • RBD: This block storage driver is optimized for RWO pod access where only one pod may access the storage.
>   • CephFS: This file storage driver allows for RWX with one or more pods accessing the same storage.
>   • NFS (experimental)
>
> #7032
>
> Madhu-1 commented on Jan 21, 2021:
> cephcsi supports RBD PV encryption with KMS (vault) and custom kubernetes secrets.
> The same Encryption Feature for CephFS will be added in the future.
>
> ceph/ceph-csi#1563
>
> Madhu-1 commented on Oct 27, 2021:
> CephFS already supports fscrypt based encryption?
>
> jtlayton commented on Oct 27, 2021:
> No, it's still a work in progress. We're hoping to have it ready to ship for Quincy release.
>
> We are using CephFS because we need RWX with multiple pods on separate nodes accessing the same storage.

You might need to wait for ceph/ceph-csi#3158 (cephcsi v3.7.0 release)

> If we switch to RBD, do we lose RWX and only have RWO per the reference above?

RBD supports RWX PVCs only in Block volume mode (raw block device), not in Filesystem mode.
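
For illustration, a minimal sketch of a Block-mode RWX PVC (the PVC name, size, and StorageClass name are assumptions); a pod consumes it through volumeDevices rather than volumeMounts, and the applications are responsible for coordinating access to the shared raw device:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-block-rwx-pvc              # assumed name
spec:
  accessModes:
    - ReadWriteMany                    # RWX is possible because volumeMode is Block
  volumeMode: Block                    # raw block device; no filesystem is mounted by kubelet
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block    # assumed RBD StorageClass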

> Is there an existing rook ceph option that supports both RWX and encryption?

The best option is to use OSD encryption, since cephcsi doesn't support CephFS encryption yet.
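
As a rough sketch, OSD encryption is enabled in the CephCluster CR storage section (the deviceFilter below is an assumption based on the sdb devices shown earlier, and existing OSDs have to be removed and re-provisioned for it to take effect):

spec:
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: sdb              # assumed: matches the sdb OSD devices listed above
    config:
      encryptedDevice: "true"      # provision OSDs on dm-crypt (LUKS) devices

Once the OSDs are re-provisioned, lsblk on each node should show a crypt device beneath sdb.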


jfathman commented Jun 6, 2022

@Madhu-1 Thanks for your assistance. Very helpful.

jfathman closed this as completed Jun 6, 2022