Direct attached storage disk behavior. #155

Open
Simon-Wood1980 opened this issue Nov 28, 2023 · 0 comments
We are experiencing some odd behavior when using some of the new direct attached storage VMs, which appears to conflict with the documentation. The VMs we are testing with are the i4i and i3en instance families, and we were expecting to simply replace the ephemeral disk with the instance storage by applying the use_instance_storage flag.

We performed the following four tests, and only one of the configurations appears to boot. The output below is from an i3en.2xlarge, which has two drives attached, but we have also performed the same tests on an i4i with a single disk.

It appears that the flag encrypted: null must be set, or an error for an incorrect mapping is thrown. BOSH always appears to create an EBS volume mounted as /dev/xvda, and we thought boot was failing because the 5 GB drive was not large enough to load the BOSH agent; that doesn't explain why our Test-2 fails, though.

We are testing with the bosh jumpbox release and cannot SSH to any of the VMs other than the one that booted to investigate further.

BOSH Director version 280.0.9; director stemcell ubuntu-22.04.1/1.301.

--- Test-1
- cloud_properties:
    instance_type: i3en.2xlarge
    raw_instance_storage: true
/dev/xvda	5GB
/dev/sdb	10GB
Timeout on boot.

--- Test-2
- cloud_properties:
    instance_type: i3en.2xlarge
    raw_instance_storage: true
    root_disk:
      size: 30360
      type: gp3
/dev/xvda	30GB
/dev/sdb  10GB
Timeout on boot.

--- Test-3
- cloud_properties:
    ephemeral_disk:
      encrypted: null
      use_instance_storage: true
    instance_type: i3en.2xlarge
    root_disk:
      size: 30360
      type: gp3
/dev/xvda	30GB
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme2n1     259:0    0  2.3T  0 disk 
nvme1n1     259:1    0  2.3T  0 disk 
nvme0n1     259:2    0   30G  0 disk 
├─nvme0n1p1 259:3    0  4.8G  0 part /home
│                                    /
├─nvme0n1p2 259:4    0 12.6G  0 part [SWAP]
└─nvme0n1p3 259:5    0 12.6G  0 part /var/tmp
                                     /tmp
                                     /opt
                                     /var/opt
                                     /var/log
                                     /var/vcap/data

--- Test-4
- cloud_properties:
    ephemeral_disk:
      encrypted: null
      use_instance_storage: true
    instance_type: i3en.2xlarge
  name: i3en.2xlarge
/dev/xvda	5GB
Timeout on boot.

We have written an os-conf script to RAID and mount the disks on Test-3, which is giving some very good performance results, but we believe the cloud-config we are using may be incorrect, or the CPI might not be working correctly.
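For reference, the RAID-and-mount step of that workaround looks roughly like the sketch below. The device names come from the lsblk output in Test-3; the md target and mount point are illustrative assumptions, not necessarily what our real script uses, and the sketch prints the command plan rather than executing it, since the real commands need root and the actual NVMe devices.

```shell
#!/usr/bin/env bash
# Sketch of the raid-and-mount commands our os-conf script performs on
# Test-3. Device names are from the i3en.2xlarge lsblk output above;
# /dev/md0 and the mount point are assumptions for illustration.
set -euo pipefail

DEVICES=(/dev/nvme1n1 /dev/nvme2n1)   # the two 2.3T instance-store disks
MD=/dev/md0
MOUNTPOINT=/var/vcap/instance_store

# Emit the command plan instead of running it here: the real commands
# require root and the physical devices.
plan() {
  echo "mdadm --create $MD --level=0 --raid-devices=${#DEVICES[@]} ${DEVICES[*]}"
  echo "mkfs.ext4 -F $MD"
  echo "mkdir -p $MOUNTPOINT"
  echo "mount $MD $MOUNTPOINT"
}

plan
```

Striping the disks with RAID 0 is what gives the good throughput numbers, at the cost of losing the whole array if any single instance-store disk fails.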
