migration, crush map and alternate roots #1091

Closed
swiftgist opened this issue Apr 17, 2018 · 1 comment
@swiftgist
Contributor

Description of Issue/Question

Migration from filestore to bluestore (or between any other configurations) does not currently address OSDs assigned to alternate roots in the crushmap. The crushmap has many features, but we are specifically interested in creating or recreating an OSD under an intended alternate root; the scope of this feature is deliberately limited.

Currently, we track the configuration of a device in the Salt pillar, while Ceph assigns the ID to an OSD. This number has always been unimportant with regard to a fresh deployment or a migration. The most common use case seems to be a server with a mixture of hardware. For example, the admin purchases several servers that have SATA, SSD and NVMe devices, and the environment requires separate tiers for archival and performance. In the end, two hardware profiles are created to represent each kind of server: one combines SATA drives with SSD for wal/db, the other combines SSD drives with NVMe for wal/db.

Since the two configurations have different performance characteristics, Ceph must not share data between them. For DeepSea, the interactive/iterative part of running proposal.populate during Stage 1 seems the most natural place to set an alternate root. Much like the format (i.e. filestore/bluestore) of an OSD is stored in the Salt pillar, the desired root could be as well.
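For concreteness, the kind of CRUSH layout this would drive can be set up with standard Ceph commands. A minimal sketch, where the bucket, host and rule names (archive, performance, data1) are illustrative and not anything DeepSea generates:

```sh
# Create two alternate roots matching the two hardware profiles:
ceph osd crush add-bucket archive root
ceph osd crush add-bucket performance root

# Move a host bucket (and the OSDs beneath it) under one of the roots:
ceph osd crush move data1 root=archive

# Give each root its own replicated rule so pools never span both tiers:
ceph osd crush rule create-replicated archive_rule archive host
ceph osd crush rule create-replicated performance_rule performance host
```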

Since the crushmap can be edited and roots removed or renamed, we would need some validations. I think failing gracefully with decent error messages is sufficient when the configurations are out of sync; changes to the crushmap are generally infrequent.
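As a rough sketch of that validation, assuming the root name comes from the pillar and using jq against the crushmap dump (neither of which is existing DeepSea code):

```sh
#!/bin/sh
# Fail early with a clear message when the configured root is missing
# from the live crushmap. The root name is a hypothetical pillar value.
root="archive"
if ! ceph osd crush dump | jq -e --arg r "$root" \
      '.buckets[] | select(.type_name == "root" and .name == $r)' >/dev/null
then
    echo "ERROR: root '$root' from the pillar does not exist in the crushmap" >&2
    exit 1
fi
```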


Ceph provides an alternate strategy of calling ceph osd destroy rather than the traditional triple (ceph osd crush remove, ceph auth del, ceph osd rm). The result is that the OSD ID remains in the crushmap. Unfortunately, this means the ID suddenly becomes something that must be tracked in relation to the device name or the characteristics of the drive. In a migration scenario, DeepSea would need to remember which ID went with which device. This feels error-prone except in the simplest situations. It also implies that DeepSea must own all migrations, or the admin would need to inform DeepSea of any operations done manually.
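For reference, the two removal flows being contrasted look roughly like this; osd.7 is an illustrative ID:

```sh
# Traditional triple: the ID is fully released and may be handed to
# whatever OSD is created next.
ceph osd crush remove osd.7
ceph auth del osd.7
ceph osd rm 7

# destroy-based flow: the ID and its crushmap entry are retained in a
# "destroyed" state so a replacement OSD can be recreated with that ID.
ceph osd destroy 7 --yes-i-really-mean-it
```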

We do not have this ID dependency currently, and I prefer the direct correlation between the configuration in the Salt pillar and the creation/recreation of an OSD.

@jschmid1
Contributor

implemented with #1216
