ceph-common: purge ceph.conf file
Since #461 we have had the ability to override Ceph's default options. Previously we had to add a new line to the template and then another variable as well, so doing a PR for a single option was a real pain. As a result, we ended up with tons of options to maintain across all the Ceph versions, yet another painful thing to do.
This commit removes all of these Ceph options so they are handled by Ceph directly. If you want to set an option, feel free to use the `ceph_conf_overrides` variable in your `group_vars/all`.
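
As a sketch, here is what carrying over a few of the removed defaults could look like in `group_vars/all` (the section grouping and values shown are only an illustration taken from the old defaults, not a recommendation):

```
# sketch only: sections and values mirror a few of the removed defaults
ceph_conf_overrides:
  global:
    mon osd down out interval: 600
  osd:
    osd mkfs type: xfs
    osd mount options xfs: noatime,largeio,inode64,swalloc
```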

Risks: for those who have been managing their Ceph cluster with ceph-ansible, this is not a trivial change, as it will trigger a change in your `ceph.conf` and then restart all your Ceph services. Moreover, if you applied specific tweaks of your own, you should update the `ceph_conf_overrides` variable to reflect those changes before running Ansible.

To avoid the service restarts you need to know a bit of Ansible, but the general idea is to run Ansible against a dummy host to generate the `ceph.conf`, then scp this file to all your Ceph hosts, and you should be good.
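
A minimal sketch of that last step, pushing the pre-generated file with an ad-hoc Ansible play instead of scp (the `ceph` host group and the `/tmp/ceph.conf` path are assumptions):

```
# hypothetical one-off play: push a pre-generated ceph.conf to every node
# without going through the full playbook and its restart handlers
- hosts: ceph                      # assumption: adjust to your inventory
  become: true
  tasks:
    - name: copy the pre-generated ceph.conf
      copy:
        src: /tmp/ceph.conf        # file produced on the dummy host
        dest: /etc/ceph/ceph.conf
        owner: root
        group: root
        mode: "0644"
```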

Closes: #693

Signed-off-by: Sébastien Han <[email protected]>
(cherry picked from commit 47860a8)
leseb authored and andrewschoen committed May 10, 2016
1 parent 3f67741 commit 197d827
Showing 4 changed files with 16 additions and 245 deletions.
15 changes: 15 additions & 0 deletions README.md
@@ -51,6 +51,21 @@ ceph_conf_overrides:
**Note:** we will no longer accept pull requests that modify the ceph.conf template unless it helps the deployment. For simple configuration tweaks
please use the `ceph_conf_overrides` variable.

## Special notes

If you are deploying a Ceph version older than Jewel, it is highly recommended that you apply the following settings to the `ceph_conf_overrides` variable in your `group_vars/all` file:

```
ceph_conf_overrides:
osd:
osd recovery max active: 5
osd max backfills: 2
osd recovery op priority: 2
osd recovery threads: 1
```


## Setup with Vagrant using virtualbox provider

* Create vagrant_variables.yml
67 changes: 0 additions & 67 deletions group_vars/all.sample
@@ -166,21 +166,7 @@ dummy:
#generate_fsid: true

#cephx: true
#cephx_require_signatures: true # Kernel RBD does NOT support signatures for Kernels < 3.18!
#cephx_cluster_require_signatures: true
#cephx_service_require_signatures: false
#max_open_files: 131072
#disable_in_memory_logs: true # set this to false while enabling the options below

# Debug logs
#enable_debug_global: false
#debug_global_level: 20
#enable_debug_mon: false
#debug_mon_level: 20
#enable_debug_osd: false
#debug_osd_level: 20
#enable_debug_mds: false
#debug_mds_level: 20

## Client options
#
@@ -223,9 +209,6 @@ dummy:
#rbd_client_log_path: /var/log/ceph
#rbd_client_log_file: "{{ rbd_client_log_path }}/qemu-guest-$pid.log" # must be writable by QEMU and allowed by SELinux or AppArmor
#rbd_client_admin_socket_path: /var/run/ceph # must be writable by QEMU and allowed by SELinux or AppArmor
#rbd_default_features: 3
#rbd_default_map_options: rw
#rbd_default_format: 2

## Monitor options
#
@@ -234,36 +217,15 @@
#monitor_interface: interface
#monitor_address: 0.0.0.0
#mon_use_fqdn: false # if set to true, the MON name used will be the fqdn in the ceph.conf
#mon_osd_down_out_interval: 600
#mon_osd_min_down_reporters: 7 # number of OSDs per host + 1
#mon_clock_drift_allowed: .15
#mon_clock_drift_warn_backoff: 30
#mon_osd_full_ratio: .95
#mon_osd_nearfull_ratio: .85
#mon_osd_report_timeout: 300
#mon_pg_warn_max_per_osd: 0 # disable complains about low pgs numbers per osd
#mon_osd_allow_primary_affinity: "true"
#mon_pg_warn_max_object_skew: 10 # set to 20 or higher to disable complaints about number of PGs being too low if some pools have very few objects bringing down the average number of objects per pool. This happens when running RadosGW. Ceph default is 10

## OSD options
#
#journal_size: 0
#pool_default_pg_num: 128
#pool_default_pgp_num: 128
#pool_default_size: 3
#pool_default_min_size: 0 # 0 means no specific default; ceph will use (pool_default_size)-(pool_default_size/2) so 2 if pool_default_size=3
#public_network: 0.0.0.0/0
#cluster_network: "{{ public_network }}"
#osd_mkfs_type: xfs
#osd_mkfs_options_xfs: -f -i size=2048
#osd_mount_options_xfs: noatime,largeio,inode64,swalloc
#osd_mon_heartbeat_interval: 30

# CRUSH
#pool_default_crush_rule: 0
#osd_crush_update_on_start: "true"

# Object backend
#osd_objectstore: filestore

# xattrs. by default, 'filestore xattr use omap' is set to 'true' if
@@ -273,33 +235,6 @@
# type.
#filestore_xattr_use_omap: null

# Performance tuning
#filestore_merge_threshold: 40
#filestore_split_multiple: 8
#osd_op_threads: 8
#filestore_op_threads: 8
#filestore_max_sync_interval: 5
#osd_max_scrubs: 1
# The OSD scrub window can be configured starting hammer only!
# Default settings will define a 24h window for the scrubbing operation
# The window is predefined from 0am midnight to midnight the next day.
#osd_scrub_begin_hour: 0
#osd_scrub_end_hour: 24

# Recovery tuning
#osd_recovery_max_active: 5
#osd_max_backfills: 2
#osd_recovery_op_priority: 2
#osd_recovery_max_chunk: 1048576
#osd_recovery_threads: 1

# Deep scrub
#osd_scrub_sleep: .1
#osd_disk_thread_ioprio_class: idle
#osd_disk_thread_ioprio_priority: 0
#osd_scrub_chunk_max: 5
#osd_deep_scrub_stride: 1048576

## MDS options
#
#mds_use_fqdn: false # if set to true, the MDS name used will be the fqdn in the ceph.conf
@@ -330,8 +265,6 @@ dummy:
#restapi_interface: "{{ monitor_interface }}"
#restapi_address: "{{ monitor_address }}"
#restapi_port: 5000
#restapi_base_url: /api/v0.1
#restapi_log_level: warning # available level are: critical, error, warning, info, debug

## Testing mode
# enable this mode _only_ when you have a single node
67 changes: 0 additions & 67 deletions roles/ceph-common/defaults/main.yml
@@ -158,21 +158,7 @@ fsid: "{{ cluster_uuid.stdout }}"
generate_fsid: true

cephx: true
cephx_require_signatures: true # Kernel RBD does NOT support signatures for Kernels < 3.18!
cephx_cluster_require_signatures: true
cephx_service_require_signatures: false
max_open_files: 131072
disable_in_memory_logs: true # set this to false while enabling the options below

# Debug logs
enable_debug_global: false
debug_global_level: 20
enable_debug_mon: false
debug_mon_level: 20
enable_debug_osd: false
debug_osd_level: 20
enable_debug_mds: false
debug_mds_level: 20

## Client options
#
@@ -215,9 +201,6 @@ rbd_client_directory_mode: null
rbd_client_log_path: /var/log/ceph
rbd_client_log_file: "{{ rbd_client_log_path }}/qemu-guest-$pid.log" # must be writable by QEMU and allowed by SELinux or AppArmor
rbd_client_admin_socket_path: /var/run/ceph # must be writable by QEMU and allowed by SELinux or AppArmor
rbd_default_features: 3
rbd_default_map_options: rw
rbd_default_format: 2

## Monitor options
#
@@ -226,36 +209,15 @@ rbd_default_format: 2
monitor_interface: interface
monitor_address: 0.0.0.0
mon_use_fqdn: false # if set to true, the MON name used will be the fqdn in the ceph.conf
mon_osd_down_out_interval: 600
mon_osd_min_down_reporters: 7 # number of OSDs per host + 1
mon_clock_drift_allowed: .15
mon_clock_drift_warn_backoff: 30
mon_osd_full_ratio: .95
mon_osd_nearfull_ratio: .85
mon_osd_report_timeout: 300
mon_pg_warn_max_per_osd: 0 # disable complains about low pgs numbers per osd
mon_osd_allow_primary_affinity: "true"
mon_pg_warn_max_object_skew: 10 # set to 20 or higher to disable complaints about number of PGs being too low if some pools have very few objects bringing down the average number of objects per pool. This happens when running RadosGW. Ceph default is 10

## OSD options
#
journal_size: 0
pool_default_pg_num: 128
pool_default_pgp_num: 128
pool_default_size: 3
pool_default_min_size: 0 # 0 means no specific default; ceph will use (pool_default_size)-(pool_default_size/2) so 2 if pool_default_size=3
public_network: 0.0.0.0/0
cluster_network: "{{ public_network }}"
osd_mkfs_type: xfs
osd_mkfs_options_xfs: -f -i size=2048
osd_mount_options_xfs: noatime,largeio,inode64,swalloc
osd_mon_heartbeat_interval: 30

# CRUSH
pool_default_crush_rule: 0
osd_crush_update_on_start: "true"

# Object backend
osd_objectstore: filestore

# xattrs. by default, 'filestore xattr use omap' is set to 'true' if
@@ -265,33 +227,6 @@ osd_objectstore: filestore
# type.
filestore_xattr_use_omap: null

# Performance tuning
filestore_merge_threshold: 40
filestore_split_multiple: 8
osd_op_threads: 8
filestore_op_threads: 8
filestore_max_sync_interval: 5
osd_max_scrubs: 1
# The OSD scrub window can be configured starting hammer only!
# Default settings will define a 24h window for the scrubbing operation
# The window is predefined from 0am midnight to midnight the next day.
osd_scrub_begin_hour: 0
osd_scrub_end_hour: 24

# Recovery tuning
osd_recovery_max_active: 5
osd_max_backfills: 2
osd_recovery_op_priority: 2
osd_recovery_max_chunk: 1048576
osd_recovery_threads: 1

# Deep scrub
osd_scrub_sleep: .1
osd_disk_thread_ioprio_class: idle
osd_disk_thread_ioprio_priority: 0
osd_scrub_chunk_max: 5
osd_deep_scrub_stride: 1048576

## MDS options
#
mds_use_fqdn: false # if set to true, the MDS name used will be the fqdn in the ceph.conf
@@ -321,8 +256,6 @@ email_address: [email protected]
restapi_interface: "{{ monitor_interface }}"
restapi_address: "{{ monitor_address }}"
restapi_port: 5000
restapi_base_url: /api/v0.1
restapi_log_level: warning # available level are: critical, error, warning, info, debug

## Testing mode
# enable this mode _only_ when you have a single node
