...

  • configure_firewall: Boolean to configure the firewall (true by default)
  • ceph_origin: Origin of the Ceph installation files (must be set to distro). SEAPATH installs Ceph with an installer (see the installation section)
  • ceph_osd_disk: Device used to store data (only for ceph-osd hosts); Ceph will build an RBD on this disk. For the CI to succeed, the path should be under /dev/disk/by-path
  • cluster_network: Address block of the cluster network
  • dashboard_enabled: Boolean to enable the dashboard (must be set to false)
  • devices: List of devices to use for the shared storage. All specified devices will be used entirely
  • lvm_volumes: List of logical volumes to use for the shared storage. Use it instead of devices when only a part of a device should be used
  • monitor_address: Address the host will bind to
  • ntp_service_enabled: Boolean to enable the NTP service (must be set to false). SEAPATH installs a specific NTP client and configures it
  • osd_pool_default_min_size: Minimal number of available OSDs to keep the cluster operational (best: ceil(osd_pool_default_size / 2.0))
  • osd_pool_default_size: Number of OSDs in the cluster
  • public_network: Address block of the public network
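The variables above could be grouped as in the following sketch of a ceph-ansible group_vars file. All concrete values (addresses, networks, the disk path) are illustrative placeholders, not values from this documentation:

```yaml
# Illustrative ceph-ansible group_vars (all values are example placeholders)
configure_firewall: true
ceph_origin: distro
dashboard_enabled: false
ntp_service_enabled: false
monitor_address: 192.168.1.10          # address this host binds to
public_network: 192.168.1.0/24         # example public network
cluster_network: 192.168.2.0/24        # example cluster network
osd_pool_default_size: 3               # e.g. 3 OSDs in the cluster
osd_pool_default_min_size: 2           # ceil(3 / 2.0) = 2
ceph_osd_disk: /dev/disk/by-path/pci-0000:00:1f.2-ata-2   # hypothetical path
```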

Volumes specifications

If lvm_volumes is defined, the devices variable is ignored.

When a volume is defined for the shared storage, several fields must be set for seapath-ansible and ceph-ansible.
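As a sketch, an lvm_volumes entry might look like the following; the logical volume and volume group names are hypothetical, and the field names are assumed from ceph-ansible's lvm_volumes mechanism:

```yaml
# Illustrative lvm_volumes entry (LV and VG names are hypothetical)
lvm_volumes:
  - data: lv_data        # logical volume holding the OSD data
    data_vg: vg_ceph     # volume group that contains lv_data
```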

...

  • mon_osd_min_down_reporters: (must be set to 1)
  • osd_crush_chooseleaf_type: (must be set to 1)
  • osd_pool_default_min_size: Minimal number of available OSDs to keep the cluster operational (best: ceil(osd_pool_default_size / 2.0))
  • osd_pool_default_pg_num: (must be set to 128)
  • osd_pool_default_pgp_num: (must be set to 128)
  • osd_pool_default_size: Number of OSDs in the cluster
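These settings could be carried as in the sketch below, which assumes ceph-ansible's ceph_conf_overrides mechanism; the osd_pool_default_size value of 3 (and min_size of 2) is only an example:

```yaml
# Illustrative override block (structure assumed from ceph_conf_overrides)
ceph_conf_overrides:
  global:
    mon_osd_min_down_reporters: 1
    osd_crush_chooseleaf_type: 1
    osd_pool_default_pg_num: 128
    osd_pool_default_pgp_num: 128
    osd_pool_default_size: 3        # example: 3 OSDs in the cluster
    osd_pool_default_min_size: 2    # ceil(3 / 2.0) = 2
```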

...


Override mon configuration

...

Ceph provides Ansible roles to configure the software; you can read the documentation here.

Warnings


Note

If this step fails, you must restart from the previous step. Use LVM snapshots to do this.

At the end of this step, make sure that:

  • an rbd pool exists; check with this command: ceph osd pool ls.
  • a ceph libvirt storage pool exists; check with this command: virsh pool-list.
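The two checks above can be scripted, for example with a small helper like the one sketched below; check_pool is a hypothetical function, not part of SEAPATH, and the commented usage lines assume you run them on a cluster node:

```shell
#!/bin/sh
# check_pool: succeed if NAME appears as a whole line on stdin.
# A hypothetical helper: pipe a pool listing into it.
check_pool() {
    grep -qx "$1"
}

# Usage on a cluster node (commands from the checks above):
#   ceph osd pool ls        | check_pool rbd  || echo "missing rbd pool"
#   virsh pool-list --name  | check_pool ceph || echo "missing ceph libvirt pool"
```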

RADOS Block Devices

During this step, Ceph will build a RADOS Block Device (RBD) from ceph_osd_disk. A storage entry (a pool) will be automatically generated for libvirt. Once the service is started, the hypervisor can be used to launch VMs.
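The generated libvirt storage entry could resemble the sketch below; the pool name, Ceph pool name, and monitor address are hypothetical, and authentication elements are omitted:

```xml
<!-- Illustrative libvirt RBD storage pool (names and address are hypothetical) -->
<pool type='rbd'>
  <name>ceph</name>
  <source>
    <name>rbd</name>
    <host name='192.168.1.10' port='6789'/>
  </source>
</pool>
```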

...

This disk will be accessed through the librbd library provided by Ceph.
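For illustration, a VM disk backed by librbd is declared in libvirt along these lines; the image name, pool name, and monitor address below are hypothetical:

```xml
<!-- Illustrative libvirt disk backed by librbd
     (pool/image names and monitor address are hypothetical) -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/vm1-disk'>
    <host name='192.168.1.10' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```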

[Diagram: communication-with-rbd]

More details here.