
Configuration

The inventory must define the following host groups for this step to run completely:

  • clients: Set of hosts that will run ceph-client. These hosts will access the storage cluster.

  • mons: Set of hosts that will run ceph-mon. These hosts maintain a map of the state of the cluster.

  • osds: Set of hosts that will run ceph-osd. These hosts interact with the logical disk to store the data.

More details are available in the Ceph documentation.
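As an illustration, a minimal inventory declaring these groups might look like the sketch below (the hostnames are placeholders, not values from this deployment):

    [mons]
    mon1.example.com

    [osds]
    osd1.example.com
    osd2.example.com
    osd3.example.com

    [clients]
    client1.example.com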

The inventory must also define these variables to run the step:

  • ceph_cluster_network: Address block (CIDR) of the cluster (replication) network

  • ceph_public_network: Address block (CIDR) of the public network (i.e. the one exposed to the world)

  • ceph_osd_disk: Device used to store the data (only for ceph-osd hosts); it is on this disk that Ceph will build an RBD

  • osd_pool_default_min_size: Minimum number of replicas that must be available for the cluster to keep serving I/O (best: ceil(osd_pool_default_size / 2.0))

  • osd_pool_default_size: Default number of replicas for objects in a pool (in a small cluster, often equal to the number of OSDs)
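For example, with three replicas of each object, ceil(3 / 2.0) = 2, so the cluster keeps serving I/O as long as two replicas remain available. A sketch of the corresponding variables, assuming example networks and an example device path:

    # group_vars/all.yml (example values only)
    ceph_public_network: 10.0.0.0/24        # network exposed to the world
    ceph_cluster_network: 192.168.0.0/24    # network used for replication
    osd_pool_default_size: 3                # three replicas per object
    osd_pool_default_min_size: 2            # ceil(3 / 2.0)

    # group_vars/osds.yml (example value only)
    ceph_osd_disk: /dev/sdb                 # disk on which Ceph builds the RBD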

Ceph provides Ansible roles (the ceph-ansible project) to configure the software; see its documentation for details.

Warnings

If this step fails, you must restart from the previous step. Use an LVM snapshot to do this, as sketched below.
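A minimal sketch of this rollback, assuming the system volume is /dev/vg0/root (the volume group, volume names, and snapshot size are placeholders):

    # Before running the step: create a snapshot of the system volume
    lvcreate --snapshot --size 5G --name pre_ceph /dev/vg0/root

    # If the step fails: merge the snapshot back into its origin volume
    # (for an in-use volume the merge takes effect on the next activation,
    # typically after a reboot)
    lvconvert --merge /dev/vg0/pre_ceph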

At the end of this step, make sure that:

  • there is an rbd pool; check with ceph osd pool ls.
  • there is a ceph pool; check with virsh pool-list.
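For instance (the pool names come from this step; the exact output varies by version):

    ceph osd pool ls    # the output should include a pool named "rbd"
    virsh pool-list     # the output should include an active pool named "ceph"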

RADOS Block Devices

During this step, Ceph builds a RADOS Block Device (RBD) on the disk given by ceph_osd_disk. A storage entry (a pool) is automatically generated for libvirt. Once the service is started, the VMs can use the device.

This disk is accessed through the librbd library provided by Ceph.
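For reference, a libvirt RBD storage pool definition generally has the shape sketched below; the monitor hostname and secret UUID are placeholders for illustration, not values produced by this step:

    <pool type="rbd">
      <name>ceph</name>
      <source>
        <!-- name of the Ceph pool backing this libvirt pool -->
        <name>rbd</name>
        <host name="mon1.example.com" port="6789"/>
        <auth type="ceph" username="libvirt">
          <!-- placeholder UUID of the libvirt secret holding the cephx key -->
          <secret uuid="00000000-0000-0000-0000-000000000000"/>
        </auth>
      </source>
    </pool>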

More details are available in the Ceph RBD documentation.
