
The inventory must define these host groups to run:

  • cluster_machines: Set of hosts in the cluster
  • hypervisors: Set of hosts that launch virtual machines
  • standalone_machine: Declares a cluster composed of a single host (replaces cluster_machines)
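A minimal YAML inventory sketching these groups could look as follows (the host names are illustrative, not part of the project):

```yaml
# Hypothetical inventory: three clustered hosts, all acting as hypervisors
all:
  children:
    cluster_machines:
      hosts:
        seapath1:
        seapath2:
        seapath3:
    hypervisors:
      hosts:
        seapath1:
        seapath2:
        seapath3:
```

For a single-machine setup, replace the cluster_machines group with a standalone_machine group containing the one host.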

Prerequisite

This step is useful only for hosts installed with a Debian installer built with build_debian_iso.

The inventory must define these variables to run the step:

  • apply_network_config: Boolean to apply the network configuration
  • admin_ip_addr: IP address used for SNMP
  • cpumachinesnort: Range of allowed CPUs for non-RT machines
  • cpumachines: Range of allowed CPUs for machines (RT and non-RT)
  • cpumachinesrt: Range of allowed CPUs for RT machines
  • cpuovs: Range of allowed CPUs for Open vSwitch
  • cpusystem: Range of allowed CPUs for the system
  • cpuuser: Range of allowed CPUs for the user
  • irqmask: Sets the IRQBALANCE_BANNED_CPUS environment variable; see the irqbalance manual
  • logstash_server_ip: IP address for the logstash-seapath alias in /etc/hosts
  • main_disk: Main disk device, whose temperature is monitored
  • workqueuemask: The negation of irqmask (= ~irqmask)
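As an illustration, these variables could be set in a host_vars file as below; the CPU ranges and addresses are example values for a hypothetical 8-core machine, not recommended settings:

```yaml
apply_network_config: true
admin_ip_addr: "192.168.1.10"     # SNMP address (example)
cpusystem: "0-1"                  # system tasks on CPUs 0-1
cpuuser: "0-1"                    # user tasks on CPUs 0-1
cpuovs: "2"                       # Open vSwitch on CPU 2
cpumachines: "3-7"                # all machines (RT and non-RT)
cpumachinesrt: "4-7"              # RT machines only
cpumachinesnort: "3"              # non-RT machines only
irqmask: "fc"                     # hex mask banning CPUs 2-7 from irqbalance
workqueuemask: "03"               # ~irqmask: workqueues kept on CPUs 0-1
logstash_server_ip: "192.168.1.100"
main_disk: "/dev/sda"
```

Note that irqmask and workqueuemask are complementary hexadecimal CPU masks: 0xfc = 0b11111100 bans CPUs 2-7, and its negation 0x03 restricts workqueues to CPUs 0-1.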

Network

The inventory must define these variables to run the step:

  • br_rstp_priority: RSTP priority of the bridge (TODO: must be a multiple of 4096)
  • cluster_ip_addr: IP address of the team0 interface
  • gateway_addr: IP address of a gateway; it does not need to be reachable
  • ip_addr: IP address used to communicate with the host
  • network_interface: Network interface used to communicate with the host
  • ntp_primary_server: Address of an NTP server; the first server to query
  • ntp_secondary_server: Address of an NTP server; queried if the primary fails
  • syslog_server_ip: Address of a syslog server
  • team0_0: Network interface connected to the team0 bridge
  • team0_1: Other network interface connected to the team0 bridge
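A hypothetical set of network values (interface names and addresses are examples only):

```yaml
network_interface: eth0           # management interface
ip_addr: "192.168.1.10"
gateway_addr: "192.168.1.1"
cluster_ip_addr: "10.0.0.10"      # address carried by team0
team0_0: eth1                     # first member of the team0 bridge
team0_1: eth2                     # second member of the team0 bridge
br_rstp_priority: 8192            # a multiple of 4096
ntp_primary_server: "192.168.1.1"
ntp_secondary_server: "192.168.1.2"
syslog_server_ip: "192.168.1.100"
```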

Shared storage (via ceph)

The inventory may define these host groups to run the step completely:

  • clients: Set of hosts that must be ceph-client. These hosts will access the storage cluster
  • mons: Set of hosts that must be ceph-mon. These hosts maintain a map of the cluster state
  • osds: Set of hosts that must be ceph-osd. These hosts interact with the logical disks to store the data

More details are available in the Ceph documentation.
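These Ceph groups can be declared in the inventory like the cluster groups above; for instance, with three monitors and two OSD hosts (host names are illustrative):

```yaml
all:
  children:
    mons:
      hosts:
        seapath1:
        seapath2:
        seapath3:
    osds:
      hosts:
        seapath1:
        seapath2:
    clients:
      hosts:
        seapath1:
        seapath2:
        seapath3:
```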

The inventory must define these variables to run the step:

  • ceph_cluster_network: Address block of the cluster network
  • ceph_public_network: Address block of the public network (i.e. the Internet)
  • ceph_osd_disk: Device used to store the data (only for ceph-osd hosts)
  • osd_pool_default_min_size: Minimum number of available OSDs for the cluster to keep serving data (best: ceil(osd_pool_default_size / 2.0))
  • osd_pool_default_size: Number of OSDs in the cluster
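For example, with two OSDs these variables could be set as follows (the networks and device path are illustrative; ceph_osd_disk applies only to the ceph-osd hosts):

```yaml
ceph_public_network: "192.168.1.0/24"
ceph_cluster_network: "10.0.0.0/24"
ceph_osd_disk: "/dev/sdb"        # data device on each ceph-osd host
osd_pool_default_size: 2
osd_pool_default_min_size: 1     # ceil(2 / 2.0) = 1
```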

Ceph provides Ansible roles to configure the software; see the ceph-ansible documentation.


High availability (via corosync and pacemaker)
