
This page describes the configuration and deployment of virtual machines on SEAPATH.

SEAPATH uses QEMU and Libvirt to deploy virtual machines. With this technology, two elements are needed:

  • A VM disk image. This file represents the disk of the virtual machine; it contains the kernel as well as the filesystem.
  • A domain XML file. This file describes all the devices needed by the VM: memory, CPU information, network interfaces, and so on.
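As a rough illustration of a domain XML skeleton, a minimal file might look like the following (the name, memory size, and machine type are placeholders, not SEAPATH defaults):

```xml
<domain type="kvm">
  <name>guest0</name>
  <memory unit="MiB">2048</memory>
  <vcpu>2</vcpu>
  <os>
    <!-- UEFI or BIOS firmware settings also go in this section -->
    <type arch="x86_64" machine="q35">hvm</type>
  </os>
  <devices>
    <!-- disks, network interfaces, and other devices are declared here -->
  </devices>
</domain>
```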

Build disk image

SEAPATH can host custom virtual machines. If you already have an image, skip to the next section.

However, if you need a virtual machine for testing or for deploying your application, you can use the SEAPATH default disk image file.
Detailed information on how to build a disk image is available here for a Yocto VM, or here for a Debian VM.

Configure the virtual machine with domain XML file

If you use a custom VM, you likely already have this XML file. We still advise reading the information below for recommendations about VM configuration on SEAPATH.

In order to create your own XML for your virtual machines, here are two useful links:

A preconfigured XML using an Ansible template is also provided. It does not offer the flexibility of a fully handwritten XML, but is useful for testing or prototyping. For more information, read this page: Deploy with preconfigured XML.

Cluster specific

When you use SEAPATH in cluster mode, some parts of the XML configuration are handled by SEAPATH. This is the case for the name, uuid and disk.

The fields name and uuid will be ignored if they are present in the XML configuration file.

The disk shall not be configured inside the XML, otherwise the VM deployment will fail.

Note that SEAPATH only supports one disk per VM in cluster mode for the moment.

Virtual machine and slices

If you configured SEAPATH with machine-rt and machine-nort slices (see Scheduling and priorities), you must put the VM into one of them. This is done using libvirt resources. The VM will then only have access to the CPUs associated with the slice.

Possible values:

  • /machine/nort
  • /machine/rt

Example for a virtual machine in the real-time slice:

<resource>
    <partition>/machine/rt</partition>
</resource>

CPU tuning

In the project, CPU tuning is used to limit the CPU resources of the virtual machine (more details here).

  • The vcpupin element specifies which of the host's physical CPUs a domain vCPU will be pinned to. Use this value only for performance-driven VMs.
  • The vcpusched element specifies the scheduler type for a particular vCPU. Use this value only for performance-driven VMs.
  • The emulatorpin element specifies which of the host's physical CPUs the emulator will be pinned to. The emulator covers the management processes of the VM (watchdog, creation, timers, modification). Most of the time, this value is not needed.
Info
If you configured CPU restriction for slices (see Scheduling and priorities), all the CPU ranges that you provide for emulatorpin and vcpupin must be part of the allowed CPUs of the slice where the VM is. By default, the VM is part of the machine slice, but it can be in the machine-rt or machine-nort slice depending on your configuration.


Note

If you deploy a machine with a real-time vCPU scheduler, you must set the emulatorpin CPU range. It can be set on the system CPU range or on specific cores.
Remember that, for maximal performance, each vCPU must be scheduled alone on its core; emulatorpin must then be set on another core.
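As a sketch of how these elements fit together, the cputune fragment below pins each vCPU alone on its own core, gives both vCPUs a FIFO real-time scheduler, and pins the emulator on separate cores (the CPU numbers are placeholders; they must belong to the allowed CPUs of the slice hosting the VM):

```xml
<vcpu placement="static">2</vcpu>
<cputune>
  <!-- each vCPU pinned alone on its core for maximal performance -->
  <vcpupin vcpu="0" cpuset="2"/>
  <vcpupin vcpu="1" cpuset="3"/>
  <!-- real-time FIFO scheduling for both vCPUs -->
  <vcpusched vcpus="0-1" scheduler="fifo" priority="1"/>
  <!-- emulator threads pinned on other cores, as required with a real-time vCPU scheduler -->
  <emulatorpin cpuset="0-1"/>
</cputune>
```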

Deployment of VM with Ansible

To deploy the virtual machines, use the playbooks ansible/playbooks/deploy_vms_cluster.yaml and ansible/playbooks/deploy_vms_standalone.yaml depending on your setup. They are meant to work with the VM example inventories (see here for the Yocto version and here for the Debian version).

These playbooks will call the library ansible/library/cluster_vm.py, which wraps the vm_manager tool.

Configuration

The inventory may define these hosts:

  • hypervisors: set of hosts on which to launch virtual machines

RADOS Block Device

libvirt will automatically detect the RBD and create a pool when the --enable-rbd option is set. The resources found in this pool can be used to launch a VM (if the resource is a virtual machine image).

Components of VM

Each component of the VM should use a VirtIO interface (such as the network and the disk). This reduces problems with the virtual hardware.

Some examples are available here.
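As a hedged sketch, a standalone domain XML could declare a VirtIO disk and network interface as below (the image path and bridge name are placeholders; recall that in cluster mode the disk must not appear in the XML, since SEAPATH handles it):

```xml
<!-- disk attached on the VirtIO bus -->
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2"/>
  <source file="/var/lib/libvirt/images/guest0.qcow2"/>
  <target dev="vda" bus="virtio"/>
</disk>
<!-- network interface using the VirtIO model -->
<interface type="bridge">
  <source bridge="br0"/>
  <model type="virtio"/>
</interface>
```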

Troubleshooting
Bootloader

If the VM is in UEFI mode, it may fail to boot because the firmware is not the same. Follow these steps:

  1. Boot the machine on the cluster
  2. Re-install the bootloader
  3. Update the bootloader configuration

Deployment of VM with Ansible

The playbook ansible/playbooks/cluster_setup_deploy_vms.yaml is used to deploy a virtual machine. There is an Ansible library (ansible/library/cluster_vm.py) which wraps the vm_manager tool.

Configuration

The inventory must define these hosts to run this step:

  • VMs: set of hosts to launch on the hypervisors

Manage virtual machines on the cluster

  • Check the execution of the resource:

    crm status

  • Get the status of the resource:

    vm-mgr status --name NAME

  • Delete a VM from the cluster:

    vm-mgr remove --name NAME

For more information about the vm_manager tool, check out this page: The vm_manager tool.
