This page describes the configuration and deployment of virtual machines on SEAPATH.

SEAPATH uses QEMU and Libvirt to deploy virtual machines. With this technology, two elements are needed:

  • A VM disk image. This file represents the disk of the virtual machine; it contains the kernel as well as the filesystem.
  • A domain XML file. This file describes all the devices needed by the VM, such as memory, CPU information, network interfaces, etc.
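
As a hedged illustration of these two elements, a minimal domain XML could look like the following sketch. The VM name, image path, and bridge name are placeholders for this example, not values taken from the SEAPATH sources:

```xml
<domain type='kvm'>
  <name>example-vm</name>          <!-- placeholder name -->
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- the VM disk image, built or provided separately -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/example-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- a VirtIO network interface attached to a host bridge -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```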

Build disk image

SEAPATH can host custom virtual machines. If you already have a ready VM disk image, skip directly to the next section.

However, if you need a virtual machine for testing or for deploying your application, you can use the SEAPATH default disk image file.
Detailed information on how to build a disk image is available here for a Yocto VM, or here for a Debian VM.

Components of VM

Each component of the VM should use a VirtIO interface (such as the network and the disk). This reduces problems with the virtual hardware.

Some examples are available here.

Troubleshooting
Bootloader

If the VM is in UEFI mode, it may fail to boot because the firmware is not the same. Follow these steps:

  1. Boot the machine on the cluster
  2. Re-install the bootloader
  3. Update the bootloader configuration


VM configuration

The official documentation on the XML format of libvirt is here.

Resources

Configure the virtual machine with domain XML file

If you use a custom VM, you probably already have this XML file. We still advise reading the information below for recommendations about VM configuration on SEAPATH.

In order to create your own XML for your virtual machines, here are two useful links:

A preconfigured XML using an Ansible template is also provided. It does not have the flexibility of a fully handwritten XML, but it is useful for testing or prototyping. For more information, read this page: Deploy with preconfigured XML.

Cluster specific

When you use SEAPATH in cluster mode, some parts of the XML configuration are handled by SEAPATH. This is the case for the name, uuid and disk.

The fields name and uuid will be ignored if they are present in the XML configuration file.

The disk shall not be configured inside the XML; otherwise the VM deployment will fail.

Note that SEAPATH only supports one disk per VM in cluster for the moment.
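
To illustrate these constraints, a cluster VM's XML would leave out the name, uuid and disk elements and keep only what SEAPATH does not generate. This is a hedged sketch; the memory size and bridge name are placeholders, not values from the SEAPATH sources:

```xml
<domain type='kvm'>
  <!-- name, uuid and disk omitted: SEAPATH fills them in at deployment -->
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- the single VM disk is injected by SEAPATH; only other devices go here -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```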

Virtual machine and slices

If you configured SEAPATH with machine-rt and machine-nort slices (see Scheduling and priorities), you must place the VM into one of them. This is done using libvirt resources: in the XML configuration of a virtual machine, the resource element specifies which slice should be used (more details here). The virtual machine will then only have access to the CPUs associated with that slice.

...

<resource>
    <partition>/machine/rt</partition>
</resource>

CPU

...

tuning

In the project, this element is used to limit the virtual machine's CPU resources (more details here).

  • The vcpupin element specifies which of the host's physical CPUs a domain vCPU will be pinned to. It is used to reserve one or more CPUs for a critical virtual machine, so it is important not to use these CPUs for another VM. Use this value only for performance-driven VMs.
  • The vcpusched element specifies the scheduler type for a particular vCPU. A priority can also be set. In the project, values greater than 10 are for the host, a value equal to 10 is for the RCU, and values less than 10 set the priority of the RT vCPUs among themselves. Use this value only for performance-driven VMs.
  • The emulatorpin element specifies which of the host's physical CPUs the emulator (the subset of a domain that includes neither vCPUs nor iothreads) will be pinned to. The emulator covers the management processes of the VM (watchdog, creation, timers, modification). This value is not useful most of the time.
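
As a sketch of how these three elements fit together (the CPU numbers and priority here are illustrative, not project-mandated values), they all live under the cputune element of the domain XML:

```xml
<cputune>
  <!-- pin vCPU 0 to host physical CPU 2 (performance-driven VMs only) -->
  <vcpupin vcpu='0' cpuset='2'/>
  <!-- real-time FIFO scheduler for vCPU 0; priority below 10 per the project convention -->
  <vcpusched vcpus='0' scheduler='fifo' priority='5'/>
  <!-- pin the emulator threads away from the pinned vCPU core -->
  <emulatorpin cpuset='0-1'/>
</cputune>
```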
Info
If you configured CPU restrictions for slices (see Scheduling and priorities), all the CPU ranges that you provide for emulatorpin and vcpupin must be part of the allowed CPUs of the slice where the VM is. By default, the VM is part of the machine slice, but it can be in the machine-rt or machine-nort slice depending on your configuration.


Note

If you deploy a machine with a real-time vCPU scheduler, you must set the emulatorpin CPU range. It can be set on the system CPU range or on specific cores.
Remember that, for maximal performance, each vCPU must be scheduled alone on its core. emulatorpin must then be set on another core.

Deployment of VM with Ansible

To deploy the virtual machines, use the playbook ansible/playbooks/deploy_vms_cluster.yaml or ansible/playbooks/deploy_vms_standalone.yaml depending on your setup. These are intended to work with the VM example inventories (see here for the Yocto version and here for the Debian version).

These playbooks will call the library ansible/library/cluster_vm.py which wraps the vm_manager tool.

Manage virtual machines on the cluster

  • Check the execution of the resource:

    Code Block
    languagebash
    crm status


  • Get the status of the resource:

    Code Block
    languagebash
    vm-mgr status --name NAME


  • Delete VM in the cluster:

    Code Block
    languagebash
    vm-mgr remove --name NAME

For more information about the vm_manager tool, check out this page: The vm_manager tool