Choosing a network card

Depending on your needs, the network card will have to support several features, in particular PTP hardware timestamping for time synchronization, and SR-IOV or PCI passthrough to give virtual machines direct access to the NIC.

None of these features is strictly required by SEAPATH; however, PTP is required by the IEC 61850 standard.

If your machine doesn't have enough interfaces, you can use a USB-to-Ethernet adapter. For example, the TRENDnet TU3-ETG has been tested and works natively on SEAPATH. However, these added interfaces will not be compatible with any of the features described above.
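
If you are unsure whether a given card provides the PTP support mentioned above, its hardware timestamping capabilities can be queried with `ethtool`. A minimal check, assuming the interface is named `eth0` (adapt to your system):

```bash
# Query the timestamping capabilities of the NIC (interface name is a placeholder).
# A PTP-capable card reports hardware-transmit/hardware-receive capabilities and
# a "PTP Hardware Clock" index.
ethtool -T eth0
```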

SEAPATH network on one hypervisor

The following description fits the case of a standalone hypervisor, but is also applicable to a machine in the cluster.

Physical interfaces

There are four different types of interfaces to consider on a SEAPATH hypervisor:

VM interfaces

There are three different ways to transmit data from a physical network interface to a virtual machine: through a virtio interface attached to a Linux or OVS bridge, through SR-IOV, or through PCI passthrough.

Recommendations

Many network configurations are possible using SEAPATH. Here are our recommendations:

Administration

To avoid using too many interfaces, the administration of all VMs can use the same physical NIC. Hypervisor administration can also be done on this interface.
This is possible by using a Linux or OVS bridge and connecting all VMs and the hypervisor to it.

This behavior is achieved by the `br0` bridge, preconfigured by Ansible (see the Ansible inventories examples).
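
For reference, a minimal manual sketch of such an administration bridge, using standard Linux tools, could look like the following. Interface names and the address are placeholders; on SEAPATH this is normally generated by Ansible rather than typed by hand:

```bash
# Create the administration bridge and attach the physical admin NIC to it
# (names and the address are placeholders; Ansible normally sets this up).
ip link add name br0 type bridge
ip link set dev eno1 master br0
ip link set dev eno1 up
ip link set dev br0 up
# The hypervisor gets its administration address on the bridge itself.
ip addr add 192.168.1.10/24 dev br0
# VMs are then attached to br0 through their virtio interfaces.
```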

IEC 61850 traffic

Sample Values and GOOSE messages should be received on an interface using PCI passthrough or SR-IOV; the classic virtio interface is not fast enough.
In this situation, every VM receiving IEC 61850 data must have one dedicated interface.
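
As an illustration of both approaches, the sketch below creates SR-IOV virtual functions on a capable NIC (one can then be given to each VM) and detaches a whole NIC from the host for PCI passthrough. The interface name, VF count and PCI address are placeholders:

```bash
# SR-IOV: create two virtual functions on a capable NIC (placeholder name/count).
echo 2 > /sys/class/net/enp3s0f0/device/sriov_numvfs
# The VFs now appear as extra PCI devices and are listed in the PF output:
ip link show enp3s0f0

# PCI passthrough: detach the whole device from the host so libvirt can hand it
# to a VM (placeholder PCI address).
virsh nodedev-detach pci_0000_03_00_0
```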

For testing purposes, to avoid using one interface per VM, this data can be received on classic virtio interfaces. This is done by using a Linux or OVS bridge connected both to the physical NIC and to all the virtual machines.

See the variables `bridges` and `ovs_bridges` in the Ansible inventories examples.
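
For such a test setup, a rough manual equivalent of what the `ovs_bridges` variable configures could look like this; bridge and interface names are placeholders:

```bash
# Test-only setup: bridge the physical NIC receiving SV/GOOSE traffic with an
# OVS bridge, then connect the VMs' virtio interfaces to the same bridge.
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 enp4s0
ip link set dev enp4s0 up
ip link set dev ovsbr0 up
# Each VM is then given a virtio interface attached to ovsbr0 (for example via
# a libvirt <interface type='bridge'> definition with an openvswitch virtualport).
```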

PTP

PTP synchronization does not generate much data. The hypervisor can thus be synchronized on an interface that is already used for other traffic.

Please remember that PTP requires specific NIC support.

In order to synchronize the virtual machines, the PTP clock of the host can be used (`ptp_kvm`). It is also possible to expose the PTP clock of the PCI-passthrough/SR-IOV interface given to the VM.
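
As a rough sketch of both options (interface names, device paths and the chrony snippet are illustrative only), the hypervisor can be synchronized with linuxptp and a guest can consume the host clock through `ptp_kvm`:

```bash
# On the hypervisor: synchronize the NIC's PTP hardware clock from the network
# (-2 selects layer-2 transport; the full IEC 61850 power-profile settings are
# out of scope here), then discipline the system clock from that PHC.
ptp4l -i enp4s0 -2 -m
phc2sys -s enp4s0 -w -m

# Inside a VM using the host clock (ptp_kvm): load the module and let chrony use
# the paravirtualized PHC as a reference clock (illustrative path and device).
modprobe ptp_kvm
echo "refclock PHC /dev/ptp0 poll 2" >> /etc/chrony/chrony.conf
```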

See the Time synchronization page for more details.

Example of configuration

Below is an example of a SEAPATH machine with two VMs. VM1 receives SV/GOOSE and must thus be synchronized with PTP and use PCI-passthrough. VM2 is just for monitoring, and does not require either PTP or PCI-passthrough.

Connecting machines in a cluster

A SEAPATH cluster is used to ensure redundancy of the machines (if one hypervisor fails) and of the network (if one link fails).

In order to connect all the machines together, two options are available: going through a dedicated switch, or connecting the machines directly to each other.

The second method is recommended for SEAPATH because it doesn't require an additional device (the switch). In that case, every machine is connected to its neighbors, forming a triangle. If one link breaks, the packets still have another route to reach the targeted machine. See GitHub for more information.

In that case, two more interfaces per machine are needed, which leads to a minimum of four interfaces per machine (two for the cluster, one for the administration, and one for IEC 61850 traffic).
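
SEAPATH's Ansible playbooks take care of configuring this cluster network. Purely to illustrate the idea, one way to make such a loop survive a single link failure is to bridge the two cluster ports and enable RSTP, for example with Open vSwitch; the names and address below are placeholders and not necessarily what SEAPATH generates:

```bash
# Illustrative only: bridge the two cluster-facing NICs and enable RSTP so the
# triangle topology does not create a switching loop (placeholder names/address).
ovs-vsctl add-br brcluster
ovs-vsctl add-port brcluster enp5s0
ovs-vsctl add-port brcluster enp6s0
ovs-vsctl set bridge brcluster rstp_enable=true
ip addr add 10.0.0.1/24 dev brcluster
ip link set dev brcluster up
```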

Observer 

An observer machine doesn't run any virtual machines. It is only used in the cluster to arbitrate if a problem occurs between the two hypervisors.

On this machine, no IEC 61850 packets will be received.
It can be synchronized with PTP, but this is not needed; simple NTP synchronization is sufficient.

Even if PTP is not required, time synchronization must be ensured, at least with NTP. Otherwise, the machines will not be able to form the SEAPATH cluster.

So, on an observer, only three interfaces are needed: two for the cluster and one for administration.
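
If chrony is the NTP client on your image (this depends on the distribution), the synchronization status can be checked before forming the cluster:

```bash
# Check NTP synchronization status (assumes chrony; adapt to your time service).
chronyc tracking
chronyc sources -v
```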

Administration machine

In order to configure all the machines in the cluster, we advise using a switch connected to the administration machine.

Below is an example with two hypervisors and one observer, all connected together in a triangle cluster. The administration machine is connected to them using a switch.

Receiving Sample values and PTP frames

Machines

The standard way to generate Sample Values is to use a merging unit. For PTP, the standard source is a grandmaster clock.

However, in a test environment, these two machines can be simulated:

For the example in this section, we will use only one machine to simulate both the PTP frames and the IEC 61850 traffic. This machine will be named “Publisher”.
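
On such a publisher, the PTP frames themselves can be produced with linuxptp: if its clock is the best one announced on the network, `ptp4l` is elected grandmaster by the best master clock algorithm. The interface name is a placeholder, and the SV/GOOSE generation tool is out of scope here:

```bash
# Run ptp4l on the publisher interface facing the cluster; with no better clock
# on the network it will be elected grandmaster (placeholder interface name,
# layer-2 transport as commonly used with IEC 61850).
ptp4l -i enp2s0 -2 -m
```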

Connections

In order to connect the publisher (or the merging unit/grandmaster clock), a direct connection with a single cable is best. It avoids unnecessary latency, but requires a machine with many interfaces.

A switch can instead be used between the publisher and the cluster machines. In this situation, we advise using a switch separate from the one used for administration. It is possible to use only one switch for both, but in that case the two networks must be separated using VLANs.

Note: to handle PTP properly, the best option is to use a PTP-compliant switch. See the Time synchronization page for more information.

Below is a schema of a cluster with two hypervisors and an observer. The administration network is isolated on its own switch. Another switch is used for PTP and IEC 61850 traffic. Remember that if you use PCI passthrough, you will need as many interfaces as you have virtual machines.

To avoid PTP frames interfering with IEC 61850 traffic, the best option is to isolate PTP frames on a dedicated VLAN. This is done with the variable `ptp_vlanid` in the Ansible inventories.
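
For reference, the manual equivalent of such an isolation is a VLAN sub-interface dedicated to PTP. The interface name and VLAN ID below are placeholders; `ptp_vlanid` is the Ansible variable that automates this:

```bash
# Create a VLAN sub-interface dedicated to PTP and run PTP on it
# (placeholder interface name and VLAN ID).
ip link add link enp4s0 name enp4s0.100 type vlan id 100
ip link set dev enp4s0.100 up
ptp4l -i enp4s0.100 -2 -m
```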