Co-simulation

Co-simulation is a simulation technique in which the data exchange between subsystems is restricted to discrete communication points.*) In the time between communication points, each subsystem is solved independently of the others by its own solver. [1] Compared to other model coupling methods, co-simulation has many advantages, especially in an industrial setting. This article explains the basics of the method and why we recommend it. A comparison with other approaches is given in the article on modular modelling of complex systems.

The flowchart to the right illustrates the basic co-simulation process. After initialisation, the simulation runs in a cycle as follows:

  1. Each subsimulator performs its calculations independently, advancing logical time by some amount—a “macro” time step—which is specified by a central algorithm in the co-simulation software.
  2. When all subsimulators have completed their calculations for the current time step, their results are made available as output variables.
  3. The co-simulation software transfers the values of output variables to the input variables of other subsimulators, according to how the model is coupled.
  4. The whole process repeats until the simulation has reached the end. (A code sketch of this cycle is given below.)
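
To make this cycle concrete, here is a minimal sketch of a fixed-step co-simulation algorithm in Python. The Subsimulator interface and the connection table are assumptions made for illustration; they are not the API of any particular co-simulation tool:

    class Subsimulator:
        """Hypothetical interface that each subsimulator implements."""

        def do_step(self, current_time, step_size):
            """Advance logical time from current_time by step_size."""
            raise NotImplementedError

        def get_outputs(self):
            """Return the current output variables as a {name: value} dict."""
            raise NotImplementedError

        def set_inputs(self, values):
            """Set input variables from a {name: value} dict."""
            raise NotImplementedError

    def run_cosimulation(subsims, connections, step_size, end_time):
        """Run a fixed-step co-simulation.

        connections maps (source_sim, output_name) to (target_sim,
        input_name), i.e. it encodes how the model is coupled.
        """
        t = 0.0
        while t < end_time:
            # 1. Each subsimulator advances one macro time step independently.
            for sim in subsims:
                sim.do_step(t, step_size)
            # 2. and 3. Collect output values and route them to the
            # connected input variables of the other subsimulators.
            pending = {sim: {} for sim in subsims}
            for (src, out_name), (dst, in_name) in connections.items():
                pending[dst][in_name] = src.get_outputs()[out_name]
            for sim, values in pending.items():
                sim.set_inputs(values)
            # 4. Repeat until the simulation has reached the end.
            t += step_size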

Some things worth noting are:

  • No communication takes place between subsimulators during a macro time step; they are completely independent during this time.
  • The simplest co-simulation algorithm is exactly as shown above, with a fixed-length macro time step. More advanced co-simulation algorithms may use variable step sizes which are adjusted adaptively based on the system behaviour.
  • Internally, the subsimulators may perform their integration using “micro” time steps, which are at most as long as the macro time steps and usually shorter, as illustrated in the sketch after this list.
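
As an illustration of the last point, the following sketch shows a subsimulator that subdivides each macro step into fixed-length micro steps internally. The model (a first-order lag integrated with forward Euler) and the step sizes are arbitrary choices for this example:

    class FirstOrderLag:
        """Toy subsystem: dx/dt = (u - x) / tau, integrated with
        forward Euler."""

        def __init__(self, tau=1.0, micro_step=0.01):
            self.tau = tau
            self.micro_step = micro_step
            self.x = 0.0  # state, also the output
            self.u = 0.0  # input

        def do_step(self, current_time, macro_step):
            # Take as many micro steps as fit inside the macro step.
            # The input u is held constant throughout, since no
            # communication takes place during a macro time step.
            n = max(1, round(macro_step / self.micro_step))
            h = macro_step / n
            for _ in range(n):
                self.x += h * (self.u - self.x) / self.tau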

The advantages of co-simulation compared to other coupling approaches are:

  • It allows the use of specialised solvers for models that benefit from them, and it allows the micro step size to be adapted to the dynamics of each subsystem, which can improve performance, stability and accuracy.
  • The interface to each subsimulator can be minimal and opaque, consisting simply of the transfer of input and output values. This allows the models to be “black boxes” whose implementation details can be well hidden, which is often desirable in an industrial setting.
  • It typically represents a very loose coupling between the subsystems, both in the physical sense and in the software-architectural sense. This allows for a great degree of encapsulation, which makes it easier to reuse models in other contexts later, as they will have minimal dependencies on other subsystems.
  • The loose coupling and comparatively large macro step length make it very well suited for distributed simulations, where the workload can be spread over multiple CPU cores, or even multiple machines in a network. The latter case opens up possibilities for running cross-platform simulations, which may be required in order to combine certain simulation tools. A few case studies of co-simulation relevant for marine applications are discussed in [2].

Disadvantages of co-simulation include:

  • It is poorly suited for tight couplings. Such couplings often require very short step sizes to obtain stable solutions, and are generally best handled by solving the model equations together under one solver.
  • It requires special support from the simulation tools involved, which must either be able to export models together with their solvers to a form usable by dedicated co-simulation software, or have built-in co-simulation functionality themselves. An example of the former is support for FMI for Co-Simulation, which an increasing number of tools provide; a usage sketch follows this list.
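
As an example of the former, an FMU exported for FMI for Co-Simulation can be run from Python with the third-party FMPy library; the file name here is hypothetical:

    # Requires: pip install fmpy
    from fmpy import simulate_fmu

    # Run a co-simulation FMU from t = 0 to t = 10 s. The result is a
    # structured NumPy array with one column per recorded variable.
    result = simulate_fmu('Engine.fmu', stop_time=10.0)
    print(result['time'])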

In real-time simulations, the co-simulation software is responsible for synchronising logical time with wall clock time, issuing “next step” commands to the subsimulators at the right moments. The subsimulators must be able to perform their calculations fast enough to leave some time for the transfer of variable values at communication points. In network-distributed simulations, this transfer time may be significant. This, together with the fact that variable values are exchanged at every communication point, may also place a lower limit on the length of the macro time steps.
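
The following sketch illustrates this kind of real-time pacing in Python. The run_one_macro_step function stands in for steps 1-3 of the cycle described earlier; real co-simulation software is of course more sophisticated:

    import time

    def run_realtime(run_one_macro_step, step_size, end_time):
        """Pace a co-simulation against the wall clock."""
        start = time.monotonic()
        t = 0.0
        while t < end_time:
            run_one_macro_step(t, step_size)
            t += step_size
            # Sleep until wall-clock time catches up with logical time.
            # If the subsimulators are too slow, the deadline is missed
            # and the simulation falls behind real time.
            delay = (start + t) - time.monotonic()
            if delay > 0:
                time.sleep(delay)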

The terms “co-simulation” and “distributed simulation” are often confused, but they're not exactly the same. “Distributed simulation” usually refers to the act of running different parts of a simulation on different computers in a network, to distribute the workload between them. Often this is simply done to improve performance, but other concerns such as running on different operating systems or hardware architectures may also play a part. Sometimes, people will also call it “distributed simulation” when the work is distributed between several processes or CPU cores on a single machine.

Co-simulation is very well suited as a method for running distributed simulations, due to its loosely coupled nature and relatively long communication intervals. However, one may also use other methods of distributing simulations, and one may run co-simulations that are not distributed. Therefore, the two terms should not be used interchangeably.

When dealing with different types of co-simulation software, one may come across two different communication and control paradigms, which we shall refer to as “federation” and “master/slave”. Both consist of subsimulators that communicate through input and output variables, but the manner in which the communication takes place is different.

In a federation, the subsimulators are called federates, and they communicate through a common run-time infrastructure (RTI). The RTI acts as a communication channel for variable values, and is also responsible for time synchronisation. The communication is based on a publish-subscribe pattern, where each federate publishes its output values and actively subscribes to the input values it needs. For example, a ship federate that represents a vessel model could publish its position as an attribute named position. Then, any other federate that needs the position of the ship can subscribe to ship.position and start receiving updates.
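
The following toy example illustrates this publish-subscribe pattern in Python. The RTI class and its methods are invented for the sketch; a real RTI, such as an HLA implementation, has a far richer API and also handles time synchronisation:

    class RTI:
        """Toy run-time infrastructure that routes published attributes."""

        def __init__(self):
            self.subscribers = {}  # attribute name -> list of callbacks

        def subscribe(self, attribute, callback):
            self.subscribers.setdefault(attribute, []).append(callback)

        def publish(self, attribute, value):
            for callback in self.subscribers.get(attribute, []):
                callback(value)

    rti = RTI()

    # A federate that needs the ship's position actively subscribes:
    rti.subscribe('ship.position', lambda pos: print('ship is at', pos))

    # The ship federate publishes its position after each update:
    rti.publish('ship.position', (58.97, 5.73))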

In the master/slave paradigm, the subsystems are slaves which are under the complete control of a co-simulation master. The main difference between slaves and federates is that slaves are passive recipients of input data; they do not actively request it. In fact, slaves have no information about other slaves or the system as a whole, not even about which other slaves and variables they are connected to. They only see the values they receive for their input variables. The master is responsible for making the connections and routing output values to the correct input variables. Time synchronisation is also handled by the master.
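
In contrast to the federate above, a slave never addresses other subsimulators. Continuing the toy example, the master owns the connection table (as in the fixed-step sketch near the top of this article) and pushes values to each slave's inputs; the class and method names are again invented for illustration:

    class ShipSlave:
        """Toy slave: a passive recipient of input values."""

        def __init__(self):
            self.inputs = {}

        def set_inputs(self, values):
            # The slave receives whatever the master routes to it. It
            # does not know which other slave these values came from.
            self.inputs.update(values)

    ship = ShipSlave()

    # The master resolves the connection and delivers the value:
    ship.set_inputs({'current_velocity': (0.4, -0.1)})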

We can summarise the above by saying that federates are more autonomous, while slaves are more independent, in the sense that they carry no knowledge of the rest of the system. For some applications this distinction will not be important, and the two approaches could work equally well. In others there will be clear advantages to one or the other.

Most traditional co-simulation middleware, such as HLA and its predecessors, is built around the federation concept. Such middleware has its origins in a military setting, where it is often used for wargames in which each federate represents a plane, a car, a tank, and so on. In this context it makes a lot of sense for the federates to be autonomous, as they represent autonomous entities in the real world.

The drawback of this model is that the federates need to have some information about the other federates, such as their variable names. This creates dependencies between them and gets in the way of modularity and scalability. In a virtual prototyping setting, where it is important to be able to mix and match components, and to substitute one model for another of the same category—even models created by completely different authors, using completely different tools—keeping interdependencies to a minimum is crucial. We don't want vendor X's gearbox model to only be usable with vendor X's engine models; we want it to be usable with all engine models.

For these reasons, we believe that the master/slave paradigm is usually preferable for the purposes of virtual prototyping. The FMI standard is based on this structure, and we have also adopted it for our co-simulation software, Coral.

  • Co-simulations can sometimes be prone to instability issues. Read our article on stability to learn more about this.
  • Some model couplings are more problematic than others, especially in co-simulations. We deal with this in our article on tightly coupled systems.

*) Communication points are sometimes referred to as sampling points or synchronisation points.