Tutorials

  • Reliability Design of Complex Systems - Modeling and Efficient Simulation

    Reliability is an important non-functional requirement of many man-made systems, especially when failures may lead to catastrophic events. When such systems are too complex to be understood and designed by one person, the effects of local design decisions on overall system properties are not obvious. Mathematical models can help to describe such systems and to compute their reliability with the help of appropriate software tools.

    Unavoidable faults may be masked or tolerated by static or dynamic redundancy measures, all at considerable additional cost. The main task is to design a system such that its reliability and safety requirements are met with the least amount of resources. Classic models and tools for static analysis cannot cover systems in which complex behaviour influences failures, or in which dynamic reconfiguration is applied (possibly because it offers a better resource/reliability trade-off).
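    The resource/reliability trade-off of static redundancy can be illustrated with a minimal sketch (not part of the tutorial material), assuming independent components with constant failure rates and exponentially distributed lifetimes:

    ```python
    import math

    def component_reliability(lam, t):
        """Reliability of one component with constant failure rate lam at time t."""
        return math.exp(-lam * t)

    def series_reliability(lams, t):
        """A series system works only if every component works."""
        r = 1.0
        for lam in lams:
            r *= component_reliability(lam, t)
        return r

    def parallel_reliability(lams, t):
        """A parallel system (static redundancy) fails only if all components fail."""
        f = 1.0
        for lam in lams:
            f *= 1.0 - component_reliability(lam, t)
        return 1.0 - f

    # Example: a single component vs. a duplicated one
    # (failure rate 1e-4 per hour, mission time 1000 hours)
    single = component_reliability(1e-4, 1000.0)          # ~0.905
    duplex = parallel_reliability([1e-4, 1e-4], 1000.0)   # ~0.991
    ```

    Duplicating the component roughly doubles the resource cost but cuts the mission unreliability by an order of magnitude, which is the kind of trade-off a designer must weigh.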

    Depending on the complexity of the system behaviour and the corresponding size of the state space, Markov chains and stochastic Petri nets are applied to reliability problems. They are attractive models as long as the underlying assumption of Markovian behaviour is realistic (phase-type distributions can approximate other distributions to a certain accuracy, but this is paid for with an even larger state space). Petri nets have recently been adopted in an international standard as a suggested tool for reliability engineering of complex systems.
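    As a small illustration of the Markov-chain approach (an assumed toy model, not taken from the tutorial), consider a repairable duplex system: a three-state birth-death continuous-time Markov chain with states "2 up", "1 up", and "0 up", failure rate k·λ in the state with k working components, and a single repair crew with rate μ. The balance equations of a birth-death chain can be solved directly:

    ```python
    def steady_state_availability(lam, mu):
        """Steady-state availability of a duplex system with one repair crew.

        CTMC states: 2 components up, 1 up, 0 up (system down).
        Failure rate from state k is k*lam; repair rate is mu.
        The birth-death balance equations give the unnormalized
        probabilities below, which are then normalized.
        """
        p2 = 1.0                       # reference state: both components up
        p1 = p2 * 2.0 * lam / mu       # balance across the 2-up / 1-up cut
        p0 = p1 * lam / mu             # balance across the 1-up / 0-up cut
        total = p2 + p1 + p0
        # The system is up while at least one component works.
        return (p2 + p1) / total

    # Example: lam = 1e-3 failures/h, mu = 1e-1 repairs/h
    avail = steady_state_availability(1e-3, 1e-1)   # ~0.9998
    ```

    Larger models of this kind quickly exceed hand calculation, which is where the stochastic Petri net tools discussed in the tutorial come in: the net is the compact model, and the reachable markings form the underlying Markov chain.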

  • Reinforcement Learning Based Control System Design

    Control system research and design have traditionally been based on detailed mathematical models of the system, together with well-characterized models of the uncertainties. In contrast, Reinforcement Learning (RL) attempts to learn control actions and models directly from experiments and data through a trial-and-error formalization, akin to human learning. For systems whose detailed mathematical models and disturbance models are already available, RL has limited scope in feedback controller design. However, the RL approach can prove extremely beneficial where such precise mathematical models are absent, where the control objectives are too complex and diverse, or where there is significant uncertainty from unidentified sources.

    The advanced automatic feature extraction abilities of deep neural networks are used to improve learning in RL algorithms. Using only raw visual input signals, advanced Deep Reinforcement Learning (DRL) algorithms can solve some of the most challenging decision-making problems; in board games like Go, Chess, and Shogi, DRL agents have outperformed humans. However, extending an RL agent from applications like game playing, where data is extracted from raw visual inputs, to feedback controller development is not trivial. The designer needs an understanding of the system in order to formulate the RL agent effectively and to obtain good performance from the control system.
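    The trial-and-error idea can be sketched with tabular Q-learning on a hypothetical one-dimensional setpoint-tracking task (a toy example assumed for illustration, not from the tutorial): the controller never sees a plant model, only states and rewards observed from interaction.

    ```python
    import random

    random.seed(0)

    N_STATES, TARGET = 11, 5      # plant positions 0..10, setpoint at 5
    ACTIONS = (-1, +1)            # control inputs: move left or right

    def step(state, action):
        """Toy plant: a deterministic 1-D integrator with saturation."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = -abs(nxt - TARGET)   # penalize distance to the setpoint
        return nxt, reward

    # Tabular Q-learning: learn purely from interaction, no plant model used.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

    for episode in range(500):
        s = random.randrange(N_STATES)
        for _ in range(30):
            # epsilon-greedy exploration
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[s][i])
            s2, r = step(s, ACTIONS[a])
            # Q-learning temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    # Greedy policy after learning: drives the state toward the setpoint
    policy = [ACTIONS[max((0, 1), key=lambda i: Q[s][i])] for s in range(N_STATES)]
    ```

    Even in this tiny example the designer's system understanding enters through the state representation, the action set, and the reward shaping; with raw visual inputs, the tabular Q would be replaced by a deep network, which is the DRL setting discussed above.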