KEYNOTE SPEAKERS

Corina Pasareanu

Corina Pasareanu is an ACM Distinguished Scientist working at NASA Ames. She is affiliated with KBR and Carnegie Mellon University's CyLab. Her research interests include model checking, symbolic execution, compositional verification, probabilistic software analysis, autonomy, and security. She is the recipient of several awards, including the ETAPS Test of Time Award (2021), the ASE Most Influential Paper Award (2018), the ESEC/FSE Test of Time Award (2018), the ISSTA Retrospective Impact Paper Award (2018), the ACM Impact Paper Award (2010), and the ICSE Most Influential Paper Award (2010). She has been serving as Program/General Chair for several conferences, including ICSE 2025, SEFM 2021, FM 2021, ICST 2020, ISSTA 2020, ESEC/FSE 2018, CAV 2015, ISSTA 2014, ASE 2011, and NFM 2009. She serves on the steering committees of the ICSE, TACAS, and ISSTA conferences and is currently an associate editor for IEEE TSE and for Springer's STTT.


Keynote Title: Compositional Verification and Run-time Monitoring for Learning-Enabled Autonomous Systems [Slides]


Providing safety guarantees for autonomous systems is difficult as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze due to their size (they can have thousands or millions of parameters), lack of formal specifications (DNNs are typically learnt from labeled data, in the absence of any formal requirements), and sensitivity to small changes in the environment. We present compositional techniques for the formal verification of safety properties of such autonomous systems. The main idea is to abstract the hard-to-analyze components of the autonomous system, such as DNN-based perception and environmental dynamics, with either probabilistic or worst-case abstractions. This makes the system amenable to formal analysis using off-the-shelf model checking tools, enabling the derivation of specifications for the behavior of the abstracted components such that system safety is guaranteed. The derived specifications are used as run-time monitors deployed on the DNN outputs. We illustrate the idea in a case study from the autonomous airplane domain.
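
As a purely illustrative reading of the final step, the sketch below shows what a run-time monitor deployed on DNN perception outputs might look like in Python. The PerceptionEstimate fields, the thresholds, and the two checks are assumptions chosen for exposition, not the specifications derived by the compositional analysis described in the talk; in practice, the monitored conditions would come from that offline analysis.

```python
# Illustrative sketch only: the data fields, thresholds, and checks below are
# assumptions for exposition, not the monitors derived by the method in the talk.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PerceptionEstimate:
    """Output of a hypothetical DNN perception component for runway tracking."""
    cross_track_error_m: float   # estimated lateral deviation from the centerline
    heading_error_deg: float     # estimated heading deviation
    confidence: float            # network's self-reported confidence in [0, 1]


@dataclass
class MonitorVerdict:
    safe: bool
    reason: str


def runtime_monitor(est: PerceptionEstimate,
                    prev: Optional[PerceptionEstimate],
                    max_jump_m: float = 2.0,
                    min_confidence: float = 0.6) -> MonitorVerdict:
    """Check a DNN output against an assumed specification.

    The conditions are stand-ins: reject estimates whose confidence is too low
    or that jump implausibly between consecutive frames. A violation would
    trigger a fallback, e.g., handing control to a conventional estimator.
    """
    if est.confidence < min_confidence:
        return MonitorVerdict(False, f"confidence {est.confidence:.2f} below {min_confidence}")
    if prev is not None and abs(est.cross_track_error_m - prev.cross_track_error_m) > max_jump_m:
        return MonitorVerdict(False, "implausible jump in cross-track estimate")
    return MonitorVerdict(True, "estimate within assumed specification")


if __name__ == "__main__":
    prev = PerceptionEstimate(cross_track_error_m=0.5, heading_error_deg=1.0, confidence=0.9)
    curr = PerceptionEstimate(cross_track_error_m=4.0, heading_error_deg=1.2, confidence=0.8)
    print(runtime_monitor(curr, prev))  # flags the 3.5 m jump as a violation
```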

Saddek Bensalem

Prof. Bensalem led the "Rigorous System Design" research team at Verimag/Université Grenoble Alpes from 2008 to 2020. His work focuses on rigorous system design as a coherent and accountable process for cost-effectively building systems of guaranteed quality, with the aim of providing the theoretical underpinnings, methods, and tools for moving from empirical approaches to a well-founded discipline. In October 2020, Prof. Bensalem created a new research project on "Engineering Trustworthy Learning-enabled Systems", whose aim is to develop foundations for the continuous engineering of trustworthy learning-enabled (LE) systems. The targeted scientific breakthrough lies in the convergence of "data-driven" and "model-based" engineering, a convergence further complicated by the need to apply verification and validation incrementally and to avoid complete re-verification and re-validation efforts. The three scientific directions of this research are: (1) integration of learning-enabled components and model-based components via a contract-based methodology that allows incremental modification of systems, including threat models for cyber-security; (2) adaptation of verification techniques applied during model-driven design to learning components, in order to enable unbiased decision making; and (3) incremental synthesis techniques unifying the enforcement of safety properties with the optimization of performance.


Keynote Title: Monitoring of Learning-Enabled Components


Artificial Intelligence (AI) and machine learning can transform many safety-critical applications, such as those in automotive and healthcare. The maturity of these technologies has reached a point where they now offer clear benefits across a range of safety applications. Unfortunately, the use of learning-enabled components whose trustworthiness cannot be guaranteed can lead to dramatic consequences. Rigorous design and trust are therefore imperative for AI's widespread adoption and, thus, its overall success. Establishing trust, however, is a complex process, and its constituent elements span all the dimensions of an AI system. In this presentation, I will focus on three essential questions: (1) Is AI in mission-critical systems hype, or is it something sustainable? (2) What distinguishes learning-enabled systems, especially mission-critical ones, from pure software engineering? (3) How can we ensure the trustworthiness of these critical systems with learning-enabled components? I will devote a good part of the presentation to this last question, and in particular to the problem of runtime monitoring of learning-enabled components.
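
To make the monitoring question concrete, here is a minimal, hypothetical sketch of wrapping a learning-enabled classifier with a confidence-based runtime monitor in Python. The stub model, the softmax-confidence criterion, and the abstain-and-fallback policy are illustrative assumptions, not the specific techniques covered in the keynote.

```python
# Hypothetical sketch: classifier stub, threshold, and fallback policy are
# illustrative assumptions, not techniques presented in the keynote.
import math
from typing import Callable, List, Optional, Sequence, Tuple


def softmax(logits: Sequence[float]) -> List[float]:
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def monitored_predict(model: Callable[[Sequence[float]], Sequence[float]],
                      x: Sequence[float],
                      threshold: float = 0.8) -> Tuple[Optional[int], str]:
    """Wrap a learning-enabled classifier with a simple confidence monitor.

    If the top softmax probability falls below `threshold`, the monitor
    abstains so that a conventional, non-learned fallback can take over.
    """
    probs = softmax(model(x))
    top = max(range(len(probs)), key=probs.__getitem__)
    if probs[top] < threshold:
        return None, "monitor: low confidence, defer to fallback"
    return top, "monitor: prediction accepted"


if __name__ == "__main__":
    # Stand-in "model": returns fixed logits regardless of the input.
    stub_model = lambda x: [2.0, 1.9, 0.1]
    print(monitored_predict(stub_model, [0.0]))  # close logits -> monitor defers
```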