Mechanical and Civil Engineering Seminar: Ph.D. Thesis Defense

Monday, May 22, 2023
3:00pm to 4:00pm
Gates Annex B122
Robust Safety-Critical Control: A Lyapunov and Barrier Approach
Andrew Taylor, Graduate Student, Control & Dynamical Systems, Caltech

Abstract: Accompanying the technological advances of the past decade has been the promise of widespread growth of autonomous systems into nearly all domains of human society, including manufacturing, transportation, and healthcare. At the same time, there have been several tragic failures that reveal potential risks with the expansion of autonomous systems into everyday life, indicating that it is vital to account for safety in the design of control systems. My work develops a theory of robust safety-critical control for autonomous systems built upon the foundational tools of Control Lyapunov Functions (CLFs) and Control Barrier Functions (CBFs), which provide a powerful paradigm for the design of model-based safety-critical controllers. The dependence of CLF- and CBF-based controllers on a system model makes them susceptible to modeling inaccuracies, potentially resulting in unsafe behavior when deploying these controllers on real-world systems. I construct methods for resolving four classes of model inaccuracies, referred to as model error, disturbances, measurement error, and input sampling. A hallmark of these methods is a focus on enabling control synthesis through convex optimization, which ensures that controllers can be efficiently computed on real-world robotic platforms.
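
As a concrete point of reference, the canonical CBF-based safety filter (standard in this literature, rather than a construction specific to the talk) is the quadratic program

    u*(x) = argmin_u  ||u - u_des(x)||^2
            subject to  L_f h(x) + L_g h(x) u >= -alpha(h(x)),

where h is a CBF whose zero-superlevel set encodes the safe region, L_f h and L_g h are Lie derivatives of h along the control-affine model x' = f(x) + g(x)u, u_des is a nominal (possibly unsafe) input, and alpha is an extended class-K function. Because the constraint is affine in u, the program is convex and can be solved at control-loop rates; preserving this structure under model inaccuracies is the thread running through the methods summarized above.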

My talk is divided into two parts: addressing model error through data-driven learning techniques, and addressing input sampling through sampled-data control using approximate discrete-time models. In the first part, I will begin by exploring how different types of model error can arise and how they impact safety-critical control. I present an episodic learning framework that iteratively improves the safety of a system by utilizing collected data to augment CBF-based controllers with learned models that capture model errors. I then provide a theoretical characterization of the impact of residual learning errors through the notion of Projection-to-State Safety. I will highlight some structural challenges of this learning framework due to limited access to data when working with high-dimensional systems. Resolving these challenges leads to an alternative episodic learning framework that is demonstrated on a bipedal walking platform, as well as a data-driven control framework specified through robust convex optimization. In the second part of my talk, I will begin by presenting consistency properties that permit rigorous theoretical safety guarantees using a common class of approximate discrete-time models. Following this, I will present the notion of Sampled-Data Control Barrier Functions and establish how they provide practical safety guarantees. Lastly, I will establish how an appropriate choice of approximate discrete-time model order enables controller synthesis through convex optimization. I will conclude with a discussion of future directions stemming from these results.
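
To make the learning-augmented construction concrete, the sketch below shows one way a learned estimate of the model error can enter the CBF constraint while keeping the synthesis problem a convex quadratic program. This is an illustrative Python (cvxpy) sketch, not code from the thesis: the barrier h and its gradient dh, the nominal dynamics f and g, and the residual model learned_residual are all assumed, hypothetical interfaces.

    import cvxpy as cp

    def safe_input(x, u_des, h, dh, f, g, learned_residual, alpha=1.0):
        # CBF-QP with a learned correction to the barrier constraint.
        # h(x): barrier value; dh(x): gradient of h at x;
        # f(x), g(x): nominal control-affine dynamics x' = f(x) + g(x) u;
        # learned_residual(x): hypothetical model returning estimated errors
        # (b_hat, A_hat) in the drift and actuation terms of the constraint.
        u = cp.Variable(u_des.shape[0])
        Lfh = dh(x) @ f(x)                  # nominal drift term L_f h(x)
        Lgh = dh(x) @ g(x)                  # nominal actuation term L_g h(x)
        b_hat, A_hat = learned_residual(x)  # learned corrections (assumed)
        # The corrected constraint is still affine in u, so the filter
        # remains a convex QP solvable at control-loop rates.
        constraints = [Lfh + b_hat + (Lgh + A_hat) @ u >= -alpha * h(x)]
        prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_des)), constraints)
        prob.solve()
        return u.value

For the sampled-data part, a representative discrete-time barrier condition (again illustrative, not necessarily the talk's exact formulation) is h(x_{k+1}) >= (1 - gamma*dt) h(x_k) for some gamma > 0, enforced on an approximate discrete-time model of the dynamics. With a first-order (Euler) approximation of control-affine dynamics, this condition is again affine in the input, which illustrates how the choice of model order can recover convexity of the synthesis problem.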

For more information, please contact Mikaela Laite by email at [email protected].