Devansh Agrawal

Email  /  GitHub  /  Google Scholar  /  LinkedIn


Architectures for Safe Autonomy: Provable Guarantees Across Control, Planning, and Perception.

Devansh Agrawal
University of Michigan, Ann Arbor
2025
pdf / video / slides

This thesis focuses on the design of safety-critical autonomous systems, that is, systems that must always satisfy a set of safety constraints. The primary objective is to develop a cohesive architecture for the entire autonomy stack, ensuring that, under specific and verifiable assumptions, a robot can execute its mission while maintaining safety. Modern autonomous systems present unique challenges because their autonomy stacks are composed of interdependent modules: (1) a mission-level planning module that makes high-level decisions, (2) a perception module that processes sensor data to estimate the robot’s state and the operating environment, (3) a planning module that generates a trajectory for execution, and (4) a control module that computes actuation commands. Guaranteeing safety requires a systematic approach to the design and integration of these modules. To achieve this, we take a bottom-up approach, starting with the design of a safety-critical controller and identifying the assumptions necessary for its safe operation. These assumptions impose requirements on upstream autonomy modules, such as the planning and perception modules. We then propose methods to design or augment each module to ensure that, when composed, the entire autonomy stack maintains safety guarantees. The focus is not only on the correctness of individual modules, but also on stating assumptions for each module that upstream modules can satisfy, so that system-level guarantees can be achieved.
The main contributions of this thesis include: (A) the gatekeeper architecture, a flexible framework for establishing rigorous safety guarantees at the planning level; (B) certifiably correct perception algorithms that generate accurate obstacle maps while providing error bounds to account for odometry drift; and (C) the concepts of clarity and perceivability, which quantify a robotic system’s ability to gather information about its environment, accounting for the environment model as well as the robot’s actuation and sensing capabilities. Each contribution is supported by formal proofs and validated through simulations and hardware experiments with aerial and mobile robots.


Video

Design and source code modified from Jon Barron's website.