Overview

Consider critical autonomous operations involving multiple cyber and physical assets, together with interactions with humans. Such operations will rely on a pipeline of machine learning (ML) algorithms executing in real time on a distributed set of heterogeneous platforms, both stationary and maneuverable. These algorithms will have to operate with adversaries present in both the control plane and the data plane. Our project is designing secure algorithms that provide probabilistic guarantees on security and latency under powerful, rigorously quantified adversary models, moving away from the trend of one-off security solutions targeting specific attack vectors. The project seeks to make fundamental research contributions under three pillars: robust adversarial algorithms; interpretable algorithms that help the warfighter trust the results of the autonomous algorithms; and secure, distributed execution of the autonomy pipeline across multiple platforms.
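
To make the notion of a probabilistic guarantee under a quantified adversary model concrete, the sketch below shows one well-known instance from the literature: certifying a classifier against an L2-bounded adversary via randomized smoothing (Cohen et al., 2019). This is an illustrative stand-in rather than the project's own method, and the names `model`, `sigma`, `n_samples`, and `alpha` are hypothetical parameters chosen for the example.

```python
import numpy as np
from scipy.stats import norm, binomtest

def certified_radius(model, x, sigma=0.25, n_samples=1000, alpha=0.001):
    """Illustrative Monte-Carlo certification of the smoothed classifier
    g(x) = argmax_c P[model(x + N(0, sigma^2 I)) = c].
    `model` is assumed to map an input array to an integer class label.
    Returns (predicted_class, radius); the radius holds with probability
    at least 1 - alpha over the sampling."""
    # Sample noisy copies of x and collect the base model's votes.
    noise = np.random.randn(n_samples, *x.shape) * sigma
    votes = np.array([model(x + n) for n in noise])
    top_class = int(np.bincount(votes).argmax())
    k = int((votes == top_class).sum())
    # One-sided lower confidence bound on P[model(x + noise) = top_class].
    p_lower = binomtest(k, n_samples).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low
    if p_lower <= 0.5:
        return None, 0.0  # abstain: cannot certify a positive radius
    # Certified L2 radius R = sigma * Phi^{-1}(p_lower) (Cohen et al., 2019).
    return top_class, sigma * norm.ppf(p_lower)
```

Under these assumptions, the returned radius R guarantees, with probability at least 1 - alpha, that the smoothed classifier's prediction is unchanged by any perturbation of L2 norm less than R; this is the flavor of quantified, probabilistic statement the project targets across the full autonomy pipeline rather than for a single model in isolation.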