Incubator Reports

Digital Engineering Enhanced T&E of Learning-Based Systems

December 2022

AUTHORS: Dr. Peter Beling, Dr. Laura Freeman, Dr. Jitesh Panchal

The current approach to Test and Evaluation (T&E) treats the system as a black box: the system is presented with sample inputs, and the corresponding outputs are observed and characterized relative to expectations. While this approach works well for traditional static systems, T&E of autonomous intelligent systems presents formidable challenges due to the dynamic environments in which agents operate, the adaptive learning behaviors of individual agents, complex interactions between agents and the operational environment, the difficulty of testing black-box machine learning (ML) models, and rapidly evolving ML models and AI algorithms.
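The black-box T&E procedure described above can be sketched in a few lines. This is a minimal illustration, not part of the report: the function names, the example system, and the pass/fail tolerance are all hypothetical assumptions, and a real T&E program would use operationally representative test vectors rather than hand-picked pairs.

```python
def black_box_test(system, test_inputs, expected, tolerance=0.05):
    """Black-box T&E sketch: probe the system under test with sample
    inputs and characterize each observed output against expectations.
    No visibility into the system's internals is assumed."""
    passes = []
    for x, y_expected in zip(test_inputs, expected):
        y_observed = system(x)  # only inputs and outputs are accessible
        passes.append(abs(y_observed - y_expected) <= tolerance)
    return sum(passes) / len(passes)  # fraction of test cases passed


# Hypothetical static system: its behavior is fixed, so a sample of
# input/output pairs characterizes it well -- the case where black-box
# testing works.
static_system = lambda x: 2.0 * x
inputs = [0.0, 1.0, 2.5]
expected = [0.0, 2.0, 5.0]
print(black_box_test(static_system, inputs, expected))  # prints 1.0
```

For a learning-based system, by contrast, the mapping from inputs to outputs can drift after deployment, so a pass rate measured once does not bound future behavior; that gap is the motivation for the approaches discussed below.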

The broad objective of this incubator research is to develop approaches to the design of T&E programs and to the acquisition of data and model rights for learning-based systems. The principal objective is to understand how increasing government access to the models and learning agents (AI algorithms) used in designing next-generation military systems might reduce the need for, and expense of, testing while increasing confidence in results. Current approaches to T&E cannot address the challenge of identifying changes in operating conditions or adversarial actions that might cause the performance of an Artificial Intelligence/Machine Learning (AI/ML) model to deviate from design limits, particularly for autonomous functions that may engage in self-learning over the long life cycles of military systems.