Description
Deep neural networks (DNNs) have proven to be valuable components for automating tasks that are difficult to program explicitly, such as computer vision. Unfortunately, DNNs are opaque by design, which makes testing the only viable way to gain confidence in their reliability.
This talk provides an overview of Test, Improve, Assure (TIA), an ESA activity that supports the development of trustworthy DNNs. We developed means to characterise failure scenarios, helping data analysts and engineers determine when DNNs may fail. We also developed means to combine evolutionary algorithms, simulators, and generative models to test DNNs more effectively than relying on test sets alone, and to improve DNNs with automatically generated training data.
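The idea of driving a simulator with an evolutionary algorithm to expose DNN failures can be illustrated with a minimal sketch. This is not the TIA implementation: the simulator and the DNN are replaced by a toy `dnn_error` function that scores how badly the model behaves under given scenario parameters (here, hypothetical "fog" and "sun" levels); everything else (population size, mutation scheme) is an illustrative assumption.

```python
import random

# Hypothetical stand-in: in a real setup, this would run a simulator with
# the given scenario parameters, feed the rendered scene to the DNN, and
# measure its error. Here we pretend the DNN misbehaves most when fog is
# high and sunlight is low.
def dnn_error(params):
    fog, sun = params
    return fog * (1.0 - sun)

def mutate(params, sigma=0.1):
    # Small Gaussian perturbation, clamped to the valid parameter range [0, 1].
    return tuple(min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in params)

def evolve(pop_size=20, generations=50, seed=0):
    random.seed(seed)
    # Start from random scenarios (fog, sun) in [0, 1] x [0, 1].
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the scenarios that trigger the largest error.
        pop.sort(key=dnn_error, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(p) for p in survivors]
    return max(pop, key=dnn_error)

worst = evolve()
print("worst-case scenario:", worst, "error:", dnn_error(worst))
```

The search converges toward high-fog, low-sun scenarios, i.e. the inputs the toy "DNN" handles worst; such automatically found failure scenarios are exactly what can then be fed back as additional training data.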