A Step Towards Efficient Evaluation of Complex Perception Tasks in Simulation
Jonathan Sadeghi, Blaine Rogers, James Gunn, Thomas Saunders, Sina Samangooei, Puneet Kumar Dokania, John Redford
NeurIPS 2021 Workshop on Machine Learning for Autonomous Driving
Abstract
There has been increasing interest in characterising the error behaviour of systems that contain deep learning models before deploying them in any safety-critical scenario. However, characterising such behaviour usually requires large-scale testing of the model, which can be extremely computationally expensive for complex real-world tasks, for example tasks that involve compute-intensive object detectors as one of their components. In this work, we propose an approach that enables efficient large-scale testing using simplified low-fidelity simulators, without the computational cost of executing expensive deep learning models. Our approach relies on designing an efficient surrogate model corresponding to the compute-intensive components of the task under test. We demonstrate the efficacy of our methodology by evaluating the performance of an autonomous driving task in the CARLA simulator at reduced computational expense, training efficient surrogate models for the PIXOR and CenterPoint LiDAR detectors whilst demonstrating that the accuracy of the simulation is maintained.
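To make the core idea concrete, the sketch below shows one way such a surrogate could look: a small network trained on logged pairs of cheap ground-truth features from the simulator and the outcomes of the full detector, then used in place of the detector during large-scale simulated testing. The feature set, architecture, and training loop here are illustrative assumptions only, not the paper's actual surrogate design; see the full paper for the method as implemented.

```python
# Hypothetical sketch of a per-object detection surrogate: a small MLP that,
# given cheap ground-truth features available in the simulator (e.g. distance
# to ego, box size, approximate LiDAR point count), predicts the probability
# that the full detector (e.g. PIXOR / CenterPoint) would detect the object.
# Feature choices and architecture are illustrative assumptions, not the
# paper's exact design.

import torch
import torch.nn as nn


class DetectionSurrogate(nn.Module):
    """Lightweight stand-in for a compute-intensive LiDAR detector."""

    def __init__(self, n_features: int = 5, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit of P(object is detected)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def train_surrogate(features: torch.Tensor, detected: torch.Tensor,
                    epochs: int = 50, lr: float = 1e-3) -> DetectionSurrogate:
    """Fit the surrogate on logged (ground-truth features, detector outcome) pairs.

    features: (N, n_features) per-object features from the simulator.
    detected: (N,) 0/1 labels recording whether the real detector found the object.
    """
    model = DetectionSurrogate(n_features=features.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), detected)
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Toy data standing in for logs gathered by running the real detector offline.
    torch.manual_seed(0)
    feats = torch.randn(1024, 5)
    labels = (feats[:, 0] > 0).float()  # pretend detectability depends on one feature
    surrogate = train_surrogate(feats, labels)

    # Inside the simulation loop, the surrogate replaces the expensive detector.
    with torch.no_grad():
        p_detect = torch.sigmoid(surrogate(torch.randn(8, 5)))
    print(p_detect)
```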
Download the full paper