Authors: Muhammad Zubair Irshad, Mauro Comi, Yen-Chen Lin, Nick Heppert, Abhinav Valada, Rares Ambrus, Zsolt Kira, Jonathan Tremblay

How do we represent 3D world knowledge for spatial intelligence in next-generation robots? This is the question ELLIS-ELIZA PhD student Nick Heppert, ELLIS-ELIZA Fellow Abhinav Valada, and their co-authors set out to answer in a recent, extensive survey paper, “Neural Fields in Robotics: A Survey.”
But let’s start from the beginning: what even is a Neural Field? The term itself is quite simple: it consists of two parts, Neural and Field. Let’s begin with the latter. In robotics, a Field is usually a spatial and/or temporal representation of structure. A classical example is the occupancy field a robot builds while roaming around its environment. Adding the Neural part enhances this structure by making it a learnable function. In the context of occupancy, this means that instead of storing the occupancy of every cell in a grid, we have a learnt function which we query with a location to obtain the occupancy state.
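To make this concrete, here is a minimal sketch (our own illustration, not code from the survey) of an occupancy field as a learnable function: a small MLP that maps a 3D location to an occupancy probability, queried point by point instead of indexing a stored grid. The class name `OccupancyField`, the network size, and the query points are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn


class OccupancyField(nn.Module):
    """A tiny neural field: continuous 3D location -> occupancy probability."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden_dim),   # input: (x, y, z) query location
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),   # output: occupancy logit
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) batch of query locations -> (N,) occupancy probabilities
        return torch.sigmoid(self.net(xyz)).squeeze(-1)


field = OccupancyField()
queries = torch.tensor([[0.1, 0.2, 0.3], [1.0, -0.5, 0.0]])
print(field(queries))  # occupancy estimates at the two query locations
```

The weights of such a network would be fit from sensor observations (e.g., depth or range measurements); once trained, the field can be queried at any continuous location rather than only at discrete grid cells.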
The survey paper generalizes this concept and presents it in a coherent framework across the types of fields commonly used in robotics. It then highlights more than 200 recent papers across key areas of robotics, spanning Pose Estimation, Manipulation, Navigation, Physics, and Autonomous Driving, with each section further divided into subtopics. Finally, the survey goes beyond the current state of the art and proposes promising directions for future research.
Publication: Currently under review
This blog post was written by Nick Heppert.