An Object-Based Representation for Multisensory
Robotic Perception
J.C. Rodger and R.A. Browse
Abstract
The inclusion of multiple sensors in robot systems emphasizes the need for general techniques to
combine data across sensor systems. This paper describes a simple, uniform, object-based
representation and consistency scheme that facilitates combining input from an arbitrary
number of sensors to recognize and locate objects. The approach assumes that raw
sensor data yields features that invoke objects, together with associated orientation and
location constraints, making it appropriate for model-based recognition systems using spatial
sensors (e.g., photometric, tactile, ranging). The uniform representation permits application of simple
consistency operations to extract interpretations that are supported by all sensor inputs. This
approach has been tested in a demonstration system that simulates the combination of
photometric and force-array sensing to recognize and locate modeled objects.