Using Depth to Increase Robot Visual Attention Accuracy during Tutoring

We explore the problem of attention models for robot tutoring as related to the cognitive development of infants. We discuss the factors that strongly influence infant attention and how these factors can be taken into account to develop robot attention models that emulate infants' cognitive stimuli. In particular, we focus on the attention given to objects that an adult brings closer to the infant while showing them. Treating the distance of an object as an important factor for increasing visual attention, our model uses depth information together with the well-known Bottom-Up Visual Attention Model Based on Saliency in order to increase attention accuracy even when objects with non-salient features are shown to the robot or when the tutoring activity takes place in cluttered environments. Our model also considers the presence or absence of a human tutor to decide whether a tutoring activity might be taking place. Experimental results suggest that depth information is a key factor in emulating the effective attention of infants.
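The combination described above can be illustrated with a minimal sketch: a bottom-up saliency map is modulated by a proximity weight computed from the depth image, and the modulation is only applied when a tutor is detected. This is not the paper's implementation; the saliency operator (OpenCV's spectral-residual saliency), the linear proximity weighting, and the parameter max_depth_m are assumptions chosen for illustration.

```python
import cv2
import numpy as np

def depth_weighted_saliency(rgb_frame, depth_map, tutor_present, max_depth_m=3.0):
    """Combine a bottom-up saliency map with a proximity weight from depth.

    rgb_frame:     HxWx3 uint8 color image
    depth_map:     HxW float32 depth in meters (e.g., from an RGB-D sensor)
    tutor_present: bool flag from an external person detector (assumed)
    max_depth_m:   depth beyond which objects get no proximity boost (assumed)
    """
    # Bottom-up saliency (spectral-residual variant from opencv-contrib).
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(rgb_frame)
    if not ok:
        raise RuntimeError("saliency computation failed")
    sal_map = sal_map.astype(np.float32)

    # Proximity weight: 1 at the sensor, falling linearly to 0 at max_depth_m.
    prox = np.clip(1.0 - depth_map / max_depth_m, 0.0, 1.0)

    # Modulate attention by depth only when a tutor is present,
    # i.e., when a tutoring interaction is assumed to be taking place.
    combined = sal_map * prox if tutor_present else sal_map

    # Attention target = location of the maximum of the combined map.
    y, x = np.unravel_index(np.argmax(combined), combined.shape)
    return combined, (x, y)
```

Under these assumptions, a non-salient object still attracts the robot's attention when the tutor holds it close to the camera, because the proximity term dominates the combined map.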


Demo Videos


Related Publications

Christian I. Penaloza, Yasushi Mae, Kenichi Ohara, and Tatsuo Arai, "Using Depth to Increase Robot Visual Attention Accuracy during Tutoring," IEEE International Conference on Humanoid Robots, Workshop on Developmental Robotics, Osaka, Japan, November 29 – December 1, 2012.