A few days ago, I remembered someone telling me that only humans, dolphins, and a few species of apes can recognize themselves in a mirror. Qbo has two independent nodes, one for face recognition and another for object recognition. Although everyone who works in robotics knows what would happen if we put Qbo in front of a mirror, we posted a video for those who still wonder whether Qbo recognizes himself as a person (eyes, nose, round face) or as a simple object.

This video corresponds to a small experiment in which we put Qbo in front of a mirror to see whether he can learn to recognize himself. For that, we used the “Object Recognition” mode and the “Face Recognition” mode. Using its stereoscopic vision, Qbo selects his image in the mirror and, with the help of one of our engineers, learns to recognize himself. This quite simple experiment touches on interesting psychological aspects of self-consciousness, whose complexity is suggested by the fact, already mentioned, that only a few species can recognize themselves in a mirror. In this first version, a human guide presents Qbo to himself, but we are working toward the robot presenting and recognizing himself autonomously when placed in front of a mirror.

[ UPDATED Nov.30th ]

Due to the large impact this video is having on the Internet, we have seen fit to explain how this “real” experiment was carried out in our laboratories.

In the video, Qbo transitions to the “Object Recognition” state of its internal state machine to learn to recognize its mirror image, as if it were a regular object. Technically, how does Qbo do it?

The “Object Recognition” state runs several ROS nodes simultaneously: one responsible for head and base movement; another that selects an object from the image using stereoscopic vision; and another that recognizes objects or learns new ones. The object recognition algorithm uses “SURF” descriptors and the “Bag of Words” approach (through the OpenCV library), and stores images in Qbo’s internal storage system.
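To make the recognition step concrete, here is a minimal, illustrative sketch of the bag-of-words idea: cluster local descriptors into a “visual vocabulary,” represent each object as a histogram of visual words, and recognize by nearest histogram. Qbo’s real pipeline uses SURF descriptors from OpenCV; since SURF lives in opencv_contrib and is not always available, this sketch uses random toy descriptors and plain NumPy, and all names in it are ours, not Qbo’s actual code.

```python
# Illustrative bag-of-words sketch (toy descriptors stand in for SURF).
import numpy as np

rng = np.random.default_rng(0)

def kmeans(descriptors, k, iters=20):
    """Plain k-means: the cluster centers form the visual vocabulary."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center, then re-center.
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, vocabulary):
    """Quantize descriptors against the vocabulary; count word frequencies."""
    dists = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Two "objects", each a cloud of 64-dimensional toy descriptors.
obj_a = rng.normal(0.0, 1.0, size=(200, 64))
obj_b = rng.normal(5.0, 1.0, size=(200, 64))

vocabulary = kmeans(np.vstack([obj_a, obj_b]), k=8)
known = {"Myself": bow_histogram(obj_a, vocabulary),
         "cup": bow_histogram(obj_b, vocabulary)}

# A new view drawn from the same distribution as object A.
query = bow_histogram(rng.normal(0.0, 1.0, size=(150, 64)), vocabulary)
best = min(known, key=lambda name: np.linalg.norm(known[name] - query))
print(best)  # → Myself
```

In the real system OpenCV provides this machinery directly (e.g. `cv2.BOWKMeansTrainer` and `cv2.BOWImgDescriptorExtractor`); the point here is only the vocabulary-then-histogram structure of the approach.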

Qbo has several stored answers and behaviors in an internal knowledge base, which we upgrade as the project evolves, so that we can ask questions or give orders to Qbo such as “What is this?” or “Do this”. Qbo interprets the object “Myself” as an ordinary object, for which it has special answers in its internal knowledge base such as “Woah. I’m learning myself” or “Oh. This is me. Nice”. Qbo selects its reflection in the mirror from the image it sees using stereoscopic vision, and one of our engineers interacts (speaks) with him so that Qbo can learn to recognize himself as just another object. For direct interaction, Qbo uses the open-source software Julius for speech recognition (in the video, you can see Qbo receive the order to turn around and respond by rotating its base 90 degrees) and Festival for voice synthesis.
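The knowledge-base lookup described above can be pictured as a simple mapping from recognized object labels to canned answers, with “Myself” stored as just another label that happens to carry special responses. This is a hypothetical sketch; the structure and names are ours, not Qbo’s actual implementation.

```python
# Hypothetical label-to-answer knowledge base; "Myself" is an ordinary
# object label with special canned answers, as described in the post.
DEFAULT_ANSWER = "This is a {label}."

KNOWLEDGE_BASE = {
    "Myself": ["Woah. I'm learning myself", "Oh. This is me. Nice"],
    "cup": ["This is a cup."],
}

def answer_for(label: str) -> str:
    """Return the first stored answer for a recognized object label,
    falling back to a generic template for unknown objects."""
    answers = KNOWLEDGE_BASE.get(label, [DEFAULT_ANSWER.format(label=label)])
    return answers[0]

print(answer_for("Myself"))  # → Woah. I'm learning myself
print(answer_for("ball"))    # → This is a ball.
```

In the running robot, the label would come from the object recognition node, the question from Julius, and the answer would be spoken through Festival.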

And as for the question “What would happen if a Qbo sees another Qbo in front of him?”: the answer will be out there pretty soon!
