A few days ago we were visited by the people in charge of the future Qbo assembly line, so we felt obliged to prepare a live demo of the robot for them. During the days before their visit we configured Qbo to make him capable of interacting with our visitors; and, in order to catch and correct any mistakes he might make during the presentation, we created a “log” file that stored in real time all the operations he performed. The presentation consisted of “releasing” Qbo in a completely autonomous way and letting him interact with his environment as naturally as possible. The ROS nodes we activated were the ones for artificial vision, speech synthesis and recognition, the chatterbot, and the modules in charge of sending and receiving data to and from the two Qbo internal controllers.
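For readers curious how such real-time logging can be wired up, the sketch below shows a minimal ROS 1 node in Python that subscribes to event topics and appends timestamped entries to a file. The topic names and message type are hypothetical placeholders, not Qbo’s actual interfaces.

```python
#!/usr/bin/env python
# Minimal demo-logging node. The topics "/vision/events" and "/speech/events"
# are hypothetical placeholders, not Qbo's real topic names.
import rospy
from std_msgs.msg import String

LOG_PATH = "/tmp/qbo_demo.log"

def make_logger(source):
    def callback(msg):
        # Timestamp each event so the demo can be reconstructed afterwards
        with open(LOG_PATH, "a") as f:
            f.write("%.3f [%s] %s\n" % (rospy.get_time(), source, msg.data))
    return callback

if __name__ == "__main__":
    rospy.init_node("demo_logger")
    rospy.Subscriber("/vision/events", String, make_logger("vision"))
    rospy.Subscriber("/speech/events", String, make_logger("speech"))
    rospy.spin()
```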
As it was the first time our robot would interact with someone outside the development team, we thought it would be a good idea to record the whole presentation on video in order to keep a visual record of it.

When the day came and the presentation was over, we proceeded to check the log file created and stored in the robot. As you can see in the video, the demo went very well, so we expected to find few errors. However, to our surprise we found that some of the programmed functions, especially in artificial vision, did not match exactly the action the robot should have taken in response to certain information received from the outside. So why hadn’t we noticed these errors while Qbo was interacting with our visitors?
According to the “log” file, in some cases Qbo lost sight of the person in front of him and re-located them moments later. We later found out that this happened because, in some areas of the room, light was coming in directly from outside, saturating the image received by the cameras and preventing the algorithm from doing its work properly.
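To illustrate the kind of failure involved, the snippet below is a minimal sketch, using OpenCV, of one plausible way to flag a saturated frame before handing it to a face tracker. The thresholds are illustrative assumptions, not values from Qbo’s actual vision node.

```python
# A saturated frame washes out the features a face detector relies on,
# so a tracker may simply report the person as lost on such frames.
import cv2
import numpy as np

def is_saturated(frame_bgr, pixel_thresh=250, ratio_thresh=0.3):
    """Return True if a large fraction of the pixels are blown out."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blown_out = np.count_nonzero(gray >= pixel_thresh)
    return blown_out / float(gray.size) > ratio_thresh
```

A tracker that skips detection on frames like these would lose and later re-acquire its target, much like the entries we found in the log.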

However, don’t we react in the same way when a very strong light shines directly into our eyes? Since that is the natural behavior we humans are used to, we failed to perceive that the robot was behaving incorrectly.

Another Qbo mistake involving artificial vision, one we did not perceive during the demo, was the difference in the physical distance he kept from our guests. As the logs later showed, the distance Qbo kept from a taller person was slightly different from the distance he kept from a shorter one, despite being programmed so that the distance would always be the same. When someone is standing, the distance from their face to yours depends on their height, and this circumstance, which we had not taken into account, gave Qbo an unexpected naturalness.
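The geometry behind this is easy to work through: if the tracker holds the straight-line distance between the camera and the detected face constant, the horizontal standoff on the floor depends on how far the face is above the camera. The sketch below illustrates this with made-up numbers; the distance setpoint and camera height are assumptions, not Qbo’s real parameters.

```python
# Worked example: a constant face-to-camera distance produces a floor
# standoff that varies with the person's height. All numbers are illustrative.
import math

def floor_distance(face_cam_dist, face_height, camera_height):
    """Horizontal standoff for a fixed straight-line face-to-camera distance."""
    dh = face_height - camera_height
    return math.sqrt(face_cam_dist ** 2 - dh ** 2)

SETPOINT = 2.0        # assumed face-to-camera distance, in meters
CAMERA_HEIGHT = 0.45  # assumed height of the robot's cameras, in meters

for h in (1.60, 1.80):  # a shorter and a taller visitor
    d = floor_distance(SETPOINT, h, CAMERA_HEIGHT)
    print("face at %.2f m -> stands %.2f m away" % (h, d))
```

With these illustrative numbers the taller visitor ends up standing about 16 cm closer, the kind of slight difference the logs revealed.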

All of us who have ever programmed know the importance of error-free code. The debugging process, although arduous, is necessary for the code to work properly, and it is a fundamental part of “traditional” software development. However, we must point out that a program written for a PC receives very limited and controlled stimuli from its physical environment. This keeps the code’s complexity, and therefore its debugging, manageable and highly deterministic.

The nature of the software developed for Qbo is very different. The purpose of a social robot is to adapt its behavior, in real time, to the vast amount of information received from its various sensors. Imagine the complexity of debugging software of this kind, not only because of the number of lines of code to debug, but because of the infinite number of possible situations in the environment that must be considered during the debugging process.

I believe that in a robot it is important to debug the code so that everything works properly, but it is not always important that it behaves exactly as we want it to. Our experience with Qbo tells us that “controlled” actions and mistakes, grounded in the physical environment in which the robot moves, make him react more naturally to the stimuli he receives from outside.

So the question is: how far should we refine and control all the actions and algorithms of social robots? Should we carry our imperfections over to social robots so that they behave like us?