When I started, I was sure this robot had to be “realistic”. I did not want to build a biped robot or a robot with arms, because I knew I would fail for lack of resources. Once that was clear, I focused on what I would like to have in my own house. My first ideas were that it should be small, roughly the size of a cat or a dog, that it should move quickly enough to interact with me in a natural way, and that it should look nice enough to fit into my environment. The most important thing, however, was being able to experiment on my own robot with all the open-source work on artificial vision, speech recognition and speech synthesis available on the Internet and used in research robots. It was also important that my robot could improve as hardware improved, and that every component was as standard as possible (an aim nearly achieved), that is to say, components easily found in shops, for two obvious reasons: first, because they were already made, so I did not have to make them; and second, because their price is generally low.
My other goal was to be able to sell the robot at a reasonably low price, so that it would be affordable for as many users as possible and a strong community could grow around it. On the other hand, since the software, diagrams and firmware of the boards that control the robot's components would be distributed under an open-source license (not yet determined), any user could decide to build their own.
With all of the above in mind, getting started seemed easy, but that was not the case. What would the design look like? How would I turn that design into a 3D model? What about the hardware components? Where would I find people to help me? I began to realize that finishing the project would not be easy at all: I would face significant expenses for research, prototypes, software development, and the design and manufacture of controller boards, all in order to release the robot onto the market at as low a price as possible. Many designs went through my head, but none was convincing enough. Then one day, while I was in a shopping centre, my eyes landed on a vacuum cleaner (yes, that is right!) displayed horizontally with all its accessories, and it caught my attention. I opened it up and realized it was perfect for fitting all the components I had in mind (motors, a PC board, sensors, actuators and even a battery). I thought that if it were set upright, with the wheels at the back and an idler wheel added at the front, it would make a really attractive body, and I would only have to think about the head. (Before I go on, and to avoid misunderstandings: after 5 years and hundreds of changes to the 3D design, the robot's current look is a long way from the vacuum cleaner I saw in that shopping centre. In fact, that vacuum cleaner had no more than 6 plastic parts, while Qbo's exterior is made up of more than 30 plastic parts.)
Thanks to this unexpected discovery I could picture the body of the robot, but I still had to think about the head. I was sure of two things: it would be big, because with no arms or hands it would be the only element that interacted with people, and it had to be attractive yet completely different from a real human face. Personally, I do not like building robots that look like humans (I will talk about this in future posts). With those aspects settled, I had to decide which components would go into the head, since they would affect its design. The robot obviously had to be able to interact with a human being, so it needed microphones and webcams to receive data from the outside, speakers to convey information, and some elements that could express a kind of emotion (the most difficult part).
Some of Qbo’s “current” skills:
Stereoscopic vision: calibration of the two webcams, depth estimation, face, object and colour recognition, face and object tracking, map generation (under development)
Speech Recognition System
Speech Synthesis System: it offers a general framework for building speech synthesis systems. Only available in English at the moment.
Thecorpora’s API: developed to interact with the robot’s hardware components and with third-party software.
Web control panel: the robot is accessible through a web browser.
Internet connection through a Wi-Fi controller placed in the head, with real-time software and firmware updates.
Obstacle avoidance: the robot avoids collisions and falls thanks to its ultrasound sensors.
Self-charging: the robot charges its battery automatically (in the testing and development phase).
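To give a feel for the kind of logic behind the obstacle-avoidance skill, here is a minimal sketch in Python. The sensor layout (three ultrasound readings: left, front, right), the distance thresholds, and the command names are all my own illustrative assumptions, not Qbo's actual firmware or API.

```python
# Hypothetical sketch of ultrasound-based obstacle avoidance.
# Sensor layout, thresholds and command names are illustrative
# assumptions, not Qbo's real API.

STOP_DISTANCE_CM = 20   # closer than this in front: stop and turn away
SLOW_DISTANCE_CM = 50   # closer than this in front: slow down

def avoid_obstacles(left_cm, front_cm, right_cm):
    """Return a (command, speed) pair from three ultrasound readings."""
    if front_cm < STOP_DISTANCE_CM:
        # Obstacle straight ahead: turn toward the side with more free space.
        return ("turn_left", 0.3) if left_cm > right_cm else ("turn_right", 0.3)
    if front_cm < SLOW_DISTANCE_CM:
        return ("forward", 0.4)   # something ahead: creep forward carefully
    return ("forward", 1.0)       # path is clear: full speed

# Example: a wall 15 cm ahead, more room on the left than on the right.
print(avoid_obstacles(80, 15, 30))  # -> ('turn_left', 0.3)
```

In a real robot this decision function would run in a loop against live sensor readings, with the returned command passed to the motor controller; the thresholds would be tuned to the robot's braking distance.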