Q.bo Robot

A CHILD’S DREAM

I belong to the generation that grew up devouring science fiction novels which promised us that by the year 2000 we would spend our holidays on other planets, commute to work in self-propelled airships and keep a small fleet of robots at home. I spent my childhood dreaming of the wonderful future they had projected in my mind, too vivid for my imagination to let go of, and I grew up with the disappointment of all those broken promises. It wasn't until a few years ago that I thought it would be feasible to take a small step to bring my dreams closer to reality, and build my own domestic robot.

This day marks the end of over six years of hope and hard work that have now become a reality. Only time will tell if I really managed to help Robotics and Artificial Intelligence reach homes around the world but, whatever happens, I have the satisfaction of knowing that everything I experienced during this project was worth it, and I will always feel proud of having pursued my dream.

Completion of the project

It's been a little over six years since I embarked on this wonderful adventure. During the first years I was on my own but, as the project moved forward successfully, I had the opportunity to surround myself with a diverse team of people who never limited themselves to just carrying out their tasks: they believed in this romantic project and got involved, giving the best of themselves.

After so much work from so many people, we have finally reached our goal. Many things have happened during this journey, some wonderful and some not so wonderful, but instead of thinking about what I had to leave behind, I’ll remember what I have found. I have learned more than I could have ever imagined, I have met some extraordinary people and I can confirm that effort finally brings its reward.

Objective, Concept, Vision and Open Source

This project was conceived with the firm intention of creating a platform of Artificial Intelligence that could become a little bit smarter each day.

To achieve this, there are two lines of work that I consider essential: on the one hand, developing a robotic platform as standard as possible, one that encourages the creation of a research and development ecosystem around it; on the other, achieving a design attractive enough for people to empathize with the platform and adopt it in their daily lives, feeding the ecosystem with their interactions.

I've always believed that Robotics should be based on an open community where knowledge flows freely, and that we must banish, once and for all, the model of closed laboratories surrounded by absolute secrecy, where a group of engineers works on prototypes that will never see the light of day or that, in the best case, will be used as marketing tools for other products completely unrelated to robotics.

It's for all the reasons mentioned above that I thought it was necessary to create a low-cost open structure around which we could give birth to a real ecosystem aimed at the advancement of Artificial Intelligence: a community of robotics, electronics and computer science experts and amateurs who would exchange their expertise, designs and experience to produce an unlimited range of applications that other people can use for their own benefit or for that of others (security, telemedicine, telepresence, therapeutic purposes, etc.).

This project was made possible thanks to the companies who share this vision: Willow Garage, founded by Scott Hassan in 2006, which, thanks to its ROS development platform, is fueling a revolution in the world of robotics. Also thanks to Arduino, whose flexibility has allowed us to design two of the five boards contained in Q.bo to be compatible with that platform, boards we will deliver to the community in Open Source format. And thanks, of course, to Linux and its creator, Linus Torvalds, which make it possible for all the components of the platform to work together robustly.

The use of standard electronic components that can be found in any robotics store has been another factor that I considered key from the very beginning, in order to minimize efforts and dedicate all my resources to the base platform, since there’s no need to reinvent the wheel.

Thanks to all these Open Source building blocks, Q.bo was born as a platform that combines design and the latest technology:

 

Design + Linux + Arduino + ROS + Standards = QBO

Formats in which Q.bo is supplied and preorder information

From the beginning, all our efforts have been focused on creating a platform where design and quality set the standard for development. Our obsession with achieving the highest quality, both in the plastic parts that make up the casing and in the finish of the chassis, has played a very important role in the progress of the project.

After six years of hard work, we offer you an extraordinary base on which you can experiment with all types of hardware and software. We are aware that not everyone has the same interests regarding Q.bo, and some of you would prefer to skip the building phase of the robot. That's why we offer several options, so that you can choose the one that best suits your profile:

The Q.bo Basic Kit serves as a mounting base to build your Q.bo as you please. This kit includes the plastic covers, steel chassis, mechanical parts, webcams and wifi antenna. You can then complete it with the servo motors, controller boards, motherboard, microprocessor and memory of your choice, thereby creating a completely customized Q.bo. On this site you will find a list of components you can use, recommended and tested by the TheCorpora team.

The Q.bo Complete version consists of a fully assembled Q.bo. You can choose between Q.bo Complete Pro and Q.bo Complete Lite. The Q.bo Complete Pro version includes a Mini-ITX DQ67EP motherboard with a low-power Intel Core i3 microprocessor and a 40 GB SSD, while the Q.bo Complete Lite comes with an Intel D2700MUD motherboard, an Intel Atom D2700 microprocessor and a 40 GB mechanical disk, and does not include the eyelid mechanism. Both include 2 GB of DDR3 RAM. Both versions of Q.bo Complete (Lite and Pro) are designed for those who want to skip assembling the robot and choosing its components. If you are a software developer, these versions let you stop worrying about the mechanical and electronic level and concentrate on the software.

The Q.bo Complete Pro version is sold at a higher price than the Q.bo Complete Lite version, but it also provides the eyelid mechanism and, thanks to its Core i3 microprocessor, far more powerful than the Intel Atom D2700, it allows the execution of computationally heavier algorithms, with greater fluidity of movement and greater execution speed.

Special Thanks

I can't finish this post without expressing my gratitude to the people who have been at our side all this time, the people who have made it possible for Q.bo to become a reality today. Thanks to all of those who have sent us hundreds of emails asking about the project, to all the technology blogs that have followed us and shared the news published on our blog, to all the friends who follow us on social networks, and also to all of those who have talked about us and made us known in their circles. THANK YOU.

 

I HAVE ALWAYS IMAGINED A ROBOT THAT WOULD BE MORE INTELLIGENT EVERY DAY THANKS TO THE INFORMATION PROVIDED BY OTHER MEMBERS OF THE SAME SPECIES AROUND THE WORLD.

 


The winner of TheCorpora’s contest is:




Q.bo's assembly time-lapse

Throughout the life of this project, we must have assembled and disassembled our robot thousands of times in our labs. Q.bo is not a toy; it is a complex platform that requires some knowledge of robotics, and that's why we decided to film one of these occasions in time-lapse format, so that you can see how one of our engineers assembles, almost from scratch, every component of the robot.

Soon enough, with patience and some knowledge of robotics, you too will be able to assemble, either on Q.bo's base or on another base platform you create, every single standard component that we have tested and selected for Q.bo over this time. You can have fun assembling the five (5) Open Source hardware boards designed by TheCorpora, the EMG-30 motors, the SRF10 ultrasonic sensors, an LCD screen, a hard drive, a PC motherboard and many other components which will be listed very soon. And of course, if you are not a handyman, you can still have your own fully assembled Q.bo.

Together we will try to create a true ecosystem, where a big community can create and share projects and ideas around the Q.bo platform. It will be very exciting to see how far your imagination can go.

Q.bo and the Xtion Pro Live 3D sensor

The abilities of autonomous localization and simultaneous mapping are crucial for autonomous robots that need to adapt to their environments. In Robotics, this problem is known as SLAM (Simultaneous Localization And Mapping), and it can be solved by several algorithms for 2D or 3D environments using different types of input sensors (lasers, sonars, odometry, webcams, etc.).

Even though the Q.bo robot is equipped with two HD webcams and can carry up to 4 ultrasound sensors (2 in the front and 2 in the rear), our team wanted to go much further by mounting ASUS's Xtion Pro Live sensor on Q.bo's head. This is accomplished via an adapter created for Q.bo by TheCorpora's engineers to our own measurements. The decision to use this ASUS sensor instead of similar ones was mainly due to its small size and weight, which let it fit our Q.bo robot accurately once the adapter was made.

This sensor produces a 3D point cloud that, together with the robot's odometry and the built-in gyroscope, enables Q.bo to build maps, model objects in 3D and localize itself autonomously in real time. This system can be seen as a more accurate and sophisticated form of visual perception than the stereoscopic cameras or the ultrasound sensors alone. However, the joint use of all the systems (ultrasound, webcams and Xtion) can generate much more complete information than the separate use of each.
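If you want to experiment with this data yourself, the sketch below shows one minimal way to read such a point cloud from ROS in Python. It is only an illustration: the topic name /camera/depth/points is the OpenNI driver's usual default and is an assumption on our part, not necessarily the exact topic used inside Q.bo.

#!/usr/bin/env python
# Minimal sketch: listening to the Xtion's 3D point cloud in ROS.
# Assumption: the driver publishes on /camera/depth/points (OpenNI default).
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def cloud_callback(msg):
    # Iterate over (x, y, z) points, skipping invalid (NaN) readings.
    points = list(point_cloud2.read_points(
        msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("Received a cloud with %d valid points", len(points))

if __name__ == "__main__":
    rospy.init_node("xtion_cloud_listener")
    rospy.Subscriber("/camera/depth/points", PointCloud2, cloud_callback)
    rospy.spin()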

The video can be divided into the following chapters, three of which show experiments made with Q.bo and the Xtion Pro Live sensor:

- Chapter I: Mounting the Xtion Pro Live sensor on a prototype mold designed by TheCorpora's team.

- Chapter II: Real-time 3D visualization of the point cloud produced by the Xtion Pro Live. We used the ROS visualization tool RViz to view Q.bo's 3D model on a desktop with an NVIDIA GeForce GTX 295 GPU.

- Chapter III: SLAM (Simultaneous Localization And Mapping), in which the robot builds a 2D map of its environment using the laser scan derived from the Xtion Pro Live sensor. The ROS package "gmapping", developed by Giorgio Grisetti, Cyrill Stachniss and Wolfram Burgard, is used for the SLAM algorithm.

- Chapter IV: Autonomous navigation, reusing the 2D map stored after running SLAM. The initial location and the goal position of the Q.bo robot are indicated using the RViz visualization tool. For autonomous localization we used the "amcl" ROS package, developed by Brian P. Gerkey, which implements a particle-filter-based localization algorithm that exploits the laser scan obtained from the Xtion Pro Live and the 2D map. For the movement instructions we used the ROS package "move_base", developed by Eitan Marder-Eppstein, which implements global and local 2D motion planners that use the laser scan (derived from the Xtion Pro Live) to detect nearby obstacles; a small example of sending it a goal follows this list.
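As a taste of how Chapter IV works under the hood, here is a small sketch of sending a single navigation goal to the "move_base" package from Python through actionlib. The frame name and coordinates are illustrative values, not Q.bo's real configuration:

#!/usr/bin/env python
# Sketch: sending one navigation goal to move_base via actionlib.
# The goal pose below is illustrative; real coordinates depend on your map.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == "__main__":
    rospy.init_node("qbo_goal_sender")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"   # pose expressed in the 2D map
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0     # one metre along the map's x axis
    goal.target_pose.pose.orientation.w = 1.0  # identity quaternion: no rotation

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("Navigation finished with state %d", client.get_state())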

Play with us & win a Q.bo Robot

Since the early days of this blog's existence, a lot of people have been interested in the official release date of Q.bo, and they haven't had a clear answer yet.

 

I've been closely following many projects similar to Q.bo that ran into serious trouble because they announced their release dates too early: from having to explain delays to their followers, to even being forced to abandon the project. In most cases, these setbacks are due to problems with suppliers or other companies, or simply to a lack of experience in this type of project. We knew that we would not be an exception, since our project is complex, handles many variables and has been carried out with limited resources, both personal and economic. That is why we never wanted to give a release date with no guarantee of keeping it.

After several years of hard work, thousands of problems solved and some others still to be solved, Q.bo is finally about to see the light. We would like you to take part in its birth, so we are giving you the opportunity to play with us: guess the exact release date of TheCorpora's official website and win one of our Q.bo robots.

You can enter the contest from TheCorpora’s Home page and take a look at the contest terms and conditions. We wish you the best of luck, a Q.bo is waiting for you.


Thanks to everyone who is making it possible for this project to see the light very soon.

OpenQbo – A Robotic Ubuntu-based Linux Distro, Version 2.0 (Beta Release)

Changes from the last version of the OpenQbo Distro:

- Architecture changed from i386 (32-bit) to 64-bit support

- Upgrade to the latest version of Ubuntu – 11.10 (Oneiric Ocelot)

- GNOME 3 is now used as the desktop environment

- Some theme changes (login screen, wallpaper, icon and windows theme)

- Upgrade of the Ubiquity installer

- Upgrade of the ROS version from "diamondback" to "electric"

- Mozilla Firefox is now the default browser instead of Google Chrome

This Beta Release is at your disposal so that you can help us improve it by sending your suggestions on any of the following to openqbo@thecorpora.com:

- bugs

- desktop improvements (themes, fonts, wallpaper, etc.)

- packages and/or services you would add or remove

- start-up and shut-down errors of the system

- security

- any other thing you may consider of interest

You can download the Beta Release here:

VERY IMPORTANT NOTE: This is a version under development and its use is NOT recommended on production equipment.


QBO meets QBO

Those who have been following this project for a while know that it started six years ago. If something surprised and pleased me in equal measure, it was the controversy stirred up by our latest video, in which Qbo recognizes itself in the mirror. It was not the publicity generated around the project, but the fact that, somehow, we revived a debate that had been lost a long time ago: can a machine become intelligent or self-conscious? At some point we went from being interested in creating artificial brains to just developing sophisticated mechatronic systems that are less intelligent, or simply pre-programmed.

After our latest video, you sent us a steady stream of very interesting questions, which can be summarized in two: what would happen if a Qbo had another member of the same "species", identical to him, in front of him? And: is Qbo conscious of his appearance and of what he sees? Here are some answers, in which we have tried to simplify the debate without entering into philosophical or ethical issues, and of course always presenting our own point of view.

From an early age, humans learn to recognize themselves in a mirror using two mechanisms: we learn what we look like from around 8 months of age, and we verify whether our actions or movements are "replicated" in the mirror image. Because we are aware of the actions we take and can recognize those actions in the mirror's reflection, we have the ability to distinguish ourselves from another "identical" being placed in front of us. This is the problem that a robot like Qbo must face when standing in front of a mirror or when it finds another, identical robot.

Inspired by this process of self-recognition in humans, we developed a new ROS node that is executed when the previously trained "Object Recognizer" node has identified a Qbo in the image. Using nose-light signals to check whether the image the robot sees matches its own actions, a Qbo can tell in real time whether he is seeing his image reflected in a mirror or watching another Qbo robot in front of him. The sequence of nose flashes is randomly generated in each recognition attempt, so the probability that two robots generate the same sequence is very low, and the probability that they start to transmit it at the same time is even lower.
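To make the idea concrete, here is a simplified sketch of that test in Python. The helpers blink_nose() and observe_nose_state() are hypothetical stand-ins for Qbo's actual nose-LED driver and vision pipeline; the real node works on the same principle but inside ROS:

import random

SEQUENCE_LENGTH = 8  # number of flashes per recognition attempt

def is_my_reflection(blink_nose, observe_nose_state):
    # A fresh random on/off sequence is generated for every attempt, so the
    # chance that two robots emit the same pattern, in sync, is very small.
    sequence = [random.choice([True, False]) for _ in range(SEQUENCE_LENGTH)]
    matches = 0
    for state in sequence:
        blink_nose(state)                  # drive our own nose light
        if observe_nose_state() == state:  # did the nose we see follow suit?
            matches += 1
    # A mirror reflects every flash; another robot almost certainly will not.
    return matches == SEQUENCE_LENGTH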

After the green Qbo recognizes another member of his "kind", both hold a short conversation. For this brief chat we used the Julius software (speech recognition) and Festival (speech synthesis), so that each robot recognizes what the other says and responds according to the question or comment heard, using a small pre-programmed knowledge base for this purpose. Thanks to the speakers, the microphone and the software systems mentioned, the Qbo robots are able to hold a conversation in synthetic speech.
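For the curious, the speech side of this chat can be approximated with very little code. The sketch below shows only the synthesis half, piping a canned reply through Festival's command-line interface; the replies dictionary is an illustrative stand-in for the robots' small knowledge base, and Julius would supply the recognized sentence:

import subprocess

# Illustrative stand-in for the pre-programmed knowledge base.
REPLIES = {
    "hello": "Hello. Nice to meet another Qbo.",
    "how are you": "I am fine, thank you.",
}

def speak(text):
    # festival --tts reads text from stdin and speaks it aloud.
    subprocess.run(["festival", "--tts"], input=text.encode(), check=True)

def respond(heard):
    # `heard` would come from Julius's speech recognition output.
    speak(REPLIES.get(heard.lower(), "I did not understand that."))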

 

CONSCIOUSNESS IN MACHINES

Some think that "consciousness" is an intellectual capability that enables us to store knowledge about the world and exploit that knowledge to constantly make decisions and predictions, as Jeff Hawkins, co-founder of Palm and Handspring, suggests in his book "On Intelligence". Hawkins suggests that human beings can be seen as complex statistical machines which constantly attempt to predict, in real time, the immediate outcomes of the actions generated in our environment.

However, might Qbo be aware of what he learns and sees around him? To find out, we should first answer the question of whether it is possible to program consciousness algorithmically. There are several theories about this. Some, advocated by Roger Penrose, whose work is among the most critical of the development of a true artificial brain, claim that self-consciousness may be artificially reproduced but not algorithmically simulated. Others, however, affirm that self-consciousness is a process that emerges as a result of algorithms and appropriate learning. Today, thanks to neuroscience and neurobiology, we know that much of what we do is the result of learned, unconscious mechanisms.

Accepting that a machine can have some sort of consciousness could be a deep wound to human narcissism. However, the question I want to ask you is: is it necessary to have a complex corporeal system (for example, one that senses pain or cold) for a machine to be truly self-aware? Personally, I think not.

If we take the theory and the words of Roger Penrose, Qbo is NOT self-conscious, since he just "simulates" an algorithmically learned behavior. If we adopt the other theory, Qbo can be seen as a conscious being, because he exploits knowledge of his appearance and of the actions he takes.

In 2007, Hod Lipson, a robotics engineer at Cornell University, gave a brilliant talk about the mother of all designers, evolution, and claimed that self-consciousness might emerge from the evolution of a robot's intellectual capabilities, including a demonstration of his postulate. On the one hand, Hod Lipson argues that we should stop building robots by hand so that they can evolve by themselves; on the other hand, he states and shows that we have to let machines learn through reinforcement.

From Hod Lipson's point of view, at the moment Qbo is no more than a bunch of motors, electronic boards, cameras, etc., forming a mechanical being that, through software, can be subjected to a kind of natural evolution. By manipulating the rewards of a series of games, challenges or simply learning tasks, we can mold Qbo's behavior to, for example, recognize the face of the same person in different situations (which he already does).

Does all this mean that Qbo may be aware of his actions or surroundings? The answer is that, for the moment, as Hod Lipson would say, Qbo is just a programmable electro-mechanical set that can see, hear, speak and move. The ecosystem and the community around Qbo, who can program him and teach him, will tell us in the future whether we will witness the birth of "some spontaneous intelligence". One thing is for sure: it will be a terribly difficult task, impossible to achieve by only a few engineers. Only through the work of a true community of programmers, designers and teachers can we meet this challenge.


HAPPY NEW YEAR

 

QBO and the mirror. What if…(UPDATED)

A few days ago, I remembered someone once telling me that only human beings, dolphins and some species of apes can recognize themselves in front of a mirror. Qbo has two independent nodes, one for face recognition and another for object recognition. Although everyone who works in Robotics knows what would happen if we put Qbo in front of a mirror, we posted a video for those who still wonder whether Qbo recognizes himself as a person (eyes, nose, round face) or as a simple object.

 

 

This video corresponds to a small experiment in which we put Qbo in front of the mirror to see if he can learn to recognize himself. For that, we used the "Object Recognition" mode and the "Face Recognition" mode. Qbo, using his stereoscopic vision, selects his image in the mirror and, with the help of one of the engineers, learns how to recognize himself. This seemingly simple experiment touches on interesting psychological aspects of self-consciousness, whose complexity is shown by the fact, mentioned above, that only a few species can recognize themselves in front of a mirror. In this first version, a human guide presents Qbo to himself, but we are working to make the robot recognize himself autonomously when he finds himself in front of a mirror.

 

[ UPDATED Nov.30th ]

Due to the large impact this video is having on the Internet, we have seen fit to explain how this "real" experiment was carried out in our laboratories.

In the video, Qbo transitions to the "Object Recognition" state of his internal state machine to learn to recognize his mirror image, as if it were a regular object. Technically, how does Qbo do it?

The "Object Recognition" state executes several ROS nodes simultaneously: one responsible for the head and base movements; another to select an object in the image using stereoscopic vision; and another to recognize objects or learn new ones. The object recognition algorithm uses SURF descriptors and the "Bag of Words" approach (through the OpenCV library), and stores images in Qbo's internal storage system.
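For readers who want to see what a SURF + Bag of Words pipeline looks like in practice, here is a rough sketch using OpenCV's Python bindings. It is our own simplification, not Qbo's actual node: SURF requires the opencv-contrib build, and the image paths and cluster count are placeholders.

import cv2

# Rough sketch of a SURF + Bag of Words pipeline (requires opencv-contrib).
surf = cv2.xfeatures2d.SURF_create(400)       # 400 = Hessian threshold
matcher = cv2.BFMatcher(cv2.NORM_L2)

# 1. Build a visual vocabulary by clustering SURF descriptors.
bow_trainer = cv2.BOWKMeansTrainer(50)        # 50 visual words (placeholder)
for path in ["object1.png", "object2.png"]:   # placeholder training images
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = surf.detectAndCompute(img, None)
    bow_trainer.add(descriptors)
vocabulary = bow_trainer.cluster()

# 2. Describe a new image as a histogram over the vocabulary.
bow_extractor = cv2.BOWImgDescriptorExtractor(surf, matcher)
bow_extractor.setVocabulary(vocabulary)

query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
histogram = bow_extractor.compute(query, surf.detect(query, None))
# `histogram` can then be fed to any classifier (SVM, nearest neighbour...).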

Qbo has several answers and behaviors stored in an internal knowledge base, which we update as the project evolves, so that we can ask Qbo questions or give him orders such as "What is this?" or "Do this". Qbo treats the object "Myself" as an ordinary object for which he has special answers in his internal knowledge base, such as "Woah. I'm learning myself" or "Oh. This is me. Nice". Qbo selects his reflection in the mirror image he sees using stereoscopic vision, and one of our engineers interacts with (speaks to) him so that Qbo can learn to recognize himself as just another object. For direct interaction, Qbo uses the open-source software Julius for speech recognition (in the video, you can see how Qbo receives the order to turn around and responds by rotating his base 90 degrees) and Festival for voice synthesis.
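A minimal sketch of how such special answers might be keyed to the recognized object, again our own simplification using the phrases quoted above:

import random

# The "Myself" object maps to the special answers quoted above; anything
# else falls back to a generic template. This structure is illustrative.
ANSWERS = {
    "Myself": ["Woah. I'm learning myself", "Oh. This is me. Nice"],
}
DEFAULT = ["This is a {name}"]

def answer_for(object_name):
    templates = ANSWERS.get(object_name, DEFAULT)
    return random.choice(templates).format(name=object_name)

print(answer_for("Myself"))   # e.g. "Oh. This is me. Nice"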

As for the question "What would happen if a Qbo sees another Qbo in front of him?": the answer will be out there pretty soon!

Qbo’s Halloween

 


 

Microsoft – Diary of a dream in Silicon Valley (Day 5)

 

Cartoon by Carmen Cordoba

 

This has been one of the biggest surprises of the trip, for two reasons. First, because I was able to enter Microsoft's house of the future guided by one of its funders, who is also a professor at Stanford University, and where I could see Microsoft's vision of robotics.

 

 

The best part, however, was the privilege of meeting and talking personally with Alex Acero, one of the best engineers at Microsoft. Alex Acero joined Microsoft Research in 1994, became manager of the speech group in 2000 and, since 2006, has been a Research Area Manager directing an organization of over 50 researchers and engineers working on audio, speech, multimedia, communication and natural language. Prior to joining Microsoft, he was the manager of the speech group at Telefonica Investigacion y Desarrollo (1992-1993) and a Senior Engineer at Apple Computer (1990-1991). He has 93 granted US patents.

Since 2000, Alex Acero has also been an Affiliate Professor of Electrical Engineering at the University of Washington, where he has taught Spoken Language Processing. He has served on the PhD thesis committees of 7 students.

 

Alex earned his Ph.D. from Carnegie Mellon University in 1990, his MS from Rice University in 1987 and a Telecommunications Engineering degree from the Universidad Politecnica de Madrid in 1985, all in Electrical Engineering.

 

Alex Acero & me

 

His work, as he says, is to play while getting paid for it. We talked about voice recognition systems and artificial vision, which remain far ahead of those used by Qbo; Microsoft's systems are far better trained than ours, of course. To give you an idea, their acoustic model is built from several hundred hours of training data, while Qbo uses the open-source VoxForge model, which has no more than 50 hours, although the number of trained minutes grows every day through the open-source recognition system Julius that Qbo uses.

For the recognition and image processing that lets the Kinect recognize a person's every movement pattern, they recruited hundreds of people of all ages, heights, clothing styles, etc., running into many problems when some people wore, for example, a hat or a skirt. We could have talked for hours and hours. Alex is someone you know exists but who is almost impossible to get access to, so it was a real pleasure to meet him and chat in person about robotics. Thanks, Alex, for your support of the project.

 


Thank you so much Alex…