Your brain is great and all, but it has a serious limitation: You can’t just download new information instantly, like in The Matrix. Robots, however, certainly can. Just imagine a future where they’re all hooked up to the cloud—when one of them learns something, they all learn something. Let’s just hope that certain something is nice, like how to give hugs.
The problem, though, is that you can’t just have a little rover learn to grasp something, then expect that knowledge to translate into a hulking bipedal robot. But new research out today from the MIT Computer Science and Artificial Intelligence Laboratory takes a big step toward making such seamless transfers of knowledge a reality. It all begins with a little robot named Optimus and its friend, the famous 6-foot-tall humanoid Atlas.
The researchers started by teaching Optimus—a two-armed robot meant for bomb disposal—how to pull a tube out of another tube. First, they gave it some information about how different objects require different manipulations. Then they held its hand in a sim. “Imagine kind of a videogame where the robot is inside that 3-D world,” says roboticist Claudia Perez-D’Arpino, co-author of the study. “With the mouse you can basically grab the hands and move them around.”
This way, you don’t have to be a gifted coder to be able to command a robot. And it’s all the more intuitive for the operator because it’s a lot like how humans learn: Toddlers have a knowledge base of, say, grasping a binky, but can recontextualize that knowledge of manipulation as they encounter new objects.
Now, how to transfer the robot’s skills to the bipedal Atlas, many times its size? After all, this bot has a new challenge: not falling on its face. “So mathematically that can be written as another series of constraints,” says Perez-D’Arpino, “which if you can imagine is like, keep your center of mass within some region.” Essentially, the operator has to give the new robot some rules, like how to balance correctly, to perform the same task as Optimus. Combine those rules with what Optimus has already learned about manipulating the tubes, and you get a smooth transfer of knowledge. It’s not an automatic handoff, to be sure, but it’s a start.
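To make the idea of a balance constraint concrete, here is a minimal sketch—not the researchers’ actual code, and with invented illustrative numbers rather than Atlas’s real geometry—of the check Perez-D’Arpino describes: the robot’s center of mass, projected onto the ground, must stay inside a support region under its feet.

```python
def com_within_support(com_xy, region_min, region_max):
    """Return True if the center-of-mass ground projection (x, y)
    lies inside an axis-aligned support region.

    com_xy: (x, y) projection of the center of mass onto the floor.
    region_min, region_max: corners of the support region, in meters.
    (All values here are hypothetical, for illustration only.)
    """
    x, y = com_xy
    return (region_min[0] <= x <= region_max[0] and
            region_min[1] <= y <= region_max[1])

# A motion planner would reject any pose that violates the constraint:
print(com_within_support((0.02, -0.01),
                         region_min=(-0.1, -0.05),
                         region_max=(0.1, 0.05)))   # near the middle: True
print(com_within_support((0.50, 0.00),
                         region_min=(-0.1, -0.05),
                         region_max=(0.1, 0.05)))   # way off to one side: False
```

In the real system such constraints are part of the trajectory optimization, but the core idea is this simple: balance becomes just another rule the motion must satisfy.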
At the moment, Atlas can only do the handoff in a simulator. But the development is a glimpse into a future where more and more, robots communicate without humans at all. They might, for instance, teach themselves to pull tubes out of tubes through a process known as reinforcement learning—essentially trying and trying and trying until they finally get it right.
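As a toy illustration of that trial-and-error loop—not the study’s method, with the candidate “pull angles” and reward rule invented for the sketch—here is reinforcement learning at its simplest: try actions, keep a running estimate of how well each one works, and gravitate toward the best one.

```python
import random

random.seed(0)
angles = [0, 15, 30, 45]          # hypothetical candidate pull angles (degrees)
value = {a: 0.0 for a in angles}  # estimated reward for each angle
counts = {a: 0 for a in angles}

def reward(angle):
    # Pretend the tube only comes free at a 30-degree pull.
    return 1.0 if angle == 30 else 0.0

# Try each angle once so every option has been sampled at least once.
for a in angles:
    counts[a] += 1
    value[a] += (reward(a) - value[a]) / counts[a]

for trial in range(200):
    # Mostly exploit the best-known angle; sometimes explore a random one.
    if random.random() < 0.2:
        a = random.choice(angles)
    else:
        a = max(angles, key=lambda ang: value[ang])
    counts[a] += 1
    value[a] += (reward(a) - value[a]) / counts[a]  # running average of reward

best = max(angles, key=lambda ang: value[ang])
print(best)  # → 30: after enough trials, the agent settles on what works
```

Scaled up to real robots, the rewards come from sensors and the actions are motor commands, but the loop is the same: try, score, adjust, repeat.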
Imagine the power of this in a factory setting: If one robot learns how to manipulate something more efficiently, it can distribute that knowledge to its comrades through the cloud. And with tweaks like what Perez-D’Arpino has demonstrated, that knowledge might even work with other species of robot as well. Meaning soon enough, robots will get smarter without human help and disseminate those skills freely.
Skills like hugging, right?