If I hand you an apple, you know from experience that it isn’t something you can drive. And the tree it came from can’t be woven, or its seeds bandied. You know that’s nonsense, but AIs don’t have the benefit of years spent navigating the world, and as such have no real idea as to what you can do with what — but a little common sense could be coming their way.
Researchers at Brigham Young University want to make sure that future androids and AI entities that interact with the real world have at least a basic understanding of what certain things are and do.
“When machine learning researchers turn robots or artificially intelligent agents loose in unstructured environments, they try all kinds of crazy stuff,” said Ben Murdoch, co-author of the BYU study, in a news release. “The common-sense understanding of what you can do with objects is utterly missing, and we end up with robots who will spend thousands of hours trying to eat the table.”
That specific situation probably doesn’t come up particularly often, but you get the idea. What’s needed is a sort of database of objects and common actions or attributes associated with them. That way, a robot knows that a dumbbell is meant to be picked up, not pushed, and that it’s heavy, not light — all of which affects how it goes about fetching or moving it.
To create such a database, or at least a prototype of it, you could of course assemble it manually — but that would be incredibly time-consuming. Instead, the researchers fed the English Wikipedia corpus to a computer, letting it chew through millions of words in context. Apply a little math and you find that apples are generally bitten, chairs sat in, trees climbed (or shaken), and so on.
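The article doesn't detail the researchers' actual pipeline, but the core idea — mine a large text corpus for which verbs tend to act on which nouns — can be sketched in a few lines. This toy version uses a tiny stand-in corpus, a hypothetical hand-picked verb list, and naive "verb followed by noun" matching in place of real parsing:

```python
import re
from collections import Counter, defaultdict

# Toy corpus standing in for Wikipedia; the actual study chewed through
# the full English Wikipedia text.
CORPUS = """
She bit the apple and sat in the chair.
He climbed the tree while she shook the tree.
They bit an apple, then climbed a tree.
He sat in a chair and bit another apple.
"""

# Hypothetical verb list; a real system would derive this from the corpus.
VERBS = {"bit", "sat", "climbed", "shook"}

def object_affordances(text, verbs):
    """Count verb -> following-noun co-occurrences as rough affordances."""
    counts = defaultdict(Counter)
    tokens = re.findall(r"[a-z]+", text.lower())
    for i, tok in enumerate(tokens):
        if tok in verbs:
            # Skip a few determiners/prepositions between verb and object.
            for nxt in tokens[i + 1:i + 4]:
                if nxt not in {"the", "a", "an", "in", "another", "then"}:
                    counts[nxt][tok] += 1
                    break
    return counts

affordances = object_affordances(CORPUS, VERBS)
print(affordances["apple"].most_common(1))  # [('bit', 3)]
print(affordances["tree"].most_common(2))   # [('climbed', 2), ('shook', 1)]
```

Even this crude tally recovers the intuition from the article: apples get bitten, chairs get sat in, trees get climbed or shaken. Scale the corpus up by a few orders of magnitude and the counts become a usable cheat sheet.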
This is a great cheat sheet for when an AI has to either interact with those objects itself, or understand what someone else is doing or saying in relation to them. The researchers tested the system by having an AI attempt to navigate a little text adventure with and without the list; it performed much better with it.
Common sense is a good start. Who wants to have to explain to their robot everything it should and shouldn’t try to do to every object?
The team presented their work at the International Joint Conference on Artificial Intelligence.
Featured Image: Bryce Durbin