An algorithm that accounts for human errors could help train AI

Researchers at MIT have created an algorithm capable of identifying a person's goals and plans, even when those plans might fail. This line of research could improve assistive technologies, collaborative and caretaking robots, and digital assistants such as Siri and Alexa.

In a classic experiment on human social intelligence by psychologists Felix Warneken and Michael Tomasello, an 18-month-old toddler watches as a man carries a stack of books toward a closed cabinet. On reaching the cabinet, the man clumsily bumps the books against its closed doors several times, then makes a puzzled sound.

Then something remarkable happens: the toddler offers to help. Having inferred the man's goal, the toddler walks to the cabinet and opens its doors, allowing him to place his books inside. But how can a toddler with such limited life experience draw such a conclusion?

Recently, computer scientists have turned this question toward machines: how can they do the same?

Errors are a critical component in building this understanding. Just as the toddler could infer the man's goal only from his failed attempts, machines that infer our goals must account for our faulty actions and plans.

In a quest to recreate this social intelligence in machines, researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Department of Brain and Cognitive Sciences have created an algorithm capable of identifying goals and plans, even if those plans might fail.

This type of research could ultimately be used to improve a range of assistive technologies, collaborative and caretaking robots, and digital assistants such as Siri and Alexa.

“This ability to account for errors can be critical to building machines that reliably draw conclusions and act on our behalf,” explains Tan Zhi-Xuan, a PhD student at the Massachusetts Institute of Technology (MIT) and lead author of a new paper on the research. “Otherwise, AI systems might wrongly conclude that, because we failed to achieve our higher-order goals, those goals were not desirable after all.”

To create their model, the team used Gen, a new AI programming platform recently developed at MIT, to combine symbolic AI planning with Bayesian inference. Bayesian inference provides an optimal way to combine uncertain beliefs with new data and is widely used for financial risk assessment, diagnostic testing, and election prediction.
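As a rough illustration of how Bayesian inference applies to goal inference, the Python sketch below updates a belief over two candidate goals after observing a single failed action. The goal names and likelihood values are hypothetical, chosen only to mirror the cabinet example; this is not the Gen platform or the team's actual model.

```python
# Illustrative sketch (not the Gen platform): Bayesian inference over a
# person's goal given one observed action. Goal names and likelihoods
# are hypothetical stand-ins for the cabinet example.

# Prior belief over which goal the person is pursuing.
prior = {"put_books_in_cabinet": 0.5, "stack_books_on_table": 0.5}

# Assumed likelihoods: probability of observing the action "bump books
# against the cabinet doors" under each goal. Someone aiming for the
# cabinet is far more likely to produce this (failed) action.
likelihood = {"put_books_in_cabinet": 0.8, "stack_books_on_table": 0.05}

# Bayes' rule: posterior ~ likelihood * prior, then normalize.
unnormalized = {g: likelihood[g] * prior[g] for g in prior}
total = sum(unnormalized.values())
posterior = {g: p / total for g, p in unnormalized.items()}

print(posterior)
# The posterior now strongly favors "put_books_in_cabinet",
# even though the observed action was a failure.
```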

In developing the Sequential Inverse Plan Search (SIPS) algorithm, the scientists drew inspiration from a common feature of human planning: it is largely suboptimal. Rather than planning everything in advance, a person forms partial plans, carries them out, and then replans based on the outcomes. While this can lead to errors from not thinking far enough ahead, it also reduces cognitive load.
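The Python sketch below illustrates this kind of plan-execute-replan loop under toy assumptions, with an observer updating its belief about the agent's goal after each observed step. The environment, goals, planning horizon, and noise level are all invented for illustration and are not the published SIPS implementation.

```python
# Toy sketch of a plan-execute-replan loop with online goal inference.
# A 1D world: the agent moves toward one of two hypothetical goals
# while an observer updates a belief over goals after each step.

GOALS = ["cabinet", "table"]
GOAL_POS = {"cabinet": 5, "table": -5}

def partial_plan(pos, goal, horizon=2):
    """Boundedly-rational planner: plans only 'horizon' steps toward
    the goal instead of working out the full route in advance."""
    steps = []
    for _ in range(horizon):
        if pos == GOAL_POS[goal]:
            break
        step = 1 if GOAL_POS[goal] > pos else -1
        steps.append(step)
        pos += step
    return steps

def step_likelihood(pos, step, goal, noise=0.1):
    """Probability of the observed step if the agent pursues 'goal':
    usually matches the partial plan, with some chance of error."""
    planned = partial_plan(pos, goal, horizon=1)
    intended = planned[0] if planned else 0
    return 1 - noise if step == intended else noise

# Observer's belief over goals, updated online as actions arrive.
belief = {g: 1 / len(GOALS) for g in GOALS}
pos = 0
true_goal = "cabinet"

for _ in range(4):
    # The agent replans a short partial plan and executes its first step.
    plan = partial_plan(pos, true_goal)
    if not plan:
        break
    step = plan[0]

    # The observer updates its belief from the observed step (Bayes' rule).
    belief = {g: belief[g] * step_likelihood(pos, step, g) for g in GOALS}
    total = sum(belief.values())
    belief = {g: b / total for g, b in belief.items()}

    pos += step
    print(f"position={pos}, belief={belief}")
```

Even in this toy setting, the observer's belief converges on the true goal within a few steps, despite the agent never committing to a complete plan up front.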

The researchers hope their work will lay the philosophical and conceptual foundations needed to build machines that truly understand human goals, plans, and values. To them, the basic approach of modeling people as imperfect reasoners seems especially promising.
