Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities

Overview

Robots that work alongside humans must be able to understand directions and instructions in natural language. But not all instructions are created equal. For example, a human operating a forklift intuitively understands the difference between what researchers would call a high-level action, like “grab a pallet,” and a low-level action, like “tilt back a little bit.”

Robots can already connect natural language commands to the tasks they need to perform, but they generally assume a single, fixed level of abstraction for those tasks. Tellex’s invention allows a robot to interpret commands from humans at varying levels of granularity.

Market Opportunity

Robots can and will aid humans in countless ways, from assisting with search and rescue in places too dangerous for people to work, to acting as personal assistants that help us clean our homes. Whatever they do, it is paramount that machines can understand and act upon spoken commands.

Commands that sound simple to a human may be exceedingly difficult for a robot, especially one working under uncertainty, such as in an environment it has never encountered before. Existing approaches generally map natural language commands to a formal representation at a single, fixed level of abstraction. This works well for directing robots to complete certain predefined tasks, but it is ill-suited to helping a robot figure out what to do in a changing environment.

Innovation and Meaningful Advantages

Tellex’s invention is a system that equips robots with a new kind of module for interpreting human-robot instructions of varying granularities, or levels of abstraction. It includes a method for mapping natural language commands of varying complexity to reward functions at different levels of a hierarchical planning framework. A deep neural network learns to map each command to a reward function at the appropriate level of that hierarchy.

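The minimal Python sketch below illustrates the core idea only; all names and structures in it are hypothetical. A learned model (stood in for here by a trivial keyword rule, since the patented network is not reproduced) takes a natural language command and returns both an abstraction level in the planning hierarchy and a reward function identifier for the planner at that level.

# Minimal sketch (all names hypothetical): grounding a natural language
# command to a reward function at an inferred level of a planning hierarchy.
# The patented system learns this mapping with a deep neural network; a
# trivial keyword rule stands in for the trained model here.

from dataclasses import dataclass

@dataclass
class Grounding:
    level: int            # abstraction level in the hierarchical planner
    reward_function: str  # reward function identifier handed to that level

def ground_command(command: str) -> Grounding:
    """Stand-in for the learned language-grounding model."""
    text = command.lower()
    if "pallet" in text:   # high-level task, e.g. "grab a pallet"
        return Grounding(level=2, reward_function="pallet_grasped")
    if "tilt" in text:     # low-level motion, e.g. "tilt back a little bit"
        return Grounding(level=0, reward_function="forks_tilted_back")
    return Grounding(level=1, reward_function="unknown_goal")

if __name__ == "__main__":
    for cmd in ("Grab a pallet", "Tilt back a little bit"):
        g = ground_command(cmd)
        print(f"{cmd!r} -> level {g.level}, reward {g.reward_function!r}")

In the actual invention, the keyword rule would be replaced by the trained network, and each returned reward function would be planned over by the corresponding level of the hierarchical planner.
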
By integrating abstraction-level inference with the overall problem of grounding a natural language request, Tellex’s invention fully exploits the ability of a robot’s hierarchical planning system to execute tasks efficiently.

Collaboration Opportunity

We are seeking a licensing opportunity for this innovative technology.

Principal Investigator

Stefanie Tellex, PhD
Associate Professor of Computer Science; Associate Professor of Engineering
Brown University

IP Information

US Patent 10,606,898, issued March 31, 2020; US Patent 11,086,938, issued August 10, 2021


Contact

Brian Demers
Director of Business Development, School of Engineering and Physics
Brown Tech ID: 2509
For Information, Contact:
Brown Technology Innovations
350 Eddy Street - Box 1949
Providence, RI 02903
tech-innovations@brown.edu
401-863-7499
Inventors:
Stefanie Tellex
Lawson Wong
Nakul Gopalan
Dilip Arumugam
Siddharth Karamcheti