Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications

Overview

There are existing ways to translate a person’s instructions for a robot into formal expressions, such as Linear Temporal Logic (LTL) formulas, that the machine can execute to achieve the goal. Such approaches are powerful, but they are also technical and complex; a non-expert cannot be expected to learn a formal language like LTL to communicate with a robot.

Tellex and colleagues have invented a sequence-to-sequence mapping technique that allows a robot to learn how to translate instructions given in plain English into LTL expressions that the machine knows how to execute.

Market Opportunity

Imagine the household robots of the future, living in our homes and completing a variety of domestic tasks. This advancement will not be possible if robots require instructions in an advanced technical language to fulfill tasks; it depends on robots correctly understanding commands given in plain English and reliably translating common language into the correct series of actions.

However, even commands that sound simple to the human ear are deceptively complex for a robot to understand, given the nuance that can be communicated in just a few words. Consider telling a robot to “go down the right side of the hallway to the bedroom,” an order that restricts the paths the robot may choose on its journey, or “watch the sink for dirty dishes and wash any that you see,” which requires the robot to enter a loop in which it constantly scans for new dirty dishes to clean.
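
As an illustrative sketch only (the atomic propositions and exact formulas below are assumptions, not taken from the patent), such commands correspond naturally to temporal-logic formulas rather than to single goal states:

    “go down the right side of the hallway to the bedroom”
        F(bedroom) & G(in_hallway -> right_side)
        (eventually reach the bedroom; always stay on the right side while in the hallway)

    “watch the sink for dirty dishes and wash any that you see”
        G(dirty_dish_in_sink -> F(dish_washed))
        (whenever a dirty dish appears, eventually wash it)

Here F (“eventually”) and G (“always”) are standard LTL operators; it is these temporal constraints that make the tasks non-Markovian.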

Innovation and Meaningful Advantages

To overcome this linguistic challenge, Tellex’s invention uses a probabilistic variant of LTL as a language for specifying goals via a Markov Decision Process (MDP). Its sequence-to-sequence neural learning models successfully ground human language to this semantic representation, and the accompanying analysis highlights generalization to novel, unseen logical forms as an open problem for this class of model.
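
Conceptually, specifying a goal via an MDP means rewarding trajectories whose histories satisfy the temporal formula. The sketch below is a hedged illustration of that idea, not the patented formulation; the state labels, reward values, and helper names are assumptions introduced only for this example.

    # Hedged sketch: treating a simple "eventually" goal as an MDP objective.
    # State labels and the reward scheme are illustrative assumptions.
    def eventually_satisfied(trajectory, proposition):
        """Return True if the proposition holds in any visited state (LTL 'F')."""
        return any(proposition in labels for labels in trajectory)

    def episode_reward(trajectory, proposition="bedroom"):
        # Reward 1.0 if the temporal goal was achieved during the episode, else 0.0;
        # a planner that maximizes expected reward therefore pursues the LTL goal.
        return 1.0 if eventually_satisfied(trajectory, proposition) else 0.0

    # Example: each state is labeled with the atomic propositions true in it.
    print(episode_reward([{"hallway"}, {"hallway", "right_side"}, {"bedroom"}]))  # 1.0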

The present invention enables a robot to learn a mapping between English commands and LTL expressions and includes a dataset that maps between English and LTL in a model environment. Neural sequence-to-sequence learning models are used to infer the LTL sequence corresponding to a given natural language input sequence. By integrating a solution for abstraction-level inference with the overall problem of grounding a natural language request, Tellex’s invention fully exploits the ability of a robot’s hierarchical planning system to efficiently execute tasks.
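
As a minimal sketch only (not the inventors’ implementation: the toy vocabularies, the single training pair, and all hyperparameters below are assumptions), the core inference step can be illustrated with a standard PyTorch encoder-decoder that maps an English token sequence to an LTL token sequence:

    import torch
    import torch.nn as nn

    # Toy vocabularies for the illustration; a real dataset would define these.
    SRC_VOCAB = ["<pad>", "<sos>", "<eos>", "go", "down", "the", "right", "side", "of", "hallway", "to", "bedroom"]
    TGT_VOCAB = ["<pad>", "<sos>", "<eos>", "F", "G", "&", "(", ")", "bedroom", "right_side"]
    src_idx = {w: i for i, w in enumerate(SRC_VOCAB)}
    tgt_idx = {w: i for i, w in enumerate(TGT_VOCAB)}

    class Seq2Seq(nn.Module):
        def __init__(self, src_size, tgt_size, hidden=64):
            super().__init__()
            self.src_emb = nn.Embedding(src_size, hidden)
            self.tgt_emb = nn.Embedding(tgt_size, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_size)

        def forward(self, src, tgt):
            # Encode the English command, then decode LTL tokens from its final state.
            _, h = self.encoder(self.src_emb(src))
            dec_out, _ = self.decoder(self.tgt_emb(tgt), h)
            return self.out(dec_out)

    # One illustrative (English command, LTL formula) training pair.
    src = torch.tensor([[src_idx[w] for w in
                         ["go", "down", "the", "right", "side", "of", "the", "hallway", "to", "the", "bedroom"]]])
    tgt = torch.tensor([[tgt_idx[w] for w in
                         ["<sos>", "F", "(", "bedroom", ")", "&", "G", "(", "right_side", ")", "<eos>"]]])

    model = Seq2Seq(len(SRC_VOCAB), len(TGT_VOCAB))
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(200):  # toy training loop on the single pair
        logits = model(src, tgt[:, :-1])  # predict the next LTL token at each position
        loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
        optim.zero_grad()
        loss.backward()
        optim.step()

In practice, the learned decoder is run from the start token (greedily or with beam search) to produce an LTL expression, which the robot’s planner then executes.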

Collaboration Opportunity

We are seeking a licensing opportunity for this innovative technology.

Principal Investigator

Stefanie Tellex, PhD
Associate Professor of Computer Science; Associate Professor of Engineering
Brown University

IP Information

US Patent 11,034,019, Issued June 15, 2021


Contact

Brian Demers
Director of Business Development, School of Engineering and Physics
Brown Tech ID: 2559

Patent Information
Categories: Robotics, Software
Inventors: Stefanie Tellex

For Information, Contact:
Brown Technology Innovations
350 Eddy Street - Box 1949
Providence, RI 02903
tech-innovations@brown.edu
401-863-7499