Overview
Picking up an object is one of the simplest tasks a person can perform. For a robot, however, the same task requires a suite of surprisingly complex abilities. The machine must have a grasping hand capable of exerting just the right amount of force: enough to avoid dropping the object, but not so much as to crush it. The robot must also have sophisticated vision and object recognition, especially if it is to navigate an open environment to find and grasp a particular object, as a household robot may be asked to do.
Tellex’s method teaches a manipulator robot to model objects in its surroundings using a light field, a function that describes how light flows through every point in space. This model allows the robot to reliably pick and place objects.
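As a rough illustration of the underlying representation (not Tellex’s implementation), a light field can be discretized as a four-dimensional array of radiance samples. The two-plane parameterization below, including the class, method names, and resolutions, is a hypothetical sketch:

    # Hypothetical sketch of a two-plane light field L(u, v, s, t):
    # (u, v) indexes camera positions on an aperture plane and (s, t)
    # indexes pixels on an image plane. Not Tellex's implementation.
    import numpy as np

    class LightField:
        def __init__(self, n_u=17, n_v=17, n_s=256, n_t=256):
            # One RGB radiance sample per ray through (u, v) and (s, t).
            self.L = np.zeros((n_u, n_v, n_s, n_t, 3), dtype=np.float32)

        def add_view(self, u, v, image):
            # Record a calibrated camera view taken at grid position (u, v).
            self.L[u, v] = image

        def radiance(self, u, v, s, t):
            # Radiance along the single ray through (u, v) and (s, t).
            return self.L[u, v, s, t]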
Market Opportunity
Consider a home assistant robot tasked with helping humans around the house. The robot might be given simple commands such as “bring me a coffee cup.” Even if the machine can understand the natural language request, it still must be able to locate the cup and determine how to safely pick it up.
However, the visual perception of today’s robots is insufficient for such a task. Even if a robot could fetch a person’s coffee cup 95 percent of the time, the one time in twenty that it failed, possibly breaking the mug, would be enough for the user to distrust the robot or stop using it. The ability to safely and accurately assess and pick up objects would benefit robots working in homes, factories, and a variety of other environments.
Innovation and Meaningful Advantages
Tellex’s approach uses light fields to enable efficient object detection and localization. The method incorporates information from every pixel observed across multiple camera positions. Using the resulting model, a robot can identify objects, localize them, and extract the 3D structure that tells it how to pick them up.
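A minimal sketch of how multi-view pixel fusion could support localization, assuming calibrated cameras and a known tabletop plane; the project callback, grid resolution, and the simple template-matching step are illustrative assumptions, not the patented algorithm:

    # Illustrative sketch only: fuse pixels from several calibrated views
    # into a top-down orthographic composite, then localize an object
    # template by sum-of-squared-differences matching.
    import numpy as np

    def orthographic_composite(views, grid_shape=(200, 200)):
        # views: list of (image, project) pairs, where project(gx, gy)
        # maps a tabletop grid cell to (row, col) pixel coordinates in
        # that image, or None if the cell is out of view (hypothetical).
        acc = np.zeros(grid_shape + (3,), dtype=np.float64)
        count = np.zeros(grid_shape, dtype=np.int64)
        for image, project in views:
            for gy in range(grid_shape[0]):
                for gx in range(grid_shape[1]):
                    px = project(gx, gy)
                    if px is not None:
                        r, c = px
                        acc[gy, gx] += image[r, c]
                        count[gy, gx] += 1
        # Average the radiance seen at each tabletop cell across views.
        return acc / np.maximum(count, 1)[..., None]

    def localize(composite, template):
        # Slide the object template over the composite and return the
        # best-matching (row, col) offset.
        th, tw = template.shape[:2]
        H, W = composite.shape[:2]
        best, best_pos = np.inf, (0, 0)
        for r in range(H - th + 1):
            for c in range(W - tw + 1):
                err = np.sum((composite[r:r + th, c:c + tw] - template) ** 2)
                if err < best:
                    best, best_pos = err, (r, c)
        return best_pos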
The Brown team tested this model on a Baxter industrial robot and demonstrated that the robot could pick an object hundreds of times in a row without failure. Moreover, the Baxter robot required only 30 seconds to model one side of a new object when presented with it for the first time and could locate the object to within two millimeters. Crucially, the model allows a robot to merge the models it has built for specific objects into models for categories of objects, which in turn improves accuracy when the robot is asked to grasp a previously unencountered object.
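The brief does not describe the merging procedure; one plausible reading, sketched below purely for illustration, is to pool per-object templates into an average category template:

    # Hypothetical sketch: pool per-object models (e.g., templates for
    # several coffee mugs) into a category model by averaging. The
    # actual merging procedure may differ.
    import numpy as np

    def category_model(object_templates):
        # object_templates: same-shaped arrays, one per object instance.
        return np.stack(object_templates, axis=0).mean(axis=0)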
Collaboration Opportunity
We are seeking a licensing opportunity for this innovative technology.
Principal Investigator
Stefanie Tellex, PhD
Associate Professor of Computer Science; Associate Professor of Engineering
Brown University
IP Information
US Utility US20200368899A1, Issued December 19, 2023
Contact
Brian Demers
Director of Business Development, School of Engineering and Physics
Brown Tech ID: 2534