Time-Lapse Light Field Photography With a 7 DoF Arm

Overview

Robots equipped with an arm for gripping and grasping may be asked to pick up objects with a wide variety of shapes, weights, sizes, and colors. One particularly challenging job for a robot is grasping shiny or transparent objects, whose reflections or transparency may fool the robotic vision system.

Tellex and colleagues invented a robotic arm design in which the camera that acts as the machine’s “eye” is embedded within the arm itself. In this way, the robot necessarily sees the shiny or transparent object from a variety of angles as its arm reaches to pick up the item, increasing its spatial awareness of the object and its likelihood of a successful grasp.

Market Opportunity

Consider a future robot that assists a surgeon. To pick up the correct tool and hand it to the physician, the machine must be able to identify metal tools on a metal tray and define their shapes and edges. A household robotic assistant might wash silverware or glasses in a sink full of running water, a similarly complicated task.

Items such as metal tools and silverware are called “non-Lambertian objects”: their shiny or reflective surfaces create vivid colors and gradients that change dramatically with camera position. Existing computer vision and object detection strategies struggle with non-Lambertian objects because those systems rely on a single camera position. An innovative approach like Tellex’s would dramatically improve robots’ ability to grip the full variety of objects they may encounter.

Innovation and Meaningful Advantages

Because single-image systems struggle to understand non-Lambertian objects, Tellex’s invention focuses on ways a robotic arm system can see a shiny object from a variety of angles. One approach is to embed a camera within the arm itself, so that the robot necessarily views the object from new angles as it reaches to grasp it, thereby defining the object more clearly within its environment. A related approach houses the camera elsewhere on the robot but still moves it through a variety of viewpoints.
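As a rough illustration of this eye-in-hand, multi-view idea, the sketch below drives a wrist-mounted camera through a planned reach trajectory and records one image per pose. The camera interface (`move_to`, `grab_frame`) and the downstream `plan_grasp` call are hypothetical placeholders, not the API of the patented system.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class View:
    pose: np.ndarray   # 4x4 camera-to-world transform at capture time
    image: np.ndarray  # HxWx3 RGB image captured at that pose

def capture_along_reach(camera, trajectory: List[np.ndarray]) -> List[View]:
    """Drive an eye-in-hand camera through a reach trajectory, capturing
    one image per pose.  `camera` is assumed to expose move_to(pose) and
    grab_frame() methods (hypothetical interface)."""
    views = []
    for pose in trajectory:
        camera.move_to(pose)  # servo the arm so the wrist camera reaches this pose
        views.append(View(pose=pose, image=camera.grab_frame()))
    return views

# Usage sketch (hypothetical): the many viewpoints gathered during the reach
# help disambiguate reflections that would confuse a single fixed camera.
# views = capture_along_reach(arm_camera, planned_reach_poses)
# grasp = plan_grasp(views)
```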

The invention pushes robotic perception even further through light field photography, which records both the intensity and the direction of the light rays the camera receives. Light fields naturally capture phenomena such as reflection and refraction, improving a machine’s ability to understand non-Lambertian objects.
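To make the light field idea concrete, the following sketch (a simplified illustration, not the patented method) stores each pixel observation as a ray with an origin, a direction, and a color. The pinhole intrinsics matrix `K` and the 4x4 camera-to-world pose are assumed inputs; aggregating rays from many arm poses yields a sampled light field in which view-dependent effects such as specular highlights appear explicitly as direction-dependent color.

```python
import numpy as np

def rays_from_view(image: np.ndarray, K: np.ndarray, cam_to_world: np.ndarray):
    """Convert one calibrated RGB view into light field samples.

    Each sample is (origin, direction, color): the world-frame ray through a
    pixel plus the color observed along it.  Assumes a pinhole camera with
    intrinsics K and a 4x4 camera-to-world pose."""
    h, w, _ = image.shape

    # Pixel grid in homogeneous coordinates (u, v, 1).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)

    # Back-project pixels to direction vectors in the camera frame,
    # then rotate into the world frame and normalize.
    dirs_cam = pix @ np.linalg.inv(K).T
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    dirs_world = dirs_cam @ R.T
    dirs_world /= np.linalg.norm(dirs_world, axis=1, keepdims=True)

    origins = np.broadcast_to(t, dirs_world.shape)  # every ray starts at the camera center
    colors = image.reshape(-1, 3)
    return origins, dirs_world, colors

# Collecting rays from every pose along the reach samples the light field: the
# same surface point is seen from many directions, so reflection and refraction
# show up as direction-dependent color rather than a single misleading appearance.
```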

Collaboration Opportunity

We are seeking a licensing opportunity for this innovative technology.

Principal Investigator

Stefanie Tellex, PhD
Associate Professor of Computer Science; Associate Professor of Engineering
Brown University

IP Information

US Patent 10,766,145, Issued September 8, 2020


Contact

Brian Demers
Director of Business Development, School of Engineering and Physics
Brown Tech ID: 2472
For Information, Contact:
Brown Technology Innovations
350 Eddy Street - Box 1949
Providence, RI 02903
tech-innovations@brown.edu
401-863-7499
Inventors:
Stefanie Tellex
John Oberlin