Interactive, Multilayer, Multivalued Volume-Rendering for Superior 3D Images from MRI or Other Data (Case 1492)

Principal Investigator:

 

David Laidlaw, PhD, Professor

Department of Computer Science

Brown University

Providence, RI

 

Brief Description:

 

Magnetic resonance imaging (MRI) is a well-established and powerful technique for elucidating structure and is used, for example, in medical applications to discriminate between normal and pathological tissue.  The contrast and clarity of an MRI image depend on several complex technical parameters, including data acquisition, quality, processing, and display media, among others.  MR images, like other imaging modalities (PET, CT, EEG, etc.) and like modeling and simulation, generate large multi-valued datasets that represent a three-dimensional (3D) structure, e.g., human tissue.  The process of converting such a dataset into a displayable form for use and interpretation is called volume rendering.  However, using volume rendering to create comprehensive and accurate visualizations for exploring 3D multi-valued data presents several formidable challenges: creating visualizations in which data nearer to the viewer does not excessively obscure data farther away, representing many values and their interrelationships at each spatial location while also conveying relationships among different spatial locations, and rendering these large datasets interactively.
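By way of illustration, volume rendering typically composites samples along each viewing ray; the short Python/NumPy sketch below is a generic front-to-back alpha-compositing routine, not part of the invention, and shows how nearly opaque samples close to the viewer can dominate the result and obscure structure farther along the ray.

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Front-to-back alpha compositing of the samples along one viewing ray.

    colors: (N, 3) array of RGB samples, nearest to the viewer first.
    alphas: (N,) array of opacities in [0, 1].
    Returns the composited RGB color seen by the viewer.
    """
    out_color = np.zeros(3)
    transmittance = 1.0  # fraction of light from deeper samples that still reaches the eye
    for color, alpha in zip(colors, alphas):
        out_color += transmittance * alpha * color
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-3:  # early termination: deeper samples no longer contribute
            break
    return out_color

# A nearly opaque near sample hides the far sample; a more transparent one does not.
colors = np.array([[1.0, 0.0, 0.0],   # near sample, red
                   [0.0, 1.0, 0.0]])  # far sample, green
print(composite_front_to_back(colors, np.array([0.9, 0.9])))  # ~[0.9, 0.09, 0.0]
print(composite_front_to_back(colors, np.array([0.2, 0.9])))  # ~[0.2, 0.72, 0.0]
```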

 

Existing techniques for diffusion tensor field visualization, vector field visualization, hardware-accelerated volume rendering, and thread rendering have attempted to address these challenges, but they remain inadequate for several technical reasons: 2D-based techniques with obscured data points do not generalize well to 3D or provide accurate, unambiguous structural information; redundant copies of 3D volume data must be stored in texture memory; and commercial graphics cards or distributed hardware do not target the visualization of multi-valued datasets.  Hence, a need exists to overcome these and other problems in volumetric imaging of large, multi-valued datasets.

 

The innovative technology is a method, computer program product, and apparatus/system that surpasses the current state of the art for rendering multi-valued volumes in an interactive and flexible manner.  This unique invention provides a multilayer, interactive, volume-rendering system and method for exploring multi-valued 3D scientific data more accurately and completely.  Primary data is used to display improved images of 3D objects (e.g., a human brain derived from MRI data) that convey greater information.  Data may also be derived from simulated fluid-flow calculations, and images may be presented monoscopically or stereoscopically.  Technical aspects of the method include transforming at least one multi-valued dataset into an interactive visualization, using a filament- or thread-like density volume to represent continuous directional information, a complementary volume that generates a halo around each thread so that threads remain visually distinct, a derived exploratory culling volume that interactively controls the complexity of each layer, and two-dimensional (2D) transfer-function editing.
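As a rough sketch of how such layer volumes and a 2D transfer-function lookup might be organized, the hypothetical Python/NumPy example below uses invented names (ThreadLayer, apply_2d_transfer_function) and a simple table lookup; it illustrates the concepts only and is not the patented implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ThreadLayer:
    """Hypothetical container for one rendering layer of a multi-valued dataset."""
    density: np.ndarray   # thread-like density volume encoding continuous directional structure
    halo: np.ndarray      # complementary volume of halos that keep neighboring threads distinct
    culling: np.ndarray   # exploratory culling volume: 1.0 keeps a voxel, 0.0 hides it

def apply_2d_transfer_function(index_a, index_b, lut):
    """Map two co-located data values through a 2D transfer function.

    lut: (Na, Nb, 4) table of RGBA entries that the user edits interactively.
    index_a, index_b: integer volumes indexing the two axes of the table.
    Returns an RGBA volume ready for texture-based volume rendering.
    """
    return lut[index_a, index_b]

# Tiny illustrative example: a 4x4x4 layer and an 8x8 two-dimensional transfer function.
rng = np.random.default_rng(0)
layer = ThreadLayer(density=rng.random((4, 4, 4)),
                    halo=rng.random((4, 4, 4)),
                    culling=np.ones((4, 4, 4)))
lut = rng.random((8, 8, 4))
index_a = rng.integers(0, 8, size=(4, 4, 4))
index_b = rng.integers(0, 8, size=(4, 4, 4))
rgba = apply_2d_transfer_function(index_a, index_b, lut) * layer.culling[..., None]
print(rgba.shape)  # (4, 4, 4, 4): one RGBA value per voxel in this layer
```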

 

Functionally, the apparatus/system achieves a superior 3D image using interactive manipulation widgets that are less computationally expensive than those in the prior art.  The system consists of a rendering engine with an input coupled to a source of multi-valued primary data and an output coupled to a visual display (e.g., CRT, flat-panel screen, wearable stereoscopic display, etc.).  The rendering engine comprises a data processor that performs four basic method steps.  Step 1 calculates new, additional data values from the primary data.  Step 2 derives at least one visual representation from abstractions of both the primary data and the additional data values.  Step 3 maps the derived visual representation through transfer functions, which produce color and opacity, to hardware primitives for volumetrically rendering that representation.  Step 4 renders the layers interactively as the user manipulates the transfer functions.  Moreover, this practical, interactive volume-rendering system does not require costly and complex graphical display systems; rather, it can be implemented directly on commercial personal computer (PC) graphics hardware, i.e., a card with 3D texturing capability.
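The hypothetical Python/NumPy sketch below walks through these four steps under simplifying assumptions: gradient magnitude stands in for the richer derived data values, and simple ramp functions stand in for the editable color and opacity transfer functions; the function names are illustrative and are not taken from the patent.

```python
import numpy as np

def step1_derive_values(primary):
    """Step 1: calculate additional data values from the primary data
    (a gradient-magnitude volume stands in for the richer derived values)."""
    gx, gy, gz = np.gradient(primary)
    return np.sqrt(gx**2 + gy**2 + gz**2)

def step2_visual_representation(primary, derived):
    """Step 2: derive a visual representation from the primary and additional values
    (here simply stacked per voxel)."""
    return np.stack([primary, derived], axis=-1)

def step3_map_through_transfer_functions(representation, color_tf, opacity_tf):
    """Step 3: map the representation through transfer functions producing color
    and opacity, yielding RGBA values suitable for upload as a 3D texture."""
    value, derived = representation[..., 0], representation[..., 1]
    rgb = color_tf(value)
    alpha = opacity_tf(derived)
    return np.concatenate([rgb, alpha[..., None]], axis=-1)

def step4_render_interactively(primary, color_tf, opacity_tf):
    """Step 4: re-run the mapping whenever the user edits a transfer function;
    on PC graphics hardware this would update the 3D texture and redraw the layer."""
    derived = step1_derive_values(primary)
    representation = step2_visual_representation(primary, derived)
    return step3_map_through_transfer_functions(representation, color_tf, opacity_tf)

# Illustrative use with a toy volume and simple ramp transfer functions.
volume = np.random.default_rng(1).random((8, 8, 8))
grayscale_tf = lambda v: np.repeat(v[..., None], 3, axis=-1)        # color from primary value
opacity_tf = lambda d: np.clip(d / (d.max() + 1e-9), 0.0, 1.0)      # opacity from derived value
rgba_texture = step4_render_interactively(volume, grayscale_tf, opacity_tf)
print(rgba_texture.shape)  # (8, 8, 8, 4)
```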

 

Applications for this invention span any number of medical specialties and scientific fields for the diagnosis, detection, and/or exploration of tissues, objects, flows, etc.  The system can be used with several different imaging modalities (PET, CT, MRI, EEG, MEG, confocal and multi-photon microscopy, and/or optical projection tomography) to visualize and image brain or other anatomical and pathological structures, arterial blood flow, and/or any object or fluid flow where 3D images of multi-valued 3D scientific, medical, and/or other data are desired.

Markets for this invention include clinical imaging and/or diagnostic medical devices, imaging software, and scientific R&D tools for visualization and imaging.

 

Patent Information:

US Patent 7,355,597, issued 04/08/2008

For Information, Contact:
Brown Technology Innovations
350 Eddy Street - Box 1949
Providence, RI 02903
tech-innovations@brown.edu
401-863-7499
Inventors:
David Laidlaw
Andreas Wenger