
Visualization of Volume Data from Confocal Microscopy

Slavomir Hudak

2006

Master's Thesis

Faculty of Mathematics, Physics and Informatics

Comenius University, Bratislava, Slovakia

Abstract

This thesis presents a parallel volume visualization system for data supplied by confocal laser scanning microscopy. It describes and compares various methods for volume visualization and optimization and their implementations on parallel architectures. The parallel volume renderer we have developed runs on a 16-node PC cluster connected by a high-speed Myrinet network and uses MPI, ray casting and the binary-swap compositing algorithm. To improve efficiency, several optimizations that speed up the ray-casting process are used, including empty-region skipping and an efficient bricking addressing scheme. We also suggest several ways of improving performance through MPI and compiler settings. The system is capable of running interactively (several frames per second) on a PC cluster.

Keywords: Volume ray-casting, ray-tracing, parallel rendering, PC clusters, binary swap algorithm, volume data.

Abbreviations

Name Description
BTF Back-To-Front
CLSM Confocal Laser Scanning Microscopy
CPU Central Processing Unit
CT Computed Tomography
FTB Front-To-Back
GNU GNU’s Not Unix (Free Software Foundation)
GPL General Public Licence
GPU Graphics Processor Unit
HPC High Performance Computing
IEEE The Institute of Electrical and Electronics Engineers
ILC International Laser Centre
ISO International Organization for Standardization
I/O Input/Output
LAN Local Area Network
LSM Laser Scanning Microscopy
MIMD Multiple Instruction, Multiple Data
MISD Multiple Instruction, Single Data
MPI Message Passing Interface
MRI Magnetic Resonance Imaging
NUMA Non-Uniform Memory Access
RADC Ray Acceleration by Distance Coding
PGI The Portland Group, Inc.
PSF Point Spread Function
SIMD Single Instruction, Multiple Data
SISD Single Instruction, Single Data
SMP Symmetric Multiprocessing
SPMD Single Program, Multiple Data
SSE Streaming SIMD Extensions
SSH Secure Shell
TIFF Tagged Image File Format

Chapter 1 - Introduction

Volume visualization is a well-known branch of computer graphics, more precisely a part of scientific visualization. It allows the internals and complex behavior of volume objects to be explored. Volume visualization has developed so quickly mainly thanks to the needs of medicine, where organs of the human body must be displayed precisely, correctly and fast enough on common personal computers. Today, new techniques and available hardware make it possible to visualize volume data interactively for doctors. Long research and development has improved both the software and the hardware side of the problem, and there are well-explored techniques for visualizing complex structures from various areas of science. In comparison to medicine, structures at the microscopic level are much more complex and many of them are still unexplored.

Recently, multidimensional visualization has expanded greatly in the field of microscopy. Many techniques have been successfully applied in light and electron microscopy. Confocal laser scanning microscopy (CLSM) brings new possibilities of optical visualization that lead to rapid adoption of visualization techniques for all kinds of samples.

In the first chapter we introduce the problem of interactive visualization, clarify the term confocal microscopy and describe the confocal data itself. The second chapter gives an overview of volume visualization methods and describes in detail the idea and principles of the ray-tracing algorithm. The third chapter introduces the reader to the problems of creating parallel algorithms and presents approaches to parallel volume visualization. Chapter four presents a couple of optimization techniques that can be applied to the approaches described in the previous chapters. Chapter five contains all the details about the design and implementation of the parallel application we have built; it also describes the challenges and problems experienced during development. The results of the work are presented in chapter six. Chapter seven contains a summary and possible extensions of the work. The references conclude the thesis.

1.1 Problem Area

3D image processing is an area of computer graphics that has recently been getting more and more attention, mainly because of the growing speed of computers and the availability of new technologies. Thanks to that, we are today able to process image data much faster than before, and we experience the use of 3D image processing in our daily lives. It plays an important role in medicine, where it helps doctors analyze data from devices such as magnetic resonance imaging (MRI) or computed tomography (CT). Other use cases can be observed, for example, in geography (visualization of gas, petroleum or water models extracted from seismic data), physics (visualization of results from experiments and simulations), industry (analysis of particle fractures), the navy (display of signals from sonar) or biology (reconstruction of a 3D model of an object from a series of slices gathered by a confocal microscope). In this work we focus on the latter area. Note that the algorithms and techniques described in this paper are not coupled to confocal data.

1.2 Motivation and Goal

The goal of the thesis was to study, compare and select the best methods for volume visualization of confocal data on parallel architectures. We designed and created tools for displaying confocal data on a parallel computer - a PC cluster. In confocal microscopy, the main purpose of visualization is to produce results that are easy to interpret: realistic results without artifacts in which, where possible, areas of interest can be highlighted. Confocal microscopy allows pictures to be taken relatively easily and in a non-invasive way; however, the visualization of such data is quite a complex process.

To reach our goal it was necessary to study volume visualization and parallel rendering (chapters 2 and 3). In those chapters we discuss important techniques and comparisons, but also related problems and pitfalls. Volume rendering is only one of many methods of volume data visualization, and we compare its advantages and disadvantages. The ray-casting method is described in detail (chapter 2), as is the construction of a parallel program and the challenges related to it (chapter 3). In that chapter we also discuss creating and rendering an image on parallel architectures. To enhance the performance of ray casting, we describe in chapter 4 many software optimizations, including the skipping of uninteresting areas of the volume data, an effective addressing scheme and a lookup table of homogeneous cells. A description of the architecture and implementation of the system can be found in chapter 5. That chapter contains the specification of the hardware used, details regarding the implementation of the core components and the problems we met while developing the software. The results, also in graph form, are presented in chapter 6. We describe the compiler settings used to get the best results, compare the values and present results from Ethernet and Myrinet networks in tables. A comparison of the basic and optimized versions of the program is also presented.

The main contribution of the research and development is the creation of a parallel process for visualizing CLSM data with direct volume methods on a PC cluster. The architecture aims to be generic enough to be used on any parallel architecture, and the implementation of the classes and algorithms aims to be as portable and effective as possible. In the end the visualization of the data is interactive, although this also depends on the size of the volume data. The thesis also provides an overview of volume visualization, the creation of parallel programs and parallel volume visualization.

Work on the thesis and the program was done in cooperation with the International Laser Centre (ILC), which resides at the Faculty of Mathematics, Physics and Informatics of Comenius University, Bratislava, Slovakia.

The International Laser Centre is an organization focused on education, research and development in the area of progressive methods and technology of photonics and their application in various areas of national and international cooperation. The department of biophotonics operates a Zeiss LSM 510 Meta NLO confocal microscope that includes a spectral detector and an inverse microscope Axiovert 200M with pre-loading of specimens (electro-stimulation, perfusion, warming, etc.). The device is used in multiple projects aimed at studying mitochondrial metabolism and the modulation of excitation and contractility in isolated cardiomyocytes in relation to the development and course of hypertension. It is also used to study polymeric membranes and matrices made of polyelectrolytic complexes, and in studies focused on the application of different photosensitizers in the photodynamic therapy of tumors.

Image processing of confocal microscopy data is closely tied to the possibility of using the IBM eServer 1350 computer cluster. Currently the cluster is mainly used for confocal data analysis and the validation of models of the studied processes occurring in living systems. However, the ILC also intends to use the cluster for direct imaging - visualizing large-scale data using parallel rendering techniques.

Currently the laser centre does not have any satisfactory tool for interactive rendering of captured specimens. There are programs that can create a two-dimensional image or an animation (rotation of the camera around the captured object). It is important to note that most of these programs run on commonly available computers, which limits the speed of creation and, of course, the quality. It should also be noted that some commercial solutions are available, but for quite a high price.

Attempts were made to visualize confocal data on graphics cards using f3dvr (Červeňanský, 2004), but the results were unacceptable. The first problem was the size of the data the program is able to visualize; as will be mentioned later, the size of confocal data exceeds 1 gigabyte. The second problem again stems from the characteristics of the data: they contain very few samples in the z-direction. With the graphics-card solution the result is displayed in one plane, and experiments have shown a loss of 3D object perception and lower image quality.

Processing spatial data is a time- and storage-intensive task. In order to display confocal data interactively, we decided to choose a parallel approach using the PC cluster located at the ILC.

1.3 Interactive visualization of volume data

Before we analyze interactive volume visualization, let us look at the meaning of certain terms. The following paragraph is based on (Sramek, 1998), (Bentum, 1996), (Zara, 1998). Visualization can be seen as a graphical representation of abstract data, mostly stored in the form of numbers or text. A table with n samples and their associated values does not say much at first glance. However, if we create a graph from the values, although we lose the accuracy of the numbers, it is possible to say much more about the dimensions and characteristics of the data.

With that said, we can define volume visualization as the process of projecting three-dimensional data onto a two-dimensional plane in order to understand the structure of the objects contained in the data. In the literature the term volume rendering is also used.

By interactive visualization we mean data visualization where the user is able to change the parameters in real time. The requirement for interactivity is critical because we are displaying objects about whose structure we have no prior information, and changes made by the user should take effect immediately. For continuous display of the scene it is necessary that the program renders the data at a speed of at least 25 frames per second (in practice very difficult to achieve, and dependent on the structure and size of the data). As we shall see later, (interactive) visualization of volume data is computationally very expensive and even a parallel approach alone does not lead to satisfactory results. It is necessary to use special techniques and optimizations to (partially) solve this problem.

Interactive volume visualization has several advantages in comparison to classical static rendering:

  • Arbitrary view of the scene (position of the camera). The user can set the camera and its distance from the objects. Using rotation we can better understand the structure and shape of the objects in the scene.
  • Cutting. We do not display some parts of the object. Thanks to that we can display the inner structures of the object, or remove areas we are not interested in. By not displaying some parts, we also get better performance.
  • Customizing the visualization method. The user can customize the parameters of the projection on the fly.

1.4 Confocal Microscopy

The idea is to point a laser beam at the specimen; the laser is focused on a single point (Pawley, 1995). In the next step the emitted light (fluorescence) is captured through an aperture located in front of the detector (see picture). In this way, point by point, confocal microscopy allows pictures of 3D objects to be taken in a non-invasive and selective way. The microscope produces a set of slices (images) of the object. The images are often taken using various fluorescence colors, which allows us to highlight the parts of the object we are interested in. Point illumination and capturing the data point by point lead to much better volume resolution in comparison to traditional microscopy. It allows us to better study and explore the layer the microscope is focused on, because everything outside the focal plane is barely visible.

The latest confocal microscopes are able to perform spectral decomposition of the emitted light by detecting the light on multiple frequencies in parallel (Dickinson et al, 2001). The final multi-spectral image represents not only the spatial (or also temporal) distribution of intensity, but also contains information about the spectral signature of the visualized objects. Improvements in fluorescent confocal microscopy also allow the measurement of dynamic changes in many generations of fluorescent molecules in biological samples. Fluorescent staining is a standard tool for studying structural or functional aspects of living cells (Pawley, 1995).

Confocal Microscopy

1.4.1 CLSM data characteristics

The goal of (not only) biological data visualization is to display the object and its inner structures. CLSM data are usually provided as a stack of 2D images representing slices of the sample; an example of a slice can be found in the next image. Before starting to gather the images, the user must decide on the image size. Typical sizes are 256x256, 512x512 or 1024x1024, but these are not mandatory and custom sizes can also be set. The images in the stack are aligned with the X, Y and Z axes. The color depth is usually 8 or 12 bits. The microscope can produce pictures in up to 32 channels, which means we can obtain four-dimensional data. These settings influence the size of the image series (or series, in the case of more channels). For example, a stack that consists of one series of twenty 1024x1024 12-bit images takes around 50 MB. The value of a pixel represents the amount of light; it does not contain any color information. In real-world applications we usually work with much bigger data of hundreds or thousands of megabytes. The data contain raw images stored in the LSM format. This format is basically a multi-page TIFF (Adobe, 1992) with a couple of extensions; a detailed description can be found in its specification (Zeiss). An LSM file containing a stack of images will be the input to our program.
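
To make the figures above concrete, the following small C++ sketch estimates the raw in-memory size of such a stack. The 2-bytes-per-sample assumption (12-bit values stored in 16-bit words) and the helper name are ours, introduced only for illustration; the on-disk LSM file also carries TIFF metadata.

```cpp
#include <cstdint>
#include <iostream>

// Rough in-memory footprint of a CLSM image stack (illustrative estimate only).
// Assumes 12-bit samples are stored in 16-bit words, i.e. 2 bytes per sample.
std::uint64_t stackBytes(std::uint64_t width, std::uint64_t height,
                         std::uint64_t slices, std::uint64_t channels,
                         std::uint64_t bytesPerSample = 2)
{
    return width * height * slices * channels * bytesPerSample;
}

int main()
{
    // One channel, twenty 1024x1024 slices: ~40 MB of raw samples, which is in
    // the range of the ~50 MB quoted above once file overhead is included.
    std::cout << stackBytes(1024, 1024, 20, 1) / (1024.0 * 1024.0) << " MB\n";
    return 0;
}
```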

Two-channel LSM data

Example of two-channel confocal data.

1.5 Pre-processing

Before the visualization itself it is important to pre-process the image stack to obtain better results.

One of the most frequently used methods is deconvolution. By applying it we get much sharper images with better contrast and signal-to-noise ratio. Deconvolution is the basis of highly detailed display of biological data at the cell level (Geert et al, 1998). Deconvolution algorithms are used in traditional fluorescence microscopy. In the case of confocal microscopy its use is still problematic due to the difficult setup of the point spread function (PSF) and the image recovery using the PSF. We also need to take into account the size of the images (e.g. 2048x2048x256x32).

In the case of confocal data, the pre-processing step also includes the calculation of additional slices using interpolation. Speed is not a problem here, as the processing is done only once. Various filters can also be applied to the data with the goal of reducing artifacts, sharpening the image or adjusting contrast or lighting (Bentum, 1996). It is important to realize that applying those filters dramatically affects the spatial reconstruction. That is why they are used only for rendering, while the core calculations are done on the raw data.
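
As a minimal sketch of the slice-interpolation idea, the following C++ code computes one additional slice between two neighbouring slices by linear interpolation along the z axis. The Slice type and the flat row-major layout are assumptions made for illustration, not the thesis' actual data structures.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One 2D slice stored as a flat row-major array of 16-bit samples.
struct Slice {
    int width;
    int height;
    std::vector<std::uint16_t> samples; // width * height values
};

// Compute an additional slice between neighbouring slices 'a' and 'b' by
// linear interpolation; t in [0,1] is the relative position between them.
Slice interpolateSlice(const Slice& a, const Slice& b, float t)
{
    Slice out{a.width, a.height, std::vector<std::uint16_t>(a.samples.size())};
    for (std::size_t i = 0; i < a.samples.size(); ++i) {
        float v = (1.0f - t) * a.samples[i] + t * b.samples[i];
        out.samples[i] = static_cast<std::uint16_t>(v + 0.5f); // round to nearest
    }
    return out;
}
```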

Before the 3D reconstruction it is possible to perform segmentation of the objects of interest - to identify the interesting voxels that will eventually be rendered. With the common color depth of 12 bits, we can use the remaining four bits (of 16 bits) to identify at most 16 different objects in the space (Bruckner, 2004). This additional information is used when rendering the final image: different objects are assigned different colors to highlight areas of interest. Note that in reality it is not trivial to set the borders between objects.
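
A sketch of this packing scheme is shown below. Keeping the 12-bit intensity in the low bits and the 4-bit object label in the top bits of a 16-bit voxel is an assumption made for illustration; the exact bit layout of the thesis implementation is not specified here.

```cpp
#include <cstdint>

constexpr std::uint16_t kIntensityMask = 0x0FFF; // low 12 bits: measured light
constexpr int kLabelShift = 12;                  // top 4 bits: object label (0..15)

// Combine a 12-bit intensity with a 4-bit segmentation label in one voxel.
inline std::uint16_t packVoxel(std::uint16_t intensity, std::uint8_t label)
{
    return static_cast<std::uint16_t>((intensity & kIntensityMask) |
                                      ((label & 0x0F) << kLabelShift));
}

inline std::uint16_t intensityOf(std::uint16_t voxel)
{
    return static_cast<std::uint16_t>(voxel & kIntensityMask);
}

inline std::uint8_t labelOf(std::uint16_t voxel)
{
    return static_cast<std::uint8_t>(voxel >> kLabelShift);
}
```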

1.6 Visualization challenges

In (Sakas et al, 1996) the authors summarized the characteristics of CLSM data and the challenges of visualizing it.

  1. Size of the data. Due to the large amount of data it is necessary to use effective processing methods.

  2. Low contrast, bad lighting, bad signal-to-noise ratio. The contrast decreases as the light has to pass through the inner materials. Straightforward segmentation of the objects is practically impossible because all techniques (color difference, homogeneous regions, thresholding, etc.) are based on a binary decision about whether a voxel belongs to the structure or not.

  3. Different resolution in the X, Y and Z axes. The visualization method has to work with blocks instead of cubic voxels. This brings artifacts due to interpolation and larger data (which are hard to process on the available computers).

  4. We are exploring unknown structures. The visualization method should reduce artifacts as much as possible. The viewer often has no experience with the structure (what is and what is not displayed), so rendered artifacts have a bad impact on the correct understanding of the object and its inner structures. Thanks to a good lighting model we can obtain clear information about the object. The overall rendering speed is also important: usually the parameters of the renderer are set following a try-error-fix workflow, and this procedure can take a long time if the user has to wait several minutes to get a new picture. In general, exploring an unknown (and potentially spatially very complex) object requires views from various locations and different settings of the lighting and visualization methods.

Other important visualization problems include the asymmetric shape of the PSF, which leads to 3D shape distortion, and the attenuation of the fluorescent material.

Chapter 2 - Volume Visualization

Volume visualization is about exploring and understanding multidimensional data. In his work, Bentum separated the process of volume visualization into three main steps (Bentum, 1996).

The first step is pre-processing.

The second step depends on the technique used. Two-dimensional pictures (slices) are mapped onto a three-dimensional matrix; this way the 3D data are created.

  • Surface-based techniques create a temporary geometric surface. In the 3D data they search for points and borders of the surface and create the surface using interpolation. This is an indirect method.

  • Volume-based techniques render the data directly - they make use of the full volume information. It is possible to assign a color and an opacity to every sample.

The third step is the rendering of the final image. A temporary surface is displayed using classic methods of computer graphics. Volume data with information about color and opacity are rendered by true volume rendering, for example ray tracing.

2.1 Visualization methods

There are many visualization methods used in practice with greater or smaller success. It is possible to categorize them by various criteria (Elvins, 1992). One classification is common: algorithms rendering surfaces and volume rendering algorithms.

2.1.1 Algorithms rendering surfaces

These methods try to approximate the surface of the volume data using geometric primitives and visualize those with well-known methods of computer graphics. The 3D graphics accelerators of common hardware can also be used. The best-known algorithms are

  • Contour tracing (Keppel, 1975)

  • Marching cubes (Lorensen et al, 1987)

  • Marching tetrahedra (Shirley et al, 1990)

  • Dividing cubes (Cline et al, 1988)

  • Opaque cubes, Cuberille (Herman et al, 1979)

A detailed overview of the algorithms can be found in (Elvins, 1992) and (Sramek, 1998). The algorithms create a temporary surface that can be rendered quickly thanks to the data reduction and the use of the latest graphics accelerators. On the other hand, we lose the information about the inner structures. In general, for each sample a test is executed that tells whether it belongs to the object or not. Low-resolution data or amorphous data (fog, etc.) produce bad results in the form of holes that do not exist in the original object or in the form of false surfaces.

2.1.2 Volume algorithms

Volume methods use the full volume information to generate the final image and are not dependent on the complexity of the scene. Every sample is assigned a color and an opacity; these are later merged into the final image. The whole 3D matrix of samples is used to calculate the image, which means the methods are very time and memory consuming. Rendering every frame requires traversing the whole volume of data. On the other hand, we can display any inner detail of the scene. In general, volume algorithms give much more information than surface-based algorithms.

Aiming for better performance, many optimizations have been introduced into volume algorithms. Random sampling or low-resolution sampling can quickly generate a view in low resolution; once the user is happy with the view, the parameters are set to produce the high-resolution image. This approach is called progressive refinement. Another option is to use special hardware. An overview of some of the possible improvements is discussed in chapter 2.4 and in detail in chapter 4.
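
As a rough sketch of the progressive-refinement idea, the loop below renders with a coarse pixel stride first and refines towards full resolution. The renderPixel and fillBlock callbacks are hypothetical placeholders for the renderer's own routines, not part of the thesis implementation.

```cpp
#include <functional>

// Progressive refinement: render every 'stride'-th pixel, fill the surrounding
// block with that colour, then repeat with smaller strides down to 1.
void renderProgressively(int width, int height,
                         const std::function<unsigned(int, int)>& renderPixel,
                         const std::function<void(int, int, int, unsigned)>& fillBlock)
{
    for (int stride = 8; stride >= 1; stride /= 2) {
        for (int y = 0; y < height; y += stride)
            for (int x = 0; x < width; x += stride)
                fillBlock(x, y, stride, renderPixel(x, y));
        // An interactive viewer would abort here when the user changes the
        // viewing parameters and restart at the coarsest level.
    }
}
```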

Volume algorithms can be divided by the function used for data classification (Sramek, 1998):

  • binary
  • probabilistic

Binary algorithms cover every voxel totally or not at all. They are surface-oriented algorithms.

Probabilistic algorithms (also known as semi-transparent rendering algorithms) assign voxels a percentual weight. They are based on collecting values from all samples along a ray going through a pixel. Many authors in the volume rendering area focus only on these algorithms. Probabilistic algorithms belong to the volume-oriented techniques.
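
The contrast between the two classification functions can be sketched as follows. The threshold and ramp bounds are arbitrary illustrative values; real transfer functions are usually user-defined lookup tables.

```cpp
#include <algorithm>
#include <cstdint>

// Binary classification: a voxel either belongs to the object completely
// (opacity 1) or not at all (opacity 0).
inline float binaryOpacity(std::uint16_t value, std::uint16_t threshold)
{
    return value >= threshold ? 1.0f : 0.0f;
}

// Probabilistic (semi-transparent) classification: a simple linear ramp maps
// the sample value to a weight in [0,1].
inline float rampOpacity(std::uint16_t value, std::uint16_t low, std::uint16_t high)
{
    float t = (static_cast<float>(value) - low) / (high - low);
    return std::min(1.0f, std::max(0.0f, t));
}
```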

Another categorization is by the order in which the voxels are processed; in other words, it is a classification by algorithm domain (Zuiderveld, 1995).

Algorithms working in image space

For each pixel of the final image the algorithm searches the scene for the matching voxels, and the final color is calculated from them. Usually the position of a matched voxel does not lie exactly on a grid point of the matrix, so the value used for the color calculation is obtained by interpolation. Into this category we can put the following algorithms (a condensed sketch of such a ray-casting loop follows the list):

  • Ray Tracing, Ray Casting (Levoy, 1988)
  • Sabella method (Sabella, 1988)
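
The sketch below shows the core of an image-space ray caster with front-to-back compositing and early ray termination. The sampleVolume and classify functions are simple placeholders (a soft sphere and a grey ramp) standing in for trilinear interpolation of the real data set and a user-defined transfer function; they are assumptions made only so the example is self-contained.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder data source: a soft sphere of radius 0.4 around the origin.
float sampleVolume(const Vec3& p)
{
    float r = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return std::max(0.0f, 1.0f - r / 0.4f);
}

// Placeholder transfer function: grey colour, opacity proportional to density.
std::array<float, 4> classify(float s) // {r, g, b, alpha}
{
    return {s, s, s, 0.1f * s};
}

// Cast one ray for one image pixel and composite the samples front to back.
// 'origin' and 'dir' describe the ray, 'tMax' the exit distance, 'dt' the
// sampling step. The loop stops early once the accumulated opacity is near 1.
std::array<float, 3> castRay(const Vec3& origin, const Vec3& dir, float tMax, float dt)
{
    std::array<float, 3> colour{0.0f, 0.0f, 0.0f};
    float alpha = 0.0f;
    for (float t = 0.0f; t < tMax && alpha < 0.99f; t += dt) {
        Vec3 p{origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z};
        std::array<float, 4> c = classify(sampleVolume(p));
        float w = (1.0f - alpha) * c[3]; // weight by the remaining transparency
        colour[0] += w * c[0];
        colour[1] += w * c[1];
        colour[2] += w * c[2];
        alpha += w;
    }
    return colour;
}
```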

Algorithms working in object space

For each voxel in object space the algorithm searches for the pixels in the final image that are influenced by that voxel. Splatting is a technique that traverses object space: on each voxel a convolution with a 3D reconstruction filter is applied, and the value of the filtered points is accumulated in image space. The processing can be imagined as throwing snowballs at a wall: the value at the center of the hit is the biggest and decreases with increasing distance from the hit point (a small sketch of this footprint accumulation follows the list). Algorithms:

  • V-buffer (Upson, 1989)
  • Splatting (Westover, 1990)
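
A minimal sketch of the footprint accumulation is shown below. The Gaussian footprint, its radius, the single-channel image and the pre-projected voxel position are illustrative simplifications; a real splatter derives the footprint from the reconstruction filter and the viewing transform.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Single-channel accumulation buffer for the final image (row-major).
struct Image {
    int width, height;
    std::vector<float> pixels; // width * height values
};

// Splat one voxel: its projected position (px, py) spreads its value over the
// nearby pixels with a Gaussian footprint, strongest at the centre and falling
// off with distance - the "snowball on a wall" from the text.
void splatVoxel(Image& img, float px, float py, float value,
                int radius = 2, float sigma = 1.0f)
{
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int x = static_cast<int>(px) + dx;
            int y = static_cast<int>(py) + dy;
            if (x < 0 || y < 0 || x >= img.width || y >= img.height)
                continue;
            float d2 = static_cast<float>(dx * dx + dy * dy);
            float weight = std::exp(-d2 / (2.0f * sigma * sigma));
            img.pixels[static_cast<std::size_t>(y) * img.width + x] += weight * value;
        }
    }
}
```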

Hybrid algorithms

There are also hybrid algorithms that make use of both approaches. An example is the Shear-Warp algorithm (Lacroute et al., 1994), which is considered the fastest pure software rendering algorithm (Bruckner, 2004). The idea is to shear the slices in object space so that the mapping onto the 2D plane (the final image) is easy and fast. The samples of the rays lie within the slices, so traversing the data uses simple addressing and a 2D re-sampling filter. After the projection, the deformed picture is corrected by warping the image. The steps of the algorithm are best explained by the picture.

Steps of the Shear-Warp algorithm

Steps of the Shear-Warp algorithm. A picture and more description are available online.

The algorithm is very fast and works on common hardware. On the other hand it produces artifacts, is view dependent and produces images of low quality. The lower quality is caused by (Bentum, 1996):

  • The algorithm consists of two re-sampling steps. Multiple re-sampling can cause loss of detailed information and blurring.
  • The reconstruction filter is only 2D. Inside the slices only bilinear interpolation is used, and between slices only nearest-neighbor interpolation is used. This is the main disadvantage of the shear-warp algorithm.
  • The number of rays is equal to the number of voxels of a single slice, so the algorithm produces aliasing due to the re-sampling.

Some improvements have been introduced (for example the use of min-max octrees), but the final image quality is still behind algorithms like ray casting.

2.1.3 Comparison of visualization methods

Nowadays the trend is to avoid surface-based algorithms and make use of volume rendering, which can be tuned to provide the same or better results than surface-based algorithms. Fast algorithms like Shear-Warp produce images of low quality. Binary volume algorithms are easy to implement and do not consume as much memory as probabilistic algorithms; on the other hand, they suffer from the same problems as surface-based algorithms. With binary classification (testing whether a sample belongs to the surface) it is possible to produce artificial surfaces or non-existent holes. Ray casting and splatting produce images of the same quality, and the rendering time depends on the data and its classification. The implementation of ray casting is straightforward but includes slow resampling. For the algorithms working in object space it is necessary to apply anti-aliasing techniques, and it is easier to implement anti-aliasing with ray casting. Its ability to make use of the full volume data, its straightforward implementation, its well-known optimizations and other reasons that we discuss in the next chapters (such as being easy to parallelize) made our decision to implement the ray-casting algorithm in our program clear. A good summary and detailed comparison of various algorithms for volume visualization can be found in (Elvins, 1992).

2.2 Basics

This chapter touches on topics like coordinate systems, classification, shading, resampling principles, interpolation and composition. These techniques are the basics of ray casting/ray tracing. The following parts are based on (Zara, 1998), (Bentum, 1996), (Elvins, 1992).

2.2.1 Multidimensional representation of the data