# VolumeRender

## DESCRIPTION

VolumeRender combines two volume rendering algorithms into a single module. The algorithms offer different speed and quality advantages, so the user can trade off between interactivity and quality.

The "Transform" renderer gives interactive rendering speeds for moderately sized volumes. It works by compositing axis-aligned slices of the lattice into an intermediate buffer and then drawing the result to the screen.

The technique of drawing a volume in slices back to front (or front to back) using compositing yields a reasonably good rendering. For an orthographic projection, these slices are parallelograms, which unfortunately are somewhat difficult to draw. Systems capable of compositing rectilinear images are much more common than those capable of drawing images as parallelograms, although such systems exist and have been exploited for volume rendering.
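The back-to-front compositing step can be sketched for a single pixel as follows. This is a minimal sketch, assuming premultiplied (color, opacity) samples per slice; the function name and numbers are illustrative, not from the module:

```python
# Back-to-front "over" compositing for one pixel. Each sample is a
# (premultiplied color, opacity) pair, ordered from the farthest slice
# to the nearest; the values below are made up for illustration.
def composite_back_to_front(samples):
    color, alpha = 0.0, 0.0
    for c, a in samples:
        # Each new slice sits in front of everything accumulated so far.
        color = c + color * (1.0 - a)
        alpha = a + alpha * (1.0 - a)
    return color, alpha

# A fully opaque near slice hides everything behind it.
assert composite_back_to_front([(0.3, 0.4), (1.0, 1.0)]) == (1.0, 1.0)
```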

It turns out that it is possible to composite rectilinear images and linearly warp the result into the correct geometry. The math looks something like this:

Assume A, B, C, and D are 2x2 submatrices of the 4x4 orthographic view matrix. We can decompose the view matrix in the following way:

```
| A  B |   | A  B |   | Ainv  0 |   | A  0 |   | I      B |   | A  0 |
| C  D | = | C  D | * | 0     I | * | 0  I | = | CAinv  D | * | 0  I |
```

In the last formulation, the first multiplicand looks like:

```| 1  0  a  0 |
| 0  1  b  0 |
| dx dy dz 0 |
| x  y  z  1 |
```

Passing a slice of the volume through this matrix results in a rectilinear mapping to the screen. The placement of the image depends on x, y, dx, and dy.

The key is that the second multiplicand is a pure 2D transformation which does not depend on z. This means that the result of images composited according to the above matrix can be warped into the correct mapping with a simple 2D linear geometric operation.
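The factorization above can be checked numerically. The sketch below uses arbitrary 2x2 blocks (the values are made up) and confirms that the sheared matrix times the 2D warp reproduces the full view matrix:

```python
# Verify the block factorization  [A B; C D] = [I B; C*Ainv D] * [A 0; 0 I]
# using small hand-picked 2x2 blocks (hypothetical values, not from the module).

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(M):
    # Inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def block4(TL, TR, BL, BR):
    # Assemble a 4x4 matrix from four 2x2 blocks.
    return [TL[0] + TR[0], TL[1] + TR[1], BL[0] + BR[0], BL[1] + BR[1]]

A = [[2.0, 1.0], [0.0, 3.0]]
B = [[1.0, 0.0], [2.0, 0.0]]
C = [[4.0, 5.0], [6.0, 7.0]]
D = [[1.0, 0.0], [0.0, 1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
Z = [[0.0, 0.0], [0.0, 0.0]]

M      = block4(A, B, C, D)                   # full view matrix
shear  = block4(I, B, matmul(C, inv2(A)), D)  # composite in sheared space
warp   = block4(A, Z, Z, I)                   # pure 2D warp, independent of z
result = matmul(shear, warp)

assert all(abs(M[i][j] - result[i][j]) < 1e-9
           for i in range(4) for j in range(4))
```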

Express graphics systems (XS24, Elan, Extreme) have an unsupported image compositing feature that this module can use to increase performance. To activate Express acceleration, set the environment variable CXEXPRESS to 1 (for example, `setenv CXEXPRESS 1`).

The "Slicer" algorithm produces high quality images with longer rendering times. It renders volume data by computing slices of the volume parallel to the screen, sampling the voxel data on pixel centers, and compositing the pixel values from back to front.

The slice intersection and voxel sampling are performed on the host CPU and consume most of the rendering time. A plane of constant distance from the eye is intersected with the faces of the volume, forming a simple polygon. The polygon is projected onto the screen, and each pixel inside the polygon is then projected back into the volume to determine which voxel it samples. The resulting voxel value is either the nearest voxel value, or it is found by trilinear interpolation from the 8 voxel corners enclosing the sample. The voxel value is then passed through a color lookup table to obtain color and opacity information, which is composited using the graphics hardware.
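The trilinear sampling step can be sketched in pure Python. This is an illustrative implementation of standard trilinear interpolation, not the module's code; the tiny 2x2x2 volume is made up:

```python
# Trilinear interpolation: blend the 8 voxel values surrounding a
# fractional sample position inside a voxel cell.
def trilinear(vol, x, y, z):
    """Sample vol (indexed vol[z][y][x]) at a fractional position."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    def v(i, j, k):
        return vol[z0 + k][y0 + j][x0 + i]
    # Interpolate along x, then y, then z.
    c00 = v(0, 0, 0) * (1 - fx) + v(1, 0, 0) * fx
    c10 = v(0, 1, 0) * (1 - fx) + v(1, 1, 0) * fx
    c01 = v(0, 0, 1) * (1 - fx) + v(1, 0, 1) * fx
    c11 = v(0, 1, 1) * (1 - fx) + v(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

# A 2x2x2 volume whose value grows with x: the cell center samples to 50.
vol = [[[0, 100], [0, 100]], [[0, 100], [0, 100]]]
assert trilinear(vol, 0.5, 0.5, 0.5) == 50.0
assert trilinear(vol, 0.0, 0.0, 0.0) == 0
```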

The work performed (and thus the time consumed) by this algorithm is proportional to the number of slices computed and the size of the rendered image. The number of slices required depends on the voxel aspect ratio and the orientation of the volume. The number of pixels that must be sampled is proportional to the size of the volume in the window. For a constant window size and an NxNxN volume, the rendering time varies with N. For a constant volume size, the rendering time varies with the area of the window.
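The proportionalities above can be expressed as a toy cost model (the function is this sketch's own; the absolute numbers are meaningless, only the ratios matter):

```python
# Toy cost model: Slicer work is proportional to the number of slices
# times the rendered image size. Units are arbitrary.
def slicer_work(num_slices, window_width, window_height):
    return num_slices * window_width * window_height

# Constant window, N x N x N volume: slices grow with N, so time grows with N.
assert slicer_work(200, 512, 512) == 2 * slicer_work(100, 512, 512)
# Constant volume: doubling the window side quadruples the time.
assert slicer_work(100, 1024, 1024) == 4 * slicer_work(100, 512, 512)
```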

The Slicer algorithm supports parallel execution through IRIX lightweight processes (sproc(2)) to increase rendering speed. From 85% to 98% of the work can be done in parallel, depending on volume and image size. As a result, one can expect speedups of 4x to 7x on an 8-processor system.
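The quoted figures are consistent with Amdahl's law (this connection is an inference by this sketch, not stated in the original text):

```python
# Amdahl's law: speedup achievable when a fraction p of the work
# is parallelizable across n processors.
def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# 85%-98% parallel work on 8 processors brackets the quoted 4x-7x range.
assert 3.8 < amdahl(0.85, 8) < 4.0
assert 6.9 < amdahl(0.98, 8) < 7.1
```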

The standard interactive controls present in the Render module are available. While the volume is being moved interactively, a lower resolution version is rendered using only a few slices and no interpolation, and a bounding box is displayed. The bounding box is red, green, and blue along the minimum X, Y, and Z axes, respectively; the colors blend to white at the maximum X, Y, and Z extents of the volume. When the mouse button is released, the volume is rendered at the desired quality with the selected interpolation.

## INPUTS

Port: Volume
Type: Lattice
Optional: This port is optional.
Constraints: 3-D
Constraints: 1-vector
Constraints: byte
Constraints: uniform
The volume data to be rendered.

Port: Colormap
Type: Lattice
Optional: This port is optional.
Constraints: 1-D
Constraints: 4-vector
Constraints: float
An array that defines the color and opacity for each voxel value. If no colormap is present, the voxel value is used for the red, green, blue and opacity components.
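The lookup this port implies can be sketched as follows. The 256-entry map, the function name, and the grayscale fallback scaling are this sketch's assumptions; only the behavior described above (per-voxel RGBA lookup, voxel value reused for all components when no colormap is connected) comes from the text:

```python
# Classify a byte voxel value into (R, G, B, opacity) via a hypothetical
# 256-entry colormap of 4-vectors.
def classify(voxel, colormap=None):
    if colormap is None:
        # No colormap connected: the voxel value drives all four components.
        v = voxel / 255.0
        return (v, v, v, v)
    return tuple(colormap[voxel])

# A made-up red ramp whose opacity grows with voxel value.
ramp = [(i / 255.0, 0.0, 0.0, i / 255.0) for i in range(256)]
assert classify(255, ramp) == (1.0, 0.0, 0.0, 1.0)
assert classify(0) == (0.0, 0.0, 0.0, 0.0)
```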

Port: Camera
Type: Geometry
Optional: This port is optional.
Camera position, for example, from another rendering module.

Port: Snap On Redraw
Type: Parameter
Optional: This port is optional.
When non-zero, the rendered image is output as a lattice on the Snapshot output. Used for animation.

Port: Interpolation
Type: Parameter
Optional: This port is optional.
If zero, no interpolation is used. The nearest voxel value to a sample will be used. If non-zero, trilinear interpolation will be used for subvoxel pixel positions.

Port: Image Quality
Type: Parameter
Optional: This port is optional.
Increase or decrease the number of slices to obtain higher image quality or faster rendering. The default spacing between slices is chosen to most efficiently sample the voxels, depending on the voxel aspect ratio and the orientation of the volume. This defines the image quality of 1.0.

Setting the quality to 2.0 halves the spacing of the slices, resulting in a higher quality image that takes twice as long to render. Setting the quality to 0.5 doubles the slice spacing, resulting in a lower quality image that renders twice as fast. Image Quality values of less than 1.0 will miss data; they are mainly useful to speed up rendering while previewing.

Use care when adjusting the Image Quality. Changing the slice spacing requires adjusting the opacity values so that the overall image intensity remains constant: if there are more slices, each slice must be made less opaque. Opacity values are stored in a single byte for compositing by the hardware, so they have limited precision. Image Quality values greater than 1 may cause small opacity values to be rounded to zero, and values less than 1 may cause large opacity values to be clamped at full opacity.

Port: Parallel Threads
Type: Parameter
Optional: This port is optional.
If a parameter is present on this port, it defines the number of processes (threads) to be used by the algorithm. (The default is 1, giving non-parallel execution.) If there is no parameter on the port and the environment variable VOLUMERENDER_NUM_THREADS is set to an integer, that number of processes is used. If neither is the case and the host machine has more than one physical processor, the number of processes is set automatically: if the host has 2-5 processors, 2 processes are used; if it has 6 or more, 3 processes are used.
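The selection order described above can be sketched as follows. Only the VOLUMERENDER_NUM_THREADS variable and the processor thresholds come from the text; the function name and parameter handling are this sketch's own:

```python
import os

# Thread-count selection order: explicit parameter, then the
# VOLUMERENDER_NUM_THREADS environment variable, then the processor count.
def default_num_threads(param=None, nprocs=1):
    if param is not None:
        return param
    env = os.environ.get("VOLUMERENDER_NUM_THREADS")
    if env is not None:
        return int(env)
    if nprocs >= 6:
        return 3
    if nprocs >= 2:
        return 2
    return 1  # single processor: non-parallel execution

os.environ.pop("VOLUMERENDER_NUM_THREADS", None)  # ensure a clean environment
assert default_num_threads(param=4) == 4
assert default_num_threads(nprocs=8) == 3
assert default_num_threads(nprocs=4) == 2
assert default_num_threads(nprocs=1) == 1
```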

## WIDGETS

Port: Window
Type: Drawing Area

Port: Algorithm
Selects the rendering algorithm, as described above.

## OUTPUTS

Port: Snapshot
Type: Lattice
Constraints: 2-D
Constraints: 3-vector
Constraints: byte
Constraints: uniform
Screen snapshot of the rendered volume.

## KNOWN PROBLEMS

The Transform algorithm works by rendering into the frame buffer and reading the result back out for further processing. If the window is partially covered or is too small, the result of the frame buffer read will not be what was written.

The Slicer algorithm may take a very long time for large volumes, large image sizes, or high image quality. There should be a way of interrupting the processing.

The number of Parallel Threads cannot be decreased, unless the input volume or rendering algorithm is also changed. If you wish to decrease the number of threads, switch to the Transform algorithm, change the number of threads, then switch back to the Slicer algorithm.

These algorithms can only render orthographic projections. If the input camera has a perspective projection, the rendered image will be incorrect; switch the input camera to orthographic mode to correct the output.