Comparison of IBFV, LEA, UFAC, and AUFLIC / UFLIC in temporal-spatial coherence (click for the animations)
IBFV [1], LEA [2], UFAC [3], and AUFLIC / UFLIC [4] are the most competitive methods for visualizing unsteady flow fields, each with its own advantages and disadvantages. Each IBFV frame is the result of Line Integral Convolution [5] of a sequence of images along pathlines. First, the exponential decay convolution filter that IBFV uses to low-pass filter noise textures is well suited for introducing temporal coherence into the animation, but the spatial coherence it builds within each frame may be insufficient; flow directions are therefore either noisy or artificially blurred [3] as the texture scale varies (Figure 1). Second, increasing (unsteady) flow complexity greatly compromises performance unless the field is heavily sub-sampled to create a warping mesh, as was done in [1] and [6] to achieve high frame rates. Third, 3D IBFV is limited in the range of velocities it can display, as stated in [7]. Fourth, 3D IBFV handles only time-independent 3D flows, since time-varying flows require a continuous update of the velocity texture [7], which is difficult to achieve. Finally, IBFV depends on hardware capabilities coupled with single-step forward integration to achieve high frame rates.
Figure 1. IBFV images produced by Jarke J. van Wijk (http://www.win.tue.nl/~vanwijk/ibfv/, posted on Dec 22, 2003): (a) a smaller texture scale is used (169k); (b) a larger texture scale is used (171k). The flow directions are either noisy in (a) or blurred in (b) as the texture scale varies.
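To make the exponential-decay convolution concrete, the minimal sketch below (Python / NumPy, not the code of [1]) expresses one pixel of an IBFV-style frame as a weighted sum of successive noise textures along the pathline ending at that pixel, with the weights alpha * (1 - alpha)^k implied by repeated alpha-blending. The function name ibfv_pixel, the frozen (steady) velocity field, and the nearest-neighbor sampling are simplifying assumptions for illustration only.

```python
import numpy as np

def ibfv_pixel(noise_seq, vx, vy, x, y, alpha=0.1, dt=1.0, steps=30):
    """Value of one IBFV-style pixel, written as a convolution of successive
    noise textures along the pathline ending at (x, y), using the
    exponential-decay weights alpha * (1 - alpha)**k that repeated
    alpha-blending implies.  noise_seq[k] is the noise injected k frames ago;
    vx, vy are 2D velocity components (held fixed here for brevity)."""
    value, px, py = 0.0, float(x), float(y)
    for k in range(steps):
        h, w = noise_seq[k].shape
        ix, iy = int(np.clip(px, 0, w - 1)), int(np.clip(py, 0, h - 1))
        value += alpha * (1.0 - alpha) ** k * noise_seq[k][iy, ix]
        # Step backward along the pathline to the previous sample point.
        px -= vx[iy, ix] * dt
        py -= vy[iy, ix] * dt
    return value
```

Because the weights decay quickly, only a short stretch of the pathline contributes appreciably to each pixel, which is one way to picture why the per-frame spatial coherence can be insufficient.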
LEA also employs single-step integration, though backward, to access the previous frame for advected texture values. It resorts to blending successive textures to represent spatial correlation along a dense set of pathline segments that approximate short streamlines, but this exponentially decreasing temporal filter does not produce sufficient spatial coherence either [3]. Although LIC is applied to suppress the aliasing artifacts created where the noise is advected more than one cell per integration step, only short kernel lengths can be used, since the streamlines would otherwise deviate significantly from the actual pathlines, causing flashing in the animation and degraded image contrast. Thus there exists a trade-off between the spatial coherence in an image and the temporal coherence in the animation. Flow directions are noisy or obscured in low-magnitude areas, since the length of the streaks is proportional to the velocity magnitude (Figure 2).
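As an illustration of the single-step backward scheme described above, the following sketch (again Python / NumPy, not the authors' code) advects a texture by one backward integration step per texel and blends in fresh noise. The function name lea_step and the blending weight beta are assumptions; real LEA additionally tracks per-texel fractional offsets and may apply a short-kernel LIC pass to the result.

```python
import numpy as np

def lea_step(prev_tex, noise, vx, vy, dt=1.0, beta=0.15):
    """One LEA-style iteration: each texel looks one integration step backward
    into the previous texture (Lagrangian advection), and the advected result
    is blended with fresh noise (exponentially decreasing temporal filter)."""
    h, w = prev_tex.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Single-step backward integration into the previous frame.
    src_x = np.clip(np.rint(xs - vx * dt), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys - vy * dt), 0, h - 1).astype(int)
    advected = prev_tex[src_y, src_x]
    # Successive blending realizes the exponential temporal filter.
    return (1.0 - beta) * advected + beta * noise
```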
UFAC was derived from a generic spacetime-coherent framework, which provides explicit, direct, and separate control over temporal coherence and spatial coherence and can emulate IBFV, LEA, and UFLIC. However, as stated in [3], it still fails to resolve the inconsistency between temporal and spatial patterns, since the evolution of streamlines along pathlines might not lead to the streamlines of the subsequent time step. Its ad hoc remedy is limited to an explicit control over the length of the spatial structures, based on the flow unsteadiness, in order to retain temporal coherence. In regions where the flow changes rapidly, the correlated segments along streamlines must be very short and may even degenerate to points (particles) to suppress flickering, which inevitably hurts spatial coherence. As a result, high spatial coherence in a frame causes flickering artifacts in the animation, while high temporal coherence in the animation causes a noisy pattern in the constituent frames. In short, UFAC cannot achieve both high temporal coherence and high spatial coherence; one is achieved at the cost of the other (Figure 3). Finally, UFAC is limited to DirectX 9.0 compliant GPUs, or OpenGL with fragment-program (pixel shader) support.
Figure 3. UFAC images: (a) UFAC-emulated LEA (927k); (b) UFAC without velocity masking (983k); (c) application of long-kernel LIC filtering (983k).
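The trade-off discussed above can be pictured as a per-pixel choice of streamline (LIC) kernel length driven by local flow unsteadiness: the faster the field changes in time, the shorter the correlated segment. The sketch below is a hypothetical illustration of that idea, not UFAC's actual formula; the unsteadiness measure, the exponential mapping, and the parameters max_len and scale are all assumptions.

```python
import numpy as np

def streak_length_from_unsteadiness(v_prev, v_cur, dt=1.0, max_len=40.0, scale=5.0):
    """Hypothetical per-pixel control of the LIC kernel length from local flow
    unsteadiness.  v_prev and v_cur are (h, w, 2) velocity fields at two
    consecutive time steps; the returned (h, w) array gives a kernel length
    that shrinks toward a point where the flow changes rapidly."""
    unsteadiness = np.linalg.norm((v_cur - v_prev) / dt, axis=-1)
    # Long kernels (strong spatial coherence) only where the flow is nearly steady.
    return max_len * np.exp(-scale * unsteadiness)
```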
AUFLIC / UFLIC possesses the advantage of conveying very high temporal and spatial coherence by scattering fed-forward texture values. Value scattering along a long pathline over several time steps not only correlates a considerable number of intra-frame pixels to establish strong spatial coherence, but also correlates sufficient inter-frame pixels to build tight temporal coherence. Texture feed-forwarding, which takes an output frame (after noise-jittered high-pass filtering) as the input texture for the next frame, establishes an even closer correlation between consecutive frames to further enhance temporal coherence. Flow directions are clearly depicted in individual images for instantaneous flow investigation, and the animation is also quite smooth (Figure 4). The inconsistency between temporal and spatial patterns in IBFV, LEA, and UFAC is successfully resolved by scattering fed-forward texture values in AUFLIC / UFLIC. Also, AUFLIC / UFLIC can be easily extended to time-varying 3D flows [8], [9].
Figure 4. AUFLIC / UFLIC images: (a) vortex data set (17.00M); (b) weather data set (6.94M).
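A rough picture of the value-scattering process is sketched below under strong simplifications: a frozen velocity field, nearest-neighbor bucketing, and uniform per-sample weights rather than UFLIC's time-accurate weighting. The function uflic_scatter and its parameters are illustrative names, not the published algorithm [4], [8].

```python
import numpy as np

def uflic_scatter(input_tex, vx, vy, n_frames=4, steps_per_frame=10, dt=0.1):
    """Sketch of UFLIC-style value scattering: every pixel feeds its value
    forward along its pathline, accumulating contributions (and weights) into
    buckets for the next n_frames output frames.  The returned frames are the
    weight-normalized accumulation buffers.  In a full pipeline, each output
    frame would be high-pass filtered and noise-jittered, then fed forward as
    the input_tex of the next scattering pass."""
    h, w = input_tex.shape
    accum = np.zeros((n_frames, h, w))
    weight = np.zeros((n_frames, h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    for y0, x0 in zip(ys.ravel(), xs.ravel()):
        px, py, val = float(x0), float(y0), input_tex[y0, x0]
        for f in range(n_frames):
            for _ in range(steps_per_frame):
                ix = int(np.clip(round(px), 0, w - 1))
                iy = int(np.clip(round(py), 0, h - 1))
                accum[f, iy, ix] += val          # scatter the fed-forward value
                weight[f, iy, ix] += 1.0
                # Advance along the pathline (frozen field used here for brevity).
                px += vx[iy, ix] * dt
                py += vy[iy, ix] * dt
    return accum / np.maximum(weight, 1e-6)
```

Because each seed contributes to many pixels within a frame and to several future frames, the same scattering pass is what ties intra-frame (spatial) and inter-frame (temporal) coherence together.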
REFERENCES
[1] Jarke J. van Wijk, "Image Based Flow Visualization," Proceedings of ACM SIGGRAPH 02, July 21-26, San Antonio, Texas, pp. 745-754, 2002.
[2] Bruno Jobard, Gordon Erlebacher, and M. Yousuff Hussaini, "Lagrangian-Eulerian Advection of Noise and Dye Textures for Unsteady Flow Visualization," IEEE Transactions on Visualization and Computer Graphics, Vol. 8, No. 3, pp. 211-222, July-September 2002.
[3] Daniel Weiskopf, Gordon Erlebacher, and Thomas Ertl, "A Texture-Based Framework for Spacetime-Coherent Visualization of Time-Dependent Vector Fields," Proceedings of IEEE Visualization 03, Oct 19-24, Seattle, Washington, pp. 107-114, 2003.
[4] Han-Wei Shen and David L. Kao, "A New Line Integral Convolution Algorithm for Visualizing Time-Varying Flow Fields," IEEE Transactions on Visualization and Computer Graphics, Vol. 4, No. 2, pp. 98-108, April-June 1998.
[5] Brian Cabral and Leith (Casey) Leedom, "Imaging Vector Fields Using Line Integral Convolution," Proceedings of ACM SIGGRAPH 93, Aug 2-6, Anaheim, California, pp. 263-270, 1993.
[6] Robert S. Laramee, Bruno Jobard, and Helwig Hauser, "Image Space Based Visualization of Unsteady Flow on Surfaces," Proceedings of IEEE Visualization 03, Oct 19-24, Seattle, Washington, pp. 131-138, 2003.
[7] Alexandru Telea and Jarke J. van Wijk, "3D IBFV: Hardware-Accelerated 3D Flow Visualization," Proceedings of IEEE Visualization 03, Oct 19-24, Seattle, Washington, pp. 233-240, 2003.
[8] Zhanping Liu and Robert J. Moorhead II, "Visualizing Time-Varying Three-Dimensional Flow Fields Using Accelerated UFLIC," The 11th International Symposium on Flow Visualization, Aug 9-12, Notre Dame, Indiana, pp. 1-10, 2004.
[9] Zhanping Liu and Robert J. Moorhead II, "A Texture-Based Hardware-Independent Technique for Time-Varying Volume Flow Visualization," Journal of Visualization, Vol. 8, No. 3, pp. 235-244, 2005.