Technical Report CSTN-101

Visualising Volumetric Fourier Transforms of Asymmetric 3D Growth Models

Ken A. Hawick, Arno Leist and Daniel P. Playne

Computer Science, Institute for Information and Mathematical Sciences, Massey University, North Shore 102-904, Auckland, New Zealand

Email: { k.a.hawick, a.leist, d.p.playne }@massey.ac.nz Tel: +64 9 414 0800 Fax: +64 9 441 8181

September 2011

ABSTRACT

Volumetric visualisation of Fourier spectral data is a powerful but computationally intensive tool for understanding growth models. Visualising frequency-domain spectral representations of simulation model variables that are normally defined on a spatial mesh can be a useful aid to understanding time evolutionary behaviour.

While it is common to study the symmetric or spherically averaged FFT of models, we consider rendering the full 3D FFT representations in interactive time. We use volume rendering methods to study the FFT and also consider asymmetric model cases where spherical averaging would be misleading. We discuss visualisation of Fourier-transformed data fields and some techniques for analysing and interpreting them. We present some application examples and implementations of rendering iso-surfaces in the spectral scattering representation of the normal and small-world rewired Ising model using Graphical Processing Units (GPUs) that allow both simulation and rendering to be achieved in interactive time.

KEY WORDS

3D Fast Fourier Transforms; spectral model representation; GPU computing; volumetric visualisation.

1 Introduction

A commonly recurring problem in scientific visualisation is to “see inside” a block of three-dimensional data that is associated with a simulation model. Many physical and engineering models in materials science, fluid dynamics, chemical engineering and other application areas including medical reconstruction [1] fit this problem pattern.

Volume rendering [2–4] is a relatively long-standing problem in computer graphics [5], and a number of approaches for “seeing into volumes” [6] and for providing visual cues into volumes [7] have been explored. Approaches range from identifying the surfaces present [8] in the interior of a data volume [9] to colouring or texturing [10] them accordingly.

It is often hugely useful to a scientific modeller to be able to visualise an interactively running model and experiment with different parameter combinations prior to committing computational resources to detailed statistical investigations. The visualisation often leads to some insight into a particular phenomenon that bears close numerical study, or might lead to a hypothesis or explanation for a numerical observation such as a phase transition or other critical phenomenon. Some data models require visualisation of complex data fields that may have vector elements [11] or complex numbers [12].

We focus solely on scalar voxel element data; in the case of the complex numbers that arise from FFT data, we visualise them as separate real or imaginary parts, or more usefully as the normalised amplitudes.

In recent times the notion of computational steering [13] has been identified as attractive to users, as has the potential use of remote visualisation [14] to link fast graphics processing devices to high-performance compute systems to enable interactive real-time simulation of large-scale model systems with associated real-time volume rendering [15] or visualisation [16]. To this end, data-parallel computing systems such as the MASPAR [17] and other specialist hardware [18] were once employed.


An obvious continued goal is to incorporate on-demand high-performance simulation into fully interactive virtual reality systems [19]. While this goal is still some way off economically for desktop users, modern GPUs do offer some startling performance potential for both computation [20–22] and volume visualisation [23] for interactive simulations in computational science.

In this paper we consider some of the recent high-performance hardware capabilities of GPUs and, just as importantly, recent software developments that have led to programming languages for accelerator devices such as NVIDIA’s Compute Unified Device Architecture (CUDA) [24]. GPUs can help accelerate the numerical simulation as well as the rendering performance of the graphical system. Together, these two capabilities allow exploration of model systems that are large enough, and yet can be simulated fast enough in interactive time, to spot new and interesting phenomena.

The application example we visualise in this paper is the well-known Ising model of a computational magnet [25]. This model has been very well studied numerically in two and three dimensions, and also graphically in two dimensions. We harness the power of GPUs to show how both the computations and the graphical rendering can be accelerated sufficiently so that systems of sizes up to around $256^3$ individual spin cells can be simulated interactively. We add small-world link re-wirings to the model to experiment with visualisation of asymmetric model systems in the spectral representation.

Our article is structured as follows: In Section 2 we discuss the meaning and interpretation of a discrete Fourier transform of a model such as the Ising system. We describe how the Fourier field in 3D can be visualised and appropriately rendered in Section 3. We give a brief summary of pertinent aspects of the GPU that enable interactive-time computation of FFTs and visualisation in Section 4. We present a set of rendered screen-dumps and spherically averaged data plots in Section 5 to show how the rendering exposes the asymmetry from model perturbations such as small-world shortcut re-wirings.

We offer some discussion of the results in Section 6 and concluding remarks and areas for further development of these techniques in Section 7.

2 Fourier Spectral Fields

Direct visualisation of field data can often be augmented by examination of the Fourier-transformed field.

A number of particle scattering techniques involving electrons or neutrons are commonly used in materials science experimentation. These techniques yield a scattering pattern that is mathematically equivalent to the Fourier transform of the atomic pattern of the sample.

A similar “numerical scattering” approach can be employed [26, 27] to represent the bulk properties of a simulated model. The scattering pattern formed by collecting the scattered particles in suitable detectors can give quantitative indications of the size and sometimes shape of spatial fluctuations and structures that are forming in the material. In the numerical simulation, a computation of the scattering pattern is possible and this can reveal properties of the simulated data field that would otherwise be hard to extract and observe. We give a brief derivation of the scattering formula.

A neutron scattering experiment conducted on a sample with volume $V$, with scattering angle $\Omega$ and scattering cross-section (probability) $\sigma$, is described by:

$$\frac{d\sigma}{d\Omega} \approx (\rho_p - \rho_m)^2 \frac{1}{V} \left| \int_V e^{i\mathbf{Q}\cdot\mathbf{r}} \, d\mathbf{r} \right|^2 \qquad (1)$$

The term $(\rho_p - \rho_m)^2$ is the difference between the densities of the phase of interest and the background density and is known as the small-angle scattering contrast, denoted $\Delta\rho^2$. A large value of $\Delta\rho^2$ gives higher relative scattering intensity from the precipitate phase.

Equation 1 can then be rewritten as:

$$\frac{d\sigma}{d\Omega} \approx (\rho_p - \rho_m)^2 \frac{1}{V} S(\mathbf{Q}) \qquad (2)$$

which defines the structure function as:

$$S(\mathbf{Q}) = \left| \int_V e^{i\mathbf{Q}\cdot\mathbf{r}} \, d\mathbf{r} \right|^2 \qquad (3)$$

which is strictly a dimensionless quantity.

In materials science it would be usual to factor out various bulk constants and volume factors to separate the scattering intensity $I$ from the structure function.

For the purposes of this paper, we can treat $S(\mathbf{Q})$ as the same as the scattering intensity, which corresponds to the Fourier transform of the “contrast”, or the physical configuration in the model system. We can take the contrast as simply the value of the field.

So $S(\mathbf{Q})$, or $I(q_x, q_y)$, can be computed by a two-dimensional Fast Fourier Transform (FFT) of the simulated model data field $u$.

The Fourier transform of wholly real data, such as the scalar field of the Cahn–Hilliard system on a symmetric geometry, is also symmetric, and it is common to spatially shift it to show the centred scattering pattern. We can visualise a movie time sequence of FFT data in much the same way as we visualise the direct model field. In the case of a symmetric scattering pattern, no one dimension is dominant and it is also possible to make a spherical average (or circular average in 2 dimensions). Various properties such as the characteristic length scale and effective dimensions of the internal model structures can be obtained from an analysis of the position of peaks and the fitted power-laws of these averaged scattering curves [28, 29].

The new contribution we explore in this present paper, however, involves treating the full three-dimensional scattering pattern as a data set we can colour, fly through and visualise, looking for features and iso-surfaces.

The multi-dimensional discrete Fourier transform $H$ is defined by:

$$H(n_1, n_2, \ldots, n_l) = \sum_{k_l=0}^{N_l-1} \cdots \sum_{k_2=0}^{N_2-1} \sum_{k_1=0}^{N_1-1} e^{2\pi i k_l n_l / N_l} \cdots e^{2\pi i k_2 n_2 / N_2} \, e^{2\pi i k_1 n_1 / N_1} \, h(k_1, k_2, \ldots, k_l) \qquad (4)$$

where the $l$’th dimension is subdivided into $N_l$ discrete values and $n_l$ and $k_l$ range from $0 \ldots N_l - 1$.

For our purposes in this present paper we use three-dimensional transforms and $N_1 = N_2 = N_3 = 2^p$, so that the standard fast Fourier transform library routines for CPU and GPU work straightforwardly without the need for explicit data padding. In effect we are taking a complex function $h$ defined on a 3D mesh in physical space and transforming it to its corresponding complex function $H$ defined on a mesh in spatial frequency space. Spatial “frequencies” are usually known as wave-numbers, and we define $Q = 2\pi/\lambda$ for a wavelength $\lambda$.
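Since the mesh sizes are powers of two, a single CUFFT plan covers the whole volume. A minimal sketch of such a forward transform follows, assuming the complex field already resides in device memory; the function and pointer names here are illustrative rather than taken from our actual code:

    #include <cufft.h>

    // Forward 3D complex-to-complex FFT of an N*N*N field held on the
    // GPU (d_field is an illustrative device pointer, N = 2^p).
    void forwardFFT3D(cufftComplex* d_field, int N)
    {
        cufftHandle plan;
        cufftPlan3d(&plan, N, N, N, CUFFT_C2C);              // no padding needed for 2^p sizes
        cufftExecC2C(plan, d_field, d_field, CUFFT_FORWARD); // in-place transform
        cufftDestroy(plan);
    }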

The spatial model we are focusing on in this paper is the Ising model of spins that are located on a spatial grid.

In taking the FFT of the Ising spins we are effectively computing a scattering pattern from the spatial arrangement of spins. This is analogous to measuring the actual neutron scattering from a single perfect magnetic crystal. Due to the symmetry properties of FFTs, the transform of a wholly real function has some twofold redundancy. Phase information is introduced – the transform consists of complex numbers with both a real and an imaginary part – but the transformed values are mirror-imaged in each dimension. Large length-scale features in the original spatial data correspond to low spatial frequencies in the transformed data.
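For a wholly real input field this mirror redundancy is the standard Hermitian symmetry of the discrete Fourier transform, which in the notation of Equation 4 (indices taken modulo $N_l$) reads:

$$H(N_1 - k_1, N_2 - k_2, N_3 - k_3) = H^{*}(k_1, k_2, k_3)$$

so only half of the transformed volume carries independent information.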

In a growth model system such as the Ising model, a quench experiment starts off with a noisy system with noise on all length scales. As the system is quenched, small domains form rapidly, corresponding to high spatial frequencies (short wavelengths). As the system anneals and these small domains grow, these wavelengths correspondingly lengthen. The scattering probability (distribution) of wave-numbers will typically start out flat, then develop a peak corresponding to the reciprocal of the most commonly occurring length scale in the material. The location of the peak typically decreases in wavenumber as the annealing growth process continues.

It is difficult to track a three-dimensional system visually, and reliance is normally placed on the size characterisation of the separating phases by spherically averaged Fourier transforms (or circular averages of 2D scattering patterns). The peaks vary considerably in magnitude, but the peak positions can be well determined and mapped to characteristic domain sizes.
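A minimal host-side sketch of such a spherical average, assuming a centred intensity field stored x-fastest (the function and array names are illustrative):

    #include <math.h>

    // Bin each voxel's intensity by its integer radius |Q| from the
    // centre, then normalise each spherical shell by its voxel count.
    void sphericalAverage(const float* I, int N, float* avg, int* cnt, int nbins)
    {
        for (int b = 0; b < nbins; ++b) { avg[b] = 0.0f; cnt[b] = 0; }
        int c = N / 2;  // zero frequency assumed shifted to the centre
        for (int z = 0; z < N; ++z)
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    float dx = (float)(x - c), dy = (float)(y - c), dz = (float)(z - c);
                    int b = (int)(sqrtf(dx * dx + dy * dy + dz * dz) + 0.5f);
                    if (b < nbins) { avg[b] += I[(z * N + y) * N + x]; cnt[b]++; }
                }
        for (int b = 0; b < nbins; ++b)
            if (cnt[b] > 0) avg[b] /= (float)cnt[b];
    }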

This much is known from standard small-angle scattering experiments on real materials. In this present paper, however, we are able to examine the full 3D transform as it changes in time, and furthermore to examine asymmetries that correspond to asymmetric growth phenomena.

The Ising model is a discrete spin model that exhibits interesting phase transitional properties. We have experimented with a number of visualisation techniques for the 3D Ising system including cut-away sections [30]. Spectral methods can also be applied to the Ising system, and a particularly interesting area of study is the effect of long-range small-world re-wirings to the regular nearest-neighbour lattice [31]. The unaveraged Fourier transform of the Ising system yields an effective scattering pattern which in turn reveals the length scales present in the Ising system. This is especially useful because these scattering patterns can be added together to get a meaningful average, which is not possible in the spatial domain.

The Ising model gives rise to a bit field $s(x, y, z)$ which we transform to obtain the spatial structure function $S(q_x, q_y, q_z)$, where we follow the literature convention and use $\mathbf{Q} = (q_x, q_y, q_z)$ for the wavenumber. These are the spatial analogues of the $h(t)$ and $H(f)$ functions described above in Equation 4 in more conventional time-frequency notation. We are interested in the location of features in wavenumber space, and these will suggest (reciprocal) length scales that are pertinent to the spatial structure and growth going on inside the Ising bit field.

The transform we obtain in 3D is a set of complex numbers, but it is useful to interpret them as scattering intensities, so we take the norm of each cell, thus combining its real and imaginary parts and losing the phase. When averaging over many independent simulation runs we can directly average intensities, with the interpretation of enhancing the scattering beam signal strength. We thus have a 3D field of (real) scalar intensity numbers to visualise.
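This reduction parallelises trivially over cells; a minimal CUDA kernel sketch (illustrative names, not our production code) is:

    #include <cufft.h>

    // Reduce each complex Fourier value to its norm, discarding phase.
    __global__ void magnitudes(const cufftComplex* in, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float re = in[i].x, im = in[i].y;
            out[i] = sqrtf(re * re + im * im);
        }
    }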

3 Visualisation Methods

When visualising a three-dimensional FFT, the usual challenges of visualising a three-dimensional field are encountered. FFT transforms of a single system have both a real and imaginary part, which can further complicate the visualisation. Features and structure in Fourier-space data reflect characteristics of the real-space field. We visualise the magnitudes of the complex Fourier values averaged over many simulations; these averages produce Fourier data representative of the entire model rather than one individual system. Visualising these averaged magnitudes rather than a single FFT reduces the data to be visualised from a complex field to a scalar field.

A number of different methods are available for visualising a scalar field such as this. The visualisation method we selected for analysing these FFTs is constructing and rendering iso-surfaces. Various ISO values can be used to construct different iso-surfaces within the FFTs. Cycling through these different iso-surfaces can reveal the internal features and structure of the FFTs.

We use the marching cubes algorithm [32] to construct a polygon representation of the iso-surfaces. This algorithm generates a polygon surface representing the iso-surface in the FFT and has been implemented on the GPU using CUDA. The GPU implementation can accelerate the processing speed and also avoid unnecessary data transfer between the GPU and the host CPU.

Once the surfaces have been created, they can be rendered using either a flat or smooth shading model. Smooth-shaded polygon surfaces will appear more like the smooth iso-surfaces they approximate. However, as with any visualisation, care should be taken that this graphics interpolation does not introduce any incorrect information into the rendered image. Algorithm 1 shows the two methods of calculating the normals. The flat shading implementation computes a single normal for each face while smooth shading computes a normal for each vertex. The renderings in Figure 1 and Figure 2 both use the flat shading model.

4 Graphical Processing Units & CUDA

Data-parallel and array-processing computer systems are typically well suited to the highly regular computational structure of Fast Fourier Transforms and volume rendering. Systems such as the DAP [33, 34], MASPAR [35, 36] and Connection Machine [37] all had highly optimised FFT library routines in the 1980s and early 1990s. This capability is now available for highly data-parallel processing accelerators in the form of Graphical Processing Units (GPUs) [38].

Algorithm 1 Calculating normals for flat and smooth shading of an iso-surface produced by the Marching Cubes algorithm. Note that × denotes the cross product operation.

    declare vertexes[], faces[], normals[]
    vertexes ← iso-surface vertexes from marching cubes
    faces ← iso-surface faces from marching cubes

    // Flat Shading
    for all f in faces do
        declare d0, d1, n
        d0 ← vertexes[f.i1] − vertexes[f.i0]
        d1 ← vertexes[f.i2] − vertexes[f.i0]
        n ← d0 × d1
        normals[f] ← n
    end for

    // Smooth Shading
    for all f in faces do
        declare d0, d1, n
        d0 ← vertexes[f.i1] − vertexes[f.i0]
        d1 ← vertexes[f.i2] − vertexes[f.i0]
        n ← d0 × d1
        normals[f.i0] ← normals[f.i0] + n
        normals[f.i1] ← normals[f.i1] + n
        normals[f.i2] ← normals[f.i2] + n
    end for

To keep up with the demands of the 3D games and rendering industries, Graphical Processing Units (GPUs) have been designed for very high compute throughput using increasingly parallel processing architectures. A current-generation NVIDIA GPU (GTX-580) contains up to 16 streaming multiprocessors (SMs), each of which contains 32 scalar processors (SPs), and can handle millions of threads that are managed and scheduled by the GPU hardware.

Threads are grouped into thread blocks of up to 1024 threads (512 for older devices), which are then executed on the SMs. A thread block is further sub-divided into groups of 32 contiguous threads called warps. At every instruction issue time, a warp scheduler issues an instruction to a warp that is ready to execute. All the threads in the warp either execute this instruction or are individually disabled if they diverge at a data-dependent conditional branch. Full efficiency is thus only achieved when all 32 threads agree on the execution path. This is a common challenge when developing on data-parallel architectures.
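A small illustrative kernel (ours, purely to show the effect) with such a data-dependent branch:

    // When spin values within a warp disagree, the two branches execute
    // serially, with the inactive threads masked off in each branch.
    __global__ void divergent(const int* spin, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (spin[i] > 0)
            out[i] = expf(-2.0f);  // path A
        else
            out[i] = 1.0f;         // path B
    }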

CUDA devices have a large global memory and a number of fast on-chip memory areas, including texture cache, constant memory cache and shared memory, as well as a set of 32-bit registers for each SM. The latest generation of NVIDIA devices provides an additional L2 cache, which is shared among all multiprocessors, as well as an L1 cache that takes up part of the shared memory on each individual SM. The L1 and L2 caches are used to cache global memory accesses and temporary register spills. Global memory is by far the slowest memory on the device and the fast on-chip caches should be utilised where possible to improve performance. Global memory bandwidth can be significantly improved when the threads of a (half-)warp access memory addresses within the same segment of global memory (various restrictions apply depending on the compute architecture of the device).

We use CUDA to harness this massively parallel compute power provided by today’s commodity GPUs to offload the main workload of our simulations to the graphics device. This allows us to simulate much larger systems than otherwise feasible. Although we have used CUDA and GPUs to do the simulation calculations themselves [39], we focus in this present paper on the volume and FFT processing.

To perform the spectral analysis described in this paper, we utilise NVIDIA’s CUDA Fast Fourier Transform (CUFFT) library to compute the Fourier transform in parallel on the GPU. This solution is particularly appealing because the input data to the FFT, generated by the simulation, already resides in graphics memory and thus does not need to be copied between the host and the graphics card. CUDA can be used to perform all the processing required to compute the FFT for the current system state (e.g. the spins of the Ising model) in parallel: (1) convert the input data structure into the format required by the CUFFT library; (2) compute the Fourier transform; (3) compute the magnitudes and shift the data to centre it.
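Step (3) amounts to a 3D “fftshift”. A minimal kernel sketch, assuming $N$ even, x-fastest storage, and a launch with one grid layer per z-slice (the names are illustrative):

    // Move the zero-frequency component to the centre of the volume by
    // remapping each coordinate c to (c + N/2) mod N.
    __global__ void fftshift3D(const float* in, float* out, int N)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        int z = blockIdx.z;  // assumes gridDim.z == N
        if (x >= N || y >= N) return;
        int xs = (x + N / 2) % N;
        int ys = (y + N / 2) % N;
        int zs = (z + N / 2) % N;
        out[(zs * N + ys) * N + xs] = in[(z * N + y) * N + x];
    }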

5 3D FFT Results

We present some illustrative screen-dumps of the 3D FFTs of various 3D Ising model configurations.

Figure 3 illustrates how the spherical mean of the 3D Ising model’s Fast Fourier Transform (FFT) changes with the number of Metropolis update steps performed. The figure also shows the effects of introducing small-world re-wirings, which change the distances between individual cells in the model. The increasing dominance of the lower frequency components in the regular lattice Ising system as the simulation progresses represents the growth of tightly packed clusters with like spins.

The introduction of small-world rewired long-distance links in the x-dimension speeds up the formation of clusters, but it also increases the variance in the magnitude of the frequency components.

When edges are only rewired in the x-dimension, the y- and z-coordinates of the current neighbour are “frozen” and only the x-value is randomly modified. The total number of rewired edges thus remains the same, on average, for a given re-wiring probability $P$, no matter whether the selection of new neighbours is restricted to certain dimensions or not.
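A minimal host-side sketch of this x-only rewiring (the array names and helper are illustrative, not our simulation code):

    #include <stdlib.h>

    // With probability P, replace an edge's target x-coordinate with a
    // uniformly random one; the y- and z-coordinates stay "frozen".
    void rewireX(int* neighbourX, int nEdges, int N, double P)
    {
        for (int i = 0; i < nEdges; ++i) {
            if ((double)rand() / RAND_MAX < P) {
                neighbourX[i] = rand() % N;  // new random x in [0, N)
            }
        }
    }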

The hyper-surfaces for Ising systems rewired only in the x-dimension are given in Figure 2.

The shape of the hyper-surfaces for the irregular system configuration easily explains the large error in the measurements of the spherical mean. Limiting the re-wiring to the x-dimension reduces the distance between cells in this dimension only, which produces a disk-like shape for large ISO values (the centre value is the constant component of the FFT and has been set to 0 as it is of no interest to us), and a growing hole in the middle of the disk for even larger ISO values. The explanation for the disk-like shape is that the magnitude increases less steeply for the lower frequencies in the x-dimension than it does in the other dimensions. The hole increases in size when the magnitude in the y- and z-dimensions is larger than at any frequency in the x-dimension.

6 Discussion

The tightly packed clusters, or lumps of like spins, detected by the FFT are similar to what we would perceive as a cluster when looking at a graphical representation of the system. However, Figure 4 shows that the actual size distribution of the connected components looks quite different. After only one simulation step, when the system is still mostly random, almost all the cells are part of one of the two major clusters, one for “up” spins and the other for “down” spins. This is because, with 6 edges connecting every cell to its neighbours (exactly for $P = 0.0$ and on average for $P > 0.0$), it is possible to reach almost all other cells with the same spin value from any given source cell. The spectral analysis does not know about these connections and only looks at the spatial distribution of spin values.

Figure 1: 3D FFT of the Ising model with a regular lattice after 256 simulation steps. $T = 4.5115$, ISO values (top left to bottom right) are $\{0.02, 0.03, 0.04, 0.10, 0.70, 0.90\}$.

Figure 2: 3D FFT of the Ising model with re-wiring only in the x-dimension after 256 simulation steps. $P = 0.01$, $T = 4.55$, ISO values (top left to bottom right) are $\{0.03, 0.06, 0.08, 0.11, 0.70, 0.90\}$.

Figure 3: The spherical mean of the 3D FFT computed after $\{1, 2, 4, \ldots, 1024\}$ simulation steps of the Ising model using the Metropolis update algorithm. The system size $N = 128^3$. The plot on the left shows the results for the regular cubic lattice Ising model with temperature $T_c \approx T = 4.5115$. The plot on the right illustrates rewired irregular lattice Ising models with re-wiring probability $P = 10^{-2}$ and $T_c \approx T = 4.55$, where the arcs were randomly rewired only in the x-dimension. Each data point represents the mean value of 1000 simulation runs. The error bars illustrate the standard deviations. The error has been rescaled by a factor of 0.1 for frequency $f = 20$ for the plot on the right.

When the temperature $T$ is lower than or near the critical temperature $T_c$, the system becomes less and less random the more update steps are performed. Lumps of like spins form and grow in size, which shows up as an increase in magnitude for the lower frequencies in the FFT results. The distributions for the size of connected clusters, on the other hand, broaden around the median of the system size until they eventually become bimodal, with distinct peaks for the “up” and “down” spin clusters. The bimodal behaviour suggests that the chosen temperatures are lower than the respective critical temperatures. If the temperatures are slightly increased to $T = 4.52$ for the regular lattice system or to $T = 4.6$ for the rewired system with $P = 10^{-2}$, which are above the respective critical temperatures, then the cluster size distributions retain a single peak in the equilibrated system.

The much quicker broadening of the cluster size distributions for $P = 10^{-2}$ as compared to $P = 0.0$ supports the intuitive assumption that the rewired connections facilitate domain growth. While the shape of the regular lattice distribution still changes between steps 1024 and 4096, no significant changes can be observed after 1024 simulation steps for the rewired system. The FFT results may appear to contradict this conclusion, as the low frequencies are less dominant in the rewired system than in the regular lattice system. However, it needs to be considered that a large cluster in the rewired system is likely to consist of two or more spatially separated smaller clusters that are only weakly connected by a few long-distance links, which is not picked up by the spectral analysis.

Another interesting effect clearly visible in the results for the rewired system is that after the distributions initially become more and more front-heavy, this effect reverses after about 64 simulation steps and the distributions become flatter again. This is likely due to individual clusters beginning to join together into larger clusters of different shapes, which changes the scattering patterns produced by the FFT.

7 Conclusion

We have shown how 3D FFTs can be computed in interactive time and used to visualise structural properties of complex 3D data models such as the Ising model.

We have also explored the asymmetries that occur when small-world re-wirings are inserted into the Ising simulation model in a single spatial direction. The asymmetric structure and growth properties are clearly visible in the 3D FFT volumetric visualisation, whereas they are too subtle to see in the original real-space volumetric rendering.

Figure 4: The cluster size distribution of the 3D Ising system computed after $\{1, 2, 4, \ldots, 4096\}$ simulation steps using the Metropolis update algorithm. The system size $N = 128^3$. The temperature for the regular lattice Ising system (left) is $T_c \approx T = 4.5115$. The re-wiring probability for the small-world rewired Ising system (right) is $P = 10^{-2}$ and the temperature $T_c \approx T = 4.55$. Each data point represents the mean value of 1000 simulation runs. The insets show the standard deviations as error bars on the data sets.

This volumetric visualisation has traditionally been too computationally expensive to carry out interactively. In practice it has been of comparable or greater computational cost than the simulation itself and so has been a neglected technique. Using GPUs and data-parallel processing techniques, we have found that it is not only viable interactively but is also very powerful when FFT data are averaged over many different independently generated model configurations. Not only does this give smoother renderings, but it also allows quite precise averaging over spherical shells to be carried out and the 3D FFT data reduced to one-dimensional plots from which accurate positions of the peaks can be measured. These then give accurate indications of the characteristic length scales present in the simulated models. This is an important contribution to the set of tools available for carrying out computational experiments on simulated growth models and systems.

We have focused solely on the use of Fourier transforms, but other spectral techniques such as wavelet analysis are also promising signal analysis methods that have the potential to be greatly accelerated using GPU computing. An analysis of other models that are based on integer or floating-point cells arranged on a lattice will also benefit from these techniques.

References

[1] Pan, Y., Whitaker, R., Cheryauka, A., Ferguson, D.: Feasibility of GPU-assisted iterative image reconstruction for mobile C-arm CT. In: Medical Imaging 2009: Physics of Medical Imaging. Volume 7258, SPIE (2009) 72585J–1–9

[2] Drebin, R.A., Carpenter, L., Hanrahan, P.: Volume rendering. In: ACM SIGGRAPH Computer Graphics. Volume 22 (1988) 65–74 ISSN:0097-8930.

[3] Levoy, M.: Display of surfaces from volume data. IEEE Computer Graphics and Applications 8 (1988) 29–37

[4] Wang, W., Sun, H., Wu, E.: Projective volume rendering by excluding occluded voxels. Int. J. Image Graphics 5 (2005) 413–432

[5] Foley, J.D., van Dam, A., Feiner, S.K., Hughes, J.F.: Computer graphics: principles and practice (2nd ed.). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA (1990)

[6] Ritschel, T.: Fast GPU-based visibility computation for natural illumination of volume data sets. In: Eurographics (Short Papers). (2007) 57–60

[7] Bruckner, S., Groller, E.: Enhancing depth-perception with flexible volumetric halos. IEEE Trans. Visualization and Computer Graphics 13 (2007) 1344–1351 ISSN:1077-2626.

[8] Schott, M., Pegoraro, V., Hansen, C., Boulanger, K., Bouatouch, K.: A directional occlusion shading model for interactive direct volume rendering. Computer Graphics Forum 28 (2009) 855–862

[9] Meyer, M., Kirby, R.M., Whitaker, R.: Topology, accuracy, and quality of isosurface meshes using dynamic particles. IEEE Trans. Visualization and Computer Graphics 13 (2007) 1704–1711

[10] Catmull, E., Smith, A.R.: 3-D transformations of images in scanline order. SIGGRAPH Comput. Graph. 14 (1980) 279–285

[11] Sanderson, A.R., Johnson, C.R., Kirby, R.M.: Display of vector fields using a reaction-diffusion model. In: Proc. Conf. on Visualization ’04. Number ISBN:0-7803-8788-0, ACM SIGGRAPH (2004) 115–122

[12] Playne, D., Hawick, K.: Visualising vector field model simulations. In: Proc. 2009 International Conference on Modeling, Simulation and Visualization Methods (MSV’09), Las Vegas, USA. Number CSTN-074 (2009)

[13] Smarr, L., Catlett, C.E.: Metacomputing. Communications of the ACM 35 (1992) 44–52

[14] Bethel, W., Tierney, B., Lee, J., Gunter, D., Lau, S.: Using high-speed WANs and network data caches to enable remote and distributed visualization. In: Proceedings of the 2000 ACM/IEEE conference on Supercomputing. Number ISBN:0-7803-9802-5 (2000) Article 28:1–23

[15] Ng, R., Mark, B., Ebert, D.: Real-time programmable volume rendering (2009)

[16] Fletcher, P.A., Robertson, P.K.: Interactive shading for surface and volume visualization on graphics workstations. In: VIS ’93: Proceedings of the 4th conference on Visualization ’93, Washington, DC, USA, IEEE Computer Society (1993) 291–298

[17] Vézina, G., Fletcher, P.A., Robertson, P.K.: Volume rendering on the MasPar MP-1. In: VVS ’92: Proceedings of the 1992 workshop on Volume visualization, New York, NY, USA, ACM (1992) 3–8

[18] Goodnight, N., Woolley, C., Lewin, G., Luebke, D., Humphreys, G.: A multigrid solver for boundary value problems using programmable graphics hardware. In: Proc. ACM SIGGRAPH/EUROGRAPHICS Conf. on Graphics Hardware. Number ISSN:1727-3471, San Diego, California, USA (2003) 102–111

[19] Poupyrev, I., Weghorst, S., Billinghurst, M., Ichikawa, T.: A framework and testbed for studying manipulation techniques for immersive VR. In: VRST ’97: Proceedings of the ACM symposium on Virtual reality software and technology, New York, NY, USA, ACM (1997) 21–28

[20] Sanderson, A.R., Meyer, M.D., Kirby, R.M., Johnson, C.R.: A framework for exploring numerical solutions of advection-reaction-diffusion equations using a GPU-based approach. Computing and Visualization in Science 12 (2009) 155–170

[21] Rumpf, M., Strzodka, R.: Nonlinear diffusion in graphics hardware. In: Proceedings of EG/IEEE TCVG Symposium on Visualization (VisSym ’01). (2001) 75–84

[22] Bolz, J., Farmer, I., Grinspun, E., Schröder, P.: Sparse matrix solvers on the GPU: conjugate gradients and multigrid. ACM Trans. Graph. 22 (2003) 917–924

[23] Beyer, J.: GPU-based Multi-Volume Rendering of Complex Data in Neuroscience and Neurosurgery. PhD thesis, Institute of Computer Graphics and Algorithms, Vienna University of Technology, Favoritenstrasse 9-11/186, A-1040 Vienna, Austria (2009)

[24] NVIDIA Corporation: CUDA 3.1 Programming Guide. (2010) Last accessed September 2010.

[25] Ising, E.: Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik 31 (1925) 253–258

[26] Hawick, K.A.: Domain Growth in Alloys. PhD thesis, Edinburgh University (1991)

[27] Hawick, K.A.: Spectral analysis of growth in spatial Lotka-Volterra models. In: Proc. IASTED International Conference on Modelling and Simulation. Number CSTN-092, Gabarone, Botswana (2010)

[28] Ciccariello, S., Goodisman, J., Brumberger, H.: On the Porod law. Journal of Applied Crystallography 21 (1988) 117–128

[29] Windsor, C.G.: An Introduction to Small Angle Neutron Scattering. J. Appl. Cryst. 21 (1988) 582–588

[30] Leist, A., Playne, D., Hawick, K.: Interactive visualisation of spins and clusters in regular and small-world Ising models with CUDA on GPUs. Journal of Computational Science 1 (2010) 33–40

[31] Hawick, K.A., James, H.A.: Ising model scaling behaviour on z-preserving small-world networks. Technical Report arXiv.org Condensed Matter: cond-mat/0611763, Information and Mathematical Sciences, Massey University (2006)

[32] Lorensen, W.E., Cline, H.E.: Marching cubes: A high resolution 3D surface construction algorithm. Computer Graphics 21 (1987) 163–169

[33] Reddaway, S.F.: DAP – a Distributed Array Processor. In: Proceedings of the 1st Annual Symposium on Computer Architecture (Gainesville, Florida), ACM Press, New York (1973) 61–65

[34] Wilding, N., Trew, A., Hawick, K., Pawley, G.: Scientific modeling with massively parallel SIMD computers. Proceedings of the IEEE 79 (1991) 574–585

[35] Vézina, G., Fletcher, P., Robertson, P.: Volume Rendering on the MasPar MP-1. In: Workshop on Volume Visualisation, Boston, MA, USA (1992) 3–8

[36] Fletcher, P.A.: Visualisation on a massively parallel supercomputer: Regular mapping and handling of multidimensional data on SIMD architectures. In: Proceedings of the Fourth Australian Supercomputer Conference, Bond University, Qld. (1991)

[37] Hillis, W.D.: The Connection Machine. MIT Press (1985)

[38] Leist, A., Playne, D., Hawick, K.: Exploiting Graphical Processing Units for Data-Parallel Scientific Applications. Concurrency and Computation: Practice and Experience 21 (2009) 2400–2437 CSTN-065.

[39] Hawick, K., Leist, A., Playne, D.: Regular Lattice and Small-World Spin Model Simulations using CUDA and GPUs. Int. J. Parallel Prog. 39 (2011) 183–201
