
5. General issues and future developments

5.3. Computational geometry issues in numerical modelling practice

Computational geometry is a relatively new discipline dealing with algorithms for representing geometrical objects in a manner suitable for computer programming, especially for the development of computer graphics applications and for solving boundary value problems.

The scope of computational geometry includes algorithmic development for the following problems:

• draw the best-fitting smooth curve to a given set of points (a minimal sketch of this problem follows the list)

• draw the best-fitting smooth surface to a given set of points

• given a smooth surface, cover it completely with elements or “tiles” which do not overlap

• given a three-dimensional body, subdivide it into elements of a given geometry; the elements should not overlap and the coverage must be complete, i.e. it must cover the whole body

• given a coverage of a geometrical body with elements, refine the coverage by adding elements or coarsen it by removing and merging elements.
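
As an illustration of the first of these problems, the short sketch below fits a smoothing spline through a set of noisy points. It is only a sketch under stated assumptions: the data are synthetic, and the choice of SciPy's UnivariateSpline with smoothing factor s is illustrative rather than prescribed by the text.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical noisy measurements along a profile.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

# Smoothing spline: the factor s trades closeness to the data
# against smoothness of the fitted curve.
spline = UnivariateSpline(x, y, s=0.5)

x_fine = np.linspace(0.0, 10.0, 200)
y_fit = spline(x_fine)   # evaluate the best-fitting smooth curve
```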

These and similar problems arise in a natural way from the needs of the applied and engineering sciences. The interest in computational geometry has been steadily increasing and has led to the creation of an independent scientific discipline. The development of numerical modelling depends strongly on progress in computational geometry, especially for the discretization of problems originally formulated for a continuum in two or three dimensions.

Of interest to the numerical modelling of seismicity-related phenomena is the generation of two- and three-dimensional grids constrained within some specified domain. The process of generating such grids is known as meshing or gridding. Grids are used for transforming problems formulated initially for a continuum into approximately equivalent discrete problems. The term “approximate equivalence” as applied to two problems needs some clarification: two problems are said to be equivalent if they have the same set of solutions, and approximately equivalent if the solutions of one problem can be regarded as approximations to the corresponding solutions of the other. The idea of numerical modelling consists in reformulating a given “tough” problem into an approximately equivalent problem which is easier to solve. The problems of continuum mechanics are considered tough, while the problem of solving a system of simultaneous algebraic equations is considered “easy” even when the number of equations is large.

By imposing a grid over the spatial domain occupied by a solid it is possible to reformulate a boundary value problem for the partial differential equations of continuum mechanics into some approximately equivalent problem of solving a system of algebraic (often even linear) equations. There are three different ways of achieving this:

• One can use the nodes of the grid to obtain discrete analogues of the partial-derivative operators. This idea is implemented in the various finite difference methods (a minimal sketch follows the list).

• One can seek an approximate solution to the given boundary value problem as an assembly of “pieces”, each piece being a combination of some basis functions and defined within one of the sub-domains into which the grid has partitioned the whole domain. Such “piecewise” approximations are at the basis of the various finite element schemes.

• It is possible to regard a solid as a heterogeneous assembly of homogeneous “grains” or “blocks” and to identify the latter with the elements of a grid. This is the underlying idea of the discrete or distinct element models.
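
To make the first of these routes concrete, the sketch below discretises a one-dimensional boundary value problem by finite differences, turning it into a system of linear algebraic equations. The particular equation, grid and source term are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Discretise the boundary value problem -u''(x) = f(x) on (0, 1),
# with u(0) = u(1) = 0, on a uniform grid: the continuum problem
# becomes the algebraic system A u = b.
n = 50                        # number of interior grid nodes
h = 1.0 / (n + 1)             # grid size
x = np.linspace(h, 1.0 - h, n)

# Discrete analogue of -d^2/dx^2 as a tridiagonal matrix.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

b = np.ones(n)                # example source term f(x) = 1
u = np.linalg.solve(A, b)     # the "easy" problem: simultaneous algebraic equations
```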

All numerical models of importance to the mining industry fall into one of the above categories. In a sense, the choice of the discretization scheme is inseparable from the choice of modelling strategy. Therefore it is important to understand the effect of the geometrical or grid factors on the performance of a numerical model.

A grid or mesh which has been generated for a particular model can be characterised by:

• Its length-scale or grid-size, which is defined as the minimum distance between nearest-neighbour nodes. The grid-size is a local parameter and can, in principle, vary from one part of the grid to another. For regular grids, though, the grid-size is an overall constant.

• Its connectivity index, which is determined by the number of connected nearest-neighbour nodes of the grid. The connectivity of a grid is closely related to the dimensionality of the gridded manifold as well as to the geometry of the unit cell. For regular square grids the connectivity index is equal to twice the Euclidean dimension of the manifold. For tetrahedral grids in three dimensions the connectivity index of a grid-node is defined as the number of tetrahedra to which the node belongs.

• Its topology or deformation-invariant properties. For instance, a grid of a sphere is topologically different from any grid of a doughnut-shaped body.

• The quality of the grid elements or cells. The quality factor of a grid element is sometimes expressed in terms of the so-called aspect ratio, which is defined in a different way for different types of grid elements even if they have the same Euclidean dimension. Alternatively, the quality of the grid elements, and hence of the discretization procedure as a whole, can be measured by a quality factor which is the normalised ratio of:

1. the square root of the area over the circumference, for two-dimensional elements;
2. the cube root of the volume over the square root of the boundary area, for three-dimensional elements.
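
As an illustration, the sketch below evaluates such a quality factor for a two-dimensional (triangular) element as the square root of the area over the perimeter, normalised (an assumed convention, since the text leaves the constant unspecified) so that an equilateral triangle scores 1 and a degenerate sliver scores well below 1.

```python
import numpy as np

def triangle_quality(p0, p1, p2):
    """Quality factor of a triangular element: sqrt(area) / perimeter,
    normalised so that an equilateral triangle gives 1.0 (assumed
    normalisation; the ratio itself is scale-invariant)."""
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p0 - p2)
    s = 0.5 * (a + b + c)                                      # semi-perimeter
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
    raw = np.sqrt(area) / (a + b + c)
    ideal = np.sqrt(np.sqrt(3.0) / 4.0) / 3.0                  # value of raw for an equilateral triangle
    return raw / ideal

well_shaped = triangle_quality(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                               np.array([0.5, np.sqrt(3.0) / 2.0]))
sliver = triangle_quality(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                          np.array([0.5, 0.05]))
print(well_shaped, sliver)   # ~1.0 versus a noticeably smaller value for the sliver
```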

The grid topology is not a matter of choice but is dictated by the geometry of the rock-mass to be modelled. The connectivity index of the grid is determined to a great degree by the dimensionality of the problem and to a lesser degree by the choice of the unit cells for the grid to be generated. The other two characteristics of a grid are entirely subject to choice and determine the resolution and accuracy of the approximation provided by the model.

The smaller the grid-size (i.e. the finer the grid), the smaller will be the minimum size of the seismic events generated by the model. The grid size also affects the accuracy of the discrete solution of the continuum problem: the finer the grid, the better the approximation, since the latter is related in one way or another to some interpolation procedure. But the limitations of present-day computers in processor speed and in core memory restrict the minimum grid size for a given volume of the modelled body.

The quality factor of the individual grid elements does not alter the resolving power of the model but does affect the accuracy of the approximation. For instance, in a finite difference scheme the partial derivatives of the unknown function are replaced by partial derivatives of some interpolation polynomials with nodal points defined by the grid elements, and if the latter are abnormally elongated the approximation to the derivatives can be very poor.

A better quality factor means better-shaped elements and correspondingly a more accurate representation of the derivatives. In a finite element scheme the interpolation is via some basis of shape functions, and a badly shaped element can lead to a large error in the replacement of the unknown function with the corresponding combination of the shape functions. In a boundary integral scheme the shape of the elements is directly related to the errors in the numerical integration (which again uses interpolation techniques). Finally, the shape characteristics of the grid elements are of great significance in distinct-element modelling and in molecular-dynamics-inspired schemes (friction, fracture, etc.). When a heterogeneous rock-mass is modelled as an assembly of homogeneous “blocks”, the shape of the grid elements determines the interfacing of the blocks, the area of the contact surfaces and the contact interactions.
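
A small numerical illustration of the finite difference point above, under illustrative assumptions (the test function, spacings and stencil weights are not from the text): estimating a first derivative from a strongly one-sided, “elongated” stencil gives a noticeably larger error than from a well-proportioned one.

```python
import numpy as np

def derivative_unequal(f, x0, h_left, h_right):
    """First-derivative estimate of f at x0 from the three nodes
    x0 - h_left, x0, x0 + h_right (weights from Taylor expansion)."""
    fl, f0, fr = f(x0 - h_left), f(x0), f(x0 + h_right)
    return (h_left**2 * fr - h_right**2 * fl
            + (h_right**2 - h_left**2) * f0) / (h_left * h_right * (h_left + h_right))

x0, h = 1.0, 0.1
exact = np.cos(x0)                                   # d/dx sin(x) at x0
good = derivative_unequal(np.sin, x0, h, h)          # well-proportioned stencil
bad = derivative_unequal(np.sin, x0, h, 10.0 * h)    # strongly stretched stencil
print(abs(good - exact), abs(bad - exact))           # the stretched stencil is much less accurate
```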

Apart from the grid refinement and the quality of the grid elements, the accuracy of a numerical model can be affected by global grid artefacts. For instance, the algorithm for meshing the target body could produce a grid in which most of the nodes sit on preferred planes or lines, which would introduce artificial anisotropies. This problem is common for structured grids and can be avoided by randomising the nodes of the grid while keeping the connections, as sketched below.
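
A minimal sketch of such a randomisation, assuming the nodes of a regular two-dimensional grid are stored as coordinates: interior nodes are jittered by a fraction of the grid size, while the boundary nodes and the node connectivity are left untouched.

```python
import numpy as np

rng = np.random.default_rng(42)
nx, ny, h = 20, 20, 1.0                       # grid dimensions and grid size (illustrative)

# Node coordinates of a regular (structured) grid.
xs, ys = np.meshgrid(np.arange(nx) * h, np.arange(ny) * h, indexing="ij")
nodes = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Interior nodes only, so the boundary of the modelled domain keeps its shape.
interior = ((nodes[:, 0] > 0) & (nodes[:, 0] < (nx - 1) * h) &
            (nodes[:, 1] > 0) & (nodes[:, 1] < (ny - 1) * h))

# Displace each interior node by up to 30 % of the grid size; the element
# connectivity (which nodes form which cells) is not modified.
nodes[interior] += rng.uniform(-0.3 * h, 0.3 * h, size=(int(interior.sum()), 2))
```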

5.3.1. The continuum limit and finite-size scaling

It is believed that the solution of a correctly formulated problem for a continuum solid will correspond to the observed behaviour of the object. Unfortunately, the problems of practical interest are essentially heterogeneous, anisotropic and non-linear, which means that the exact solution cannot be found analytically. The question is to what degree a numerical solution of a discretized version of the original problem approximates the existing, yet unknown, exact solution.

There are many different numerical approaches to the same problem originally set for a continuum. Further, even within the same numerical approach there can be many different models corresponding to different grids. This means that one and the same problem has an arbitrary number of candidates for “the solution”, some very far off the target, others more acceptable.

A criterion is needed for selecting the best numerical solution among the available approximate solutions. The following line of reasoning may lead to the formulation of such a selection rule. Suppose that we have a numerical model based on some interpolation scheme. If we could run the model for a sequence of grids with increasing refinement, one could expect that in the limit when the grid size goes to zero the discretized version of the problem would become infinitely close to the original formulation in the continuum, and hence the solution of the model would tend to the exact solution. This procedure is called the exact continuum limit of the model and cannot be carried out in practice because it would require unlimited memory and computer time. Unfortunately, it is very difficult and sometimes even impossible to prove that the exact continuum limit exists and, if it does, that it is unique. With some sacrifice of mathematical rigour, and guided by practical considerations, one can look for signals of convergence in a sequence of simulations corresponding to increasing grid refinement. If one assumes that the character of the convergence, at least after some initial transient part of the sequence, is monotonic, then the approach to the limit will be signalled by a reduced variation of the consecutive members of the sequence. In practice one will have to run the model for several grids with different degrees of refinement and look for a decreased sensitivity of the modelled data to the grid size, as sketched below. When it is found that the last refinement of the grid did not change the modelled data by more than an acceptable margin, one can conclude that the practical continuum limit has been reached.
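
The procedure just described might be coded along the following lines; run_model is a hypothetical user-supplied function that runs the model on a grid of the given grid-size and returns a scalar modelled quantity of interest, and the refinement factor and tolerance are likewise illustrative choices.

```python
def practical_continuum_limit(run_model, h0, tolerance=0.01, max_refinements=6):
    """Refine the grid until the modelled quantity changes by less than
    `tolerance` (relative), i.e. until the practical continuum limit is reached."""
    h, previous = h0, run_model(h0)
    for _ in range(max_refinements):
        h /= 2.0                         # refine the grid (halve the grid-size)
        current = run_model(h)
        if abs(current - previous) <= tolerance * abs(previous):
            return h, current            # further refinement changes the data only marginally
        previous = current
    return h, previous                   # budget exhausted before the limit was reached
```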

Strictly speaking, one cannot speak of a continuum limit for a discrete element model. Yet the same idea of a reduced sensitivity of the modelled data to changes in the element size can be applied to assemblies of distinct elements. For such assemblies it is of greater importance to study the so-called finite-size scaling, which tests the sensitivity of the model to an increasing number of constituents, as sketched below.
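
A corresponding finite-size scaling check could be sketched as follows; run_assembly is a hypothetical function that simulates an assembly of n distinct elements and returns an intensive (per-element) quantity whose trend with n is then inspected.

```python
def finite_size_scaling(run_assembly, sizes=(1_000, 2_000, 4_000, 8_000)):
    """Rerun the simulation with larger and larger assemblies; small relative
    changes between consecutive sizes signal insensitivity to system size."""
    results = [(n, run_assembly(n)) for n in sizes]
    changes = [abs(b[1] - a[1]) / abs(a[1])
               for a, b in zip(results, results[1:])]
    return results, changes
```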
