This page documents a utility that generalizes Backus upscaling to general geometries. Elastic moduli specified on a per-cell basis on corner-point grids can be upscaled to an elasticity tensor with up to 21 independent components. The code simulates the six load conditions (three normal and three shear loads applied to the geometry) necessary for determining the elastic properties.
$ upscale_elasticity gridfilename=filename.grdecl rock_list=rocklist.txt output=outfilename.txt
filename.grdecl contains the grid as specified by the Eclipse keywords SPECGRID, COORD and ZCORN, rocklist.txt contains information about where the rock files for each rock type are located, and outfilename.txt is the name of the file to which the output is written.
Elastic moduli can also be specified on a per-cell basis in the grdecl file, by means of locally defined additional keywords; see the page on corner-point grids for details. However, this feature has a bug in the current version and needs to be fixed.
Command line options
- gridfilename - grdecl filename to be upscaled
- vtufilename - Save results to vtu file for visual inspection. If not specified, vtu results are not saved.
- rock_list - A file listing one rock file per SATNUM value, each specifying the elastic properties of that rock type. See the description below.
- output - Filename to which results are written in ASCII format. If not specified, output is not written to file.
- method - the kind of boundary coupling to use; can be 'mpc' or 'mortar'. Defaults to 'mortar' (best for grids that are not perfectly periodic).
- Emin - Minimum E (Young's) modulus. Used to avoid numerical issues if the contrasts in the grid are too large. Defaults to 0.
- ctol - Collapse tolerance in grid parsing. Defaults to 1e-6.
- ltol - Tolerance in iterative linear solvers. Defaults to 1e-8.
- linsolver_type - Type of solver for linear system. Possibilities are 'iterative' and 'direct' (LU factorization). Default is 'iterative'.
There are also some options that are mostly relevant for debugging and testing. These are only included here for completeness.
- verbose - set to true to get verbose output from linear solvers and element assembly handler.
- linsolver_restart - the number of Krylov vectors kept in GMRES before a restart (affects memory usage). Only relevant with an asymmetric preconditioner. Defaults to 1000.
- cells(x|y|z) - the number of cells of the grid if gridfilename=uniform. Defaults to 3.
- lambda(x|y) - order of the Lagrangian multipliers in the first/second direction. Defaults to 2.
- linsolver_coarsen - problem size when coarsening in AMG stops. Defaults to 5000.
- linsolver_presteps - the number of pre smoothing sweeps in the AMG. Defaults to 2.
- linsolver_poststeps - the number of post smoothing sweeps in the AMG. Defaults to 2, will be the same as the presteps if a symmetric preconditioner is utilized.
- linsolver_symmetric - Set to true to use a symmetric preconditioner (and Krylov solver). Defaults to 'true'.
- linsolver_uzawa - Use an Uzawa approach (block Gaussian elimination). Useful for debugging purposes. Defaults to 'false'
- linsolver_zcells - The aggregate size in the z direction used in the AMG. Defaults to 2.
- mortar_precond - the type of preconditioner to use for the multiplier block. Can be 'schuramg' or 'schurdiag'. Defaults to 'schuramg'.
The rock_list file contains the number of rock types, followed by one file per rock type specifying the elastic properties of that rock type.
3 rock1.txt rock2.txt rock3.txt
The number of rock types (3 in the above example) must correspond to the maximum SATNUM value in the grdecl file.
The files rock1.txt, rock2.txt, ... can have different formats depending on the anisotropy of the rock: isotropic, transverse isotropic or anisotropic.
For isotropic materials, the format has four lines:
modulispecification
modulus1 modulus2
density
density
where modulispecification can be either "vpvs" (P-wave velocity and S-wave velocity), "km" (bulk and shear modulus), "lm" (Lamé's lambda and shear modulus) or "en" (Young's modulus and Poisson's ratio).
Example rock file for isotropic rocks
km
8 2
density
3
for a material with bulk modulus 8 GPa, shear modulus 2 GPa and density 3 kg/m3.
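The moduli specifications are interchangeable through standard linear-elasticity relations. As an illustration, the helper functions below are hypothetical (not part of the tool) and simply apply those textbook relations:

```python
import math

def moduli_from_vpvs(vp, vs, rho):
    """'vpvs': P- and S-wave velocities plus density -> bulk and shear moduli."""
    mu = rho * vs**2                  # shear modulus
    k = rho * vp**2 - 4.0 * mu / 3.0  # bulk modulus
    return k, mu

def moduli_from_en(e, nu):
    """'en': Young's modulus and Poisson's ratio -> bulk and shear moduli."""
    k = e / (3.0 * (1.0 - 2.0 * nu))
    mu = e / (2.0 * (1.0 + nu))
    return k, mu

def lame_from_km(k, mu):
    """'km': bulk and shear moduli -> Lame's lambda and mu ('lm')."""
    return k - 2.0 * mu / 3.0, mu

# The 'km 8 2' rock file above corresponds to lambda = 8 - 4/3, mu = 2.
lam, mu = lame_from_km(8.0, 2.0)
```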
Transverse isotropic rocks
For transverse isotropic materials, the format has four lines:
ti
a b c d e
density
density
where the last line gives the material's density and the constants a, b, c, d and e determine the elements of the stiffness matrix.
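The stiffness matrix itself is not reproduced here, but for reference, a transverse isotropic (VTI) material has five independent constants in Voigt notation. The sketch below builds the conventional VTI stiffness matrix; note that the mapping between the file's constants a-e and the Voigt entries is not specified here, so the argument names below are simply the standard C11, C33, C44, C66 and C13:

```python
import numpy as np

def vti_stiffness(c11, c33, c44, c66, c13):
    """Conventional VTI stiffness matrix (Voigt notation, vertical
    symmetry axis) from the five independent elastic constants."""
    c12 = c11 - 2.0 * c66  # dependent constant for hexagonal symmetry
    C = np.zeros((6, 6))
    C[0, 0] = C[1, 1] = c11
    C[2, 2] = c33
    C[3, 3] = C[4, 4] = c44
    C[5, 5] = c66
    C[0, 1] = C[1, 0] = c12
    C[0, 2] = C[2, 0] = C[1, 2] = C[2, 1] = c13
    return C

# Illustrative constants (GPa), not taken from any real rock file.
C = vti_stiffness(c11=20.0, c33=15.0, c44=4.0, c66=5.0, c13=7.0)
```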
For general (anisotropic) materials, the rock file has the following format:
anisotropic
... ...
density
density
where the last line gives the material's density and the preceding entries specify the elements of the stiffness matrix.
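A symmetric 6x6 Voigt stiffness matrix has exactly 21 independent entries, which is what the anisotropic format supplies. The sketch below assumes, purely for illustration, that the entries are the upper triangle in row-major order; the actual ordering expected by the rock file is not specified here:

```python
import numpy as np

def stiffness_from_upper_triangle(vals):
    """Fill a symmetric 6x6 Voigt stiffness matrix from 21 values,
    assumed here (illustratively) to be the upper triangle in
    row-major order."""
    assert len(vals) == 21
    C = np.zeros((6, 6))
    C[np.triu_indices(6)] = vals
    # Mirror the upper triangle without doubling the diagonal.
    C = C + C.T - np.diag(np.diag(C))
    return C

C = stiffness_from_upper_triangle(list(range(1, 22)))
```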
The upscaling code incorporates three different approaches to the boundary conditions: the MPC approach, the LLM approach and the CLM ("Mortar") approach. A quick explanation of the three approaches follows. They are all presented in 1D (2D) for clarity of presentation.
The simplest approach is the multi-point-constraint (MPC) approach. Here, one edge is chosen as the master edge, and the other edge as the slave edge. Strong couplings between nodes are then established, making each node on the slave edge a linear combination of nodes on the master edge. This is the direct extension of the approach used for matching grids, where nodes are coupled through single-point constraints (SPCs), e.g. s = m1.
It turns out that while SPCs work very well for matching grids, the MPC approach is not optimal for non-matching grids. The reason is that, while we do get pointwise periodic solutions in the nodes, this pointwise periodicity does not guarantee that the boundary integral associated with the weak form of the upscaling problem actually cancels out.
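The MPC elimination can be illustrated on a toy 1D stiffness matrix: a transformation matrix T expresses the slave degree of freedom as a linear combination of master nodes, and the reduced operator T'KT remains symmetric positive definite. The weights and system below are illustrative only, not those used by the code:

```python
import numpy as np

# Toy 1D stiffness matrix (tridiagonal, SPD) with 5 degrees of freedom.
n = 5
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

# MPC: express the last (slave) DOF as a linear combination of two
# master DOFs, u4 = 0.5*u0 + 0.5*u1 (weights chosen for illustration).
T = np.zeros((n, n - 1))
T[:n - 1, :] = np.eye(n - 1)
T[n - 1, 0] = 0.5
T[n - 1, 1] = 0.5

K_red = T.T @ K @ T                    # reduced operator, still SPD
f_red = T.T @ f
u = T @ np.linalg.solve(K_red, f_red)  # expand solution to all DOFs
```

By construction the expanded solution satisfies the constraint exactly, and the reduced system can be handed to a Cholesky factorization or conjugate gradients.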
The resulting linear system of equations is of the form A u = b. This system is symmetric positive definite (SPD). Thus it can be solved either using a sparse Cholesky (LU) factorization (linsolver_type=direct) or through conjugate gradient iterations with an AMG preconditioner (linsolver_type=iterative).
The Continuous Lagrangian Multiplier (CLM) method was then chosen. This is heavily inspired by Mortar methods, and is referred to as "Mortar" for simplicity in the application. Strictly speaking, however, it is not a Mortar method in the original sense of the term, so we coined the name CLM to make this clear.
Here we work directly with the boundary integral. The original linear system of equations is augmented with the requirement that
∫_Γ λ (u_L - u_R) dΓ = 0 for all multipliers λ,
where u_L and u_R refer to the displacements on the left and right boundary, respectively.
Thus far this is a vanilla Mortar approach. What distinguishes our method from a real Mortar method is our choice of mesh for the multiplier λ. In a normal Mortar approach, the multiplier mesh is taken as the footprint/intersection mesh of one boundary on the other. This would be quite a challenge to implement reliably for the highly degenerate meshes we have to deal with. Thus, a different choice is made here: the interface mesh is taken as the columns of the corner-point grid. This simplifies the computations substantially, since while the boundary meshes do not match in general, they will always both be subdivisions of the columns. We can thus assemble the operator corresponding to the constraints column by column in a straightforward manner.
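The column-wise assembly can be sketched in 1D: each "column" carries its own multiplier, and the constraint operator integrates the displacement basis functions over each column. For illustration the sketch uses piecewise-constant multipliers and a simple midpoint rule (the actual code uses higher-order multipliers); the meshes below are made up:

```python
import numpy as np

def hat(x, nodes, j):
    """Piecewise-linear hat function of node j on a 1D mesh."""
    return np.interp(x, nodes, np.eye(len(nodes))[j])

def constraint_block(nodes, columns, nq=2000):
    """B[i, j] = integral over column i of hat function j,
    approximated with a midpoint rule (illustrative only)."""
    B = np.zeros((len(columns) - 1, len(nodes)))
    for i in range(len(columns) - 1):
        a, b = columns[i], columns[i + 1]
        xq = a + (b - a) * (np.arange(nq) + 0.5) / nq  # quadrature points
        w = (b - a) / nq                                # quadrature weight
        for j in range(len(nodes)):
            B[i, j] = np.sum(hat(xq, nodes, j)) * w
    return B

cols = np.array([0.0, 0.5, 1.0])              # multiplier mesh: grid columns
xL = np.array([0.0, 0.25, 0.5, 1.0])          # left boundary mesh
xR = np.array([0.0, 0.3, 0.5, 0.8, 1.0])      # non-matching right boundary mesh
BL = constraint_block(xL, cols)
BR = constraint_block(xR, cols)
```

Even though the two boundary meshes do not match each other, both are subdivisions of the columns, so the blocks BL and BR can be assembled column by column; by the partition of unity of the hat functions, each row of B sums to the width of its column.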
By default, the multipliers are linear in 'u' and quadratic in 'v'. One can change the order of the multipliers in the 'v' direction through the 'lambda(x|y)' command line parameter. Note that x and y refer to the *constant* coordinate on the cube face. The multipliers are always linear in 'u'.
The resulting system of equations is of the form
[ A  B' ] [ u ]   [ b ]
[ B  0  ] [ λ ] = [ 0 ]
The code is not parallelized for distributed-memory computers. However, it is parallelized for shared-memory computers through the use of OpenMP. You can control the number of threads as usual, i.e. through the environment variable OMP_NUM_THREADS, e.g.
OMP_NUM_THREADS=3 upscale_elasticity gridfilename=file.grdecl output=file.log
will run the computation using 3 threads.
The program first assembles the linear operators in a single-threaded fashion. It then solves for multiple load cases in a multi-threaded fashion.
Note to sysadmins: you should set the affinity of the OpenMP threads to ensure we utilize the maximum memory bandwidth available on the system, i.e. spread them across different physical cores. For GNU OpenMP this can be controlled through the GOMP_CPU_AFFINITY environment variable.
There are two classes of linear solvers available, either a direct or an iterative solver. By default the iterative solver is used, see the linsolver_type parameter.
A short description of the iterative solver follows.
With the MPC method, the linear system is symmetric and positive definite, and conjugate gradient iterations are appropriate. As the preconditioner, algebraic multigrid (AMG) is employed. Currently, the AMG is configured with 2 pre-smoothing and 2 post-smoothing steps, SSOR is used as the smoother, and a direct coarse-level solver is employed.
With the Mortar method, the system is symmetric and indefinite. For this, a MINRES solver is appropriate. Optionally, an asymmetric preconditioner can be used; in that case the GMRES method is used, since MINRES cannot handle asymmetric preconditioners. The preconditioner used is of Schur type, i.e.
P = [ A  B' ]
    [ 0  S  ]
where S is an approximation to the Schur complement; the approximation used is selected with the mortar_precond command line parameter ('schuramg' or 'schurdiag').
If a symmetric solver is used, the coupling block in the first row is dropped, leaving a block-diagonal preconditioner. The preconditioner is formed explicitly and inverted using a sparse LU factorization.
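The effect of a Schur-type preconditioner can be illustrated on a small random saddle-point system: with the symmetric (block-diagonal) variant and the exact Schur complement, a classical result of Murphy, Golub and Wathen says the preconditioned operator has only the three eigenvalues 1 and (1 +/- sqrt(5))/2. The numpy sketch below is a toy demonstration of that property, not the tool's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD elasticity-like block
B = rng.standard_normal((m, n))      # multiplier (constraint) block

# Saddle-point system [[A, B'], [B, 0]]: symmetric but indefinite,
# hence MINRES rather than conjugate gradients.
Ksys = np.block([[A, B.T], [B, np.zeros((m, m))]])
eig = np.linalg.eigvalsh(Ksys)
assert eig.min() < 0 < eig.max()     # indefiniteness

# Symmetric (block-diagonal) Schur preconditioner with the exact
# Schur complement S = B A^{-1} B'.
S = B @ np.linalg.solve(A, B.T)
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])
lam = np.sort(np.linalg.eigvals(np.linalg.solve(P, Ksys)).real)
# Eigenvalues of the preconditioned operator cluster at 1 and (1 +/- sqrt(5))/2.
```

In practice the exact Schur complement is too expensive to form, which is why the code works with approximations to it; the better the approximation, the closer the spectrum stays to these three values and the faster the Krylov solver converges.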