%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
import numpy as np
import pandas as pd
import sys
import os
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import urllib.request
import gstlearn as gl
import gstlearn.plot as gp
import gstlearn.plot3D as gop
The well information consists of the coordinates of the well intercepts with the top and bottom surfaces of the target reservoir. It is contained in the ASCII file Wells.dat. In addition, a selection is provided which distinguishes the appraisal wells from the other ones.
The ASCII file is loaded. It contains 189 wells covering a field with an extension of 10 by 15 kilometers. The depth is oriented downwards, with the top varying from -983 m to -1030 m and the bottom from -1036 m to -1067 m.
url = 'https://soft.minesparis.psl.eu/gstlearn/data/Chamaya/Wells.dat'
filepath, head = urllib.request.urlretrieve(url)
mydb = gl.Db.createFromCSV(filepath,gl.CSVformat())
mydb.setLocators(["X","Y"],gl.ELoc.X)
mydb
Data Base Characteristics
=========================

Data Base Summary
-----------------
File is organized as a set of isolated points
Space dimension         = 2
Number of Columns       = 6
Total number of samples = 189

Variables
---------
Column = 0 - Name = rank - Locator = NA
Column = 1 - Name = X - Locator = x1
Column = 2 - Name = Y - Locator = x2
Column = 3 - Name = top - Locator = NA
Column = 4 - Name = bot - Locator = NA
Column = 5 - Name = appraisal - Locator = NA
dbfmt = gl.DbStringFormat.createFromFlags(True,True,True,True)
mydb.display(dbfmt)
Data Base Characteristics
=========================

Data Base Summary
-----------------
File is organized as a set of isolated points
Space dimension         = 2
Number of Columns       = 6
Total number of samples = 189

Data Base Extension
-------------------
Coor #1 - Min = 50.000 - Max = 9950.000 - Ext = 9900
Coor #2 - Min = 150.000 - Max = 14950.000 - Ext = 14800

Data Base Statistics
--------------------
1 - Name rank - Locator NA
 Nb of data          =        189
 Nb of active values =        189
 Minimum value       =      1.000
 Maximum value       =    189.000
 Mean value          =     95.000
 Standard Deviation  =     54.559
 Variance            =   2976.667
2 - Name X - Locator x1
 Nb of data          =        189
 Nb of active values =        189
 Minimum value       =     50.000
 Maximum value       =   9950.000
 Mean value          =   4893.386
 Standard Deviation  =   2909.303
 Variance            = 8464043.560
3 - Name Y - Locator x2
 Nb of data          =        189
 Nb of active values =        189
 Minimum value       =    150.000
 Maximum value       =  14950.000
 Mean value          =   8163.757
 Standard Deviation  =   4490.427
 Variance            = 20163937.740
4 - Name top - Locator NA
 Nb of data          =        189
 Nb of active values =        189
 Minimum value       =  -1030.760
 Maximum value       =   -983.260
 Mean value          =  -1007.963
 Standard Deviation  =     10.246
 Variance            =    104.980
5 - Name bot - Locator NA
 Nb of data          =        189
 Nb of active values =        189
 Minimum value       =  -1067.370
 Maximum value       =  -1036.050
 Mean value          =  -1050.542
 Standard Deviation  =      6.903
 Variance            =     47.646
6 - Name appraisal - Locator NA
 Nb of data          =        189
 Nb of active values =        189
 Minimum value       =      0.000
 Maximum value       =      1.000
 Mean value          =      0.265
 Standard Deviation  =      0.441
 Variance            =      0.195

Variables
---------
Column = 0 - Name = rank - Locator = NA
Column = 1 - Name = X - Locator = x1
Column = 2 - Name = Y - Locator = x2
Column = 3 - Name = top - Locator = NA
Column = 4 - Name = bot - Locator = NA
Column = 5 - Name = appraisal - Locator = NA
The data set is displayed: the well locations are colored according to the appraisal indicator, which distinguishes the appraisal wells from the other ones.
fig, ax = gp.initGeographic()
ax.symbol(mydb,nameColor="appraisal")
ax.decoration(title="Geometry Information")
plt.show()
Obviously, all wells have penetrated the whole reservoir layer and therefore we can derive the layer thickness at each vertical well location. The thickness varies from 12.21m to 64.25m, as represented in the following histogram.
mydb["thick"] = mydb["top"] - mydb["bot"]
fig, ax = gp.init()
ax.histogram(mydb, name="thick", bins=30)
plt.show()
This step requires the prior definition of a grid used for mapping the surfaces, whose extension and cell dimensions must be chosen carefully: the choice is a trade-off between the computation time required for mapping the surfaces and an unnecessarily fine grid mesh.
For this study, the optimal choice is a grid with a square mesh of 100m edge, with 101 nodes along X and 151 nodes along Y, covering a surface of 10km along X and 15km along Y. The origin of the grid (lower left corner) is located at 50m along X and 50m along Y.
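As a quick sanity check (plain Python arithmetic, independent of gstlearn), the number of nodes along each axis follows directly from the field extent and the chosen mesh size:
# Field extent (m) and mesh size (m) retained above
extent_x, extent_y = 10000, 15000
mesh = 100
# Number of grid nodes = extent / mesh + 1 (nodes lie on both field edges)
nx = extent_x // mesh + 1   # 101
ny = extent_y // mesh + 1   # 151
print(nx, ny)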
mygrid = gl.DbGrid.create(nx=[101,151],dx=[100,100],x0=[50,50])
mygrid
Data Base Grid Characteristics ============================== Data Base Summary ----------------- File is organized as a regular grid Space dimension = 2 Number of Columns = 3 Total number of samples = 15251 Grid characteristics: --------------------- Origin : 50.000 50.000 Mesh : 100.000 100.000 Number : 101 151 Variables --------- Column = 0 - Name = rank - Locator = NA Column = 1 - Name = x1 - Locator = x1 Column = 2 - Name = x2 - Locator = x2
The grid (where only one cell out of 5 is drawn, for better legibility) and the well headers are displayed.
fig, ax = gp.initGeographic()
ax.cell(mygrid,color='black',step=5)
ax.symbol(mydb)
plt.show()
In this phase, we face the usual problem of estimating linked variables, such as the top, the bottom and the thickness of a layer. The principle consists in processing the correlated variables jointly and the uncorrelated ones separately. The top (or the bottom) and the thickness are usually less correlated than the top and the bottom are with each other; we then recommend estimating the top (respectively the bottom) and the thickness separately, and simply deriving the bottom (respectively the top). Another solution would be to jointly estimate the top and the bottom using a multivariate procedure: this may become difficult if both variables are non-stationary.
Here, and for the sake of illustration, we will process the three variables (top, bottom and thickness) independently.
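Before doing so, it may help to look at the pairwise correlations between the three linked variables (a small sketch: as in the cell creating thick above, indexing the Db by variable name is assumed to return a NumPy-compatible array, and np.ravel is used in case it comes back as a column array):
# Pairwise correlations between the three linked variables,
# keeping in mind the algebraic relation: thick = top - bot
for a, b in [("top", "bot"), ("top", "thick"), ("bot", "thick")]:
    r = np.corrcoef(np.ravel(mydb[a]), np.ravel(mydb[b]))[0, 1]
    print(f"correlation({a}, {b}) = {r:.3f}")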
We compute the experimental variogram of the top variable, calculated in 4 conventional directions (N0, N45, N90 and N135), for 15 lags of 500m each.
mydb.setLocator("top", gl.ELoc.Z)
varioparam = gl.VarioParam.createMultiple(ndir=4, npas=15, dpas=500)
vario_top = gl.Vario.computeFromDb(varioparam, mydb)
The variogram is displayed for all calculation directions.
fig, ax = gp.init()
ax.variogram(vario_top,idir=-1,flagLegend=True)
plt.show()
We check that the top variable is anisotropic, with the calculation directions pairing up two by two: N90 and N135 on the one hand (higher variability), and N0 and N45 on the other hand (lower variability). As the curves are grouped two by two, this proves that the main anisotropy axis does not coincide with any of the calculated directions, but rather lies along an intermediate one.
Another possibility is to calculate the variogram map. We obtain the following figure:
vmap_top = gl.db_vmap_compute(mydb,gl.ECalcVario.VARIOGRAM,[20,20])
fig, ax = gp.initGeographic()
ax.raster(vmap_top, "*Var")
ax.decoration(title="Variogram Map for Top")
plt.show()
This clearly shows the low variability direction around 20 degrees.
The directional experimental variogram is then used for fitting the model.
model_top = gl.Model()
err = model_top.fit(vario_top)
model_top
Model characteristics
=====================
Space dimension              = 2
Number of variable(s)        = 1
Number of basic structure(s) = 1
Number of drift function(s)  = 0
Number of drift equation(s)  = 0

Covariance Part
---------------
Spherical
- Sill    =    157.461
- Ranges  =  24037.055   5003.649
- Angles  =     21.202      0.000
- Rotation Matrix
            [,  0]   [,  1]
   [  0,]    0.932   -0.362
   [  1,]    0.362    0.932
Total Sill =   157.461
The model is composed of a single anisotropic spherical basic structure: the anisotropy ellipsoid is rotated (its major axis is oriented at 21 degrees) and its ranges are respectively equal to 24 km and 5 km, as shown in the next figure.
fig, ax = gp.init()
ax = gp.varmod(vario_top, model_top)
plt.show()
We compute the experimental variogram of the bottom variable, calculated in 4 conventional directions for 15 lags of 500m each.
mydb.setLocator("bot", gl.ELoc.Z)
varioparam = gl.VarioParam.createMultiple(ndir=4, npas=15, dpas=500)
vario_bot = gl.Vario.computeFromDb(varioparam, mydb)
fig, ax = gp.init()
ax.variogram(vario_bot,idir=-1,flagLegend=True)
plt.show()
The corresponding variogram map is represented next.
vmap_bot = gl.db_vmap_compute(mydb,gl.ECalcVario.VARIOGRAM,[20,20])
fig, ax = gp.initGeographic()
ax.raster(vmap_bot, "*Var")
ax.decoration(title="Variogram Map for Bottom")
plt.show()
This directional experimental variogram is used for fitting the model.
types = gl.ECov.fromKeys(["SPHERICAL","EXPONENTIAL"])
model_bot = gl.Model()
err = model_bot.fit(vario_bot, types)
model_bot
Model characteristics
=====================
Space dimension              = 2
Number of variable(s)        = 1
Number of basic structure(s) = 2
Number of drift function(s)  = 0
Number of drift equation(s)  = 0

Covariance Part
---------------
Spherical
- Sill         =     11.914
- Ranges       =   2811.249    986.432
- Angles       =     25.340      0.000
- Rotation Matrix
            [,  0]   [,  1]
   [  0,]    0.904   -0.428
   [  1,]    0.428    0.904
Exponential
- Sill         =    103.277
- Ranges       =  95677.717  24735.006
- Theo. Ranges =  31938.010   8256.749
- Angles       =    353.264      0.000
- Rotation Matrix
            [,  0]   [,  1]
   [  0,]    0.993    0.117
   [  1,]   -0.117    0.993
Total Sill     =    115.190
The nested model is composed of two anisotropic basic structures with (different) rotations of the anisotropy ellipsoids. The fitting is shown in the next figure.
fig, ax = gp.init()
ax = gp.varmod(vario_bot, model_bot)
plt.show()
We compute the experimental variogram of the thickness variable, calculated in 4 conventional directions for 15 lags of 500m each.
mydb.setLocator("thick", gl.ELoc.Z)
varioparam = gl.VarioParam.createMultiple(ndir=4, npas=15, dpas=500)
vario_thick = gl.Vario.computeFromDb(varioparam, mydb)
fig, ax = gp.init()
ax.variogram(vario_thick,idir=-1,flagLegend=True)
plt.show()
Next comes the calculation of the variogram map.
vmap_thick = gl.db_vmap_compute(mydb,gl.ECalcVario.VARIOGRAM,[20,20])
fig, ax = gp.initGeographic()
ax.raster(vmap_thick, "*Var")
ax.decoration(title="Variogram Map for Thickness")
plt.show()
The directional experimental variogram is used for fitting a model.
types = gl.ECov.fromKeys(["SPHERICAL"])
model_thick = gl.Model()
err = model_thick.fit(vario_thick, types)
model_thick
Model characteristics
=====================
Space dimension              = 2
Number of variable(s)        = 1
Number of basic structure(s) = 1
Number of drift function(s)  = 0
Number of drift equation(s)  = 0

Covariance Part
---------------
Spherical
- Sill    =    124.911
- Ranges  =   9102.815   4618.492
- Angles  =     23.011      0.000
- Rotation Matrix
            [,  0]   [,  1]
   [  0,]    0.920   -0.391
   [  1,]    0.391    0.920
Total Sill =   124.911
The experimental variogram and the fitted model are displayed together.
fig, ax = gp.init()
ax = gp.varmod(vario_thick, model_thick)
plt.show()
The different variables will be estimated at the nodes of the grid created beforehand, using several interpolation methods. We will focus on the thickness variable only. Before kriging, we will use some traditional techniques, such as:
• the nearest neighbor
• moving average
• moving median
Except for the moving median, all these methods are linear techniques: for each target site, the estimated value is obtained as a linear combination of the data values at the surrounding data points.
Note that all these methods require the definition of a neighborhood (the selection of the data used for the estimation of each grid node). For a better understanding of the resulting maps, the data information is overlaid using a proportional representation.
Moreover, if a model is defined, we can also evaluate the variance of the estimation error for each method (in fact, we display the standard deviation instead).
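The following toy sketch (NumPy only, with made-up coordinates and values, independent of gstlearn) illustrates the linear-combination principle shared by the nearest neighbor and the moving average: only the weights change from one method to the other.
# Toy illustration of the linear-combination principle (hypothetical values)
data_xy = np.array([[100., 200.], [400., 250.], [300., 600.]])   # data coordinates
data_z  = np.array([20., 35., 28.])                              # data values
target  = np.array([250., 300.])                                 # target grid node
dist = np.linalg.norm(data_xy - target, axis=1)
# Nearest neighbor: weight 1 on the closest datum, 0 elsewhere
w_nn = np.zeros_like(data_z)
w_nn[np.argmin(dist)] = 1.
# Moving average: equal weights on all selected data
w_avg = np.full_like(data_z, 1. / len(data_z))
for name, w in [("nearest neighbor", w_nn), ("moving average", w_avg)]:
    print(name, "estimate:", np.dot(w, data_z))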
model = gl.Model.createFromParam(type = gl.ECov.CUBIC, range=2000)
The principle is to set the value of each grid node at the value of the closest data point.
err = gl.nearestNeighbor(mydb, mygrid, flag_std=True, model=model)
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="Nearest.thick.estim",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Nearest Neighbor")
plt.show()
As mentioned, we can produce the map of the standard deviation of the estimation error.
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="Nearest.thick.stdev",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Nearest Neighbor (stdev)")
plt.show()
The method assigns the average of the subset of data selected in the neighborhood centered on the target grid node. The first neighborhood consists of the five closest data selected within a circle of 5 km radius centered on the grid node.
neigh_small = gl.NeighMoving.create(radius=5000, nmaxi=5)
err = gl.movingAverage(mydb, mygrid, neigh_small, flag_std=True, model=model)
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="MovAve.thick.estim",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Moving Average (Small Neighborhood)")
plt.show()
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="MovAve.thick.stdev",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Moving Average (Small Neighborhood) (stdev)")
plt.show()
The same technique is used with a neighborhood increased to the 20 closest data points. The result is an obviously smoother map.
neigh_large = gl.NeighMoving.create(radius=5000, nmaxi=20)
err = gl.movingAverage(mydb, mygrid, neigh_large, flag_std=True, model = model,
namconv=gl.NamingConvention("MovAve_Large"))
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="MovAve_Large.thick.estim",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Moving Average (Large Neighborhood)")
plt.show()
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="MovAve_Large.thick.stdev",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Moving Average (Large Neighborhood) (stdev)")
plt.show()
The method assigns the value obtained as the weighted average of the information. The weights are calculated as the inverse of the distance between a datum and the target, raised to a given exponent (generally 2). The neighboring points have been limited to a distance of 2000 m from the target in order to keep the computation of the variance map reasonably fast.
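A minimal sketch of this weighting scheme (NumPy only, with the same made-up values as in the toy example above; this is not the gstlearn implementation):
# Toy illustration of inverse (squared) distance weighting (hypothetical values)
data_xy = np.array([[100., 200.], [400., 250.], [300., 600.]])
data_z  = np.array([20., 35., 28.])
target  = np.array([250., 300.])
dist = np.linalg.norm(data_xy - target, axis=1)
w = 1. / dist**2          # inverse distance raised to the exponent 2
w /= w.sum()              # normalize so that the weights sum to 1
print("IDW estimate:", np.dot(w, data_z))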
err = gl.inverseDistance(mydb, mygrid, dmax = 2000, flag_std=True, model=model)
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="InvDist.thick.estim",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Inverse (squared) distance")
plt.show()
The map of the standard deviation of the estimation error is displayed next.
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="InvDist.thick.stdev",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Inverse (squared) distance (stdev)")
plt.show()
The method is quite similar to the moving average, except that the median is assigned rather than the average. As in the previous paragraph, we first define a small neighborhood composed of the 5 closest data.
err = gl.movingMedian(mydb, mygrid, neigh_small, flag_std=True, model=model)
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="MovMed.thick.estim",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Moving Median (Small Neighborhood)")
plt.show()
The corresponding map of the standard deviation of the estimation error is displayed next.
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="MovMed.thick.stdev",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Moving Median (Small Neighborhood) (stdev)")
plt.show()
The same technique is used with a neighborhood increased to the 20 closest data points. The result is an obviously smoother map.
err = gl.movingMedian(mydb, mygrid, neigh_large, flag_std=True, model=model,
namconv=gl.NamingConvention("MovMed_Large"))
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="MovMed.thick.estim",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Moving Median (Large Neighborhood)")
plt.show()
The map of the standard deviation of the estimation error is displayed next.
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="MovMed.thick.stdev",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Moving Median (Large Neighborhood) (stdev)")
plt.show()
surface = gop.SurfaceOnDbGrid(mygrid, "MovMed_Large.thick.estim", showscale=False)
fig = go.Figure(data = [ surface ])
fig.update_layout(autosize=True,
scene_aspectmode='manual', scene_aspectratio=dict(x=1,y=1,z=0.2))
f = fig.show()
The Kriging method is a procedure which uses the model fitted above to produce the optimal estimation at each target grid node, based on a linear combination of the values measured at the samples selected in the neighborhood.
The model used for fitting the thickness variable has been fitted earlier.
We must now define the parameters of the neighborhood search, where the neighborhood designates the subset of samples used for performing the estimation of one target grid node.
We could imagine using systematically all the available information when processing each grid node: this is referred to as the unique neighborhood. But, as the dimension of the kriging system is given by the number of samples in the neighborhood, the unique neighborhood is no longer relevant when the number of samples becomes too large (a decent limitation is around 300 samples).
We must then define a neighborhood in which the selection of the subset of data moves together with the target grid node, hence the name of moving neighborhood. The moving neighborhood is characterized by a set of parameters that must be defined.
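For reference, the two neighborhood types are created as follows (a sketch only: gl.NeighUnique.create() is assumed to take no mandatory argument, and these demonstration objects are not reused below):
# Unique neighborhood: all samples are used for every target node
# (the kriging system dimension equals the total number of samples)
neigh_unique_demo = gl.NeighUnique.create()
# Moving neighborhood: only the samples selected around the current target
# node are used; its parameters are detailed in the next cells
neigh_moving_demo = gl.NeighMoving.create(radius=2000, nmaxi=8)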
In the following figures, we will check the selection of the samples in the neighborhood for the target grid node (IX=45 and IY=80), with the corresponding kriging weights (expressed in percentage).
In the first trial, we limit the extension of the neighborhood to a circle (of radius equal to 2000m) centered on the target grid node, and an optimum number of samples equal to 8. In the next figure, we can check that the 8 closest data are selected. Some samples are not selected although they lie within the circle.
neigh_moving = gl.NeighMoving.create(radius=2000, nmaxi=8)
node = 8030
res = gl.krigtest(mydb, mygrid, model_thick, neigh_moving, iech0=node, verbose=False)
fig, ax = gp.initGeographic()
ax.symbol(mydb, c='black')
ax.neigh(neigh_moving, mygrid, node, flagZoom=True)
ax.neighWeights(res)
plt.show()
Among a large number of other parameters, let us mention the possibility of ensuring that the selected samples are spread more regularly around the target by splitting the neighborhood circle into a number of angular sectors. As an example, we define a moving neighborhood composed of the 2 closest samples per angular sector within a circle of radius of 2000m, centered on the target grid node and split into 6 angular sectors.
neigh_moving = gl.NeighMoving.create(radius=2000, nsect=6, nsmax=2)
res = gl.krigtest(mydb, mygrid, model_thick, neigh_moving, iech0=node, verbose=False)
fig, ax = gp.initGeographic()
ax.symbol(mydb, c='black')
ax.neigh(neigh_moving, mygrid, node, flagZoom=True)
ax.neighWeights(res)
plt.show()
We can check that:
• at most two samples are selected in each angular sector; in some angular sectors, the only sample present is selected,
• some samples are still not selected, although they lie within the neighborhood circle,
• the two closest data still receive large weights
• there are some negative weights.
We recall the model that has been used for the estimation of the thickness.
fig, ax = gp.init()
ax = gp.varmod(vario_thick, model_thick, showPairs=True)
plt.show()
We notice that the first lags are calculated from only a few pairs. We can imagine a second fit, introducing a nugget effect component together with a spherical basic structure.
types = gl.ECov.fromKeys(["NUGGET","SPHERICAL"])
model_thick2 = gl.Model()
err = model_thick2.fit(vario_thick, types)
fig, ax = gp.init()
ax = gp.varmod(vario_thick, model_thick2, showPairs=True)
plt.show()
We can use the second model for the estimation of the same target node and check for the new kriging weights.
res = gl.krigtest(mydb, mygrid, model_thick2, neigh_moving, iech0=node, verbose=False)
fig, ax = gp.initGeographic()
ax.symbol(mydb, c='black')
ax.neigh(neigh_moving, mygrid, node, flagZoom=True)
ax.neighWeights(res)
plt.show()
We can check that:
• the weights attached to the closest data have decreased
• there are no negative weights anymore: this is one of the usual impacts of a model containing a nugget effect component.
The cross-validation is a procedure which allows checking the consistency between the data information, the fitted model and the neighborhood. For the following test, we will use the model with no nugget effect.
The principle is to consider each datum in turn and to estimate it (by kriging) from the other samples in its neighborhood. We can then compare the true value of the target sample to its estimated value. We can also consider the standardized error, calculated as the ratio between this experimental error and the standard deviation of the estimation error (or modeled error). In a Gaussian framework, only about 1% of the values should lie outside the interval [-2.5; 2.5]: they will be called outliers.
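As a reminder of where this threshold comes from (a small check using scipy.stats, assumed available in the environment, rather than gstlearn), the probability for a standard Gaussian variable to fall outside [-2.5; 2.5] is:
from scipy.stats import norm
# Two-sided probability for a standard Gaussian to exceed 2.5 in absolute value
p_out = 2 * (1 - norm.cdf(2.5))
print(f"P(|standardized error| > 2.5) = {p_out:.2%}")   # about 1.2%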
The cross-validation procedure results in a set of graphics such as:
• the base map pointing out the outliers,
• the histogram of the standardized errors: it should be close to a Gaussian distribution with few samples in the extreme quantiles,
• the scatter plot of the true values versus the estimated values: it should be close to the first bisector,
• the scatter plot between the standardized error and the estimated values: it should show no correlation (in the simple kriging case).
err = gl.xvalid(mydb, model_thick, neigh_moving, flag_xvalid_est=-1)
fig, axs = plt.subplots(2,2, figsize=(10,10))
axs[0,0].symbol(mydb,nameSize="Xvalid.*.stderr", flagAbsSize=True)
axs[0,0].decoration(title="Standardized Errors (absolute value)")
axs[0,0].set_aspect(1)
axs[0,1].histogram(mydb, name="Xvalid.*.stderr", bins=20)
axs[0,1].axvline(-2.5, color='black', linestyle='dashed')
axs[0,1].axvline(2.5, color='black', linestyle='dashed')
axs[0,1].decoration(title="Histogram of Standardized Errors")
axs[1,0].correlation(mydb, namey="Xvalid.*.stderr", namex="Xvalid.*.estim",
asPoint=True)
axs[1,0].decoration(xlabel="Estimation", ylabel="Standardized Error")
axs[1,0].axhline(-2.5, color='black', linestyle='dashed')
axs[1,0].axhline( 0, color='black', linestyle='solid')
axs[1,0].axhline( 2.5, color='black', linestyle='dashed')
axs[1,1].correlation(mydb, namey="thick", namex="Xvalid.*.estim",
asPoint=True, diagLine=True, flagSameAxes=True)
axs[1,1].decoration(xlabel="Estimation", ylabel="True Value")
plt.show()
The thickness variable is estimated over the whole grid using a slightly enlarged neighborhood (5 data per angular sector within a circle of 5 km radius).
The estimation is performed using the first model (without nugget effect) and presented. We clearly see the anisotropy which is responsible for the main elongated bodies throughout the grid.
mydb.setLocator("thick", gl.ELoc.Z)
neigh_moving = gl.NeighMoving.create(radius=5000, nsect=6, nsmax=5)
err = gl.kriging(mydb, mygrid, model_thick, neigh_moving)
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="Kriging.thick.estim",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Thickness: Estimation")
plt.show()
We now display the map presenting the standard deviation of the estimation error.
Obviously, the standard deviation of the error varies from 0 at data locations up to 9.14 in extrapolated areas. The elongated aspect comes from the anisotropy of the model.
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="Kriging.thick.stdev",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Thickness: Standard deviation of Estimation error")
plt.show()
As mentioned before, in order to get consistent results, it makes sense to estimate two of the three variables (from top, bottom and thickness variables) jointly. In practice, we usually process the top (or the bottom) together with the thickness.
Both simple variograms and the cross-variogram are modeled in the scope of the linear model of coregionalization.
mydb.setLocators(["top","thick"],gl.ELoc.Z)
vario_multi = gl.Vario.computeFromDb(varioparam, mydb)
types = gl.ECov.fromKeys(["EXPONENTIAL", "SPHERICAL"])
model_multi = gl.Model()
err = model_multi.fit(vario_multi, types)
The model is composed of a spherical basic structure and an exponential component. Both of them are anisotropic.
model_multi
Model characteristics
=====================
Space dimension              = 2
Number of variable(s)        = 2
Number of basic structure(s) = 2
Number of drift function(s)  = 0
Number of drift equation(s)  = 0

Covariance Part
---------------
Exponential
- Sill matrix:
            [,  0]   [,  1]
   [  0,]    4.561   17.075
   [  1,]   17.075   63.923
- Ranges       =   6933.278   6452.664
- Theo. Ranges =   2314.385   2153.952
- Angles       =     23.709      0.000
- Rotation Matrix
            [,  0]   [,  1]
   [  0,]    0.916   -0.402
   [  1,]    0.402    0.916
Spherical
- Sill matrix:
            [,  0]   [,  1]
   [  0,]  151.897  100.451
   [  1,]  100.451   69.070
- Ranges       =  23190.112   4994.942
- Angles       =     22.305      0.000
- Rotation Matrix
            [,  0]   [,  1]
   [  0,]    0.925   -0.380
   [  1,]    0.380    0.925
Total Sill
            [,  0]   [,  1]
   [  0,]  156.458  117.526
   [  1,]  117.526  132.992
fig, axs = gp.init(2,2)
fig.varmod(vario_multi, model_multi)
plt.show()
The Top and Thickness variables are estimated jointly using a co-kriging procedure. As for the kriging in a previous paragraph, we use a moving neighborhood composed of the 5 closest points per angular sector, with 6 sectors and a maximum search circle of 5 km radius. This leads to co-kriging systems with a maximum dimension of 60 equations (plus the two universality conditions).
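The maximum system dimension quoted above follows from simple counting (plain Python):
# 5 samples per angular sector x 6 sectors, one equation per sample and per variable
nmax_samples = 5 * 6
nvar = 2
print("maximum number of co-kriging equations:", nmax_samples * nvar)   # 60 (+ 2 universality conditions)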
err = gl.kriging(mydb, mygrid, model_multi, neigh_moving, namconv=gl.NamingConvention("CoKriging"))
The cokriged thickness map is displayed in the next figure, together with the well locations where the layer thickness information was calculated.
fig, ax = gp.initGeographic()
ax.raster(mygrid, name="CoKriging.thick.estim",flagLegend=True)
ax.symbol(mydb, c='black')
ax.decoration(title="Thickness: Estimation (CoKriging)")
plt.show()
The bottom surface is simply derived by subtracting the estimated thickness map from the estimated top map. Unfortunately, this technique does not provide any information regarding the standard deviation of the estimation error of the bottom surface.
mygrid["CoKriging.bot.estim"] = mygrid["CoKriging.top.estim"] - mygrid["CoKriging.thick.estim"]
The jointly estimated surfaces can now be displayed using the 3-D viewer. In addition, the data information can be overlaid to check that this cokriging procedure produces exact interpolators (the top and bottom data are honored).
surf_top = gop.SurfaceOnDbGrid(mygrid, "CoKriging.top.estim")
surf_bot = gop.SurfaceOnDbGrid(mygrid, "CoKriging.bot.estim")
fig = go.Figure(data = [ surf_top, surf_bot])
f = fig.show()