SPDE simulation on a sphere
import gstlearn as gl
import gstlearn.plot as gp
import gstlearn.document as gdoc
import gstlearn.plot3D as gop
import numpy as np
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import IPython
from IPython.display import Markdown
gdoc.setNoScroll()
Markdown(gdoc.loadDoc("SPDE_Sphere.md"))
SPDE on the Sphere
The aim of this tutorial is to show how to use gstlearn to simulate the solution of
$$(\kappa^2 - \Delta_{S_R})^{\alpha/2} Z = \tau W$$
on the sphere $S_R$ of radius $R$.
$\Delta_{S_R}$ is the Laplace-Beltrami operator, i.e., it acts on each point of the sphere as the usual Laplacian on the tangent plane at this point.
$\kappa$ is the inverse of the scale parameter.
$\alpha \geq 2$ is an integer describing the smoothness of the solution.
$W$ is a Gaussian white noise suitably normalized so that $\tau^2$ is the variance of the solution.
In this notebook, we will define the Matérn covariance on the sphere as the covariance of the solution of this SPDE (other extensions of the Matérn function are possible). By analogy with the Euclidean case, its smoothness parameter will be defined by $\nu = \alpha - 1$. To compute the covariance function with respect to the geodetic distance, one has to use a decomposition on the Legendre polynomials (see below).
We also treat the more general case $P^{1/2}(-\Delta_{S_R})\, Z = \tau W$,
where $P$ is a polynomial which is positive on $\mathbb{R}^+$.
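As a side note (not part of the loaded documentation), the SPDE notation above can be related to the parameters used later in this notebook; a minimal sketch, assuming $\kappa = 1/\text{scale}$ and $\alpha = \nu + 1$ as stated above.
import gstlearn as gl
# Sketch: link between the SPDE notation and the notebook parameters
# (assumption: kappa = 1/scale and alpha = nu + 1, as stated above).
R = gl.EARTH_RADIUS
scale = 0.2 * R       # scale parameter (defined again below)
kappa = 1. / scale    # inverse of the scale parameter
nu = 2                # Matern smoothness (defined again below)
alpha = nu + 1        # integer exponent of the SPDE operator
print(f"kappa = {kappa:.3e}, alpha = {alpha}")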
# Markov (if False, Matérn covariance will be used)
Markov = True
Parametrization
#Sphere radius
R = gl.EARTH_RADIUS
gl.defineDefaultSpace(gl.ESpaceType.SN,param=R)
#Scale parameter (for convenience, it is defined as a proportion of the radius)
ratioRange = 0.2
scale = R * ratioRange
# sill
sill = 2.
# Smoothness parameter (for Matérn case)
nu = 2
# Markov coefficients (for Markov case)
coeffs = [1,-1,.5]
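The documentation above requires the polynomial $P$ to be positive on $\mathbb{R}^+$. As a quick sketch (assuming the Markov coefficients are given in increasing degree order), one can check this for the coefficients chosen here:
# Sketch: check that P(x) = 1 - x + 0.5 x^2 (built from coeffs, assumed to be
# given in increasing degree order) stays positive on R+.
x = np.linspace(0, 100, 10001)
P = np.polynomial.polynomial.polyval(x, coeffs)
print("Minimum of P on [0, 100]:", P.min())   # 0.5, reached around x = 1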
Meshing
mesh = gl.MeshSphericalExt()
err = mesh.resetFromDb(None,None,triswitch = "-r5",verbose=False)
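A quick look at the size of the resulting mesh (a small sketch; getNMeshes() is assumed to return the number of triangles):
# Sketch: inspect the spherical mesh built above.
print("Number of apices :", mesh.getNApices())
print("Number of meshes :", mesh.getNMeshes())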
Sampling Db creation
nsample = 4000
#sub-sampling to reduce computational burden
np.random.seed(123)
ind = np.random.choice(mesh.getNApices(),size=nsample,replace=False)
#Creation of the db
X = mesh.getCoordinatesPerApex(0)
Y = mesh.getCoordinatesPerApex(1)
dbdat = gl.Db.create()
dbdat["x"] = np.array(X)[ind]
dbdat["y"] = np.array(Y)[ind]
dbdat.setLocators(["x","y"],gl.ELoc.X)
varsize = np.ones(nsample)
iuid = dbdat.addColumns(varsize, "sizes")
varcolor = np.ones(nsample)
iuid = dbdat.addColumns(varcolor, "colors")
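As a sanity check (a small sketch, not part of the original workflow), the sub-sampled coordinates can be inspected; they are interpreted later as longitudes and latitudes by the 3-D display:
# Sketch: range of the sub-sampled coordinates.
lon = np.array(X)[ind]
lat = np.array(Y)[ind]
print("First coordinate range :", lon.min(), lon.max())
print("Second coordinate range:", lat.min(), lat.max())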
Covariance model
The covariance model is Markov or Matérn.
if Markov :
    model = gl.Model.createFromParam(type=gl.ECov.MARKOV,
                                     range = scale,
                                     sill = sill,
                                     flagRange= False)
    model.setMarkovCoeffs(0, coeffs)
else :
    model = gl.Model.createFromParam(type=gl.ECov.MATERN,
                                     range = scale,
                                     sill = sill,
                                     param=nu,
                                     flagRange= False)
Precision matrix
Q = gl.PrecisionOp(mesh,model.getCovAniso(0))
Simulation
result = np.array(Q.simulateOne())
Display the realization
surface = gop.SurfaceOnMesh(mesh, result,opacity=1)
fig = go.Figure(data=[ surface ])
fig.update_scenes(xaxis_visible=False, yaxis_visible=False,zaxis_visible=False )
f = fig.show()
Compute covariance (of discretized solution)
We use the fact that $\Sigma = Q^{-1}$ and solve $Q x = e_j$ for an arbitrary index $j$.
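To make this concrete, here is a minimal dense illustration with numpy (a toy 3x3 precision matrix, not the gstlearn object): the $j$-th column of $\Sigma = Q^{-1}$ is obtained by solving $Q x = e_j$.
# Toy illustration: extract one covariance column from a precision matrix.
Qtoy = np.array([[ 2., -1.,  0.],
                 [-1.,  2., -1.],
                 [ 0., -1.,  2.]])
j = 1
ej = np.zeros(3)
ej[j] = 1.
col_j = np.linalg.solve(Qtoy, ej)                      # j-th column of Sigma
print(np.allclose(col_j, np.linalg.inv(Qtoy)[:, j]))   # True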
Get the distances
ind0 = 12
distances = np.array(mesh.getDistances(ind0))
Compute the covariances
covDiscr = np.array(Q.computeCov(ind0))
Sort for the plot
covDiscrClose = covDiscr[np.argsort(distances)]
deltaLong = np.sort(distances)
print(f"Discretized Covariance = {round(covDiscrClose[0],4)}")
Discretized Covariance = 2.0393
Display the result
plt.plot(deltaLong,covDiscrClose,"--",label = "Discretized covariance")
ax = plt.legend()
print(f"Discretized variance = {round(covDiscrClose[0],4)}")
Discretized variance = 2.0393
Variogram of the realization
The empirical variogram is computed by using the great-circle distance.
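As a reminder (a small sketch with a hypothetical helper name, independent of gstlearn), the great-circle distance between two points given by their longitude and latitude in degrees can be computed with the haversine formula:
# Sketch: great-circle distance on a sphere of radius R (haversine formula).
def great_circle(lon1, lat1, lon2, lat2, radius=R):
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2)**2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2)**2)
    return 2. * radius * np.arcsin(np.sqrt(a))
print(great_circle(0, 0, 90, 0))   # a quarter of the equator, i.e. pi * R / 2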
nlag = 50 # number of discretization points
dlag = 0.04 # lag with respect to the unit sphere (it will be multiplied
# by R in the creation of the VarioParam).
dbdat["simu"] = np.array(result)[ind]
dbdat.setLocators(["simu"],gl.ELoc.Z)
#Variogram
vp = gl.VarioParam.createOmniDirection(nlag=nlag,dlag=dlag * R)
vario = gl.Vario.create(vp)
ax = vario.compute(dbdat,gl.ECalcVario.VARIOGRAM)
#vario.display()
res = gp.plot(vario, label = "Empirical Variogram", flagLegend=True)
Theoretical covariance function
Markdown(gdoc.loadDoc("Covariance_Sphere.md"))
Covariance on the Sphere
The covariance between two points with great-circle distance $d$ on the sphere of radius $R$ is given by
$$C(d) = \frac{\sigma^2}{\sum_{i=0}^{\infty} f(i)} \sum_{i=0}^{\infty} f(i)\, P_i\!\left(\cos\frac{d}{R}\right)$$
where the $P_i$'s are the Legendre polynomials computed with the following recurrence formula:
$P_0(x) = 1$
$P_1(x) = x$
$P_{n+1}(x) = \dfrac{(2n+1)\, x\, P_n(x) - n\, P_{n-1}(x)}{n+1}$
For $n \geq 0$, $f(n) = \dfrac{2n+1}{\left(R^2\kappa^2 + n(n+1)\right)^{\alpha}}$
For numerical computations, the sums are truncated at N.
For more details on covariances on the sphere, see Lantuéjoul, Freulon and Renard (2019).
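Before using the gstlearn evaluation below, here is a minimal numpy sketch of this truncated series for the Matérn form of $f(n)$ given above, with $\kappa = 1/\text{scale}$ and $\alpha = \nu + 1$ (note that the notebook's default run uses the Markov model, whose spectral weights $f(n)$ differ):
# Sketch: truncated Legendre series for the Matern covariance on the sphere.
def cov_sphere_matern(d, sigma2, kappa, alpha, R, N=50):
    x = np.cos(np.asarray(d, dtype=float) / R)
    n = np.arange(N + 1)
    f = (2 * n + 1) / (R**2 * kappa**2 + n * (n + 1))**alpha
    # Legendre polynomials P_n(x) by the recurrence given above
    P_prev, P_curr = np.ones_like(x), x
    total = f[0] * P_prev + f[1] * P_curr
    for k in range(1, N):
        P_next = ((2 * k + 1) * x * P_curr - k * P_prev) / (k + 1)
        total += f[k + 1] * P_next
        P_prev, P_curr = P_curr, P_next
    return sigma2 * total / f.sum()
# At d = 0, all P_n(1) = 1, so this normalization gives back the sill exactly.
print(cov_sphere_matern(0., sill, 1. / scale, nu + 1, R))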
Evaluation
ndisc = 100 # number of discretization steps for the covariance
N = 20 # size of the decomposition
h = np.linspace(0,np.max(deltaLong),ndisc)
a = model.getCovAniso(0)
uu = np.array([a.evalCovOnSphere(i,N) for i in h]) # modif dR
gp.plot(vario, label = "Empirical Variogram", flagLegend=True)
plt.plot(h, sill - uu,label = "Theoretical Variogram")
plt.plot(deltaLong,covDiscrClose[0] - covDiscrClose,"--",label = "Discretized model")
plt.show()
There is a slight difference between the theoretical variogram and the one obtained from the SPDE discretization, due to a numerical error on the variance introduced by the discretization. The comparison of the covariances shows that this numerical error is rather small:
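A quick way to quantify this (a small sketch) is to compare the discretized variance with the theoretical one:
# Sketch: size of the numerical error on the variance.
print(f"Discretized variance : {covDiscrClose[0]:.4f}")
print(f"Theoretical variance : {uu[0]:.4f}")
print(f"Relative difference  : {abs(covDiscrClose[0] - uu[0]) / uu[0]:.2%}")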
h = np.linspace(0,np.max(deltaLong),ndisc)
vario = gl.Vario.create(vp)
ax = vario.compute(dbdat,gl.ECalcVario.COVARIANCE)
#ax = gp.variogram(vario,label = "Empirical Covariance")
ax = plt.plot(h, uu, label = "Theoretical Covariance")
plt.plot(deltaLong,covDiscrClose,"--",label = "Discretized model")
ax = plt.legend()
plt.show()
print(f"Theoretical variance = {round(uu[0],4)}")
Theoretical variance = 1.9943
Kriging
Plotting the mesh and the data
dbdat
Data Base Characteristics
=========================

Data Base Summary
-----------------
File is organized as a set of isolated points
Space dimension = 2
Number of Columns = 5
Total number of samples = 4000

Variables
---------
Column = 0 - Name = x - Locator = x1
Column = 1 - Name = y - Locator = x2
Column = 2 - Name = sizes - Locator = NA
Column = 3 - Name = colors - Locator = NA
Column = 4 - Name = simu - Locator = z1
mesh = gl.MeshSphericalExt()
err = mesh.resetFromDb(None,None,triswitch = "-r2",verbose=False)
point = gop.PointDb(dbdat, size=1, nameColor = "simu", fromLongLat=True)
blank = gop.SurfaceOnMesh(mesh, opacity=1)
meshing = gop.Meshing(mesh)
fig = go.Figure(data = [blank, meshing, point])
fig.update_scenes(xaxis_visible=False, yaxis_visible=False, zaxis_visible=False )
f = fig.show()
ball = gl.Ball(mesh);
ball.display();
Ball Tree
=========
- Number of samples = 320
- Number of Features = 2
- Number of levels = 5
- Number of nodes = 31
- Size of leaf = 10
Highlight one sample and check the closest meshes
def highlight(sample = 0, verbose = False):
    if verbose:
        print("Rank of the Target sample =", sample)
    target = dbdat.getSampleCoordinates(sample)
    if verbose:
        print("Target = ", target)
    mymesh = ball.queryClosest(target)
    if verbose:
        print("Rank of the Target Mesh = ", mymesh)
    veclon = mesh.getCoordinatesPerMesh(mymesh, 0)
    veclat = mesh.getCoordinatesPerMesh(mymesh, 1)
    if verbose:
        print("Longitude = ", veclon)
        print("Latitude = ", veclat)
    return veclon, veclat
mysample = 5
veclon, veclat = highlight(mysample)
dbdat.setValue("sizes", mysample, 12)
dbdat.setValue("colors", mysample, 1)
point = gop.PointDb(dbdat, nameSize = "sizes", nameColor = "colors", fromLongLat=True)
blank = gop.SurfaceOnMesh(mesh, opacity=1,)
meshing = gop.Meshing(mesh)
scatter = gop.ScatterOnSphere(veclon, veclat, mode="markers", m_color='black', m_size=2)
fig = go.Figure(data = [blank, point, meshing, scatter])
fig.update_scenes(xaxis_visible=False, yaxis_visible=False, zaxis_visible=False )
f = fig.show()
dbdat.setValue("sizes", mysample, 1)
dbdat.setValue("colors", mysample, 1)
Aproj = gl.ProjMatrix(dbdat, mesh)
Aproj.dumpVerticesUsed(30) # Dump the weights for the first 30 samples
Vertices used in the projection matrix
--------------------------------------
(Display is limited to 30 samples)
Sample 0: 94 [ 0.37] 96 [ 0.50] 141 [ 0.12]
Sample 1: 24 [ 0.38] 83 [ 0.50] 110 [ 0.12]
Sample 2: 45 [ 0.25] 111 [ 0.12] 146 [ 0.62]
Sample 3: 109 [ 0.37] 133 [ 0.25] 144 [ 0.38]
Sample 4: 44 [ 0.13] 95 [ 0.25] 121 [ 0.62]
Sample 5: 31 [ 0.62] 143 [ 0.38]
Sample 6: 60 [ 0.25] 126 [ 0.50] 138 [ 0.25]
Sample 7: 91 [ 0.13] 108 [ 0.75] 113 [ 0.13]
Sample 8: 37 [ 0.50] 116 [ 0.25] 133 [ 0.25]
Sample 9: 55 [ 0.25] 79 [ 0.25] 81 [ 0.50]
Sample 10: 8 [ 0.62] 96 [ 0.25] 141 [ 0.12]
Sample 11: 116 [ 0.62] 133 [ 0.12] 148 [ 0.25]
Sample 12: 27 [ 0.12] 74 [ 0.25] 158 [ 0.62]
Sample 13: 5 [ 0.37] 27 [ 0.50] 101 [ 0.13]
Sample 14: 138 [ 1.00]
Sample 15: 5 [ 0.62] 27 [ 0.25] 56 [ 0.13]
Sample 16: 75 [ 0.00] 114 [ 0.75] 118 [ 0.25]
Sample 17: 44 [ 0.38] 95 [ 0.62] 121 [ 0.00]
Sample 18: 32 [ 0.62] 75 [ 0.38]
Sample 19: 25 [ 0.12] 30 [ 0.25] 66 [ 0.63]
Sample 20: 52 [ 0.37] 54 [ 0.50] 97 [ 0.12]
Sample 21: 125 [ 0.00] 142 [ 0.37] 161 [ 0.63]
Sample 22: 46 [ 0.13] 105 [ 0.75] 137 [ 0.13]
Sample 23: 16 [ 0.25] 19 [ 0.37] 57 [ 0.38]
Sample 24: 1 [ 0.12] 51 [ 0.12] 76 [ 0.75]
Sample 25: 3 [ 0.25] 63 [ 0.12] 83 [ 0.63]
Sample 26: 3 [ 0.38] 12 [ 0.37] 24 [ 0.25]
Sample 27: 10 [ 0.25] 23 [ 0.62] 51 [ 0.13]
Sample 28: 28 [ 0.50] 68 [ 0.12] 94 [ 0.37]
Sample 29: 52 [ 0.25] 67 [ 0.75]
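Each line of the dump above gives, for one data sample, the mesh apices of its containing triangle and the corresponding (barycentric) interpolation weights. As a toy sketch (hypothetical values, not the gstlearn internals), this is how such weights act when projecting a field defined on the mesh onto the data locations:
# Toy sketch: a sample value is the weighted average of the values at the
# three apices of its containing triangle (weights sum to 1, up to the
# rounding shown in the dump).
w = np.array([0.37, 0.50, 0.12])       # weights of Sample 0 above
z_apices = np.array([1.0, 2.0, 3.0])   # hypothetical values at apices 94, 96, 141
print("Projected value:", w @ z_apices, "- sum of weights:", w.sum())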