Block variances¶

In [1]:
import gstlearn as gl
import gstlearn.plot as gp
import gstlearn.widgets as gw
import gstlearn.document as gdoc

import ipywidgets as widgets
from ipywidgets import HBox, VBox
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
from IPython.display import Markdown

# Interactive backend, so the widget callbacks can redraw the figure
%matplotlib notebook

import math

We define a set of internal functions used in this script.

In [2]:
# Create the figure once, so that the global fig and ax variables exist
fig,ax = gp.init(1,2,figsize=(8,4))

### Functions
def params(nc):
    sill = 1
    ndisc = 20
    dx = dy = 120 / nc 
    return sill,ndisc,dx,dy

def simulate(rangev,nc):
    sill,_,_,_ = params(nc) 
    model = gl.Model.createFromParam(gl.ECov.EXPONENTIAL,range = rangev, sill = sill)
    db = gl.DbGrid.create(nx = [120,120],dx = [1,1])
    gl.simtub(None,db,model,nbtuba=1000)
    return db,model

def coarsifyF(db,nc):
    dbc = db.coarsify(nmult=[nc,nc])
    gl.dbStatisticsOnGrid(db,dbc,gl.EStatOption.MEAN)
    return dbc

def computeVar(nc,sto):
    sto.dbc = coarsifyF(sto.db,nc)
    sill,ndisc,dx,dy = params(nc)
    deltaVarTheo = sill - sto.model.evalCvv(sto.dbc.getDXs(),ndisc=[ndisc,ndisc])
    deltaVarEmp = np.var(sto.db["Simu"])-np.var(sto.dbc["Stats*"])
    return deltaVarTheo,deltaVarEmp

def plotVar(nc,sto):
    ax[0].cla()
    ax[1].cla()
    
    sill,ndisc,dx,dy = params(nc)
    db  = sto.db
    dbc = sto.dbc
    
    ax[0].raster(db,"*Simu")
    ax[0].axes.axis("off")
    ax[0].decoration(title="Z(x)")
    ax[0].cell(dbc, color='black', linewidth=0.5)
    
    ax[1].raster(dbc,"Stats*")
    ax[1].axes.axis("off")
    ax[1].decoration(title="Z(v)")
    ax[1].cell(dbc, color='black', linewidth=0.5)

def divisorGenerator(n):
    large_divisors = []
    for i in range(1, int(math.sqrt(n) + 1)):
        if n % i == 0:
            yield i
            if i*i != n:
                large_divisors.append(n / i)
    for divisor in reversed(large_divisors):
        yield int(divisor)

class store:
    """Holds the fine-grid simulation, the model and the coarsified grid."""
    def __init__(self,rangev,nc):
        self.db,self.model = simulate(rangev,nc)
        self.dbc = coarsifyF(self.db,nc)

Calculate the block variance¶

This paragraph is concerned with the variance reduction when calculated on different block supports. We start by simulating a random variable on a fine grid, which will be considered as the point support. This simulation is performed with a given covariance model.

Starting from this point-support grid, we define a new grid of coarse cells (whose sides are exact multiples of the point-node spacing). The value of each cell is the average of the values of the nodes that belong to it. We therefore obtain the initial random variable upscaled to the cell support.
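The block-averaging step can be sketched in plain numpy (a stand-alone illustration, independent of gstlearn's coarsify; the random grid here is just a stand-in for the simulation):

```python
import numpy as np

# Stand-alone sketch (plain numpy, independent of gstlearn's coarsify):
# upscale a 120 x 120 point-support grid to coarse cells of nc x nc nodes
# by averaging the node values falling inside each cell.
rng = np.random.default_rng(0)
fine = rng.standard_normal((120, 120))  # stand-in for the simulated point grid

nc = 10  # nodes per coarse-cell side (must divide 120)
coarse = fine.reshape(120 // nc, nc, 120 // nc, nc).mean(axis=(1, 3))

print(coarse.shape)                # (12, 12)
# Averaging over a larger support reduces the variance:
print(coarse.var() < fine.var())
```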

Then we compute the (dispersion) variance of the variable over the cells in two different ways:

  • experimentally, from the values per cell
  • theoretically, from the model after a change of support (the next paragraph gives the formulae for these calculations)
In [3]:
Markdown(gdoc.loadDoc("Cvv.md"))
Out[3]:

Covariance over block¶

To compute

$$\bar{C}(v,v) = \frac{1}{|v|^2} \int_v \int_v C(x,y)\,dx\,dy$$

you need to define the block $v$ and the model.

The support $v$ must be defined by its extension along each space dimension.

We also need to define the discretization $[N_1, N_2]$, since $\bar{C}(v,v)$ is approximated by

$$\frac{1}{N_1 N_2} \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} C(x_i, x_j)$$

where $x_i$ and $x_j$ are points inside the block $v$.
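As a cross-check, the discretized double sum can be sketched in plain numpy. This is a hypothetical helper, not the gstlearn API; it assumes the exponential covariance written with gstlearn's practical-range convention, $C(h) = \mathrm{sill}\cdot e^{-3h/a}$, and averages $C$ over all pairs of regular discretization points:

```python
import numpy as np

# Minimal sketch of the discretized C(v,v) (plain numpy, not the gstlearn
# API). Assumes an exponential covariance in gstlearn's practical-range
# convention: C(h) = sill * exp(-3 h / range).
def cvv_discretized(v, ndisc, range_, sill=1.0):
    # regular discretization points centred inside the block v
    xs = (np.arange(ndisc[0]) + 0.5) * v[0] / ndisc[0]
    ys = (np.arange(ndisc[1]) + 0.5) * v[1] / ndisc[1]
    gx, gy = np.meshgrid(xs, ys)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    # pairwise distances between all N1*N2 discretization points
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return sill * np.exp(-3.0 * h / range_).mean()

print(round(cvv_discretized([5, 5], [20, 20], range_=10), 2))
```

The value should be of the same order as the `evalCvv` result below; a larger range yields a larger block covariance.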

In [4]:
# The Model
model = gl.Model.createFromParam(gl.ECov.EXPONENTIAL,range = 10)

# The target block
v = [5,5]

# The discretization
ndisc = [20,20] 

Then, we just have to compute

In [5]:
Cvv = model.evalCvv(v,ndisc)
print("Cvv = " + str(round(Cvv,2)))
Cvv = 0.49

To compute $\bar{\gamma}(v,v) = C(0) - \bar{C}(v,v)$

We can simply do

In [6]:
calc = gl.CovCalcMode()
calc.setAsVario(asVario = True)
gammavv = model.evalCvv(v,ndisc,mode =calc)
print("Gammavv = " + str(round(gammavv,2)))
Gammavv = 0.51

Variance reduction¶

In [7]:
Markdown(gdoc.loadDoc("Variance_Reduction.md"))
Out[7]:

Variance reduction with support¶

Here we compare the empirical variance reduction with the one given by the model.

The empirical point-support variance is

$$\hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} z^2(x_i) - \left(\frac{1}{N} \sum_{i=1}^{N} z(x_i)\right)^2$$

The empirical block variance is

$$\hat{\sigma}_v^2 = \frac{1}{N_v} \sum_{i=1}^{N_v} z^2(v_i) - \left(\frac{1}{N_v} \sum_{i=1}^{N_v} z(v_i)\right)^2$$

The empirical variance reduction is

$$\mathrm{Empirical} = \hat{\sigma}^2 - \hat{\sigma}_v^2$$

The variance reduction computed by the model is

$$\mathrm{Theoretical} = \bar{\gamma}(v,v) = C(0) - \bar{C}(v,v)$$
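These two quantities can be compared in a hypothetical pure-numpy experiment for the simplest case of an uncorrelated (pure nugget) variable, where the model gives $\bar{\gamma}(v,v) = C(0)\,(1 - 1/N)$ when each block averages $N$ points:

```python
import numpy as np

# Hypothetical check (plain numpy): for an uncorrelated variable,
# gamma_bar(v,v) = C(0) * (1 - 1/N) when each block averages N points,
# so Empirical and Theoretical should nearly coincide.
rng = np.random.default_rng(42)
fine = rng.standard_normal((120, 120))       # pure-nugget "simulation"

nc = 10                                      # nodes per coarse-cell side
blocks = fine.reshape(12, nc, 12, nc).mean(axis=(1, 3))

empirical = fine.var() - blocks.var()        # sigma^2 - sigma_v^2
theoretical = fine.var() * (1 - 1 / nc**2)   # C(0) * (1 - 1/N), N = nc*nc

print(round(empirical, 3), round(theoretical, 3))
```

With a correlated model, as in the interactive cell below, the two values still agree up to statistical fluctuations, but the theoretical term must come from $\bar{\gamma}(v,v)$.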

In [8]:
nc = 10
rangev = 20
sto = store(rangev,nc)

vals = list(divisorGenerator(120))
nci = np.where(np.array(vals)==nc)[0][0]

def updateVar(nc,sto):
    a1,a2 = computeVar(nc,sto)
    varTheoW.value = str(round(a1,2))
    varEmpW.value =  str(round(a2,2))
    
def sliderRangeEventhandler(change):
    sto.db,sto.model = simulate(change.new,sliderNc.value)
    updateVar(vals[sliderNc.value],sto)
    plotVar(vals[sliderNc.value],sto)
    
def sliderNcEventhandler(change):
    nc = vals[change.new]
    updateVar(nc,sto)
    plotVar(nc,sto)
    
sliderRange = gw.sliderFloat(title='Range',mini=1,maxi=50,value=rangev,
                             eventhandler=sliderRangeEventhandler)
sliderNc    = gw.sliderInt(title='Coarsify',mini=1,maxi=len(vals)-2,value=nci,
                           eventhandler=sliderNcEventhandler)
varEmpW     = gw.text(title='Empirical')
varTheoW    = gw.text(title='Theoretical')

display(sliderRange)
display(widgets.HBox([sliderNc]))
display(widgets.HBox([varEmpW, varTheoW]))

updateVar(nc,sto)
plotVar(nc,sto)