Recall we worked with a dataset on women aged 15-49 sampled from the 2013-14 Democratic Republic of Congo (DRC) Demographic and Health Survey. Variables are:
loc_id: location id (i.e. survey cluster).
hemoglobin: hemoglobin level (g/dL).
anemia: anemia classifications.
age: age in years.
urban: urban vs. rural.
LATNUM: latitude.
LONGNUM: longitude.
Motivating dataset
| loc_id | hemoglobin | anemia     | age | urban | LATNUM   | LONGNUM  |
|--------|------------|------------|-----|-------|----------|----------|
| 1      | 12.5       | not anemic | 28  | rural | 0.220128 | 21.79508 |
| 1      | 12.6       | not anemic | 42  | rural | 0.220128 | 21.79508 |
| 1      | 13.3       | not anemic | 15  | rural | 0.220128 | 21.79508 |
| 1      | 12.9       | not anemic | 28  | rural | 0.220128 | 21.79508 |
| 1      | 10.4       | mild       | 32  | rural | 0.220128 | 21.79508 |
| 1      | 12.2       | not anemic | 42  | rural | 0.220128 | 21.79508 |
Modeling goals:
Learn the associations of age and urbanicity with hemoglobin, accounting for unmeasured spatial confounders.
Create a predicted map of hemoglobin across the spatial surface controlling for age and urbanicity, with uncertainty quantification.
Map of Sud-Kivu province
Last time, we focused on one province with ~500 observations at ~30 locations.
Map of the DRC
Today we will extend the analysis to the full dataset with ~8,600 observations at ~500 locations.
\(i \in \{1,\dots,n\}\) indexes locations and \(j \in \{1,\dots,n_i\}\) indexes individuals at location \(\mathbf{u}_i\).
\(Y_j(\mathbf{u}_i)\) denotes the observation of individual \(j\) at location \(\mathbf{u}_i\).
\(\mathbf{x}_j(\mathbf{u}_i) = (\text{age}_{ij}/10,\text{urban}_i) \in \mathbb{R}^{1 \times p}\), where \(p=2\) is the number of predictors (excluding intercept).
\(\mathbf{Z}\) is an \(N \times n\) dimensional block diagonal binary matrix. Each row contains a single 1 in column \(i\) that corresponds to the location of \(Y_j(\mathbf{u}_i)\). \[
\begin{align}
\mathbf{Z} = \begin{bmatrix}
\mathbf{1}_{n_1} & \mathbf{0} & \dots & \mathbf{0} \\
\mathbf{0} & \mathbf{1}_{n_2} & \dots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \dots & \mathbf{0} & \mathbf{1}_{n_n}
\end{bmatrix}.
\end{align}
\]
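As a concrete illustration, here is a minimal NumPy sketch that builds \(\mathbf{Z}\) from a toy `loc_id` vector; the ids below are made up for illustration (\(N = 6\) individuals at \(n = 3\) locations), not taken from the dataset.

```python
import numpy as np

# Illustrative loc_id vector: N = 6 individuals at n = 3 locations,
# mimicking the loc_id column of the motivating dataset.
loc_id = np.array([1, 1, 2, 2, 2, 3])

# Map each individual's location id to a column index 0..n-1.
locs, col = np.unique(loc_id, return_inverse=True)
N, n = len(loc_id), len(locs)

# Z is N x n binary: row j has a single 1 in its location's column.
Z = np.zeros((N, n))
Z[np.arange(N), col] = 1.0
print(Z)
```

Because the toy data are sorted by location, the printed matrix shows the block-of-ones structure displayed above.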
Modeling
We specify the following model: \[\mathbf{Y} = \alpha \mathbf{1}_{N} + \mathbf{X} \boldsymbol{\beta} + \mathbf{Z}\boldsymbol{\theta} + \boldsymbol{\epsilon}, \quad \boldsymbol{\epsilon} \sim N_N(\mathbf{0},\sigma^2\mathbf{I})\] with priors
\(\boldsymbol{\theta}(\mathbf{u}) | \tau,\rho \sim GP(\mathbf{0},C(\cdot,\cdot))\), where \(C\) is the Matérn 3/2 covariance function with magnitude \(\tau\) and length scale \(\rho\)
\(\alpha^* \sim N(0,4^2)\). This is the intercept after centering \(\mathbf{X}\).
This is not scalable because we need to invert an \(n \times n\) dense covariance matrix for each MCMC iteration, which requires \(\mathcal{O}(n^3)\) floating point operations (flops), and \(\mathcal{O}(n^2)\) memory.
Scalable GP methods overview
The computational issues motivated the exploration of scalable GP methods. Existing scalable methods broadly fall into two categories.
Sparsity methods
sparsity in \(\mathbf{C}\), e.g., covariance tapering (@furrer2006covariance).
sparsity in \(\mathbf{C}^{-1}\), e.g., Vecchia approximation (@vecchia1988estimation) and nearest-neighbor GP (@datta2016hierarchical).
Low-rank methods
approximate \(\mathbf{C}\) on a low-dimensional subspace.
e.g., process convolution (@higdon2002space) and the inducing point method (@snelson2005sparse).
Hilbert space method for GP
@solin2020hilbert introduced a Hilbert space method for reduced-rank Gaussian process regression (HSGP).
@riutort2023practical discussed how to practically implement HSGP.
Tutorial code is available in several probabilistic programming languages.
Today:
How to use HSGP for Bayesian geospatial model fitting and posterior predictive sampling
Thursday:
Parameter tuning for HSGP
How to implement HSGP in Stan
HSGP approximation
Given:
an isotropic covariance function \(C\) which admits a power spectral density, e.g., the Matérn family, and
a compact domain \(\boldsymbol{\Theta} \subset \mathbb{R}^d\) with smooth boundaries. For our purposes, we only consider boxes, e.g., \([-1,1] \times [-1,1]\).
HSGP approximates the \((i,j)\) element of the corresponding \(n \times n\) covariance matrix \(\mathbf{C}\) as \[\mathbf{C}_{ij}=C(\|\mathbf{u}_i - \mathbf{u}_j\|) \approx \sum_{k=1}^m s_k\phi_k(\mathbf{u}_i)\phi_k(\mathbf{u}_j).\]
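To make the approximation concrete, here is a minimal NumPy sketch in \(d = 1\) comparing the HSGP covariance to the exact Matérn 3/2 covariance, using the Dirichlet-Laplacian eigenfunctions and power spectral density from @solin2020hilbert. All parameter values (\(\tau\), \(\rho\), box half-width, \(m\)) are illustrative choices, not values from the lecture.

```python
import numpy as np

# Minimal d = 1 sketch of the HSGP approximation on the box [-L, L].
# All parameter values are illustrative.
tau, rho = 1.0, 0.5        # magnitude and length scale
L, m = 3.0, 50             # box half-width and number of basis functions

def phi(u, k):
    # k-th eigenfunction of the Laplacian on [-L, L] (Dirichlet boundary).
    return np.sqrt(1 / L) * np.sin(np.pi * k * (u + L) / (2 * L))

def spd_matern32(w):
    # Power spectral density of the Matern 3/2 covariance in d = 1.
    a = np.sqrt(3) / rho
    return tau**2 * 4 * a**3 / (a**2 + w**2) ** 2

k = np.arange(1, m + 1)
s = spd_matern32(np.pi * k / (2 * L))   # s_k = S(sqrt(lambda_k))

u = np.linspace(-1, 1, 9)               # locations well inside the box
Phi = phi(u[:, None], k[None, :])       # n x m basis matrix
C_hs = (Phi * s) @ Phi.T                # approximate covariance matrix

# Exact Matern 3/2 covariance for comparison.
r = np.abs(u[:, None] - u[None, :])
C_exact = tau**2 * (1 + np.sqrt(3) * r / rho) * np.exp(-np.sqrt(3) * r / rho)

print(np.max(np.abs(C_hs - C_exact)))   # small for interior points
```

The error stays small here because the locations sit well inside the box; accuracy near the boundary is discussed later in the slides.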
We want to make predictions for \(\mathbf{Y}^* = (Y(\mathbf{u}_{n+1}),\ldots, Y(\mathbf{u}_{n+q}))^\top\), observations at \(q\) new locations. Define \(\boldsymbol{\theta}^* = (\theta(\mathbf{u}_{n+1}),\ldots,\theta(\mathbf{u}_{n+q}))^\top\), \(\boldsymbol{\Omega} = (\alpha,\boldsymbol{\beta},\sigma,\tau,\rho)\). Recall: \[\boldsymbol{\theta}^* \mid (\boldsymbol{\theta},\boldsymbol{\Omega}) \sim N_q\left(\mathbf{C}_{+}^\top \mathbf{C}^{-1}\boldsymbol{\theta},\; \mathbf{C}^* - \mathbf{C}_{+}^\top \mathbf{C}^{-1}\mathbf{C}_{+}\right),\]
where \(\mathbf{C}\) is the covariance matrix of \(\boldsymbol{\theta}\), \(\mathbf{C}^*\) is the covariance matrix of \(\boldsymbol{\theta}^*\), and \(\mathbf{C}_{+}\) is the cross-covariance matrix between \(\boldsymbol{\theta}\) and \(\boldsymbol{\theta}^*\).
Show that if \(\boldsymbol{\Phi}\) has full column rank, which is true under HSGP, then for any matrix \(\mathbf{A}\) of proper dimension, \[\mathbf{S} \boldsymbol{\Phi}^\top(\boldsymbol{\Phi}\mathbf{S}\boldsymbol{\Phi}^\top)^{\dagger}\boldsymbol{\Phi}\mathbf{A} = \mathbf{A}. \tag{1}\] Therefore the HSGP kriging variance \(\mathbb{V}_{\boldsymbol{\theta}^*}^{HS} \equiv \mathbf{0}\).
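A quick numerical sanity check of identity (1) with random matrices; the dimensions and distributions below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of identity (1): if Phi (n x m) has full column rank
# and S is a positive definite diagonal matrix, then
#   S Phi^T (Phi S Phi^T)^+ Phi A = A.
rng = np.random.default_rng(0)
n, m = 40, 10
Phi = rng.standard_normal((n, m))        # full column rank w.p. 1
S = np.diag(rng.uniform(0.5, 2.0, m))    # positive definite diagonal
A = rng.standard_normal((m, 3))

M = S @ Phi.T @ np.linalg.pinv(Phi @ S @ Phi.T) @ Phi
print(np.max(np.abs(M @ A - A)))         # numerically zero: M = I_m
```

The check also reveals why the identity holds: the matrix \(\mathbf{M} = \mathbf{S}\boldsymbol{\Phi}^\top(\boldsymbol{\Phi}\mathbf{S}\boldsymbol{\Phi}^\top)^{\dagger}\boldsymbol{\Phi}\) is exactly the \(m \times m\) identity under full column rank.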
Kriging under HSGP
Under the reparameterized model, \(\boldsymbol{\theta} = \boldsymbol{\Phi} \mathbf{S}^{1/2}\mathbf{b}\), for \(\mathbf{b} \sim N_m(0,\mathbf{I}).\) Therefore \[
\begin{align}
\boldsymbol{\theta}^* \mid (\boldsymbol{\theta},\boldsymbol{\Omega}) &= (\boldsymbol{\Phi}^*\mathbf{S}\boldsymbol{\Phi}^\top) (\boldsymbol{\Phi}\mathbf{S}\boldsymbol{\Phi}^\top)^{\dagger} \boldsymbol{\theta} \\
&= (\boldsymbol{\Phi}^*\mathbf{S}\boldsymbol{\Phi}^\top) (\boldsymbol{\Phi}\mathbf{S}\boldsymbol{\Phi}^\top)^{\dagger}(\boldsymbol{\Phi} \mathbf{S}^{1/2}\mathbf{b}) \\
&= \boldsymbol{\Phi}^*\mathbf{S}^{1/2}\mathbf{b}. \quad (\text{by equation (1) in the last slide})
\end{align}
\]
During MCMC sampling, we can obtain posterior predictive samples for \(\boldsymbol{\theta}^*\) through posterior samples of \(\mathbf{b}\) and \(\mathbf{S}\). Let superscript \((s)\) denote the \(s\)th posterior sample: \[\boldsymbol{\theta}^{*(s)} = \boldsymbol{\Phi}^*\left(\mathbf{S}^{(s)}\right)^{1/2}\mathbf{b}^{(s)}.\]
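As a sketch, the draw-by-draw computation can be vectorized in NumPy. Here `b_draws`, `s_draws`, and `Phi_star` are placeholder arrays standing in for actual MCMC output and the basis matrix at new locations; shapes and values are illustrative.

```python
import numpy as np

# Sketch of posterior predictive sampling for theta* from posterior
# draws of b and the diagonal of S. All arrays are placeholders.
rng = np.random.default_rng(1)
n_draws, m, q = 100, 20, 5

b_draws = rng.standard_normal((n_draws, m))      # b^(s), s = 1..n_draws
s_draws = rng.uniform(0.1, 1.0, (n_draws, m))    # diagonal of S^(s)
Phi_star = rng.standard_normal((q, m))           # Phi* at q new locations

# theta*^(s) = Phi* (S^(s))^{1/2} b^(s), vectorized over draws.
theta_star = (np.sqrt(s_draws) * b_draws) @ Phi_star.T
print(theta_star.shape)  # (100, 5): one row per posterior draw
```

Each row of `theta_star` is one posterior predictive draw of \(\boldsymbol{\theta}^*\); no matrix inversion is needed, which is what makes HSGP kriging cheap.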
Under the reparameterized model, there is another (perhaps more intuitive) way to recognize the kriging distribution under HSGP.
We model \(\boldsymbol{\theta} = \boldsymbol{\Phi} \mathbf{S}^{1/2}\mathbf{b}\), where \(\mathbf{b}\) is treated as the unknown parameter. Therefore for kriging: \[
\begin{align}
\boldsymbol{\theta}^* \mid (\boldsymbol{\theta},\boldsymbol{\Omega}) &= \boldsymbol{\Phi}^*\mathbf{S}^{1/2}\mathbf{b} \mid (\mathbf{b},\boldsymbol{\Omega}) \\
&=\boldsymbol{\Phi}^*\mathbf{S}^{1/2}\mathbf{b}.
\end{align}
\]
HSGP kriging in Stan
Kriging under HSGP can be easily implemented in Stan:
for any covariance function \(C\) which admits a power spectral density,
on a box \(\boldsymbol{\Theta} \subset \mathbb{R}^d\),
with \(m\) basis functions.
We have talked about:
why HSGP is scalable.
how to do posterior sampling and posterior predictive sampling in Stan.
HSGP parameters
@solin2020hilbert showed that the HSGP approximation can be made arbitrarily accurate as the box \(\boldsymbol{\Theta}\) and the number of basis functions \(m\) grow.
But how do we choose:
the size of the box \(\boldsymbol{\Theta}\)?
the number of basis functions \(m\)?
HSGP approximation box
Due to the design of HSGP, the approximation is less accurate near the boundaries of \(\boldsymbol{\Theta}\).
Suppose all the coordinates are centered. Let \[S_l = \max_i |\mathbf{u}_{il}|, \quad l=1,\dots,d, \quad i= 1, \dots, (n+q)\] such that \(\boldsymbol{\Theta}_S = \prod_{l=1}^d [-S_l,S_l]\) is the smallest box which contains all observed and prediction locations. We should at least ensure \(\boldsymbol{\Theta} \supset \boldsymbol{\Theta}_S\).
We want the box to be large enough to ensure good boundary accuracy. Letting \(c_l \ge 1\) be boundary factors, we consider \[\boldsymbol{\Theta} = \prod_{l=1}^d [-L_l,L_l], \quad L_l = c_l S_l.\]
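A minimal sketch of this construction, assuming centered coordinates stored in an array `U`; the coordinates and boundary factors below are illustrative.

```python
import numpy as np

# Sketch: build the HSGP box from centered coordinates. U stands in
# for the (n + q) x d matrix of observed and prediction locations.
rng = np.random.default_rng(2)
U = rng.uniform(-1, 1, (500, 2))     # centered coordinates, d = 2

S_half = np.max(np.abs(U), axis=0)   # S_l: half-widths of smallest box
c = np.array([1.5, 1.5])             # boundary factors c_l >= 1
L_half = c * S_half                  # L_l: box Theta = prod [-L_l, L_l]

print(L_half)                        # contains Theta_S since c_l >= 1
```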
HSGP approximation box
How much the approximation accuracy deteriorates towards the boundaries depends on the smoothness of the true surface.
E.g., the larger the length scale \(\rho\), the smoother the surface but the farther the boundary effect reaches into the interior, so a larger box is needed for the same level of boundary accuracy.
HSGP approximation box
The larger the box,
the more basis functions we need for the same level of overall accuracy,
hence higher run time.
HSGP basis functions
The total number of basis functions \(m = \prod_{l=1}^d m_l\), i.e., we need to decide on \(m_l\)’s, the number of basis functions for each dimension.
The higher the \(m\), the better the overall approximation accuracy, the higher the runtime.
\(m\) scales exponentially in \(d\), hence the HSGP computation complexity \(\mathcal{O}(mn+m)\) also scales exponentially in \(d\). Therefore HSGP is only recommended for \(d \le 3\), at most \(4\).
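In \(d > 1\), each multivariate basis function is a product of univariate Dirichlet-Laplacian eigenfunctions (following @solin2020hilbert), which is where the multiplicative growth \(m = \prod_l m_l\) comes from. A sketch in \(d = 2\), with illustrative box sizes, counts, and locations:

```python
import numpy as np
from itertools import product

# Sketch: tensor-product HSGP basis in d = 2, so m = m_1 * m_2.
L = (2.0, 2.0)                       # box half-widths per dimension
m_l = (10, 10)                       # basis functions per dimension

def phi_1d(u, k, Ld):
    # k-th univariate eigenfunction on [-Ld, Ld] (Dirichlet boundary).
    return np.sqrt(1 / Ld) * np.sin(np.pi * k * (u + Ld) / (2 * Ld))

U = np.array([[0.3, -0.5], [1.0, 0.2]])   # two example locations
combos = list(product(range(1, m_l[0] + 1), range(1, m_l[1] + 1)))

# n x m matrix of tensor-product basis functions.
Phi = np.stack([phi_1d(U[:, 0], k1, L[0]) * phi_1d(U[:, 1], k2, L[1])
                for k1, k2 in combos], axis=1)
print(Phi.shape)  # (2, 100)
```

With \(m_l = 10\) per dimension, \(d = 2\) already gives \(m = 100\) columns; the same choice in \(d = 4\) would give \(10{,}000\), illustrating the exponential growth in \(d\).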
Relationship between \(c\) and \(m\)
@riutort2023practical used simulations with a squared exponential covariance function to investigate the relationship between \(c\) and \(m\).
Notice that \(c\) needs to be above a certain minimum value to achieve a reasonable boundary accuracy.
Relationship between \(c\), \(m\) and \(\rho\)
To summarize:
The boundary factor \(c\) needs to be above a minimum value; otherwise the HSGP approximation is poor no matter how large \(m\) is.
As \(\rho\) increases, the surface is smoother but the boundary effect reaches farther into the interior, so \(c\) needs to increase to retain boundary accuracy.
As \(\rho\) decreases, the surface is less smooth, so \(m\) needs to increase to retain overall accuracy.
As \(c\) increases, \(m\) needs to increase to retain overall accuracy.
As \(m\) increases, run time increases. Hence we want to minimize \(m\) and \(c\) while maintaining a certain accuracy level.
And we do not know \(\rho\). So how do we choose \(c\) and \(m\)?
Prepare for next class
Work on HW 05, which is due Apr 8.
Complete the reading to prepare for Thursday's lecture.