# SemiEstimate Examples

```r
library(SemiEstimate)
```

# Introduction to Implicit Profiling

## Preliminaries

Assume we have a convex objective function $$\mathcal{L}(\theta, \lambda)$$, where $$\theta$$ is the parametric component with fixed dimension and $$\lambda$$ is the parameter of a finite-dimensional approximation to the nonparametric component, whose dimension may grow with the sample size. Further assume that $$\theta$$ and $$\lambda$$ are bundled in this objective function, i.e., $$\theta$$ and $$\lambda$$ cannot be clearly separated. Typical examples include the semiparametric transformation model and the semiparametric GARCH-in-mean model, which are discussed in detail in Sections 4 and 5. It is worth noting that $$\lambda$$ can also be a smooth function with infinite dimension, in which case one can apply semiparametric methods to estimate $$\lambda$$.

To estimate $$\theta$$ and $$\lambda$$, let $$\Psi(\theta, \lambda)$$ and $$\Phi(\theta, \lambda)$$ denote the estimating equations with respect to $$\theta$$ and $$\lambda$$, respectively. Specifically, $$\Psi(\theta, \lambda)$$ is the derivative of $$\mathcal{L}(\theta, \lambda)$$ with respect to $$\theta$$. When $$\lambda$$ is a nuisance parameter, $$\Phi(\theta, \lambda)$$ is the derivative of $$\mathcal{L}(\theta, \lambda)$$ with respect to $$\lambda$$. When $$\lambda$$ is a smooth function, $$\Phi(\theta, \lambda)$$ denotes the semiparametric estimation formula, such as a kernel smoothing method or a spline function. Then, to estimate $$\theta$$ and $$\lambda$$, we need to solve: $\Psi(\theta,\lambda) = \mathbf{0}$ $\Phi(\theta, \lambda) = \mathbf{0}.$

Let $$\mathbf{G}(\theta, \lambda) = (\Psi(\theta, \lambda)^{\top}, \Phi(\theta, \lambda)^{\top})^{\top}$$. The entire updating algorithm (e.g., the Newton-Raphson method) solves $$\mathbf{G}(\theta, \lambda)=\mathbf{0}$$. Let $$\beta = (\theta^{\top}, \lambda^{\top})^{\top}$$. Then the updating formula is $\beta^{(k+1)} = \beta^{(k)} - \left(\partial \mathbf{G}(\beta^{(k)})/\partial \beta\right)^{-1}\mathbf{G}(\beta^{(k)})$. However, due to the bundled relationship of $$\theta$$ and $$\lambda$$, the two parameters are often difficult to separate, which results in a dense Hessian matrix $$\partial \mathbf{G}/\partial \beta$$. Consequently, computing the inverse of this Hessian matrix often incurs a high computational cost, which makes the entire updating algorithm very inefficient.
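As a concrete illustration of the entire updating scheme, the sketch below runs a joint Newton-Raphson update on a toy quadratic loss $$\mathcal{L}(\theta, \lambda) = \theta^2 + \alpha\theta\lambda + \lambda^2$$; the loss, the estimating equations `Psi`/`Phi`, and the starting value are illustrative assumptions, not part of the package:

```r
# Toy quadratic loss: L(theta, lambda) = theta^2 + alpha*theta*lambda + lambda^2
alpha <- 1
Psi <- function(theta, lambda) 2 * theta + alpha * lambda  # dL/dtheta
Phi <- function(theta, lambda) alpha * theta + 2 * lambda  # dL/dlambda

G <- function(beta) c(Psi(beta[1], beta[2]), Phi(beta[1], beta[2]))
# Dense joint Hessian dG/dbeta: every block is non-zero because theta and
# lambda are bundled in the loss
H <- matrix(c(2,     alpha,
              alpha, 2), nrow = 2, byrow = TRUE)

beta <- c(5, -3)  # initial value beta^(0) = (theta^(0), lambda^(0))
for (k in 1:50) {
  beta <- beta - solve(H, G(beta))  # beta^(k+1) = beta^(k) - H^{-1} G(beta^(k))
  if (max(abs(G(beta))) < 1e-8) break
}
beta  # reaches the minimiser (0, 0) in a single step on a quadratic loss
```

The joint Newton step is exact for a quadratic loss, but in realistic semiparametric models $$\lambda$$ is high-dimensional, and inverting the dense joint Hessian at every step is what makes this scheme expensive.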

To solve this problem, many recursive updating methods have been applied. Basically, the recursive method breaks the connection between $$\theta$$ and $$\lambda$$ in the Hessian matrix. In other words, it considers the second-order derivative of the objective function with respect to each parameter separately. Specifically, the updating formulas for the recursive method are given below: $\lambda^{(k+1)} = \lambda^{(k)} - \left(\frac{\partial \Phi(\theta^{(k)}, \lambda^{(k)})}{ \partial \lambda}\right)^{-1}\Phi(\theta^{(k)}, \lambda^{(k)})$ $\theta^{(k+1)} = \theta^{(k)} - \left(\frac{\partial \Psi(\theta^{(k)}, \lambda^{(k+1)})}{ \partial \theta}\right)^{-1}\Psi(\theta^{(k)}, \lambda^{(k+1)}).$
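On a hypothetical quadratic loss $$\mathcal{L}(\theta, \lambda) = \theta^2 + \alpha\theta\lambda + \lambda^2$$ (an illustrative assumption, not package code), the recursive scheme above can be sketched as:

```r
# Toy quadratic loss: L = theta^2 + alpha*theta*lambda + lambda^2
alpha <- 1
Psi <- function(theta, lambda) 2 * theta + alpha * lambda  # dL/dtheta
Phi <- function(theta, lambda) alpha * theta + 2 * lambda  # dL/dlambda
dPhi_dlambda <- 2  # diagonal blocks only: the cross terms are discarded
dPsi_dtheta  <- 2

theta <- 5
lambda <- -3
for (iter in 1:1000) {
  # update lambda first, holding theta fixed ...
  lambda <- lambda - Phi(theta, lambda) / dPhi_dlambda
  # ... then theta, holding the new lambda fixed
  theta <- theta - Psi(theta, lambda) / dPsi_dtheta
  if (max(abs(c(Psi(theta, lambda), Phi(theta, lambda)))) < 1e-8) break
}
c(theta = theta, lambda = lambda, iterations = iter)
```

Because the off-diagonal blocks are dropped, each step is cheap, but on this example the scheme only converges linearly and takes many more iterations than the single exact joint Newton step.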

Although the update for $$\lambda$$ is still a sub-problem with growing dimension, the sub-problem Hessian $$\partial \Phi/\partial \lambda$$ is often sparse by the design of the finite-dimensional approximation of the nonparametric component: different elements of $$\lambda$$ usually correspond to values of the nonparametric component at different locations. Consequently, inverting $$\partial \Phi/\partial \lambda$$ can be much faster than inverting $$\partial \mathbf{G}/\partial \beta$$. However, the recursive updating formulas iterate over each parameter without considering the interaction with the other parameter. In other words, they approximate the true second-order derivative $$\partial \mathbf{G}/\partial \beta$$ by setting the off-diagonal blocks to zero. For example, when updating $$\theta$$, the derivative $$\partial \Psi/\partial \theta$$ treats $$\lambda$$ as a constant and ignores how $$\lambda$$ responds to changes in $$\theta$$. When $$\theta$$ and $$\lambda$$ are strongly correlated with each other, the recursive method therefore loses information and results in sub-optimal update directions.

## Implicit Profiling Algorithm

Both the entire updating method and the recursive method are computationally inefficient. To address this issue, we propose an implicit profiling (IP) method. It is notable that $$\theta$$ is the only parameter of interest; therefore, we focus on the efficient estimation of $$\theta$$. Recall that, in the recursive method, the updating formula for $$\theta$$ loses information by treating $$\lambda$$ as a constant, which makes the estimation of $$\theta$$ inefficient. In the implicit profiling method, we instead regard $$\lambda$$ as a function of $$\theta$$, denoted by $$\lambda(\theta)$$. Then, the second-order derivative of the objective function with respect to $$\theta$$ can be derived as follows:

$\frac{d \Psi(\theta, \lambda(\theta))}{d \theta} = \frac{\partial \Psi(\theta, \lambda(\theta))}{\partial \theta} + \frac{\partial \Psi(\theta, \lambda(\theta))}{\partial\lambda}\frac{\partial\lambda(\theta)}{\partial\theta}$

where the derivative relationship $$\partial\lambda/\partial\theta$$ can be obtained by differentiating the identity $$\Phi(\theta, \lambda(\theta)) = \mathbf{0}$$ with respect to $$\theta$$. We refer to this total derivative as the implicit profiling Hessian matrix of $$\theta$$, which determines the updating direction of $$\theta$$ in the implicit profiling method. Based on it, we can update $$\theta$$ and $$\lambda$$ iteratively using the following updating formulas:

$\lambda^{(k+1)} = \lambda^{(k)} - \left(\frac{\partial \Phi(\theta^{(k)}, \lambda^{(k)})}{\partial \lambda}\right)^{-1}\Phi(\theta^{(k)}, \lambda^{(k)})$ $\theta^{(k+1)} = \theta^{(k)} -\left( \frac{\partial \Psi(\theta^{(k)}, \lambda^{(k+1)})}{\partial \theta} + \frac{\partial \Psi(\theta^{(k)}, \lambda^{(k+1)})}{\partial \lambda}\frac{\partial\lambda^{(k+1)}}{\partial\theta}\right)^{-1}\Psi(\theta^{(k)}, \lambda^{(k+1)})$

The complete algorithm of the implicit profiling method is presented below.

**The Implicit Profiling Algorithm**

1. Initialize $$\theta^{(0)}$$;
2. Solve $$\lambda^{(0)}$$ from the equation $$\Phi(\theta^{(0)}, \lambda^{(0)}) = \mathbf{0}$$;
3. Repeat until convergence:
• Update $$\lambda$$ from $\lambda^{(k+1)} = \lambda^{(k)} - \left(\frac{\partial \Phi(\theta^{(k)}, \lambda^{(k)})}{\partial \lambda}\right)^{-1}\Phi(\theta^{(k)}, \lambda^{(k)});$
• Solve the implicit gradient $$\mathbf{d}^{(k+1)} = \partial\lambda^{(k+1)}(\theta^{(k)})/\partial\theta$$ from $\frac{d\Phi(\theta^{(k)},\lambda^{(k+1)})}{d\theta} = \frac{\partial\Phi(\theta^{(k)}, \lambda^{(k+1)})}{\partial \theta} + \frac{\partial \Phi(\theta^{(k)}, \lambda^{(k+1)})}{\partial\lambda}\mathbf{d}^{(k+1)} = \mathbf{0};$
• Compute the implicit profiling Hessian: $\mathbb{H}^{(k+1)} = \frac{\partial \Psi(\theta^{(k)}, \lambda^{(k+1)})}{\partial \theta} + \frac{\partial \Psi(\theta^{(k)}, \lambda^{(k+1)})}{\partial \lambda}\mathbf{d}^{(k+1)};$
• Update $$\theta$$ from $\theta^{(k+1)} = \theta^{(k)} - (\mathbb{H}^{(k+1)})^{-1}\Psi(\theta^{(k)}, \lambda^{(k+1)}).$

For the initialization of $$\lambda$$, it is recommended to solve the equation $$\Phi(\theta^{(0)}, \lambda^{(0)}) = \mathbf{0}$$, which helps improve the convergence speed. However, this calculation can itself be computationally expensive; when its cost is not acceptable, one can instead choose a random initial value. In each iteration, we first update $$\lambda$$, which yields $$\lambda^{(k+1)}$$. Then, we calculate the implicit profiling Hessian matrix of $$\theta$$ using the newly updated $$\lambda^{(k+1)}$$ and obtain the updated value $$\theta^{(k+1)}$$. The iteration is repeated until convergence, which yields the final estimates of $$\theta$$ and $$\lambda$$.
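The algorithm above can be sketched step by step on a hypothetical quadratic loss $$\mathcal{L}(\theta, \lambda) = \theta^2 + \alpha\theta\lambda + \lambda^2$$ (the functions and starting value are illustrative assumptions, not package code); in this scalar case every Jacobian block is a number, so the linear solves reduce to divisions:

```r
# Toy quadratic loss: L = theta^2 + alpha*theta*lambda + lambda^2
alpha <- 1
Psi <- function(theta, lambda) 2 * theta + alpha * lambda  # dL/dtheta
Phi <- function(theta, lambda) alpha * theta + 2 * lambda  # dL/dlambda
Phi_t <- alpha; Phi_l <- 2  # dPhi/dtheta, dPhi/dlambda
Psi_t <- 2; Psi_l <- alpha  # dPsi/dtheta, dPsi/dlambda

theta <- 5                    # step 1: initialize theta^(0)
lambda <- -alpha * theta / 2  # step 2: solve Phi(theta^(0), lambda^(0)) = 0
for (k in 1:100) {
  lambda <- lambda - Phi(theta, lambda) / Phi_l  # update lambda
  d <- -Phi_t / Phi_l                            # implicit gradient d lambda / d theta
  H_ip <- Psi_t + Psi_l * d                      # implicit profiling Hessian
  theta <- theta - Psi(theta, lambda) / H_ip     # update theta
  if (max(abs(c(Psi(theta, lambda), Phi(theta, lambda)))) < 1e-10) break
}
c(theta = theta, lambda = lambda, iterations = k)
```

Consistent with Proposition 2 below, the loop exits after the second iteration on this quadratic problem, while still only inverting the small sub-problem blocks.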

It is notable that the implicit profiling method accounts for the interaction between $$\theta$$ and $$\lambda$$ by treating $$\lambda$$ as a function of $$\theta$$. Consequently, the resulting estimator of $$\theta$$ is equal to the estimate obtained by the entire updating method. We summarize this finding in the following two propositions.

**Proposition 1.** Assume the objective function $$\mathcal{L}$$ is strictly convex. Then the convergent point of the implicit profiling method and that of the Newton-Raphson method are identical.

**Proposition 2.** For any local quadratic problem $$Q$$, the implicit profiling method reaches its minimum within two steps.

The detailed proofs of the two propositions are given in Appendices A.1 and A.2, respectively. Propositions 1 and 2 establish that the implicit profiling method shares the theoretical properties of the Newton-Raphson method. By Proposition 1, implicit profiling only converges at the minimum of the convex loss. By Proposition 2, convergence is guaranteed whenever the Newton-Raphson method converges, and the number of iterations taken before convergence is comparable to that of the Newton-Raphson method. Later, in the experiments, we found that this smaller number of iterations is the driving factor behind the implicit profiling method's run-time advantage over other iterative methods. In many cases, a single iteration of implicit profiling can also be faster than one of the Newton-Raphson method, when the dependence structure of $$\theta$$, $$\lambda$$, and the loss function enables implicit profiling to simplify the full Hessian calculation and the global search of the Newton-Raphson method. Together with the control on the number of iterations, the implicit profiling method is computationally more efficient than the Newton-Raphson method.

# Toy Example

The following simulation compares the average number of iterative steps consumed by the Newton-Raphson method, the naive iteration method, and the implicit profiling method.

## Simulation

```r
j <- 1
step_all <- list()
series_all <- list()
direction_all <- list()
# Newton(), Phi_fn, Psi_fn, and get_fit_from_raw() are assumed to be
# defined elsewhere in the vignette
for (k in c(0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8)) {
  step <- list()
  series <- list()
  direction <- list()
  n_points <- 10
  theta <- seq(0, 2 * base::pi, length.out = n_points)
  alpha <- k
  for (i in 1:10) {
    # starting points placed on an ellipse around the origin
    C <- i^2
    x <- (sqrt(C * (1 - alpha / 2)) * cos(theta) + sqrt(C * (1 + alpha / 2)) * sin(theta)) / sqrt(2 - alpha^2 / 2)
    y <- (sqrt(C * (1 - alpha / 2)) * cos(theta) - sqrt(C * (1 + alpha / 2)) * sin(theta)) / sqrt(2 - alpha^2 / 2)
    sub_step <- matrix(nrow = 3, ncol = n_points)
    sub_series <- list()
    k1 <- list()
    k2 <- list()
    k3 <- list()
    sub_direction <- list()
    for (ii in 1:n_points) {
      beta0 <- c(x[ii], y[ii])
      Newton_fit <- Newton(beta0, alpha)
      jac <- list(
        Phi_der_theta_fn = function(theta, lambda, alpha) 2,
        Phi_der_lambda_fn = function(theta, lambda, alpha) alpha,
        Psi_der_theta_fn = function(theta, lambda, alpha) alpha,
        Psi_der_lambda_fn = function(theta, lambda, alpha) 2
      )
      Ip_raw_fit <- semislv(
        theta = beta0[1], lambda = beta0[2], Phi_fn, Psi_fn, jac = jac,
        method = "implicit", alpha = alpha,
        control = list(max_iter = 100, tol = 1e-7)
      )
      Ip_fit <- get_fit_from_raw(Ip_raw_fit)
      It_raw_fit <- semislv(
        theta = beta0[1], lambda = beta0[2], Phi_fn, Psi_fn, jac = jac,
        method = "iterative", alpha = alpha,
        control = list(max_iter = 100, tol = 1e-7)
      )
      It_fit <- get_fit_from_raw(It_raw_fit)
      sub_step[, ii] <- c(Newton_fit$step, It_fit$step, Ip_fit$step)
      k1[[ii]] <- Newton_fit$series
      k2[[ii]] <- It_fit$series
      k3[[ii]] <- Ip_fit$series
      sub_direction[[ii]] <- It_fit$direction
    }
    step[[i]] <- sub_step
    sub_series[["Newton"]] <- k1
    sub_series[["It"]] <- k2
    sub_series[["Ip"]] <- k3
    series[[i]] <- sub_series
    direction[[i]] <- sub_direction
  }
  step_all[[j]] <- step
  series_all[[j]] <- series
  direction_all[[j]] <- direction
  j <- j + 1
}
```