rTensor2

library(rTensor)
library(rTensor2)
#>
#> Attaching package: 'rTensor2'
#> The following objects are masked from 'package:rTensor':
#>
#>     as.tensor, rand_tensor

Introduction

rTensor2 is a set of tools for manipulating 3D tensors of data. The package follows the work of Kernfeld et al. (2015) https://www.sciencedirect.com/science/article/pii/S0024379515004358. For most operations, the first step is to apply a discrete linear transform along mode 3. The rTensor2 package extends the rTensor package https://www.jstatsoft.org/article/download/v087i10/1270, which also provides 3D tensor tools but is based solely on the discrete Fourier transform. rTensor2 generalizes this by allowing tensor manipulation to be done using any one of six discrete transforms.

General principles:

• Results are transform dependent. In other words, multiplying two tensors together under different transforms will give you different products.

• Inverses are not unique. In other words, different transforms yield inverses that are scaled differently. When calculating $$\texttt{tmult}(\texttt{A},\texttt{tINV(A)},\texttt{"tform"})=\texttt{I}$$, the identity tensor will have zeros for all frontal slices except the first, and the first frontal slice will be an identity matrix times some scalar. The scalar depends on which transform was used as well as on the number of frontal slices.

• If you calculate an inverse tensor and then use it in a product, you must use the same transform that was used to compute the inverse. The same is true for the tensor transpose function.
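These principles can be illustrated with a short sketch using the package's own rand_tensor, tmult, and tINV (the specific tensor sizes are arbitrary):

```r
library(rTensor2)

A <- rand_tensor(modes = c(3, 3, 2))

# The inverse is tied to the transform it was computed under:
Ainv_dct <- tINV(A, "dct")
I_dct <- tmult(A, Ainv_dct, "dct")  # identity tensor: first frontal slice
                                    # is a scaled identity matrix

# Mixing transforms breaks this -- the product below is NOT an
# identity tensor, because the inverse was computed under the DCT:
# tmult(A, Ainv_dct, "dst")
```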

Tensor Multiplication

The tensor multiplication function ($$\texttt{tmult}$$) multiplies two tensors. Step one is to perform a discrete transform along mode 3. After transformation, the corresponding frontal slices are multiplied together, resulting in a new tensor.

Because the frontal slices are multiplied together, the second mode of the first tensor must match the first mode of the second tensor. As an example, we can multiply a $$3 \times 4 \times 2$$ tensor by a $$4 \times 5 \times 2$$ tensor. In addition, the mode 3 dimensions must also match. If you are using the discrete wavelet transform, there is an additional requirement: the mode 3 dimension must be of length $$2^n$$ where $$n>1$$; otherwise you must zero-pad the tensor along that dimension.
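One way to zero-pad mode 3 up to a power of two is sketched below, assuming the array-based constructor as.tensor exported by this package and direct access to the tensor's @data slot:

```r
library(rTensor2)

A <- rand_tensor(modes = c(3, 4, 3))  # mode 3 length 3: not a power of 2

# Zero-pad mode 3 up to the next power of 2 (here, 4)
padded <- array(0, dim = c(3, 4, 4))
padded[, , 1:3] <- A@data
Ap <- as.tensor(padded)               # now usable with the "dwt" transform
```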

A <- rand_tensor(modes=c(3,4,2))
B <- rand_tensor(modes=c(4,5,2))
tmult(x=A,y=B,"dct")
#> Numeric Tensor of 3 Modes
#> Modes:  3 5 2
#> Data:
#> , , 1
#>
#>            [,1]       [,2]       [,3]       [,4]        [,5]
#> [1,]  1.6133154 1.77932895  0.7206821 -0.1622463  1.36168591
#> [2,] -0.2629096 0.50051441 -2.5829045 -0.4363478  3.73739424
#> [3,]  2.2205579 0.03118077 -1.7549949 -0.9282181 -0.08587165
#>
#> , , 2
#>
#>            [,1]       [,2]      [,3]      [,4]       [,5]
#> [1,]  1.0398736 -0.7913833 0.6474074  1.128169 -1.2893098
#> [2,] -0.6216461 -2.4246046 1.2147955 -1.279326 -2.8270854
#> [3,] -2.1828371 -0.7887816 1.9263148 -1.420152 -0.9607526

Tensor Inverse

The tensor inverse function ($$\texttt{tINV}$$) performs a discrete transform down mode 3. Then each frontal slice is inverted. Because the slices must be invertible, the mode 1 and mode 2 dimensions must be equal.

A <-  rand_tensor(modes=c(3,3,2))
Ainv = tINV(A,"dct")
tmult(A,Ainv,"dct") # the result is an identity tensor
#> Numeric Tensor of 3 Modes
#> Modes:  3 3 2
#> Data:
#> , , 1
#>
#>               [,1]         [,2]          [,3]
#> [1,]  1.414214e+00 2.854992e-16 -1.275700e-16
#> [2,] -7.850462e-17 1.414214e+00  2.551400e-16
#> [3,]  5.887847e-17 3.025571e-16  1.414214e+00
#>
#> , , 2
#>
#>               [,1]          [,2]          [,3]
#> [1,]  0.000000e+00  4.814541e-17  2.943923e-17
#> [2,]  0.000000e+00  3.330669e-16  1.373831e-16
#> [3,] -2.158877e-16 -1.455479e-16 -1.110223e-16

Tensor Transpose

The tensor transpose function ($$\texttt{t_tpose}$$) performs a discrete transform down mode 3. Then each frontal slice is transposed.

A <-  rand_tensor(modes=c(3,5,2))
A
#> Numeric Tensor of 3 Modes
#> Modes:  3 5 2
#> Data:
#> , , 1
#>
#>             [,1]      [,2]      [,3]      [,4]       [,5]
#> [1,]  0.49048141 0.4477505 -1.033263  0.721805 -0.5188771
#> [2,]  0.02051673 0.7350448 -1.122336  2.841532  0.1427587
#> [3,] -0.19998115 0.9877364 -1.698322 -2.033514 -1.4707067
#>
#> , , 2
#>
#>            [,1]        [,2]       [,3]       [,4]       [,5]
#> [1,]  0.6229412 -0.01278372 -1.3598823 -1.6762711 -0.7046493
#> [2,] -0.3639559 -1.17917665 -0.1584992 -2.1216170 -0.1896852
#> [3,] -1.9480121  2.26426156 -0.3665493 -0.7474772  0.5891663
t_tpose(A,"dct")
#> Numeric Tensor of 3 Modes
#> Modes:  5 3 2
#> Data:
#> , , 1
#>
#>            [,1]        [,2]       [,3]
#> [1,]  0.4904814  0.02051673 -0.1999811
#> [2,]  0.4477505  0.73504478  0.9877364
#> [3,] -1.0332634 -1.12233571 -1.6983223
#> [4,]  0.7218050  2.84153248 -2.0335139
#> [5,] -0.5188771  0.14275871 -1.4707067
#>
#> , , 2
#>
#>             [,1]       [,2]       [,3]
#> [1,]  0.62294115 -0.3639559 -1.9480121
#> [2,] -0.01278372 -1.1791767  2.2642616
#> [3,] -1.35988232 -0.1584992 -0.3665493
#> [4,] -1.67627113 -2.1216170 -0.7474772
#> [5,] -0.70464929 -0.1896852  0.5891663

Eigenvalue Decomposition

The eigenvalue decomposition function ($$\texttt{tEIG}$$) performs an eigenvalue decomposition on a tensor whose dimensions are $$n \times n \times k$$ using any discrete transform. The function returns two tensors: an $$n \times n \times k$$ tensor of eigenvectors $$P$$ and a diagonal tensor of eigenvalues $$D$$, also $$n \times n \times k$$. Notice that if we multiply $$P D P^{-1}$$ we recover the original tensor.

A = rand_tensor(modes=c(3,3,2))
result = tEIG(A,"dst")
A - tmult(tmult(result$P,result$D,"dst"),tINV(result$P,"dst"),"dst") # zero tensor
#> Numeric Tensor of 3 Modes
#> Modes:  3 3 2
#> Data:
#> , , 1
#>
#>                              [,1]                         [,2]
#> [1,] -5.551115e-16-2.220446e-16i  2.775558e-17-1.665335e-16i
#> [2,] -8.881784e-16+1.480297e-16i -1.776357e-15+2.220446e-16i
#> [3,] -4.440892e-16+3.700743e-16i  9.992007e-16-2.960595e-16i
#>                              [,3]
#> [1,]  2.220446e-16+1.480297e-16i
#> [2,]  6.661338e-16+7.401487e-17i
#> [3,]  0.000000e+00-1.480297e-16i
#>
#> , , 2
#>
#>                              [,1]                         [,2]
#> [1,] -1.332268e-15+3.798274e-16i  6.661338e-16-9.850178e-17i
#> [2,]  1.221245e-15-2.880519e-16i -2.220446e-16-1.253066e-16i
#> [3,]  2.359224e-16-1.683961e-16i  8.881784e-16-1.147800e-16i
#>                              [,3]
#> [1,] -8.881784e-16-6.05406e-17i
#> [2,] -1.998401e-15+1.98069e-16i
#> [3,]  1.443290e-15-3.40565e-16i

LU Decomposition

The LU decomposition function ($$\texttt{tLU}$$) factors a 3-mode tensor into a lower triangular tensor and an upper triangular tensor. Note: in order to perform this operation, the frontal slices need to be square.

A <- rand_tensor(modes=c(3,3,2))
result <- tLU(A,"dht")
A - tmult(result$L,result$U,"dht") # zero tensor
#> Numeric Tensor of 3 Modes
#> Modes:  3 3 2
#> Data:
#> , , 1
#>
#>               [,1] [,2]         [,3]
#> [1,]  0.000000e+00    0 1.110223e-16
#> [2,] -1.110223e-16    0 0.000000e+00
#> [3,] -1.110223e-16    0 0.000000e+00
#>
#> , , 2
#>
#>              [,1]          [,2]          [,3]
#> [1,] 0.000000e+00 -5.551115e-17  0.000000e+00
#> [2,] 0.000000e+00  0.000000e+00  0.000000e+00
#> [3,] 8.326673e-17 -3.885781e-16 -1.110223e-16

QR Decomposition

The QR decomposition function ($$\texttt{tQR}$$) factors a 3-mode tensor into an orthogonal tensor $$Q$$ and an upper triangular tensor $$R$$. Note: in order to perform this operation, the frontal slices need to be square.

A <- rand_tensor(modes=c(3,3,2))
result <- tQR(A,"fft")
A - tmult(result$Q,result$R,"fft") # zero tensor
#> Numeric Tensor of 3 Modes
#> Modes:  3 3 2
#> Data:
#> , , 1
#>
#>               [,1]          [,2]         [,3]
#> [1,]  5.551115e-17  4.440892e-16 0.000000e+00
#> [2,]  0.000000e+00 -1.387779e-17 2.220446e-16
#> [3,]  0.000000e+00  5.551115e-17 2.220446e-16
#>
#> , , 2
#>
#>               [,1]         [,2]         [,3]
#> [1,]  0.000000e+00 1.665335e-16 0.000000e+00
#> [2,]  5.551115e-17 0.000000e+00 0.000000e+00
#> [3,] -2.775558e-17 0.000000e+00 2.220446e-16

Singular Value Decomposition

As an example, we use the singular value decomposition (SVD) for image compression. By factoring a tensor $$\cal{A}$$ into tensors $$\cal{U}$$, $$\cal{S}$$, and $$\cal{V}$$ using the SVD, where $$\cal{U}$$ and $$\cal{V}$$ are orthogonal and $$\cal{S}$$ is diagonal, we can truncate to the first $$k$$ singular values and compare different transforms to the original image.

library(raster)
#> Loading required package: sp
library(grid)

data(raytrace)
tform = "dst"
A = raytrace$boat
wSVD = tSVD(A,tform)

k = 30 # number of singular values kept
U = wSVD$U
V = wSVD$V
S = wSVD$S
tV = t_tpose(V,tform)

# Truncate each factor to the first k singular values/vectors
Uk = as.tensor(U[,1:k,]@data)
Sk = as.tensor(S[1:k,1:k,]@data)
Vk = as.tensor(tV[1:k,,]@data)

X = tmult(Uk, Sk,tform)
X = tmult(X, Vk, tform)

# See how close the compressed image is to the original image
fnorm(X-A)
#> [1] 15.6157

# feature scale to [0, 1]
if (tform=="fft"){
  Xnew = Re(X@data)
} else {
  Xnew = X@data
}
newX = (Xnew-min(Xnew))/(max(Xnew)-min(Xnew))

# View Images

# Compressed image
# grid.raster(newX[,45,])
# Original image
# grid.raster(A[,45,]@data)