Among tensor methods, one of the most important and fundamental is tensor decomposition, which factorizes a tensor into a set of low-dimensional latent factors. Latent factors are powerful for revealing the underlying characteristics of data and representing them in a highly compressible way. The CANDECOMP/PARAFAC (CP) decomposition and the Tucker decomposition have been studied for about a century and are the most widely used tensor decomposition models. More recently, the tensor train (TT) decomposition has been proposed; the TT decomposition is computationally more convenient than the CP and Tucker decompositions.
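As a concrete illustration of the TT model mentioned above, the following is a minimal sketch of the standard TT-SVD algorithm (sequential truncated SVDs of reshaped unfoldings); it is an illustrative implementation, not the exact procedure used later in the thesis.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into TT cores via sequential truncated SVDs.

    A minimal sketch of TT-SVD; `max_rank` caps every TT rank.
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(rank, shape[k], r))  # k-th TT core
        rank = r
        # carry the remainder to the next unfolding
        mat = (S[:r, None] * Vt[:r]).reshape(rank * shape[k + 1], -1)
    cores.append(mat.reshape(rank, shape[-1], 1))  # last core
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return np.squeeze(out, axis=(0, -1))
```

With `max_rank` large enough, the reconstruction is exact; smaller values trade accuracy for compression.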

## Background

We then review the tensor background, some basic tensor decomposition models, and the tensor completion method used in our research. By tensor singular value decomposition (t-SVD), the original image is split into two orthogonal tensors and one diagonal tensor. Chapter 5 contains a general conclusion of the thesis and our future work.

## Summary of contributions

- TT rank with TV for MRI data reconstruction
- Black-box adversarial attack by t-SVD
- Notations
- CP decomposition and Tucker decomposition
- Tensor train decomposition
- Tensor singular value decomposition
- Proposed method introduction
- Previous work about total variation and tensor completion
- Simple low-rank tensor completion combined with TT rank

By analyzing the relationships in the data obtained through this method, the unknown data can be predicted and the sampling time can also be reduced. Recent studies have shown that imposing tensor train (TT) and total variation (TV) constraints on tensor completion can produce impressive performance: TT rank minimization serves as the global constraint, while total variation serves as the regional (local smoothness) constraint. By applying the alternating direction method of multipliers (ADMM), the optimization model is decomposed into subproblems; singular value thresholding solves the first subproblem and soft thresholding solves the second.
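The two proximal operators named above can be sketched as follows; this is a generic illustration of singular value thresholding and soft thresholding, not the thesis's full ADMM solver.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * (nuclear norm).

    Shrinks every singular value of M by tau; in TT-rank minimization it
    would be applied to each TT unfolding of the tensor.
    """
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    S = np.maximum(S - tau, 0.0)
    return (U * S) @ Vt

def soft_threshold(X, tau):
    """Element-wise soft thresholding: the prox of tau * (l1 norm),
    used for the TV subproblem."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```

Note that `svt` can reduce the rank outright: any singular value below `tau` is set to zero.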

Adversarial attacks add a small perturbation to the input so that the model misclassifies it, and it has been shown that a small perturbation can change the output of neural networks [13] [14]. The output of most image classification models can be modified by white-box attacks [16], and the results show that after training with ML classifiers, these image data lie close to the decision boundaries. A white-box attack can be run efficiently with gradient descent [13] [17] and typically has higher query efficiency than a black-box attack (finding successful ResNet/ImageNet attacks requires on the order of 10^4–10^5 queries).
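To make the white-box/gradient idea concrete, here is a toy FGSM-style step against a hypothetical linear softmax classifier (the parameters `W`, `b` are illustrative, not any model from the thesis); the attacker moves the input along the sign of the exact loss gradient, which a black-box attacker cannot compute.

```python
import numpy as np

def fgsm_step(x, y, W, b, eps):
    """One FGSM step on a toy linear softmax classifier (illustrative only).

    x: input vector, y: true class index, eps: perturbation budget.
    """
    logits = W @ x + b
    z = logits - logits.max()            # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    onehot = np.zeros(len(b))
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)          # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)     # step that increases the loss
```

For this convex toy model a single step provably increases the cross-entropy loss; deep networks need the same idea applied through backpropagation.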

A matrix X with missing data can be recovered by assuming that the matrix has a low-rank structure. A new model based on low rank and total variation was therefore proposed [38]. In its formulation, a trade-off parameter with a value between 0 and 1 balances the two terms; choosing anisotropic TV as the optimization norm, the optimization can be written accordingly.
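The anisotropic TV term used here is simply the sum of absolute first-order differences along each axis; a minimal sketch for a 2-D array:

```python
import numpy as np

def anisotropic_tv(X):
    """Anisotropic total variation of a 2-D array: the sum of absolute
    vertical and horizontal first-order differences."""
    return np.abs(np.diff(X, axis=0)).sum() + np.abs(np.diff(X, axis=1)).sum()
```

A constant image has zero TV, while sharp jumps between neighboring pixels are penalized, which is why TV promotes piecewise-smooth reconstructions.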

## Proposed method

By applying the ADMM method, each variable can be minimized in turn while the other variables are held fixed. Furthermore, the computational cost can be reduced by precomputing the D∗D operator outside the main loop. The data-fidelity term takes the form ½∥L − D(Y)∥²F (3.27); since the regularizer is the anisotropic total variation, the problem can be transformed and solved element-wise, where the solution is given by the soft-threshold operator.

The convergence condition is satisfied when the relative error between two consecutive recovered tensors is smaller than a given value; it can be denoted as ∥X(n) − X(n−1)∥F / ∥X(n)∥F ≤ ε, where X(n) is the completed tensor at the n-th iteration and ε is a given tolerance.
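The stopping rule above amounts to a one-line relative-error check:

```python
import numpy as np

def converged(X_new, X_old, eps):
    """Stopping rule: relative change ||X^(n) - X^(n-1)||_F / ||X^(n)||_F <= eps."""
    return np.linalg.norm(X_new - X_old) / np.linalg.norm(X_new) <= eps
```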

## Experiment and result

### Experimental parameter selection

For 6-coil, 15-coil, 24-coil, and full-coil MRI data, the missing ratio ranges from 40% to 90%, and the results show that our method consistently achieves better completion than all other methods. First, the observations confirm that low-TT-rank completion is beneficial and that TT decomposition combined with total variation performs better than TT alone. Compared with SiLRTC-TT, our method performs better because adding total variation to SiLRTC provides smooth regional structures. The PCLR method uses a linear relationship and a phase constraint to recover missing data; it reshapes the original MRI data into a matrix larger than the original data, but this loses some structural information needed to recover the data.

As the missing ratio increases, our model performs better than the other models: the results show that the PSNR and SSIM of SiLRTC drop faster than those of the proposed method. The proposed model captures both the global and the regional information of the MRI data; even at a 90% missing ratio, it uses this constraint to recover the data with satisfactory accuracy.

## Conclusion

### Proposed method introduction

To improve query efficiency, we propose a method that shifts the target of the adversarial perturbation from the original image pixels to another representation with a smaller amount of data. Tensor singular value decomposition [48] is one of the essential tensor methods: it is used to decompose the image data and is an important tool for data analysis, allowing us to separate the low-rank (high-value) part and the high-rank part of an image. For some attack methods, it has been confirmed that the perturbation is roughly concentrated in the high-rank part, so these attacks can easily be defended against by a low-rank assumption [49] [51].

In the proposed method, the perturbation is added to both the high-rank part and the low-rank part. First, the original image is split into two orthogonal tensors and one diagonal tensor by tensor singular value decomposition (t-SVD). To keep the proposed method efficient, we do not need to pay excessive attention to the optimal search direction.
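The t-SVD split described above can be sketched as follows; this is a generic textbook implementation (slice-wise SVDs in the Fourier domain along the third mode), not the exact code used in the thesis.

```python
import numpy as np

def t_product(A, B):
    """t-product of two 3-way tensors: slice-wise matrix products in the
    Fourier domain along the third mode."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def t_transpose(A):
    """t-transpose: transpose each frontal slice and reverse slices 2..n3."""
    return np.concatenate([A[:, :, :1], A[:, :, :0:-1]], axis=2).transpose(1, 0, 2)

def t_svd(A):
    """Tensor SVD of a real 3-way array A (n1 x n2 x n3): A = U * S * V^T
    under the t-product, with S f-diagonal. Only the first half of the
    Fourier slices is decomposed; the rest follow by conjugate symmetry."""
    n1, n2, n3 = A.shape
    Af = np.fft.fft(A, axis=2)
    Uf = np.zeros((n1, n1, n3), dtype=complex)
    Sf = np.zeros((n1, n2, n3), dtype=complex)
    Vf = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3 // 2 + 1):
        U, s, Vh = np.linalg.svd(Af[:, :, k])
        Uf[:, :, k] = U
        for i, val in enumerate(s):
            Sf[i, i, k] = val
        Vf[:, :, k] = Vh.conj().T
    for k in range(n3 // 2 + 1, n3):  # conjugate symmetry, since A is real
        Uf[:, :, k] = Uf[:, :, n3 - k].conj()
        Sf[:, :, k] = Sf[:, :, n3 - k].conj()
        Vf[:, :, k] = Vf[:, :, n3 - k].conj()
    to_real = lambda T: np.real(np.fft.ifft(T, axis=2))
    return to_real(Uf), to_real(Sf), to_real(Vf)
```

Truncating the f-diagonal tensor S then separates the low-rank (high-value) part from the high-rank part of the image.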

## Adversarial attack

### Untargeted and targeted attacks

### Attack models

A white-box attack is straightforward to mount because the model structure and parameter settings are exposed to the attacker. For black-box attacks, the only available operation is to feed data to the model and obtain the corresponding output. For example, when we choose to attack Google Cloud Vision, each query costs time and money, so in addition to keeping the perturbed image imperceptible, minimizing the number of queries must also be considered.

Although tensor X and tensor S have the same size, S is a diagonal tensor while X is a dense tensor, so adding the perturbation on tensor X is more efficient.

### Algorithm

Simple adversarial black-box attack by t-SVD. Input: the original image X, a query direction q belonging to a set of vectors Q, and a step size. The candidate diagonal tensor W can consist of several different types of basis tensors: the standard basis, a random orthogonal diagonal basis, and some specified diagonal bases. The random diagonal basis attack is effective, but we found that, compared with the standard basis and the random orthogonal diagonal basis, adding specific noise of the orthogonal diagonal basis to W increases the attack efficiency and fits images more naturally [54].
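The query loop above can be sketched as a SimBA-style greedy search; `model_prob` is a hypothetical black-box returning the model's confidence in the original label, and `directions` stands in for whichever candidate basis is chosen (standard, random orthogonal diagonal, or specified). This is a generic sketch, not the thesis's exact algorithm.

```python
import numpy as np

def simba_style_attack(x, model_prob, directions, eps, max_queries=1000):
    """Greedy black-box attack sketch in the spirit of SimBA: try +/- eps
    along each candidate direction and keep any step that lowers the
    model's confidence in the true label."""
    p = model_prob(x)
    queries = 1
    for d in directions:
        if queries >= max_queries:
            break
        for sign in (1.0, -1.0):
            cand = x + sign * eps * d
            p_new = model_prob(cand)
            queries += 1
            if p_new < p:
                x, p = cand, p_new
                break  # accept the step and move to the next direction
    return x, p, queries
```

Each direction costs at most two queries, which is why the choice of basis directly controls query efficiency.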

### Budget considerations

- Untargeted attack on Google Cloud Vision
- Untargeted and targeted attacks on ResNet-50
- The qualitative comparison of different methods
- Evaluating different networks

The results show that our method ultimately achieves a relatively high success rate, and its success rate rises dramatically faster than that of SimBA and SimBA-DCT. In this experiment, we test the performance of our method by attacking the ResNet-50 network [57] and compare it with QL-attack, SimBA, and SimBA-DCT. Both untargeted and targeted attacks are performed, and the number of queries, the success rate, and the average L2 norm of the perturbation are used to evaluate the performance of our method.

From Table 2, we can see that our method needs significantly fewer queries than the other methods. Although we do not achieve a higher success rate than SimBA and SimBA-DCT, our method costs fewer queries. In the targeted attack experiment, the tested methods are much more comparable, but our method still requires fewer queries than the other methods.

## Conclusion

FIGURE 4.3: The success rate and the number of queries for the ResNet-50 and DenseNet-121 models under untargeted attacks.

## APPENDIX

### PROOF OF THEOREM 1

### PROOF OF THEOREM 4

### PROOF OF THEOREM 5

TT-TV model for data completion (Chapter 2): In this research, we present a new method that minimizes the TT rank together with a total variation model. Instead of imposing an alternating linear scheme, nuclear norm regularization is introduced on the TT ranks in our method, as it is an effective surrogate for rank optimization, and our solution does not need to initialize and update tensor cores. VDT is introduced in this work to reshape the MRI data and improve the performance of the proposed algorithm.

The results show that our method has advantages in relative standard error (RSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). It also shows that our method achieves better accuracy than other methods based on the low-rank constraint. We further show that, without adding the perturbation directly to the original image, our method achieves better query efficiency than the state-of-the-art methods.

## Future work

[21] Andrzej Cichocki, Danilo Mandic, Lieven De Lathauwer, Guoxu Zhou, Qibin Zhao, Cesar Caiafa, and Anh Huy Phan. Estimation of some extreme singular values and vectors for large-scale matrices in tensor train format.

Simultaneous visual data completion and denoising based on tensor rank and total variation minimization and its primal-dual splitting algorithm.

[38] Wenfei Cao, Yao Wang, Jian Sun, Deyu Meng, Can Yang, Andrzej Cichocki, and Zongben Xu.