DOI
https://doi.org/10.25772/CKKM-8G96
Defense Date
2022
Document Type
Dissertation
Degree Name
Doctor of Philosophy
Department
Systems Modeling and Analysis
First Advisor
Paul Brooks
Abstract
Finding the best-fit subspace, or low-rank approximation, of a data matrix depends on the choice of norm used to measure approximation error. The l2-norm criterion is the most widely used for fitting subspaces. As computational power has increased, the l1-norm analogue has gained attention in the academic community, since the l1 norm is widely recognized as less sensitive to outliers than its l2 counterpart. Moreover, because the resulting problems have a polyhedral structure amenable to linear programming (LP), the l0 norm is commonly relaxed to the l1 norm to induce sparsity in models. In this work, we examine the role of l1-norm approximation in several domains. Our focus is on the underlying characterization of the l1-norm subspace, the parallel implementation of the fitting algorithm, and its applications in computer vision.
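As a one-dimensional illustration of this outlier insensitivity (a minimal numerical sketch of my own, not drawn from the dissertation): the l2-norm best-fit point of a sample is its mean, the l1-norm best-fit point is its median, and only the former is dragged by a single outlier.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier

    # The mean minimizes sum_i (x_i - c)^2, the l2 criterion;
    # the median minimizes sum_i |x_i - c|, the l1 criterion.
    print(np.mean(x))    # 22.0, pulled far from the bulk of the data
    print(np.median(x))  # 3.0, unaffected by the outlier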
Chapter 2 develops a sparse, outlier-insensitive method for fitting a one-dimensional subspace that can be used as a replacement for eigenvector methods such as principal component analysis. The method achieves its insensitivity to outlier observations by formulating the fitting procedure as an optimization problem that seeks the best-fit line according to the l1 norm. It can also produce sparse principal components by adding a penalty term that encourages sparseness.
Our algorithm has a worst-case time complexity of O(m²n log n). This chapter also presents the results of implementing the algorithm in the parallel, heterogeneous environment of NVIDIA CUDA and discusses its behavior in various settings. Our goal is to demonstrate the scalability and efficiency of the new approach.
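To make the l1 criterion concrete, here is a minimal sketch of my own (not the dissertation's O(m²n log n) algorithm or its CUDA implementation): the planar line minimizing the sum of absolute vertical deviations can be found with a small linear program, using auxiliary variables t_i to linearize the absolute values.

    import numpy as np
    from scipy.optimize import linprog

    def l1_line(x, y):
        # Minimize sum_i t_i subject to |y_i - (a*x_i + b)| <= t_i.
        # Decision variables: [a, b, t_1, ..., t_m].
        x, y = np.asarray(x, float), np.asarray(y, float)
        m = len(x)
        c = np.concatenate(([0.0, 0.0], np.ones(m)))
        I = np.eye(m)
        A = np.block([[-x[:, None], -np.ones((m, 1)), -I],
                      [ x[:, None],  np.ones((m, 1)), -I]])
        b_ub = np.concatenate((-y, y))
        bounds = [(None, None)] * 2 + [(0, None)] * m
        res = linprog(c, A_ub=A, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[0], res.x[1]  # slope a, intercept b

Because each residual enters the objective linearly rather than squared, a single extreme observation exerts far less leverage on the fitted line than it would under least squares.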
Chapter 3 proposes an image denoising approach based on an l1-norm best-fit line algorithm. The denoising process is expressed as a best-fit subspace estimation problem, where the best-fit subspaces are derived in an l1-norm minimization framework. This new approach is competitive with existing approaches in terms of objective error metrics and visual fidelity, and it has the advantage that it can be implemented in parallel for large-scale applications. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise, and its performance is verified experimentally on a variety of images and noise levels.
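A minimal sketch of the patch-based idea (my own simplification: non-overlapping 8x8 patches, an illustrative rank, and an ordinary SVD standing in for the dissertation's l1-norm subspace fit):

    import numpy as np

    def denoise(img, patch=8, rank=4):
        # Crop to a multiple of the patch size and stack non-overlapping
        # patches as the rows of a data matrix.
        h = img.shape[0] - img.shape[0] % patch
        w = img.shape[1] - img.shape[1] % patch
        P = (img[:h, :w]
             .reshape(h // patch, patch, w // patch, patch)
             .swapaxes(1, 2)
             .reshape(-1, patch * patch))
        # Project each patch onto a low-dimensional best-fit subspace.
        # An l1-norm fit would replace this SVD step to reduce the
        # influence of outlier patches.
        mu = P.mean(axis=0)
        _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
        B = Vt[:rank]
        D = mu + (P - mu) @ B.T @ B
        # Reassemble the denoised patches into an image.
        return (D.reshape(h // patch, w // patch, patch, patch)
                 .swapaxes(1, 2)
                 .reshape(h, w))

Swapping the SVD step for an l1-norm subspace fit, as Chapter 3 does, is what confers the outlier insensitivity discussed above.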
Chapter 4 describes a method for denoising data using kernel principal component analysis (KPCA) that recovers preimages of the intrinsic variables in the feature space using a single line search along the gradient descent direction of the squared projection error. The method combines a projection-free preimage estimation algorithm with an l1-norm KPCA. These two stages give it distinct advantages over other KPCA preimage methods: it is insensitive to outliers and computationally efficient. The method can enhance the results of a range of unsupervised learning tasks such as denoising, clustering, and dimensionality reduction. Numerical experiments on the Amsterdam Library of Object Images and on synthetic toy data demonstrate that the proposed method achieves lower mean squared error than its l2-norm analogue. The method is also applied to several other data sets, and its performance is reported.
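A minimal sketch of a gradient-based preimage step (my own simplification, not the dissertation's method: an uncentered Gaussian-kernel KPCA computed by ordinary eigendecomposition rather than the l1-norm variant, with illustrative names, sigma, and line-search grid):

    import numpy as np

    def rbf(A, B, sigma):
        # Pairwise Gaussian kernel values between the rows of A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kpca(X, sigma, q):
        # Uncentered kernel PCA: top-q eigenvectors of K, scaled so the
        # implicit feature-space principal directions have unit norm.
        lam, V = np.linalg.eigh(rbf(X, X, sigma))
        lam, V = lam[::-1][:q], V[:, ::-1][:, :q]
        return V / np.sqrt(np.maximum(lam, 1e-12))

    def preimage_step(z, X, alphas, sigma, grid=np.linspace(0.0, 2.0, 41)):
        # Squared projection error of phi(z), up to an additive constant:
        # rho(z) = -sum_j beta_j(z)^2, beta_j(z) = sum_i alpha_ij k(z, x_i).
        def rho(v):
            kz = rbf(v[None, :], X, sigma).ravel()
            return -((alphas.T @ kz) ** 2).sum()
        kz = rbf(z[None, :], X, sigma).ravel()
        beta = alphas.T @ kz
        # Gradient of rho at z for the Gaussian kernel.
        w = (alphas @ beta) * kz
        grad = -2.0 * (w[:, None] * (X - z)).sum(axis=0) / sigma ** 2
        # Single line search along the negative gradient direction.
        return min((z - t * grad for t in grid), key=rho)

Starting from a noisy point z, such a step moves the preimage toward the learned subspace in input space; per the abstract, a single line search suffices in the proposed method.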
Rights
© The Author
Is Part Of
VCU University Archives
Is Part Of
VCU Theses and Dissertations
Date of Submission
5-13-2022
Included in
Operations Research, Systems Engineering and Industrial Engineering Commons, Philosophy of Science Commons