DOI

https://doi.org/10.25772/CTXY-9R31

Author ORCID Identifier

0000-0002-3333-5101

Defense Date

2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy

Department

Computer Science

First Advisor

Milos Manic

Second Advisor

Eyuphan Bulut

Third Advisor

David Shepherd

Fourth Advisor

Craig Rieger

Fifth Advisor

Ronald L. Boring

Abstract

The motivation for this dissertation is twofold. First, the current state of machine learning calls for unsupervised Machine Learning (ML). Second, once such models are developed, a deeper understanding of them is necessary for humans to adapt and use them.

Real-world systems generate massive amounts of unlabeled data at high speed, limiting the usability of state-of-the-art supervised machine learning approaches. Moreover, the manual labeling process is expensive, time-consuming, and requires domain expertise. Existing supervised learning algorithms are therefore unable to take advantage of the abundance of unlabeled real-world data, and relying on supervised learning alone is insufficient in many real-world settings. Improving existing unsupervised machine learning algorithms and developing novel ones is thus necessary.

Once unsupervised ML models have been developed, humans must understand them in order to adapt and use them effectively. Even with the tremendous success of ML in many domains, humans are still hesitant to develop, deploy, and use machine learning methods because they cannot understand the internal decision-making process of these methods (their black-box nature). It is therefore essential either to develop machine learning algorithms that are inherently explainable or to develop approaches that explain the decision-making process of existing methods. This is typically referred to as explainable or interpretable machine learning. Developing novel methodologies for interpreting unsupervised machine learning methods is therefore necessary.

The objectives of this dissertation are to improve the feature learning capability of unsupervised neural networks and to interpret their decision-making process. We present a novel Self-Organizing Neural Network architecture with improved classification accuracy and generalizability. We also present a deep Autoencoder Neural Network-based framework for unsupervised feature learning and deep embedded clustering with improved robustness to network depth. Further, we develop interpretability techniques for explaining the decision-making process of these unsupervised neural networks, providing insightful and satisfactory explanations that match expert knowledge.
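As an illustration of the latent-space clustering idea mentioned above, the following minimal Python sketch (not the dissertation's implementation; the layer sizes, optimizer, and k-means step are illustrative assumptions) trains an autoencoder on unlabeled data and then clusters the learned embeddings.

```python
# Minimal sketch: unsupervised feature learning with an autoencoder,
# followed by clustering in the learned embedding space.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # low-dimensional embedding
        return self.decoder(z), z    # reconstruction and embedding

def embed_and_cluster(data, n_clusters=10, epochs=50):
    """Train the autoencoder on unlabeled data, then cluster its embeddings."""
    model = Autoencoder(in_dim=data.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    x = torch.as_tensor(data, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = model(x)
        loss = loss_fn(recon, x)     # reconstruction loss only; no labels used
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, z = model(x)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.numpy())
```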

Rights

© The Author

Is Part Of

VCU University Archives

Is Part Of

VCU Theses and Dissertations

Date of Submission

5-9-2022
