Doctor of Philosophy
Over the past decade, Machine Learning (ML) research has predominantly focused on building increasingly complex models in order to improve predictive performance. This approach proved successful in creating models that approximate highly complex relationships while taking advantage of large datasets. However, it also produced extremely complex black-box models that lack reliability and are difficult to interpret. By lack of reliability, we specifically refer to inconsistent, unpredictable behavior in situations outside the training data. Lack of interpretability refers to the difficulty of understanding the inner workings of learned models and of extracting knowledge from them.
The objective of this dissertation is to improve the reliability and interpretability of ML models. To improve reliability, we focus on modeling uncertainty. We study the performance of several deterministic and stochastic Neural-Network models on modeling the dynamics of physical systems. We present an approach that uses Bayesian Neural Networks (BNNs) to model system dynamics under uncertainty. The BNN model is used to find optimal planned trajectories that take the system to a predefined goal state by estimating the state of the system several steps ahead. Modeling uncertainty using BNNs improves the reliability of long-term estimates, increasing the performance of the planning task.
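The planning idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the dissertation's actual method: the posterior weight samples, the linear toy dynamics, and the random-shooting planner (`sample_weights`, `bnn_step`, `plan`) are hypothetical stand-ins for a trained BNN and a proper trajectory optimizer. Candidate action sequences are scored by their expected distance to the goal under the sampled dynamics, so the plan accounts for model uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(n):
    # Stand-in for posterior weight samples of a trained BNN:
    # each sample is a slightly perturbed linear dynamics model.
    samples = []
    for _ in range(n):
        A = np.eye(2) + 0.01 * rng.normal(size=(2, 2))
        B = np.array([[0.0], [1.0]]) + 0.01 * rng.normal(size=(2, 1))
        samples.append((A, B))
    return samples

def bnn_step(state, action, weight_sample):
    # One-step dynamics under a single sampled model.
    A, B = weight_sample
    return A @ state + (B @ action).ravel()

def rollout(state, actions, weight_sample):
    # Estimate the state several steps ahead under one weight sample.
    for a in actions:
        state = bnn_step(state, a, weight_sample)
    return state

def plan(state, goal, horizon=5, n_candidates=256, n_weight_samples=20):
    # Random-shooting planner: pick the action sequence with the
    # lowest expected final distance to the goal over the posterior.
    weights = sample_weights(n_weight_samples)
    best_cost, best_plan = np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 1))
        finals = np.array([rollout(state, actions, w) for w in weights])
        cost = np.mean(np.linalg.norm(finals - goal, axis=1))
        if cost < best_cost:
            best_cost, best_plan = cost, actions
    return best_plan, best_cost
```

Averaging the cost over weight samples, rather than using a single point estimate of the dynamics, is what penalizes trajectories whose long-term outcome the model is uncertain about.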
Interpretability is addressed by embedding domain knowledge into data-driven models. We present an approach that embeds domain knowledge extracted from physical laws into Variational Gaussian Processes. Domain knowledge is embedded using a linear prior derived from basic Newtonian mechanics. We show that embedding domain knowledge improves the predictive performance of the model, and that the resulting model is interpretable without any degradation in accuracy.
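The notion of a physics-derived prior can be illustrated with a small example. For brevity this sketch uses exact GP regression rather than the variational formulation, and the constant-acceleration mean function `newtonian_mean` and the kernel hyperparameters are illustrative assumptions, not the dissertation's actual model. The GP then only has to learn the residual between the data and the physics.

```python
import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    # Squared-exponential kernel between two 1-D input arrays.
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def newtonian_mean(t, x0=0.0, v0=5.0, g=9.81):
    # Physics-derived prior mean: position under constant acceleration,
    # x(t) = x0 + v0*t - g*t^2/2 (basic Newtonian kinematics).
    return x0 + v0 * t - 0.5 * g * t**2

def gp_posterior(t_train, y_train, t_test, noise=0.05):
    # GP regression with the physics model as prior mean:
    # the GP captures only the deviation from the physics.
    K = rbf(t_train, t_train) + noise**2 * np.eye(len(t_train))
    Ks = rbf(t_test, t_train)
    resid = y_train - newtonian_mean(t_train)
    alpha = np.linalg.solve(K, resid)
    mean = newtonian_mean(t_test) + Ks @ alpha
    cov = rbf(t_test, t_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov
```

Because the prior mean already encodes the kinematics, the learned component of the model is directly interpretable as the deviation of the observed system from ideal Newtonian behavior.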
In this dissertation, we center our discussion on modeling physical and cyber systems. Reliable and interpretable machine learning models are particularly sought after in these applications, especially when they involve mission-critical and critical-infrastructure operations. Models that remain reliable in the presence of uncertainty are fundamental for the operation of physical and cyber systems, and modeling uncertainty provides a framework to warn users about potentially incorrect predictions. In monitoring and diagnosis applications, interpretable models provide operators with the means to extract useful information. Finally, physical and cyber systems have rich structures with a large body of domain knowledge that has yet to be explored in conjunction with machine learning systems. This dissertation shows how domain knowledge from physical and cyber systems can be embedded in order to improve interpretability. More specifically, we demonstrate how to take advantage of physics domain knowledge and the graph structure of cyber systems to design more structured and interpretable machine learning models.
© Daniel L Marino
Is Part Of
VCU University Archives
VCU Theses and Dissertations
Available for download on Tuesday, November 29, 2022