DOI

https://doi.org/10.25772/J7Z9-RH75

Author ORCID Identifier

0000-0003-4015-3063

Defense Date

2021

Document Type

Dissertation

Degree Name

Doctor of Philosophy

Department

Systems Modeling and Analysis

First Advisor

Dr. David J. Edwards

Second Advisor

Dr. D'Arcy P. Mays

Third Advisor

Dr. Jason Merrick

Fourth Advisor

Dr. Yanjun Qian

Abstract

Experiments are widely used across many disciplines to uncover information about a system or process. Experimental design is the branch of statistics devoted to selecting the samples, or runs, that best support the subsequent analysis. We study three open problems in experimental design, concerning calibration, sequential experimentation, and model selection.

First, we focus on calibration; the impact of the choice of experimental design on the performance of statistical calibration is largely unknown. We investigate the performance of several experimental designs with regard to inverse prediction via a comprehensive simulation study. Specifically, we compare several design types, including traditional response surface designs, algorithmically generated variance-optimal designs, and space-filling designs.

Next, we address sequential experimentation; uncertainty remains in optimal design techniques regarding the best way to allocate a given set of runs. The emphasis on maximizing information in optimal design has favored running a single large, comprehensive design all at once, with or without replication. In practice, it may be better to first run a small screening design to identify the important factors, followed by an additional design that builds on the knowledge gained in the first phase. We use simulations to compare the performance of D-optimal screening designs with follow-up runs selected by Bayesian D-optimal augmentation against the performance of a nonsequential D-optimal design.

Lastly, we explore model selection; no suitable method is currently available for incorporating pure error into model selection procedures for screening designs that achieves high power without the trade-off of a high false discovery rate. Because the pure-error denominator of the partial F-test carries no noncentrality, partial F statistics computed with pure error tend to be larger; to counteract this, we consider early stopping methods, including Bonferroni-adjusted p-values and our proposed forward selection method, which applies a lack of fit test after each model selection step. Additionally, we develop a model selection method that incorporates pure error by pairing lack of fit tests with LASSO penalized regression. We examine various model selection techniques in a simulation study and propose a strategy for incorporating pure error into model selection procedures that keeps false discovery rates in check.
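The core operation behind the first study, classical inverse prediction, is simple to state: fit a calibration curve in the forward direction, then invert it at a newly observed response. The sketch below illustrates that two-stage workflow, assuming a simple linear model and synthetic data; the model form, design points, and numbers are illustrative assumptions, not results from the dissertation's simulation study.

```python
# A minimal sketch of statistical calibration via inverse prediction,
# assuming a straight-line forward model y = b0 + b1*x and synthetic data.
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: run the calibration experiment at chosen design points x
# and fit the forward model by least squares. The choice of these
# design points is exactly what the first study compares.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # design points
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, x.size)   # simulated responses
b1, b0 = np.polyfit(x, y, 1)                       # slope, intercept

# Stage 2: observe a new response y0 from an unknown x and invert the
# fitted curve; this estimate is what inverse prediction is judged on.
y0 = 3.4
x_hat = (y0 - b0) / b1
print(f"estimated unknown x: {x_hat:.3f}")
```

Under this framing, the design question is how the placement of the x points (response surface, variance-optimal, or space-filling) affects the accuracy and precision of x_hat.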

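The stopping rule proposed in the third study hinges on the pure-error lack-of-fit F-test, which compares the misfit of the current working model against a model-free estimate of the error variance obtained from replicated runs. The sketch below, again with synthetic data and an assumed straight-line working model, shows how the two sums of squares and the F statistic are formed; it is a minimal illustration of the test itself, not the dissertation's full forward selection procedure.

```python
# A minimal sketch of the pure-error lack-of-fit F-test, assuming a
# replicated design and a straight-line working model; data are synthetic.
import numpy as np
from scipy import stats

x = np.array([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0])   # design with replicates
y = np.array([1.9, 2.1, 3.2, 3.0, 3.9, 4.1])     # simulated responses

# Fit the current working model and form the residual sum of squares.
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)
sse = resid @ resid

# Pure error: variation of replicates about their group means, which
# estimates sigma^2 regardless of whether the working model is correct.
levels = np.unique(x)
sspe = sum(((y[x == v] - y[x == v].mean()) ** 2).sum() for v in levels)
df_pe = len(y) - len(levels)              # n minus number of distinct runs

# Lack of fit: residual variation beyond pure error.
sslof = sse - sspe
df_lof = len(levels) - 2                  # distinct runs minus parameters
F = (sslof / df_lof) / (sspe / df_pe)
p = stats.f.sf(F, df_lof, df_pe)
print(f"LOF F = {F:.2f}, p = {p:.3f}  (large p: no evidence of lack of fit)")
```

In a forward selection procedure of the kind described above, one plausible reading is that selection continues while this test signals lack of fit and halts once it does not, keeping pure error in the test denominator while controlling false discoveries.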
Rights

© The Author

Is Part Of

VCU University Archives

Is Part Of

VCU Theses and Dissertations

Date of Submission

12-8-2021
