DOI

https://doi.org/10.25772/0PZ9-QP69

Defense Date

2016

Document Type

Thesis

Degree Name

Master of Science

Department

Computer Science

First Advisor

Vojislav Kecman

Second Advisor

Alberto Cano

Abstract

This thesis proposes a novel experimental environment (non-linear stochastic gradient descent, NL-SGD) and a novel online learning algorithm (OL SVM) for solving the classic nonlinear soft-margin L1 Support Vector Machine (SVM) problem with a Stochastic Gradient Descent (SGD) algorithm. The NL-SGD implementation uses a unique method of random sampling and alpha calculation. The developed code achieves competitive accuracy and speed compared with the Direct L2 SVM solutions obtained by the Minimal Norm SVM (MN-SVM) and Non-Negative Iterative Single Data Algorithm (NN-ISDA) software. These two algorithms have shown excellent performance on large datasets, which is why NL-SGD and OL SVM are compared against them. All experiments were run under strict double (nested) cross-validation, and the results are reported in terms of accuracy and CPU time. OL SVM is implemented in MATLAB and compared with the classic Sequential Minimal Optimization (SMO) algorithm implemented in MATLAB's fitcsvm solver. The OL SVM experiments use k-fold cross-validation, and results are reported as percent error and percent speedup in CPU time.
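For readers unfamiliar with SGD-based training of nonlinear (kernel) soft-margin SVMs, the sketch below shows a generic Pegasos-style kernelized SGD in Python. It is illustrative only: it is not the thesis's NL-SGD or OL SVM (both implemented in MATLAB), whose random-sampling scheme and alpha calculations are the thesis's own contributions. All names and parameters here (rbf_kernel, kernel_sgd_svm, lam, gamma, T) are assumptions for this sketch.

```python
import numpy as np

def rbf_kernel(X, x, gamma=1.0):
    """RBF kernel values K(x_j, x) for every row x_j of X."""
    return np.exp(-gamma * np.sum((X - x) ** 2, axis=1))

def kernel_sgd_svm(X, y, lam=0.01, T=1000, gamma=1.0, seed=None):
    """Pegasos-style kernelized SGD for the soft-margin SVM (sketch).

    Labels y must be in {-1, +1}. Returns alpha counts; the decision
    function is f(x) = (1 / (lam * T)) * sum_j alpha_j * y_j * K(x_j, x).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    alpha = np.zeros(n)
    for t in range(1, T + 1):
        i = rng.integers(n)                  # sample one point uniformly
        k = rbf_kernel(X, X[i], gamma)       # K(x_j, x_i) for all j
        margin = y[i] * ((alpha * y) @ k) / (lam * t)
        if margin < 1:                       # hinge-loss violation
            alpha[i] += 1
    return alpha

def predict(alpha, X_train, y_train, X_test, lam, T, gamma=1.0):
    """Sign of the kernel expansion over the training set."""
    scores = np.array([
        (alpha * y_train) @ rbf_kernel(X_train, x, gamma)
        for x in X_test
    ]) / (lam * T)
    return np.sign(scores)

# Toy usage on an XOR-like, nonlinearly separable problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])
alpha = kernel_sgd_svm(X, y, lam=0.01, T=2000, gamma=2.0, seed=0)
pred = predict(alpha, X, y, X, lam=0.01, T=2000, gamma=2.0)
print("train accuracy:", np.mean(pred == y))
```

In this standard variant, each uniformly sampled point whose margin is violated has its alpha count incremented; the thesis's NL-SGD replaces that uniform sampling and alpha update with its own scheme.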

Rights

© The Author

Is Part Of

VCU University Archives

Is Part Of

VCU Theses and Dissertations

Date of Submission

5-16-2016
