DOI

https://doi.org/10.25772/F3ZW-P786

Defense Date

2012

Document Type

Thesis

Degree Name

Master of Science

Department

Computer Science

First Advisor

Vojislav Kecman

Abstract

Classification is one of several applications in the neural network (NN) world. The multilayer perceptron (MLP) is the most common neural network architecture used for classification tasks. It is famous for its error backpropagation (EBP) algorithm, which opened a new way of solving classification problems given a set of empirical data. In this thesis, we performed experiments with three different NN structures in order to find the best MLP neural network structure for the nonlinear classification of multiclass data sets. The learning algorithm developed here is the batch EBP algorithm, which uses all the data as a single batch while updating the NN weights. The batch EBP speeds up training significantly, which is also why the title of the thesis is dubbed 'fast NN …'. In the batch EBP, when linear neurons are used in the output layer, the pseudo-inverse algorithm is implemented to calculate the output layer weights; in this way, one always finds the minimum of the cost function for the given hidden layer weights. Three different MLP neural network structures have been investigated for solving classification problems having K classes: one model/K output layer neurons, K separate models/one output layer neuron, and K joint models/one output layer neuron. The extensive series of experiments performed within the thesis showed that the best structure for solving multiclass classification problems is the K joint models/one output layer neuron structure.
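The abstract's central algorithmic point is the hybrid scheme: hidden layer weights are trained by a batch gradient (EBP) step over all samples at once, while the linear output layer is solved exactly by the pseudo-inverse for the current hidden weights. The following is a minimal NumPy sketch of that idea, not the thesis's own code; the function name, variable names (V, W), the tanh hidden activation, learning rate, and toy data are all illustrative assumptions.

import numpy as np

def train_batch_ebp(X, Y, n_hidden=8, epochs=200, lr=0.05, seed=0):
    # Batch EBP sketch: hidden weights V get one gradient step per epoch
    # computed over the WHOLE batch; output weights W for the linear
    # output neurons are recomputed each epoch by the pseudo-inverse,
    # the least-squares optimum for the current (fixed) hidden weights.
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])       # append input bias column
    V = rng.normal(scale=0.5, size=(Xb.shape[1], n_hidden))
    for _ in range(epochs):
        H = np.tanh(Xb @ V)                             # hidden layer outputs
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])   # append hidden bias column
        W = np.linalg.pinv(Hb) @ Y                      # pseudo-inverse output weights
        E = Hb @ W - Y                                  # output error over the batch
        dH = (E @ W[:-1].T) * (1.0 - H**2)              # backprop through tanh
        V -= lr * (Xb.T @ dH) / len(X)                  # single batch update
    return V, W

# Toy 3-class example with one-hot targets
# (the "one model / K output layer neurons" structure):
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
Y = np.eye(3)[rng.integers(0, 3, size=60)]
V, W = train_batch_ebp(X, Y)

Because the output layer is linear, the pseudo-inverse step is exact for the current hidden weights, so each epoch needs only one gradient update for the hidden layer; under these assumptions, that is the source of the speed-up the abstract refers to.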

Rights

© The Author

Is Part Of

VCU University Archives

Is Part Of

VCU Theses and Dissertations

Date of Submission

May 2012
