Fahad Khalid, MCS-2008:22, 35 pp. TEK, Dept. of Interaction and System Design, 2008.
In this thesis we present a theoretical investigation of the feasibility of using a problem-specific inductive bias for back-propagated neural networks. We argue that if a learning algorithm is biased towards optimizing a certain performance measure, it is plausible to assume that it will achieve a higher score when evaluated with that same measure. We use the term measure function for a multi-criteria evaluation function that can also serve as an inherent objective within a learning algorithm, customizing the algorithm's bias for a specific problem; hence the term measure-based learning algorithms.
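As an illustrative sketch only (not code from the thesis), a measure function might combine several criteria, such as precision and recall, into one score that doubles as both an evaluation metric and a training objective. All names and the weighting scheme below are hypothetical:

```python
# Hypothetical sketch of a multi-criteria "measure function".
# The criteria and weights are illustrative assumptions, not
# definitions taken from the thesis.

def precision(tp, fp):
    # Fraction of positive predictions that were correct.
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    # Fraction of actual positives that were found.
    return tp / (tp + fn) if (tp + fn) else 0.0

def measure_function(tp, fp, fn, weights=(0.5, 0.5)):
    """Weighted combination of precision and recall. In principle
    the same function could evaluate a trained model and also act
    as the objective the learner is biased towards."""
    w_p, w_r = weights
    return w_p * precision(tp, fp) + w_r * recall(tp, fn)

# Example: 8 true positives, 2 false positives, 4 false negatives.
score = measure_function(tp=8, fp=2, fn=4)
print(round(score, 3))  # → 0.733
```

The key point is that a single scalar function aggregates multiple performance criteria, so biasing the learner towards it customizes the inductive bias for the problem at hand.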
We discuss the characteristics of the most commonly used performance measures and establish similarities among them. The characteristics of individual measures, and the similarities established between them, are then related to the characteristics of the back-propagation algorithm in order to explore the applicability of introducing a measure function into back-propagated neural networks.
Our study shows that certain characteristics of the error back-propagation mechanism, and of its inherent gradient search method, limit the set of measures that can serve as the measure function. We also highlight the importance of taking the representational bias of the neural network into account when developing methods for measure-based learning.
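One way to see why gradient search constrains the choice of measure, sketched here under assumptions of our own (a one-parameter linear model and finite-difference gradients, neither taken from the thesis): a discrete measure such as 0/1 accuracy is piecewise constant, so its gradient is zero almost everywhere and offers back-propagation no descent direction, whereas a smooth measure like squared error does.

```python
# Illustrative sketch (not thesis code): discrete vs. smooth measures
# under gradient search, for a toy model pred = w * x.

def accuracy_loss(w, x=1.0, target=1.0):
    # 0/1 loss: prediction thresholded at 0.5, then compared to target.
    pred = 1.0 if w * x >= 0.5 else 0.0
    return 0.0 if pred == target else 1.0

def squared_error(w, x=1.0, target=1.0):
    return (w * x - target) ** 2

def finite_diff(f, w, eps=1e-6):
    # Central finite-difference estimate of df/dw.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w = 0.3
print(finite_diff(accuracy_loss, w))  # 0.0 — flat region, no gradient signal
print(finite_diff(squared_error, w))  # ≈ -1.4 — a usable descent direction
```

A measure that is flat almost everywhere cannot drive the weight updates that back-propagation relies on, which is one reason only a subset of performance measures is directly usable as a measure function.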
The overall analysis shows that measure-based learning is a promising area of research with potential for further exploration. We suggest directions for future research that may help realize measure-based neural networks.