|dc.description.abstract||This thesis seeks to address current problems encountered when training classifiers
within the framework of cascades of boosted ensembles (CoBE). At present, a significant challenge facing this framework is inordinate classifier training runtimes. In some
cases, it can take days or weeks (Viola and Jones, 2004; Verschae et al., 2008) to train
a classifier. The protracted training runtimes are an obstacle to the wider use of this
framework (Brubaker et al., 2006). They also hinder the process of producing effective
object detection applications and make the testing of new theories and algorithms, as well
as the verification of others' research, a considerable challenge (McCane and Novins, 2003).
An additional shortcoming of the CoBE framework is its limited ability to train clas-
sifiers incrementally. Presently, the most reliable method of integrating new dataset information into an existing classifier is to re-train the classifier from scratch on the combined new and old datasets. This process is inefficient: it lacks scalability and discards valuable information learned during previous training.
To deal with these challenges, this thesis extends the research of Barczak et al.
(2008), and presents alternative CoBE frameworks for training classifiers. The alterna-
tive frameworks reduce training runtimes by an order of magnitude over common CoBE
frameworks and make the training process more tractable. They achieve this while
preserving the generalization ability of their classifiers.
This research also introduces a new framework for incrementally training CoBE clas-
sifiers and shows how this can be done without re-training classifiers from scratch.
However, the incremental framework for CoBEs has limitations: although it can improve the positive detection rates of existing classifiers, it is currently unable to lower their false detection rates.||en_US