    Symmetric parallel class expression learning : School of Engineering and Advanced Technology, Massey University, New Zealand : a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science

    View/Open Full Text
    01_front.pdf (88.69Kb)
    02_whole.pdf (2.038Mb)
    Abstract
    The growth of both the size and complexity of learning problems in description logic applications, such as the Semantic Web, requires fast and scalable description logic learning algorithms. This thesis proposes such algorithms using several related approaches and compares them with existing algorithms. Particular measures used for comparison include computation time and accuracy on a range of learning problems of different sizes and complexities.

    The first step is to use parallelisation inspired by the map-reduce framework. The top-down learning approach, coupled with an implicit divide-and-conquer strategy, also facilitates the discovery of solutions for a certain class of complex learning problems. A reduction step aggregates the partial solutions and also provides additional flexibility to customise learnt results.

    A symmetric class expression learning algorithm produces separate definitions of positive (true) examples and negative (false) examples (which can be computed in parallel). By treating these two sets of definitions 'symmetrically', it is sometimes possible to reduce the size of the search space significantly. The use of negative example definitions enhances learning problems with exceptions, where the negative examples ('exceptions') follow a few relatively simple patterns.

    In general, the correctness (true positives) and completeness (true negatives) of a learning algorithm are traded off against each other because these two criteria normally conflict. Particular learning algorithms have an inherent bias towards either correctness or completeness. The use of negative definitions enables an approach (called fortification in this thesis) to improve predictive correctness by applying an appropriate level of over-specialisation to the prediction model, while avoiding over-fitting.

    The experiments presented in the thesis show that these algorithms have the potential to improve both the computation time and predictive accuracy of description logic learning when compared to existing algorithms.
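    The symmetric learning idea described in the abstract can be illustrated with a toy sketch. This is not the thesis's actual algorithm (which operates on description logic class expressions); the names `learn_conjunction`, `symmetric_learn`, and `classify` are invented for illustration. It learns a trivial "definition" (the attributes shared by all examples) separately for positive and negative examples, computes the two in parallel, and uses the negative definition to over-specialise the prediction, in the spirit of the fortification approach.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def learn_conjunction(examples):
        """Toy 'class expression': the set of attributes shared by all examples."""
        common = set(examples[0])
        for ex in examples[1:]:
            common &= set(ex)
        return common

    def symmetric_learn(positives, negatives):
        """Learn definitions of the positives and negatives in parallel."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            pos_future = pool.submit(learn_conjunction, positives)
            neg_future = pool.submit(learn_conjunction, negatives)
            return pos_future.result(), neg_future.result()

    def classify(example, pos_def, neg_def):
        """'Fortified' prediction: the positive definition must hold
        AND the negative definition must fail."""
        ex = set(example)
        return pos_def <= ex and not (neg_def <= ex)

    pos_def, neg_def = symmetric_learn(
        positives=[{"wings", "beak"}, {"wings", "beak", "small"}],
        negatives=[{"fur", "tail"}, {"fur", "whiskers"}],
    )
    # pos_def == {"wings", "beak"}, neg_def == {"fur"}
    print(classify({"wings", "beak", "small"}, pos_def, neg_def))  # True
    print(classify({"wings", "beak", "fur"}, pos_def, neg_def))    # False
    ```

    The second call shows the trade-off the abstract describes: an example matching the positive definition is still rejected because it also matches the negative one, trading some completeness for correctness.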
    Date
    2013
    Author
    Tran, Cong An
    Rights
    The Author
    Publisher
    Massey University
    URI
    http://hdl.handle.net/10179/4928
    Collections
    • Theses and Dissertations
    Metadata
    Show full item record

    Copyright © Massey University
    | Contact Us | Feedback | Copyright Take Down Request | Massey University Privacy Statement
    DSpace software copyright © Duraspace
    v5.7-2020.1-beta1
