Massey Documents by Type

Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294

Search Results

Now showing 1 - 4 of 4
  • Item
    Learning object metadata interchange mechanism : a thesis presented in partial fulfillment of the requirements for the degree of Master of Information Science at Massey University, Palmerston North, New Zealand
    (Massey University, 2005) Zhang, Yuejun
    In spite of the current lack of conceptual clarity in its multiple definitions and uses, the term learning objects is still frequently used for content creation and aggregation in the online-learning field. In the meantime, considerable efforts have been initiated in the past few years to standardize metadata elements for the consistent description of learning objects, so that learning objects can be identified, searched and retrieved effectively and efficiently across multiple contexts. However, there are currently a large number of standardization bodies and an even larger number of ongoing standards initiatives in the learning field, and different learning object repositories are likely to apply different metadata schemas to meet the specific needs of their intended communities. An interchange mechanism for converting between the various metadata schemas therefore becomes necessary for interoperability. In this thesis, we first introduce the concept of learning objects and the term metadata, followed by a description of the functional requirements of learning objects, the purposes of metadata, and the importance of metadata for learning objects. After that, the thesis investigates metadata schemas in various fields in general, focusing on several mainstream metadata specifications developed for learning objects in particular. The differences among these metadata schemas for learning objects are analyzed and a mapping between their elements is identified. On the basis of the literature review, a framework for the interchange of metadata schemas is proposed and a prototype demonstrating the functionality of the framework is developed. To achieve high scalability and high accuracy in the developed system, a so-called LOM-intermediated approach is suggested and a so-called dynamic-database methodology is adopted. The LOM-intermediated approach significantly simplifies the metadata mapping problem by recasting direct schema-to-schema mapping as schema-LOM-schema mapping, while the dynamic-database methodology prevents the data loss that would otherwise arise as a by-product of the LOM-intermediated approach. The prototype currently generates and outputs XML metadata in IMS, EdNA, Dublin Core and LOM. It has a web-based three-tier architecture, implemented with Java technologies, using MySQL as the database server and JDBC for database access.
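
    To illustrate the LOM-intermediated idea described in the abstract above, the following minimal Python sketch routes a record from one metadata schema to another through an intermediate LOM-style representation, so each schema needs only a mapping to and from LOM rather than to every other schema. The field names and the DC_TO_LOM and LOM_TO_EDNA mappings are simplified assumptions for illustration; the actual prototype was a Java/JDBC/MySQL web application with full IMS, EdNA, Dublin Core and LOM element mappings.

    # Illustrative sketch only (not the thesis prototype, which was Java/JDBC/MySQL).
    # Field names and mappings are simplified assumptions.

    # Mapping from a hypothetical Dublin Core-style record to LOM-style keys.
    DC_TO_LOM = {
        "title": "general.title",
        "creator": "lifecycle.contribute.entity",
        "description": "general.description",
    }

    # Mapping from LOM-style keys to a hypothetical EdNA-style record.
    LOM_TO_EDNA = {
        "general.title": "edna.title",
        "lifecycle.contribute.entity": "edna.contributor",
        "general.description": "edna.description",
    }

    def to_lom(record: dict, mapping: dict) -> dict:
        """Convert a source-schema record into the intermediate LOM form."""
        return {mapping[k]: v for k, v in record.items() if k in mapping}

    def from_lom(lom_record: dict, mapping: dict) -> dict:
        """Convert the intermediate LOM form into a target-schema record."""
        return {mapping[k]: v for k, v in lom_record.items() if k in mapping}

    if __name__ == "__main__":
        dc = {"title": "Intro to Metadata", "creator": "Y. Zhang"}
        edna = from_lom(to_lom(dc, DC_TO_LOM), LOM_TO_EDNA)
        print(edna)  # {'edna.title': 'Intro to Metadata', 'edna.contributor': 'Y. Zhang'}
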
  • Item
    Methods of representing the structure of complex industrial products on computer files, to facilitate planning, costing and related management tasks : a thesis presented in fulfilment of the requirements for the degree of Master of Technology in Manufacturing and Industrial Technology at Massey University
    (Massey University, 1992) Burns, Sara
    When the original concepts for the computerisation of product structures were developed in the late 1960s, the available computer power was very limited. A modularisation technique was developed to address the situation in which a number of similar products were being manufactured. This technique tried to rationalise these products into family groups. Each member of the family differed from the others through the possession of different features or options; however, there was also a high degree of commonality that gave each product its membership of the family. Modularisation involved identifying the options and features that provided the variability. The remaining parts tended to be common to all members of the family and became known as the common parts. Separate Bills of Material (BOMs) were set up for each of the identified options or features, and another BOM was set up for the common parts. The simple combination of the required option and/or feature BOMs with the common-parts BOM specified a product. Computer storage requirements and redundancy were reduced to a minimum. The Materials Requirements Planning (MRP) system could manipulate these option and feature BOMs to over-plan product variability without over-planning the parts common to all members. The modularisation philosophy gained wide acceptance and is the foundation of MRP training. Modularisation, developed for MRP, is generally parts orientated. An unfortunate side effect tends to be the loss of product structure information. Most commercial software would list 6 resistors, Part No. 123, in the common-parts BOM without concern as to where the resistors are fitted. This loss of product structure information can hide the fact that two products using these 6 resistors 'in common' are in fact different because they do not use the resistors in the same 6 places. Additional information must be consulted to enable the correct assembly of the 'common' portion of these products. The electronics industry is especially affected by this situation. This industry has changed considerably since the late 1960s. Product variability can be very high, and changes and enhancements are a constant factor in products with a relatively short life span. The modularisation technique does not have a good mechanism for the situation where an option itself has options or features, a situation that can exist down a number of layers of the family tree structure of an electronics product. Maintenance of these BOMs is difficult. Where there are options within options, the designers and production staff need to know the inter-relationship of parts between options to ensure accuracy and compatibility and to plan assembly functions. The advent of computerised spreadsheets has made the maintenance of this type of product structure information easier, but such a matrix is another separate document which must be maintained and cross-checked, and it will inevitably differ from the BOMs periodically. This thesis develops a product-structure Relational BOM based on the matrix for the family of products. The processing power of 1990s computers is fully utilised to derive the common parts for any or all of the selected products of the family. All product structure information is retained and the inter-relationship of parts is highly visible. The physical maintenance of the BOMs is simple, and the BOM serves all purposes without the need for supplementary information. It is fully integrated into the Sales Order Entry, MRP, Costing, Engineering Design and Computer Aided Manufacturing (CAM) systems. This technique has been proven by being the only system used in one electronics design and manufacturing organisation for over a year without any major problems. As described in Section 1.6, user satisfaction has been high. The response of users to the suggestion 'let's buy an "off-the-shelf" package' is very negative, as such a package would not incorporate this BOM system. Users have expressed the opinion that EXICOM could not continue, with present staffing levels, using the traditional BOM structure.
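
    As a rough illustration of the family-matrix idea behind the Relational BOM described above, the sketch below keeps (part, fitting position) rows with per-product usage and derives the parts common to any selected subset of products without discarding position information. The part numbers, positions and model names are invented for the example; the thesis does not prescribe this particular data structure or implementation language.

    # Illustrative sketch: a family-of-products BOM held as a matrix keyed by
    # (part number, fitting position), with per-product usage, so that "common"
    # parts can be derived for any selected subset of products while retaining
    # where each part is fitted. All names and quantities are made up.

    FAMILY_BOM = {
        ("R123", "R1"): {"MODEL-A": 1, "MODEL-B": 1, "MODEL-C": 1},
        ("R123", "R7"): {"MODEL-A": 1, "MODEL-B": 0, "MODEL-C": 1},
        ("C456", "C2"): {"MODEL-A": 1, "MODEL-B": 1, "MODEL-C": 0},
    }

    def common_parts(bom: dict, products: list) -> list:
        """Return (part, position, qty) rows used identically by every selected product."""
        rows = []
        for (part, pos), usage in bom.items():
            quantities = {usage.get(p, 0) for p in products}
            if len(quantities) == 1 and quantities != {0}:
                rows.append((part, pos, quantities.pop()))
        return rows

    if __name__ == "__main__":
        # Parts common to MODEL-A and MODEL-C, with their fitting positions retained.
        print(common_parts(FAMILY_BOM, ["MODEL-A", "MODEL-C"]))
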
  • Item
    Adapting ACME to the database caching environment : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Information Systems at Massey University
    (Massey University, 2003) Riaz-ud-Din, Faizal
    The field of database cache replacement has seen a great many replacement policies presented in the past few years. As the search for the optimal replacement policy continues, new methods and techniques for determining cache victims have been proposed, some with a greater effect on results than others. Adaptive algorithms attempt to adapt to changing patterns of data access by combining the benefits of other existing algorithms. Such adaptive algorithms have recently been proposed in the web-caching environment; however, there is a lack of such research in the area of database caching. This thesis investigates adapting a recently proposed adaptive web-caching algorithm, known as Adaptive Caching with Multiple Experts (ACME), to the database environment. Recently proposed replacement policies are integrated into ACME's existing policy pool, in order to gauge its ability to readily and robustly incorporate new algorithms. The results suggest that ACME is indeed well suited to the database environment, and that it performs as well as the best-performing caching policy in its policy pool at any particular point in its request stream. Although integrating more policies into ACME increases execution time, the overall time saved increases because higher hit rates and fewer cache misses avoid disk reads.
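
    The weighted-expert idea that ACME builds on can be sketched roughly as follows: each policy in the pool nominates an eviction victim, and a policy loses weight whenever an item it evicted is requested again. The toy Python class below, with only LRU and LFU experts and a single multiplicative penalty, is a much-simplified illustration under these assumptions; it is not the ACME algorithm as originally specified or as adapted in the thesis.

    import collections

    class TinyACME:
        """Toy weighted-expert cache: LRU and LFU each nominate a victim;
        the heavier expert wins, and an expert is penalised when the item
        it evicted is requested again."""

        def __init__(self, capacity=3, penalty=0.9):
            self.capacity = capacity
            self.penalty = penalty
            self.cache = {}                      # key -> value
            self.last_used = {}                  # key -> logical time (for LRU)
            self.freq = collections.Counter()    # key -> access count (for LFU)
            self.weights = {"lru": 1.0, "lfu": 1.0}
            self.evicted_by = {}                 # key -> expert that evicted it
            self.clock = 0

        def _victims(self):
            return {
                "lru": min(self.cache, key=lambda k: self.last_used[k]),
                "lfu": min(self.cache, key=lambda k: self.freq[k]),
            }

        def access(self, key, value=None):
            self.clock += 1
            if key in self.evicted_by:           # an expert's eviction proved wrong
                self.weights[self.evicted_by.pop(key)] *= self.penalty
            if key not in self.cache and len(self.cache) >= self.capacity:
                expert = max(self.weights, key=self.weights.get)
                victim = self._victims()[expert]
                del self.cache[victim]
                self.evicted_by[victim] = expert
            self.cache[key] = value
            self.last_used[key] = self.clock
            self.freq[key] += 1

    cache = TinyACME(capacity=2)
    for k in ["a", "b", "a", "c", "b"]:
        cache.access(k)
    print(cache.weights)   # the expert whose evicted item was re-requested loses weight
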
  • Item
    Dependencies in complex-value databases : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University
    (Massey University, 2004) Link, Sebastian
    The relational data model has been the dominant model in database design for more than three decades. It considers data to be stored in matrices where rows correspond to individuals, columns correspond to attributes, and every cell contains a single atomic value. However, today's database technology trends, e.g. spatial, genetic or web-based data, require extended data models. Within the last decade, new complex-value data models such as the higher-order entity-relationship model, object-oriented data models, semi-structured data models, and XML have evolved which allow cells to contain lists, sets, multisets, trees, matrices or even more complex type constructors, references to other cells (which lead to infinite structures), and null values (indicating missing, unknown or vague data). Matrices as such allow the storage of inconsistent data that is invalid in the semantic sense. As this is not acceptable, additional requirements called dependencies have to be formulated when designing a database. The correct specification and use of dependencies needs a sound mathematical basis. For the relational data model more than 90 different classes of dependencies have been defined and studied intensively. The major problems in dependency theory are the axiomatisability of classes of dependencies, the determination of the closure of a chosen set of dependencies (as certain dependencies can be implied by others), and the characterisation of semantically desirable properties of well-designed databases (such as the absence of redundancies or abnormal update behaviour) by syntactic properties on closed sets of dependencies. With few exceptions, research has dealt only with dependencies for the relational data model. Only recently has the emergence of XML as the standard format for web-based data, together with the rapidly increasing use of persistent XML databases, revealed the lack of a sound mathematical basis for complex-value data models. If they are expected to serve as first-class data models, they require a theoretical investigation of issues such as integrity, consistency, data independence, recovery, redundancy, access rights, views and integration. The goal of this thesis is to develop a dependency theory for complex-value databases that is independent of any individual data model. Therefore, an abstract algebraic approach is taken that can be adapted to the presence of different combinations of type constructors such as records, lists, sets and multisets. Data models are classified according to the data types they support. In this framework the major objective is to initiate research on the following problems: (i) to investigate the axiomatisation of important dependency classes, relevant to complex-value data models, by sound and complete sets of inference rules that permit the determination of all dependencies implied by some chosen set of dependencies; (ii) to characterise semantically desirable properties by normal forms for complex-value data models and to investigate whether these normal forms can always be achieved without violating other desirable properties; and (iii) to develop efficient algorithms for determining the closure of a chosen set of dependencies and for restructuring databases such that normal forms are satisfied and no information is lost. In a single thesis it is impossible to consider all classes of relational dependencies in all combinations of type constructors; therefore the focus is on extending two popular classes of relational dependencies: functional and multi-valued dependencies. The axiomatisation and implication of functional dependencies are investigated for all combinations of record, list, set and multiset types. Furthermore, a normal form with respect to functional dependencies in the presence of records and lists is proposed and semantically justified. It is also shown how to obtain databases that are in this normal form. Finally, axiomatisation and implication for the class of multi-valued dependencies and for the combined class of functional and multi-valued dependencies are studied in the context of records and lists. The work of this thesis may lead to a unified dependency theory for complex-value data models.
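
    For intuition about what determining the closure of a set of dependencies involves, the sketch below computes the classical attribute closure under functional dependencies for a flat relation schema. The thesis generalises implication and axiomatisation to complex-value models with record, list, set and multiset constructors; none of that machinery appears in this simplified example, and the attribute names used are purely illustrative.

    # Illustrative sketch for intuition only: the classical attribute-closure
    # algorithm for functional dependencies over a flat relation schema.

    def closure(attributes: frozenset, fds: list) -> frozenset:
        """Return the set of attributes functionally determined by `attributes`
        under the given functional dependencies (pairs of LHS -> RHS sets)."""
        result = set(attributes)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                if lhs <= result and not rhs <= result:
                    result |= rhs
                    changed = True
        return frozenset(result)

    # Example: with A -> B and B -> C, the closure of {A} is {A, B, C},
    # so A -> C is implied (a consequence of the transitivity rule).
    fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
    print(sorted(closure(frozenset("A"), fds)))   # ['A', 'B', 'C']
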