Massey Documents by Type
Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294
Search Results
10 results
Item
Silicon Welly : the rise of platform capitalism and the paradoxes of precarity in Wellington City : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Social Anthropology at Massey University, Manawatū, Aotearoa New Zealand
(Massey University, 2024-09-20) Halley, Jessica
This thesis addresses a central question: why do digital workers in Wellington's tech sector persist despite the inherent precarity of platform capitalism? Examining the career histories of members of the Enspiral Network, a community focused on social entrepreneurship, reveals the paradoxical nature of subjectivity in digital labour. The research employs ethnographic methods, including life histories and narrative analysis, to explore the intersection of software materiality, neoliberal political economy, and Silicon Valley-inspired discourses. It investigates how digital workers navigate the precariousness of platform capitalism through emotional investment in programming and strategic career adaptations. Findings highlight the distinctive influence of Wellington's cultural, political, and economic landscape on digital labour. The city's counter-cultural ethos and state-driven entrepreneurial initiatives foster unique collaborative practices and open-source contributions within the tech sector. These elements collectively shape a hybrid form of platform capitalism that challenges traditional capitalist models. In conclusion, this thesis contributes to the understanding of contemporary labour by emphasising the role of place, subjectivity, and paradox in the production end of platform capitalism. It underscores the active agency of digital workers in constructing their careers and identities amidst precarious conditions, offering insights into the broader implications of digital labour in the twenty-first century.

Item
An investigation into the unsoundness of static program analysis : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand
(Massey University, 2021) Sui, Li
Static program analysis is widely used in many software applications, such as security analysis, compiler optimisation, program verification and code refactoring. In contrast to dynamic analysis, static analysis can perform a full program analysis without needing to run the program under analysis. While it provides full program coverage, one of the main issues with static analysis is imprecision, i.e., the potential to report false positives due to overestimating actual program behaviours. For many years, research in static program analysis has focused on reducing such imprecision while improving scalability. However, static program analysis may also miss some critical parts of the program, resulting in program behaviours not being reported. A typical example of this is the case of dynamic language features, where certain behaviours are hard to model due to their dynamic nature. The term "unsoundness" has been used to describe those missed program behaviours. Compared to static analysis, dynamic analysis has the advantage of obtaining precise results, as it only captures what has been executed at run-time. However, dynamic analysis is also limited to the program executions it observes. This thesis investigates the unsoundness issue in static program analysis. We first investigate causes of unsoundness in terms of Java dynamic language features and identify potential usage patterns of such features. We then report the results of a number of empirical experiments conducted to identify and categorise the sources of unsoundness in state-of-the-art static analysis frameworks. Finally, we quantify and measure the level of unsoundness in static analysis in the presence of dynamic language features. The models developed in this thesis can be used by static analysis frameworks and tools to boost their soundness.
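Reflection is the canonical example of the Java dynamic language features discussed above. As a minimal illustration (a generic sketch, not code from the thesis), the reflective call below has a target that depends on a run-time string, so static call-graph construction either misses the edge entirely (unsoundness) or must over-approximate:

    import java.lang.reflect.Method;

    public class ReflectiveCall {
        public static void main(String[] args) throws Exception {
            // The target class could come from a config file, the network,
            // or user input, so it is unknown at analysis time.
            String className = args.length > 0 ? args[0] : "java.lang.Object";
            Class<?> clazz = Class.forName(className);
            Object instance = clazz.getDeclaredConstructor().newInstance();
            Method m = clazz.getMethod("toString");
            // This call edge is invisible to a naive static call graph: the
            // analysis either misses it (unsound) or must conservatively
            // assume every toString() in the program may be the target.
            System.out.println(m.invoke(instance));
        }
    }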
Item
Security analyses for detecting deserialisation vulnerabilities : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand
(Massey University, 2021) Rasheed, Shawn
An important task in software security is to identify potential vulnerabilities. Attackers exploit security vulnerabilities in systems to obtain confidential information, to breach system integrity, and to make systems unavailable to legitimate users. In recent years, particularly in 2012, there has been a rise in reported Java vulnerabilities. One type of vulnerability involves (de)serialisation, a commonly used feature for storing objects or data structures in an external format and restoring them. In 2015, a deserialisation vulnerability was reported involving Apache Commons Collections, a popular Java library, which affected numerous Java applications. Another major deserialisation-related vulnerability, also reported in 2015, affected 55% of Android devices. Both of these vulnerabilities allowed malicious users to execute arbitrary code on vulnerable systems, a serious risk, and prompted the Java community to issue patches for serialisation-related vulnerabilities in both the Java Development Kit and libraries. Despite attention to coding guidelines and defensive strategies, deserialisation remains a risky feature and a potential weakness in object-oriented applications. In fact, deserialisation-related vulnerabilities (both denial-of-service and remote code execution) continue to be reported for Java applications. Further, deserialisation is a case of parsing, where external data is converted from its external representation into a program's internal data structures; hence, similar vulnerabilities can be present in parsers for file formats and serialisation languages. The problem is, given a software package, to detect injection or denial-of-service vulnerabilities and to propose strategies to prevent attacks that exploit them. The research reported in this thesis casts detecting deserialisation-related vulnerabilities as a program analysis task. The goal is to automatically discover this class of vulnerabilities using program analysis techniques, and to experimentally evaluate the efficiency and effectiveness of the proposed methods on real-world software. We use multiple techniques to detect reachability to sensitive methods, and taint analysis to detect whether untrusted user input can result in security violations. Challenges in using program analysis for detecting deserialisation vulnerabilities include addressing soundness issues in analysing dynamic features in Java (e.g., native code). Another hurdle is that available techniques mostly target the analysis of applications rather than library code. In this thesis, we develop techniques to address soundness issues related to analysing Java code that uses serialisation, and we adapt dynamic techniques such as fuzzing to address precision issues in the results of our analysis. We also use the results of our analysis to study libraries in other languages and check whether they are vulnerable to deserialisation-type attacks. We then discuss mitigation measures that engineers can apply to protect their software against such vulnerabilities. In our experiments, we show that we can find unreported vulnerabilities in Java code, and that such vulnerabilities are also present in widely-used serialisers for popular languages such as JavaScript, PHP and Rust. In our study, we discovered previously unknown denial-of-service security bugs in applications and libraries that parse external data formats such as YAML, PDF and SVG.
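As background to the vulnerability class, the fragment below is a schematic sketch (not the thesis's analysis code) of the risky pattern such analyses target: untrusted bytes flowing into ObjectInputStream.readObject, the kind of sensitive sink the reachability and taint analyses look for. The commented filter line names one standard JDK mitigation:

    import java.io.ByteArrayInputStream;
    import java.io.ObjectInputStream;

    public class UnsafeDeserialisation {
        // Sink: attacker-controlled bytes reach ObjectInputStream.readObject,
        // which instantiates whatever serialisable classes the stream names;
        // gadget chains in those classes can yield code execution or
        // denial of service before this method even returns.
        static Object fromUntrustedBytes(byte[] data) throws Exception {
            try (ObjectInputStream in =
                    new ObjectInputStream(new ByteArrayInputStream(data))) {
                // One standard mitigation (JDK 9+) is an allow-list filter, e.g.:
                // in.setObjectInputFilter(ObjectInputFilter.Config.createFilter("java.lang.*;!*"));
                return in.readObject();
            }
        }
    }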
Item
A generic model for software size estimation based on component partitioning : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Software Engineering
(Massey University, 1989) Verner, June Marguerite
Software size estimation is a central but under-researched area of software engineering economics. Most current cost estimation models use an estimated end-product size, in lines of code, as one of their most important input parameters. Software size, in a different sense, is also important for comparative productivity studies, which often use a derived size measure such as function points. The research reported in this thesis is an investigation into software size estimation and the calibration of derived software size measures with each other and with product size measures. A critical review of current software size metrics is presented, together with a classification of these metrics into textual metrics, object counts, vector metrics and composite metrics. Within a review of current approaches to software size estimation, which includes a detailed analysis of Function Point Analysis-like approaches, a new classification of software size estimation methods is presented, based on the type of structural partitioning of a specification or design that must be completed before the method can be used. This classification clearly reveals a number of fundamental concepts inherent in current size estimation methods. Traditional classifications of size estimation approaches are also discussed in relation to the new classification. A generic decomposition and summation model for software sizing is presented. Systems are classified into different categories and, within each category, into appropriate component type partitions. Each component type has a different size estimation algorithm, based on size drivers appropriate to that particular type. Component size estimates are summed to produce partial or total system size estimates, as required. The model can be regarded as a generalisation of a number of Function Point Analysis-like methods in current use. Provision is made both for comparative productivity studies using derived size measures, such as function points, and for end-product size estimates using primitive size measures, such as lines of code. The nature and importance of calibration of derived measures for comparative studies is developed. System adjustment factors are also examined, and a model for their analysis and application is presented. The model overcomes most of the recent criticisms that have been levelled at Function Point Analysis-like methods. A model instance derived from the generic sizing model is applied to a major case study of a system of administrative applications, in which a new Function Point Analysis-type metric suited to a particular software development technology is derived, calibrated and compared with Function Point Analysis. The comparison reveals much of the anatomy of Function Point Analysis and its many deficiencies when applied to this case study. The model instance is at least partially validated by application to a sample of components from later incremental developments within the same software development technology. The performance of the model instance for this technology is very good in its own right, and very much better than that of Function Point Analysis. The model is also applied to three other business software development technologies using the IFIP (International Federation for Information Processing) standard inventory control and purchasing reference system. The purpose of this study is to demonstrate the applicability of the generic model to several quite different software technologies. Again, the three derived model instances show an excellent fit to the available data. This research shows that a software size estimation model which takes explicit advantage of the particular characteristics of the software technology used can give better size estimates than methods that do not take into account the component partitions characteristic of that technology.
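The decomposition and summation idea can be sketched in a few lines of Java; the component types, drivers and coefficients below are invented for illustration only and are not the calibrated model instances from the dissertation:

    import java.util.List;
    import java.util.Map;

    public class DecompositionSizing {
        // A typed component with its size drivers (e.g. field or table counts).
        record Component(String type, Map<String, Integer> drivers) {}

        // Each component type gets its own driver-based sizing algorithm;
        // the types and coefficients here are purely illustrative.
        static double size(Component c) {
            return switch (c.type()) {
                case "report" -> 30 + 5.0 * c.drivers().getOrDefault("fields", 0);
                case "update" -> 50 + 8.0 * c.drivers().getOrDefault("tables", 0);
                default       -> 40; // fallback for unclassified components
            };
        }

        // Component estimates are summed into a partial or total system size.
        static double systemSize(List<Component> components) {
            return components.stream().mapToDouble(DecompositionSizing::size).sum();
        }
    }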
Item
A comparative case study of programming language expansion ratios : a thesis presented in partial fulfilment of the requirements for the degree of Master of Technology in Computing Technology at Massey University
(Massey University, 1989) Lim, Ping Hwee
An effective size estimation tool must allow an estimate to be obtained early enough to be useful. Difficulties have been observed in using the traditional lines of code (LOC) measure in software sizing, many of which stem from the need for detailed design information to be available before an accurate estimate can be achieved. This means the result cannot be obtained early in the software development process. Moreover, the inherent language-dependency of LOC tends to restrict its use. An alternative measure using Function Point Analysis, developed by Albrecht, has been found to be an effective tool for sizing purposes and allows early sizing. However, the function point measure does not yet have a sufficient historical base of information for it to be used successfully in all cases with existing models of the software development process. Because lines of code already have a sense of "universality" as the de facto basic measure of software size, they can serve as a useful extension to function points. Language Expansion Ratios are seen as the key to providing such an extension by bridging the gap between function points and lines of code. Several sizing models have made use of expansion ratios in an effort to provide an equivalent size in lines of code, in anticipation of its use in productivity studies and related cost models. However, their use has been associated with wide ranges of variability. The purpose of this thesis is to study Language Expansion Ratios, and the factors affecting them, for several languages, based on a standard case study. This thesis surveys the prevailing issues of software size measurement and describes the role and importance of language expansion ratios. It presents the standard case study used and the methodology for the empirical study. The experimental results of measurements of the actual system are analysed, and these form the basis for conclusions on the validity and applicability of the expansion ratios studied. This research shows that the use of Language Expansion Ratios is valid, but inadequate when applied in its present form. This was found to be due to the weighting factors associated with the function values obtained for the different functional categories of the system.
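At its simplest, an expansion ratio is a multiplier from a derived measure to lines of code. The sketch below is illustrative only; the roughly 105 LOC per function point often quoted for COBOL-class languages is exactly the kind of figure whose variability this thesis examines:

    public class ExpansionRatio {
        // Equivalent LOC from an early function point count:
        // LOC is approximately ratio * function points.
        static long estimateLoc(double functionPoints, double locPerFunctionPoint) {
            return Math.round(functionPoints * locPerFunctionPoint);
        }

        public static void main(String[] args) {
            // Illustrative figure for a COBOL-class language; the ratio
            // varies widely with language and development technology.
            System.out.println(estimateLoc(320, 105)); // prints 33600
        }
    }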
Item
An appraisal of the SMART Board for collaborative learning : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University
(Massey University, 2003) Mohanarajah, Thevalojinie
As a promising learning paradigm for the current decade, Computer Supported Collaborative Learning will blossom with the support of hardware technologies such as digital whiteboards, along with suitable software. We investigated the effectiveness of one kind of digital whiteboard, the SMART Board, for supporting collaborative learning. Our study reveals that key features necessary for group learning, such as a floor control mechanism and interaction guidance, are not supported by the current SMART Board software. We designed and implemented software to overcome the important drawbacks of the existing systems, including facilities to guide written and verbal contributions during learning with the help of a mini-vocabulary, and to manage floor control.
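As an illustration of what such a floor control mechanism involves, the following minimal sketch (not the thesis's software) grants the shared board to one participant at a time:

    public class FloorControl {
        private String holder; // participant currently holding the floor

        public synchronized boolean request(String participant) {
            if (holder == null) {        // floor is free: grant it
                holder = participant;
                return true;
            }
            return false;                // floor is busy: request denied
        }

        public synchronized void release(String participant) {
            if (participant.equals(holder)) {
                holder = null;           // hand the floor back to the group
            }
        }

        public synchronized boolean mayContribute(String participant) {
            return participant.equals(holder);
        }
    }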
Item
Emotion-centred design : a human factors approach in affective web design : a thesis for fulfilment of a Master of Philosophy degree, College of Design, Fine Arts and Music
(Massey University, 2002) Smith, Warren David
This thesis hypothesised that a major factor in the failure of many e-Commerce ventures was the lack of emotion imparted into the design, with trust barriers still to the fore and a lack of affective human factors like fun, pleasure and joy in the user experience. The human brain often acts emotionally before rationally, and this affects initial reactions to experiences and the propensity to purchase online. A key to understanding human-computer communication is that form should follow emotion (as well as function). A wide range of design concepts and theories are analysed for linkages to human emotion, owing to the exploratory nature of this thesis. Aspects of New Media design such as video, sound, images, colour and virtual reality are covered, along with previous research into affective human factors; the transferability of emotional elements from other products; and the importance of trust and the prevention of negative emotions. Case examples are provided throughout via screenshots and commentary, including a special section on the way the Nike site has met many emotional design criteria. Research into the opinions of designers and users is undertaken via questionnaires to verify literary findings and measure views on emotional appeal within Websites. It was found that there are misunderstandings in human-computer communication, with designers not meeting user expectations in some areas, even though many designers agree that emotional design is important. In particular, there needs to be a better understanding of how to integrate fun, social contact, colour, trust and sound into designs. Emotion is core to human function, and evolution saw the emotional parts of the brain grow long before rational areas arose. Given the importance of emotion, it is only natural that an emphasis should be placed on it in design philosophies. While some designers are realising the importance of this in consumer products, the concept needs to be further emphasised in the world of e-Commerce. Designers surveyed in this thesis were nearly all following a form-follows-function or a subjective/intuitive design philosophy. However, there was a good level of support (70%) for emotional design. A gap was established from this fact, because only 45% believe they are currently using a high level of emotional design in practice. Chi-square tests showed a number of significant relationships between level of education and other questionnaire variables, such as the importance of colour and recontextualising from car and game design. Establishing trust helps to overcome the core human emotion of fear. Branding, seals of approval and high-quality navigation are amongst the elements that can assist in bridging human-computer distrust. Predispositions and previous experiences can also affect initial trust values. Questionnaire results found that designers still believe that lack of trust is a major psychological barrier to purchasing online. Major trust dimensions from previous empirical research were all deemed important. It was also found that users and designers rated trust near the top of the emotional themes to concentrate on in Web design. Negative emotions (anger and frustration) can also arise if the design is not inherently usable. Usability was the top-rated design theme amongst designers. There has to be a good balance between the rational and emotional sides. Further negative emotions can be evoked if the site is slow or if there are delays. Speed of loading was amongst the top emotional design elements for both users and designers. It is a difficult line for designers to tread: on the one hand using speed to prevent negative emotions, while on the other balancing the need for other design elements that generate positive emotions through fun and pleasure characteristics (which might slow things down). Designers involved in this study were very much in agreement on the importance of choosing colours to match the emotions they want to evoke in visitors (based on understandings of colour-emotion stereotypes and 'temperatures'). Colour can achieve harmonious interactions or cause rejection by the human brain, depending on its application. The survey of users revealed that almost half of the respondents counted colour in their top five emotional themes, whereas designers did not think it was as important as other emotive dimensions. Different cultures may respond differently to metaphorical images, colours, and dimensions such as power-distance and masculinity. Nearly all designers believed that empathising with target users (a part of emotional intelligence) was very important, as was involving users in the design process (user-centred design). Only 50% of users felt that designers were respecting their demographics and culture, so a large number of people feel they could be more satisfied in this sense. It is proposed that more user testing be carried out in conjunction with frameworks that rate cultural dimensions based on target audiences.
The use of video and streaming media was portrayed by previous non-empirical references as a proposition requiring careful consideration and application. Streaming video can connect with people on an emotional level, bringing in a degree of surprise and variation, and can fully highlight the appealing characteristics of the product(s) being sold online. Other New Media technologies such as virtual reality (VR) and 3D have been around for quite a while (in computer games and scientific applications) but are yet to achieve widespread usage in Website e-Commerce. Some literature is against the use of VR and 3D on the Web, but several companies have been receiving accolades in this area because of the ability to bridge an emotional gap between brands and consumers. Questionnaire results showed that most design respondents did not think streaming media, 3D and VR were important for gaining emotional connections. However, higher bandwidth speeds that will facilitate more use of streaming media and 3D are deemed favourable by designers in terms of increasing emotional appeal. The need for social contact, familiarity and recognition of expressions and gestures led to the proposition of using virtual shop assistants and agents. Contact in the form of live text chat can also fulfil some social needs, and plays a big part in portraying trustworthiness since a real person is being interacted with. Designers surveyed in this study were fairly evenly distributed amongst those in favour, unsure and in disagreement with the use of agents. Surprisingly, given that users would not have had much exposure to virtual agents and characters online, they actually deemed them amongst the highest-rated emotional design elements, creating a gap between user expectations and designer actions. The sources reviewed revealed that sound can account for a large part of an overall experience. Sound creates mood and atmosphere, and is present in the physical retail environment. Although the literature stresses the importance of sound to Web design, designers in this study were of quite the opposite view: sound was not deemed an important experience (near the bottom of the ranked emotional dimensions). Users, however, rated sound amongst the middle group of emotional elements. More use of sound is an opportunity for the future. Two broad product ranges, automobiles and computer games, were investigated to see what made them such emotion-centred items. Cars and games evoke feelings of pleasure, fun, flow and fantasy because of their design. Designers favoured interactivity, colour use and fun as the gaming elements best applied to Web design. More than half of designer respondents believed that the design of cars and games can be recontextualised into Web design, and most users were definitely in favour of seeing emotional elements they like about cars and games placed into Websites. Dimensions and potential mechanisms for measuring or assessing the emotional intelligence of Websites are proposed, including the use of semantic maps to position and compare Websites based on their performance against dimensions such as fun, warmth, trustworthiness, use of colour and the ability to engage users on a social level. The capability of building emotion into a Website is then balanced with the need for high-quality navigation, functionality and usability, as poor efforts in these 'rational' areas can lead to negative emotions and distrust.
The design also has to keep in line with the demands of the company wanting the Website built. This study was exploratory, with the aim of bringing out into the open some aspects of New Media e-Commerce design that could be better utilised in order to match the emotions and feelings of customers, potentially leading to higher degrees of sales success. It is therefore hoped that this thesis will be a catalyst for further study in this area.

Item
A distributed shop floor control system based on the principles of heterarchical control and multi agent paradigm : a dissertation presented in partial fulfilment of the requirements for a PhD degree in Production Technology - Computer Integrated Manufacturing (CIM) Systems at Massey University
(Massey University, 2004) Colak, Goran D
In progressive firms, major efforts are underway to reduce the time to design, manufacture, and deliver products. These programmes have a variety of objectives, from reducing lead time to increasing product quality. The process of improvement starts with customer requirements, which in turn lead to customer-driven manufacturing, incorporating customer requirements more directly into the manufacturing processes. Forecasting customer requirements has not become any easier; in fact, just the contrary. The implication is clear: if demands cannot be forecast, the manufacturing function must be designed to respond to them. To do this rapidly, more and more manufacturing decisions are being delegated to the factory floor. To paraphrase: the customer is saying what is to be made, the due date is now, and the workforce is figuring out how to do it online. As the manufacturing world moves toward the "zero everything" vision of the future (zero inventory, zero set-up time, zero defects, zero waste), fundamental changes will take place in the factory. These changes will necessitate changes in manufacturing planning and control systems, and particularly changes in planning and control at the shop floor level. This dissertation addresses the possible direction that some of these changes might take on the shop floor. The starting premise of this research is that forecasting is not possible in certain types of manufacturing systems; an example might be systems in which product orders arrive randomly, such as manufacturing facilities involved in the production of replacement spare parts. Additionally, in many other manufacturing systems, forecasting generates results with a very low level of certainty. On many occasions they are practically useless, since they are applicable only over short time horizons. As an example, small-quantity batch manufacturing systems usually operate under conditions where frequent disturbances make production unstable at all times. Addressing these systems, the main idea embodied in this dissertation could be expressed as follows: instead of focusing efforts on improving old methods, or developing new ones, for controlling material flows in manufacturing systems (methods solely based on the premise of predicting future circumstances), this research takes another course. It considers an alternative approach: developing manufacturing control mechanisms that are "more reactive" to changes in the system and "less dependent on prediction" of future events. It is believed that modern job shop manufacturing facilities, such as those mentioned above, can further increase their competitiveness by adopting the approaches to shop floor control discussed in this research study. This is because the proposed system is capable, dynamically and in real time, of promptly responding to frequent changes in production conditions, always attempting to find the best possible solution for the given circumstances.
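The contrast with forecast-driven control can be made concrete. The fragment below is a schematic sketch with invented names, not the dissertation's system: a machine agent chooses its next job only at the moment the machine falls free, reacting to the queue as it actually stands:

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class MachineAgent {
        record Job(String id, long dueDate) {}

        // Jobs waiting at this machine, ordered by earliest due date; the
        // ordering could equally be slack-based, or jobs could be allocated
        // by negotiation between agents (e.g. a contract-net protocol).
        private final PriorityQueue<Job> queue =
            new PriorityQueue<>(Comparator.comparingLong(Job::dueDate));

        public void jobArrived(Job job) {
            queue.add(job);      // react to orders as they arrive; no forecast needed
        }

        public Job machineFree() {
            return queue.poll(); // decide only at the moment a decision is needed
        }
    }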
Item
Developing an authoring environment for procedural task tutoring systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand
(Massey University, 1997) Smith, Shamus Paul
The use of computers in education is becoming more and more common as the price of technology drops and its general availability increases. Unfortunately, building computer-based tutoring systems is a difficult process, fraught with many problems. A significant problem in this area is the lack of reuse of system components between computer tutor developments. This means that each new system must be started from scratch, and mistakes from earlier projects can easily be repeated. A complementary difficulty is the variety of specialist skills required to build these systems: typical developers do not usually possess the combination of domain, cognitive science and programming knowledge that is needed to build computer tutors. One solution to these problems is the use of an authoring environment to facilitate the building of computer-based tutoring systems. This thesis presents an authoring tool for the construction of computer-based tutoring systems that teach procedural tasks in a discovery learning environment. TANDEM (Task ANd Domain Environment Model) provides tools for domain and task definition, sub-domain definition and a domain-independent tutoring engine. It is argued that such an environment can provide a non-expert user with access to advanced techniques from artificial intelligence research for knowledge acquisition and representation. Several tasks in the construction process have been automated, thus simplifying this activity. The use of sub-domain partitioning has been considered, and techniques for the integration of custom-built domain interfaces are described. It is also proposed that, by providing a domain-independent tutoring engine, reuse can be encouraged over numerous domains, which can reduce the development time required to build these systems.
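The reuse argument can be made concrete with a small sketch; the interface below is invented for illustration and is not TANDEM's actual API. The engine is written once against an abstract domain model, so each new tutor supplies only a concrete domain definition:

    import java.util.List;

    interface DomainModel {
        List<String> stepsFor(String task);          // procedural task definition
        boolean isCorrect(String task, int stepIndex, String answer);
    }

    class TutoringEngine {
        private final DomainModel domain; // swap domains without touching the engine

        TutoringEngine(DomainModel domain) { this.domain = domain; }

        void tutor(String task, List<String> learnerAnswers) {
            List<String> steps = domain.stepsFor(task);
            for (int i = 0; i < steps.size() && i < learnerAnswers.size(); i++) {
                String verdict = domain.isCorrect(task, i, learnerAnswers.get(i))
                    ? "ok" : "hint: revisit '" + steps.get(i) + "'";
                System.out.println("step " + (i + 1) + ": " + verdict);
            }
        }
    }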
Item
A comparison of the main methods for evaluating the usability of computer software : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Psychology at Massey University
(Massey University, 1992) Henderson, Ronald Derek
The aim of this thesis is to examine the dominant computer software usability evaluation methods. Four evaluation methods (logged data, questionnaire, interview, and verbal protocol analysis) were used to evaluate three different business software types (spreadsheet, word processor, and database) in a between-groups design involving 148 individuals of both genders. When each evaluation method was examined individually, the results tended to support findings from previous research. Comparisons were made to examine the efficiency of each evaluation method in terms of its ability to highlight usability problems, both between and within evaluation strategies. Here, support for the efficiency of the verbal protocol analysis method was found. The efficiency of using two evaluation methods together was also examined, where it was found that no significant improvement was obtained over verbal protocol analysis used by itself. A comparison addressing the practicality of using these methods was also conducted: it seems that each method has differing strengths and weaknesses depending on the stage of the evaluation. From these results, a theory of the effectiveness of evaluation strategies is presented. Suggestions for improving the commonly used methods are also made. The thesis concludes by discussing the software evaluation domain and its relationship to the wider evaluation context.
