Massey Documents by Type

Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294

Search Results

Now showing 1 - 8 of 8
  • Item
    Security analyses for detecting deserialisation vulnerabilities : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand
    (Massey University, 2021) Rasheed, Shawn
    An important task in software security is to identify potential vulnerabilities. Attackers exploit security vulnerabilities in systems to obtain confidential information, to breach system integrity, and to make systems unavailable to legitimate users. In recent years, particularly since 2012, there has been a rise in reported Java vulnerabilities. One type of vulnerability involves (de)serialisation, a commonly used feature for storing objects or data structures in an external format and restoring them. In 2015, a deserialisation vulnerability was reported in Apache Commons Collections, a popular Java library, which affected numerous Java applications. Another major deserialisation-related vulnerability, affecting 55% of Android devices, was also reported in 2015. Both vulnerabilities allowed malicious users to execute arbitrary code on vulnerable systems, a serious risk, and prompted the Java community to issue patches fixing serialisation-related vulnerabilities in both the Java Development Kit and libraries. Despite attention to coding guidelines and defensive strategies, deserialisation remains a risky feature and a potential weakness in object-oriented applications. In fact, deserialisation-related vulnerabilities (both denial-of-service and remote code execution) continue to be reported for Java applications. Further, deserialisation is a case of parsing, in which external data is converted from its external representation into a program's internal data structures; hence, similar vulnerabilities can be present in parsers for file formats and serialisation languages. The problem is, given a software package, to detect either injection or denial-of-service vulnerabilities and to propose strategies for preventing attacks that exploit them. The research reported in this thesis casts the detection of deserialisation-related vulnerabilities as a program analysis task. The goal is to automatically discover this class of vulnerabilities using program analysis techniques, and to experimentally evaluate the efficiency and effectiveness of the proposed methods on real-world software. We use multiple techniques to detect reachability to sensitive methods, and taint analysis to detect whether untrusted user input can result in security violations. Challenges in using program analysis to detect deserialisation vulnerabilities include addressing soundness issues when analysing dynamic features in Java (e.g., native code). Another hurdle is that available techniques mostly target the analysis of applications rather than library code. In this thesis, we develop techniques to address soundness issues in analysing Java code that uses serialisation, and we adapt dynamic techniques such as fuzzing to address precision issues in our analysis results. We also use the results of our analysis to study libraries in other languages and check whether they are vulnerable to deserialisation-type attacks. We then discuss mitigation measures that engineers can use to protect their software against such vulnerabilities. In our experiments, we show that we can find unreported vulnerabilities in Java code, and that these vulnerabilities are also present in widely used serialisers for popular languages such as JavaScript, PHP and Rust. In our study, we discovered previously unknown denial-of-service security bugs in applications and libraries that parse external data formats such as YAML, PDF and SVG.
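The "reachability to sensitive methods" idea above can be pictured with a small sketch: a breadth-first search over a call graph, from a deserialisation entry point to sensitive sink methods. This is a toy illustration only; the call-graph edges and method names below are invented (loosely echoing known Commons Collections gadget chains), whereas the thesis's analyses work on real Java bytecode.

```python
from collections import deque

# Hypothetical, hand-built call-graph fragment: each key is a caller, each
# value the methods it may invoke. A real analysis derives this from bytecode.
CALL_GRAPH = {
    "ObjectInputStream.readObject": ["Gadget.readObject"],
    "Gadget.readObject": ["Gadget.hashCode"],
    "Gadget.hashCode": ["LazyMap.get"],
    "LazyMap.get": ["Transformer.transform"],
    "Transformer.transform": ["Runtime.exec"],  # sensitive sink
}

SINKS = {"Runtime.exec", "ProcessBuilder.start"}

def reachable_sinks(entry: str) -> set:
    """Breadth-first search from a deserialisation entry point to sinks."""
    seen, queue, hits = {entry}, deque([entry]), set()
    while queue:
        method = queue.popleft()
        if method in SINKS:
            hits.add(method)
        for callee in CALL_GRAPH.get(method, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return hits

print(reachable_sinks("ObjectInputStream.readObject"))  # {'Runtime.exec'}
```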
  • Item
    Effective security analysis for combinations of MTD techniques on cloud computing : a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in Computer Science, Massey University
    (Massey University, 2019) Alavizadeh, Hooman
    Moving Target Defense (MTD) is an emerging security mechanism that can introduce a dynamic defensive layer for a given system by changing its attack surface. MTD techniques are useful for addressing security issues in cloud computing. They are classified into three main categories: Shuffle, Diversity, and Redundancy. Shuffle techniques rearrange the system's components (e.g., IP mutation); they confuse attackers by hardening the reconnaissance process and invalidating the information attackers have collected. Diversity techniques change the variants of a system's components (e.g., operating systems), which makes an attack more difficult and costly because attackers encounter a new set of vulnerabilities. Redundancy techniques increase the number of replicas of system components; they can be used to increase system dependability (e.g., reliability or availability) by providing redundant ways of delivering the same services when some components are compromised. Since deploying one MTD technique may affect the others, and each has different effects on the system (e.g., one may enhance security while another improves service availability), it is important to combine MTD techniques so that they support each other directly or indirectly. This research first conducts an extensive survey of the MTD literature to identify and summarize the key limitations of current MTD studies. We reveal that (i) there is a lack of investigation into combinations of MTD techniques, (ii) relatively little effort has been made to evaluate the effectiveness of MTD techniques using security analysis, and (iii) there is a shortage of validation of MTD techniques on realistic cloud testbeds. To address these limitations, we focus on the theoretical aspects of combining MTD techniques and provide a formalization for combining them. First, we combine Shuffle and Redundancy to find a trade-off between system risk and reliability. Then, we provide a formal mathematical definition for combining Shuffle and Diversity to increase security while narrowing the scope for potential attacks. We evaluate the effectiveness of the proposed combined techniques using Graphical Security Models (GSMs) incorporating various security metrics. We extend the combination of MTD techniques by including Redundancy alongside Shuffle and Diversity, and perform an in-depth analysis of this combination to find a trade-off between the security and the reliability of the cloud. We show that if these MTD techniques are combined properly, they not only improve the cloud's security posture but also increase its reliability. Moreover, we study economic metrics to show how MTD techniques can be deployed cost-effectively. We also propose an Optimal Diversity Assignment Problem (O-DAP) to find the optimal solution for deploying Diversity over the cloud. Finally, we design and develop an automated cloud security framework to evaluate the cloud security posture and deploy MTD techniques on a real cloud platform. We demonstrate the feasibility, adaptability, and usability of implementing MTD techniques on UniteCloud, a real private cloud platform.
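To make the Diversity-assignment idea concrete, here is a toy brute-force sketch of an O-DAP-style optimisation. Everything in it is an assumption for illustration, not the thesis's formulation: the VMs, the OS variants, the per-variant compromise probabilities, and the objective that the whole system falls only if every variant in use is exploitable.

```python
from itertools import product

# Hypothetical inputs: three VMs and per-variant compromise probabilities.
VMS = ["vm1", "vm2", "vm3"]
VARIANTS = {"linux": 0.3, "windows": 0.4, "bsd": 0.2}

def p_full_compromise(assignment: tuple) -> float:
    """Toy objective: assuming independent exploits per variant, the whole
    system falls only if every distinct variant in use is exploitable, so
    the probabilities of the distinct variants multiply."""
    prob = 1.0
    for variant in set(assignment):
        prob *= VARIANTS[variant]
    return prob

# Brute-force search over all assignments (fine at this toy scale); the
# optimum uses all three variants, which is the point of Diversity.
best = min(product(VARIANTS, repeat=len(VMS)), key=p_full_compromise)
print(dict(zip(VMS, best)), p_full_compromise(best))
```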
  • Item
    RanDeter : using novel statistical and physical controls to deter ransomware attacks : a thesis presented in partial fulfillment of the requirements for the degree of Master of Information Sciences in Software Engineering at Massey University, Auckland, New Zealand
    (Massey University, 2018) McIntosh, Timothy Raymond
    Crypto-ransomware is a type of extortion-based malware that encrypts victims' personal files with strong encryption algorithms and blackmails victims into paying a ransom to recover their files. The recurrent episodes of high-profile ransomware attacks like WannaCry and Petya, particularly on healthcare, government agencies and big corporates, have highlighted the immediate demand for effective defense mechanisms. In this thesis, RANDETER is introduced as a novel anti-crypto-ransomware solution that deters ransomware activities using novel statistical and physical controls inspired by police anti-terrorism practice. Police try to maintain public safety by maintaining a constant presence in key public areas, identifying suspects who exhibit out-of-the-ordinary characteristics, and restricting access to protected areas. Ransomware is in many ways like a terrorist: its attacks are unexpected, malicious, and aim for the largest number of victims. It is possible to detect and deter crypto-ransomware by maintaining constant surveillance on the potential victims: the MBR and user files, especially documents and photos. RANDETER is implemented as two compatible and complementary modules: PARTITION GUARD and FILE PATROL. PARTITION GUARD blocks modifications to the MBR area of the boot disk. FILE PATROL checks all file activities in directories protected by RANDETER against a list of Recognized Processes with Multi-Tier Security Rules. Upon detecting a violation of these rules, which FILE PATROL judges may have been initiated by crypto-ransomware, it freezes access to the monitored directories, terminates the offending processes, and then restores access to those directories. Our evaluation demonstrated that RANDETER ensured less, and often no, irrecoverable file damage by current ransomware families, while imposing lower disk performance overheads than existing anti-ransomware implementations like CRYPTOLOCK, SHIELDFS and REDEMPTION. In addition, RANDETER was shown to be resilient against masquerading attacks and ransomware polymorphism.
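A minimal sketch of the kind of rule check FILE PATROL performs, under loose assumptions: the recognized-process list, the single entropy-based statistical control, and the freeze/terminate/restore response below are simplified placeholders, not RANDETER's actual multi-tier rules or kernel-level enforcement.

```python
from dataclasses import dataclass

# Hypothetical allow-list of processes recognized as legitimate writers.
RECOGNIZED = {"winword.exe", "explorer.exe", "photoshop.exe"}

@dataclass
class FileEvent:
    process: str
    path: str
    action: str      # "read", "write", "rename", "delete"
    entropy: float   # bits/byte of written data; near 8.0 suggests ciphertext

def violates_rules(event: FileEvent) -> bool:
    # Physical control: unrecognized processes may not modify protected files.
    if event.process not in RECOGNIZED and event.action in {"write", "rename", "delete"}:
        return True
    # Statistical control: high-entropy rewrites of documents look like
    # encryption in progress.
    return event.action == "write" and event.entropy > 7.5

def handle(event: FileEvent) -> None:
    if violates_rules(event):
        print(f"freeze directory, terminate {event.process}, then restore access")

handle(FileEvent("evil.exe", "C:/Users/a/Docs/cv.docx", "write", 7.9))
```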
  • Item
    Strategies for resolving security and interference issues in 802.11 wireless computer networking : a thesis presented in partial fulfilment of the requirements for the degree of Masters in Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand
    (Massey University, 2006) Mendez, Gladwin
    This thesis presents the outcomes of research and development of strategies to improve 802.11 wireless networking security and reduce interference, together with an investigation into the trends of home users within the city limits of Palmerston North, New Zealand. The main contributions of the research are several types of improvement strategies that reduce interference and add additional layers of security to 802.11, along with reports on wireless trends. The thesis begins with an overview of the current 802.11 security protocols and related issues. The current state of 802.11 security is presented along with an assessment of its efficacy. Lastly, the motivations for improving security and reducing interference are explained. The main improvement presented in the thesis is client filtering. The operation of filtering is explained, and, using methods from other filtering protocols, it is shown how an additional layer of security can be added to 802.11. Following this, further improvements are presented that can be used with or without client filtering. The use of smart aerials, wizards and frequency-selective materials is discussed, and the advantages and disadvantages of each are highlighted, as are the aspects and issues of implementing the strategies on a home personal-computer-based platform. This is followed by a description of the experiments conducted into attenuation and direction sensing, whose results are presented along with a discussion. Finally, conclusions about the improvements are detailed and the results shown, in addition to research conducted on the trends of 802.11 users to further highlight the need for this research.
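The abstract does not spell out the client-filtering mechanism, so the following is only a generic illustration of the idea: accept association requests only from an allow-list of client identifiers. The addresses are made up, and since MAC addresses alone can be spoofed, filtering of this kind can only ever be one additional layer on top of 802.11 authentication.

```python
# Hypothetical allow-list of client MAC addresses (lowercase for comparison).
ALLOWED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def filter_association(mac: str) -> bool:
    """Return True if the association request should be accepted."""
    return mac.lower() in ALLOWED_MACS

for mac in ("00:1A:2B:3C:4D:5E", "de:ad:be:ef:00:01"):
    print(mac, "accepted" if filter_association(mac) else "rejected")
```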
  • Item
    Building in web application security at the requirements stage : a tool for visualizing and evaluating security trade-offs : a thesis presented in partial fulfilment of the requirements for the degree of Master of Information Science in Information Systems at Massey University, Albany, New Zealand
    (Massey University, 2007) Nehring, Natalia Alekseevna
    One dimension of Internet security is web application security. The purpose of this design-science study was to design, build and evaluate a computer-based tool to support security vulnerability and risk assessment in the early stages of web application design. The tool facilitates risk assessment by managers and helps developers to model security requirements using an interactive tree diagram. The tool calculates residual risk for each component of a web application, and for the application overall, so developers are provided with better information for deciding which countermeasures to implement given limited resources for doing so. The tool supports a proactive approach of building in web application security at the requirements stage, as opposed to the more common reactive approach of putting countermeasures in place after an attack and loss have been incurred. The primary contribution of the proposed tool is its ability to make known security-related information (e.g. known vulnerabilities, attacks and countermeasures) more accessible to developers who are not security experts, and to translate a lack of security measures into an understandable measure of relative residual risk. The latter is useful for managers who need to prioritize security spending. Keywords: web application security, security requirements modelling, attack trees, threat trees, risk assessment.
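As a rough illustration of computing residual risk over an attack tree, here is a toy recursion. The aggregation rules (OR nodes take the maximum child risk, AND nodes multiply child risks) and every number below are assumptions made for the sketch, not the tool's actual formulas.

```python
import math

def residual_risk(node: dict) -> float:
    """Residual risk of an attack-tree node after countermeasures.
    Leaves carry an inherent risk scaled down by their mitigation factor;
    OR gates take the worst child, AND gates require all children."""
    if "risk" in node:  # leaf
        return node["risk"] * (1 - node.get("mitigation", 0.0))
    children = [residual_risk(child) for child in node["children"]]
    return max(children) if node["gate"] == "OR" else math.prod(children)

# Toy tree: two ways to compromise a login component.
login = {"gate": "OR", "children": [
    {"risk": 0.8, "mitigation": 0.9},  # SQL injection, mitigated by escaping
    {"risk": 0.5, "mitigation": 0.0},  # credential stuffing, unmitigated
]}
print(residual_risk(login))  # 0.5: the unmitigated path dominates
```

The point of such a calculation, as the abstract says, is that an unmitigated branch keeps the component's overall residual risk high no matter how well other branches are defended, which tells managers where the next dollar of countermeasure spending does the most good.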
  • Item
    Maximising the effectiveness of threat responses using data mining : a piracy case study : this thesis presented in partial fulfillment of the requirements for the degree of Master of Information Sciences in Information Technology, School of Engineering and Advanced Technology at Massey University, Albany, Auckland, New Zealand
    (Massey University, 2015) Lee, Seung Jun
    Companies with limited budgets must decide how best to defend against threats. This thesis presents and develops a robust approach to grouping together the threats that present the highest (and lowest) risk, using film piracy as a case study. Techniques like cluster analysis can be used effectively to group sites based on a wide range of attributes, such as income earned per day and estimated worth. The attributes of high-earning and low-earning websites can also give useful insight into policy options that might be effective in reducing the earnings of pirate websites. For instance, are all low-value sites based in a country with effective internet controls? Practical data mining techniques such as decision trees or classification trees can help rightsholders to interpret these attributes. The purpose of analysing the data was to answer the thesis's three main research questions. It was found that, as predicted, there were two natural clusters of the most complained-about sites (high income and low income). This means that rightsholders should focus their efforts and resources only on high-income sites and ignore the others. It was also found that the main significant factors, or key critical variables, for separating high-income from low-income rogue websites included daily page-views, the number of internal and external links, social media shares (i.e. social network engagement), and elements of the page structure, including HTML page and JavaScript sizes. Further research should investigate why these factors were important in driving website revenue higher. For instance, why is high revenue associated with smaller HTML pages and less JavaScript? Is it because the pages are simply faster to load? A similar pattern is observed with the number of links. These results could form a study looking into what attributes make e-commerce successful more broadly. It is important to note that this was a preliminary study looking only at the top 20 rogue websites suggested by the Google Transparency Report (2015). Whilst these account for the majority of complaints, a different picture may emerge if more sites were analysed, and/or if they were selected on different criteria, such as time period, geographic location, content category (software versus movies, for example), and so on. Future research should also extend the clustering technique to other security domains.
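The two-cluster finding lends itself to a short sketch of the cluster-analysis step. The feature matrix below is synthetic stand-in data shaped like the attributes the study names (daily page-views, link counts, social shares, HTML size); none of the numbers come from the thesis, and the thesis does not say which clustering algorithm it used, so k-means here is just one standard choice.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: rows are websites, columns are
# [daily page-views, link count, social shares, HTML size in KB].
X = np.array([
    [90_000, 400, 120,  45],
    [85_000, 380, 150,  40],
    [ 1_200, 900,   4, 210],
    [   900, 850,   6, 230],
])

# Standardize first so page-views do not dominate the distance metric,
# then split the sites into two clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
print(labels)  # e.g. [0 0 1 1]: a high-traffic and a low-traffic cluster
```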
  • Item
    Detection and classification of malicious network streams in honeynets : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand
    (Massey University, 2013) Abbasi, Fahim U H
    Variants of malware and exploits are emerging on the global canvas at an ever-increasing rate. There is a need to automate their detection by observing their malicious footprints in network streams. Misuse-based intrusion detection systems alone cannot cope with the dynamic nature of the security threats faced today by organizations globally, nor can anomaly-based systems and models that rely solely on packet header information without considering the payload or content. In this thesis we approach intrusion detection as a classification problem and describe a system that uses exemplar-based learning to correctly classify known classes of malware and their variants with supervised learning techniques, and to detect novel or unknown classes with unsupervised learning techniques. This is facilitated by an exemplar selection algorithm that selects the most suitable exemplars and their thresholds for any given class, and by novelty detection and classification algorithms capable of detecting, learning and classifying unknown malicious streams into their respective novel classes. The similarity between malicious network streams is determined by a proposed technique that uses string and information-theoretic metrics to evaluate the relative similarity, or level of maliciousness, between different categories of malicious network streams. This is measured by quantifying sections of analogous information, or entropy, between incoming network streams and reference malicious samples. Honeynets are deployed to capture these malicious streams and create labelled datasets. Clustering and classification methods are used to cluster similar groups of streams from the datasets. The technique is then evaluated on a large dataset, and the correctness of the classifier is verified using area under the receiver operating characteristic curve (ROC AUC) measures across various string-metric-based classifiers. Different clustering algorithms are also compared and evaluated on a large dataset. The outcomes of this research can be applied to help existing intrusion detection systems (IDS) detect and classify known and unknown malicious network streams using information-theoretic and machine-learning-based approaches.
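One widely used information-theoretic similarity measure of the kind described here is the normalized compression distance (NCD), which approximates shared information content by how well two streams compress together. The sketch below is a generic illustration with made-up byte streams; it is not necessarily the exact metric the thesis proposes.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: lower values mean the two streams
    share more structure (compressing them together saves more space)."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Made-up examples: two shellcode-like variants versus an unrelated stream.
variant_a = b"\x90" * 64 + b"exec /bin/sh"
variant_b = b"\x90" * 60 + b"exec /bin/bash"
benign = bytes(range(256)) * 4

print(ncd(variant_a, variant_b))  # lower: likely variants of one class
print(ncd(variant_a, benign))     # higher: unrelated streams
```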
  • Item
    Ontological lockdown assessment : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Information Technology at Massey University, Palmerston North, New Zealand
    (Massey University, 2008) Steele, Aaron
    To keep shared-access computers secure and stable, system administrators resort to locking down the computing environment to prevent intentional and unintentional damage by users. Skilled attackers are often able to break out of locked-down computing environments and intentionally misuse shared-access computers. This misuse has resulted in cases of mass identity theft and fraud, some with estimated costs running into the millions. To determine whether it is possible to break out of a locked-down computing environment, an assessment method is required. Although a number of vulnerability assessment techniques exist, none of them is sufficient for assessing locked-down shared-access computers, because the existing techniques focus on traditional, application-specific software vulnerabilities. Break-out path vulnerabilities (which attackers exploit to break out of locked-down environments) differ substantially from traditional vulnerabilities and, as a consequence, are not easily discovered using existing techniques. Ontologies are a modelling technique that can capture expert knowledge about a domain of interest, and the method for discovering break-out paths in locked-down computers can be considered expert knowledge in the domain of shared-access computer security. This research therefore proposes an ontology-based assessment process, called the ontological lockdown assessment process, for discovering break-out path vulnerabilities in locked-down shared-access computers. The process is implemented against a real-world system and successfully identifies numerous break-out path vulnerabilities.
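As a loose illustration of the break-out-path idea, consider encoding expert knowledge as "state X offers an escape to state Y" facts and enumerating paths to an unrestricted state. The environment, actions and goal below are invented placeholders; the thesis's actual ontology and assessment process are far richer than this toy graph search.

```python
# Toy knowledge base: facts relating lockdown states to the states an
# attacker can reach from them. All names are hypothetical examples.
ESCAPES = {
    "kiosk_browser": ["file_open_dialog", "help_viewer"],
    "file_open_dialog": ["address_bar_shell"],
    "help_viewer": ["print_dialog"],
    "address_bar_shell": ["command_prompt"],
    "print_dialog": [],
}

def break_out_paths(state, goal, path=()):
    """Depth-first enumeration of break-out paths encoded in the knowledge base."""
    path = path + (state,)
    if state == goal:
        yield path
    for nxt in ESCAPES.get(state, []):
        if nxt not in path:  # avoid cycles
            yield from break_out_paths(nxt, goal, path)

for p in break_out_paths("kiosk_browser", "command_prompt"):
    print(" -> ".join(p))
# kiosk_browser -> file_open_dialog -> address_bar_shell -> command_prompt
```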