Massey Documents by Type
Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294
Search Results (4 items)
Item
A fuzzy and wavelet-based image compression algorithm : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at Massey University, Albany, New Zealand (Massey University, 2006)
Huang, Li

Nowadays, the Internet and digital images are widely used in industry, commerce, the military, transport and many other fields. This widespread use has created a demand for shorter transmission times and less storage space. Image compression addresses this problem by reducing the amount of data required to represent a digital image, so that the compressed image still satisfies transmission and storage requirements. As new technologies are adopted in image processing, image compression likewise requires new techniques to achieve high compression ratios and better image quality; a new standard has therefore been developed by the Joint Photographic Experts Group (JPEG). Apart from JPEG, other algorithms have been developed for image compression, notably the EZW, SPIHT and VQ algorithms. However, they all involve highly complex coefficient calculations, and compressing a still image consequently takes too much time. In light of these problems, this thesis introduces a new method that reduces the number of coefficients while retaining the important detail in the image, by employing a fuzzy logic technique, and then uses the Huffman or LZW algorithm to complete the compression. The algorithm developed in this research, called the IWF algorithm, is based on four key techniques: 1) a wavelet transform for decomposition, which allows lossless and lossy compression to be combined with a high compression rate and good image quality; 2) quantization, which maps a range of values to a single quantum value, so that reducing the number of discrete symbols in a given stream makes the stream more compressible (this step in the IWF process is a lossy transformation); 3) adaptation of fuzzy logic techniques, which process the wavelet coefficients so that coefficients in the high sub-bands share the same value; and 4) adaptation of lossless data compression techniques. Keywords: Image Compression, Fuzzy Logic, Wavelet Transforms, Decomposition, Haar Wavelet Transform.
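The IWF pipeline above combines wavelet decomposition, quantization, fuzzy coefficient handling and lossless entropy coding. The thesis's own code is not part of this record, so the following Python sketch illustrates only the first two stages under stated assumptions: a single-level 2-D Haar decomposition followed by uniform scalar quantization. The function names and the step size are illustrative choices, not the IWF algorithm itself.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet decomposition.

    Returns the four sub-bands: LL (approximation) plus LH, HL, HH
    (detail). Assumes the image has even height and width.
    """
    a = img.astype(np.float64)
    # Rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: the same averaging/differencing on each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def quantise(band, step):
    """Uniform scalar quantisation: map a range of values to a single
    quantum value, the lossy step described in the abstract."""
    return np.round(band / step).astype(np.int32)

# Toy usage on an 8x8 gradient "image".
img = np.arange(64).reshape(8, 8)
ll, lh, hl, hh = haar2d_level(img)
q_hh = quantise(hh, step=4.0)  # a coarser step leaves fewer distinct symbols
```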
Item
Exploring the implementation of JPEG compression on FPGA : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Electronics and Computer Systems Engineering at Massey University, Palmerston North, New Zealand (Massey University, 2012)
De Silva, Ann Malsha

This thesis presents an implementation of JPEG compression on a Field Programmable Gate Array (FPGA) as the data are streamed from a camera. The goal was to minimise both the FPGA logic resources used and the latency at each stage of the JPEG compression. The modules of these architectures are fully pipelined to enable continuous operation on streamed data. The designed architectures are detailed in this thesis; they were described in Handel-C, and the correctness of each JPEG module implemented in Handel-C was validated using MATLAB. Simplifying the arithmetic in hardware introduced small differences between the software- and hardware-compressed images, but there was no discernible difference in image quality between them. The JPEG compression algorithm was successfully implemented and tested on an Altera DE2-115 development board. Improvements were made by minimising latency and increasing performance. The final implementation also showed that implementing the quantisation module as a three-stage pipeline and using FPGA multipliers for the 1D-DCT and 2D-DCT can significantly reduce the logic resources required and increase the processing speed. The proposed JPEG compressor architecture has a latency of 114 clock cycles plus 7 image rows and a maximum clock speed of 55.77 MHz. The results obtained from this implementation were very satisfactory.
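The thesis maps the standard JPEG stages onto pipelined hardware. As a plain software reference for what the core transform stage computes (in the spirit of the MATLAB validation mentioned above, not the Handel-C design itself), the sketch below applies an 8x8 2-D DCT and quantisation with the baseline JPEG luminance table. The separable form C @ X @ C.T is also why hardware designs can use two 1-D DCT stages with a transpose buffer in between.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

# Baseline JPEG luminance quantisation table (Annex K of the JPEG spec).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def jpeg_block_forward(block):
    """Level-shift an 8x8 block, take its 2-D DCT, then quantise.

    The 2-D DCT is separable: a 1-D DCT on rows followed by a 1-D DCT
    on columns, expressed here as C @ X @ C.T.
    """
    c = dct_matrix(8)
    coeffs = c @ (block.astype(np.float64) - 128.0) @ c.T
    return np.round(coeffs / Q_LUMA).astype(np.int32)

block = np.full((8, 8), 130)      # a nearly flat block
print(jpeg_block_forward(block))  # only the DC coefficient survives quantisation
```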
Item
Synthetic test patterns and compression artefact distortion metrics for image codecs : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Palmerston North, New Zealand (Massey University, 2009)
Punchihewa, Amal

This thesis presents a test methodology framework for assessing the spatial-domain compression artefacts produced by image codecs and intra-frame coded video codecs. Few researchers have studied this broad range of artefacts. A taxonomy of image and video compression artefacts is proposed, based on the point of origin of the artefact in the image communication model. The thesis presents an objective evaluation, using synthetic test patterns, of the distortions known as artefacts that arise from image and intra-frame coded video compression. The American National Standards Institute document ANSI T1.801 qualitatively defines the blockiness, blur and ringing artefacts; these definitions have been augmented with quantitative definitions formulated in conjunction with the proposed test patterns. A test and measurement environment is proposed in which the codec under test is exercised with a portfolio of test patterns, each designed to highlight the artefact under study. Algorithms have been developed to detect and measure the individual artefacts based on their respective characteristics. Since the spatial content of the original test patterns forms known structural detail, the artefact distortion metrics are clean and swift to calculate. The distortion metrics are validated using a modern image quality metric inspired by the human vision system. Blockiness, blur and ringing artefacts are evaluated for representative codecs using the proposed synthetic test patterns. Colour bleeding due to image and video compression is also discussed, and both qualitative and quantitative definitions of the colour bleeding artefact are introduced. The image reproduction performance of several codecs was evaluated to ascertain the utility of the proposed metrics and test patterns.
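The thesis's quantitative metric definitions live in the thesis itself and in ANSI T1.801, so the Python sketch below is only a generic illustration of the underlying idea for one artefact, blockiness: compare luminance jumps that fall exactly on the 8x8 block grid with jumps one pixel off the grid. The specific formula and the synthetic pattern are illustrative assumptions, not the author's metric or test patterns.

```python
import numpy as np

def blockiness(decoded, block=8):
    """Crude blockiness score: mean absolute luminance jump across
    vertical block-grid boundaries, minus the same statistic one pixel
    inside the grid. Near 0 for a clean image, > 0 for visible blocking.
    """
    img = decoded.astype(np.float64)
    left = img[:, block - 1::block]   # columns just left of each boundary
    right = img[:, block::block]      # columns just right of each boundary
    m = min(left.shape[1], right.shape[1])
    on_grid = np.abs(right[:, :m] - left[:, :m]).mean()
    a = img[:, block::block]          # the same jump, shifted one pixel
    b = img[:, block + 1::block]
    m = min(a.shape[1], b.shape[1])
    off_grid = np.abs(b[:, :m] - a[:, :m]).mean()
    return on_grid - off_grid

# A flat synthetic test pattern, then a simulated blocking artefact:
flat = np.full((64, 64), 100.0)
blocky = flat.copy()
blocky[:, 32:] += 8.0             # a luminance step exactly on the block grid
print(blockiness(flat))           # ~0.0
print(blockiness(blocky))         # > 0: the step is flagged as blockiness
```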
Item
Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand (Massey University, 2007)
Zhu, Jihai

Image coding plays a key role in multimedia signal processing and communications. JPEG2000, the latest image coding standard, uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance but high complexity. The need to reduce this complexity while maintaining performance similar to EBCOT has inspired a significant amount of research activity in the image coding community. Within the development of image compression techniques based on wavelet transforms, the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding. The SPIHT algorithm achieves similar performance to EBCOT, but with fewer features. Another very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the development of the wavelet transform is reviewed, and implementation issues for wavelet transforms are discussed. The four main coding methods mentioned above are studied in detail and, more importantly, the factors that affect coding efficiency are identified. The main contribution of this research is a new low-complexity coding algorithm for image compression based on wavelet transforms, built on block dividing coding (BDC) with an optimised packet assembly. Extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, although a narrow gap remains in lossy coding situations.
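BDC's optimised packet assembly is detailed in the thesis itself; the sketch below illustrates only the generic set-partitioning idea that block-based wavelet coders such as SBHP share: recursively quarter a block of quantised coefficients and spend a single bit on any quadrant that is entirely insignificant. The encoding details here are illustrative assumptions, not the author's BDC algorithm.

```python
import numpy as np

def encode_significance(block, threshold, bits):
    """Recursively divide a square block of quantised wavelet
    coefficients. Emit 0 for a quadrant with no coefficient at or above
    the threshold (one bit covers the whole region), or 1 followed by
    recursive detail otherwise. This is why sparse high-frequency
    sub-bands cost very few bits.
    """
    if np.max(np.abs(block)) < threshold:
        bits.append(0)             # whole region insignificant: one bit
        return
    bits.append(1)
    n = block.shape[0]
    if n == 1:                     # a single significant coefficient: emit its sign
        bits.append(0 if block[0, 0] >= 0 else 1)
        return
    h = n // 2
    for quad in (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]):
        encode_significance(quad, threshold, bits)

# A sparse 8x8 "sub-band": one large coefficient, everything else zero.
band = np.zeros((8, 8), dtype=int)
band[1, 2] = 37
bits = []
encode_significance(band, threshold=32, bits=bits)
print(len(bits), "bits instead of 64 raw significance flags")
```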