Massey Documents by Type
Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294
Search Results
11 results
Item
Lens distortion correction by analysing the shape of patterns in Hough transform space : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Electronics and Computer Engineering at Massey University, Manawatu, New Zealand (Massey University, 2018) Chang, Yuan
Many low-cost, wide-angle lenses suffer from lens distortion resulting from a radial variation in the lens magnification. As a result, straight lines, particularly those in the periphery, appear curved. The Hough transform is a commonly used technique for detecting linear features within an image, and in Hough transform space straight lines and curved lines produce differently shaped peaks. This thesis proposes a lens distortion correction method, named SLDC, based on analysing the shape of patterns in Hough transform space. It works by reconstructing the distorted line from significant points on the smile-shaped Hough pattern. It then optimises the distortion parameter by mapping the reconstructed curved line onto a straight line and minimising the RMSE. In both simulations and the correction of real-world images, SLDC produces encouraging results.
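The final optimisation step lends itself to a short illustration. Below is a minimal Python sketch of that step, assuming a single-parameter division model for the radial distortion and measuring straightness as the RMSE of perpendicular distances to a best-fit line; the thesis's actual distortion model and its Hough-space pattern analysis are not reproduced here, and the function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def undistort(points, k, centre):
    # Single-parameter division model (an assumption; the thesis's exact
    # radial model may differ): p_u = centre + (p_d - centre) / (1 + k*r^2).
    d = points - centre
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return centre + d / (1.0 + k * r2)

def straightness_rmse(points):
    # RMSE of perpendicular distances to the best-fit line through the points.
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return np.sqrt(np.mean((centred @ vt[-1]) ** 2))  # vt[-1]: line normal

def estimate_k(curved_line, centre, bounds=(-1e-6, 1e-6)):
    # Choose the k that maps the reconstructed curved line closest to straight.
    res = minimize_scalar(
        lambda k: straightness_rmse(undistort(curved_line, k, centre)),
        bounds=bounds, method="bounded")
    return res.x
```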
Item
Tracking vertebrae in cinematic fluoroscopic X-rays : a thesis presented in fulfilment of the requirements for the degree of Master of Technology at Massey University (Massey University, 1992) Long, Samantha Robyn
This thesis concerns the evaluation of an image subtraction statistic used by a prototype chiropractic image processing package to track spinal movement. The image subtraction statistic is calculated by summing the absolute differences in pixel intensity between two images. The thesis also includes a brief discussion of different tracking methods and a literature search for alternative statistics that may suit this image type (low contrast and noisy).
In summary, the experimental work concluded that inter-frame rotation does not have a significant effect on the performance of the image subtraction statistic when tracking from one frame to the next, but when tracking from a particular frame to one significantly later in the sequence, rotation must be included in the algorithm. It was also found that discretisation of the image had a detrimental effect on performance; this can be compensated for by adding a sub-pixel location calculation to the algorithm. In the original prototype, a median filter (rank 5) was used to smooth the noise in the image to be searched; this was found to have a marginal effect on the performance of the statistic. Many of the algorithms presently described in the literature were found to be unsuitable for this application, as they tracked clearly defined lines or searched for a two-dimensional shape matching a predefined three-dimensional model. An algorithm that may prove a suitable alternative compares the rate of change in intensity across a window, and so is based on locating a change-of-intensity pattern rather than a pixel-to-pixel comparison. Some features could be added to make the tracking procedure more efficient (the two-dimensional logarithmic search) and to safeguard against points incrementally deviating from the correct location as tracking progresses (referencing a mouse-selected frame, using the rigid-body property of the vertebra). The benefit of incorporating the safeguard features would have to be weighed against the cost of extra computation time.
In conclusion, the image subtraction technique can be improved from, in some cases, total tracking loss to accuracy within two pixels of the correct location. This is achieved by tracking inter-frame, that is, from one frame to the next in the video sequence, and by including a sub-pixel location calculation.
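The statistic itself can be stated in a few lines. The following Python sketch computes it over an exhaustive search window and adds a parabolic sub-pixel refinement of the kind the thesis recommends; the search radius and all function names are illustrative assumptions, not the prototype's actual code.

```python
import numpy as np

def sad(a, b):
    # The image subtraction statistic: sum of absolute pixel-intensity differences.
    return np.sum(np.abs(a.astype(float) - b.astype(float)))

def track(template, frame, centre, radius=8):
    # Exhaustive search around `centre`; the two-dimensional logarithmic
    # search mentioned in the abstract would visit far fewer positions.
    h, w = template.shape
    cy, cx = centre
    best_cost, best_pos = np.inf, centre
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0:
                continue  # search window ran off the image edge
            patch = frame[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue
            cost = sad(template, patch)
            if cost < best_cost:
                best_cost, best_pos = cost, (y, x)
    return best_pos

def subpixel_offset(c_minus, c_zero, c_plus):
    # Parabolic interpolation through three cost samples straddling the
    # minimum: one way to add the sub-pixel location calculation.
    denom = c_minus - 2.0 * c_zero + c_plus
    return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom
```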
Item
The effects of anti-aliasing filters on system identification : a thesis presented in partial fulfilment of the requirements for the degree of Master of Technology in Production Technology Department at Massey University (Massey University, 1992) Sadr-Nia, Mohammad Ali
Research was conducted to determine the effect of anti-aliasing filters on the identification of dynamic systems. Systems were simulated in the continuous simulation package ESL. The system response to a PRBS (pseudo-random binary sequence) was recorded; simulated noise was added and passed through a number of simulated analog filters. The systems were identified using the MATLAB identification toolbox. Two standard filters (Butterworth and Chebyshev) were used with cut-off frequencies between ω_s (the natural frequency of the system) and 20ω_s. Results showed that carefully designed filters could improve the performance of the identification algorithm in the presence of non-white, high-frequency additive noise. However, for noise-free measurements, the filters degraded the performance of the identification algorithms. This performance could be observed in the identified models' steady-state error, overshoot and settling time when subjected to a step input. In the experiments performed, the lowest-order (and in one case second-order) filters with a cut-off frequency of ω_n = 5ω_s gave the best results. [From Summary]
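The experimental setup is easy to mirror in a few lines. The sketch below uses scipy.signal in place of ESL and the MATLAB toolbox: it excites an assumed second-order test system with a PRBS, adds noise, and applies Butterworth and Chebyshev anti-aliasing filters at the 5ω_s cut-off the thesis found best. The register length, damping ratio, noise level and sampling rate are all illustrative assumptions.

```python
import numpy as np
from scipy import signal

def prbs(n, seed=0b1010011):
    # Maximal-length PRBS from a 7-bit LFSR (taps at bits 7 and 6);
    # the register length is an assumption, not taken from the thesis.
    reg, out = seed, []
    for _ in range(n):
        bit = ((reg >> 6) ^ (reg >> 5)) & 1
        reg = ((reg << 1) | bit) & 0x7F
        out.append(1.0 if reg & 1 else -1.0)
    return np.array(out)

fs, f_s = 100.0, 1.0            # sampling rate and natural frequency, Hz
w_s = 2 * np.pi * f_s           # natural frequency in rad/s
u = prbs(2048)
t = np.arange(len(u)) / fs

# Assumed second-order test system with damping ratio 0.3.
sys = signal.TransferFunction([w_s ** 2], [1.0, 2 * 0.3 * w_s, w_s ** 2])
_, y, _ = signal.lsim(sys, u, t)
y_noisy = y + 0.1 * np.random.randn(len(y))

# Anti-aliasing filters with the 5*w_s cut-off reported as best.
b_bw, a_bw = signal.butter(2, 5 * f_s, fs=fs)        # Butterworth
b_ch, a_ch = signal.cheby1(2, 1, 5 * f_s, fs=fs)     # Chebyshev, 1 dB ripple
y_bw = signal.lfilter(b_bw, a_bw, y_noisy)
y_ch = signal.lfilter(b_ch, a_ch, y_noisy)
```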
Item
DNA sequence reading by image processing : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University (Massey University, 1993) Fan, Baozhen
The research described in this thesis is the development of a DNA sequence reading system. Macromolecular sequences of DNA are the encoded form of the genetic information of all living organisms, and DNA sequencing has therefore played a significant role in the elucidation of biological systems. DNA sequence reading is one part of DNA sequencing. This project reads DNA sequences directly from DNA sequencing gel autoradiographs within a general-purpose image processing system. The software was developed using the waterfall approach combined with exploratory programming, with requirements analysis, software design, detailed design, implementation, system testing and maintenance as the basic development stages. The feedback from implementation and system testing back to detailed design is much stronger in image processing than in much other software development.
After an image is captured from a gel autoradiograph, the background of the image is normalised and the contrast is enhanced. The captured image consists of several lane sets of bands, each representing one part of a DNA sequence. The lane sets are separated automatically into subimages to be read individually, with the gap lines between the lane sets detected for separation. Geometric distortions are corrected by finding the boundaries of the lane set in the subimage: the left boundary is used to straighten the lane set and the right boundary to warp it to a standard width. If automatic separation of the lane sets or geometry correction is unsuccessful, manual selection is used. After the band features are enhanced, the individual bands are extracted and their positions determined. The band positions are then converted into the order of the DNA sequence, and the parts of the sequence read from the subimages are merged into a longer sequence.
In most cases, the individual lane sets in a captured image can be separated automatically; manual processing is needed where the lane sets are too close. The system may reach an accuracy of 98% if the bands are clear, and manually checking and correcting the detected bands helps to obtain a reliable sequence. If a lane set on the autoradiograph is indistinct, or its bands are too close together, accuracy is reduced, in extreme cases to the point where the lane set is unreadable. For a 512x512 image captured from a gel autoradiograph, preprocessing takes 90 seconds and processing each subimage takes 40 seconds on a 33 MHz 486 PC. Processing a 430x350 mm autoradiograph with 16 lane sets, assuming 6 captured images are required, takes about 40 minutes.
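The lane-set separation step can be illustrated with a column-projection heuristic. The sketch below assumes dark bands on a light background, so gaps between lane sets appear as runs of near-maximum-brightness columns; it is a plausible reconstruction of the idea, not the thesis's algorithm, and `split_lane_sets` is a hypothetical name.

```python
import numpy as np
from scipy import ndimage

def split_lane_sets(gel, smooth=5, frac=0.9):
    # Average intensity of each image column; gaps between lane sets show
    # up as runs of near-maximum brightness (dark bands, light background).
    profile = ndimage.uniform_filter1d(gel.astype(float).mean(axis=0), smooth)
    is_gap = profile > frac * profile.max()
    # Locate each run of gap columns and cut at its midpoint.
    d = np.diff(np.concatenate(([0], is_gap.astype(int), [0])))
    starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
    cuts = [0] + [(s + e) // 2 for s, e in zip(starts, ends)] + [gel.shape[1]]
    # Keep only slices wide enough to hold a lane set.
    return [gel[:, a:b] for a, b in zip(cuts[:-1], cuts[1:]) if b - a > smooth]
```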
Item
TreeScan V and frame mosaicing : thesis submitted in partial fulfilment of the requirements of the Masterate Degree in Information Engineering, Department of Production Technology, Massey University (Massey University, 1997) Nourozi, Farshad
In 1993 the Department of Production Technology carried out a feasibility study of applying digital imaging technology to pre-harvest inventory assessment for the forestry industry. Consequently, a scanning mechanism was developed to capture a series of overlapping images along the stem of a tree. These overlapping images needed to be registered and combined to form a single long, thin, high-resolution image of the tree. This report describes different methods of finding the overlaps between consecutive images. The algorithms developed fall into two broad categories: spatial-domain and frequency-domain feature matching. The algorithms are compared, and the advantages and disadvantages of each are discussed. Finally, a robust algorithm is developed that combines the strengths of the others.
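Phase correlation is the classic frequency-domain overlap estimator, so it serves as a concrete example of the second category; the report may well have used a different variant. A minimal sketch:

```python
import numpy as np

def phase_correlation(a, b):
    # Normalised cross-power spectrum: its inverse FFT peaks at the
    # translation of frame b relative to frame a.
    spec = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    spec /= np.abs(spec) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(spec).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint wrap around to negative shifts.
    dy = peak[0] - a.shape[0] if peak[0] > a.shape[0] // 2 else peak[0]
    dx = peak[1] - a.shape[1] if peak[1] > a.shape[1] // 2 else peak[1]
    return dy, dx
```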
Item
Development of a low-cost automated sample presentation and analysis system for counting and classifying nematode eggs : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University, Manawatu, New Zealand (Massey University, 2017) Pedersen, Benjamin
This thesis discusses the concept development and design of a low-cost, automated sample presentation system for faecal egg counting and classification. The system uses microfluidics to present nematode eggs for digital imaging, producing images suitable for image analysis and classification. System costs are kept low by using simple manufacturing methods and commonly available equipment to produce microfluidic counting chambers, which can be interfaced with conventional microscopes. The thesis includes details of the design and implementation of the software developed to capture and process images from the presentation system, and of the measures taken to correct for the optical aberrations the presentation system introduces.

Item
The development of a portable Earth's field NMR system for the study of Antarctic sea ice : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Electronics at Massey University (Massey University, 2001) Dykstra, Robin
A portable Nuclear Magnetic Resonance (NMR) spectrometer based on digital signal processor (DSP) technology has been developed and applied to the study of the structure of Antarctic sea ice. The portability of this system means that external sources of noise can be minimised and remote sites can be investigated. A new sea-ice probe has been developed in conjunction with the spectrometer, allowing in-situ measurement of water content, relaxation times and self-diffusion. The new probe minimises disturbances to the sea-ice sample, which have been a problem with previous techniques. The core of the spectrometer is a Motorola DSP56303 DSP, which controls the NMR experiment under the supervision of a host computer, in this case a laptop PC. Communication between host and DSP is via either a PCMCIA card or a USB interface. DSP software runs the experiment, controls acquisition and performs digital filtering of the NMR data before sending it to the PC for analysis and display. The flexibility of the DSP-based core means that this system could be adapted to other control applications with relative ease.

Item
The development of a Java image processing framework : a thesis presented in partial fulfillment of the requirements for the degree of Master of Technology in Computer Systems Engineering at Massey University (Massey University, 2000) McLaughlin, Jesse Louis
Practical computer-based teaching methods are often used in conjunction with theory-based lecture sessions and textbooks when teaching image processing. Likewise, electronic or online image processing courses commonly provide both theoretical and interactive components; however, these are often disparate, in that the software used to provide each component is independent rather than integrated. It is less common to find electronic instructional resources for image processing that integrate theoretical textual content and practical interactive content into one seamless package. An integrated approach has the advantage that concepts are more easily conveyed and reinforced when taught 'side-by-side' in this way. The World Wide Web offers an attractive medium for delivering an integrated instructional resource on image processing: applets written in Java may be seamlessly integrated into a hypertext environment, providing practical demonstrations of image processing concepts alongside the relevant hypertext-based theoretical content. One of the major barriers to realising this kind of resource is the development effort required to create the necessary applets. This research demonstrates that the provision of a software framework can significantly reduce the burden of developing these applets. Such a framework provides a common code base that can be drawn upon during applet development, avoiding the need to start from scratch each time a new applet is needed. The framework's design is modelled on a dataflow view of image processing, allowing applets to be built in terms of interconnections between operations. This design is intended to provide the developer with an intuitive and easy-to-use application programming interface (API) for developing applets. The framework also provides APIs for the programmer to implement new operations and data types, thereby extending the capabilities of the framework. Further, the framework's design is general enough to allow it to be used for developing general-purpose image processing programs, or other programs that lend themselves to development in a dataflow language. This thesis shows that the proposed framework achieves its aims through an example application: the development of an applet that demonstrates a thresholding operation.
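The dataflow idea, building a program as interconnections between operations, can be sketched compactly. The thesis framework is written in Java; the Python below only illustrates the concept, ending with a thresholding graph to mirror the example applet. All class and function names are hypothetical.

```python
import numpy as np

class Operation:
    # A dataflow node: pulls data from upstream sources and applies a
    # function. Hypothetical names; the thesis's Java API will differ.
    def __init__(self, fn, *sources):
        self.fn, self.sources = fn, sources

    def output(self):
        return self.fn(*(s.output() for s in self.sources))

class Source(Operation):
    # A node with no inputs that supplies a constant value (e.g. an image).
    def __init__(self, value):
        super().__init__(lambda: value)

# Building the thresholding example as interconnections between operations.
image = Source(np.random.rand(64, 64))       # stand-in for a loaded image
level = Source(0.5)                          # threshold level
threshold = Operation(lambda img, t: (img > t).astype(np.uint8), image, level)
binary = threshold.output()                  # evaluates the graph on demand
```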
Item
Colour consistency in computer vision : a multiple image dynamic exposure colour classification system : a thesis presented to the Institute of Natural and Mathematical Sciences in fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University, Albany, Auckland, New Zealand (Massey University, 2016) McGhie, Sam
Colour classification vision systems face difficulty when a scene contains both very bright and dark regions: a colour indistinguishable at one exposure may be distinguishable at another. This thesis explores the use of multiple cameras with varying levels of sensitivity to aid the classification of colours in scenes with high illumination ranges. In the Multiple Image Dynamic Exposure Colour Classification (MIDECC) system, pie-slice classifiers are optimised for normalised red/green and cyan/magenta colour spaces. The MIDECC system finds a limited section of hyperspace for each classifier, resulting in a process that requires minimal manual input and can filter background samples without specialised training. In the experimental implementation, automatic multiple-camera exposure, data sampling, training and colour space evaluation to recognise 8 target colours across 14 different lighting scenarios is processed in approximately 30 seconds. The system provides computationally effective training and classification, achieving an overall true positive score of 92.4% over an illumination range of 880 lux between bright and dim regions. False positive classifications are minimised to 4.24%, assisted by heuristic background filtering. The limited-search-space classifiers and the layout of the colour spaces make the MIDECC system less likely to classify dissimilar colours, requiring a certain 'confidence' level before a match is output. Unfortunately, the system struggles to classify colours under extremely bright illumination due to the simplistic classifier-building technique. Results are compared with the common machine learning algorithms Naïve Bayes, Neural Networks, Random Tree and C4.5 tree classifiers. These algorithms return greater than 98.5% true positives and less than 1.53% false positives, with Random Tree and Naïve Bayes providing the best and worst comparable algorithms, respectively. Although it achieves a lower classification rate, the MIDECC system trains with minimal user input, ignores background and untrained samples when classifying, and trains faster than most of the studied machine learning algorithms.
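A pie-slice classifier in a normalised colour space is compact enough to show directly. The sketch below works in normalised red/green (rg) chromaticity and bounds each colour class by an angle range and a radius range around the neutral point; this parameterisation is an assumption, and MIDECC's cyan/magenta space and its optimisation of the slice bounds are not reproduced.

```python
import numpy as np

def rg_chromaticity(rgb):
    # Normalised red/green colour space: each channel divided by the sum.
    s = rgb.sum(axis=-1, keepdims=True).astype(float)
    s[s == 0] = 1.0                  # avoid dividing by zero on black pixels
    return rgb[..., :2] / s          # (r, g) pair per pixel

class PieSlice:
    # One colour class, bounded by an angle range and a radius range
    # around the neutral (grey) point of the chromaticity plane.
    def __init__(self, theta_lo, theta_hi, r_lo, r_hi, centre=(1/3, 1/3)):
        self.theta = (theta_lo, theta_hi)
        self.radius = (r_lo, r_hi)
        self.centre = np.asarray(centre)

    def match(self, rg):
        d = rg - self.centre
        theta = np.arctan2(d[..., 1], d[..., 0])
        radius = np.hypot(d[..., 0], d[..., 1])
        # Samples outside every trained slice stay unclassified, which is
        # how background filtering falls out of the representation.
        return ((self.theta[0] <= theta) & (theta <= self.theta[1]) &
                (self.radius[0] <= radius) & (radius <= self.radius[1]))
```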
Item
The influence of digital technology on modern Thai typography : transcustomary knowledge in modern Thai typography and design in the twentieth century : impacts on Thai culture and identity : Massey University of Wellington (Massey University, 2015) Supanun, Supphawut
The terms "cultural" and "political" nationalism refer to the distinctive value of diversity within a nation. Where the majority of the resident population is indigenous, as in Thailand, a non-colonised country, ethnic groups are mostly culturally oriented. According to the Oxford English Dictionary, the word diversity is understood as "the condition of being different". Having lived in New Zealand for over a decade, I have had multicultural experiences, living in an atmosphere where the customs and traditions of different races and ethnicities coexist. Living in such conditions can begin to isolate cultural values in the current era. In the article "Workforce America", Marilyn Loden and Judy Rosener describe a crucial mistake many people make with "race" and "culture": treating the two as if they mean the same thing when talking about diversity, which reinforces stereotypes and promotes "race" and "culture" as an "us versus them" (Loden & Rosener, 1991). Loden and Rosener also demonstrate the value of diversity through four dimensions: personality, internal dimensions, external dimensions, and organisational dimensions.
Interestingly, according to these dimensions of diversity, the primary dimensions are things that we cannot change, including age, race, ethnicity, gender, physical qualities, and sexual orientation. These are uniquely categorised and described as "ethnicity and race identification" (Loden & Rosener, 1991). To connect these dimensions of diversity to typography and letterform: the French philosopher Jacques Derrida describes the aesthetics of writing in his 1967 book Of Grammatology, where he wrote against the arbitrary distinction between speech and writing and emphasised that written symbols are legitimate signifiers in themselves. These linguistic variations are uniquely personalised; aesthetics are inimitable (Derrida, 2013). As a contemporary typographer/designer, I am always interested in how the past informs the present. My design proposition is to examine how cultural value appears in a creative framework, as the perceptions of typographers and graphic designers of the modern era could dictate the aesthetics of letterform, and how this sensory information is valued in the present.