Browsing by Author "Parr B"
Now showing 1 - 4 of 4
- Item: Analysis of Depth Cameras for Proximal Sensing of Grapes (MDPI (Basel, Switzerland), 2022-06). Parr B; Legg M; Alam F.
  This work investigates the performance of five depth cameras in relation to their potential for grape yield estimation. The technologies used by these cameras include structured light (Kinect V1), active infrared stereoscopy (RealSense D415), time of flight (Kinect V2 and Kinect Azure), and LiDAR (Intel L515). To evaluate their suitability for grape yield estimation, a range of factors was investigated, including their performance in and out of direct sunlight, their ability to accurately measure the shape of the grapes, and their potential to facilitate counting and sizing of individual berries. The cameras' performance was benchmarked against high-resolution photogrammetry scans. All the cameras except the Kinect V1 were able to operate in direct sunlight. Indoors, the RealSense D415 provided the most accurate depth scans of grape bunches, with a 2 mm average depth error relative to the photogrammetric scans; however, its performance was reduced in direct sunlight. The time-of-flight and LiDAR cameras produced depth scans of grapes with about an 8 mm depth bias, and the individual berries manifested in the scans as pointed shape distortions. This led to an underestimation of berry sizes when applying RANSAC sphere fitting, but may help with the detection of individual berries using more advanced algorithms. Applying an opaque coating to the surface of the grapes reduced the observed distance bias and shape distortion, indicating that both are likely caused by the cameras' transmitted light undergoing diffuse scattering within the grapes. More work is needed to investigate whether this distortion can be used for enhanced measurement of grape properties such as ripeness and berry size.
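The RANSAC sphere fitting mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the inlier tolerance, iteration count, and the closed-form least-squares sphere fit are all assumptions.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: |p|^2 = 2 c.p + (r^2 - |c|^2),
    which is linear in the unknowns (c, r^2 - |c|^2)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(max(sol[3] + centre @ centre, 0.0))
    return centre, radius

def ransac_sphere(points, n_iters=200, tol=1e-3, seed=0):
    """Fit a sphere to 4 random points per iteration, keep the model
    with the most inliers, then refit on all inliers.
    tol (metres) and n_iters are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 4, replace=False)]
        centre, radius = fit_sphere(sample)
        resid = np.abs(np.linalg.norm(points - centre, axis=1) - radius)
        inliers = resid < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return fit_sphere(points[best])
```

Because RANSAC scores candidate spheres by inlier count, the pointed shape distortions the paper describes act like outliers and pull the consensus radius below the true berry size, consistent with the underestimation reported above.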
- Item: CapLoc: Capacitive Sensing Floor for Device-Free Localization and Fall Detection (IEEE Xplore, 12/10/2020). Faulkner N; Parr B; Alam F; Legg M; Demidenko S.
  Passive indoor positioning, also known as Device-Free Localization (DFL), has applications such as occupancy sensing, human-computer interaction, fall detection, and many other location-based services in smart buildings. Vision-, infrared-, and wireless-based DFL solutions have been widely explored in recent years, each with its own strengths and weaknesses in terms of accuracy, feasibility in various real-world scenarios, etc. Passive positioning by tracking footsteps on the floor has been put forward as one of the promising options. This article introduces CapLoc, a floor-based DFL solution that can localize a subject in real time using capacitive sensing. Experimental results with three individuals walking 39 paths on the CapLoc floor show that it can detect and localize a single target's footsteps accurately, with a median localization error of 0.026 m. The potential for fall detection is also demonstrated with the outlines of various poses of a subject lying on the floor.
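The abstract does not describe CapLoc's actual estimator, so as a purely hypothetical baseline for grid-based floor sensing, a footstep position can be taken as the activation-weighted centroid of the sensor tiles, and the reported median localization error computed as below. The grid layout and weighting are assumptions for illustration only.

```python
import numpy as np

def weighted_centroid(readings, sensor_xy):
    """Estimate a footstep position as the capacitance-weighted centroid
    of the sensor tiles. Hypothetical baseline estimator; the paper's
    actual algorithm is not described in the abstract."""
    w = np.clip(np.asarray(readings, float), 0.0, None)
    return (np.asarray(sensor_xy, float) * w[:, None]).sum(axis=0) / w.sum()

def median_localization_error(estimates, ground_truth):
    """Median Euclidean distance between estimated and true positions,
    the metric reported in the abstract (0.026 m)."""
    diff = np.asarray(estimates, float) - np.asarray(ground_truth, float)
    return float(np.median(np.linalg.norm(diff, axis=1)))
```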
- Item: Grape yield estimation with a smartphone's colour and depth cameras using machine learning and computer vision techniques (Elsevier, 2023-09-06). Parr B; Legg M; Alam F.
  A smartphone with both a colour camera and a time-of-flight depth camera is used for automated grape yield estimation of Chardonnay grapes. A new technique is developed to automatically identify grape berries in the smartphone's depth maps, utilising the distortion peaks in the depth map caused by diffused scattering of light within each grape berry. This technique is then extended to allow unsupervised training of a YOLOv7 model for the detection of grape berries in the smartphone's colour images. A correlation coefficient (R²) of 0.946 was achieved when comparing the count of grape berries observed in RGB images to those identified by YOLO, and an average precision score of 0.970 was attained. Two techniques are then presented to automatically estimate the size of the grape berries and to generate 3D models of grape bunches using both colour and depth information.
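The R² of 0.946 quoted here compares per-image manual berry counts with YOLO detection counts. Assuming it is the squared Pearson correlation (the abstract calls it a correlation coefficient; a coefficient-of-determination reading is also possible), it can be computed as:

```python
import numpy as np

def r_squared(counts_manual, counts_detected):
    """Squared Pearson correlation between manually observed berry counts
    and detector counts. Assumed interpretation of the reported R^2."""
    r = np.corrcoef(np.asarray(counts_manual, float),
                    np.asarray(counts_detected, float))[0, 1]
    return r ** 2
```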
- Item: Occluded Grape Cluster Detection and Vine Canopy Visualisation Using an Ultrasonic Phased Array (MDPI (Basel, Switzerland), 20/03/2021). Parr B; Legg M; Bradley S; Alam F.
  Grape yield estimation has traditionally been performed using manual techniques. However, these tend to be labour intensive and can be inaccurate. Computer vision techniques have therefore been developed for automated grape yield estimation, but errors occur when grapes are occluded by leaves, other bunches, etc. Synthetic aperture radar has been investigated to allow imaging through leaves to detect occluded grapes; however, such equipment can be expensive. This paper investigates the potential of ultrasound to image through leaves and identify occluded grapes. A highly directional, low-frequency ultrasonic array composed of air-coupled ultrasonic transducers and microphones is used to image grapes through leaves. A fan is used to help differentiate between ultrasonic reflections from grapes and from leaves. Improved resolution and detail are achieved with chirp excitation waveforms and near-field focusing of the array. The overestimation in grape volume estimation using ultrasound was reduced from 222% to 112% when compared against a 3D scan obtained using photogrammetry, or from 56% to 2.5% when compared against a convex hull of that 3D scan. This also has the added benefit of producing more accurate canopy volume estimates, which are important for common precision viticulture management processes such as variable-rate applications.
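The volume comparison in this abstract can be sketched as follows: the convex-hull reference volume and a simple percentage-overestimation figure. The point clouds and the paper's full volume pipeline are not given, so this shows only the arithmetic behind numbers like "reduced from 56% to 2.5%".

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_volume(points):
    """Volume of the convex hull of a 3D point cloud, used here as
    the reference volume derived from the photogrammetry scan."""
    return ConvexHull(np.asarray(points, float)).volume

def overestimation_pct(measured, reference):
    """Percentage by which a measured volume exceeds a reference volume."""
    return 100.0 * (measured - reference) / reference
```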