A weight optimization-based transfer learning approach for plant disease detection of New Zealand vegetables
dc.citation.volume | 13 | |
dc.contributor.author | Saleem MH | |
dc.contributor.author | Potgieter J | |
dc.contributor.author | Arif K | |
dc.coverage.spatial | Switzerland | |
dc.date.available | 2022-09-22 | |
dc.date.issued | 2022-10-25 | |
dc.description.abstract | Deep learning (DL) is an effective approach to identifying plant diseases. Among several DL-based techniques, transfer learning (TL) yields significant gains in accuracy. However, the usefulness of TL has not yet been explored using weights optimized on agricultural datasets. Furthermore, plant diseases in different organs of various vegetables have not yet been detected with a trained/optimized DL model, and the detection of multiple diseases in vegetable organs has not yet been investigated. To address these research gaps, a new dataset named NZDLPlantDisease-v2 was collected for New Zealand vegetables. The dataset includes 28 healthy and defective organs of beans, broccoli, cabbage, cauliflower, kumara, peas, potato, and tomato. This paper presents a transfer learning method that optimizes weights obtained from agricultural datasets to improve plant disease detection. First, several DL architectures are compared to select the best-suited model, and data augmentation techniques are applied. The Faster Region-based Convolutional Neural Network (RCNN) Inception ResNet-v2 attains the highest mean average precision (mAP) among the DL models evaluated, which include different versions of Faster RCNN, the Single-Shot Multibox Detector (SSD), Region-based Fully Convolutional Networks (RFCN), RetinaNet, and EfficientDet. Next, weight optimization is performed on the PlantVillage, NZDLPlantDisease-v1, and DeepWeeds datasets by varying image resizers, interpolators, initializers, batch normalization, and DL optimizers. The optimized weights are then used to retrain the Faster RCNN Inception ResNet-v2 model on the proposed dataset. Finally, the results are compared with the same model trained using a large generic dataset, Common Objects in Context (COCO). The final mAP improves by 9.25%, reaching 91.33%. The robustness of the methodology is demonstrated by testing the final model on an external dataset and through stratified k-fold cross-validation. | |
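The two-stage idea the abstract describes, first optimizing weights on agricultural datasets and then reusing them as the starting point for the target task instead of generic COCO weights, can be sketched as follows. This is a minimal classification-style stand-in written in Keras; the paper itself trains a Faster RCNN Inception ResNet-v2 detector with the TensorFlow Object Detection API, and all dataset names, class counts, file paths, and hyperparameters below are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of weight-optimized transfer learning (assumed setup):
# stage 1 fine-tunes an Inception ResNet-v2 backbone on an intermediate
# agricultural dataset (e.g. PlantVillage); stage 2 starts the target
# NZDLPlantDisease-v2 task from that agriculture-optimized checkpoint.
import tensorflow as tf

def build_backbone(weights=None):
    # Inception ResNet-v2 feature extractor, global-average-pooled.
    return tf.keras.applications.InceptionResNetV2(
        include_top=False, weights=weights, pooling="avg",
        input_shape=(299, 299, 3))

def add_head(backbone, num_classes):
    # Attach a task-specific classification head to the shared backbone.
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(backbone.output)
    return tf.keras.Model(backbone.input, out)

# Stage 1: optimize backbone weights on an agricultural dataset.
backbone = build_backbone(weights="imagenet")
stage1 = add_head(backbone, num_classes=38)  # hypothetical PlantVillage class count
stage1.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# stage1.fit(plantvillage_ds, epochs=...)    # hypothetical tf.data pipeline
backbone.save_weights("agri_backbone.weights.h5")

# Stage 2: retrain on the target task from the agriculture-optimized
# backbone rather than generic ImageNet/COCO weights.
backbone2 = build_backbone(weights=None)
backbone2.load_weights("agri_backbone.weights.h5")
stage2 = add_head(backbone2, num_classes=28)  # 28 healthy/defective organ classes
stage2.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# stage2.fit(nzdl_v2_ds, epochs=...)         # hypothetical target pipeline
```

Sharing the backbone object between `build_backbone` and `add_head` in stage 1 means fine-tuning updates the backbone weights in place, so saving and reloading only the backbone cleanly transfers the optimized features while the shape-mismatched classification head is rebuilt for the new label set.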
dc.description.publication-status | Published online | |
dc.format.extent | 1008079 | |
dc.identifier | https://www.ncbi.nlm.nih.gov/pubmed/36388538 | |
dc.identifier.citation | Front Plant Sci, 2022, 13: 1008079 | |
dc.identifier.doi | 10.3389/fpls.2022.1008079 | |
dc.identifier.elements-id | 457396 | |
dc.identifier.harvested | Massey_Dark | |
dc.identifier.issn | 1664-462X | |
dc.identifier.uri | https://hdl.handle.net/10179/17804 | |
dc.language | eng | |
dc.publisher | Frontiers Media | |
dc.relation.isPartOf | Front Plant Sci | |
dc.subject | convolutional neural networks | |
dc.subject | cross-validation | |
dc.subject | deep learning | |
dc.subject | optimization algorithms | |
dc.subject | plant disease detection | |
dc.subject | transfer learning | |
dc.subject.anzsrc | 0607 Plant Biology | |
dc.title | A weight optimization-based transfer learning approach for plant disease detection of New Zealand vegetables | |
dc.type | Journal article | |
pubs.notes | Not known | |
pubs.organisational-group | /Massey University | |
pubs.organisational-group | /Massey University/College of Sciences | |
pubs.organisational-group | /Massey University/College of Sciences/School of Agriculture & Environment | |
pubs.organisational-group | /Massey University/College of Sciences/School of Agriculture & Environment/Agritech | |
pubs.organisational-group | /Massey University/College of Sciences/School of Food and Advanced Technology |
Files
Original bundle
- Name: fpls-13-1008079.pdf
- Size: 60.39 MB
- Format: Adobe Portable Document Format