Please use this identifier to cite or link to this item: http://repository.kln.ac.lk/handle/123456789/27355
Title: Effectiveness of Using Deep Learning for Blister Blight Identification in Sri Lankan Tea
Authors: Hewawitharana, G.H.A.U.
Nawarathne, U.M.L.A.
Hassan, A.S.F.
Wijerathna, L.M.
Sinniah, Ganga D.
Vidhanaarachchi, Samitha P.
Wickramarathne, Jagath
Wijekoon, Janaka L.
Keywords: Blister Blight, CNN, image processing, Mask R-CNN, tea diseases, YOLOv8
Issue Date: 2023
Publisher: Department of Industrial Management, Faculty of Science, University of Kelaniya Sri Lanka
Citation: Hewawitharana G.H.A.U.; Nawarathne U.M.L.A.; Hassan A.S.F.; Wijerathna L.M.; Sinniah Ganga D; Vidhanaarachchi Samitha P.; Wickramarathne Jagath; Wijekoon Janaka L. (2023), Effectiveness of Using Deep Learning for Blister Blight Identification in Sri Lankan Tea, International Research Conference on Smart Computing and Systems Engineering (SCSE 2023), Department of Industrial Management, Faculty of Science, University of Kelaniya Sri Lanka. Page 17
Abstract: The Ceylon tea industry faces a major challenge in pathogen-induced crop loss, with Blister Blight (BB), caused by Exobasidium vexans, posing the greatest threat and leading to harvest losses of over 30%. The fungus attacks tender tea shoots, directly reducing the tea harvest. This paper presents a system that identifies suspicious tea leaves and detects BB at its early stages, along with an assessment of severity, offering a potential solution to this critical issue. Using real-time object detection, the system filters out non-tea leaves from the initial image captured of a segment of a tea plant. The identified tea leaves are then subjected to BB identification and severity assessment based on the differing visual symptoms of the BB stages. This approach enables the system to accurately identify BB at its initial stage and assess its severity, allowing timely, targeted intervention to minimize crop losses. The YOLOv8 model achieved a precision of 98% (the proportion of detected objects that were relevant) and a recall of 96% (the proportion of relevant objects in the scene that were correctly detected). The ResNet50 (Residual Network 50) convolutional neural network (CNN) was selected as the final classification model, achieving an accuracy of 89.90% during training and 88.26% during testing.
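Note: The abstract describes a two-stage pipeline (YOLOv8 detection of tea leaves, followed by ResNet50 classification of Blister Blight stage/severity) but provides no implementation details. The following Python sketch is only an illustration of that pipeline structure; the weight files, class labels, and preprocessing choices shown are assumptions and are not taken from the paper.

# Minimal sketch of the two-stage approach described in the abstract:
# stage 1 detects tea leaves with YOLOv8, stage 2 classifies each detected
# leaf crop with a ResNet50 CNN for Blister Blight stage/severity.
# Weight files and class names below are hypothetical placeholders.
import torch
from torchvision import models, transforms
from ultralytics import YOLO
from PIL import Image

SEVERITY_CLASSES = ["healthy", "early_blister", "mature_blister"]  # hypothetical labels

detector = YOLO("tea_leaf_yolov8.pt")  # hypothetical fine-tuned YOLOv8 detector weights
classifier = models.resnet50(weights=None)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, len(SEVERITY_CLASSES))
classifier.load_state_dict(torch.load("bb_resnet50.pt", map_location="cpu"))  # hypothetical weights
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def assess_image(path: str):
    """Detect tea leaves in an image and assess Blister Blight severity per leaf."""
    image = Image.open(path).convert("RGB")
    results = detector(image)[0]            # stage 1: keep only detected tea-leaf regions
    findings = []
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        crop = image.crop((x1, y1, x2, y2))
        with torch.no_grad():
            logits = classifier(preprocess(crop).unsqueeze(0))  # stage 2: severity CNN
        label = SEVERITY_CLASSES[int(logits.argmax(dim=1))]
        findings.append(((x1, y1, x2, y2), label))
    return findings

The reported precision (98%) and recall (96%) correspond to the standard definitions: precision = TP / (TP + FP), the share of detections that are correct, and recall = TP / (TP + FN), the share of actual tea leaves that are detected.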
URI: http://repository.kln.ac.lk/handle/123456789/27355
Appears in Collections:Smart Computing and Systems Engineering - 2023 (SCSE 2023)

Files in This Item:
File: Proceeding SCSE 2023 (3) 17.pdf | Size: 13.21 kB | Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.