Browsing by Author "Sandaruwan, K. D."
Now showing 1 - 4 of 4
Item: An automated approach to female body-type classification for fashion style recommendations using computer vision and machine learning (Faculty of Science, University of Kelaniya Sri Lanka, 2024)
Navodya, H. K. S.; Sandaruwan, K. D.

This research explores a computer vision-based approach to automating female body-type classification, with the goal of enhancing personalized fashion recommendations in the online retail industry. By tailoring style suggestions to individual body types, the system aims to improve customer satisfaction and reduce return rates. The system uses deep learning techniques to analyze full-body images and classify them into five body types: apple, inverted triangle, hourglass, pear, and rectangle. Traditional body-type classification methods often require detailed body measurements or complex 3D modeling, posing challenges in terms of user-friendliness and accessibility. This study highlights the advantages of deep learning and transfer learning, which enable the extraction of complex features from images and facilitate accurate, efficient classification without specialized hardware or extensive user input. The model was trained on a dataset of 560 full-body images of female participants aged 20 to 35 years, representing a young adult demographic. To identify the most effective model for this task, the research compared the performance of several approaches, including classical machine learning models, deep learning CNN architectures, and transfer learning models such as Xception, ResNet50, MobileNetV2, and VGG16. Accuracy and model stability were the primary evaluation criteria. The VGG16 model emerged as the best-performing classifier, achieving an accuracy of 83.50%. It was trained on 224x224x3 images over 100 epochs with a batch size of 32, using the Adam optimizer with a learning rate of 1e-5. Categorical cross-entropy was used to measure model performance, guiding parameter adjustments.
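As a concrete illustration of the loss mentioned above, here is a minimal pure-Python sketch of categorical cross-entropy over one-hot labels; the class probabilities shown are illustrative, not values from the study:

```python
import math

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy over a batch of one-hot labels."""
    total = 0.0
    for t_row, p_row in zip(y_true, y_pred):
        # Only the true class's predicted probability contributes per example
        total -= sum(t * math.log(max(p, eps)) for t, p in zip(t_row, p_row))
    return total / len(y_true)

# Illustrative 5-class example (apple, inverted triangle, hourglass, pear, rectangle)
y_true = [[0, 1, 0, 0, 0]]                  # true class: inverted triangle
y_pred = [[0.05, 0.80, 0.05, 0.05, 0.05]]   # model's softmax output
loss = categorical_cross_entropy(y_true, y_pred)
```

For a single example whose true class has predicted probability p, the loss reduces to -ln(p), so more confident correct predictions yield a lower loss.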
This model was integrated into both a mobile application and a web application. These applications allow users to upload images, predict their body type, and receive personalized fashion suggestions. In addition to performance metrics such as precision, recall, F1-score, and accuracy, the system was validated through a user feedback survey. This survey gathered responses from users who interacted with the web application and served as a human validation metric. The classification model demonstrated strong performance, with particularly high F1-scores for the Inverted Triangle (0.91) and Apple Shape (0.82) body types. The Hourglass and Pear shapes, while moderately accurate, showed lower precision and recall. User feedback from 60 respondents indicated high satisfaction with the system: 94% expressed satisfaction with the classification accuracy, 85.5% emphasized the importance of body type in fashion selection, and 73% reported satisfaction with the personalized fashion suggestions. These insights confirm the system's reliability in real-world applications. While this research demonstrates satisfactory results, limitations exist. The dataset is relatively small, and classification is limited to five body types. Additionally, the fashion suggestions are text-based rather than image-based. Future work will focus on expanding the dataset to improve classification accuracy, incorporating all eight recognized female body types, and integrating image-based fashion suggestions to enhance usability. This research lays the foundation for future advancements in AI-driven fashion recommendation systems, contributing to a more personalized and efficient fashion retail experience.

Item: Computer vision-based approach to floating waste detection (Faculty of Science, University of Kelaniya Sri Lanka, 2024)
Rathsara, K. M. A. C. D.; Sandaruwan, K. D.

Water pollution, especially from floating waste such as plastics, metals, and organic matter, poses a severe threat to aquatic ecosystems and environmental health. This study aims to develop a computer vision-based model for detecting floating waste in water bodies, utilizing recent advancements in deep learning to enhance detection accuracy and efficiency. The paper presents work toward the development and implementation of the You Only Look Once (YOLOv8n) model to improve accuracy and efficiency in detecting floating waste. The primary objective is to develop a better model for detecting various types of floating waste of concern, including glass, metal, plastic, and water hyacinth. The research involved collecting datasets from publicly available sources as well as web scraping for additional images. After data collection, several preprocessing steps were applied, including cleaning and normalization to ensure consistency across the dataset. Data augmentation techniques were used to increase the diversity of images and improve model reliability. Finally, the dataset was labeled using annotation tools. The YOLOv8n model was trained on this dataset with iterative parameter optimization and various experiments to improve detection performance. The experiments included training a model from scratch, fine-tuning it, using a pre-trained model, and transferring weights to new configurations. They demonstrate that the YOLOv8n model is highly effective for detecting floating waste. The model achieved a mean average precision (mAP50) of 0.932, with a precision of 0.904 and a recall of 0.852, indicating strong detection accuracy. The YOLOv8n model showed exceptional performance, particularly in detecting water hyacinth, highlighting its effectiveness and efficiency in floating waste detection.
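For context on the mAP50 figure reported above: a detection is conventionally counted as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of the IoU computation (boxes and values are illustrative, not from the study):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlapping rectangle (empty if boxes are disjoint)
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive at mAP50 when IoU >= 0.5
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # intersection 50, union 150
```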
Moreover, the model can detect floating waste precisely and can potentially be used in real-time applications for monitoring the water environment. These findings show strong potential for real-world applications involving rapid responses in aquatic environments and further conservation. Future work will focus on further training with iterative adjustments and dataset augmentation to improve the adaptability and accuracy of the model across different water conditions. This includes expanding the dataset through additional data collection efforts and increasing the diversity and number of identification classes. This study contributes to the wider discourse on environmental conservation by calling for innovative technological solutions to reduce the adverse effects of floating waste in aquatic environments, and it also promotes sustainable management of water resources.

Item: Effectiveness of a VR-based Solution to Improve Practical Skills of Trainee Nurses in Sri Lanka (Department of Industrial Management, Faculty of Science, University of Kelaniya Sri Lanka, 2022)
Aluthge, C. L. P.; Imeshika, K. A. S.; Weerasinghe, T. A.; Sandaruwan, K. D.

This study describes educational design research conducted to determine the usability of a Virtual Reality (VR) based learning tool for practicing nursing skills. With the advancement of technology, new technological solutions have become a part of nursing education. Technologies such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) can be used to overcome the practical limitations of clinical training. VR technology can be used to visualize the clinical environment at any time and anywhere, and it has been identified as one of the most promising technologies to support clinical education. Therefore, a VR-based application for practicing nasogastric intubation was developed with the support and advice of a group of nursing lecturers.
The application was evaluated qualitatively by nursing lecturers and quantitatively by a group of trainee nurses. Most of them had a positive opinion about embracing the new experience. In the analysis of overall satisfaction, the developed solution was found to be effective and supportive in reducing clinical training time in the physical environment. It can quickly familiarise trainee nurses with the clinical setting and help develop fundamental nursing skills in Sri Lanka.

Item: Enhancing optical flow for a smoother identification of global motion (Faculty of Science, University of Kelaniya Sri Lanka, 2023)
Algama, C. R.; Sandaruwan, K. D.

In the domain of computer vision, optical flow stands as a cornerstone for unravelling dynamic visual scenes. However, accurately estimating optical flow under large displacements remains an open problem. The conventional image flow constraint is vulnerable to substantial nonlinear components, rapid temporal variations, and spatial changes in the intensity function, and the inaccurate approximations inherent in numerical differentiation techniques can further amplify these difficulties. In response, this research proposes an algorithm for optical flow computation that uses the higher precision of a second-order Taylor series approximation within the differential estimation framework to improve the robustness and accuracy of optical flow. By embracing this mathematical underpinning, the research seeks to extract more information about the behaviour of the intensity function under complex scenarios with large nonlinear components, rapid temporal changes, or spatial changes in the intensity gradients, and to estimate the motion of areas lacking texture. The experimental results demonstrate that the proposed algorithm outperforms existing optical flow algorithms, revealing its capability to estimate global motion accurately even in challenging scenarios.
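To make the second-order idea concrete: a generic second-order Taylor expansion of the brightness constancy assumption yields an extended constraint of the following form. This is the standard textbook expansion, shown for context; it is not necessarily the authors' exact formulation.

```latex
% Brightness constancy between consecutive frames (\Delta t = 1):
%   I(x+u,\, y+v,\, t+1) = I(x, y, t)
% First-order Taylor expansion gives the classical image flow constraint:
I_x u + I_y v + I_t = 0
% Retaining second-order terms adds curvature information, which helps
% under large nonlinear intensity variations:
I_x u + I_y v + I_t
  + \tfrac{1}{2}\bigl(I_{xx} u^{2} + 2 I_{xy} u v + I_{yy} v^{2}\bigr)
  + I_{xt} u + I_{yt} v + \tfrac{1}{2} I_{tt} = 0
```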
Its capabilities are showcased through competitive performance on renowned optical flow benchmarks such as KITTI (2015) and Middlebury. The average endpoint error (AEE), a quintessential measure of optical flow accuracy that computes the Euclidean distance between the calculated flow field and the ground-truth flow field, is notably reduced, validating the effectiveness of the algorithm in handling complex motion patterns. Further experiments against the OpenCV optical flow implementation show a significant performance gain over state-of-the-art algorithms, indicating the method's potential for practical application in real-world scenarios that require accurate global motion estimation, such as autonomous navigation, video surveillance, flight stabilisation in drones, video stabilisation, and motion-based recognition, where accurate motion estimation between consecutive frames is imperative.
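The AEE metric described above is straightforward to compute; a minimal sketch follows (the flow vectors are illustrative, not benchmark data):

```python
import math

def average_endpoint_error(flow_est, flow_gt):
    """Mean Euclidean distance between estimated and ground-truth flow vectors."""
    errors = [math.hypot(ue - ug, ve - vg)
              for (ue, ve), (ug, vg) in zip(flow_est, flow_gt)]
    return sum(errors) / len(errors)

# Two pixels: estimated flows (1, 0) and (0, 2) against a zero ground truth
est = [(1.0, 0.0), (0.0, 2.0)]
gt = [(0.0, 0.0), (0.0, 0.0)]
print(average_endpoint_error(est, gt))  # (1.0 + 2.0) / 2 = 1.5
```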