Smart Computing and Systems Engineering - 2025 (SCSE 2025)
Permanent URI for this collection: http://repository.kln.ac.lk/handle/123456789/30037
Search Results (2 items)
Item
Deep Learning-Based Approach for Distinguishing Between AI-Generated and Human-Drawn Paintings
(Department of Industrial Management, Faculty of Science, University of Kelaniya, 2025) Warnakulasooriya, A. I.; Rupasingha, R. A. H. M.; Kumara, B. T. G. S.
With the increasing number of robust Artificial Intelligence (AI) art generation applications, more realistic AI-generated paintings (AIGPs) are emerging, creating a significant impact on artists. Due to the widespread acceptance of AIGPs, the cultural, historical, and monetary value of real masterpieces is becoming uncertain, raising concerns about the significance of human painters and their artistic techniques. To protect artists' rights, it is crucial to differentiate AIGPs from human-drawn paintings (HDPs). Accordingly, the main objective of this research is to develop a Convolutional Neural Network (CNN) model that can automatically distinguish between AI-generated and human-drawn paintings without human intervention. Unlike previous studies that focused mainly on pixel-level analysis, the proposed model considers additional features such as edge patterns, object arrangements, pattern distributions, and gradient characteristics in painting classification. A diverse dataset of 3,000 paintings from the AI-ArtBench dataset, comprising 1,500 AIGPs and 1,500 HDPs across 10 different art themes, was collected and preprocessed for this study. The AIGPs were generated in equal proportions using Latent Diffusion and Standard Diffusion models. The implemented CNN model achieved an optimum classification accuracy of 90% with a training data size of 10%, while an Artificial Neural Network (ANN) baseline exhibited 77% accuracy under the same conditions. Furthermore, the models were compared using performance metrics such as precision, recall, F1-score, RMSE, and MAE. Through Gradient-weighted Class Activation Mapping (Grad-CAM), the key visual features that the CNN model used to distinguish AIGPs from HDPs were identified. These findings highlight the potential of automated systems in detecting AI-generated versus human-created artworks for authentication purposes. Future work will focus on analyzing model performance across different art styles and identifying the unique discriminative features associated with each.

Item
AI-Driven Solutions for Automated Fish Freshness Classification Using CNN Architectures
(Department of Industrial Management, Faculty of Science, University of Kelaniya, 2025) Peries, R. F. S.; Adeeba, S.; Ahamed, M. F. S.; Kumara, B. T. G. S.
Ensuring fish freshness is essential for market value, consumer health, and seafood quality. In Sri Lanka, traditional sensory-based methods for assessing freshness are subjective and often inaccessible to small-scale fishermen due to high costs and limited resources. This study addresses these challenges by employing Convolutional Neural Networks (CNNs) to automate fish freshness classification using image data from the Mannar coastal region. The approach involved capturing images of whole fish, fish eyes, and fish gills, followed by preprocessing steps such as labeling, resizing, and augmentation. Separate custom CNN models were developed for each dataset, with the gill dataset achieving the highest performance at 98.26% accuracy, along with excellent precision, recall, and F1-scores. Furthermore, advanced pre-trained models, including VGG16, ResNet50, MobileNetV2, InceptionV3, Xception, and DenseNet121, were evaluated on the gill dataset.
Among these, DenseNet121 emerged as the best-performing model due to its high accuracy, precision, recall, F1-score, and stable learning curve. These findings highlight the potential of CNN-based and pre-trained models to provide scalable, cost-effective solutions for fish freshness assessment, promoting sustainable seafood practices, empowering small-scale fishers, and enhancing food safety standards.
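
To illustrate the kind of binary CNN classifier described in the first abstract (AIGP vs. HDP), the following is a minimal Keras sketch. It is not the authors' architecture; the image size, layer sizes, directory layout (data/train and data/val with ai/ and human/ subfolders), and label convention are all assumptions made for illustration.

    # Minimal sketch of a binary CNN classifier (assumed layout: data/{train,val}/{ai,human})
    import tensorflow as tf
    from tensorflow.keras import layers, models

    IMG_SIZE = (224, 224)  # assumed input resolution
    BATCH = 32

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/val", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")

    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # assumed convention: 1 = AI-generated, 0 = human-drawn
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    model.fit(train_ds, validation_data=val_ds, epochs=10)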
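The first abstract also mentions Grad-CAM for identifying which regions the CNN attends to. Below is a generic Grad-CAM sketch for a Keras model with a sigmoid output; the last-convolution layer name is an assumption and must be replaced with the name from the actual model.

    # Generic Grad-CAM sketch; "last_conv_layer_name" must match the trained model's final conv layer
    import tensorflow as tf

    def grad_cam(model, image, last_conv_layer_name):
        # Map the input image to the last conv feature maps and the prediction.
        grad_model = tf.keras.Model(
            model.inputs,
            [model.get_layer(last_conv_layer_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[None, ...])
            score = preds[:, 0]                       # sigmoid output for the positive class
        grads = tape.gradient(score, conv_out)        # gradient of the score w.r.t. feature maps
        weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pool the gradients
        cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
        cam = tf.nn.relu(cam)[0]
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalised heatmap over the conv grid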
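For the second abstract, a DenseNet121 transfer-learning setup for gill images could look roughly like the sketch below. This is only an assumed configuration (ImageNet weights, frozen base, binary fresh/not-fresh labels, directory layout gills/{train,val}), not the paper's reported training procedure.

    # Sketch of DenseNet121 transfer learning on gill images (assumed layout: gills/{train,val})
    import tensorflow as tf
    from tensorflow.keras import layers
    from tensorflow.keras.applications import DenseNet121
    from tensorflow.keras.applications.densenet import preprocess_input

    IMG_SIZE = (224, 224)

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "gills/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "gills/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

    base = DenseNet121(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
    base.trainable = False  # freeze ImageNet features; fine-tuning would be a later step

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = preprocess_input(inputs)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # assumed binary fresh/not-fresh label
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=5)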