Browsing by Author "Kumara, B.T.G.S."
Item Comparative Analysis between K-mean and EM Clustering for Investigate Appropriate Algorithm for Landslide Risk Evaluation (4th International Conference on Advances in Computing and Technology (ICACT ‒ 2019), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2019) Madawala, C.N.; Kumara, B.T.G.S.
Irregular development activities on mountains and inadequate attention to construction practices have led to an increase in landslides and to damage to lives and property. The study area covers nearly 3,275 sq. km of the Ratnapura District, of which about 2,178 sq. km is highly prone to landslides. Landslides have occurred in many regions of this area, and nearly 90 deaths have been reported by the National Building Research Organisation (NBRO), Sri Lanka. If suitable investigations were performed at the right time, most landslides could be predicted fairly accurately. The main objective of this study is to evaluate landslide risk levels in order to discover the real extent, timing and intensity of landslide processes in the Ratnapura District; such knowledge will be of vital benefit to government officials and the general public in avoiding landslide hazards and mitigating losses. Clustering approaches can be used to develop a risk-analysis model from actual data. The method is based on the K-means and Expectation Maximization (EM) algorithms and considers one triggering factor (rainfall) and several causative factors (slope angle, elevation and intensity). These data were collected and applied to the clustering algorithms. The study compares the clustering algorithms and investigates the most appropriate risk-evaluation approach for advancing hazard monitoring, early warning and disaster mitigation. The results indicate that the EM clustering algorithm achieved accuracy over 84% with the highest speed, while the K-means algorithm achieved the highest accuracy, over 92%, but was more time-consuming than EM. Therefore, this research proposes that EM clustering is well suited to landslide risk evaluation and produces a relevant and accurate prediction of landslide vulnerability within the study area.
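Both algorithms compared in this abstract are available in scikit-learn (KMeans, and GaussianMixture as the usual EM implementation for Gaussian clusters), so the comparison can be outlined as below. This is a minimal sketch, not the authors' code: the file name, the four-column feature layout and the choice of three clusters are assumptions made for illustration.

```python
# Minimal sketch of a K-means vs. EM (Gaussian mixture) comparison on landslide factors.
import time
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

# Hypothetical input: one row per record, with the factors named in the abstract.
data = pd.read_csv("ratnapura_landslide_factors.csv")
X = StandardScaler().fit_transform(
    data[["rainfall", "slope_angle", "elevation", "intensity"]]
)

for name, model in [
    ("K-means", KMeans(n_clusters=3, n_init=10, random_state=42)),
    ("EM (GMM)", GaussianMixture(n_components=3, random_state=42)),
]:
    start = time.perf_counter()
    labels = model.fit_predict(X)   # assign each record to a risk cluster
    elapsed = time.perf_counter() - start
    print(f"{name}: silhouette={silhouette_score(X, labels):.3f}, time={elapsed:.3f}s")
```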
Item A data mining approach for the analysis of undergraduate examination question papers (International Research Conference on Smart Computing and Systems Engineering - SCSE 2018, 2018) Brahmana, A.; Kumara, B.T.G.S.; Liyanage, A.L.C.J.
Examinations play a major role in the teaching, learning and assessment process. Questions are used to obtain information and to assess the knowledge and competence of students. Academics involved in teaching in higher education mostly use final examination papers to assess students' retention capability and application skills. Questions used to evaluate different cognitive levels of students may be categorized as higher-order, intermediate-order and lower-order questions. This research work derives a suitable methodology to categorize final examination question papers based on Bloom's Taxonomy. The analysis was performed on computer science related end-semester examination papers in the Department of Computing and Information Systems of the Sabaragamuwa University of Sri Lanka. Bloom's Taxonomy identifies six levels in the cognitive domain, and the study checked whether examination questions comply with its requirements at the various cognitive levels. The appropriate category for the questions in each examination paper was then determined. Over 900 questions obtained from 30 question papers were used for the analysis. Natural language processing techniques were used to identify the significant keywords and verbs that indicate the suitable cognitive level, and a rule-based approach was used to determine the level of a question paper in light of Bloom's Taxonomy. The final outcome is an effective model for determining the level of an examination paper.

Item Data mining model for identifying high-quality journals (International Research Conference on Smart Computing and Systems Engineering - SCSE 2018, 2018) Jayaneththi, J.K.D.B.G.; Kumara, B.T.G.S.
The focus in local universities over the last decade has shifted from teaching at undergraduate and postgraduate levels to conducting research and publishing in reputed local and international journals. Such publications enhance the reputation of both the individual and the university. The last two decades have seen a rapid rise in open-access journals, which has led to quality issues, and choosing journals for publication has therefore become a problem. Many of these journals focus on the monetary aspect and will publish articles that previously may not have been accepted; common issues include the design of the study, the methodology and the rigor of the analysis. This has serious consequences, as some of these papers are cited and used as a basis for further studies. Another cause for concern is that honest researchers are sometimes duped into believing that such journals are legitimate and may end up publishing good material in them. In addition, it is at present very difficult to distinguish fake journals from legitimate ones. Therefore, the objective of the research was to introduce a data mining model that helps researchers identify the highest-quality and most suitable journals in which to publish their findings. The study focused on journals in the field of Computer Science. The Journal Impact Factor, H-index, Scientific Journal Rankings, Eigenfactor Score, Article Influence Score and Source Normalized Impact per Paper metrics were used to build the model. Journals were clustered into five clusters using the K-Means clustering algorithm, and the clusters were interpreted as excellent, good, fair, poor and very poor based on the results.
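A rough outline of the journal-clustering step described in the last abstract, using scikit-learn's KMeans with five clusters; the input file, column names and the ranking heuristic used to label the clusters are assumptions for illustration, not the authors' actual pipeline.

```python
# Sketch: cluster Computer Science journals into five quality groups
# using the six bibliometric metrics listed in the abstract.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

metrics = ["impact_factor", "h_index", "sjr", "eigenfactor", "article_influence", "snip"]
journals = pd.read_csv("cs_journal_metrics.csv")        # hypothetical input file

X = StandardScaler().fit_transform(journals[metrics])   # metrics live on very different scales
km = KMeans(n_clusters=5, n_init=10, random_state=0)
journals["cluster"] = km.fit_predict(X)

# Order clusters by the mean of their standardised centroids (all six metrics are
# "higher is better") and label them as in the study.
ranking = np.argsort(-km.cluster_centers_.mean(axis=1))
labels = ["excellent", "good", "fair", "poor", "very poor"]
journals["quality"] = journals["cluster"].map({int(c): labels[i] for i, c in enumerate(ranking)})
print(journals["quality"].value_counts())
```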
Item Database Management System Deployment on Docker Containerization for Distributed Systems (Faculty of Computing and Technology (FCT), University of Kelaniya, Sri Lanka, 2021) Kithulwatta, W.M.C.J.T.; Jayasena, K.P.N.; Kumara, B.T.G.S.; Rathnayaka, R.M.K.T.
Containerization is a novel technology that offers an alternative to virtualization. Because of its rich infrastructure-level features, many systems administration engineers use Docker as their infrastructure platform, and almost any kind of software service can be deployed in Docker containers. This study evaluates the behaviour of relational database management systems deployed in Docker containers. Many scholarly articles already evaluate database engine performance under different metrics and measurements, so, without repeating them, this study evaluated the data storage mechanisms, security approaches, container resource usage and container launching features. Based on the observed features and factors, containerized database management systems provide additional value-added features. Hence, Docker containers running database management systems can be recommended for distributed computer systems to gain effectiveness and efficiency.

Item Effectiveness of Machine Learning Algorithms on Battling Counterfeit Items in E-commerce Marketplaces (Department of Industrial Management, Faculty of Science, University of Kelaniya Sri Lanka, 2023) Gunawardhana, Kalinga; Kumara, B.T.G.S.; Rathnayake, R.M.K.T.; Jayaweera, Prasad M.
For e-commerce marketplaces, counterfeit goods are a major issue: they endanger public safety in addition to causing customer dissatisfaction and revenue loss. Traditional techniques for identifying fake goods in online marketplaces take too long and have a narrow reach, and hence are ineffective. In recent years, machine learning algorithms have become a promising tool for swiftly and precisely identifying counterfeit goods. This research examines the usefulness of two machine learning algorithms in identifying fake goods in online marketplaces. The study assesses their performance using a sizeable dataset of descriptions, titles, prices and seller names from several well-known e-commerce platforms. The findings show that machine learning algorithms can substantially improve the detection of fake goods in online marketplaces.

Item A Machine Learning Influenced Recommendation System for Predicting the Rainfall and Price for Crops in Badulla District (Faculty of Computing and Technology, University of Kelaniya Sri Lanka, 2022) Nandasiri, K.P. Sasindu Madushan; Banujan, Kuhaneswaran; Kumara, B.T.G.S.; Jayasinghe, Sadeeka; Ekanayake, E.M.U.W.J.B.; Senthan, Prasanth
Agriculture is becoming ever more vital to the global economy. Continuing population growth requires substantial crop output for human existence, but as the population has increased, human activity has also altered the environment, which has created challenges for weather forecasting, a crucial input to crop planting in the agricultural sector. The world therefore needs a method of forecasting agricultural weather. In addition, it is highly advantageous for farmers to know the production rate they can achieve and the price range they may expect for their efforts. As a result, machine learning technologies have become popular in the agricultural industry owing to their ability to provide accurate farming predictions, and selecting suitable crops for planting has become a necessity. This study focuses on the application of machine learning to estimate the optimal crop for a given period. This work addresses the first part of the study: rainfall prediction as part of the weather forecast, together with price forecasting. The authors employed six distinct machine learning models to forecast rainfall and crop prices.
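The six models used in the last abstract are not named, so the snippet below only illustrates the general pattern of comparing regressors on a rainfall series with scikit-learn; the file name, predictor columns and the three models shown are assumptions made for illustration.

```python
# Sketch: compare a few regression models for rainfall prediction.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

df = pd.read_csv("badulla_weather.csv")                   # hypothetical weather records
X = df[["year", "month", "temperature", "humidity"]]      # assumed predictor columns
y = df["rainfall"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

for name, model in [
    ("Linear regression", LinearRegression()),
    ("Random forest", RandomForestRegressor(n_estimators=200, random_state=1)),
    ("Support vector regression", SVR()),
]:
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.2f} mm")
```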
Item Pedestrian detection using image processing for an effective traffic light controlling system (International Research Conference on Smart Computing and Systems Engineering - SCSE 2018, 2018) Chathumini, K.G.L.; Kumara, B.T.G.S.
Traffic congestion and pedestrian road accidents are two major issues that Sri Lankan society faces today. Both can be reduced by using a traffic light controlling system effectively. This research paper proposes a system that makes a PEdestrian LIght CONtrolled (PELICON) crossing more effective using image processing. The proposed system consists of three major parts: a CCTV camera, the processing system, and a pair of poles with a standard traffic light system. First, the system captures an image of the pedestrians waiting to cross the road using the CCTV camera. Then it processes the image to detect and count the pedestrians. Finally, if the number of pedestrians exceeds a given threshold value or the pedestrian waiting time is exceeded, the logical part of the system produces a result that controls the traffic lights. Such an image-based PELICON crossing could be more effective than a button-operated PELICON crossing.

Item The Role of Social Media (Twitter) in Analysing Home Violence: A Machine Learning Approach (Department of Industrial Management, Faculty of Science, University of Kelaniya Sri Lanka, 2023) Adeeba, Saleem; Banujan, Kuhaneswaran; Kumara, B.T.G.S.
Home Violence (HV) has been a persistent issue across the globe, transcending economic status and cultural boundaries. The COVID-19 pandemic has further exacerbated this problem, bringing it to the forefront of public discourse. This study aims to analyse the impact of HV by utilising Twitter data and Machine Learning (ML) techniques, categorising tweets into three groups: (i) HV Incident Tweets, (ii) HV Awareness Tweets, and (iii) HV Shelter Tweets. This categorisation provides several advantages, such as uncovering new or hidden evidence, filling information gaps, and identifying potential suspects. Over 40,000 tweets were collected using the Twitter API between April 2019 and July 2021. Data pre-processing and word embedding were performed to prepare the data for analysis. Initially, tweets were categorised into HV Positive (containing relevant information) and HV Negative (noise or unrelated content) groups, using manually labelled tweets for training and testing. Machine learning models, including Support Vector Machines (SVM), Naïve Bayes (NB), Logistic Regression, Decision Tree Classifier, Artificial Neural Networks (ANN), and BERT+LSTM, were employed for this task. Subsequently, HV Positive tweets were classified into the three categories above, again using manually labelled tweets for training and testing. Models such as Tf-IDF+SVM, Tf-IDF+Decision Tree, Tf-IDF+NB, and GloVe+LSTM were utilised. Several evaluation metrics were used to assess the performance of the models. The study's results provide important new insights into the prevalence, patterns, and causes of HV as reported on social media and into how the general population reacts to these problems. The research clarifies how social media may help spread knowledge, provide assistance, and link victims to resources. These insights can be instrumental in informing policymakers, non-profit organisations, and researchers as they work to develop targeted interventions and strategies to address HV during and beyond the COVID-19 pandemic.
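A minimal sketch of the first-stage filter described in the last abstract (HV Positive vs HV Negative), using the TF-IDF + SVM combination the abstract names; the toy tweets and labels are invented purely for illustration and stand in for the manually labelled corpus.

```python
# Sketch: TF-IDF + linear SVM filter separating HV-related tweets from noise.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

tweets = [   # toy stand-ins for the manually labelled tweets
    "Neighbour heard shouting and things breaking next door again",
    "A new shelter for survivors has opened downtown, call for a safe place to stay",
    "Great game last night, what a comeback",
    "Just posted a photo of my lunch",
]
labels = [1, 1, 0, 0]   # 1 = HV Positive, 0 = HV Negative

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(tweets, labels)

print(clf.predict(["Police were called to a domestic disturbance on our street"]))
```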
Item Social media mining for post-disaster management - A case study on Twitter and news (International Research Conference on Smart Computing and Systems Engineering - SCSE 2018, 2018) Banujan, K.; Kumara, B.T.G.S.; Incheon Paik
A natural disaster is a natural event that can cause damage to both lives and property. Social media can share information in real time, and post-disaster management can be improved to a great extent if social media are mined properly. After identifying this need and the possibility of addressing it through social media, we chose Twitter for mining and news sources for validating the Twitter posts. In the first stage, we fetch Twitter posts and news posts from the Twitter API and a news API respectively, using predefined keywords relating to the disaster. In the second stage, those posts are cleaned and the noise is reduced. In the third stage, we extract the disaster type and the geolocation of each post using a named entity recognizer library. In the final stage, we compare the Twitter data with the news data to rate the trueness of each Twitter post. The integrated results show that 80% of the Twitter posts obtained a rating of "3" and 15% obtained a rating of "2". We believe that our model can alert organizations to carry out their disaster management activities. Our future development is mainly twofold: first, we plan to integrate other social media, e.g. Instagram and YouTube, to fetch data; second, we plan to integrate weather data into the system to improve the precision and accuracy of determining the trueness of the disaster and its location.

Item Stack Ensemble Model to Detect the Stress in Humans by Considering the Sleeping Habits (Faculty of Computing and Technology, University of Kelaniya Sri Lanka, 2022) Kanagarathnam, Mauran; Premisha, P.; Prasanth, Senthan; Banujan, Kuhaneswaran; Kumara, B.T.G.S.
One of the big challenges people face today is experiencing and managing stress. People of all ages, from teenagers to seniors, encounter problems as a result of stress. Stress falls into two main categories, acute and chronic. Acute stress is a typical human response that helps the body adapt to a new situation and in fact has positive effects. Chronic stress, however, is the critical type, and this study focused on determining its level in advance. The research examined eight attributes related to chronic stress in order to investigate a person's sleeping patterns, using a dataset obtained from the Kaggle website. The user's snoring range, body temperature, limb movement rate, blood oxygen level, eye movement, number of hours of sleep, heart rate and stress level (0 = low/normal, 1 = medium low, 2 = medium, 3 = medium high, 4 = high) were all taken into account. A two-level stack ensemble approach was used: at level 0, Random Forest, Decision Tree, K-nearest neighbour and XGBoost classifiers were considered, and at level 1 Logistic Regression was adopted as the meta-model. The predictions obtained from the level 0 models were added as additional attributes to the original dataset and fed to the level 1 model as a new training dataset. Five-fold cross-validation was performed alongside the basic assessment to further validate the model for various ratios of training and testing data. Following the cross-validation, the mean accuracy was obtained for the RF, DT, KNN, XGB and stack ensemble models. The results show that the combined stack ensemble model produced more precise results than the models considered in isolation.
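The two-level design in the last abstract maps naturally onto scikit-learn's StackingClassifier, so a rough outline might look like the following. This is a sketch under that assumption (with xgboost's scikit-learn wrapper for the XGB learner), not the authors' implementation, and the file and column names are placeholders.

```python
# Sketch: level-0 RF / DT / KNN / XGBoost learners stacked under a logistic regression meta-model.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

df = pd.read_csv("sleep_stress.csv")                    # hypothetical export of the Kaggle dataset
X, y = df.drop(columns=["stress_level"]), df["stress_level"]

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("xgb", XGBClassifier(eval_metric="mlogloss", random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # level-1 meta-model
    cv=5,                                               # level-0 predictions generated via 5-fold CV
)

scores = cross_val_score(stack, X, y, cv=5)             # 5-fold validation of the whole stack
print(f"mean accuracy: {scores.mean():.3f}")
```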