ICACT 2018
Permanent URI for this collection: http://repository.kln.ac.lk/handle/123456789/18944
Item: Hybrid Gene Selection with Information Gain and Multi-Objective Evolutionary Algorithm for Leukemia Classification (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Fajila, M.N.F.

Leukemia is a bone marrow cancer with subtypes such as Acute Myeloid Leukemia and Acute Lymphoblastic Leukemia, which require expertise to identify. Morphological and histological appearances can be used to identify the disease, yet precise identification of subtypes is difficult, and subtype detection is therefore a crucial part of prognosis. In this study, a hybrid gene selection approach, Information Gain-Multi-Objective Evolutionary Algorithm (IG-MOEA), is proposed to identify Leukemia subtypes. Microarray data consist of thousands of genes, not all of which correspond to the disease, and irrelevant and redundant genes strongly degrade classification performance. Hence, IG is first applied to preprocess the original datasets and remove irrelevant and redundant genes; MOEA is then used to select a smaller subset of genes for accurate classification of new instances. Gene subset selection strongly influences classification, and the selected subsets are in turn influenced by the algorithm used for gene selection, so choosing an appropriate subset selection algorithm is important; an informative subset of genes can then be used efficiently for accurate prediction. Hence, MOEA is used in the proposed study for subset selection. The performance of the proposed IG-MOEA is compared against Information Gain-Genetic Algorithm (IG-GA) and Information Gain-Evolutionary Algorithm (IG-EA). Three Leukemia microarray datasets were used to evaluate the performance of the proposed approach. Remarkably, 100% classification accuracy was achieved for all three datasets with only a few informative genes using the proposed approach.
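As a rough illustration of the filter-then-search pipeline described above, the sketch below ranks genes by an information-gain-style relevance score (scikit-learn's mutual_info_classif) and keeps the top-ranked genes before a wrapper search. The cut-off of 200 genes and the mention of NSGA-II are illustrative assumptions, not details taken from the abstract.

```python
# Minimal sketch of the information-gain filtering stage, assuming a
# microarray matrix X (samples x genes) and class labels y are available.
# The top-k cut-off and the downstream search are illustrative choices.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def information_gain_filter(X, y, top_k=200):
    """Rank genes by an entropy-based relevance score and keep the top_k."""
    scores = mutual_info_classif(X, y, discrete_features=False, random_state=0)
    keep = np.argsort(scores)[::-1][:top_k]   # indices of the most informative genes
    return X[:, keep], keep

# X_filtered would then be handed to a multi-objective search (e.g. NSGA-II)
# that minimises subset size while maximising cross-validated accuracy.
```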
Item: Technology Enabled Formative Assessment in Medical Education (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Youhasan, P.; Sanooz, A.R.M.

Technology enabled assessment is a novel pedagogical approach which has emerged in medical educational practice. The main aim of formative assessment is to drive learning through constructive feedback. Kahoot is a free, real-time, game-based Web 2.0 learning platform which is widely accepted for conducting formative assessment. The aim of this study was to explore students' perceptions of using Kahoot as a formative assessment tool at Eastern University, Sri Lanka. A total of 61 third-year medical students participated in this cross-sectional descriptive study, following a pharmacology formative assessment conducted via Kahoot. Student perceptions of the Kahoot experience were evaluated with a self-administered questionnaire consisting of 10 perception statements. Participants were asked to rate the statements on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Descriptive statistics were computed to present students' perceptions. The study revealed that most of the students (83.6%) were happy with the Kahoot experience in conducting the formative assessment, and 95.1% of them recommended Kahoot for formative assessment in the future. The majority of participants (>90%) agreed or strongly agreed that Kahoot increases focus on the subject, provides fun during learning, motivates learning, and is an effective method for active learning and providing feedback. The overall outcome derived from the students gives Kahoot a place as a tool to enhance learning and provide feedback. Its free availability, feasibility, technical simplicity and the enjoyable attitude of the students towards the application make it a practical tool for technology enabled assessment.

Item: Security and Privacy Implications of Biometric Authentication: a Survey (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Wijenayake, D.S.

In today's world, biometric authentication is used by a wide range of gadgets and systems to verify user identity and control access. Even though biometric authentication is more secure than traditional authentication methods, it is not completely hack-proof, as any technology can be hacked and exposed. Protecting user biometric data is a key security challenge in this field. If a hacker steals biometric information such as fingerprints or voice waves from a user, the hacker can effortlessly access all the systems that the original user has access to, which is a serious security concern. Biometric information is unique and, unlike a password, cannot be changed to block someone from using it; this is the most serious drawback of biometric authentication. Therefore, the aims of this paper are to find, understand and propose remedies for the security and privacy shortcomings of the latest biometric authentication methods. The critical evaluation carried out in this research shows that several limitations exist in recent studies, among them a lack of generalizability (testbeds and participants were limited to selected geographical areas compared to the whole population of potential users), too few experiments, and a lack of usability and privacy requirements. The paper suggests solutions and future research directions for many of these limitations, such as implementing indicators that depict the strength of a biometric authentication method's security, repeating studies with different populations via the internet, improving current research-based biometric authentication applications to support multimodal (using two or more biometrics to authenticate) and continuous authentication, and ensuring user trust and privacy. In conclusion, with the competition among major players in the electronic device market, research-based biometric authentication methods will rapidly be implemented in the real world. To ensure the protection of sensitive data, mobile interfaces should be improved, researchers are encouraged to reproduce and critically evaluate each other's work, and the security and privacy of biometric authentication should be maintained without compromising usability. Only then will exploiting biometrics become a real challenge for hackers.
Item: Finite Element Method based Triangular Mesh Generation for Aircraft-Lightning Interaction Simulation (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Vinotha, K.; Thirukumaran, S.

Lightning is a natural electrical discharge process, and the most common lightning strike is cloud-to-ground. It occurs when the negative charges accumulated at the bottom of a thundercloud traverse towards the ground to neutralize with the positive charges the cloud induces on the earth, and electrons travel along the lightning channel. Statistics show that a commercial aircraft flying under a thundercloud is directly struck by lightning about once a year on average. The study of the electromagnetic threat posed by lightning strikes is important for flight safety and for restructuring aircraft design to mitigate the direct effects of lightning, which damage the physical material of the aircraft, and its indirect effects on the navigation systems. The prime objective of this paper is to find the electric field distribution around the aircraft conductor in free-space conditions under a lightning scenario. For the simulation, the cloud-to-ground lightning flash is represented by a wave equation. The finite element method is applied to solve the wave equation to identify the potential distribution and, from it, the electric field. Each triangular finite element is considered and the potential at the nodes of a typical element is obtained. The relationship between electric potential and electric field, E = −∇V, is then used to determine the electric field distribution around the aircraft surface by numerical differentiation of the previously obtained potential distribution. This paper presents an aircraft-lightning interaction simulation under the thundercloud and above the ground by generating a two-dimensional triangular mesh for the finite element method. A significant electric field concentration is observed at the sharp end points of the aircraft. Due to the higher radiated electric field, the aircraft-lightning interaction may adversely affect the aircraft navigation systems and damage its structure. The simulation results would be useful for studying the impact of lightning on aerial vehicles struck by cloud-to-ground lightning. During the simulation it was assumed that the aircraft surface is a good conductor; the effects of material properties are left for future studies.
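A minimal sketch of the post-processing step described above (recovering E = −∇V from a potential field by numerical differentiation), assuming the FEM solution has already been interpolated onto a regular grid. The grid, spacing and potential function here are placeholders for illustration, not values from the paper.

```python
# Sketch: numerical evaluation of E = -grad(V) on a regular grid, assuming
# the FEM potential solution has been sampled onto that grid. The potential
# used here is an arbitrary stand-in, not the paper's lightning solution.
import numpy as np

h = 0.05                                   # grid spacing in metres (assumed)
x = np.arange(0.0, 10.0, h)
y = np.arange(0.0, 10.0, h)
X, Y = np.meshgrid(x, y, indexing="ij")

V = 1.0e5 * np.exp(-((X - 5.0)**2 + (Y - 5.0)**2))   # placeholder potential [V]

dV_dx, dV_dy = np.gradient(V, h, h)        # central finite differences
Ex, Ey = -dV_dx, -dV_dy                    # E = -grad(V)
E_mag = np.hypot(Ex, Ey)

print("Peak |E| on the grid: %.3e V/m" % E_mag.max())
```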
Item: Smart Iron Rack: Image Processing Approach to Iron Clothes Remotely (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Yatanwala, Y.W.T.M.; Liyanaarachchi, D.S.G.L.D.

The ironing process is a repetitive manual task carried out daily. Conventional ironing methods always require a significant amount of physical user interaction, which is time consuming. As a solution, research has been carried out to implement a smart iron rack with a mobile application that enables the user to perform the ironing process remotely. The device connects to the mobile application through Wi-Fi and performs many tasks, including hanger detection, wrinkle detection in clothes, identification of the steam irons' water levels, and sending notifications to the user. The iron rack consists of five hangers and a wide-angle camera that moves along a horizontal beam to detect the clothes. When the user specifies a hanger number, the camera moves to that hanger position to check whether a cloth is present; the steam irons attached to the beam then move vertically to iron both sides of the cloth. If no hanger number is specified, the clothes on all five hangers are ironed. The presence of a cloth on a particular hanger is detected using template matching: the SIFT (scale-invariant feature transform) algorithm captures the interest points of the hanger, and the shape of the hanger is taken as the key measure to decide whether a cloth is present. A Raspberry Pi, mounted on the microcontroller, processes the images to determine the level of wrinkles in the outfit before and after the ironing process. The GrabCut algorithm with a localised Gaussian Mixture Model (GMM) is used to classify foreground and background pixels and extract only the cloth from its background, and Canny edge detection with (100, 200) double thresholds is used to count the wrinkle pixels in the cloth. The system was tested with 100 outfits made of cotton and silk, and its accuracy was evaluated in two stages: it achieved an F1 score of 0.80 for detecting clothes on hangers and 0.71 for detecting wrinkles in the clothes. The smart iron rack is a cost-effective solution capable of remotely ironing five clothes at a time.
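The wrinkle measurement described above can be approximated with standard OpenCV calls; the sketch below segments the cloth with GrabCut (GMM-based) and counts Canny edge pixels with the stated (100, 200) thresholds. The image path, rectangle initialisation and iteration count are illustrative assumptions rather than values reported by the authors.

```python
# Hedged sketch of the cloth segmentation + wrinkle-pixel counting stage,
# using OpenCV's GrabCut and Canny with the (100, 200) thresholds mentioned
# in the abstract. File name and ROI are placeholders.
import cv2
import numpy as np

img = cv2.imread("hanger_view.jpg")                  # hypothetical capture
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)   # assumed cloth ROI

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
cloth_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cloth = img * cloth_mask[:, :, np.newaxis]           # cloth isolated from background

gray = cv2.cvtColor(cloth, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)                    # double thresholds from the abstract
wrinkle_pixels = int(np.count_nonzero(edges))
print("Wrinkle-pixel count:", wrinkle_pixels)
```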
Item: Altered Brain Wiring in Alzheimer’s: A Structural Network Analysis using Diffusion MR Imaging (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Mahadevan, J.; Ratnarajah, N.; Ranaweera, R.D.

Alzheimer's disease is a chronic neurodegenerative disorder and the most common form of dementia. It is characterised by cortical atrophy and disrupted anatomical connectivity as white matter fibre tracts lose axons and myelin degenerates. Biomarker tests are crucial to identify the early stages of the disease, and it is currently a key priority in Alzheimer's research to develop neuroimaging biomarkers that can accurately identify individuals at any clinical stage. Magnetic resonance imaging (MRI) can be considered the preferred neuroimaging examination for Alzheimer's disease because it allows accurate measurement of the three-dimensional volume of brain structures, and diffusion MRI (DMRI) in particular provides insights into aspects of brain anatomy that could never previously be studied in living humans. A comprehensive study of the structural brain network in Alzheimer's was carried out using diffusion MR imaging and graph theory algorithms that assess the white matter connections within the brain, revealing how neural pathways are damaged in Alzheimer's disease. A range of network-property measurements was calculated, and the community structure and hub regions of the network were inspected. Global measures of efficiency, clustering coefficient and characteristic path length confirm disrupted overall brain network connectivity in Alzheimer's. Broadly the same pattern of hub regions is preserved, but non-hub regions are affected, indicating that the disease alters the internal pattern of the network, especially its community structure. Modular analysis confirms this alteration, producing a different modular structure and an increased number of modules in Alzheimer's. Regional connectivity measures also indicate this change and show that network centrality shifts from the right hemisphere to the left in Alzheimer's. The knowledge gained from this study will support the search for strong imaging biomarkers of Alzheimer's disease.
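For readers unfamiliar with the global graph measures named above, the sketch below computes average clustering coefficient, global efficiency and characteristic path length for a connectome represented as a weighted adjacency matrix. The random matrix is a placeholder for the structural networks derived from diffusion MRI, and the use of NetworkX and a 90-region parcellation are assumptions, not the toolchain reported in the abstract.

```python
# Illustrative computation of the global network measures mentioned above
# (clustering coefficient, global efficiency, characteristic path length),
# using NetworkX on a placeholder connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 90                                   # e.g. an AAL-style parcellation (assumed)
A = rng.random((n_regions, n_regions))
A = (A + A.T) / 2                                # symmetric "streamline density" matrix
A[A < 0.7] = 0                                   # arbitrary sparsification threshold
np.fill_diagonal(A, 0)

G = nx.from_numpy_array(A)                       # weighted, undirected graph

clustering = nx.average_clustering(G, weight="weight")
efficiency = nx.global_efficiency(G)             # unweighted global efficiency
if nx.is_connected(G):
    path_length = nx.average_shortest_path_length(G)
else:
    path_length = float("nan")                   # defined only for connected graphs

print(f"clustering={clustering:.3f}  efficiency={efficiency:.3f}  path length={path_length:.3f}")
```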
Item: Investigation of the Impact of Clay as a Bulking Agent for Food Waste Composting at a Controlled Raised-up Temperature (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Jayawardana, M.D.S.B.; Milani, Y.; Silva, C.D.; Wijesinghe, S.

In agriculture, the nourishing quality of soil can be upgraded by transforming the organic matter in food waste into a humus-like substance, a process known as food waste composting. This is important because food waste causes odour and pollutes the environment. The moisture content (MC), nitrogen content, C/N ratio and aeration of the compost material can be altered through the bulking agents used during the process; these agents enhance the biodegradation of food waste and its transformation into effective compost, so the entire composting process relies on them. This study was therefore aimed at evaluating the influence of clay as a bulking agent for food waste composting at a controlled elevated temperature (50 °C), used to promote rapid activation of thermophilic microbes. A consecutive five-day study was carried out to analyse the fluctuations of pH, MC and organic matter content (OMC) in composting feedstock prepared with clay as the bulking agent at four weight percentages (0%, 5%, 10% and 25%). The surface morphology of the samples was analysed with a scanning electron microscope (SEM) at the initial stage and after five days of composting. The analysis of physical parameters showed that the organic matter was effectively converted to compost at 50 °C, as all parameters followed the gradual fluctuations expected in quality compost production. According to the results, clay had no effect on controlling the pH of the composting food waste samples: with increasing clay percentage, no significant change in pH was noticed compared to the blank waste sample. With increasing clay percentage in the feedstock, the initial MC dropped, and MC was further reduced as the clay content increased; similarly, OMC decreased markedly with increasing clay percentage. From these observations it can be concluded that clay acted as a good bulking agent for food waste composting, and the composting process showed a significant improvement at this elevated temperature. Further studies are currently being carried out to optimise the clay percentage for food waste composting at elevated temperature.

Item: Study on Theory and Practice in Software Quality Assurance (With Special Reference to Information Technology Professionals in Colombo, Sri Lanka) (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Opatha, N.W.K.D.V.P.

A software product is developed or engineered by a set of professionals, who support and maintain it over a long period of time. Software Quality Assurance (SQA) plays an imperative role in achieving high quality software, and the SQA process should be carried out from the inception of a project to its maintenance. This research studies the gap between theory and practical approaches to SQA by referring to Information Technology (IT) professionals in software firms in Colombo, Sri Lanka. It further aimed to explore the differences in SQA activities performed and in the perceptions of IT professionals regarding testing and SQA, while identifying the reasons for having or not having a separate SQA section. The study was conducted as a descriptive study with a sample size of 40. The major reasons identified for not having an SQA section were that the company was small in scale with few projects (47.1%) and the need to reduce cost (29.1%). The activities performed by companies with and without a separate SQA section differ considerably: testing programs and retesting them after correction were the most popular activities in both types, while companies with an SQA section performed various other activities apart from testing. A significant proportion of respondents (50%) were not even aware of SQA standards, and among them the largest proportion (82%) did not have a separate SQA section. Regarding the perception of professionals towards SQA and testing, there is a significant difference between the two groups. Furthermore, the largest proportion (39%) of professionals agreed that more up-to-date knowledge of the entire process is required, covering all activities and scenarios, with SQA present from the very beginning to the end of a project. Finally, having a separate SQA section would greatly benefit companies and contribute to improving quality through an enriched set of activities. A gap truly exists between what is emphasised in theory and the way SQA is implemented in practice.
Item: Mobile Biometrics: The Next Generation Authentication in Cloud-Based Databases (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Bhatt, C.; Liyanage, S.R.

In this era of information technology, mobile phones are widely used around the world not only for basic communication but also as a tool to manage information anywhere, at any time. These situations demand a high level of security for personal information and privacy protection, through personal identification against unauthorised use in case of theft or fraudulent use in a networked society. At present, the most widely adopted method is verification of a Personal Identification Number (PIN), which is problematic and may not be secure enough to meet this requirement. As reported in a review (Clarke and Furnell, 2005), many mobile phone users view the PIN as inconvenient, as a password that is complicated and easily forgotten, and very few users change their PIN regularly for higher security. Consequently, it is preferable to apply biometrics for the security of mobile devices and to enhance the reliability of wireless services. As biometrics aims to recognise a person using unique features of human physiological or behavioural characteristics, such as fingerprints, voice, face, iris, gait and signature, this authentication method naturally provides a high level of security. Conventionally, biometrics works with dedicated devices, for instance infrared cameras for acquiring iris images or acceleration sensors for gait acquisition, and relies on large-scale computer servers to perform identification, which suffers from several problems including bulky size, operational complexity and very high cost. Adding a wireless dimension to biometric identification provides a more efficient and reliable method of identity management across criminal justice and civil markets. Yet deploying cost-effective portable devices with the ability to capture biometric identifiers, such as fingerprints and facial images, is only part of the solution; an end-to-end, standards-based approach is required to deliver operational efficiencies, optimize resources and impact the bottom line. While the use of mobile biometric solutions has evolved in step with the larger biometrics market for some time, the growing ubiquity of smartphones and the rapid, dramatic improvements in their features and performance are accelerating the trend. This is the right time to take a closer look at mobile biometrics and investigate in greater depth how they can be used to their potential. Combined with advanced sensing platforms that can detect physiological signals and generate various measurements, many biometric techniques could be implemented on phones, offering a wide range of possible applications, for example personal privacy protection, mobile banking transaction security, and telemedicine monitoring. The use of sensor data collected by mobile phones for biometric identification and verification is an emerging frontier that must be progressively explored. We review the state-of-the-art technologies for mobile biometrics in this research.
Item: Forecasting Monthly Ad Revenue from Blogs using Machine Learning (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Dias, D.S.; Dias, N.G.J.

Blogs emerged in the late 1990s as a technology that allows Internet users to share information. Since then, blogging has evolved to become a source of living for some and a hobby for others. A blog with rich content and regular traffic can be monetized through a number of methods, including affiliate marketing, Google AdSense, offering courses or services, selling eBooks and paid banner advertisements. There is a direct relationship between the revenue that can be generated through any of these methods and the traffic the blog receives. Google AdSense is the leading service for delivering publishers' ads to website owners. Bloggers who have monetized their blogs attempt to maximize revenue by publishing articles in the hope that they will generate the targeted income, and those who hope to monetize their blog would benefit greatly from a way to forecast the monthly ad revenue the blog could generate; however, no tool exists in the market to help bloggers forecast this revenue. In this research, we examine the possibility of finding an appropriate machine learning technique by comparing linear regression, neural network regression and decision forest regression approaches to forecast the monthly ad revenue a blog can generate, using statistics from Google Analytics and Google AdSense. In conclusion, the decision forest regression model proved the best fit, with an accuracy of over 70%.
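A hedged sketch of the model comparison described above, using scikit-learn stand-ins for the three regressors (linear regression, a neural network regressor, and a random forest as a decision-forest analogue). The feature names and synthetic data are placeholders for the Google Analytics/AdSense statistics used in the study, not the authors' dataset.

```python
# Sketch of comparing the three regression approaches named in the abstract.
# RandomForestRegressor stands in for "decision forest regression"; the
# synthetic features imitate monthly traffic statistics and are not real data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 240                                              # hypothetical blog-months
X = np.column_stack([
    rng.integers(1_000, 200_000, n),                 # page views
    rng.integers(500, 100_000, n),                   # sessions
    rng.uniform(0.5, 10.0, n),                       # avg. session duration (min)
])
y = 0.004 * X[:, 0] + 0.002 * X[:, 1] + rng.normal(0, 20, n)   # ad revenue (USD)

models = {
    "linear regression": LinearRegression(),
    "neural network regression": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
    "decision forest regression": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:28s} mean R^2 = {r2:.3f}")
```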
Item: Voltage Sag Compensation using Dynamic Voltage Restorers: A Performance Analysis (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Senevirathne, K.M.P.C.B.; Ariyawansa, K.A.; Abeynayake, P.G.; Dampage, U.

Voltage sags are considered one of the most severe and frequent power disturbances occurring in the power system. The electronic devices used today are very sensitive to power quality, and any disturbance in the power supply will negatively affect end-user equipment. To overcome these voltage sags, the implementation of a dynamic voltage restorer (DVR) has been proposed to compensate for them; this technology can provide power regulation as well as power quality improvement. Electric vehicle (EV) batteries, connected in a vehicle-to-grid (V2G) system, act as the power source for the DVR, offering feasibility as well as mobility in delivering energy and thereby making them an ideal choice for energy storage used to improve power quality. This paper presents a MATLAB simulation of the performance of a dynamic voltage restorer which utilizes energy from the batteries of electric vehicles as its power source.

Item: Air Pollution Monitoring System Using Arduino (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Rishan, U.M.

An Arduino-based air pollution monitoring system is presented. Air pollution monitoring is an old but very useful concept in daily life. The level of air pollution has increased over time due to factors such as population growth, industrialization, increased vehicle use and urbanization, and air pollution directly affects the health of the population, for whom fresh air is essential. Air pollution monitoring originally relied on traditional methods, but today sophisticated computer systems are used to monitor air quality. In this project, an IoT-based air pollution monitoring system is built using an Arduino to monitor air quality accurately. The main objective of the project is to develop low-cost, ubiquitous sensor networks to collect real-time data on the urban environment. The system is connected to the internet, so air quality can be monitored over a web server. An alarm is also embedded in the system, which triggers when the air quality drops below a certain level, that is, when harmful gases such as CO2, alcohol vapour and NH3 are present in the air in sufficient amounts. The system displays the air quality in PPM on an LCD display as well as on a web page, so it can be monitored easily from anywhere using a computer or mobile device.
Item: Quadcopter based Surveillance System for an Industrial Environment (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Gamlath, G.B.H.; Dampage, U.; Hewawasam, H.P.M.Y.; Midigaspe, C.S.W.; Herath, H.M.S.U.

The protection of industrial infrastructure is a growing concern within industrial environments. Relying on humans for surveillance and security monitoring over a large area is inefficient, since people generally fail to maintain concentration for long periods and employing sufficient labour is costly. As the outcome of our research, a system architecture and design were developed for a perimeter security system that addresses this issue for large industrial facilities such as airports, seaports, logistics storage complexes and military establishments. It employs an integrated multi-sensor system to detect, assess and track perimeter intrusions. The sensors are integrated into a standalone system that processes and analyses the probability of possible threat scenarios while ignoring nuisance alarms. Once a threat is confirmed, a quadcopter is autonomously dispatched to the location using an advanced location identification system, which prioritises locations according to the severity of the threat while also alerting the security staff. On reaching the location, the quadcopter provides a real-time video feed while keeping focus on the detected target. The system is designed to operate 24/7 in all weather conditions, and a command and control centre provides situational awareness to support the security personnel responsible for monitoring and managing incidents. As a result, human security personnel are freed to focus on tasks that demand cognitive skills. The proposed method enhances the surveillance capacity of an installation as well as its rapid deployment capability, ultimately leading to an efficient and effective security system with adequate defence in depth that is not found in conventional security systems.

Item: A Prototype P300 BCI Communicator for Sinhala Language (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Manisha, U.K.D.N.; Liyanage, S.R.

A Brain-Computer Interface (BCI) is a communication system which enables its users to send commands to a computer using only brain activity. This brain activity is generally measured by electroencephalography (EEG) and processed using machine learning algorithms to recognise patterns in the EEG data. The P300 event-related potential is an evoked response to an external stimulus observed in scalp-recorded EEG, and it has proven to be a reliable signal for controlling a BCI. A P300 speller presents a selection of characters arranged in a matrix: the user focuses attention on one character cell while each row and column of the matrix is intensified in a random sequence. The row and column intensifications that intersect at the attended cell represent the target stimuli, and the rare presentation of the target stimuli within the random sequence constitutes an oddball paradigm that elicits a P300 response. The Emotiv EPOC provides an affordable platform for BCI applications. In this study, a speller application for Sinhala language characters was developed for Emotiv users and tested. Classification of the P300 waveform was carried out using a dynamically weighted combination of classifiers. A mean letter classification accuracy of 84.53% and a mean P300 classification accuracy of 89.88% were achieved on a dataset collected from three users.
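The abstract does not specify how the dynamically weighted combination of classifiers is formed; the sketch below shows one common way such a combination can be built, weighting each base classifier by its validation accuracy and summing weighted probability scores. It is purely an illustration under that assumption, with synthetic stand-ins for EEG features.

```python
# Illustrative (assumed) dynamically weighted classifier combination for
# P300 vs. non-P300 epochs: each base classifier's vote is weighted by its
# accuracy on a validation split. Data here are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 64))                 # fake EEG feature vectors
y = rng.integers(0, 2, size=600)               # 1 = target (P300), 0 = non-target

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = [LinearDiscriminantAnalysis(),
               LogisticRegression(max_iter=1000),
               SVC(probability=True)]

weights = []
for clf in classifiers:
    clf.fit(X_tr, y_tr)
    weights.append(clf.score(X_val, y_val))    # weight = validation accuracy
weights = np.asarray(weights) / np.sum(weights)

def combined_score(X_new):
    """Weighted sum of the classifiers' probability of the target class."""
    probs = np.stack([clf.predict_proba(X_new)[:, 1] for clf in classifiers])
    return weights @ probs

print(combined_score(X_val[:5]))
```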
Item: A New Public Key Cryptosystem (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Dissanayake, W.D.M.G.M.

In this paper a new CCA-secure public key cryptosystem is presented. The introduced cryptosystem is simple and based on the factorization problem. The cryptosystem has two public keys and two private keys, and therefore two encryption algorithms and two decryption algorithms. Here we hide the message in a matrix, which creates a difficult puzzle for adversaries. In this method, the public encryption key is (e, r, n), where e and r are prime numbers greater than 2 and less than n, and n is the product of two large primes. The decryption key is (d, s, n), where d and s are the multiplicative inverses of e modulo φ(n) and of r modulo φ(n), respectively. We select another integer g (< 2m) and place the message m and g in a 2×2 matrix X such that the determinant of X is odd. We encrypt the determinant of the matrix by raising it to the e-th power modulo n; g, which is also needed for decryption, is encrypted by raising it to the r-th power modulo n. Decrypting the first ciphertext by raising it to the power d modulo n and the second by raising it to the power s modulo n recovers the message m. For example, let p = 7, q = 11, e = 23, r = 29. Then n = pq = 7 × 11 = 77, φ(n) = 60, and the private exponents are d = 47 and s = 29. Let the message m = 30 and g = 7, so X = [[30, 7], [1, 2]]. From the encryption equations, C1 ≡ det(X)^e mod n ≡ 53^23 mod 77 ≡ 58 and C2 ≡ g^r mod n ≡ 7^29 mod 77 ≡ 63. The decryption equations give det(X) ≡ C1^d mod n ≡ 58^47 mod 77 ≡ 53 and g ≡ C2^s mod n ≡ 63^29 mod 77 ≡ 7. Then, using 2m − g = det(X), we find m = 30. If fast exponentiation is used, the computational complexity of the cryptosystem is polynomial time. The proposed cryptosystem is OW-CCA2 secure and can also be used with any standard security model to increase security.
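The worked example above can be checked directly with modular exponentiation; the short script below reproduces the numbers quoted in the abstract (p = 7, q = 11, e = 23, r = 29, m = 30, g = 7) using Python's built-in pow.

```python
# Reproduces the worked example from the abstract using fast modular
# exponentiation (pow with three arguments).
p, q = 7, 11
n = p * q                       # 77
phi = (p - 1) * (q - 1)         # 60

e, r = 23, 29
d = pow(e, -1, phi)             # 47, inverse of e mod phi(n)
s = pow(r, -1, phi)             # 29, inverse of r mod phi(n)

m, g = 30, 7
X = [[m, g], [1, 2]]            # det(X) = 2m - g = 53 (odd, as required)
det = X[0][0] * X[1][1] - X[0][1] * X[1][0]

C1 = pow(det, e, n)             # 53^23 mod 77 = 58
C2 = pow(g, r, n)               # 7^29 mod 77 = 63

det_rec = pow(C1, d, n)         # 58^47 mod 77 = 53
g_rec = pow(C2, s, n)           # 63^29 mod 77 = 7
m_rec = (det_rec + g_rec) // 2  # from 2m - g = det(X)

print(C1, C2, det_rec, g_rec, m_rec)   # 58 63 53 7 30
```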
Item: Development of Image Processing Algorithm for Vein Detection System (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Wanniarachchi, T.G.; Tharushika, R.T.P.; Panthaka, W.S.P.

Vein puncture, the process of obtaining intravenous access, is an everyday invasive procedure in medical settings. A major problem faced by nurses today is difficulty in accessing veins for intravenous drug delivery and other medical situations; hence a vein detection device that can clearly show veins is a useful biomedical engineering application, but access to existing devices is limited by their high cost. Among patients admitted to hospital wards, nurses struggle to access a peripheral venous line in a majority of cases, with a probability as high as 80% depending on the condition of the patient and the location of the hospital. Although a peripheral vein can sometimes be accessed in a single attempt, in a substantial number of patients the attending nurse needs multiple attempts to insert the needle successfully. Excessive vein punctures are both time- and resource-consuming, cause anxiety, pain and distress in patients, and can lead to severe injuries; they are therefore a significant problem in emergency rooms and during hospital stays. This research deals with the design and development of a low-cost, non-invasive subcutaneous vein detection system based on near-infrared imaging. Here, our priority is the development of an image processing algorithm to extract the vein pattern from an acquired near-infrared image. The vein detection system uses an infrared light source (740 nm) to illuminate the veins in the hand. A snapshot of the region is taken by a visible-light camera modified for the IR region, and it is processed with existing image processing techniques and the authors' validity function. Finally, the extracted vein pattern is projected back onto the skin of the patient.

Item: Parameter optimization of the II-VI thin-film photovoltaic tandem solar cell model of MZO/CdTe and CdS/CIGS (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Ratnasinghe, D.R.; Attygalle, M.L.C.

In this simulation model we have constructed a photovoltaic tandem device with a top cell consisting of an n-MZO (Mg-doped ZnO) window layer and a (II-VI) thin-film p-CdTe absorber layer, and a bottom cell with an n-CdS window layer and a thin (II-VI) p-CIGS absorber layer. The photovoltaic properties of the CdTe/CIGS tandem solar cell have been studied with the Solar Cell Capacitance Simulator (SCAPS-1D) software. The thicknesses of the n-CdS, p-CIGS and p-CdTe layers were varied to improve the tandem device parameters such as open circuit voltage, short circuit current, fill factor and device efficiency. All numerical simulations were conducted under one-sun illumination with the AM1.5G solar spectrum and without any light trapping methods. In this simulation we observed an open circuit voltage of 1.37 V, a short circuit current of 24.5 mA/cm², a fill factor of 85.9% and a highest efficiency value of 28.8493%. This study presents a tandem solar cell structure that can be used to enhance the performance of existing solar cells with minimal material usage.
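As a quick consistency check of the reported figures, the standard relation η = (Voc · Jsc · FF) / Pin, with the AM1.5G one-sun input power of 100 mW/cm², reproduces the quoted efficiency to within rounding; the snippet below performs that arithmetic.

```python
# Consistency check: efficiency from the reported Voc, Jsc and fill factor,
# assuming the standard AM1.5G one-sun input power of 100 mW/cm^2.
V_oc = 1.37          # open circuit voltage [V]
J_sc = 24.5          # short circuit current density [mA/cm^2]
FF = 0.859           # fill factor
P_in = 100.0         # AM1.5G one-sun input power [mW/cm^2]

efficiency = V_oc * J_sc * FF / P_in * 100   # percent
print(f"eta = {efficiency:.2f} %")           # ~28.8 %, close to the reported 28.8493 %
```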
Item: Feature Extraction from Old Tamil Newspapers Using Histogram Minima (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Kasthuri, S.; Darsha, M.; Ranathunga, L.

Archaeological records provide information about the history of human cultures and past events, and newspapers can be considered one of the main sources of archaeological data. Only a few systems exist for processing old Tamil newspaper articles. An automated image processing system is proposed as a suitable solution for efficient and flexible searching of old Tamil newspapers. This paper presents an image processing technique to extract features such as headlines and sub-headlines from scanned images of old Tamil newspapers. Historical newspapers become damaged over time, and the contents of their images become difficult to read; the image quality is improved by preprocessing techniques such as grayscale dilation, median filtering and adaptive binarization, which helps to extract the needed information from the image easily. Segmenting an article and identifying its heading helps to improve data manipulation. Feature extraction from old Tamil newspaper images follows these steps: horizontal smoothing to distinguish the paragraphs and the empty space between columns; vertical smoothing to distinguish each paragraph from the headlines; a logical AND operation to combine the outcomes of horizontal and vertical smoothing; and height measurement of each block followed by horizontal projection, which scans pixels along horizontal arrays to measure the black-pixel density against each row index and locate horizontal histogram minima. This step identifies the horizontal breaking points of individual regions within an article; the four major horizontal regions are headline, sub-headline, text and graphics. An irregular block may contain images within text, and vertical projection can be carried out to distinguish the images from the text. In the evaluation, fifty articles with different paragraph arrangements, some including images, were used. The regions were first identified and counted manually, and the results were then compared against the automatically identified regions. Regions within articles were identified with an efficiency of 80.09%, and the headline extraction accuracy was 81.616%.
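A rough sketch of the horizontal projection and histogram-minima step described above, on a binarized page image where text pixels are black (0); the smoothing window and minimum-gap threshold are assumed values, not figures from the paper.

```python
# Sketch: horizontal projection profile and minima detection on a binarized
# page (black text = 0, white background = 255). Window size and gap
# threshold are illustrative assumptions.
import numpy as np

def horizontal_breakpoints(binary_img, smooth_window=5, min_gap=3):
    """Return row indices where the black-pixel density falls to a sustained minimum."""
    black_per_row = np.sum(binary_img == 0, axis=1)            # black-pixel count per row
    kernel = np.ones(smooth_window) / smooth_window
    profile = np.convolve(black_per_row, kernel, mode="same")  # smoothed projection

    # Rows with near-zero density separate headlines, sub-headlines and text blocks.
    quiet = profile <= profile.max() * 0.02
    breakpoints, run = [], 0
    for i, q in enumerate(quiet):
        run = run + 1 if q else 0
        if run == min_gap:                                     # a sustained low-density band
            breakpoints.append(i - min_gap // 2)
    return breakpoints

# Example with a synthetic "page": two text bands separated by white space.
page = np.full((60, 200), 255, dtype=np.uint8)
page[5:15, 10:190] = 0
page[30:55, 10:190] = 0
print(horizontal_breakpoints(page))
```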
Item: An Initial Study on Understanding the Effect of Questions Structure on Students' Exam Performance (3rd International Conference on Advances in Computing and Technology (ICACT ‒ 2018), Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2018) Wijesinghe, S.; Irosha, K.P.C.; Rupasinghe, T.

The main challenge in evaluating students' performance is creating effective assessments that appraise students' learning rather than their memory and practice. According to education theories, creative and carefully designed assessments can clearly evaluate the degree of learning in students. Scaffolding, which refers to the degree to which a question guides the student through the problem-solving process, is a widely used method for aiding students' learning and conceptual understanding and for assessing performance in science and technology education. The objective of the current study was to understand the impact of exam question structure on the performance of first-year undergraduates, focusing specifically on the effect of scaffolded questions. In the current Sri Lankan science education context, only a limited number of research studies provide insight into the relationship between students' performance and question features. The current study, designed to address this issue, was conducted as part of the Chemistry for Technology course at the Faculty of Computing and Technology, University of Kelaniya, Sri Lanka. Two different structures of the same questions were given to students as part of an in-class quiz: the first was a direct question, while the second version (the scaffolded question) presented the same question in a step-by-step manner in which students had to answer several steps to solve the problem. The marks obtained for the two versions were averaged and compared to investigate whether the structure of the question has a significant effect on student performance. The average mark for the scaffolded question was 82 (±20) and for the direct question 71 (±35). According to the results, it was clear that students find direct questions considerably more difficult to interpret, and that scaffolding the questions increases student performance. From these preliminary data it can be concluded that scaffolding preferentially assists students' performance in examinations, and that surface features such as question structure can play a key role in examination performance. Further studies are currently being conducted to examine whether the improvement in performance due to scaffolding correlates with gender, school district or students' English literacy.