KICACT 2016
Permanent URI for this collection: http://repository.kln.ac.lk/handle/123456789/15608
Item Detection of Vehicle License Plates Using Background Subtraction Method (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Ashan, M.K.B.; Dias, N.G.J.
The detection of a vehicle license plate can be considered the primary task of a License Plate Recognition System (LPRS). Detecting a vehicle, locating the license plate and handling the non-uniformity of license plates are a few of the challenges in license plate detection. This paper proposes a method to detect the license plates in use in Sri Lanka. The work consists of a prototype developed using MATLAB's predefined functions. The license plate detection process consists of two major phases: detection of a vehicle from video footage or from a real-time video stream, and isolation of the license plate area from the detected vehicle. By sending the isolated license plate image to an Optical Character Recognition (OCR) system, its contents can be recognized. The proposed detection process may depend on factors such as lighting and weather conditions, the speed of the vehicle, efficiency in real-time detection, the non-uniformity of number plates, the specifications of the video source device and the fitted angle of the camera. In the first phase, detection of a vehicle from a video source is accomplished by separating the input video into frames and analysing these frames individually. A monitoring mask is applied at the beginning of processing to define the road area, so that the algorithm looks for vehicles only in the selected region. To identify the background, a foreground detection model based on an adaptive Gaussian mixture model is used.
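The background-subtraction idea at the core of the first phase can be sketched as follows. This is a simplified frame-differencing illustration, not the adaptive Gaussian mixture model the MATLAB prototype uses; the threshold and pixel values are invented for the example.

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Mark pixels whose absolute difference from the reference
    background exceeds a threshold; connected foreground pixels
    then form the blobs treated as vehicle candidates."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold

# Toy example: a static background and a frame with a bright "vehicle".
background = np.zeros((4, 6), dtype=np.uint8)
frame = background.copy()
frame[1:3, 2:5] = 200          # the moving object
mask = foreground_mask(frame, background)
print(mask.sum())              # 6 foreground pixels
```

In the real system the reference background is maintained adaptively by the mixture model rather than being a fixed frame.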
The learning rate, the threshold value that determines the background model and the number of Gaussian modes are the key parameters of the foreground detection model, and they have to be configured according to the environment of the video. The background subtraction approach is used to determine the moving vehicles. In this approach, a reference frame identified as the background in the previous step is subtracted from the current frame, and the blobs which are considered to be vehicles are detected. A blob is a collection of pixels, and the blob size has to be configured according to factors such as the angle between the camera and the road and the distance between the camera and the monitoring area. Even though a vehicle is identified in the above steps, each vehicle needs to be identified uniquely to eliminate duplicates being processed in the next phase. As the final step of the first phase, a distinct number is generated using the Kalman filter for each vehicle detected in the previous steps. This distinct number acts as an identifier for a particular vehicle until it leaves the global window. Then the second phase of license plate detection is initiated in order to isolate the license plate from the detected vehicle image. First, the input image is converted into grayscale to reduce the luminance of the colour image, and then it is dilated. Dilation is used to reduce the noise of an image, to fill unnecessary holes and to improve object boundaries by filling broken lines. Next, horizontal and vertical edge processing is carried out and histograms are drawn for both. The histograms are used to detect the probable candidate regions where the license plate is located. The histogram values of edge processing can change drastically between consecutive columns and rows.
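The edge-histogram localisation can be illustrated with the following Python sketch, in which column sums of a binary edge map stand in for the vertical-edge histogram; the window width and edge values are invented for the example.

```python
import numpy as np

def plate_candidate_columns(edge_map, width):
    """Slide a window over the column-wise edge histogram and return
    the column range with the maximum total edge response, i.e. the
    most probable horizontal position of the plate."""
    hist = edge_map.sum(axis=0)                  # vertical-edge histogram
    scores = [hist[i:i + width].sum() for i in range(len(hist) - width + 1)]
    start = int(np.argmax(scores))
    return start, start + width

# Toy edge map: dense edges (the "plate") in columns 4-7.
edge_map = np.zeros((5, 12), dtype=int)
edge_map[:, 4:8] = 1
print(plate_candidate_columns(edge_map, 4))      # (4, 8)
```

The full algorithm additionally smooths the histogram and removes low-valued regions before taking the maximum, as described next.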
These drastic changes are smoothed, and the unwanted regions are then detected using the low histogram values. By removing these unwanted regions, the candidate regions which may contain the license plate are identified. Since the license plate region is expected to have a few letters placed closely on a plain-coloured background, the region with the maximum histogram value is considered the most probable candidate for the license plate. In order to demonstrate the algorithm, a prototype was developed using MATLAB R2014a. Additional toolboxes such as the Image Acquisition Toolbox Support Package for OS Generic Video Interface, the Computer Vision System Toolbox and the Image Acquisition Toolbox were used for the development. When the prototype is used on a given video stream or file, first and foremost, the parameters of the foreground detector and the blob size have to be configured according to the environment. Then the monitoring window and the hardware configuration can be set. The prototype developed using the algorithm discussed in this paper was tested using both video footage and static vehicle images. These data were first grouped considering factors such as the non-uniformity of number plates and the fitted angle of the camera. Vehicle detection showed an efficiency of around 85% and license plate localisation an efficiency of around 60%; therefore, the algorithm showed an overall efficiency of around 60%. The objective of this work is to develop an algorithm which can detect vehicle license plates from a video source file or stream. Since detecting vehicle license plates is crucial for several complex systems, the proposed algorithm would fill that gap.

Item Android smartphone operated Robot (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Thiwanka, U.S.; Weerasinghe, K.G.H.D.
At present, the open-source Android platform is widely used in smartphones.
The Android platform is a complete software package consisting of an operating system, a middleware layer and core applications. Android-based smartphones are becoming more powerful and are equipped with several accessories that are useful for robotics. The purpose of this project is to provide a powerful, user-friendly computational Android platform with a simpler robot hardware architecture. This project describes a way of controlling robots using a smartphone and Bluetooth communication. Bluetooth has changed how people use digital devices at home or in the office, and has turned traditional wired digital devices into wireless ones. The project is mainly built around the Google voice recognition feature, which can be used to send commands to the robot. The motion of the robot can also be controlled using the accelerometer and the buttons in the Android app. Bluetooth communication is specifically used as the network interface. According to the commands received from the application, the robot's motion is controlled. The consistent output of a robotic system, along with its quality and repeatability, is unmatched. This project aims at providing a simple solution for building robots at very low cost but with the high computational and sensing capabilities provided by the smartphone that is used as the control device. Using this concept, we can help disabled people do their work more easily (e.g., a motorized wheelchair, or remotely controlling equipment using the smartphone). Using this project, we can also build surveillance and reconnaissance robots, design home automation systems and control any kind of device that can be operated remotely. Several hardware components were used, such as the Arduino Uno, the Adafruit Motor Shield, a Bluetooth module and an ultrasonic distance-measuring transducer sensor. The Uno is a microcontroller board based on the ATmega328P.
It contains everything needed to support the microcontroller; simply connect it to a computer using a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Arduino uses shield boards, which plug onto the top of the Arduino and make it easy to add functionality. This particular shield is the Adafruit Industries Motor/Stepper/Servo Shield. It has a very complete feature set, supporting servos, DC motors and stepper motors. The Bluetooth module is used to connect the smartphone to the robot and is configured using AT commands. The HC-SR04 ultrasonic sensor uses sonar to determine the distance to an object, as bats and dolphins do. It offers excellent non-contact range detection, from 2 cm to 400 cm (about 1 inch to 13 feet), with high accuracy and stable readings in an easy-to-use package. Its operation is not affected by sunlight or black materials. It comes with an ultrasonic transmitter and a receiver module. This system has two major parts: the Android application and the robot hardware. When developing the Android application, new Android technologies such as Google Voice and phone motion sensing were used. To improve the security of the application, a voice login was added. In addition, programs were added to change the login PIN and to scan for the robot, and two control programs were developed: one using buttons with the accelerometer and one using Google voice input. The Arduino IDE and the Arduino language were used to program the robot. Arduino has a simple methodology for running source code: a sketch has a setup function and a loop function. Variables and other declarations can be defined for the setup function, and the loop function runs continuously, executing the content of its body. The AFMotor header was used to obtain functions for controlling the motor shield and the motors, and the SoftwareSerial header was used to make the connection between the Arduino and the Bluetooth module.
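The command flow from the app to the robot can be sketched as below. The single-character command codes are hypothetical, since the abstract does not specify the actual serial protocol; the sketch only illustrates mapping recognized voice phrases to bytes sent over the Bluetooth link.

```python
# Hypothetical mapping from recognized voice commands to one-byte codes
# written to the Bluetooth serial link and dispatched by the Arduino loop.
COMMANDS = {
    "forward": b"F",
    "backward": b"B",
    "left": b"L",
    "right": b"R",
    "stop": b"S",
}

def to_robot_command(voice_text):
    """Translate a recognized phrase into a serial command byte,
    defaulting to stop for unrecognized input (a safe fallback)."""
    return COMMANDS.get(voice_text.strip().lower(), b"S")

print(to_robot_command("Forward"))   # b'F'
```

On the robot side, the Arduino loop would read one byte from the SoftwareSerial port and switch on it to drive the motor shield.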
Using the black-box test method, the integrity, usability, reliability and correctness of the Android application were checked. Finally, user acceptance tests were done with different kinds of users. A field test was done to check whether the robot can identify an object in front of it within the distance limit coded into the program. Today we are in the world of robotics; knowingly or unknowingly, we have been using different types of robots in our daily life. The aim of this project is to evaluate whether we can design robots ourselves to do our work in a low-budget and simple way. Finally, we think this project will be helpful for students who are interested in these areas and will offer a good solution for everyday problems. This project has many applications and very good future scope. It also allows modification of its components and parameters to obtain the desired output, and it allows us to customize and automate day-to-day things in our lives.

Item Animal Behavior Video Classification by Spatial LSTM (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Huy Q. Nguyen; Kasthuri Arachchi, S.P.; Maduranga, M.W.P.; Timothy K. Shih
Deep learning, which is the basis for building artificial intelligence systems, has become quite a hot research area in recent years. Current deep neural networks reach human-level recognition of natural images even on huge datasets such as ImageNet. Among the successful architectures, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are widely used to build complex models because of their advantages. A CNN reduces the number of parameters compared to a fully connected neural network. Furthermore, it learns spatial features by sharing weights between convolution patches, which not only helps to improve performance but also extracts similar features of the input. LSTM is an improvement of the vanilla Recurrent Neural Network (RNN).
When processing time-series data, the RNN gradient tends to vanish during training with backpropagation through time (BPTT), and LSTM was proposed to solve this vanishing-gradient problem. It is therefore well suited to managing long-term dependencies; in other words, LSTM learns temporal features of time-series data. In this study we focused on creating an animal video dataset and investigating how a deep learning system learns features from it. We propose a new dataset and experiments using two types of spatial-temporal LSTM, which allow us to discover latent information in animal videos. To the best of our knowledge, this method has not been used before with animal activities. Our animal dataset was created under three conditions: the data must be videos, so that our network can learn spatial-temporal features; the objects are popular animals such as cats and dogs, since it is easy to collect more data for them; and each video should contain one animal, without humans or any other moving objects. In our experiments, we performed the recognition task on the Animal Behavior Dataset with two types of models to investigate their differences. The first model is Conv-LSTM, an extended version of LSTM in which all input and output connections are replaced by convolutional connections. The second model is the Long-term Recurrent Convolutional Network (LRCN), proposed by Jeff Donahue. More LSTM layers can easily be added to both models in order to make a deeper network. We performed classification using 900 training and 90 testing videos and reached an accuracy of 66.7% on the recognition rate. We did not apply any data augmentation.
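The gating that gives LSTM its long-term memory can be written down in a few lines of numpy. This is the standard LSTM cell equations as a sketch, not the Conv-LSTM or LRCN implementations used in the study, and the dimensions below are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM time step. W has shape (4H, D+H); its rows are the
    input, forget, output and candidate-gate weights stacked together."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated memory update
    h_new = sigmoid(o) * np.tanh(c_new)               # exposed hidden state
    return h_new, c_new

# Shape check with a 3-dim input and a 2-dim hidden state.
D, H = 3, 2
h, c = np.zeros(H), np.zeros(H)
W, b = np.zeros((4 * H, D + H)), np.zeros(4 * H)
h, c = lstm_step(np.ones(D), h, c, W, b)
print(h.shape, c.shape)   # (2,) (2,)
```

The forget gate `sigmoid(f)` is what lets gradients flow across many time steps, which is why LSTM avoids the vanishing problem that plain RNNs suffer under BPTT.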
However, in the future we hope to improve our accuracy using preprocessing steps such as flipping and rotating video clips, and by collecting more data for the dataset.

Item Resource Efficiency for Dedicated Protection in WDM Optical Networks (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Suthaharan, S.; Samarakkody, D.; Perera, W.A.S.C.
The ever-increasing demand for bandwidth is posing new challenges for transport network providers. A viable solution to meet this challenge is to use optical networks based on wavelength division multiplexing (WDM) technology. WDM divides the huge transmission bandwidth available on a fiber into several non-overlapping wavelength channels and enables simultaneous data transmission over these channels. WDM is similar to frequency division multiplexing (FDM); however, instead of taking place at radio frequencies (RF), WDM operates in the optical region of the electromagnetic spectrum. In this technique, optical signals with different wavelengths are combined, transmitted together, and separated again. A multiplexer at the transmitter joins the signals together, and a demultiplexer at the receiver splits them apart. It is mostly used in optical fiber communications to transmit data over several channels with slightly different wavelengths. This technique enables bidirectional communication over one strand of fiber, as well as multiplication of capacity. In this way, the transmission capacity of optical fiber links can be increased greatly, and efficiency improves accordingly. WDM systems expand the capacity of the network without laying more fiber. A failure of the optical fiber, in the form of a fiber cut, causes the loss of a huge amount of data and can interrupt communication services. There are several approaches to ensuring mesh fiber network survivability.
In survivability, the path over which transmission actively takes place is called the working or primary path, whereas the path reserved for recovery is called the backup or secondary path. In this paper we consider the traditional dedicated protection method, in which backup paths are configured at the time the primary paths are established. If a primary path is brought down by a failure, it is guaranteed that there will be resources available to recover from the failure, assuming the backup resources have not failed as well. Traffic is therefore rerouted through the backup path with a short recovery time. In this paper, we investigate the performance by calculating the variation in spectrum efficiency for traditional dedicated protection in WDM optical networks. To evaluate the pattern of spectrum efficiency we use various network topologies in which the number of fiber links differs. Spectrum efficiency is the optimized use of spectrum or bandwidth so that the maximum amount of data can be transmitted with the fewest transmission errors. Spectrum efficiency is calculated by dividing the total traffic bit rate by the total spectrum used in the particular network. The total traffic bit rate is obtained by multiplying the data rate by the number of connections (lightpaths). The total spectrum is the product of the frequency used for a single wavelength and the total number of wavelengths (bandwidth slots) used in the network. We carry out the investigation with detailed simulation experiments on different single line rate (SLR) scenarios such as 100 Gbps, 400 Gbps, and 1 Tbps. In addition, this paper focuses on four standard optical network topologies with different numbers of fiber links to identify how the spectrum efficiency varies for each network. To evaluate the performance, we considered the 21-link NSFNET, the 30-link Deutsche network, the 35-link Spanish network, and the 43-link US network as specimens.
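The spectrum-efficiency calculation described above is a simple ratio and can be sketched directly; the numbers in the example are made up for illustration and are not the paper's results.

```python
def spectrum_efficiency(data_rate_gbps, n_connections,
                        wavelength_spacing_ghz, n_wavelengths):
    """Total traffic bit rate divided by total spectrum used.
    Gbit/s over GHz gives bit/s per Hz."""
    total_traffic = data_rate_gbps * n_connections            # Gbit/s
    total_spectrum = wavelength_spacing_ghz * n_wavelengths   # GHz
    return total_traffic / total_spectrum

# Illustrative values: ten 100 Gbps lightpaths on a 50 GHz grid
# occupying 40 wavelength slots in total.
print(spectrum_efficiency(100, 10, 50, 40))   # 0.5 bit/s/Hz
```

Under dedicated protection, each connection also consumes backup wavelength slots, which enter the denominator and lower the resulting efficiency.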
In our simulation study, the spectrum efficiency for each network is plotted in a separate graph and compared with the others. Our findings are as follows. (1) The spectrum efficiency for each SLR is almost similar and comparable across all the network topologies. (2) Unlike topologies with a low number of fiber links, topologies with a high number of fiber links show higher spectrum efficiency; that is, the spectrum efficiency increases as the number of links increases.

Item Performing Iris Segmentation by Using Geodesic Active Contour (GAC) (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Yuan-Tsung Chang; Chih-Wen Ou; Jayasekara, J.M.N.D.B.; Yung-Hui Li
A novel iris segmentation technique based on active contours is proposed in this paper. Our approach addresses two important issues: pupil segmentation and iris circle calculation. If the correct centre position and radius of the pupil can be found in the test image, then the iris can be segmented precisely in the result. The final accuracy reached around 92% for the ICE dataset, and a high accuracy of 79% was obtained for UBIRIS. Our results demonstrate that the proposed iris segmentation performs well, with high accuracy, on iris images.

Item Development of a Location Based Smart Mobile Tourist Guide Application for Sri Lanka (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) de Silva, A.D.; Liyanage, S.R.
Tourism plays a momentous role in the accomplishment of macroeconomic stability in Sri Lanka. It is one of the main industries generating high earnings for Sri Lanka. According to observations and collected data, the amount of foreign currency earned from the tourism industry has decreased significantly during the past few years. This can be partially attributed to the lack of loyalty of physical tour guides, as well as tour guide booklets not being updated regularly.
Considering the above issues, we propose a mobile application named "Live Tour Guide" to make travelling easier for tourists, thereby creating a positive impact on the economy of Sri Lanka. A meticulous investigation was carried out to determine the software and hardware requirements of this automated tour guide application. The feasibility analysis for the system was carried out in three areas: operational, economic and technical. Since this application contains details about hotels, attractions and the longitudes/latitudes of different locations, an external source was needed to collect these data. Under the assumption that the relevant websites are updated regularly, dedicated websites were used to gather the required information. The direct observation data collection method was also used to identify the work carried out by tour guides, their behaviour, the way they treat tourists, and so on. The system was developed around two main elements: the mobile application and the web server. The web server is used to access cached data or information through the mobile application. Information about different locations, such as longitudes and latitudes, was gathered using the Global Positioning System (GPS). Google Maps was employed to access map-based services. The central web server can be accessed over the Internet using wireless connectivity or a 3G connection. The web server serves the current location information and also provides details of the hotels and attractions situated close by, so that tourists can plan their journey accurately in advance with minimal effort. An external database was developed using MySQL in order to maintain the details of the places of interest. JavaScript Object Notation (JSON) objects are used to exchange location data between the web server and the application.
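The "close by" lookup can be sketched with the haversine great-circle distance. The hotel names and coordinates below are illustrative only; the real application queries its MySQL database through the web server rather than a Python list.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative query: hotels within 5 km of a user near Colombo Fort.
hotels = [("Hotel A", 6.9344, 79.8428), ("Hotel B", 7.2906, 80.6337)]
user = (6.9271, 79.8612)
nearby = [name for name, lat, lon in hotels
          if haversine_km(user[0], user[1], lat, lon) <= 5.0]
print(nearby)
```

The same distance test, applied to points sampled along the routed path, would yield the hotels and attractions listed for a chosen route.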
The Google Maps Application Programming Interface is used to access Google Maps. The "Live Tour Guide" mobile application was developed to provide real-time location-based services according to the requirements of tourists. The system has been tested to operate on any smartphone with Android version 4.2 or later. When a user enters the source and the destination, the application displays the route, the estimated journey time without traffic and the distance between the origin and the destination. Along with that, it provides two options, "Locations" and "Hotels", which give details of all the available hotels and attractions located close by along the preferred route. Apart from the mobile application, a "Live Tour Guide" web application has also been developed for maintaining the database in a user-friendly manner; it can be used by travel agencies. By using all the above-mentioned technologies together with real data, the objective of developing this Android-based "Live Tour Guide" application was successfully achieved. Even though some solutions are already available as tour guides, this application allows tourists to plan their tour before starting the journey, by supporting a variety of origins and destinations. It allows tourists to choose the locations they prefer to visit during their journey, since it provides all the relevant information, including prices. Any user equipped with an Android-based smartphone is eligible to use this application. However, in the future the system should be enhanced to display all the public places available within a selected route, and a way is needed to access the "Live Tour Guide" application accurately even without an internet connection.
Currently, the database is updated manually, but it would be better to update it automatically at regular intervals so that the application operates more accurately. With this innovative application, more tourists can be attracted, creating a positive impact on the economy of Sri Lanka.

Item Optimizing the Member Selection for Ensembles of Classifiers: An Application of Rainfall Forecasting in Sri Lanka (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Nagahamulla, H.R.K.
A collection of classifiers trained to do the same task is called an ensemble of classifiers. Ensembles can be created using a set of classifiers of the same type or of different types (Artificial Neural Networks (ANN), Support Vector Machines (SVM), Decision Trees, etc.). The generalization ability of an ensemble is significantly higher than that of a single classifier. To achieve this increased generalization ability, the members of an ensemble have to be accurate (able to produce correct forecasts) and diverse (making errors in different regions of the error space). However, accuracy and diversity are two conflicting conditions that have to be balanced carefully to achieve good performance. Thus, members of an ensemble need to be selected carefully so that they strike the right balance between accuracy and diversity. This study aims to optimize member selection for ensembles using Genetic Algorithms (GA) to increase ensemble performance in the context of time-series forecasting. The selected application is rainfall forecasting in Sri Lanka. Rainfall is very difficult to forecast accurately because it is a very complex hydrological process. Forecasting rainfall requires manipulating huge datasets with a large number of variables, but accurate rainfall forecasts are in high demand because of the close relationship rainfall has with human life.
There are three steps in creating an ensemble: creating the pool of classifiers, selecting the members of the ensemble from the pool, and combining the selected members using a combiner method. The performance of the ensemble depends on the techniques used in each of these steps. First, a pool of classifiers, including different types such as SVM, Back Propagation Network (BPN), Radial Basis Function Network (RBFN) and Generalized Regression Neural Network (GRNN), was created by training the classifiers on different training data. Then a number of ensembles were created by randomly selecting different combinations of classifiers from the pool and combining them using a separate GRNN. These ensembles formed the initial population of the GA. A simple binary genetic algorithm was then used to create new generations of ensembles and find the ensemble that gave the best result. The fitness of each ensemble was calculated so as to balance its accuracy and diversity. The chromosomes were ranked and sorted according to their fitness. The mating pool was then prepared by selecting the chromosomes with the highest fitness, and pairs were selected using roulette wheel rank weighting. Mating took place using one-point crossover with a crossover probability of 0.6, and the new generation was mutated with a mutation probability of 0.1. To train and test the models, rainfall data from 1961 to 2001 (41 years) for Colombo, Sri Lanka were used. The input dataset consisted of 26 variables obtained from the NCEP_1961-2001 dataset, and the output was the daily rainfall of Colombo. The dataset was partitioned into training data (the first 60%), validation data (the next 20%) and testing data (the remaining, more recent 20%). To create different training datasets from the available training data, the moving block bootstrap method was used.
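The moving block bootstrap can be sketched as follows; the block length and counts mirror the figures given in the abstract, while the selection of ten blocks per classifier uses an arbitrary seed for illustration.

```python
import random

def moving_block_bootstrap(series, block_len, n_blocks, seed=0):
    """Resample a time series by concatenating randomly chosen
    overlapping blocks, preserving short-range temporal structure
    that plain record-wise bootstrapping would destroy."""
    rng = random.Random(seed)
    blocks = [series[i:i + block_len]
              for i in range(len(series) - block_len + 1)]
    sample = []
    for _ in range(n_blocks):
        sample.extend(rng.choice(blocks))
    return blocks, sample

# With 10958 records and block length 1096 there are 9863 overlapping
# blocks; each classifier is trained on 10 randomly chosen blocks.
series = list(range(10958))
blocks, sample = moving_block_bootstrap(series, 1096, 10)
print(len(blocks), len(sample))   # 9863 10960
```

Drawing a different random sample per classifier is what makes the pool members see different training data and hence behave diversely.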
The dataset, containing 10958 records, was split into 9863 overlapping blocks of length 1096, and out of these 9863 blocks, 10 blocks were selected to train each classifier. To validate the proposed method, another two ensembles were created using two well-known ensemble creation methods, bagging and boosting. The performance of the best ensemble (ENN-GA) was compared with the performance of a single SVM, BPN, RBFN and GRNN, the best-performing ensemble in the initial population (ENN), the bagging model and the boosting model. The forecasting accuracy of each model was measured on the test dataset using the Root Mean Square Error (RMSE), the Mean Absolute Error (MAE) and the Coefficient of Determination (R2). The best-performing ensemble comprised two SVM, three BPN, two RBFN and five GRNN classifiers. The number of generations needed for convergence was 287. The following table summarizes the results for the individual classifiers, ENN, bagging, boosting and ENN-GA. The proposed ENN-GA model gave more accurate results than the single classifiers used in the study, with smaller RMSE and MAE values and a larger R2, while its time and space requirements were very small. The proposed model managed to predict the overall rainfall with reasonable accuracy: zero rainfall accurately, smaller rainfall with slight differences, and some higher rainfall with considerable differences. These larger differences were obtained for very high rainfall that occurred suddenly. Although such occurrences were very few, the difference between the actual and forecasted rainfall was high for them. The RMSE values were larger than the MAE values because the errors in high rainfall are magnified in the RMSE.
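The GA loop described above can be sketched as below. The fitness function here is a stand-in that just counts set bits, whereas the study scored each bit string (a member-selection mask) by balancing ensemble accuracy and diversity; the population size and chromosome length are also invented.

```python
import random

def rank_roulette_pick(ranked, rng):
    """Roulette wheel rank weighting: the best-ranked chromosome
    gets the largest slice of the wheel, independent of raw fitness."""
    n = len(ranked)
    weights = [n - i for i in range(n)]          # linear rank weights
    return rng.choices(ranked, weights=weights, k=1)[0]

def next_generation(pop, fitness, rng, p_cross=0.6, p_mut=0.1):
    ranked = sorted(pop, key=fitness, reverse=True)
    parents = ranked[:len(pop) // 2]             # mating pool: fittest half
    children = []
    while len(children) < len(pop):
        a = rank_roulette_pick(parents, rng)
        b = rank_roulette_pick(parents, rng)
        if rng.random() < p_cross:               # one-point crossover
            cut = rng.randrange(1, len(a))
            a = a[:cut] + b[cut:]
        # bit-flip mutation with probability p_mut per gene
        children.append([bit ^ (rng.random() < p_mut) for bit in a])
    return children

rng = random.Random(1)
pop = [[rng.randint(0, 1) for _ in range(12)] for _ in range(20)]
for _ in range(30):
    pop = next_generation(pop, sum, rng)
print(max(sum(ind) for ind in pop))
```

In the study, each 1 in a chromosome marks a pool classifier included in the candidate ensemble, and evolution proceeds until the fittest selection mask converges.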
The proposed method outperformed the single classifiers, the ENN model and the bagging and boosting models in forecasting rainfall for Colombo, Sri Lanka.

Item Human Body Component Tracking and Object Detection Using Monocular Video Sequence (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Ariyasinghe, G.N.P.; Perera, D.H.L.; Wijayarathna, P.G.
Medical education plays a vital role in a country's education system. It is essential that a medical student be provided with a realistic environment in order to learn and practice disease diagnostics effectively. In medical education, diseases are initially determined by diagnosing abnormal heart and lung sounds. Practicing such diagnostics requires a large pool of patients representing each disease to be learnt. However, providing such a large number of patients for an examination session is impractical, and finding patients representing each disease to be learnt is another challenge. The current method of practicing diagnostics via heart and lung sounds is to use either a dummy or a healthy human and identify the disease according to symptoms described by the performer or the doctor/lecturer. This leads to an unrealistic examination environment for the medical student, thereby decreasing the productivity of the medical education system. Meanwhile, object detection for human body pose and component tracking from video input has been an active research field, motivated by various applications including human-computer interaction, motion capture systems and gesture recognition. One of the most important biomedical applications focuses on building simulators for activities in the medical field. Most current tracking methods involve multiple cameras and many markers placed on key body points. This makes the examination environment less realistic, and such methods have proven to be slow and unreliable.
Furthermore, many tracking systems must be initialized by a human operator before they can track a sequence. Pose tracking using 3D Time-of-Flight (TOF) cameras exists; however, TOF cameras are expensive, and since they only detect infrared-emitting surfaces, they are difficult to use in many applications. Several learning-based techniques have been proposed for monocular sequences, but these rely on accurate body silhouette extraction and require a relatively large number of training images. SimHaL (Hybrid Computer-based Simulator for Heart and Lung disease diagnosis to enhance medical Education) is an ongoing project which intends to build a hybrid computer-based simulator with integrated human and computer components. Its aim is to enhance the productivity of medical education by simulating patient examination in a more realistic environment; it acts as a simulator for disease diagnosis in which the medical student identifies the relevant heart and lung sounds. The current state of SimHaL focuses on detecting the location where the chest piece of a stethoscope is placed on a patient's torso. Since the major target is to build an optimally realistic examination environment for the medical student, a single camera is used to monitor the activity. The output is a monocular video sequence, which is the only source available for identifying the torso and the chest piece as objects. The methodology focuses on object detection in two stages: (1) detecting the chest piece of the stethoscope, and (2) detecting the patient's torso. In order to identify the chest piece, a circle detection program was implemented using OpenCV. The monocular video sequence is divided into frames, and circles are detected based on a provided range of radius values that approximates the radius of the chest piece. Other circles detected in the background are discarded if their radius is not within the provided range.
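The radius filtering just described, and the frame-to-frame association that follows it, can be sketched as below. The coordinates and threshold are invented for the example, and the real prototype obtains its candidate circles from OpenCV's Hough circle detection rather than a hand-written list.

```python
def filter_by_radius(circles, r_min, r_max):
    """Keep only circles whose radius lies within the expected
    chest-piece range; circles are (x, y, radius) tuples."""
    return [c for c in circles if r_min <= c[2] <= r_max]

def same_circle(prev, cand, max_shift):
    """Accept a candidate only if its centre has moved less than the
    threshold relative to the circle tracked in the previous frame."""
    dx, dy = cand[0] - prev[0], cand[1] - prev[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_shift

detected = [(40, 60, 9), (300, 20, 30), (45, 62, 10)]   # toy detections
candidates = filter_by_radius(detected, 8, 12)          # drops the r=30 circle
prev = (42, 61, 10)                                     # circle from last frame
tracked = [c for c in candidates if same_circle(prev, c, max_shift=20)]
print(tracked)
```

Choosing `max_shift` on the order of the torso width, as the prototype does, rejects background circles that jump implausibly far between consecutive frames.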
Next, the motion of the identified chest piece is tracked by computing the difference between the Cartesian coordinates of circles detected in adjacent frames. Circles whose displacement exceeds a certain threshold value are discarded. Currently this threshold is a fixed value assumed to be the width of the patient's torso. This rules out unusual movements of any detected circle and ensures that the circle detected in the current video frame is the same circle detected in the previous frame, now moved to a new location. The results of this approach consist of a concurrent output of x and y Cartesian values relative to the video frame, along with a video sequence with a circle drawn in each frame. The radius of the circle is the radius of the chest piece detected at the beginning, and the x and y values indicate the circle's centre. The research has so far achieved identification of the chest piece; detecting the human torso, and thereby determining the location where the chest piece is placed, is yet to be implemented.Item Low Cost Electronic Stethoscope(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Nilmini, K.A.C.; Illeperuma, G.D. Among many medical devices, the stethoscope is the most widely used for medical diagnosis. Auscultation is a non-invasive, painless, quick procedure that can identify many symptoms, and was in use as early as the 18th century. However, the major drawbacks of the conventional stethoscope are its extremely low sound level and its inability to record or share heart and lung sounds. An electronic stethoscope can overcome these problems and has the potential to save many lives. Although electronic stethoscopes are already available in the market, they are very expensive. Therefore, the objective of the project was to build a low-cost electronic stethoscope. At the basic level, it facilitates listening to heart sounds more clearly.
Other facilities include the ability to control the sound level, to record the sound and share it as digital information, and to display the sound as graphs for improved diagnostics. Recording and sharing facilities were included because of the importance of tracking a patient's medical history and of discussion among a group of physicians; they can also facilitate remote diagnostics where experts may not be readily available. A 50 mm chest piece from an acoustic stethoscope was used because of its optimized design. The chest piece's diaphragm was placed against the chest of the patient to capture heart sounds, which were converted to electronic signals by a microphone. An electret condenser microphone was selected over several other microphone types due to its small size (3 mm radius) and its ability to detect low-frequency sounds (~30 Hz). The electrical signals were amplified by a preamplifier: a TL072 integrated circuit, providing a gain of 3.8. The output of the preamplifier circuit was sent to a Sallen-Key low-pass filter circuit, which passed the first heart sound (S1, 30 Hz to 45 Hz) and the second heart sound (S2, 50 Hz to 70 Hz). The cut-off frequency was set to 100 Hz, determined by a capacitor value of 0.047 μF and a resistor value of 33 kΩ. Taking advantage of the TL072 being a dual operational amplifier on a single die, the second operational amplifier was used for the filter circuit. The output of the filter circuit was amplified to an appropriate amplitude for headphones and speakers by an audio power amplifier: an LM386 integrated circuit, providing a gain of about 20. Speakers and headphones were used as the output, and provision was made to use any standard 3.5 mm headphones. The constructed circuit was validated by comparing the original heart sound and the amplified output on a digital oscilloscope.
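The quoted cut-off can be checked against the first-order corner-frequency formula f_c = 1/(2πRC), which also sets the corner of an equal-component Sallen-Key stage (assuming equal components, as the single quoted R and C values suggest):

```python
import math

def corner_frequency(r_ohm, c_farad):
    """Corner frequency f_c = 1 / (2*pi*R*C) for an equal-component RC stage."""
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

# 33 kOhm and 0.047 uF, as quoted in the abstract
fc = corner_frequency(33e3, 0.047e-6)
print(round(fc, 1))  # 102.6
```

The result, about 102.6 Hz, is consistent with the ~100 Hz cut-off stated above.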
Once the implementation was completed, its sound quality was compared against an acoustic stethoscope by six independent observers; five of them heard the heart sounds more clearly with the electronic stethoscope than with the acoustic one. The accuracy of the heart sound was confirmed by a person with a thorough knowledge of anatomy. A recording facility was provided using the open-source software Audacity, with the computer's audio card capturing the sound. The saved heart-sound file can be used in several ways: it can be stored in a database, shared via e-mail, and played back for further examination during diagnosis. Heart sounds were visualized as a graph on the computer. An Arduino was used to digitize the audio signal (1024-level resolution) and send the data through a virtual COM port to the computer for graphing; it can also record sounds to an SD card when a computer is not available. Similar sound quality was found when comparing direct listening with a recorded sound. In conclusion, the implemented system was considered a success due to its low cost, ease of implementation and ability to provide the most useful functions required of an electronic stethoscope.Item Smart Meter- Multifunctional Electricity Usage Meter(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Bandara, K.R.S.N.; Weerakoon, W.A.C. The Internet of Things (IoT) is a modern concept that provides a new approach to connecting people with people, and people with devices, via the Internet. The concept offers solutions to many practical problems, such as connecting people easily, controlling things remotely, and managing people or devices easily, and combined with other technologies it becomes even more useful. One such modern technology is multi-agent technology; connecting these two technologies can produce solutions to many human problems.
Software agents are computer programs trained for particular tasks under varying environmental conditions; they can act autonomously in response to sudden changes in an artificial environment. A multi-agent system (MAS) is a collection of software agents operating in an artificial environment. Applying IoT and MAS together is a powerful way of creating solutions for major problems. One such problem is uncontrolled power usage; the Smart Meter (SM), which integrates both IoT and MAS concepts, is a solution to it. Electricity is the major form of energy used for everything in the modern world, playing a major role in industry as well as in homes; more than 50% of households use electricity as their primary power source. With heavy usage of electricity, wastage also grows. This wastage burdens household economies, so people need a better way to eliminate it, and it also puts the world at risk: because the resources used to generate electricity are depleting, wastage hastens their exhaustion. The key to acting on this issue is that people must think about it and act for themselves, so wastage must be presented to them in a manner that lets them feel the problem. Presenting current usage in a representative manner, and predicting future usage from past consumption details, makes it much easier for users to understand how they must act, both for themselves and for the world. Implementing methods to act according to the resulting plan is an important component of this concept, and remote access and automatic control add further value, providing a better and easier way to eliminate wastage. Developing countries struggle to eliminate electricity wastage because they spend a large portion of their economies on generating electricity.
If a wastage-elimination plan is expensive, it is not feasible for those countries; they need inexpensive power-saving equipment. The Smart Meter is such equipment, developed using low-cost components such as Arduino and Raspberry Pi. The complete SM system contains three parts. The physical system contains components connected to the home electricity system to collect consumption data for areas of the home; it comprises microcontrollers, sensors, etc. The processing unit contains the multi-agents and the other software used to control the microcontrollers; it is the core of the SM, performing all calculations, analytical processes and report generation. The UI component displays the analytical results generated by the processing unit and lets the user control the electricity system remotely. All three units of the SM can be built cost-effectively, which is appropriate for developing countries. Considering the situation in Sri Lanka, another major issue is that there is no good connection between the domestic system and the service provider, so consumption details are collected manually; this delays the service provider's processing of consumption data and the production of analysis reports for households. The SM is a better way of collecting consumption details and analysing household consumption data so that people are motivated to save power, because the SM predicts future consumption and identifies weaknesses in the user's power usage. The SM uses the analytical language R in its core module, programmed to forecast the user's future consumption and present it in an understandable manner. The SM concept can be extended to industrial users and to building power grids at the level of an area or the whole country, giving a new interface to the power-grid concept.
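The forecasting role described for the processing unit can be illustrated with a minimal sketch. This is written in Python with a simple least-squares trend fit rather than the R models the abstract mentions, and the consumption figures are invented:

```python
def forecast_next(consumption, horizon=3):
    """Fit a least-squares linear trend to past daily consumption (kWh)
    and extrapolate it 'horizon' steps ahead."""
    n = len(consumption)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(consumption) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, consumption))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n + h) for h in range(horizon)]

# invented sample: a household whose usage creeps up by about 0.5 kWh/day
past = [10.0, 10.4, 11.1, 11.5, 12.0]
print(forecast_next(past))
```

A real deployment would use richer models, but the idea is the same: turn past consumption into a forward-looking figure the user can act on.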
This will lead the whole country to stand for power saving as one. The SM is a MAS integrated with the IoT concept to achieve the above tasks, implemented using the MadKit agent platform and the Java language. Each software agent is assigned a task, and the agents work together to accomplish one major task. All devices connected to the central system, communicating with each other as well as with the user, realize the IoT concept. Together, these two technologies make a complete solution for electricity wastage.Item Object Recognition Application - Mind Game(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Senanayake, H.M.I.M.; Weerasinghe, K.G.H.D. Visualizing is one of the main methods of remembering something; students can remember things as a story or as a component of an image. This application is designed to develop that skill in the form of a game. How is the game played? First the app shows a sequence of images, and the user should remember what he sees: not only the image but also how it is drawn. A single image contains colour combinations, shapes, angles and many more details, and the more of them the user can remember, the higher the score. The application then tests how much the user remembers. The user is provided with a drawing canvas and a pencil tool and is asked to draw, from memory, the first image of the sequence; a second canvas is then given for the second image, a third for the next, and so on. The application then processes the drawings, matches them to the corresponding images of the sequence, and scores the user on the details remembered accurately. How does the application work? The most important part of the application is object recognition.
Many algorithms exist at present for recognizing objects and patterns, such as feature-based, appearance-based and geometry-based methods. The most popular and widely used techniques are edge- and angle-based algorithms and pixel-based algorithms; appearance- and geometry-based techniques are rarely used in application development, so in this research I cover that area. My recognition algorithm identifies images by converting image details into a mathematical model. First, the algorithm identifies the shapes in the image, and each shape is given a sequence of values based on relative area, perimeter, position coordinates and other special characteristics evaluated by a standard function. Each shape in an image has its own mathematical structure describing its role in the image, so after processing all shapes of the image as mathematical points, the image can be saved as a mathematical structure, and each object has a unique mathematical model. When recognizing an object in a newly drawn image, the new image is converted into a mathematical model using the same algorithm and matched against the models previously processed and saved. The main advantage of this method is that the number of values that must be saved as image data in the mathematical model is far lower than in other feature-based techniques, which improves speed; it is thus considerably more efficient than edge- and angle-based techniques for recognizing images with non-discrete lines. To match the models, I apply a nearest-neighbour algorithm to the mathematical models, and the best-matching image is selected. On the implementation side, the previously processed mathematical models representing the images are saved as a two-dimensional matrix. The rows of the matrix represent the image identity (image or object name) and the characteristics of the images.
Each column of the matrix represents a single image, so the number of rows equals the number of image characteristics plus one, and the number of columns varies with the number of images being saved. The matrix is saved in a .mat file (MATLAB's binary data-container format). With this method, retrieving and reading data for matching images is very easy, because this single matrix represents the whole database of images. Accuracy depends on the growth of the matrix: if the matrix holds more details about objects, the program can identify objects more accurately. To further increase the accuracy of identifying objects, we can simply increase the number of images of the same object drawn from different angles or in different ways and save those in the matrix. For example, if the object to recognize is a tree, we can save a set of drawings of mango trees, coconut trees, pine trees, etc. in the matrix, so any tree will be identified accurately as a tree by the program, no matter the kind of tree. In the gaming application these methods are used to define different game levels and give the user a new experience. The primary objective of this research was to recognize non-discrete pencil-drawn objects accurately; secondly, the above techniques are used to develop an application that exercises the human brain while giving a gaming experience. The designed algorithm is flexible enough to process any number of images at once, convert them into mathematical models and save all those models in a single matrix, and the designed program accurately identifies pencil-drawn objects using this matrix.
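The nearest-neighbour matching over the feature matrix could be sketched as follows. The labels and feature values here are invented for illustration; the real system derives its features from shape area, perimeter, position and other characteristics:

```python
import math

# Each column of the "matrix" is one image: an identity label plus a
# fixed-length feature vector (relative area, perimeter, position, ...).
# The feature values below are invented for illustration.
database = {
    "mango_tree":   [0.42, 1.8, 0.5, 0.6],
    "coconut_tree": [0.35, 2.1, 0.5, 0.7],
    "house":        [0.60, 1.2, 0.4, 0.3],
}

def classify(features):
    """Nearest-neighbour match: return the label whose stored feature
    vector has the smallest Euclidean distance to the query vector."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, v)))
    return min(database, key=lambda label: dist(database[label]))

print(classify([0.40, 1.9, 0.5, 0.62]))  # prints mango_tree
```

Because only a handful of numbers is stored per image, the lookup stays cheap even as the matrix grows, which is the speed advantage the abstract claims.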
Later, by including more image-processing techniques such as image segmentation, this method can be enhanced to process and recognize other, more complex images as well.Item Applying Smart User Interaction to a Log Analysis Tool(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Semini, K.A.H.; Wijegunasekara, M.C. A log file is a text file. People analyze log files to monitor system health, detect undesired behaviours in the system, recognize power failures and so on. In general, log files are analyzed manually using log-analysis tools such as Splunk, Loggly and LogRhythm. All of these tools analyze a log file and then generate reports and graphs that represent the analyzed log data. Log-file analysis can be divided into two categories: analyzing history log files and analyzing online log files. This work considers only the analysis of history log files, for an existing log-file analysis framework. Most log-analysis tools support analyzing history log files. To analyze a log file using any of the mentioned tools, it is necessary to select a time period. For example, if a user analyzes a log file with respect to system health, the analysis reports the system health as 'good' or 'bad' only for the given time period; in general these tools provide the average system health for a given period. Such analysis is useful, but sometimes it is not sufficient, as people may need to know exactly what happens in the system each second in order to predict its future behaviour or to make decisions. With these tools it is not possible to examine the log file in that level of detail; to do such analysis, a user has to go through the log file line by line manually. As a solution to this problem, this paper describes a new smart concept for log-file analysis tools. The concept works through a set of widgets and can replay executed log files.
First, the existing log-file analysis framework was analyzed. This helped in understanding the data structure of the incoming log files. Next, log-analysis tools were studied to identify the main components and the features that people like most. The new smart concept was designed using a replayable widget and graph widgets: the replayable widget replays the input log file, and the graph widgets graphically represent the analyzed log data. The replayable widget is the main part of the project and embodies the new concept. It is a simple widget that acts just like a player, with two components: a window and a button panel. The window shows the input log file, and the button panel contains play, forward, backward, stop and pause buttons. The log lines shown in the window of the replayable widget are held in a tree structure (Figure 1, left-most widget). The button panel contains an extra button to view the log lines. These buttons are used to play the log lines, go to a requested log line, view the log line, and control playback. It was important to select a suitable chart library for the graph widgets. A number of chart libraries were analyzed, and D3.js was finally selected because it provides chart source code, a free version without watermarks, and more than 200 chart types; it has a number of chart features and supports HTML5-based implementations. The following charts were implemented using the D3.js chart library: a bar chart of the pass/failure count, a timeline of when passes/failures occur, a donut chart of the total execution count, and a donut chart of the total pass/fail count. Every graph widget is bound to the replayable widget, so updates are made with each action. The replayable widget and the graph widgets were implemented using D3.js, JavaScript, jQuery, CSS and HTML5. The replayable widget was successfully tested, and the implemented interface runs in the Google Chrome web browser.
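The pass/fail tallies feeding the bar-chart widget could be computed as in this sketch. The log-line format used here is invented; the abstract does not specify the real framework's structure:

```python
from collections import Counter

# Invented log format: "<timestamp> <test-name> <PASS|FAIL>"
SAMPLE_LOG = """\
10:00:01 boot_check PASS
10:00:05 sensor_check FAIL
10:00:09 sensor_check PASS
10:00:12 link_check PASS
"""

def pass_fail_counts(log_text):
    """Tally PASS/FAIL outcomes: the data a bar-chart widget would plot."""
    counts = Counter()
    for line in log_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2] in ("PASS", "FAIL"):
            counts[parts[2]] += 1
    return dict(counts)

print(pass_fail_counts(SAMPLE_LOG))  # {'PASS': 3, 'FAIL': 1}
```

The same pass over the log lines can also collect the timestamps of each outcome, which is what the timeline widget would consume.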
Figure 1 shows a sample interface of the design, generated using a sample log file of about 100 log lines. The left-most widget is the replayable widget, holding the log file under consideration as a tree structure. The top-right widget is a graph widget, a bar chart showing the pass/failure count, and the bottom-right widget is another graph widget, a timeline showing when passes/failures occurred in the given log file. In addition, the analyzed log file can be visualized using donut charts. This paper described the new smart concept for log-file analysis tools; the existing analysis tools mentioned do not contain this concept, although most log-file analysis tools use graphs for data visualization. The system was successfully implemented, and it was evaluated by a number of users who work with log files. This new concept will help log analysts, system analysts, data-security teams and top-level management to derive decisions about the system by analyzing the widgets and making predictions. Furthermore, the analyzed data would be useful for collecting non-trivial data for data-mining and machine-learning procedures. As future work, the system could be enhanced with features such as zooming and drill-down to customize graphs, and a mechanism to filter data according to user requirements.Item An Emotion-Aware Music Playlist Generator for Music Therapy(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Dissanayaka, D.M.M.T.; Liyanage, S.R. Music has the ability to influence both mental and physical health. Music therapy is the application of music for the rehabilitation of brain activity and the maintenance of both mental and physical health. Music therapy comes in two forms, active and receptive: receptive therapy takes place by having the patient listen to suitable music tracks. Music therapy is normally used by people who suffer from disabilities or mental ailments.
But the healing benefits of music can be experienced by anyone, at any age, through music therapy. This research proposes an Android music application with a playlist auto-generated according to its user's emotional state, which can be used in telemedicine as well as in day-to-day life. Three categories of emotional condition (happy, sad and angry) were considered in this study. Live images of the user are captured from an Android device. The face-detection API available in the Android platform is used to detect human faces and eye positions. After a face is detected, the face area is cropped; the image is greyscaled and converted to a standard size in order to reduce noise and compress the image. The image is then sent to the MATLAB-based image-recognition sub-system over a client-server socket connection. A Gaussian filter is used to reduce noise further in order to maintain high accuracy. Edges of the image are detected using Canny edge detection to identify the details of the facial features; the resulting images appear as sets of connected curves that indicate surface boundaries. Emotion recognition is carried out using eigenface-based pattern recognition, with training datasets of happy, sad and angry images input to the emotion-recognition sub-system implemented in MATLAB. To create the eigenfaces, an average face for each of the three categories is created by averaging the database images in that category pixel by pixel. Each database image is subtracted from the average image to obtain the differences between the images in the dataset and the average face, and each image is then reshaped into a column vector. The covariance matrix is calculated to find the eigenvectors and their associated eigenvalues, and the weights of the eigenfaces are then calculated. To find the matching emotional label, the Euclidean distance between weights is calculated for each category.
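The eigenface pipeline described above (average face, differences, covariance eigenvectors, projection weights, Euclidean matching) could be sketched with NumPy as follows. This is a minimal per-category model for illustration; the real system was built in MATLAB, and the tiny vectors stand in for flattened face images:

```python
import numpy as np

def eigenface_model(images):
    """images: (n, p) array, rows are flattened training faces of one category.
    Returns (mean_face, eigenfaces, training weights)."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    D = X - mean                          # differences from the average face
    # eigen-decomposition of the small n x n matrix (the usual eigenface trick)
    evals, evecs = np.linalg.eigh(D @ D.T)
    eigenfaces = D.T @ evecs              # map back to pixel space, shape (p, n)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0) + 1e-12
    weights = D @ eigenfaces              # projection weights of training faces
    return mean, eigenfaces, weights

def classify(image, models):
    """models: {label: (mean, eigenfaces, weights)}. Return the label whose
    training weights lie closest (Euclidean) to the projected input image."""
    best, best_d = None, float("inf")
    for label, (mean, faces, weights) in models.items():
        w = (np.asarray(image, dtype=float) - mean) @ faces
        d = np.linalg.norm(weights - w, axis=1).min()
        if d < best_d:
            best, best_d = label, d
    return best
```

In the application, the winning label would then select which pre-categorized songs to load into the player.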
By comparing the Euclidean distances of the input image against each category, the class with the lowest distance is identified. The identified label (happy, sad or angry) is then sent back by the emotion-recognition sub-system. Songs pre-categorized as happy, sad and angry are stored in the Android application; when the emotional label of the perceived face image is received, songs relevant to that label are loaded into the Android music player. 200 face images were collected at the University of Kelaniya for validation, and another 100 happy, 100 sad and 100 angry images were collected for testing. Of the 100 test cases with happy faces, 70 were detected as happy; of the 100 sad faces, 61 were detected as sad; and of the 100 angry faces, 67 were successfully detected. The overall accuracy of the developed system on the 300 test cases was 66%. This concept can be extended for use in telemedicine, and the system has to be made more robust to noise, different poses and structural components. The system can also be extended to include other emotions recognizable via facial expressions.Item Augmented Reality and its possibilities in Agriculture (In Sri Lankan Context)(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Musfira, F.A.; Linosh, N.E. Since Sri Lanka is an agro country, its economy is mostly based on agriculture and agro-based industries, animal husbandry and other agriculture-based businesses. In Sri Lanka, agriculture continues to be the major occupation and way of life for more than half of the total population. Since information technology and the Internet have recently become an essential part of business processes, they have considerable influence on agriculture in the process of development, and nowadays Internet-based applications are more and more successful in agriculture and in different parts of the food industry.
In the information technology field, the emerging trend is Augmented Reality (AR). The field of AR has existed for just over a decade, but its growth and rapid progress in the past few years have been remarkable. The basic goal of an AR system is to enhance the user's perception of, and interaction with, the real world through virtual objects. There are several application areas, such as extension services, precision agriculture, e-commerce and information services, where augmented reality can be used. Agriculture, however, is an area where advanced technology is always introduced with a delay. This research analyses how augmented reality can be used in agriculture. Certain applications of AR in agriculture are already present in European countries, but it is still in its infancy in Asian countries, especially South Asian countries. In Sri Lanka, many opportunities to use these techniques in agriculture can be foreseen; the following are some possible applications of AR in agriculture. The first is research: Sri Lanka has many agricultural research centres, such as the Sri Lanka Council for Agricultural Research, the Gannoruwa Agricultural Complex and the Agricultural Research and Development Centre, where enrichment of an image becomes necessary. Here AR can be used to add dimension lines, coordinate systems, descriptions and other virtual objects to improve the investigation and analysis of captured images. Another place AR will probably reach in the near future is the cabin of modern agricultural and mobile machinery. Some components of such a system already exist and are being used in the form of simple displays showing GPS data. Adding large displays or special glasses, on which computer-drawn lines showing the path of passage or plot boundaries are superimposed over the image of the field, is a logical development of existing solutions.
The third area is animal husbandry and farming. An AR system can be developed and installed on Sri Lankan farms for farm monitoring. Suitable software may allow identifying individual animals on the screen, with simultaneous display of the relevant information about them, such as inserted data and the health status of the farm animals. Finally, in crop production it is possible to identify plants with a camera and an appropriate AR system; this gives the ability to detect pests and to plan appropriate protective procedures. In studying the use of augmented reality technology in agriculture, it can be concluded that different types of services offer different possibilities. Mobile systems are developing very dynamically, both in the speed of data transmission and in services. New devices such as tablets and new services such as cloud computing and Near Field Communication (NFC) have great potential in agriculture. Augmented reality can be allied with all these technologies, expanding the possibilities of evolving towards a new era in Sri Lankan agro farms. However, the whole assessment of the topic must not be made on the basis of the technology alone, taken out of its environment, since the whole area is very complex; this paper focuses on finding and analysing what augmented reality is and tries to highlight its possibilities in agriculture.Item Context-Aware Multimedia Services in Smart Homes(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Chih-Lin Hu; Kasthuri Arachchi, S.P.; Wimaladharma, S.T.C.I. The evolution of "smart home" technologies exposes a broad spectrum of modern personal computers (PCs), consumer electronics (CE), household appliances and mobile devices to intelligent control and services in residential environments.
With the high penetration of broadband access networks, the PC, CE and mobile device categories can be connected on home networks, providing a home computing context for novel service design and deployment. However, conventional home services are characterized by different operations and interactive usages among family members in different zones inside a house. It is promising to realize user-oriented and location-free home services with modern home-networked devices in smart home environments. This paper proposes a reference design for a novel context-aware multimedia system in home-based computing networks. The proposed system integrates two major functional mechanisms: intelligent media-content distribution and multimedia convergence. The first mechanism performs intelligent control of services and media devices in a context-aware manner, integrating face-recognition functions into home-based media-content distribution services. Devices capable of capturing images can recognize the appearance of registered users and infer their changes of location inside a house; media content played in the last location can thus be distributed to the home-networked devices closest to the users in their current location. The second mechanism offers multimedia convergence among multiple media channels and renders users a uniform presentation of media-content services in residential environments. This mechanism can provide not only local media files and streams from various devices on a home network but also Internet media content that can be fetched online, transported and played on multiple home-networked devices. Thus, the multimedia convergence mechanism can introduce an unlimited volume of media content from the Internet to a home network. The development of the context-aware multimedia system can be described as follows.
A conceptual system playground in a home network contains several Universal Plug and Play (UPnP) home-networked devices inter-connected on a single administrative network based on Ethernet or Wi-Fi infrastructure. According to the UPnP specifications, home-networked devices are assigned IP addresses using auto-IP configuration or the DHCP protocol. UPnP-compatible devices can then advertise their presence on the network; when other neighbouring devices discover them, they can collaborate on media-content sharing services. In addition, some UPnP-compatible devices are capable of face recognition, capturing front images of users inside the house. The captured images can be sent to a user database and compared with the existing user profiles of individuals in the family community. Once a registered user is recognized, the system can refer to the stored details of that particular user and offer personal media services in a smart manner. The components and functionalities of the proposed system support the intelligent media-content distribution and multimedia convergence mechanisms. Technically, the proposed system combines several components, such as a UPnP control point, a UPnP media renderer, a converged media proxy server, an image detector, and a profile database of registered users and family communities. Though there are diverse media sources and formats in a home network, users retain the same operational behaviour for sharing and playing media content, following common UPnP and Digital Living Network Alliance (DLNA) guidelines. Prototype development produced proof-of-concept software based on the Android SDK and JVM frameworks, integrating intelligent media-content distribution and converged media services. The resulting software is platform-independent and application-level.
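UPnP device discovery rides on the SSDP protocol; the discovery probe a control point multicasts could be sketched as follows. The search target (a MediaRenderer device, matching the system described) and the timeout are illustrative choices:

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard SSDP multicast group and port

# M-SEARCH request per the UPnP device architecture; the ST header selects
# which device type to look for on the home network.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: urn:schemas-upnp-org:device:MediaRenderer:1\r\n"
    "\r\n"
)

def discover(timeout=2.0):
    """Multicast an M-SEARCH and collect unicast responses until timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode("ascii"), SSDP_ADDR)
    devices = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            devices.append((addr[0], data.decode(errors="replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return devices
```

Each response carries a LOCATION header pointing at the device's description document, from which a control point learns the services it can invoke.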
It can be deployed on various home-networked devices that are compatible with UPnP standard device profiles, e.g., UPnP AV media servers, media players, and mobile phones. A real demonstration was conducted with the software implementation running on various off-the-shelf home-networked devices. Therefore, the proposed system is able to offer a friendly user experience for context-aware multimedia services in residential environments.
Item End-user Enable Database Design and Development Automation(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Uduwela, W.C.; Wijayarathna, G.
An Information System (IS) is a combination of software, hardware, and network components working together to collect, process, create, and distribute data for business operations. It consists of "update forms" to collect data, "reports" to distribute data, and "databases" to store data. IS plays a major role in many businesses because it improves business competitiveness. Although SMEs are interested in adopting IS, they are often held back by other factors: time, the underlying cost, and the availability of ICT experts. Hence, the ideal solution for them is to automate the process of IS design and development, without requiring ICT expertise, at an affordable cost. Software tools are available on the Web to generate the "update forms" and "reports" automatically for a given database model. However, there is no approach to generate the databases of an IS automatically. The relational database model (RDBM) is the most commonly used database model in IS due to its advantages over other data models. These advantages stem from the model's design, but it is not a natural way of representing data. The model is a collection of data organized into multiple tables/relations linked to one another using key fields. These links represent the associations between relations. Typically, tables/relations represent entities in the domain.
A table/relation has columns and rows, where the columns represent the attributes of the entity and the rows represent the records (data). Each row in a table should have a key that identifies the row uniquely. Designers have to identify these elements from the given data requirements in the process of RDBM design, which is difficult for non-technical people. The RDBM design process has a few steps: collect the set of data requirements, develop the conceptual model, develop the logical model, and convert it to the physical model. Though there are approaches that automate some steps of this design and development process, they also require technical support. Thus, a mechanism is required that automates the database design and development process while overcoming the difficulties of existing RDBM automation approaches, so that non-technical end-users are able to develop their databases by themselves. Hence, a comprehensive literature survey was conducted to analyze the feasibility and difficulties of automating the RDBM design and development process. Uduwela et al. argue that the "form" is the best way to collect the data requirements of the database model for automation, because a form is semi-structured, unlike natural language (the most common way to present data requirements), and it is very close to the underlying database. Approaches are available that automate the development of the conceptual model from the given data requirements. This is the most critical step in the RDBM design process, because it must identify the elements of the model (entities, their attributes, the relationships among entities, keys, and cardinalities). Form-based approaches were analyzed using the data available in the literature to recognize the places where user intervention is needed.
The analysis shows that all approaches need user support to correct their outcomes, because the model elements are not consistent across business domains; they differ from domain to domain and even within the same domain. Further, the approaches demand user support to prepare the initial input from the data requirements (a set of forms) used to identify the elements of the conceptual model. The next step of the process is developing the logical model based on the conceptual model. The outcome of the logical model should be a normalized database that eliminates data insertion, update, and deletion anomalies by reducing data redundancy. Data redundancy is often caused by Functional Dependencies (FDs), which are constraints between two sets of attributes in a relation. The database can be normalized by removing undesirable FDs (partial dependencies and transitive dependencies). We could not identify any approach that generates a normalized database design automatically from the data requirements directly; existing approaches require the FDs as input to generate the normalized RDBM. Considerable perception and skill are needed from designers to identify the correct FDs, because FDs also depend on the domain, which is a problem for automation. FDs can be found by data mining, but this too generates an incorrect set of FDs if there are insufficient data combinations. Developing the physical model from the logical model is straightforward, and relational database management systems help to automate it. According to the analysis, it can be concluded that the existing approaches to conceptual model development cannot produce accurate models, as they have to develop a distinct model for each problem. Normalization approaches also cannot be fully automated, as FDs vary among business domains and even within the same domain. This suggests that there should be a database model that can be designed and developed by end-users without any expert knowledge.
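The role of functional dependencies in normalization can be made concrete with a small sketch. This is not from the surveyed approaches: it assumes a relation represented as a list of Python dicts, checks whether a candidate FD holds in the available data, and searches for the partial dependencies that 2NF normalization removes. As the abstract notes, mining FDs from insufficient data combinations can report spurious dependencies, which is exactly what such a check would do on too few rows.

```python
from itertools import combinations

def fd_holds(rows, lhs, rhs):
    """Return True if the functional dependency lhs -> rhs holds in
    the given rows (list of dicts): any two rows that agree on the
    lhs attributes must also agree on the rhs attributes."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

def partial_dependencies(rows, key, non_key):
    """Find non-key attributes determined by a proper subset of the
    candidate key -- the partial dependencies removed when
    normalizing a relation to 2NF."""
    partial = []
    for r in range(1, len(key)):
        for subset in combinations(key, r):
            for attr in non_key:
                if fd_holds(rows, list(subset), [attr]):
                    partial.append((subset, attr))
    return partial
```

For an order-line relation with key (order, item), such a check would flag item -> item_name as a partial dependency, i.e. item_name belongs in a separate item table.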
The proposed model should be neither domain-specific nor problem-specific. It would be better if the approach could convert the data requirements to the database model directly, without intermediate steps like those in the RDBM design process. Further, it would be better if the proposed model could run on existing database management systems too.
Item MySight – A New Vision for the Blind Community(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Nimalarathna, O.D.; Senanayake, S.H.D.
People read newspapers, books, articles and many other materials every day. It is a basic human activity that nature has denied to blind people. Sometimes, braille education might be the ultimate solution for such a person. As stated by the National Federation of the Blind (NFB), less than 10% of blind children are engaged in braille reading in the United States. Every year many people lose their vision fully or partially. According to the findings, diabetes is one of the major causes of blindness, and nowadays it is spreading rapidly. The consequences of blindness are therefore significant. According to the NFB, most blind people are unemployed, and many blind students give up their studies due to the difficulty of learning. The problem is that there are not enough teachers to teach the braille system, and learning the braille system is genuinely hard. Therefore, it is worthwhile to find a method by which a blind person could read newspapers, textbooks, bills, etc. without the braille system. There are several smart applications available which use an internet connection to find the Portable Document Format (PDF) version of a particular newspaper, book or document and then read that PDF document aloud. The problem is that they require the reading material to be available on the internet in PDF format, which is impractical because not all books have PDF versions.
Currently AIRS-LA, BARD Mobile, iBlink Radio, NFB Newsline and Voice Dream Reader are the top five applications used by the blind community. Since they need an internet connection and use large amounts of data, they are not suitable for real-time reading. The problem is that if a blind person wants to read something in an area without an internet connection, he cannot use any of the mentioned applications. Therefore, there is a great need for an application which is portable, effective and works offline. This research is focused entirely on finding a way to expose blind people to a new kind of reading. The main objective of this research was to develop an Android application which could identify words and various kinds of symbols written in a standard font in a given document, and then convert them into an audible format that a blind person could understand. It should also be easy for a blind person to use, providing voice notifications and smart touch techniques. "MySight" is a revolutionary application which could change the entire reading and learning techniques of a blind person. It would replace the braille system currently used by blind people and let them read and learn effectively and easily. The application was designed so that a blind person can easily handle its functionalities and experience the maximum benefit. It can also be considered a method that lets them read like a sighted person. The first step was to find an appropriate and efficient Optical Character Recognition (OCR) technique compatible with the Android platform. To fulfill that requirement, the Tesseract OCR library was used. Tesseract was chosen because it has become a leading open-source OCR engine thanks to its accuracy. The next step was interface design. Fig.2 shows how the application captures the text.
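A minimal sketch of the OCR-to-speech direction of such a pipeline, under clear assumptions: MySight integrates Tesseract via its Android/Java bindings, so the pytesseract wiring in the comment is only a stand-in, and prepare_for_speech is a hypothetical cleanup helper, not part of the application. It shows the kind of normalization raw OCR text typically needs before a text-to-speech engine reads it aloud.

```python
import re

# Hypothetical desktop wiring for the OCR step (MySight itself calls
# Tesseract through its Android integration, not pytesseract):
# import pytesseract
# from PIL import Image
# raw_text = pytesseract.image_to_string(Image.open("page.jpg"))

def prepare_for_speech(raw_text):
    """Normalize raw OCR output before handing it to a text-to-speech
    engine: rejoin words hyphenated across line breaks, collapse
    newlines and runs of whitespace, and drop non-printable debris."""
    text = re.sub(r"-\s*\n\s*", "", raw_text)   # "read-\ning" -> "reading"
    text = re.sub(r"\s+", " ", text)            # collapse whitespace
    text = "".join(ch for ch in text if ch.isprintable())
    return text.strip()
```

Without a step like this, a speech engine would read hyphenated line breaks as separate word fragments.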
Fig.3 (a) shows the interface after detecting the text and Fig.3 (b) shows the text operations that can be performed once the text is detected. Once the OCR implementation was completed, the application could detect the text in the captured image and simultaneously convert it to a sound output through the mobile phone speaker or headset. As a future improvement, the application should be enhanced to guide the blind person in capturing the image of a paper or a page of a book. If the four borders of the page are not captured, the application should tell the user to move in the corresponding direction. Fig.1 (a) shows a page which is not fully visible to the mobile camera. In that situation the border of the page can be identified using edge detection. If the four borders of the page are not present in the camera preview, the page is not ready to be processed. The required adjustment can be determined from the distances to the edges. The user can then be notified by a sound output asking them to move left, right, forward or backward until the image is ready to be processed, as in Fig.1 (b). For these improvements, the OpenCV library will be used for edge detection, together with smart voice commands for giving instructions to the blind person. Furthermore, the application will be tested with the blind community to evaluate its applicability and effectiveness in a real environment.
Item Analysis of Emotional Speech Recognition Using Artificial Neural Network(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Archana, A.F.C.; Thirukumaran, S.
This paper presents an artificial neural network based approach for analyzing the classification of emotional human speech. Speech rate and energy are the most basic features of a speech signal, yet they still show significant differences between emotions such as anger and sadness.
Pitch is the feature most frequently used in this work, and the autocorrelation method is used to detect the pitch in each frame. The speech samples used for the simulations are taken from the Emotional Prosody Speech and Transcripts dataset of the Linguistic Data Consortium (LDC). The LDC database contains a set of acted emotional speech voiced by males and females. Only the speech samples of four emotion categories in the LDC database, containing both male and female emotional speech, are used for the simulation. In the speech pre-processing phase, samples of four basic types of emotional speech are used: sad, angry, happy, and neutral. Important features related to the different emotion states are extracted to recognize speech emotions from the voice signal; those features are then fed into the input of a classifier, which produces the different emotions at its output. The analog speech signal samples are converted to digital form for pre-processing. The normalized speech signals are segmented into frames so that the speech signal maintains its characteristics over each short duration. Twenty-three short-term audio signal features of 40 samples are selected and extracted from the speech signals to analyze the human emotions. Statistical values such as the mean and variance are derived from the features. These derived data, along with their associated target emotions, are used to train and test an artificial neural network that makes up the classifier. A neural network pattern recognition algorithm has been used to train and test the data and to perform the classification. A confusion matrix is generated to analyze the performance results. The accuracy of the neural network based approach to recognizing emotions improves with repeated training: the overall proportion of correctly classified results for a network trained twice is 73.8%, whereas it rises to 83.8% when the number of training passes is increased to ten.
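The autocorrelation pitch detector mentioned above can be sketched as follows. This is an illustrative pure-Python version, not the authors' implementation, and the 50-500 Hz search band is an assumed plausible pitch range: the lag at which a frame is most similar to a shifted copy of itself corresponds to the fundamental period.

```python
import math

def detect_pitch(frame, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the pitch of one speech frame by autocorrelation:
    search lags between the periods of fmax and fmin, and return
    the frequency of the lag with the strongest self-similarity."""
    n = len(frame)
    lag_min = int(sample_rate / fmax)
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, lag_max + 1):
        corr = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0
```

On a synthetic 220 Hz tone sampled at 8 kHz the strongest lag falls at about 36 samples, recovering the fundamental to within a few hertz; real speech frames are noisier, which is why the full system combines pitch with 22 other short-term features.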
The overall system provides reliable performance, correctly classifying more than 80% of emotions once properly trained.
Item Use of Library and Internet Facilities for Seeking Information among Medical Students at Faculty of Medicine, University of Kelaniya(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Solangaarachchi, D.I.K.; Marasinghe, M.P.L.R.; Abeygunasekera, C.M.; Hewage, S.N.; Thulani, U.B.
Information plays a vital role in education. Students are always seeking information as an aid for their studies. With the development of the internet, which is proving to be an incomparable information resource for learning and research, students are increasingly inclined to use it for finding information. For medical students, many of the tools that support medical education and transmit health research are now available online. There are e-books, e-journals, subject-specific databases, and academic and professional websites with numerous educational resources. Therefore, the internet is considered a rich information resource that can support medical education worldwide. The study was conducted with the objective of assessing the frequency and purposes of use of the faculty library and internet facilities by medical students of the Faculty of Medicine, University of Kelaniya. A survey was carried out from May to June 2016 on MBBS students at the Faculty of Medicine, University of Kelaniya. Students in their second to fifth academic years were included in the study, while first-year students were excluded as they were considered to be still in a period of adjustment to the system. Data collection was done using a self-administered questionnaire distributed among the students who visited the Information and Communication Technology (ICT) centre and the medical library of the faculty. Two hundred and forty-six (85%) students responded to the questionnaire. This consisted of 27% (n=67), 20% (n=48), 30% (n=75) and 23% (n=56) from years 2 to 5 respectively.
According to the responses provided in the survey, the information required by medical students is mainly sought from library material (70.3%), the internet (59.3%), personal textbooks (54.9%) and discussions with colleagues (37.4%). Only 13.9% of the students stated that they visited the library at least once a day, while 33.9% go there several times a week. Those who visit the library once a week or less, but more than once a month, represented 30.2% of the respondents. A considerable proportion (22%) visit the library less than once a month or never. The main resources accessed in the library were: textbooks (92.7%), past papers (36.2%) and journals (4.9%). Regarding frequency of internet usage, 82.8% of the medical students stated that they accessed it several times per day, while 11.9% accessed the internet only once a day and 5.3% less frequently than that. Devices used by the respondents for accessing the internet included smartphones (55.7%), tablets (32.9%), laptops (32.9%) and desktops (13.0%). As for the data access method used to connect to the internet, mobile data (75.8%) and Wi-Fi (73.2%) featured most prominently, whereas dongle connections (20.3%) and wired connections (3.7%) were less popular. The most frequent reasons noted for accessing the internet were: finding information related to studies (53.3%), emailing (30.1%) and using social media such as Facebook (37.0%). Based on the responses of the sampled students, the faculty internet facilities (Wi-Fi or wired) were used by 80.9%. The times of day most students logged on to the faculty internet were the '12 noon-2 pm' period (47.5%) and the 'after 4 pm' period (22.8%). When asked about problems faced while finding information via the internet, 55.3% noted the connection being too slow as an issue, while 34.6% found the inability to access faculty network E-resources outside the faculty a hindrance.
The other issues expressed were: not having enough time (16.7%), lack of ICT knowledge (6.9%), inadequate information-searching skills (6.9%) and not having a device to connect to the internet (2.4%). The results show that even though less than 50% of the sampled students are regular (at least several times a week) visitors to the library, over 70% seek information related to their studies from library material. In contrast, while nearly 95% of the students were daily internet users, only around 60% used it as a source of information, and only about 53% utilised the internet for their academic requirements. The efforts of the university in providing internet facilities appear to have been worthwhile, with over 80% stating that they are consumers of the faculty Wi-Fi and/or wired internet connections. Yet mobile data connections were the most frequently noted method of obtaining web access. This is reflected in the finding that smartphones and tablets were the devices most frequently used for accessing the web, compared to laptops and desktops. The finding that more than one fifth of the students rarely visit the library could mean that they rely on personal textbooks in their studies. It could also be a reflection of the influence of ICT on the academic activities of students. These findings could be explained by the ever-increasing influence of ICT on education as well as day-to-day life. In particular, the availability of Wi-Fi within the faculty, the affordability of mobile internet connections, and handheld devices like smartphones and tablets becoming both versatile and accessible for most people have clearly made an impact in this regard. Recent upgrades to the faculty internet facilities may alleviate the complaint of slow connections. Expanding the Wi-Fi network to the student hostels and the North Colombo Teaching Hospital at Ragama would help address the unavailability of faculty network E-resources outside the faculty.
Even though library-based information seeking still features prominently, the findings of the study show a possible shift towards the internet becoming the main source of information among medical students. The faculty medical library and ICT centre have to be sensitive to student information-source preferences. By working together and adapting to the changing landscape, these two departments of the faculty could play an ever-increasing role in improving students' use of online educational resources.
Item AWRSMS: An Approach to Enhance Apparel Warehousing and Retailing through IoT(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Jayathilaka, D.K.; Kottage, G.; Chankuma, C.; Dulakshi, C.; Herath, K.; Ganegoda, U.; Buhari, M.
Considering the modern trends of the information technology field, the Internet of Things (IoT) is one of the pivotal emerging technologies. In existing human-dependent systems, analysis can be done only if someone runs a query and checks the results. In the warehousing and storage industry, manually collected data are sent to an Enterprise Resource Planning (ERP) system or to a warehouse management system. These systems have limitations such as mismatches in issued stock and problems in handling inventory items and placing them in the warehouse. A warehouse management system manages all the functions in a warehouse, but both kinds of system use manual methods, such as barcodes, to collect data, so errors occur when entering large amounts of data into the system. Even a well-trained staff member can fail due to common human factors such as fatigue. Very few systems are fully automated and convert captured data into information in real time, and those systems are not able to control the functions of both a warehouse and a retail shop. Moreover, none of them use new methods for customer promotion.
To give a comprehensive solution to the limitations of existing systems, the Apparel Warehouse and Retail Shop Management System (AWRSMS) was developed. The system manages the functions of both the warehouse and the retail shop in the apparel industry. It involves organizing, automating and synchronizing the activities of both places in an effective and efficient manner using IoT. In this system, all the data about stocks, incoming goods and dispatched items are collected using Radio Frequency Identification (RFID) tags and readers, and the collected data are sent to the system's database, which is stored on a web server. Using these data, the system executes several functions, such as providing details of returns, new arrivals and dispatched items, dispatch mismatches, available items, stock updates and selected items, with the relevant reports. The system is implemented as three major modules: a web application, a data capturing module and a customer promotion module. The promotion module enables a location-based promotion process inside the retail shop and is one of the newest and most significant functions included in AWRSMS. It is a combination of a mobile application and a web application. In this module the marketing manager can add promotions to the database using the web application, and the customer receives the ongoing promotion details through the mobile application. In this process, when a customer comes near a particular sales area which has an ongoing sales promotion, the system detects the customer's phone and sends the promotion message to it using IoT beacons. The mobile application installed on the customer's phone continuously searches for beacon ranges, connects with them and receives the relevant promotion messages via Bluetooth Low Energy (BLE) signals transmitted by the beacons. Android Studio, Java and XML were used as the development tools for the promotion module.
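The beacon-based promotion flow can be sketched with a simple proximity rule. This is an illustrative sketch, not the AWRSMS code: estimate_distance uses the common log-distance path-loss model with assumed calibration values (tx_power, the RSSI at 1 m, and path-loss exponent n), and nearest_promotion picks the message of the closest in-range beacon, mirroring the location-based promotion process described above.

```python
def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Rough distance in metres from a BLE RSSI reading using the
    log-distance path-loss model; tx_power and n are assumed
    calibration values, not measured from any real beacon."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def nearest_promotion(readings, promotions, max_metres=3.0):
    """Given {beacon_id: rssi} readings and {beacon_id: message}
    promotions, return the promotion of the closest beacon within
    max_metres, or None if no promoted beacon is in range."""
    best_id, best_dist = None, max_metres
    for beacon_id, rssi in readings.items():
        dist = estimate_distance(rssi)
        if beacon_id in promotions and dist <= best_dist:
            best_id, best_dist = beacon_id, dist
    return promotions.get(best_id)
```

In practice RSSI fluctuates considerably indoors, so a production app would smooth readings over a window before applying a rule like this.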
The rest of the system was developed using the Spring framework, Java EE, the Hibernate framework and MySQL. Testing and evaluation were carried out in three procedures to verify whether the system achieved the intended objectives. The first was module testing: each main function was treated as a module, its functionality was tested, and the results were evaluated against the intended outcomes immediately after completing each module. White-box testing methods were used for the module testing; test cases were designed for each module and testing was carried out based on them. Statement coverage for all the test cases was within 85%-100%. All the modules of the main web application reached a 100% accuracy level, the promotion module achieved 96% and the data capturing module obtained 82%. After integrating the modules, the final testing phase was carried out using black-box testing; all the modules of the web application achieved 100%, the promotion module 98% and the data capturing module 76% accuracy. To ensure that the user requirements were achieved as intended, a questionnaire was given to a selected sample of 50 members, including AWRSMS's end customers, people who are knowledgeable about technology and management, and people who are not. The questions were categorized under user friendliness, user experience, functionality, suggestions and recommendations. The questions under user friendliness and user experience mainly targeted end users who are not technology experts, to measure the usability of the system; the selected users had to comment after using the system without knowing its internal functions. The functionality section mainly targeted the technical people who tested the overall system; the intention was to determine the relevance and compatibility of every function with the user requirements. The suggestions and recommendations sections were used to explore further improvements.
The positive feedback gained for the user friendliness of AWRSMS was 80%, while 78%, 84% and 76% were obtained for recommendation, user experience and functionality respectively. 64% of the sample gave suggestions to upgrade the functionalities. Comparing the aims, objectives and gathered outcomes of the system, AWRSMS was completed as intended and in a successful manner. By obtaining the required resources and making further improvements, such as using industry-level RFID hardware and enhancing the mobile application by adding more features and developing it for the iOS platform as well, the Apparel Warehouse and Retail Shop Management System will be an ultramodern and significant approach for the apparel industry.