KICACT 2016
Permanent URI for this collection: http://repository.kln.ac.lk/handle/123456789/15608
Item Adopting SDLC in Actual Software Development Environment: A Sri Lankan IT Industry Experience (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Munasinghe, B.; Perera, P.L.M.
The Systems Development Life Cycle (SDLC) and its variant forms have been around in the systems development arena as a steadfast and reliable development approach since the 1960s, and are still widely used in the software development processes of the information technology (IT) industry. The IT industry has adopted SDLC models as a solution to minimize the issues arising in a large number of failed projects. Although SDLC models capture well the common phases that software projects undergo during development, most software development organizations in the Sri Lankan IT industry today use SDLC models only as a token to show off their process quality, but fail to adhere to them in practice, and thus fail to grasp the real benefits of the SDLC approach. This study sought the causes behind the practical difficulties a medium-sized Sri Lankan IT company faces in finding a fitting SDLC model for its development process, and the limitations on adhering to such a model-based approach. The research instruments were questionnaires administered to a sample frame consisting of employees, experts and managers; interview schedules were also used. The findings indicate that the main cause behind the difficulty in finding a fitting model is extreme customer involvement, which causes regular requirements changes. The company concentrates more on winning the customer than on following the proper requirements-definition approaches suggested by the SDLC to produce clear-cut requirement specifications, which results in inefficient customer interference throughout the development process, with inconvenient changes demanded down the line. Most software projects with version releases involve maintenance and bug fixing while the next release is being developed.
As customers become system users, their demands become more insistent, making the maintenance process tedious and the development of the next phases more challenging. A lack of proper customer-management approaches is strongly visible in all areas of development, and customer demands cause poor resource management and increased stress on the workforce. The study findings suggest that the main reasons behind the limitations on companies following a proper SDLC approach are: limitations in budget and human resources, unrealistic deadlines, frequent requirements changes, vague project-scope definitions, the nature of the project (offshore or local), the need to use new technologies without timely availability of expert knowledge, project-team diversity, and the company's own business model interfering with project dynamics. Future work will focus on further investigations incorporating a number of Sri Lankan IT companies covering all ranges of business magnitude.

Item Advanced Real Time Traffic Controller System Based on Fuzzy Logic and Motion Detection Sensors (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Divaagar, P.
Traffic congestion at intersections is becoming a major concern in metropolitan cities. Customary traffic light signals (TLS) operate on predetermined light patterns based on traffic weights calculated from previous statistics for particular junctions. This method is inefficient for the day-to-day growth in automobile usage in a country like Sri Lanka; another source of inefficiency is that determining a pattern of traffic flow through statistical analysis is less reliable. This paper recommends a sophisticated solution to this issue: controlling the TLS with respect to real-time traffic flows using motion detection sensors and fuzzy logic. The objective is to maximize the traffic flow rate and reduce waiting times at junctions.
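The fuzzy control idea described here can be sketched in a few lines. The membership functions and rule consequents below are hypothetical illustrations chosen for this sketch, not the authors' Matlab toolbox design:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_extension(vehicles_per_minute):
    """Fuzzy inference: map a sensed traffic flow to extra green time (seconds)."""
    # Fuzzify the sensed flow into three linguistic terms (hypothetical shapes).
    low = tri(vehicles_per_minute, -1, 0, 15)
    medium = tri(vehicles_per_minute, 10, 25, 40)
    high = tri(vehicles_per_minute, 30, 60, 90)
    # Rule base: low flow -> 0 s, medium -> 10 s, high -> 20 s extension.
    rules = [(low, 0.0), (medium, 10.0), (high, 20.0)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A real controller would fuzzify several inputs (queue length, arrival rate per approach) and defuzzify over a full output surface; the weighted average here stands in for centroid defuzzification.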
The motion detection sensors count the flow rate on each path toward the junction from a reasonable distance. Fuzzy logic is the intelligence in the system, acting like a human traffic operator; the Matlab fuzzy logic toolbox is used to design it. A model of a road junction fitted with the advanced real-time traffic controller system is animated to display the results. The traffic lights act on decisions made by the fuzzy logic system according to the instantaneous traffic load on the roads approaching the junction. The sensors are installed twenty-five meters before the intersection on all approaching paths, which helps the fuzzy logic system decide the next signal-change time efficiently, foresee incoming vehicles, make decisions in advance, and reduce vehicle waiting latency. Sample traffic flows were applied in a simulation, the responses of the traffic light signals were observed, and these scenarios were compared with a customary traffic light controller system. The model is more efficient than the traffic light controller system currently available in Sri Lanka.

Item Analysis of Emotional Speech Recognition Using Artificial Neural Network (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Archana, A.F.C.; Thirukumaran, S.
This paper presents an artificial neural network based approach for classifying emotional human speech. Speech rate and energy are the most basic features of a speech signal, yet they still differ significantly between emotions such as anger and sadness. Pitch is used frequently in this work, and the autocorrelation method is used to detect the pitch in each frame. The speech samples used for the simulations are taken from the Emotional Prosody Speech and Transcripts dataset of the Linguistic Data Consortium (LDC). The LDC database contains acted emotional speech voiced by males and females.
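The per-frame autocorrelation pitch detection mentioned above can be sketched as follows. The frame length, sampling rate and pitch search band are assumed values for illustration, not the authors' exact pipeline:

```python
import math

def pitch_autocorr(frame, fs, fmin=50.0, fmax=400.0):
    """Estimate the pitch (Hz) of one frame by autocorrelation peak picking."""
    mean = sum(frame) / len(frame)
    x = [s - mean for s in frame]          # remove DC offset
    lo, hi = int(fs / fmax), int(fs / fmin)  # lags covering plausible pitch

    def ac(lag):
        # Autocorrelation at one lag: similarity of the frame with itself shifted.
        return sum(x[i] * x[i + lag] for i in range(len(x) - lag))

    best = max(range(lo, hi), key=ac)      # lag with the strongest repetition
    return fs / best

# A 40 ms frame of a 200 Hz test tone at an assumed 8 kHz sampling rate.
fs = 8000
frame = [math.sin(2 * math.pi * 200 * n / fs) for n in range(320)]
```

On the pure 200 Hz tone the estimator recovers the pitch from the autocorrelation peak at a lag of 40 samples; real speech frames would first be windowed and voiced/unvoiced-gated.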
The speech samples of only four emotion categories in the LDC database, containing both male and female emotional speech, are used for the simulation. In the speech pre-processing phase, samples of the four basic emotional speech types (sad, angry, happy, and neutral) are used. Important features related to different emotion states are extracted to recognize speech emotions from the voice signal; these features are fed into the input of a classifier, and the different emotions are obtained at the output. The analog speech signal samples are converted to digital form to perform the pre-processing. The normalized speech signals are segmented into frames so that the speech signal maintains its characteristics over short durations. 23 short-term audio signal features of 40 samples are selected and extracted from the speech signals to analyze the human emotions, and statistical values such as mean and variance are derived from the features. These derived data, along with their related emotion targets, are used to train an artificial neural network and to test the resulting classifier. A neural network pattern recognition algorithm is used to train and test the data and to perform the classification, and a confusion matrix is generated to analyze the performance. The accuracy with which the neural network recognizes the emotions improves with repeated training: the overall correct classification rate for a network trained twice is 73.8%, rising to 83.8% when training is repeated ten times. Once properly trained, the overall system performs reliably and classifies more than 80% of emotions correctly.

Item Android smartphone operated Robot (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Thiwanka, U.S.; Weerasinghe, K.G.H.D.
At present, the open-source Android platform is widely used in smartphones.
The Android platform is a complete software package consisting of an operating system, a middleware layer and core applications. Android-based smartphones are becoming more powerful and are equipped with several accessories useful for robotics. The purpose of this project is to provide a powerful, user-friendly computational Android platform on top of a simpler robot hardware architecture. The project describes a way of controlling robots using a smartphone and Bluetooth communication. Bluetooth has changed how people use digital devices at home or in the office, turning traditional wired digital devices into wireless ones. The project is mainly developed using the Google voice recognition feature, through which commands can be sent to the robot; the motion of the robot can also be controlled using the accelerometer and the buttons in the Android app. Bluetooth communication is used specifically as the network interface controller, and the robot's motion is controlled according to the commands received by the application. The consistent output of a robotic system, along with its quality and repeatability, is unmatched. The project aims to provide a simple framework for building robots at very low cost but with the high computational and sensing capabilities of the smartphone used as the control device. Using this concept, disabled people can be helped to do their work more easily (e.g., a motorized wheelchair, or remotely controlling equipment with the smartphone); surveillance and reconnaissance robots can be built; home automation can be designed; and any kind of remotely controllable device can be operated. Several hardware components are used, such as an Arduino Uno, an Adafruit Motor Shield, a Bluetooth module, and an ultrasonic distance-measuring transducer sensor. The Uno is a microcontroller board based on the ATmega328P.
It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Arduino uses shield boards, which plug onto the top of the Arduino and make it easy to add functionality; this particular shield is the Adafruit Industries Motor/Stepper/Servo Shield, which has a very complete feature set supporting servos, DC motors and stepper motors. The Bluetooth module, which uses AT commands, connects the smartphone with the robot. The HC-SR04 ultrasonic sensor uses sonar to determine the distance to an object, as bats and dolphins do. It offers excellent non-contact range detection with high accuracy and stable readings in an easy-to-use package, from 2 cm to 400 cm (about 1 inch to 13 feet); its operation is not affected by sunlight or black materials, and it comes with an ultrasonic transmitter and a receiver module. The system has two major parts: the Android application and the robot hardware device. New Android technologies, such as Google Voice and phone-motion input, were used in developing the application. To improve its security, a voice login was added, along with a program to change the login PIN; a robot-scan program was developed, and finally two control programs, one using buttons with the accelerometer and one using Google voice input. The Arduino IDE and the Arduino language are used to program the robot. Arduino has a simple methodology for running source code: it has a setup function and a loop function. Variables and other initialization go inside the setup function, while the loop function runs continuously, executing the content of its body. The AFMotor header is used in the code to access functions controlling the motor shield and the motors, and the SoftwareSerial header makes the connection between the Arduino and the Bluetooth module.
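The firmware described above is written in Arduino C; purely for illustration, its two key pieces of logic, dispatching single-character Bluetooth commands to motor actions and converting an HC-SR04 echo time to a distance, can be sketched as follows (the command letters are hypothetical, not the app's actual protocol):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound at roughly 20 degrees C

def echo_to_distance_cm(echo_us):
    """HC-SR04: the pulse travels to the object and back, so halve the time."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def dispatch(command):
    """Map a received Bluetooth command byte to a motor action."""
    actions = {"F": "forward", "B": "backward",
               "L": "left", "R": "right", "S": "stop"}
    return actions.get(command, "stop")  # unknown commands stop the robot
```

In the real sketch the same mapping would live in the Arduino `loop()` reading bytes from SoftwareSerial, and the distance check would gate forward motion.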
Using the black-box test method, the integrity, usability, reliability, and correctness of the Android application were checked. Finally, user acceptance tests were done with different kinds of users, and a field test verified that the robot can identify an object in front of it within the distance limit coded into the program. Today we live in a world of robotics: knowingly or unknowingly, we use different types of robots in our daily lives. The aim of this project is to show that we can design robots ourselves to do our work, cheaply and simply. We believe this project will be helpful for students interested in these areas and offers a good solution to everyday human needs. It has many applications and very good future scope, and it allows its components and parameters to be modified to get the desired output, customizing and automating the day-to-day things in our lives.

Item Android Tablet based Menu and Order Management System for restaurants (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Medhavi, Y.A.U.; Wijegunasekara, M.C.
In the traditional way of taking orders in a restaurant, once the customer selects food and beverages from a paper menu, a waiter uses pen and paper to take the order, passes it to the kitchen, and the customer waits until the order arrives. This process is unsatisfactory, inefficient and error-prone: the customer may wait a long time for the order, and during peak times the wait is much longer. The waiter might lose the paper, or the waiter's handwriting might be difficult for others to understand, causing the kitchen and the cashier to mix up orders and make calculation errors. The paper menu itself can be hard to navigate and may be outdated.
When the menu has a large number of items, it appears overwhelming to look through, so customers may not see all the items they are interested in. When price changes or item updates are required, the cost of reprinting and the associated environmental concerns must be considered. Several existing order-service systems were studied. Some had attractive features, but their user interaction and friendliness were not satisfactory; the studied systems were analyzed, the attractive order-service features were identified, and these features were implemented in a user-friendly way. The main objective of this work was to develop a tablet-based restaurant menu and order management system that automates the manual order service and overcomes the drawbacks of the studied systems. The implemented system comprises four parts: a mobile application for customers and three web-based systems for the admin panel, the kitchen and the cashier. The order is taken by a mobile device, namely a tablet placed on the restaurant table, which acts as the waiter. The mobile application is started by a waiter logging into the system and assigning the table number and a waiter identification; these are saved in the application until that waiter logs out. The mobile application has four subsystems: a display subsystem, an assistance subsystem, a commenting subsystem and an ordering subsystem. The display subsystem presents the complete restaurant menu by category along with special-offer information, and allows the customer to browse all currently available menu items. The assistance subsystem allows the customer to call a waiter for any assistance needed, and the commenting subsystem allows customers to create user accounts for adding comments and sharing their experience on Facebook/Twitter.
The ordering subsystem allows the customer to select the desired items and place the order. Once the customer places the order, he can first view the order information, including the payment with or without tax and service charge. After the customer confirms, the order is transmitted to the kitchen department via the Internet for meal preparation. The kitchen web system displays all order information received from the tablets, including the customer details, table number, waiter identification and the details of the order. After the order is prepared, the waiter delivers it to the customer; at the same time, the cashier web system receives the details of the delivered order and the bill is prepared. The web-based admin panel allows the restaurant's management to add/view/remove/update menu items and waiters, view reservation information and cooking/payment status, update the service charge and tax, and view revenue information over a time period. The design artifacts produced in this work cover architecture, application behavior and user interface; Figure 1 shows the architecture of the system. The implemented system consists of the server and a central database. Restaurant managers can access the database through the admin panel to make appropriate redeployments of food materials and evaluate the business status at any time; all ordering and expenditure information is stored in the database. The system is designed for Android tablets (7" screens), can also be used on smartphones with smaller screens, and is compatible with Android 2.3 and later. Eclipse and phpStorm were used as the IDEs, and the main languages are HTML, JavaScript, PHP, Java and XML. The system uses PHP to create a web service that returns JSON data from the server.
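The order-confirmation step above, a bill with optional tax and service charge exchanged as JSON, can be sketched as follows. The item list and the rates are hypothetical placeholders, and the real system builds this payload in PHP/Java rather than Python:

```python
import json

def make_bill(items, tax_rate=0.0, service_rate=0.0):
    """Build the order payload sent from the tablet: items plus charges.

    `items` is a list of (name, unit_price, quantity) tuples; the rates
    are illustrative placeholders, not the restaurant's actual charges.
    """
    subtotal = sum(price * qty for _, price, qty in items)
    bill = {
        "items": [{"name": n, "price": p, "qty": q} for n, p, q in items],
        "subtotal": subtotal,
        "service": round(subtotal * service_rate, 2),
        "tax": round(subtotal * tax_rate, 2),
    }
    bill["total"] = round(bill["subtotal"] + bill["service"] + bill["tax"], 2)
    return json.dumps(bill)  # JSON, as exchanged with the web systems
```

The kitchen and cashier systems would parse the same JSON to show order details and prepare the final bill.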
Different testing approaches were adopted to test the prototype software, and the bugs discovered during testing were corrected. The system provides a more convenient, maintainable, user-friendly and accurate method for restaurant management. Beyond that, the tablet-based menu replaces paper waste, reduces waiting time and increases the efficiency of the food and beverage order service. Using this system, a restaurant can reduce running costs and human error and provide a high-quality service while enhancing customer relationships. As future development, features such as paying the bill directly through the menu application should be added, and the application will be developed for other platforms such as Blackberry and iOS.

Item Animal Behavior Video Classification by Spatial LSTM (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Huy Q. Nguyen; Kasthuri Arachchi, S.P.; Maduranga, M.W.P.; Timothy K. Shih
Deep learning, the basis for building artificial intelligence systems, has become a very active research area in recent years. Current deep neural networks approach human-level recognition of natural images, even on huge datasets such as ImageNet. Among successful architectures, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are widely used to build complex models because of their advantages. A CNN reduces the number of parameters compared to a fully connected network; furthermore, it learns spatial features by sharing weights between convolution patches, which not only improves performance but also extracts similar features of the input. LSTM is an improvement on the vanilla Recurrent Neural Network (RNN): when processing time-series data, RNN gradients tend to vanish during training with backpropagation through time (BPTT), and LSTM was proposed to solve this vanishing problem. It is therefore well suited to managing long-term dependencies.
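The LSTM gating that mitigates the vanishing-gradient problem can be illustrated with a single scalar cell step. This is a didactic sketch with made-up weights, not the Conv-LSTM or LRCN models used in the study:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One scalar LSTM step; w maps each gate name to (input, recurrent, bias)."""
    def gate(name, squash):
        wi, wh, b = w[name]
        return squash(wi * x + wh * h + b)

    i = gate("input", sigmoid)    # how much new content to write
    f = gate("forget", sigmoid)   # how much old cell state to keep
    o = gate("output", sigmoid)   # how much of the cell to expose
    g = gate("cell", math.tanh)   # candidate content
    c_new = f * c + i * g         # additive update: gradients flow through f
    h_new = o * math.tanh(c_new)
    return h_new, c_new
```

With the forget gate saturated near 1, the cell state `c` is carried forward almost unchanged across steps, which is exactly the long-term memory behavior the abstract refers to.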
In other words, LSTM learns the temporal features of time-series data. This study focused on creating an animal video dataset and investigating how a deep learning system learns features from it. We propose a new dataset and experiments using two types of spatial-temporal LSTM, which allow us to discover latent information in animal videos; to our knowledge, this method has not been used with animal activities before. Our animal dataset was created under three conditions: the data must be videos, so that the network can learn spatial-temporal features; the subjects are popular animals, such as cats and dogs, since more data on them is easy to collect; and each video should contain one animal, without humans or any other moving objects. In the experiments, we performed the recognition task on the Animal Behavior Dataset with two types of models to investigate their differences. The first model is Conv-LSTM, an extended version of LSTM in which the input and output connections are replaced by convolutional connections. The second is the Long-term Recurrent Convolutional Network (LRCN) proposed by Jeff Donahue. More LSTM layers can easily be added to both models to make a deeper network. We performed classification using 900 training and 90 testing videos and reached a recognition accuracy of 66.7%, without any data augmentation. In the future we hope to improve the accuracy using preprocessing steps such as flipping and rotating video clips, and by collecting more data for the dataset.

Item An Application of Context Assured Ontology for Rule Based Cluster Selection in Psychotherapy (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Vidanage, K.; de Silva, O.
Personality trait analysis is considered a very important requirement in psychotherapy.
A consultant should have a sound awareness of the client's personality before commencing effective therapy sessions. In this research the OCEAN model of personality trait analysis is computationally implemented. The OCEAN model is an effective model used in psychology to determine the composition of human temperament. Expert knowledge associated with the five dimensions of the OCEAN model is captured and stored in the form of rule-based expert clusters. Additionally, an upper ontology is designed to control the context associated with the OCEAN model. Ontologies are very good at storing domain knowledge in the form of triples: various lexicon combinations depicting contexts can be grouped together and assigned as a specific object property, and different properties of the same object depict the various contexts the object may be exposed to. Here, the upper ontology acts as a navigator pointing to a specific knowledge cluster; the knowledge clusters determine the sub-facets of a particular trait as well as its intensity. Once the client describes the psychological discomfort being experienced as text through the interface, the text is processed with natural language techniques and the important semantics are identified. Depending on the semantics captured, the entered text is then sent to an established SPARQL query module. The SPARQL queries defined in the module are mapped to particular regions of the created ontology, so executing a particular query interrogates a specific region of the ontology. The end points of the ontology are further mapped to the different rule-based expert clusters, so the client's problem, entered as text, is ultimately directed to a particular rule-based expert cluster containing expert knowledge captured from psychologists. Eventually a similarity index is calculated, and the percentile composition of the personality traits is derived along the dimensions of the OCEAN model.
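The semantics-to-query routing described above can be sketched as follows. The trait keywords, the `ex:` namespace and the `hasFacet` property are hypothetical stand-ins for the actual ontology vocabulary and NLP pipeline:

```python
# Hypothetical mapping from extracted semantics to OCEAN trait regions.
TRAIT_KEYWORDS = {
    "worry": "Neuroticism",
    "party": "Extraversion",
    "plan": "Conscientiousness",
}

def build_query(text):
    """Select a SPARQL query targeting the ontology region matched by the text."""
    for keyword, trait in TRAIT_KEYWORDS.items():
        if keyword in text.lower():
            return (
                "PREFIX ex: <http://example.org/ocean#>\n"
                "SELECT ?facet WHERE { "
                f"ex:{trait} ex:hasFacet ?facet . }}"
            )
    return None  # no mapped semantics found in the text
```

In the actual system the matched query's results would then be forwarded to the corresponding rule-based expert cluster for similarity scoring.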
The developed prototype was evaluated in two ways. First, more than 30 expressed psychological inconveniences were captured from two famous discussion forums available globally for sharing psychological problems across the community, namely "Panic Center" and "Daily Strength". Each captured story was fed as input to the prototype and an OCEAN report was generated; the scenario and the generated report were then shared with a psychologist to evaluate the accuracy of the outcome. Evaluated against the expert knowledge of the psychologist, the prototype's outcomes showed more than 80% accuracy. As the second mechanism, results were compared against Truity, one of the most famous questionnaire-based online trait evaluation sites. A trait evaluation questionnaire designed using the OCEAN model was completed on Truity and the final result sheet was obtained; then, covering the same set of questions and the same answers provided, an artificial story was created and given as input to the prototype, generating another OCEAN report. The Truity-generated and prototype-generated reports were then compared: though small variations are visible in the percentile values, the inflation and deflation patterns of the two reports are almost identical. Both of these validation mechanisms show that the prototype-generated OCEAN report also exhibits an acceptable level of accuracy. Though there are ample questionnaire-based online trait analysis tools, there are almost no text-based trait analytics approaches. A questionnaire-based mechanism limits the expressiveness of the user or patient, since the patient is restricted to a pre-defined set of questions; with this prototype, no restrictions apply.
The user is at liberty to enter free-flowing thoughts. Rather than asking a psychologically distressed patient to fill in a questionnaire, which is not fair, this prototype allows users to express whatever comes to mind about their cognition; the chances of misinterpreting questionnaire questions and providing wrong answers are also addressed. To get the best from the system, it must be used under the governance of a psychologist or a psychiatrist: the prototype is intended to provide digital diagnostic assistance to consultants, and domestic use without the intermediation of a consultant will not give the intended benefits. The ultimate intention of this research is to improve the interaction between consultant and patient through a computational intervention, because the active ingredients of therapy come from the live interactions between them. As the literature shows, fully computational replacement of therapy has been an utter failure, but an effective blend of computing with live therapy has greatly improved the efficacy of psychotherapy.

Item Applying Smart User Interaction to a Log Analysis Tool (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Semini, K.A.H.; Wijegunasekara, M.C.
A log file is a text file. People analyze log files to monitor system health, detect undesired behaviors in the system, recognize power failures, and so on. In general these log files are analyzed manually using log analysis tools such as Splunk, Loggly, LogRhythm, etc. All of these tools analyze a log file and then generate reports and graphs representing the analyzed log data. Analyzing log files can be divided into two categories: analyzing history log files and analyzing online log files.
In this work, only the analysis of history log files is considered, for an existing log file analysis framework. Most log analysis tools can analyze history log files; to analyze a log file with any of the mentioned tools, it is necessary to select a time period. For example, if a user analyzes a log file for system health, the analysis reports the system health as 'good' or 'bad' only for the given time period. In general these tools provide the average system health over a given period. This is useful, but sometimes insufficient: people may need to know exactly what happens in the system each second in order to predict its future behavior or make decisions, and with these tools it is not possible to examine the log file at that level of detail. To do so, the user has to go through the log file line by line manually. As a solution to this problem, this paper describes a new smart concept for log file analysis tools. The concept works through a set of widgets and can replay executed log files. First, the existing log file analysis framework was analyzed, which helped in understanding the data structure of the incoming log files. Next, existing log analysis tools were studied to identify the main components and the features people like most. The new smart concept was designed using a replayable widget and graph widgets: the replayable widget replays the input log file, while the graph widgets graphically represent the analyzed log data. The replayable widget is the main part of the project and embodies the new concept. It is a simple widget that acts just like a player, with two components: a window and a button panel. The window shows the input log file, and the button panel contains play, forward, backward, stop and pause buttons. The log lines shown in the window of the replayable widget are held in a tree structure (Figure 1: left-most widget).
The button panel contains an extra button to view the log lines. These buttons are used to play the log lines, go to a requested log line, view a log line, and control playback. It was important to select a suitable chart library for the graph widgets: a number of chart libraries were analyzed, and D3.js was finally selected because its source is available, the free version carries no watermarks, and it offers more than 200 chart types, numerous chart features, and support for HTML5-based implementations. The following charts were implemented using the D3.js chart library: a bar chart of the pass/failure count; a timeline of when each pass/fail occurred; a donut chart of the total execution count; and a donut chart of the total pass/fail count. Every graph widget is bound to the replayable widget, so updates follow each action. The replayable widget and the graph widgets are implemented using D3.js, JavaScript, jQuery, CSS and HTML5. The replayable widget was successfully tested, and the implemented interface runs in the Google Chrome web browser. Figure 1 shows a sample interface of the design, generated using a sample log file of about 100 log lines: the left-most widget is the replayable widget holding the log file as a tree structure; the top-right widget is a graph widget, a bar chart of the pass/failure count; and the bottom-right widget is another graph widget, a timeline of when passes and failures occurred in the given log file. In addition, the analyzed log file can be visualized using donut charts. This paper described the new smart concept for log file analysis tools; the existing analysis tools mentioned do not contain this concept. Most log file analysis tools use graphs for data visualization.
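The data behind the bar chart and timeline, pass/fail counts and the times at which each occurred, could be aggregated from a log file as sketched below. The log-line format here is a hypothetical example, and the actual widgets are implemented in D3.js/JavaScript rather than Python:

```python
import re
from collections import Counter

# Assumed log format: "HH:MM:SS PASS|FAIL <message>" (illustration only).
LINE = re.compile(r"^(\d{2}:\d{2}:\d{2})\s+(PASS|FAIL)\b")

def aggregate(lines):
    """Return pass/fail counts and the timestamps at which each occurred."""
    counts, timeline = Counter(), []
    for line in lines:
        m = LINE.match(line)
        if m:
            ts, status = m.groups()
            counts[status] += 1          # feeds the bar/donut charts
            timeline.append((ts, status))  # feeds the timeline widget
    return counts, timeline

log = ["09:00:01 PASS step1", "09:00:02 FAIL step2", "09:00:03 PASS step3"]
```

A replayable widget would step through `timeline` entry by entry, updating the bound charts at each play/forward/backward action.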
The system was successfully implemented, and it was evaluated by a number of users who work with log files. This new concept will help log analysts, system analysts, data security teams and top-level management extract decisions about a system by analyzing the widgets and making predictions. Furthermore, the analyzed data would be useful for collecting non-trivial data for data mining and machine learning procedures. As future work, the system could be enhanced with features such as zooming, a drill-down method to customize graphs, and a mechanism to filter data according to user requirements.

Item Augmented Reality and its possibilities in Agriculture (In Sri Lankan Context) (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Musfira, F.A.; Linosh, N.E.
Since Sri Lanka is an agricultural country, its economy is mostly based on agriculture and agro-based industries, animal husbandry and other agriculture-based businesses. In Sri Lanka, agriculture continues to be the major occupation and way of life for more than half of the total population. Since information technology and the Internet have recently become an essential part of business processes, they have considerable potential for use in agricultural development, and Internet-based applications are more and more successful in agriculture and different parts of the food industry. In the information technology field, the emerging trend is Augmented Reality (AR). The field of AR has existed for just over a decade, but its growth and rapid progress in the past few years have been remarkable. The basic goal of an AR system is to enhance the user's perception of, and interaction with, the real world through virtual objects. There are several application areas, such as extension services, precision agriculture, e-commerce and information services, where augmented reality can be used.
When it comes to the application areas of technology, agriculture is a field where advanced technology is always introduced with a delay. This research analyzes how augmented reality can be used in agriculture. Certain applications of AR in agriculture are already present in European countries, but the technology is still in its infancy in Asian countries, especially in South Asia. In Sri Lanka, many opportunities to use these techniques in agriculture can be foreseen. The following are some instances of possible applications of AR in agriculture. The first is research: Sri Lanka has many agricultural research centers, such as the Sri Lanka Council for Agricultural Research, the Gannoruwa Agricultural Complex and the Agricultural Research and Development Centre, where enrichment of an image becomes necessary. Here AR can be used to add dimension lines, coordinate systems, descriptions and other virtual objects to improve the investigation and analysis of captured images. Another area where AR will probably appear in the near future is the cabin of modern agricultural machinery and mobile machinery. Some components of such a system already exist and are being used in the form of simple displays that present GPS information. A logical development of existing solutions is to add large displays or special glasses on which computer-drawn lines are superimposed over the image of the field, showing the route of passage or the plot boundaries. The third area is animal husbandry and farming. An AR system can be developed and installed in Sri Lankan farms for farm monitoring. Suitable software may allow individual animals to be identified on the screen, with simultaneous presentation of the relevant information about them, such as recorded data and the health status of the farm animals. Finally, in crop production it is possible to identify plants with a camera and an appropriate AR system.
This gives the ability to detect pests and to plan appropriate protective procedures. From studying the use of Augmented Reality technology in agriculture, it can be concluded that different types of services offer different possibilities. Mobile systems are developing very dynamically, both in the speed of data transmission and in services. New devices such as tablets and new services such as cloud computing and Near Field Communication (NFC) have great potential in agriculture. Augmented Reality can be allied with all of these technologies, expanding the possibilities and moving towards a new era in Sri Lankan agro farms. However, the topic must not be assessed only on the basis of the technology, taken out of its environment, since the whole area is very complex. This paper therefore focuses on finding and analyzing what Augmented Reality is and tries to highlight its possibilities in agriculture.

Item An Automated Solution for the Postal Service in Sri Lanka(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Piyawardana, M.D.S.N.; Wikcramarathna, R.M.D.; Vidanagama, D.U.

The significant development in the field of e-commerce is making it the only option for many business activities. Even though most businesses use numerous technological advancements, the postal centers in Sri Lanka still operate manually and face abundant challenges in dealing with postal activities, such as parcel handling and selling postal products, by hand. The primary reason for this is the lack of human resources in the postal centers. The main objective of this research is to implement an automated solution to overcome the current loose ends in the traditional postal service in Sri Lanka. Based on requirements and feasibility analysis, the system mainly covers mail/parcel scheduling and a domestic tracking system that lets customers track the mails or packages they post through the postal service.
The online shopping cart function would enable service communication locales (Post Shops) to sell their products, and the e-post card creator function would enable post cards to be sent via the Internet. Customers would be assured of secure online transactions through the online payment gateway. The research strategy shows the importance of automation for the Sri Lankan postal sector. The research therefore aims to address almost all the activities related to the postal sector in a user-friendly and accurate manner.

Item AWRSMS: An Approach to Enhance Apparel Warehousing and Retailing through IoT(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Jayathilaka, D.K.; Kottage, G.; Chankuma, C.; Dulakshi, C.; Herath, K.; Ganegoda, U.; Buhari, M.

Among the modern trends of the information technology field, the Internet of Things (IoT) is one of the pivotal and emerging technologies. In existing human-dependent systems, analysis can be done only if someone runs a query and checks for it. In the warehousing and storage industry, manually collected data are sent to an Enterprise Resource Planning (ERP) system or to a warehouse management system. These systems have limitations such as mismatches in issuing stock and problems in handling inventory items and placing them in the warehouse. A warehouse management system manages all the functions in a warehouse, but both kinds of systems use manual methods, such as barcodes, to collect data, so errors occur when entering large amounts of data into the system. Even a well-trained staff member can fail due to common human failures such as fatigue. Very few systems are fully automated and convert captured data into information in real time, and those systems are not able to control the functions of both a warehouse and a retail shop. Moreover, none of them uses new methods for customer promotion.
To give a comprehensive solution to the limitations of existing systems, the Apparel Warehouse and Retail Shop Management System (AWRSMS) was developed. The system manages the functions of both the warehouse and the retail shop in the apparel industry. It organizes, automates and synchronizes the activities of both places in an effective and efficient manner using IoT technology. In this system, all data about stocks, incoming goods and dispatched items are collected using Radio Frequency Identification (RFID) tags and readers, and the collected data are sent to the system's database, which is stored on a web server. Using these data, the system executes several functions, such as providing details of returns, new arrivals and dispatched items, dispatch mismatches, available items, stock updates and selected items, with the relevant reports. The system is implemented as three major modules: a web application, a data capturing module and a customer promotion module. The promotion module enables a location-based promotion process inside the retail shop and is one of the newest and most significant functions included in AWRSMS. It is a combination of a mobile application and a web application. In this module, the marketing manager can add promotions to the database using the web application, and the customer receives the ongoing promotion details through the mobile application. When a customer comes near a sales area that has an ongoing sales promotion, the system detects the customer's phone and sends the promotion message to it using IoT beacons. The mobile application installed on the customer's phone continuously searches for beacon ranges, connects with them and receives the relevant promotion messages over the Bluetooth Low Energy (BLE) signals transmitted by the beacons. Android Studio, Java and XML were used as the development tools for the promotion module.
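The beacon-to-promotion lookup described above can be sketched as follows; the actual module is an Android/Java application, and the beacon identifiers, RSSI readings and promotion texts below are illustrative assumptions:

```python
# Sketch of the location-based promotion lookup: the app scans BLE beacons,
# picks the strongest (i.e. nearest) one, and shows its promotion message.

PROMOTIONS = {
    "beacon-menswear": "20% off selected shirts today!",
    "beacon-footwear": "Buy one, get one free on sports shoes!",
}

def nearest_beacon(scans):
    """scans: dict of beacon_id -> RSSI in dBm (closer to 0 = stronger)."""
    return max(scans, key=scans.get) if scans else None

def promotion_for(scans):
    """Return the promotion of the nearest beacon, or None if out of range."""
    return PROMOTIONS.get(nearest_beacon(scans))

scans = {"beacon-menswear": -48, "beacon-footwear": -77}
print(promotion_for(scans))  # strongest signal wins: the menswear offer
```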
The rest of the system was developed using the Spring framework, Java EE, the Hibernate framework and MySQL. Testing and evaluation were carried out in three procedures to verify whether the system achieved its intended objectives. The first was module testing, done by treating each main function as a module, testing its functionality and evaluating it against the intended results immediately after completing the module. White-box testing methods were used for the module testing: test cases were designed for each module and testing was carried out based on them. Statement coverage for all the test cases was within 85%-100%. All the modules of the main web application reached a 100% accuracy level, the promotion module achieved 96% and the data capturing module 82%. After integrating the modules, the final testing phase was carried out using black-box testing; the modules of the web application achieved 100%, the promotion module 98% and the data capturing module 76% accuracy. To ensure that the user requirements were achieved as intended, a questionnaire was given to a selected sample of 50 members, including AWRSMS's end customers, people knowledgeable in technology and management, and people who are not. The questions were categorized under user friendliness, user experience, functionality, suggestions and recommendations. The questions on user friendliness and user experience mainly targeted end users who are not technology experts, to measure the usability of the system; the selected users commented after using the system without knowing its internal functions. The functionality section mainly targeted the technical people who tested the overall system, with the intention of figuring out the relevance and compatibility of each function with the user requirements. The suggestions and recommendations sections were used to explore further improvements.
The positive feedback gained on the user friendliness of AWRSMS was 80%, while 78%, 84% and 76% were obtained for recommendation, user experience and functionality respectively. 64% of the sample gave suggestions to upgrade the functionality. Comparing the aims and objectives with the gathered outcomes, AWRSMS has been completed in the intended and successful manner. By obtaining the required resources and making further improvements, such as using industrial-grade RFIDs and extending the mobile application with more features and an iOS version, the Apparel Warehouse and Retail Shop Management System will be an ultramodern and significant approach for the apparel industry.

Item Braille Messenger: SMS Sending Mobile App for Blinds Using Braille(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Udapola, U.B.H.S.; Liyanage, S.R.

The mobile phone is one of the essential devices in people's day-to-day life. People mostly use mobiles for communication, entertainment, scheduling tasks and so on. Among those tasks, for communication, people use voice calls, online chatting and the Short Message Service (SMS). But typing a message is not easy for blind or Visually Impaired (VI) people. At the beginning of the mobile era, mobiles had tactile buttons (hard keyboards), so typing texts using tactile buttons was much easier for blind users than using touch screens. But as mobile technology advanced, the market targeted feature-rich mobiles with accessibility features (screen reading) such as VoiceOver in iOS, Narrator in Windows and TalkBack in Android, so blind users could also move to smartphones. At the beginning, however, typing text on smart mobiles used the same QWERTY or 4x3 soft keyboards that sighted people use to input text.
In this method a blind user needs to move a finger over the keyboard, the system speaks out the touched key, and to input that key the user needs to double-tap on it. But for blind or VI people, the familiar way of reading and writing is the Braille system, invented by the Frenchman Louis Braille. Designers have therefore introduced braille-to-text methods for typing text. When designing an app around braille input, the multi-touch capability of the device must be considered. Even though most mobile phones have multi-touch capability, the number of points that can be detected simultaneously differs: it can be 2, 5 or 10, for example. So a design that uses a 6-point multi-touch feature is not suitable for devices with fewer than 6 multi-touch points, and the app will not produce the expected output on them. Conversely, a design that uses only the basic multi-touch feature (2 points) reduces efficiency and usability for users whose devices support the best multi-touch capability (10 points). Therefore, I propose a solution that offers different User Interface (UI) designs selected by checking the multi-touch capability of the device. I developed 3 different UI designs to support mobile devices with different multi-touch capabilities. Design A: type a single braille character using 2 fingers, with 3 taps needed to insert a character; it targets devices with only the basic multi-touch capability of 2 points. Design B: type a single braille character using 3 fingers, with 2 taps needed to insert a character; it targets devices with fewer than 6 but more than 2 multi-touch points. Design C: type a single braille character using 6 fingers, inserting a character with a single tap; it targets devices with the best multi-touch capability of 10, or more than 6, points.
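The capability check that selects between the three designs could be sketched as below (the app itself runs on Android; the handling of exactly 6 touch points is an assumption, since the boundaries quoted above leave that case ambiguous):

```python
def select_ui_design(max_touch_points):
    """Pick the braille-input UI design from the device's reported
    multi-touch capability, following the three designs described above.
    Treating exactly 6 points as Design C is an assumption."""
    if max_touch_points >= 6:
        return "C"   # 6 fingers, single tap per character
    if max_touch_points > 2:
        return "B"   # 3 fingers, 2 taps per character
    if max_touch_points == 2:
        return "A"   # 2 fingers, 3 taps per character
    raise ValueError("braille input needs at least 2 touch points")

print(select_ui_design(10))  # C
```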
Here the user first has to register reference points one by one, because the UI is user-customizable, meaning there is no restriction on how fingers are placed on the screen. The user just needs to register fingers for positions 1, 2, 3, 4, 5 and 6 respectively. I then used the K-NN algorithm to detect the input finger, treating each reference point's (x, y) coordinates as the center of a class. Here I assume that the user will not reposition his or her hand on the device. However, with repeated touching, the user's touch points automatically drift from the initially registered reference points, which can increase the error rate. So I used the K-Means algorithm to update the reference points (the centers of each class) with every single user tap. If the user repositions the hand, he or she has to register the reference points again, since there will then be a large variance between the registered reference points and the currently touched points. I use the 6-bit braille encoding method with voice and vibration feedback. Most apps use a Text-To-Speech (TTS) engine to read text; here I also included vibration rhythms so that deaf-blind people can identify braille characters, although this feature is available only for the Grade 1 braille system. Moreover, to make Braille Messenger more user-friendly, I used some simple gesture patterns to run commands such as WHITE SPACE, BACKSPACE and ENTER. To recognize these patterns, I store the coordinates of the drawn pattern and then classify the command using a mathematical algorithm. I also intend to offer the most frequently used words of more than 5 characters as predicted words, presenting a single word (the most frequently used one) rather than a list of all predictions. When I tested this app with the participation of 3 blind people, including one pseudo-blind person, I obtained on average 92.3% accuracy in detecting the inserted braille characters and 95% accuracy in detecting the drawn pattern commands.
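The reference-point scheme described above, nearest-neighbour classification of each touch plus a running-mean drift correction of the class centers, can be sketched as follows (the app is written for Android; the coordinates used here are made up):

```python
import math

class FingerClassifier:
    """1-NN over six registered reference points, with a running-mean
    update of each center (the K-Means-style drift correction)."""

    def __init__(self, reference_points):
        self.centers = list(reference_points)   # one (x, y) per braille dot 1..6
        self.counts = [1] * len(reference_points)

    def classify(self, touch):
        """Return the braille dot position (1-based) nearest to the touch."""
        i = min(range(len(self.centers)),
                key=lambda k: math.dist(self.centers[k], touch))
        # Running-mean update so the center follows the finger's drift.
        n = self.counts[i] = self.counts[i] + 1
        cx, cy = self.centers[i]
        self.centers[i] = (cx + (touch[0] - cx) / n, cy + (touch[1] - cy) / n)
        return i + 1

clf = FingerClassifier([(50, 100), (50, 200), (50, 300),
                        (250, 100), (250, 200), (250, 300)])
print(clf.classify((55, 105)))  # 1 (nearest to the first reference point)
```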
Over time, the typing speed on designs A, B and C increased with the number of sessions tried, and with 2 hands I obtained a maximum typing speed of 16 WPM.

Item Context-Aware Multimedia Services in Smart Homes(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Chih-Lin Hu; Kasthuri Arachchi, S.P.; Wimaladharma, S.T.C.I.

The evolution of “smart home” technologies exposes a broad spectrum of modern personal computers (PCs), consumer electronics (CE), household appliances and mobile devices to intelligent control and services in residential environments. With the high penetration of broadband access networks, the PC, CE and mobile device categories can be connected on home networks, providing a home computing context for novel service design and deployment. However, conventional home services are characterized by different operations and interactive usages among family members in different zones inside a house. It is promising to realize user-oriented and location-free home services with modern home-networked devices in smart home environments. This paper proposes a reference design for a novel context-aware multimedia system in home-based computing networks. The proposed system integrates two major functional mechanisms: an intelligent media content distribution mechanism and a multimedia convergence mechanism. The first performs intelligent control of services and media devices in a context-aware manner, integrating face recognition functions into home-based media content distribution services. Devices capable of capturing images can recognize the appearances of registered users and infer their changes of location inside the house. Media content played in the last location can thus be distributed to the home-networked devices closest to the users' current locations.
The second mechanism offers multimedia convergence among multiple media channels and renders users a uniform presentation for media content services in residential environments. It can provide not only local media files and streams from various devices on a home network, but also Internet media content that can be fetched online, transported and played on multiple home-networked devices. The multimedia convergence mechanism can thus introduce an unlimited volume of media content from the Internet to a home network. The development of the context-aware multimedia system can be described as follows. A conceptual system playground in a home network contains several Universal Plug and Play (UPnP) home-networked devices that are inter-connected on a single administrative network based on an Ethernet or Wi-Fi infrastructure. According to the UPnP specifications, home-networked devices are assigned IP addresses using auto-IP configuration or the DHCP protocol. UPnP-compatible devices can then advertise their presence on the network; when neighboring devices discover them, they can collaborate on media content sharing services. In addition, some UPnP-compatible devices are capable of face recognition and capture the front images of users inside the house. The captured images are sent to a user database and compared with the existing user profiles of the individuals in the family community. Once a registered user is recognized, the system can refer to the stored details of that particular user and then offer personal media services in a smart manner. The components and functionalities of the proposed system support both the intelligent media content distribution and the multimedia convergence mechanisms.
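The UPnP discovery step mentioned above is carried out with SSDP multicast; a minimal sketch of the M-SEARCH request a control point sends, using the standard SSDP address and port, is shown below. This is an illustration of the protocol, not the paper's Android implementation:

```python
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900  # standard SSDP multicast endpoint

def build_msearch(search_target="upnp:rootdevice", mx=2):
    """Build the SSDP M-SEARCH request a UPnP control point multicasts."""
    return ("M-SEARCH * HTTP/1.1\r\n"
            f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
            'MAN: "ssdp:discover"\r\n'
            f"MX: {mx}\r\n"
            f"ST: {search_target}\r\n\r\n")

def discover(timeout=2.0):
    """Multicast the request and collect device responses (may be empty)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch().encode(), (SSDP_ADDR, SSDP_PORT))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data.decode(errors="replace")))
    except socket.timeout:
        pass  # no more replies within the MX window
    finally:
        sock.close()
    return responses
```

Media servers answering this request then expose their content directories, which is what lets the renderer closest to a recognized user take over playback.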
Technically, the proposed system combines several components, such as a UPnP control point, a UPnP media renderer, a converged media proxy server, an image detector and a profile database of registered users and family communities. Though there are diverse media sources and formats in a home network, users retain the same operational behavior for sharing and playing media content, according to the common UPnP and Digital Living Network Alliance (DLNA) guidelines. The prototypical development achieved proof-of-concept software based on the Android SDK and JVM frameworks, which integrates intelligent media content distribution and converged media services. The resulting software is platform-independent and application-level; it can be deployed on various home-networked devices that are compatible with the UPnP standard device profiles, e.g., UPnP AV media servers, media players and mobile phones. A real demonstration was conducted with the software implementation running on various off-the-shelf home-networked devices. The proposed system is therefore able to offer a friendly user experience for context-aware multimedia services in residential environments.

Item Cost Effective High Availability Transparent Web Caching with Content Filtering for University of Kelaniya, Sri Lanka(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Pathirana, T.; Fernando, S.; Gunasekara, H.

The rapid growth of Internet usage at the University of Kelaniya, together with the concept of “Bring Your Own Device” (BYOD), has increased the issues with traditional proxy systems. The key problem is to introduce a suitable web caching system with content filtering that will enable end users to access the Internet without setting up proxy server details on their devices. This study analyses the network flow of the University of Kelaniya and introduces a transparent system that caches and filters content according to the university's existing policies.
The implementation should be a cost-effective, highly available caching mechanism that allows users to browse the Internet without changing their browser settings. It introduces the free and open-source proxy system Squid and a content filtering system, DansGuardian, on two dual-NIC Linux boxes running the Ubuntu operating system, placed between the Local Area Network and the firewall. Squid is a FOSS proxy widely used in the community as a traditional proxy provider. In this scenario Squid is configured as a transparent proxy listening on port 3128; using Linux iptables, all HTTP traffic arriving on the LAN-side interface is redirected to port 8080. The default gateway for the servers is the firewall, while all internal subnets are routed to the LAN L3 devices by the servers. Between the L3 device and the servers, load balancing is done based on port grouping. Before traffic is cached and forwarded according to the Squid rules, it is checked against the content filtering policies of DansGuardian, which listens on port 8080. Once content filtering is done, the response is sent to the requester. End users are configured with DHCP and no-proxy browser settings, so they notice no traditional proxy: all caching and filtering are transparent to them. After testing and fine-tuning with the wireless users for 2 months, the system was integrated for the whole network. As an enabler of BYOD, removing the existing proxy settings allowed any authorized user to access the Internet through the local network. The number of detected end computers rose drastically, and the need for high bandwidth rose with it. Analysis of loading times and bandwidth peaks confirmed that the system was stable. Subscribed Internet use rose to 100% at peak times and more than 50% off peak, compared to the 80% and 10% recorded for the traditional proxy.
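The redirection chain described above (iptables on the LAN-side NIC, DansGuardian on 8080, Squid on 3128) might look roughly as follows; the interface name and config paths are assumptions about this deployment, and Squid releases from 3.1 onwards spell the transparent option `intercept`:

```shell
# Redirect HTTP arriving on the LAN-side NIC (eth1 assumed) to DansGuardian
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 8080

# /etc/squid/squid.conf: accept intercepted traffic on port 3128
#   http_port 3128 transparent      # "intercept" on Squid 3.1+

# /etc/dansguardian/dansguardian.conf: listen on 8080, forward to Squid
#   filterport = 8080
#   proxyip = 127.0.0.1
#   proxyport = 3128
```

With these rules in place, clients keep their no-proxy browser settings; the NAT table, not the browser, steers port-80 traffic through the filter and cache.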
User comments were also more positive than for the previous system, as users can now bring their own devices and browse without consulting the IT helpdesk for proxy settings. The transparent proxy at the University of Kelaniya was the first long-term transparent proxy installation in a Sri Lankan university, and it influenced other institutes to adopt the concept. The only downfall is that the implemented system cannot detect or cache HTTPS traffic, which is encrypted. Web caching and content filtering are crucial when network bandwidth is a consideration, and in a university they have to be done while preserving the advantages for education. The implemented system is a cost-effective and reliable solution to the problem in a government and educational setting, allowing any authorized user to access the network with their own device without any major changes.

Item Data mining approach for Sales Prediction(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Rathnadiwakara, A.S.K.; Liyanage, S.R.

Nowadays predictive analysis is popular among companies seeking to improve their business profits, and such companies distinguish what they do from “data mining”. The characteristic distinction is that data mining is limited to the discovery of patterns, whereas predictive analytics allows the application of those patterns to new data to predict unknown values. The main aim of data mining is to extract knowledge from the data at hand, increasing its intrinsic value and making the data useful. Today most businesses use many strategies, mostly traditional methods, to improve their profits, and as a result the company's efficiency and profitability can reach a critical situation. In today's business arena, the most important things for a business are good efficiency and correct strategies. By converting to new technologies, companies can achieve their business goals and gain insight into their sales life-cycle.
This research was conducted for a medium-scale tyre dealing company situated in Colombo. It is important for the company to accurately predict its future order details and sales income before an unprofitable occasion arises. With prediction support the company can maintain sufficient stock, which is the best way to reduce time, save importing costs, grow income and manage resources. Data mining algorithms and techniques were used for the prediction process, with MS SQL Server 2008 R2, Analysis Services and the Business Intelligence Development Studio used for modeling. Analysis Services contains a number of standard data mining algorithms; Decision Tree, Neural Network and Clustering models were attempted for the prediction. A Decision Tree is a graph of decisions and their possible consequences, represented in the form of branches and nodes. A Neural Network is a parallel distributed processor that has a propensity for experiential knowledge and makes it available for users. Clustering places data elements into related groups without advance knowledge of the group definitions. The best algorithm was selected for each model, focusing on five main attributes referred to as factors affecting the sales process: Item Code, Item Type, Item Quantity, Item Value and Item Sold Date. Among the available variables, these five were selected for the mining process. The dataset was arranged with 30% of the data for testing and 70% for training. According to the prediction probabilities, the Decision Tree algorithm performed at 99.53%, the Neural Network algorithm at 73.36% and the Clustering algorithm at 67.79%. The Clustering model had the lowest prediction probability and was therefore the worst model; the Decision Tree model had the highest predicted value, 99.53%, making it the best model, while the Neural Network model was also a good one.
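The models above were built inside SQL Server Analysis Services; as a language-neutral illustration of the decision-tree idea, a one-level tree (a decision stump) that tries every threshold on every numeric attribute and keeps the split with the fewest misclassifications can be sketched as below. The toy rows and labels are made up, not the company's data:

```python
def best_stump(rows, labels):
    """Fit a one-level decision tree: for every feature and every observed
    threshold, count misclassifications of the split and keep the best."""
    best = None
    for f in range(len(rows[0])):
        for row in rows:
            t = row[f]
            pred = [1 if r[f] >= t else 0 for r in rows]
            errors = sum(p != y for p, y in zip(pred, labels))
            errors = min(errors, len(labels) - errors)  # allow flipped leaves
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best  # (misclassifications, feature index, threshold)

# Toy sales rows: (item_quantity, item_value) -> 1 = reordered, 0 = not
rows   = [(10, 5.0), (40, 2.5), (35, 3.0), (5, 9.0), (50, 1.0)]
labels = [0, 1, 1, 0, 1]
print(best_stump(rows, labels))  # (0, 0, 35): quantity >= 35 splits perfectly
```

A full decision tree repeats this split search recursively on each branch; the stump shows the core criterion in a few lines.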
The score results indicate that the Decision Trees mining model has the best score, 1.00, followed by the Neural Network mining algorithm with a score of 0.92 and the Clustering mining algorithm with 0.94. The data mining lift chart for the mining structures graphically represents the improvement that a mining model provides when compared against a random guess, measures the change in terms of a lift score, and allows the lift scores for various portions of the dataset and for different models to be compared. In the lift chart representation, the Decision Trees curve lies above the other curves. Considering the lift chart, the score and the target population together with the prediction probabilities, the Decision Trees algorithm was the best for the prediction process, and the final data mining model was implemented using it. With these prediction results, the company can handle its imports while optimizing the available resources: storage, time and money. This research would therefore benefit the company in improving its income.

Item Detection of Vehicle License Plates Using Background Subtraction Method(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Ashan, M.K.B.; Dias, N.G.J.

The detection of a vehicle license plate can be considered the primary task of a License Plate Recognition System (LPRS). Detecting a vehicle, locating the license plate and the non-uniformity of license plates are a few of the challenges in detecting a license plate. This paper proposes work to ensure the detection of the license plates used in Sri Lanka. The work consists of a prototype developed using Matlab's predefined functions. The license plate detection process consists of two major phases: detection of a vehicle from a video footage or a real-time video stream, and isolation of the license plate area from the detected vehicle.
By sending the isolated license plate image to an Optical Character Recognition (OCR) system, its contents can be recognized. The proposed detection process may depend on factors such as the lighting and weather conditions, the speed of the vehicle, efficiency in real-time detection, the non-uniformity of number plates, the video source device specifications and the fitted angle of the camera. The first phase, detection of a vehicle from a video source, is accomplished by separating the input video into frames and analyzing the frames individually. A monitoring mask is applied at the beginning of processing to define the road area, which lets the algorithm look for vehicles in that selected area only. To identify the background, a foreground detection model based on an adaptive Gaussian mixture model is used. The learning rate, the threshold value that determines the background model and the number of Gaussian modes are the key parameters of the foreground detection model, and they have to be configured according to the environment of the video. The background subtraction approach is used to determine the moving vehicles: a reference frame, identified as the background in the previous step, is subtracted from the current frame, and the blobs which are considered to be vehicles are detected. A blob is a collection of pixels, and the blob size has to be configured according to factors such as the angle of the camera to the road and the distance between the camera and the monitoring area. Even though a vehicle is identified in the above steps, a way of identifying each vehicle uniquely is needed to eliminate duplicates being processed in the next layer. As the final step of the first layer, the algorithm generates a distinct number, using the Kalman filter, for each vehicle detected in the previous steps.
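The subtraction-and-threshold step described above (reference background minus current frame, surviving pixels grouped into blobs) can be sketched with NumPy; the original prototype uses MATLAB, and the threshold and image sizes here are illustrative:

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Mark pixels whose absolute difference from the reference background
    exceeds the threshold; these form the candidate vehicle blobs."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy 8x8 grayscale scene: a flat background plus a bright moving "vehicle"
background = np.full((8, 8), 50, dtype=np.uint8)
frame = background.copy()
frame[2:5, 3:6] = 200          # the moving object
mask = foreground_mask(frame, background)
print(int(mask.sum()))         # 9 foreground pixels: the 3x3 object
```

A real pipeline would follow this with connected-component labelling and the blob-size filter the abstract describes, so that noise specks are not mistaken for vehicles.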
This distinct number identifies a particular vehicle until it leaves the global window. The second phase of license plate detection then begins, in order to isolate the license plate from the detected vehicle image. First, the input image is converted into grayscale to reduce the luminance of the color image, and then it is dilated. Dilation is used to reduce the noise of an image, to fill unnecessary holes in the image, and to improve the boundaries of objects by filling broken lines. Next, horizontal and vertical edge processing is carried out and histograms are drawn for both. The histograms are used to detect the probable candidate regions where the license plate is located. The histogram values of edge processing can change drastically between consecutive columns and rows; these drastic changes are smoothed, and the unwanted regions are then detected using the low histogram values. By removing these unwanted regions, the candidate regions which may contain the license plate are identified. Since the license plate region is expected to have a few letters placed closely on a plain-colored background, the region with the maximum histogram value is considered the most probable candidate for the license plate. In order to demonstrate the algorithm, a prototype was developed using MATLAB R2014a. Additional hardware plugins, such as the Image Acquisition Toolbox Support Package for OS Generic Video Interface, the Computer Vision System Toolbox and the Image Acquisition Toolbox, were used for the development. When the prototype is used for a certain video stream or file, first and foremost the parameters of the foreground detector and the blob size have to be configured according to the environment; then the monitoring window and the hardware configurations can be set.
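The edge-histogram localisation can be illustrated in one dimension: compute per-column edge energy, smooth the drastic changes, and keep the band around the maximum as the plate candidate. This is a simplified NumPy stand-in for the MATLAB row/column analysis described above, with a made-up test image:

```python
import numpy as np

def plate_column_band(gray, win=5):
    """Return the column range with the highest smoothed vertical-edge
    energy, a crude 1-D version of the histogram-based candidate search."""
    edges = np.abs(np.diff(gray.astype(np.int16), axis=1))  # vertical edges
    hist = edges.sum(axis=0).astype(float)                  # per-column energy
    smooth = np.convolve(hist, np.ones(win) / win, mode="same")
    peak = int(smooth.argmax())
    return max(0, peak - win), min(len(smooth), peak + win)

# Toy image: flat background, a "plate" of closely spaced dark strokes
img = np.full((20, 40), 120, dtype=np.uint8)
img[8:14, 10:21:2] = 0     # alternating strokes, like letters on a plain plate
start, end = plate_column_band(img)
print(start, end)          # the band covers the lettered region
```

The same scan over rows gives the vertical extent, and intersecting the two bands yields the rectangular plate candidate.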
The prototype developed using the algorithm discussed in this paper was tested using both video footage and static vehicle images. These data were first grouped considering factors such as the non-uniformity of number plates and the fitted angle of the camera. Vehicle detection showed an efficiency of around 85%, and license plate locating efficiency was around 60%; therefore, the algorithm showed an overall efficiency of around 60%. The objective of this work is to develop an algorithm that can detect vehicle license plates from a video source file/stream. Since the problem of detecting vehicle license plates is crucial for several complex systems, the proposed algorithm would fill that gap.

Item Development of a Location Based Smart Mobile Tourist Guide Application for Sri Lanka(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) de Silva, A.D.; Liyanage, S.R.

Tourism plays a momentous role in the achievement of macroeconomic stability in Sri Lanka. It is one of the main industries that generates high revenue for Sri Lanka. The amount of foreign currency earned from the tourism industry has decreased significantly during the past few years according to observations and collected data. This can be partially attributed to the lack of loyalty of physical tour guides as well as to tour guide booklets not being updated regularly. Considering the above issues, we propose a mobile application named "Live Tour Guide" to make travelling easier for tourists, thereby creating a positive impact on the economy of Sri Lanka. A meticulous investigation was carried out in order to identify the software and hardware requirements for this automated tour guide application. The feasibility analysis for the system was carried out in three areas: operational, economic and technical.
Since this application contains details about hotels, attractive places and the longitudes/latitudes of different locations, an external source was needed to collect these data. Under the assumption that the particular websites are updated regularly, dedicated websites were used to gather the required information. The direct observation data collection method was also utilized to identify the work carried out by tour guides, their behaviour, the way they treat tourists, etc. The system has been developed focusing on two main elements: the mobile application and the web server. The web server is used to access the cached data or information through the mobile application. Information regarding different locations, such as longitudes and latitudes, was gathered with the use of the Global Positioning System (GPS). Google Maps was employed to access map-based services. The central web server can be accessed through the Internet using wireless connectivity or a 3G connection. The web server serves the current location information and also provides the details of the hotels and attractive places situated close by, allowing tourists to plan their journey accurately in advance with minimum effort. An external database has been developed using MySQL in order to maintain the details of the places of interest. JavaScript Object Notation (JSON) objects are used to exchange the location data between the web server and the application program. The Google Maps Application Programming Interface is used to access Google Maps. The "Live Tour Guide" mobile application has been developed to provide real-time location-based services according to the requirements of tourists. The system has been tested to operate on any smartphone with Android operating system version 4.2 or later.
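The JSON-based exchange of location data might look like the following sketch. The payload field names and the nearby-place filter are illustrative assumptions, not the actual "Live Tour Guide" API; only the use of JSON objects carrying latitude/longitude comes from the abstract.

```python
import json
import math

# A hypothetical JSON payload like the one the web server might return.
payload = json.dumps([
    {"name": "Hotel A",  "lat": 6.9271, "lng": 79.8612},   # Colombo area
    {"name": "Temple B", "lat": 7.2906, "lng": 80.6337},   # Kandy area
])

def haversine_km(lat1, lng1, lat2, lng2):
    # Great-circle distance between two lat/lng points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def places_near(payload, lat, lng, radius_km):
    # Decode the JSON objects and keep only places within `radius_km`.
    return [p["name"] for p in json.loads(payload)
            if haversine_km(lat, lng, p["lat"], p["lng"]) <= radius_km]

print(places_near(payload, 6.9271, 79.8612, 50))   # -> ['Hotel A']
```

On the device, the decoded coordinates would then be handed to the Google Maps API for display along the chosen route.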
When a user enters the source and the destination, the application displays the route, the estimated time for the journey without traffic and the distance between the origin and the destination. Along with that, it provides two options, "Locations" and "Hotels". These two options provide the details of all the available hotels as well as attractive places located close by along the preferred route. Apart from the mobile application, a "Live Tour Guide" web application has also been developed for maintaining the database in a user-friendly manner; it can be used by travel agencies. By using all the above-mentioned technologies together with real data, the objective of developing this Android-based "Live Tour Guide" application was successfully achieved. Even though some solutions are already available as tour guides, the "Live Tour Guide" application allows tourists to plan their tour before they start their journey, by supporting various origins and destinations. It allows tourists to choose the locations that they prefer to visit during their journey, since it provides all the information, including prices. Any user who is equipped with an Android-based smartphone is eligible to use this application. However, in future this system should be enhanced to display all the public places available within a selected route, and a way is needed to access the "Live Tour Guide" application accurately even without an internet connection. Currently, the database is updated manually, but it would be better to update it automatically at regular intervals so that the application operates more accurately.
With this innovative application, more tourists can be attracted, creating a positive impact on the economy of Sri Lanka.

Item Driver Assist Traffic Signs Detection and Recognition System(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Manisha, U.K.D.N.; Liyanage, S.R.

Traffic signs or road signs are signs erected at the roadside to inform drivers and pedestrians of the upcoming behaviour of the road. Road signs were introduced in Europe in the 1930s with the increasing use of vehicles, and many countries have since adopted and standardized their signs to enhance the safety of road users. Since the number of vehicles in the world keeps increasing, road traffic has increased as well. Especially in urban areas, pedestrian activity on the road is generally high along with the road traffic, so drivers may lose their concentration on the traffic signs because of nearby vehicles and pedestrian activity. There are also many notice boards with various colours and textures at road sides, which can make it hard to distinguish the traffic signs by eye. Violating traffic signs may cause drivers to have accidents and also brings unnecessary problems such as penalties under the law. To ensure a safer and more convenient drive, the recognition of traffic signs has been automated. Computer vision is a promising approach for addressing this problem; it is an interdisciplinary field that studies how computers can be made to gain high-level understanding from digital images. The first automated traffic sign recognition was reported in Japan in 1984. Since then, a number of methods have been developed for traffic sign detection and recognition.
This paper presents a 'Driver Assist Traffic Signs Detection and Recognition System' which is capable of detecting, recognizing and indicating traffic signs at the roadside to the driver, to ensure a safe and convenient drive by acknowledging the behaviour of the road. The proposed system mainly consists of two phases: a detection phase and a recognition phase. In the two phases I have used classifiers built with different technologies: computer-vision image processing techniques and machine learning techniques, respectively. In the detection phase I have used a cascade classifier to analyse each frame of the input and find the traffic signs in it. To train the classifier I provided over 3000 positive sample images with regions of interest (ROIs) containing traffic signs, and over 15000 negative sample images that do not contain any traffic signs. Haar-like features of the images were used to train the classifier with a proper false-alarm rate. The aspect ratio of most 3D objects changes with the location of the camera; since the classifier is very sensitive to the aspect ratio of the traffic sign, I had to use as many training images as possible so that the training set covers almost all orientations of traffic signs. The main objective of the detection phase is to detect the presence of traffic signs and return the coordinates of the sign for each frame. In the recognition phase I have used machine learning techniques to train a category classifier, a support vector machine (SVM), to recognize and indicate the traffic signs found by the detector. Histogram of Oriented Gradients (HOG) features were used to train the SVM, by extracting the features from the training sets and storing them in separate classes as separate categories. Each coordinate returned by the detector is used to crop the original frame and produce an input image for the category classifier.
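The HOG descriptor used in the recognition phase can be illustrated with a minimal sketch. This computes a single orientation histogram over a whole patch; real HOG (as used with the SVM above) also divides the patch into cells and normalises over blocks, and the bin count here is an illustrative assumption.

```python
import numpy as np

def hog_features(gray, bins=9):
    # Gradient magnitudes and orientations over the patch.
    gray = gray.astype(float)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # horizontal gradient
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # vertical gradient
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180   # unsigned orientation
    # Magnitude-weighted histogram of orientations.
    hist, _ = np.histogram(angle, bins=bins, range=(0, 180),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-9)              # normalised descriptor

# A patch of vertical stripes has purely horizontal gradients, so the
# descriptor's energy concentrates in the 0-degree (first) bin.
stripes = np.tile([0, 0, 255, 255], (32, 8))       # 32 x 32 patch
desc = hog_features(stripes)
print(desc.argmax())
```

The SVM then scores a descriptor like `desc` against each of the trained sign categories, as described next.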
For each input image, the category classifier gives a separate score for each category by matching the HOG features of the image. The highest score gives the nearest category, and I have obtained an optimal score threshold to ensure the accuracy of the recognition phase. The main objective of the recognition phase is to choose the correct category of the traffic sign found by the detector and to indicate that category. In the detection phase I tried LBP and HOG as feature extraction methods along with Haar-like features, and found that Haar-like features give the highest accuracy. In the recognition phase I chose 11 categories of traffic signs for the training process and obtained an optimal score threshold of -0.04 for the best accuracy. The proposed system can detect, recognize and indicate traffic signs with high accuracy, not only in daylight but also at night, and can be implemented in any vehicle. The detection process achieves over 88% accuracy, and in the recognition process the accuracy of classifying the category of a detected sign is over 98%. In real-time testing, the overall system achieves over 88% accuracy at speeds of 45-50 km/h.

Item An Emotion-Aware Music Playlist Generator for Music Therapy(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Dissanayaka, D.M.M.T.; Liyanage, S.R.

Music has the ability to influence both mental and physical health. Music therapy is the application of music for the rehabilitation of brain activity and the maintenance of both mental and physical health. Music therapy comes in two forms: active and receptive. Receptive therapy takes place by having the patient listen to suitable music tracks. Normally music therapy is used by people who suffer from disabilities or mental ailments, but the healing benefits of music can be experienced by anyone, at any age, through music therapy.
This research proposes an Android mobile music application with an auto-generated playlist that follows its user's emotional state, which can be used in telemedicine as well as in day-to-day life. Three categories of emotional condition were considered in this study: happy, sad and angry. Live images of the user are captured from an Android device. The face detection API available in the Android platform is used to detect human faces and eye positions. After the face is detected, the face area is cropped. The image is greyscaled and converted to a standard size in order to reduce noise and to compress the image size. Then the image is sent to the MATLAB-based image-recognition sub-system using a client-server socket connection. A Gaussian filter is used to reduce noise further in order to maintain high application accuracy. The edges of the image are detected using Canny edge detection to identify the details of the facial features; the resulting images appear as sets of connected curves that indicate the surface boundaries. Emotion recognition is carried out using training datasets of happy, sad and angry images that are input to the emotion recognition sub-system implemented in MATLAB, using Eigenface-based pattern recognition. In order to create the Eigenfaces, average faces of the three categories are created by averaging the database images in each category pixel by pixel. Each database image is subtracted from the average image to obtain the differences between the images in the dataset and the average face. Then each image is reshaped into a column vector. The covariance matrix is calculated to find the eigenvectors and associated eigenvalues, and the weights of the Eigenfaces are calculated. To find the matching emotion label, the Euclidean distance between the weights is calculated for each category; the class of the image with the lowest distance to the input image is identified.
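The Eigenface steps described above can be sketched compactly. This is a minimal illustration, assuming tiny synthetic "faces" and using SVD to obtain the covariance eigenvectors; it is not the authors' MATLAB implementation.

```python
import numpy as np

def eigenface_train(images, k=2):
    # Mean face, mean-subtracted image vectors, and eigenvectors of the
    # covariance matrix (computed via SVD for numerical stability).
    X = np.stack([im.ravel().astype(float) for im in images])
    mean_face = X.mean(axis=0)
    A = X - mean_face
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    eigenfaces = vt[:k]
    weights = A @ eigenfaces.T        # per-image weights of the Eigenfaces
    return mean_face, eigenfaces, weights

def classify(image, mean_face, eigenfaces, weights, labels):
    # Project the input image and pick the label with the smallest
    # Euclidean distance in weight space.
    w = (image.ravel().astype(float) - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)
    return labels[int(dists.argmin())]

# Toy data: two bright and two dark 4x4 patches stand in for two
# emotion classes (real training images would be face photographs).
faces = [np.full((4, 4), v) for v in (200, 210, 20, 30)]
labels = ["happy", "happy", "sad", "sad"]
mean_face, eigenfaces, weights = eigenface_train(faces)
print(classify(np.full((4, 4), 205), mean_face, eigenfaces, weights, labels))
```

A bright test patch lands nearest the "happy" weights, mirroring how a real input face is assigned the label of its nearest class in weight space.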
The identified label (sad, angry or happy) is sent back to the Android application. Songs that are pre-categorised as happy, sad or angry are stored in the Android application; when the emotion label of the perceived face image is received, songs relevant to that label are loaded into the Android music player. 200 face images were collected at the University of Kelaniya for validation, and another 100 happy, 100 sad and 100 angry images were collected for testing. Out of the 100 test cases with happy faces, 70 were detected as happy; out of the 100 sad faces, 61 were detected as sad; and out of the 100 angry faces, 67 were successfully detected. The overall accuracy of the developed system over the 300 test cases was 66%. This concept can be extended for use in telemedicine, and the system has to be made more robust to noise, different poses and structural components. The system can also be extended to include other emotions that are recognizable via facial expressions.

Item End-user Enable Database Design and Development Automation(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Uduwela, W.C.; Wijayarathna, G.

An information system (IS) is a combination of software, hardware and network components working together to collect, process, create and distribute data for business operations. It consists of "update forms" to collect data, "reports" to distribute data and "databases" to store data. IS plays a major role in many businesses because it improves business competitiveness. Although SMEs are interested in adopting IS, they are often held back by other factors: time, the underlying cost and the availability of ICT experts. Hence, the ideal solution for them is to automate the process of IS design and development, without requiring ICT expertise, at an affordable cost. Software tools are available on the Web to generate the "update forms" and "reports" automatically for a given database model.
However, there is no approach to generate the databases of an IS automatically. The relational database model (RDBM) is the most commonly used database model in IS due to its advantages over other data models. These advantages come from its design, but it is not a natural way of representing data. The model is a collection of data organized into multiple tables/relations linked to one another using key fields; these links represent the associations between relations. Typically, tables/relations represent entities in the domain. A table/relation has columns and rows, where the columns represent the attributes of the entity and the rows represent the records (data). Each row in a table should have a key to identify that row uniquely. Designers have to identify these elements from the given data requirements in the process of RDBM design, which is difficult for non-technical people. The process of designing an RDBM has a few steps: collect the set of data requirements, develop the conceptual model, develop the logical model, and convert it to the physical model. Though there are approaches to automate some steps of the process of designing and developing an RDBM, they too require technical support. Thus, a mechanism is required to automate the database design and development process that overcomes the difficulties in existing RDBM automation approaches, so that non-technical end-users will be able to develop their databases by themselves. Hence, a comprehensive literature survey was conducted to analyse the feasibility and difficulties of automating the process of RDBM design and development. Uduwela et al. argue that the "form" is the best way to collect the data requirements of the database model for its automation, because a form is semi-structured compared with natural language (the most common way of presenting data requirements) and is very close to the underlying database.
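The relational elements described above (tables for entities, columns for attributes, primary keys identifying rows, and key fields linking relations) can be shown in a minimal sketch. The schema itself is an illustrative assumption, not taken from the paper.

```python
import sqlite3

# Two relations linked by a key field: `customer` (an entity) and
# `customer_order`, whose foreign key represents the association.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,   -- identifies each row uniquely
        name        TEXT NOT NULL          -- an attribute of the entity
    );
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        total       REAL
    );
""")
conn.execute("INSERT INTO customer VALUES (1, 'Amal')")
conn.execute("INSERT INTO customer_order VALUES (10, 1, 2500.0)")

# The key field joins the two relations, recovering the association.
row = conn.execute("""
    SELECT c.name, o.total
    FROM customer c
    JOIN customer_order o ON o.customer_id = c.customer_id
""").fetchone()
print(row)   # ('Amal', 2500.0)
```

Identifying exactly these elements (entities, attributes, keys, associations) from informal data requirements is the step that is hard to automate for non-technical users.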
Approaches are available to automate the development of the conceptual model from the given data requirements. This is the most critical step in the RDBM design process, because it must identify the elements of the model (entities, their attributes, the relationships among the entities, keys and cardinalities). Form-based approaches were analysed using the data available in the literature to recognize the places where user intervention is needed. The analysis shows that all approaches need user support, and their outcomes need manual correction, because the elements are not consistent across business domains; they differ from domain to domain and even within the same domain. Further, these approaches demand user support to prepare the initial input from the data requirements (the set of forms) used to identify the elements of the conceptual model. The next step of the process is developing the logical model based on the conceptual model. The outcome of the logical model should be a normalized database, which eliminates data insertion, update and deletion anomalies by reducing data redundancy. Data redundancy is often caused by functional dependencies (FDs), which are sets of constraints between two sets of attributes in a relation. The database can be normalized by removing undesirable FDs (partial dependencies and transitive dependencies). We could not identify any approach that generates a normalized database diagram automatically and directly from the data requirements; existing approaches require the FDs as input to generate the normalized RDBM. Designers need high perception and skill to identify the correct FDs, because FDs also depend on the domain, which is a problem for automation. FDs can be discovered by data mining, but this also yields an incorrect set of FDs if there are insufficient data combinations.
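The FD-mining idea can be made concrete with a toy check: a dependency X → Y holds in a dataset when every value of X maps to exactly one value of Y. The attribute names are illustrative, and, as noted above, too few rows can make a spurious FD appear to hold.

```python
def holds(rows, lhs, rhs):
    # Check whether the functional dependency lhs -> rhs holds in a
    # list of rows (dicts): each lhs value must determine one rhs value.
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

rows = [
    {"emp": "e1", "dept": "sales", "dept_city": "Colombo"},
    {"emp": "e2", "dept": "sales", "dept_city": "Colombo"},
    {"emp": "e3", "dept": "hr",    "dept_city": "Kandy"},
]
print(holds(rows, ["dept"], ["emp"]))         # False: one dept, many emps
print(holds(rows, ["dept"], ["dept_city"]))   # True: dept determines city
```

The second dependency (a non-key attribute determining another) is exactly the kind of transitive dependency that normalization would remove by splitting `dept`/`dept_city` into their own relation.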
Developing the physical model from the logical model is straightforward, and relational database management systems help to automate it. According to the analysis, it can be concluded that the existing approaches to conceptual model development cannot produce accurate models, as a distinct model has to be developed for each problem. Normalization approaches also cannot be fully automated, as FDs vary across business domains and even within the same domain. This suggests that there should be a database model that can be designed and developed by end-users without any expert knowledge. The proposed model should be neither domain-specific nor problem-specific. It would be better if the approach could convert the data requirements to the database model directly, without intermediate steps like those in the RDBM design process. Further, it would be better if the proposed model could run on existing database management systems too.