Symposia and Conferences
Permanent URI for this community: http://repository.kln.ac.lk/handle/123456789/15606
Item Android smartphone operated Robot (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Thiwanka, U.S.; Weerasinghe, K.G.H.D.

At present, the open-source Android platform is widely used in smartphones. Android is a complete software package consisting of an operating system, a middleware layer, and core applications. Android-based smartphones are becoming more powerful and are equipped with several accessories that are useful for robotics. The purpose of this project is to provide a powerful, user-friendly computational Android platform together with a simpler robot hardware architecture. This project describes a way of controlling robots using a smartphone and Bluetooth communication. Bluetooth has changed how people use digital devices at home and in the office, and has turned traditional wired digital devices into wireless ones. The project is mainly built around the Google voice-recognition feature, which is used to send commands to the robot. The robot's motion can also be controlled with the accelerometer and the buttons in the Android app. Bluetooth communication is used specifically as the network interface, and the robot's motion is controlled according to the commands received from the application. The consistent output of a robotic system, along with its quality and repeatability, is unmatched. This project aims to provide a simple framework for building robots at very low cost but with the high computational and sensing capabilities of the smartphone that is used as the control device. Using this concept, we can help disabled people do their work more easily (e.g., a motorized wheelchair, or remotely controlling equipment with the smartphone). The same concept can be used to build surveillance and reconnaissance robots, to design home automation, and to control any kind of device that can be operated remotely.
Several hardware components were used, such as an Arduino Uno, an Adafruit Motor Shield, a Bluetooth module, and an ultrasonic distance-measuring transducer sensor. The Uno is a microcontroller board based on the ATmega328P. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Arduino uses shield boards, which plug onto the top of the Arduino and make it easy to add functionality. This particular shield is the Adafruit Industries Motor/Stepper/Servo Shield. It has a very complete feature set, supporting servos, DC motors, and stepper motors. The Bluetooth module, which is configured with AT commands, connects the smartphone to the robot. The HC-SR04 ultrasonic sensor uses sonar to determine the distance to an object, as bats and dolphins do. It offers excellent non-contact range detection with high accuracy and stable readings in an easy-to-use package, over a range of 2 cm to 400 cm (about 1 inch to 13 feet). Its operation is not affected by sunlight or black materials. It comes with an ultrasonic transmitter and a receiver module. The system has two major parts: the Android application and the robot hardware. The Android application uses newer Android technologies such as Google Voice and phone motion. To improve security, a voice login was added, along with programs to change the login PIN, to scan for the robot, and to control the robot in two ways: buttons combined with the accelerometer, and Google voice input. The Arduino IDE and the Arduino language were used to program the robot. Arduino has a simple methodology for running source code: a setup function and a loop function. Variables and initialization code go inside the setup function, while the loop function runs continuously, executing the contents of its body.
The AFMotor header provides the functions that control the motor shield and the motors, and the SoftwareSerial header establishes the connection between the Arduino and the Bluetooth module. Using the black-box testing method, the integrity, usability, reliability, and correctness of the Android application were checked. Finally, user-acceptance tests were carried out with different kinds of users. A field test was done to check whether the robot could identify an object in front of it at the distance limit coded into the program. Today we live in a world of robotics; knowingly or unknowingly, we use different types of robots in our daily lives. The aim of this project is to evaluate whether we can design robots ourselves, in a simple and low-budget way, to do our work. We believe this project will be helpful for students interested in these areas and offers a good solution to everyday problems. The project has many applications and very good future scope, and it allows its components and parameters to be modified to obtain the desired output, making it possible to customize and automate our day-to-day activities.

Item Detection of Vehicle License Plates Using Background Subtraction Method (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Ashan, M.K.B.; Dias, N.G.J.

The detection of a vehicle license plate can be considered the primary task of a License Plate Recognition System (LPRS). Detecting a vehicle, locating the license plate, and the non-uniformity of license plates are a few of the challenges in license-plate detection. This paper proposes a method for detecting the license plates used in Sri Lanka. The work consists of a prototype developed using MATLAB's predefined functions. The license-plate detection process consists of two major phases.
They are the detection of a vehicle from video footage or a real-time video stream, and the isolation of the license-plate area from the detected vehicle. By sending the isolated license-plate image to an Optical Character Recognition (OCR) system, its contents can be recognized. The proposed detection process may depend on factors such as lighting and weather conditions, the speed of the vehicle, efficiency in real-time detection, non-uniformity of number plates, the specifications of the video source device, and the fitted angle of the camera. In the first phase, the detection of a vehicle from a video source is accomplished by separating the input video into frames and analysing the frames individually. A monitoring mask is applied at the beginning of processing to define the road area, which lets the algorithm look for vehicles in the selected area only. To identify the background, a foreground detection model based on an adaptive Gaussian mixture model is used. The learning rate, the threshold value for determining the background model, and the number of Gaussian modes are the key parameters of the foreground detection model, and they have to be configured according to the environment of the video. The background-subtraction approach is used to find the moving vehicles: a reference frame, identified as the background in the previous step, is subtracted from the current frame, and the resulting blobs are treated as vehicles. A blob is a collection of pixels, and the blob size has to be configured according to factors such as the angle of the camera to the road and the distance between the camera and the monitoring area. Even though a vehicle is identified in the above steps, a way of identifying each vehicle uniquely is needed to prevent duplicates from being processed in the next layer.
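A minimal sketch of the frame-differencing idea behind this background-subtraction step (using NumPy; the threshold and blob-size values are illustrative, not the paper's configuration, and a simple changed-pixel count stands in for real connected-component blobs):

```python
import numpy as np

def moving_pixels(current, background, threshold=30):
    """Boolean mask of pixels that differ from the reference background frame."""
    diff = np.abs(current.astype(int) - background.astype(int))
    return diff > threshold

def is_vehicle(mask, min_blob_size=50):
    """Keep a detection only if enough pixels changed (blob-size filter)."""
    return int(mask.sum()) >= min_blob_size

background = np.zeros((100, 100), dtype=np.uint8)   # empty road
frame = background.copy()
frame[40:50, 40:60] = 255        # a 10x20 bright "vehicle" enters the scene

mask = moving_pixels(frame, background)
print(is_vehicle(mask))          # True: 200 changed pixels >= 50
```

The real system updates the background adaptively (Gaussian mixture model) rather than using a fixed reference frame as here.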
As the final step of the first phase, a distinct number is generated using the Kalman filter for every vehicle detected in the previous steps. This distinct number identifies a particular vehicle until it leaves the global window. Then the second phase begins, isolating the license plate from the detected vehicle image. First, the input image is converted to grayscale to reduce the luminance of the colour image, and then it is dilated. Dilation is used to reduce noise, to fill unnecessary holes in the image, and to improve object boundaries by filling broken lines. Next, horizontal and vertical edge processing is carried out, and histograms are drawn for both. The histograms are used to detect the probable candidate regions where the license plate is located. The histogram values of edge processing can change drastically between consecutive columns and rows; these drastic changes are smoothed, and the unwanted regions are then detected from the low histogram values. By removing these unwanted regions, the candidate regions that may contain the license plate are identified. Since the license-plate region is expected to have a few closely spaced letters on a plain-coloured background, the region with the maximum histogram value is taken as the most probable candidate for the license plate. To demonstrate the algorithm, a prototype was developed using MATLAB R2014a, with the Image Acquisition Toolbox Support Package for OS Generic Video Interface, the Computer Vision System Toolbox, and the Image Acquisition Toolbox. When the prototype is used on a given video stream or file, first of all the parameters of the foreground detector and the blob size have to be configured according to the environment.
Then the monitoring window and the hardware configuration can be set. The prototype implementing the algorithm discussed in this paper was tested with both video footage and static vehicle images. These data were first grouped by factors such as the non-uniformity of number plates and the fitted angle of the camera. Vehicle detection showed an efficiency of around 85%, and license-plate locating an efficiency of around 60%; the algorithm therefore showed an overall efficiency of around 60%. The objective of this work is to develop an algorithm that can detect vehicle license plates from a video source file or stream. Since detecting vehicle license plates is crucial for some complex systems, the proposed algorithm fills that gap.

Item Resource Efficiency for Dedicated Protection in WDM Optical Networks (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Suthaharan, S.; Samarakkody, D.; Perera, W.A.S.C.

The ever-increasing demand for bandwidth is posing new challenges for transport network providers. A viable solution to meet this challenge is to use optical networks based on wavelength division multiplexing (WDM) technology. WDM divides the huge transmission bandwidth available on a fiber into several non-overlapping wavelength channels and enables simultaneous data transmission over these channels. WDM is similar to frequency division multiplexing (FDM); however, instead of taking place at radio frequencies (RF), WDM operates in the optical part of the electromagnetic spectrum. In this technique, optical signals with different wavelengths are combined, transmitted together, and separated again. A multiplexer at the transmitter joins the signals together, and a demultiplexer at the receiver splits them apart. It is mostly used in optical fiber communications to transmit data in several channels with slightly different wavelengths.
This technique enables bidirectional communication over one strand of fiber, as well as multiplication of capacity. In this way the transmission capacity of optical fiber links can be increased significantly, and so can efficiency. WDM systems expand the capacity of the network without laying more fiber. Failure of an optical fiber, such as a fiber cut, causes the loss of huge amounts of data and can interrupt communication services. There are several approaches to ensuring mesh fiber network survivability. In survivability, the path over which transmission actively takes place is called the working or primary path, whereas the path reserved for recovery is called the backup or secondary path. In this paper we consider the traditional dedicated protection method, in which backup paths are configured at the same time as the connections' primary paths are established. If a primary path is brought down by a failure, it is guaranteed that there will be resources available to recover, assuming the backup resources have not failed as well; traffic is therefore rerouted over the backup path with a short recovery time. In this paper, we investigate performance by calculating the variation in spectrum efficiency for traditional dedicated protection in WDM optical networks. To evaluate the pattern of spectrum efficiency, we use various network topologies in which the number of fiber links differs. Spectrum efficiency is the optimized use of spectrum or bandwidth so that the maximum amount of data can be transmitted with the fewest transmission errors. It is calculated by dividing the total traffic bit rate by the total spectrum used in the particular network. The total traffic bit rate is the data rate multiplied by the number of connections (lightpaths).
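This efficiency calculation, together with the total-spectrum definition that follows, can be sketched as below (the example figures are illustrative, not the paper's results):

```python
# Spectrum efficiency = total traffic bit rate / total spectrum used,
# where total traffic = data rate x number of lightpaths, and
# total spectrum = per-channel width x number of wavelengths used.

def spectrum_efficiency(data_rate_gbps, num_lightpaths,
                        channel_width_ghz, num_wavelengths):
    total_traffic = data_rate_gbps * num_lightpaths        # Gbit/s
    total_spectrum = channel_width_ghz * num_wavelengths   # GHz
    return total_traffic / total_spectrum                  # bit/s/Hz

# e.g. twenty 100 Gbps lightpaths over forty 50 GHz channels:
print(spectrum_efficiency(100, 20, 50, 40))  # 1.0
```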
The total spectrum is the frequency used for a single wavelength multiplied by the total number of wavelengths (bandwidth slots) used in the network. In this paper, we carry out the investigation with detailed simulation experiments on different single line rate (SLR) scenarios: 100 Gbps, 400 Gbps, and 1 Tbps. In addition, this paper focuses on four standard optical network topologies with different numbers of fiber links, to identify how the spectrum efficiency varies for each network. To evaluate performance, we considered the 21-link NFSNET, the 30-link Deutsche network, the 35-link Spanish network, and the 43-link US network as specimens. In our simulation study, the spectrum efficiency for each network is plotted in a separate graph, and the graphs are compared with each other. Our findings are as follows. (1) Spectrum efficiency for each SLR is almost similar and comparable across all the network topologies. (2) Spectrum efficiency is higher for topologies with a larger number of fiber links; that is, spectrum efficiency increases as the number of links increases.

Item Performing Iris Segmentation by Using Geodesic Active Contour (GAC) (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Yuan-Tsung Chang; Chih-Wen Ou; Jayasekara, J.M.N.D.B.; Yung-Hui Li

A novel iris segmentation technique based on active contours is proposed in this paper. Our approach addresses two important issues: pupil segmentation and iris circle calculation. If the correct centre position and radius of the pupil can be found in the test image, then the iris can be precisely segmented in the result. The final accuracy reaches around 92% on the ICE dataset, and a high accuracy of 79% is also achieved on UBIRIS.
Our results demonstrate that the proposed iris segmentation performs well, with high accuracy on iris images.

Item Animal Behavior Video Classification by Spatial LSTM (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Huy Q. Nguyen; Kasthuri Arachchi, S.P.; Maduranga, M.W.P.; Timothy K. Shih

Deep learning, which is the basis for building artificial intelligence systems, has become a very active research area in recent years. Current deep neural networks approach human-level recognition of natural images, even on huge datasets such as ImageNet. Among successful architectures, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are widely used to build complex models because of their advantages. A CNN reduces the number of parameters compared to a fully connected neural network. Furthermore, it learns spatial features by sharing weights across convolution patches, which not only helps to improve performance but also extracts similar features from the input. LSTM is an improvement over the vanilla recurrent neural network (RNN). When processing time-series data, RNN gradients tend to vanish during training with backpropagation through time (BPTT); LSTM was proposed to solve this vanishing-gradient problem and is therefore well suited to managing long-term dependencies. In other words, LSTM learns temporal features of time-series data. This study focused on creating an animal video dataset and investigating how a deep learning system learns features from it. We propose a new dataset and experiments using two types of spatial-temporal LSTM, which allow us to discover latent information in animal videos. To the best of our knowledge, no previous study has applied this method to animal activities. Our animal dataset was created under three conditions: the data must be videos, so that our network can learn spatial-temporal features; the subjects are popular animals such as cats and dogs, since it is easy to collect more data on them; and each video should contain one animal, without humans or any other moving objects. In our experiments, we performed the recognition task on the Animal Behavior Dataset with two types of models to investigate their differences. The first model is Conv-LSTM, an extended version of LSTM in which all input and output connections are replaced by convolutional connections. The second model is the Long-term Recurrent Convolutional Network (LRCN), proposed by Jeff Donahue. More layers of LSTM units can easily be added to both models to make a deeper network. We performed classification using 900 training and 90 testing videos and reached a recognition accuracy of 66.7%. We did not apply any data augmentation; in the future we hope to improve the accuracy using preprocessing steps such as flipping and rotating video clips, and by collecting more data for the dataset.

Item Low Cost Electronic Stethoscope (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Nilmini, K.A.C.; Illeperuma, G.D.

Among medical devices, the stethoscope is the most widely used device for medical diagnosis. Auscultation is a non-invasive, painless, quick procedure that can identify many symptoms and has been used since as early as the 18th century. However, the major drawbacks of the conventional stethoscope are its extremely low sound level and its inability to record or share heart and lung sounds. These problems can be overcome by an electronic stethoscope, which has the potential to save many lives. Although electronic stethoscopes are already available on the market, they are very expensive. The objective of this project was therefore to build a low-cost electronic stethoscope. At the basic level, it facilitates listening to heart sounds more clearly.
Other facilities include the ability to control the sound level, to record the sound and share it as digital information, and to display the sound as graphs for improved diagnostics. Recording and sharing facilities were included because of the importance of tracking a patient's medical history and of discussion among groups of physicians; they also facilitate remote diagnostics where experts may not be readily available. The 50 mm chest piece of an acoustic stethoscope was used because of its optimized design. The chest piece's diaphragm was placed against the patient's chest to capture heart sounds, which were converted to electrical signals by a microphone. An electret condenser microphone was selected over other types of microphone because of its small size (3 mm radius) and its ability to detect low-frequency sounds (~30 Hz). The electrical signals were amplified by a preamplifier built around a TL072 integrated circuit, which provided a gain of 3.8. The output of the preamplifier was sent to a Sallen-Key low-pass filter circuit, which passed the first heart sound (S1, 30 Hz to 45 Hz) and the second heart sound (S2, 50 Hz to 70 Hz). Filtering was done by setting the cut-off frequency to about 100 Hz, determined by a 0.047 μF capacitor and a 33 kΩ resistor. Taking advantage of the TL072 containing dual operational amplifiers on a single die, the second operational amplifier was used for the filter circuit. The filter output was amplified to an appropriate amplitude for headphones and speakers by an audio power amplifier based on the LM386 integrated circuit, which provided a gain of about 20. Speakers and headphones were used as outputs, and any standard 3.5 mm headphones can be used. The constructed circuit was validated by comparing the original heart sound with the amplified output on a digital oscilloscope.
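The stated cut-off frequency can be checked numerically from the standard relation f_c = 1/(2πRC), which applies to the equal-component Sallen-Key case, with the component values given above:

```python
import math

# Cut-off check for the low-pass filter: R = 33 kOhm, C = 0.047 uF.
R = 33e3        # ohms
C = 0.047e-6    # farads
f_c = 1 / (2 * math.pi * R * C)
print(round(f_c, 1))   # 102.6 (Hz), close to the stated 100 Hz
```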
Once the implementation was complete, its sound quality was compared against an acoustic stethoscope by six independent observers; five of them heard the heart sounds more clearly with the electronic stethoscope than with the acoustic one. The accuracy of the heart sound was confirmed by a person with a thorough knowledge of anatomy. Recording was provided through the open-source software Audacity, using the computer's audio card to capture the sound. The saved heart-sound file can be used in several ways: it can be stored in a database, shared via e-mail, and played back for further examination during diagnosis. Heart sounds were visualized as a graph on the computer. An Arduino was used to digitize the audio signal (1024-level resolution) and send the data through a virtual COM port to the computer for graphing; it can also record sounds to an SD card when a computer is not available. Similar sound quality was found when comparing direct listening with a recorded sound. In conclusion, the implemented system was considered a success owing to its low cost, ease of implementation, and ability to provide the most useful functions required of an electronic stethoscope.

Item Smart Meter - Multifunctional Electricity Usage Meter (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Bandara, K.R.S.N.; Weerakoon, W.A.C.

The Internet of Things (IoT) is a modern concept that offers a new approach to connecting people with people, and people with devices, via the Internet. It provides solutions to many practical problems, such as connecting people with each other easily, remote control, and easy management of people or devices. Combined with other technologies, the concept becomes even more useful. One such modern technology is multi-agent technology; connecting these two technologies enables powerful solutions to many human problems.
Software agents are computer programs trained for particular tasks under varying environmental conditions; they can act autonomously in response to sudden changes in an artificial environment. A multi-agent system (MAS) is a collection of software agents operating in an artificial environment. Applying IoT and MAS together is a good way to create solutions to major problems. One such problem is uncontrolled power usage, and the Smart Meter (SM), which integrates both IoT and MAS concepts, is a solution to it. Electricity is the major form of energy used for almost everything in the modern world, so it plays a major role in industry as well as in the home; more than 50% of households use electricity as their primary power source. With heavy electricity usage, wastage also rises. This wastage strains household economies, so people need a better way to eliminate it, and it also puts the world at risk: because the resources used to generate electricity are finite, wastage hastens their depletion. The key to acting on this issue is that people must recognize the problem and act themselves, so their wastage must be presented to them in a way that makes the problem tangible. Presenting current usage in a representative manner, and predicting future usage from past consumption details, makes it much easier for people to understand how they must act, for themselves as well as for the world. Implementing methods to act according to the resulting plan is an important component of this concept, and remote access and automatic control add further value, providing a better and easier way to eliminate wastage. Developing countries struggle to eliminate electricity wastage because they spend a large portion of their economies on generating electricity.
If a wastage-elimination plan is expensive, it is not feasible for those countries; they need inexpensive power-saving equipment. The Smart Meter is such equipment, developed using low-cost components such as the Arduino and the Raspberry Pi. The complete SM system contains three parts. The physical system, connected to the home electricity system, collects consumption data for areas of the home; it contains microcontrollers, sensors, and so on. The processing unit contains the multi-agents and the other software used to control the microcontrollers; it is the core of the SM, performing all calculations, analytical processing, and report generation. The UI component displays the analytical results generated by the processing unit and lets the user control the electricity system remotely. All three units of the SM can be built cost-effectively, which is appropriate for developing countries. In Sri Lanka, another major issue is the lack of a good connection between domestic systems and the service provider, which therefore collects consumption details manually; as a result, it takes the provider more time to process consumption data and produce household analysis reports. The SM is a better way of collecting consumption details and analyzing household consumption data so as to motivate people to save power, because the SM predicts future consumption and identifies weaknesses in the user's power usage. The SM uses the analytical language R in its core module, programmed to forecast the user's future consumption and present it in an understandable manner. The SM concept can be extended to industrial users and to power grids at the area or national level, giving a new interface to the power-grid concept.
This will lead the whole country to stand together for power saving. The SM is a MAS integrated with the IoT concept to achieve the above tasks, implemented using the MadKit agent platform and the Java language. Each software agent is assigned a task, and the agents work together to realize one major task. All devices connected to the central system communicate with each other as well as with the user, realizing the IoT concept. Together, these two technologies form a complete solution to electricity wastage.

Item End-user Enable Database Design and Development Automation (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Uduwela, W.C.; Wijayarathna, G.

An Information System (IS) is a combination of software, hardware, and network components working together to collect, process, create, and distribute data for business operations. It consists of "update forms" to collect data, "reports" to distribute data, and "databases" to store data. IS plays a major role in many businesses because it improves business competitiveness. Although SMEs are interested in adopting IS, they are often held back by other factors: time, the underlying cost, and the availability of ICT experts. The ideal solution for them is therefore to automate the process of IS design and development, without requiring ICT expertise, at an affordable cost. Software tools are available on the Web to generate the "update forms" and "reports" automatically for a given database model; however, there is no approach to generating the databases of an IS automatically. The relational database model (RDBM) is the most commonly used database model in IS because of its advantages over other data models. These advantages stem from the model's design, but it is not a natural way of representing data. The model is a collection of data organized into multiple tables/relations linked to one another by key fields.
These links represent the associations between relations. Typically, tables/relations represent entities in the domain. A table/relation has columns and rows, where the columns represent the attributes of the entity and the rows represent the records (data). Each row in a table should have a key that identifies it uniquely. Designers have to identify these elements from the given data requirements during RDBM design, which is difficult for non-technical people. The RDBM design process has a few steps: collect the data requirements, develop the conceptual model, develop the logical model, and convert it to the physical model. Although there are approaches that automate some steps of RDBM design and development, they too require technical support. Thus, a mechanism is needed that automates the database design and development process while overcoming the difficulties in existing RDBM automation approaches, so that non-technical end-users can develop their databases by themselves. A comprehensive literature survey was therefore conducted to analyze the feasibility and difficulties of automating RDBM design and development. Uduwela et al. argue that the "form" is the best way to collect the data requirements of the database model for automation, because a form is semi-structured, unlike natural language (the most common way of presenting data requirements), and is very close to the underlying database. Approaches are available to automate the development of the conceptual model from the given data requirements. This is the most critical step in the RDBM design process, because it must identify the elements of the model: the entities, their attributes, the relationships among entities, the keys, and the cardinalities.
Form-based approaches were analyzed using the data available in the literature to identify the places where user intervention is needed. The analysis shows that all the approaches need user support, and corrections to their output are required, because the elements are not consistent across business domains: they differ from domain to domain and even within the same domain. Further, the approaches demand user support to prepare the initial input from the data requirements (the set of forms) used to identify the elements of the conceptual model. The next step of the process is developing the logical model from the conceptual model. The outcome of the logical model should be a normalized database, which eliminates data insertion, update, and deletion anomalies by reducing data redundancy. Data redundancy is often caused by functional dependencies (FDs), which are sets of constraints between two sets of attributes in a relation. The database can be normalized by removing undesirable FDs (partial dependencies and transitive dependencies). We could not identify any approach that generates a normalized database design automatically and directly from the data requirements; existing approaches require the FDs as input to generate the normalized RDBM. Identifying the correct FDs demands great perception and skill from designers, because the FDs also depend on the domain, which is a problem for automation. FDs can be discovered by data mining, but that too yields an incorrect set of FDs if the data combinations are insufficient. Developing the physical model from the logical model is straightforward, and relational database management systems help to automate it. From this analysis it can be concluded that the existing approaches to conceptual-model development cannot produce accurate models, since a distinct model has to be developed for each problem; normalization approaches likewise cannot be fully automated, since FDs vary among business domains and even within the same domain.
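The notion of a functional dependency discussed above can be illustrated with a small check over sample rows (the table and attribute names here are hypothetical): an FD A → B holds if no value of A maps to two different values of B.

```python
# Check whether the functional dependency lhs -> rhs holds in sample rows.
def fd_holds(rows, lhs, rhs):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[b] for b in rhs)
        if seen.setdefault(key, val) != val:
            return False          # same determinant, different dependent value
    return True

orders = [
    {"order_id": 1, "customer": "Amal",  "city": "Kandy"},
    {"order_id": 2, "customer": "Amal",  "city": "Kandy"},
    {"order_id": 3, "customer": "Nimal", "city": "Galle"},
]
print(fd_holds(orders, ["customer"], ["city"]))   # True
print(fd_holds(orders, ["city"], ["order_id"]))   # False
```

As the abstract notes, such data-driven checks can report spurious FDs when the sample contains too few value combinations, which is why automation remains difficult.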
This leads to the conclusion that there should be a database model that can be designed and developed by end-users without any expert knowledge. The proposed model should be neither domain-specific nor problem-specific. It would be better if the approach could convert the data requirements into the database model directly, without intermediate steps as in the RDBM design process. Further, it would be better if the proposed model could run on existing database management systems too. Item Analysis of Emotional Speech Recognition Using Artificial Neural Network (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Archana, A.F.C.; Thirukumaran, S. This paper presents an artificial neural network based approach for analyzing the classification of emotional human speech. Speech rate and energy are the most basic features of a speech signal, but they still show significant differences between emotions such as angry and sad. The pitch feature is frequently used in this work, and the auto-correlation method is used to detect the pitch in each of the frames. The speech samples used for the simulations are taken from the dataset Emotional Prosody Speech and Transcripts in the Linguistic Data Consortium (LDC). The LDC database has a set of acted emotional speeches voiced by males and females. The speech samples of only four emotion categories in the LDC database, containing both male and female emotional speech, are used for the simulation. In the speech pre-processing phase, samples of four basic types of emotional speech are used: sad, angry, happy, and neutral. Important features related to different emotional states are extracted to recognize emotions from the voice signal; those features are then fed into the input end of a classifier, which yields the different emotions at the output end. Analog speech signal samples are converted to digital signals to perform the pre-processing.
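The auto-correlation pitch detection mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the frame length, sample rate and pitch search range are assumed values.

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate the pitch (Hz) of one frame from its autocorrelation peak."""
    frame = frame - np.mean(frame)
    # Keep only non-negative lags of the full autocorrelation.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lag range for 60-400 Hz
    lag = lo + int(np.argmax(ac[lo:hi]))      # strongest periodicity
    return fs / lag

fs = 8000
t = np.arange(int(0.03 * fs)) / fs            # one 30 ms frame
frame = np.sin(2 * np.pi * 200 * t)           # synthetic 200 Hz tone
print(round(pitch_autocorr(frame, fs)))       # close to 200
```

In a full pipeline, the per-frame pitch values would join the other short-term features before statistics such as mean and variance are computed.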
Normalized speech signals are segmented into frames so that the speech signal maintains its characteristics over short durations. 23 short-term audio signal features from 40 samples are selected and extracted from the speech signals to analyze the human emotions. Statistical values such as mean and variance are derived from the features. These derived data, along with their related emotion targets, are used to train and test the artificial neural network that makes up the classifier. A neural network pattern recognition algorithm has been used to train and test the data and to perform the classification. The confusion matrix is generated to analyze the performance results. The accuracy of the neural network based approach to recognizing emotions improves with repeated training. The overall correct classification rate for a network trained twice is 73.8%, whereas it is 83.8% when the number of training runs is increased to ten. The overall system provides reliable performance and correctly classifies more than 80% of emotions once properly trained. Item Applying Smart User Interaction to a Log Analysis Tool (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Semini, K.A.H.; Wijegunasekara, M.C. A log file is a text file. People analyze log files to monitor system health, detect undesired behaviors in the system, recognize power failures and so on. In general, these log files are analyzed manually using log analysis tools such as Splunk, Loggly and LogRhythm. All of these tools analyze the log file and then generate reports and graphs that represent the analyzed log data. Analyzing log files can be divided into two categories: analyzing history log files and analyzing online log files. In this work, only the analysis of history log files is considered, for an existing log file analysis framework. Most log analysis tools have a feature for analyzing history log files.
To analyze a log file using any of the mentioned tools, it is necessary to select a time period. For example, if a user analyzes a log file with respect to system health, the analyzed log file specifies the system health as 'good' or 'bad' only for the given time period. In general, these analysis tools provide average system health for a given time period. This analysis is good, but sometimes it may not be sufficient, as people may need to know exactly what happens in the system each second to predict the future behavior of the system or to make decisions. With these tools, it is not possible to examine the log file at that level of detail; to do such analysis, a user has to go through the log file line by line manually. As a solution to this problem, this paper describes a new smart concept for log file analysis tools. The concept works through a set of widgets and can replay executed log files. First, the existing log file analysis framework was analyzed. This helped in understanding the data structure of the received log files. Next, log file analysis tools were studied to identify the main components and the features people like most. The new smart concept was designed using a replayable widget and graph widgets. The replayable widget is used to replay the input log file, and the graph widgets graphically represent the analyzed log data. The replayable widget is the main part of the project and embodies the new concept. It is a simple widget that acts just like a player. It has two components: a window and a button panel. The window shows the input log file, and the button panel contains play, forward, backward, stop and pause buttons. The log lines shown in the window of the replayable widget are held in a tree structure (Figure 1: left-most widget). The button panel contains an extra button to view the log lines. These buttons are used to play the log lines, go to a requested log line, view the log line and control the playing of log lines.
It was important to select a suitable chart library to design the graph widgets. A number of chart libraries were analyzed, and D3.js was finally selected because it provides its chart source code, a free version without watermarks, and more than 200 chart types. It has a number of chart features and also supports HTML5-based implementations. The following charts were implemented using the D3.js chart library: a bar chart of the pass/failure count, a time line of when passes/failures occur, a donut chart of the total execution count, and a donut chart of the total pass/fail count. Every graph widget is bound to the replayable widget, so that updates occur according to each action. The replayable widget and the graph widgets are implemented using D3.js, JavaScript, jQuery, CSS and HTML5. The replayable widget was successfully tested, and the implemented interface runs successfully in the Google Chrome web browser. Figure 1 shows a sample interface of the design, generated using a sample log file of about 100 log lines. The left-most widget is the replayable widget, which holds the considered log file as a tree structure. The top-right widget is a graph widget, a bar chart showing the pass/failure count, and the bottom-right widget is another graph widget, a time line showing the times at which passes/failures occurred for the given log file. In addition, the analyzed log file can also be visualized using donut charts. This paper described the new smart concept for log file analysis tools. The existing analysis tools mentioned above do not contain this concept. Most log file analysis tools use graphs for data visualization. This system was successfully implemented and was evaluated by a number of users who work with log files.
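The behavior of the replayable widget's button panel can be illustrated with a minimal, hypothetical state machine. It is shown here in Python for brevity rather than the JavaScript/D3.js of the actual implementation; the method names simply mirror the buttons described above.

```python
# Conceptual sketch of the replayable widget's core: a cursor over the
# log lines plus play/pause state, independent of any front end.

class LogReplayer:
    def __init__(self, log_lines):
        self.lines = list(log_lines)
        self.pos = 0              # index of the current log line
        self.playing = False

    def play(self):
        self.playing = True

    def pause(self):
        self.playing = False

    def forward(self):
        if self.pos < len(self.lines) - 1:
            self.pos += 1
        return self.current()

    def backward(self):
        if self.pos > 0:
            self.pos -= 1
        return self.current()

    def current(self):
        return self.lines[self.pos]

player = LogReplayer(["boot ok", "disk warn", "net fail"])
player.play()
print(player.forward())   # "disk warn"
```

In the real widget, each cursor move would also notify the bound graph widgets so the bar chart and time line update with every action.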
This new concept will help log analysts, system analysts, data security teams as well as top-level management to derive decisions about the system and make predictions by analyzing the widgets. Furthermore, the analyzed data would be useful for collecting non-trivial data for data mining and machine learning procedures. As future work, the system could be enhanced with features such as zooming and a drill-down method to customize graphs, and a mechanism to filter data according to user requirements. Item An Emotion-Aware Music Playlist Generator for Music Therapy (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Dissanayaka, D.M.M.T.; Liyanage, S.R. Music has the ability to influence both mental and physical health. Music therapy is the application of music for the rehabilitation of brain activity and the maintenance of both mental and physical health. Music therapy comes in two different forms: active and receptive. Receptive therapy takes place by having the patient listen to suitable music tracks. Normally music therapy is used by people who suffer from disabilities or mental ailments, but the healing benefits of music can be experienced by anyone at any age. This research proposes an Android music application with an auto-generated playlist based on its user's emotional status, which can be used in telemedicine as well as in day-to-day life. Three categories of emotional conditions, happy, sad and angry, were considered in this study. Live images of the user are captured from an Android device. The Android face detection API available in the Android platform is used to detect human faces and eye positions. After the face is detected, the face area is cropped. The image is grey-scaled and converted to a standard size in order to reduce noise and compress the image size. The image is then sent to the MATLAB-based image-recognition sub-system over a client-server socket connection.
A Gaussian filter is used to further reduce noise in order to maintain high application accuracy. Edges of the image are detected using Canny edge detection to identify the details of the facial features. The resulting images appear as a set of connected curves that indicate the surface boundaries. Emotion recognition is carried out using training datasets of happy, sad and angry images that are input to the emotion recognition sub-system implemented in MATLAB. Emotion recognition was carried out using Eigenface-based pattern recognition. To create the Eigenfaces, average faces of the three categories are created by averaging the database images in each category pixel by pixel. Each database image is subtracted from the average image to obtain the differences between the images in the dataset and the average face. Each image is then formed into a column vector. The covariance matrix is calculated to find the eigenvectors and their associated eigenvalues, and then the weights of the Eigenfaces are calculated. To find the matching emotional label, the Euclidean distance between the weights is calculated for each category. By comparing the obtained Euclidean distances of the input image with each category, the class with the lowest distance is identified. The identified label (sad, angry, or happy) is then sent back by the emotion recognition sub-system. Songs that are pre-categorised as happy, sad and angry are stored in the Android application. When the emotional label of the perceived face image is received, songs relevant to that label are loaded into the Android music player. 200 face images were collected at the University of Kelaniya for validation. Another 100 happy, 100 sad and 100 angry images were collected for testing. Out of the 100 test cases with happy faces, 70 were detected as happy; out of the 100 sad faces, 61 were detected as sad; and out of the 100 angry faces, 67 were successfully detected.
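The Eigenface matching step described above can be sketched in plain NumPy. This is a hedged illustration, not the authors' MATLAB code: tiny 4-pixel vectors stand in for real face crops, and two toy classes replace the happy/sad/angry training sets.

```python
import numpy as np

def eigenfaces(images, k=2):
    """Column-vectorised images -> mean face and top-k eigenfaces."""
    X = np.stack([img.ravel() for img in images]).T      # pixels x samples
    mean = X.mean(axis=1, keepdims=True)
    A = X - mean                                         # differences from mean
    # Eigenvectors of the covariance matrix, via SVD of the centred data.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return mean, U[:, :k]

def weights(img, mean, U):
    """Project one image onto the eigenfaces."""
    return U.T @ (img.ravel()[:, None] - mean)

# Two toy classes ("happy" bright, "sad" dark), two samples each.
happy = [np.array([0.9, 0.8, 0.9, 0.7]), np.array([0.8, 0.9, 0.8, 0.9])]
sad = [np.array([0.1, 0.2, 0.1, 0.3]), np.array([0.2, 0.1, 0.2, 0.1])]
mean, U = eigenfaces(happy + sad)
classes = {"happy": np.mean([weights(i, mean, U) for i in happy], axis=0),
           "sad": np.mean([weights(i, mean, U) for i in sad], axis=0)}

probe = np.array([0.85, 0.8, 0.9, 0.8])                  # an unseen "happy" face
w = weights(probe, mean, U)
# Pick the class whose mean weight vector is nearest in Euclidean distance.
label = min(classes, key=lambda c: np.linalg.norm(w - classes[c]))
print(label)
```

The same nearest-mean-in-weight-space decision rule scales to real images once the vectors hold actual pixel intensities.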
The overall accuracy of the developed system for the 300 test cases was 66%. This concept can be extended for use in telemedicine, and the system has to be made more robust to noise, different poses, and structural components. The system can be extended to include other emotions that are recognizable via facial expressions. Item Augmented Reality and its possibilities in Agriculture (In Sri Lankan Context) (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Musfira, F.A.; Linosh, N.E. Since Sri Lanka is an agro country, its economy is mostly based on agriculture, agro-based industries, animal husbandry and other agriculture-based businesses. In Sri Lanka, agriculture continues to be the major occupation and way of life for more than half of the total population. Since information technology and the Internet have recently become an essential part of business processes, they have considerable potential for use in the development of agriculture. Nowadays Internet-based applications are more and more successful in agriculture and different parts of the food industry. In the information technology field, the emerging trend is Augmented Reality (AR). The field of AR has existed for just over a decade, but the growth and rapid progress in the past few years has been remarkable. The basic goal of an AR system is to enhance the user's perception of, and interaction with, the real world through virtual objects. There are several application areas, such as extension services, precision agriculture, e-commerce and information services, where augmented reality can be used. Agriculture, however, is an area where advanced technology is always introduced with a delay. This research analyzes how augmented reality can be used in agriculture.
Certain applications of AR in agriculture are already present in European countries, but it is still in its infancy in Asian countries, especially in South Asian countries. In Sri Lanka, many opportunities to use these techniques in agriculture can be foreseen. The following are some possible applications of AR in agriculture. The first is research: in Sri Lanka, many agricultural research centers exist, such as the Sri Lanka Council for Agricultural Research, the Gannoruwa Agricultural Complex and the Agricultural Research and Development Centre, where enrichment of an image becomes necessary. Here AR can be used to add dimension lines, coordinate systems, descriptions and other virtual objects to improve the investigation and analysis of captured images. Another area that AR will probably reach in the near future is the cabin of modern agricultural machinery and mobile machinery. Some components of such a system already exist and are being used in the form of simple displays that show GPS data. Adding large displays or special glasses, on which computer-drawn lines showing the way of passage or plot boundaries are superimposed over the image of the field, is a logical development of existing solutions. The third area is animal husbandry and farming. An AR system can be developed and installed in the farms of Sri Lanka for farm monitoring. Suitable software may allow identifying individual animals on the screen, with simultaneous display of the relevant information about them, such as newly inserted data and information about the health status of farm animals. Finally, in crop production it is possible to identify plants with a camera and an appropriate AR system. This gives the ability to detect pests and to plan appropriate protective procedures. While studying the use of Augmented Reality technology in agriculture, it can be concluded that different types of services offer different possibilities.
Mobile systems develop very dynamically, both in terms of data transmission speed and services. New devices like tablets and new services like cloud computing and Near Field Communication (NFC) have great potential in agriculture. Augmented Reality can be allied with all of these technologies, expanding the possibility of evolving towards a new era in Sri Lankan agro farms. However, the assessment of the topic must not be made only on the basis of the technology, taken out of its environment, since the whole area is very complex. This paper therefore focuses on finding and analyzing what Augmented Reality is and tries to highlight its possibilities in agriculture. Item Context-Aware Multimedia Services in Smart Homes (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Chih-Lin Hu; Kasthuri Arachchi, S.P.; Wimaladharma, S.T.C.I. The evolution of "smart home" technologies exposes a broad spectrum of modern personal computers (PCs), consumer electronics (CE), household appliances and mobile devices for intelligent control and services in residential environments. With the high penetration of broadband access networks, the PC, CE and mobile device categories can be connected on home networks, providing a home computing context for novel service design and deployment. However, conventional home services are characterized by different operations and interactive usages among family members in different zones inside a house. It is promising to realize user-oriented and location-free home services with modern home-networked devices in smart home environments. As its contribution, this paper proposes a reference design for a novel context-aware multimedia system in home-based computing networks. The proposed system integrates two major functional mechanisms: an intelligent media content distribution mechanism and a multimedia convergence mechanism.
The first mechanism performs intelligent control of services and media devices in a context-aware manner. It integrates face recognition functions into home-based media content distribution services. Devices capable of capturing images can recognize the appearances of registered users and infer changes in their location inside the house. Media content played at the previous location can thus be distributed to the home-networked devices closest to the users' current locations. The second mechanism offers multimedia convergence among multiple media channels and renders a uniform presentation of media content services to users in residential environments. It can provide not only local media files and streams from various devices on a home network but also Internet media content that can be fetched, transported and played online on multiple home-networked devices. Thus, the multimedia convergence mechanism can introduce an unlimited volume of media content from the Internet to a home network. The development of the context-aware multimedia system can be described as follows. A conceptual system playground in a home network contains several Universal Plug and Play (UPnP) home-networked devices that are inter-connected on a single administrative network based on Ethernet or Wi-Fi infrastructure. According to the UPnP specifications, home-networked devices are assigned IP addresses using auto-IP configuration or the DHCP protocol. Then, UPnP-compatible devices can advertise their presence on the network. When neighboring devices discover them, they can collaborate on media content sharing services in the network. In addition, some UPnP-compatible devices are capable of face recognition, capturing the frontal images of users inside the house. The captured images can be sent to a user database and compared with existing user profiles corresponding to individuals in the family community.
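The UPnP discovery step mentioned above can be illustrated by constructing the SSDP M-SEARCH datagram that a control point multicasts to 239.255.255.250:1900. Only message construction is shown, since actually sending it (with `socket.sendto`) needs a live network; the search target below is an example value.

```python
# Sketch of building an SSDP M-SEARCH request, the message a UPnP
# control point multicasts so that devices advertise themselves.

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def msearch(search_target="upnp:rootdevice", mx=2):
    """Build an SSDP M-SEARCH datagram for the given search target."""
    return "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",                      # seconds a device may delay its reply
        f"ST: {search_target}",
        "", "",                           # terminating blank line
    ]).encode()

msg = msearch("urn:schemas-upnp-org:device:MediaRenderer:1")
print(msg.decode().splitlines()[0])   # M-SEARCH * HTTP/1.1
```

Devices matching the ST header reply with unicast HTTP responses carrying their description URLs, which is how renderers and media servers find each other before sharing content.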
After a registered user is recognized, the system can refer to the stored details of this particular user and then offer personal media services in a smart manner. The components and functionalities of the proposed system support the intelligent media content distribution and multimedia convergence mechanisms. Technically, the proposed system combines several components, such as a UPnP control point, a UPnP media renderer, a converged media proxy server, an image detector, and a profile database of registered users and family communities. Though there are diverse media sources and formats in a home network, users retain the same operational behavior for sharing and playing media content, according to common UPnP and Digital Living Network Alliance (DLNA) guidelines. Prototype development produced proof-of-concept software based on the Android SDK and JVM frameworks, which integrates intelligent media content distribution and converged media services. The resulting software is platform-independent and application-level. It can be deployed on various home-networked devices that are compatible with UPnP standard device profiles, e.g., UPnP AV media servers, media players, and mobile phones. A real demonstration has been conducted with the software implementation running on various off-the-shelf home-networked devices. Therefore, the proposed system is able to offer a friendly user experience for context-aware multimedia services in residential environments. Item Intelligent Recruitment Management Engine (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Chiththananda, H.K.I.C.L.; Perera, T.D.; Rathnayake, L.M.; Mahanama, M.G.G.D.D.P.; Prabashana, P.M.P.; Dias, D.P.N.P. A system that can be used to automate the recruitment process of an organization is proposed in this paper.
The system is designed targeting the human resource department of an organization in order to simplify the massive process of extracting data from a large number of curricula vitae (CVs), to reduce the cost and time that have to be spent on the interview process, and to allocate the most suitable interviewers from the organization for each interview by analyzing past data. Information extraction is used in order to retrieve data from CVs as well as from cover letters. An ontology map is created to analyze and categorize the keywords extracted by the system. The CVs are then sorted and prioritized according to the requirements of the organization. We propose a prediction component, embedded in the main system, that predicts future values such as the cost and time of the recruitment process of a particular organization by analyzing past records. Hence we believe that this system will enhance the efficiency and effectiveness of the recruitment process of any organization. Item Students' Perspective on Using the Audio-visual Aids to Teach English Literature and Its Effectiveness (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Wijekoon, W.M.S.N.K. The field of education is renewing itself every second, and Human Computer Interaction plays a vital role in it. Thus, government authorities have paid more attention to this aspect in order to provide a quality education. According to reports published by the Ministry of Education, the government has conducted training sessions, workshops and seminars on using modern technology, including modern audio and visual aids, all around the country. Yet most teachers of English Literature still do not use them in the classroom, and students learn the subject in a conventional classroom environment.
In this respect, this study explores how effective the use of audio-visual aids is for teaching English Literature, a subject considered traditional, in order to enhance students' literary competence, as well as the students' perspective on using audio-visual aids to teach the subject. As the sample for the study, forty-five Grade Ten students who learn English Literature as an optional subject for the GCE Ordinary Level Examination were selected from four government schools in the Kandy and Matale districts. Data was collected through a questionnaire and participant observation. Through the questionnaire, students' preference for the subject and their views on teaching methods with and without modern audio-visual aids were studied. Learning behavior and student involvement with and without the audio-visual aids were studied through participant observation. The qualitative analysis of the data revealed that there is high involvement of the students when they learn this subject with modern audio-visual aids. The quantitative data analysis provides initial evidence that the teachers' conventional teaching process is less productive and contributes less toward reaching the expected goals of teaching and learning English Literature. The findings suggest that it is necessary to implement this pedagogical tool to teach English Literature, as it has the ability to create a highly constructive learning environment. Item Cost Effective High Availability Transparent Web Caching with Content Filtering for University of Kelaniya, Sri Lanka (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Pathirana, T.; Fernando, S.; Gunasekara, H. The rapid growth of Internet usage at the University of Kelaniya, together with the concept of "Bring Your Own Device", has increased the issues with traditional proxy systems.
The key problem is to introduce a suitable web caching system with content filtering that enables end users to access the Internet without configuring proxy server details on their devices. This study analyses the network flow of the University of Kelaniya and introduces a transparent system that caches and filters content according to the university's existing policies. The implementation should be a cost-effective, high-availability caching mechanism that allows users to browse the Internet without changing their browser settings. It introduces the free and open-source proxy system Squid and a content-filtering system, DansGuardian, on two dual-NIC Linux boxes running the Ubuntu operating system, placed between the Local Area Network and the firewall. Squid is a FOSS proxy widely used in the community as a traditional proxy provider. In this scenario, Squid is configured as a transparent proxy listening on port 3128; using Linux iptables, all HTTP traffic arriving on the LAN-side interface is redirected to port 8080. The default gateway for the servers is the firewall, while all internal subnets are routed to the LAN L3 devices by the servers. Between the L3 device and the servers, load balancing is done based on port grouping. Before traffic is forwarded and cached according to Squid's rules, it is checked against the content-filtering policies of DansGuardian, which listens on port 8080. Once content filtering is done, the response is sent to the requester. End users are configured with DHCP and with no-proxy browser settings, and therefore they may not notice any traditional proxy, as all caching and filtering is transparent to them. After testing and fine-tuning with wireless users for two months, the system was integrated for the whole network. As an enabler of BYOD, removing the existing proxy settings allowed any authorized user to access the Internet through the local network.
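The interception chain described above can be sketched as a configuration fragment. This is a hypothetical sketch, not the authors' configuration: the interface name (eth0), loopback proxy address and file excerpts are assumptions, and newer Squid releases spell the transparent mode `intercept` rather than `transparent`.

```shell
# /etc/squid/squid.conf (excerpt) -- Squid accepts intercepted traffic:
#   http_port 3128 transparent      # "intercept" on newer Squid

# dansguardian.conf (excerpt) -- DansGuardian filters, then hands off to Squid:
#   filterport = 8080
#   proxyip = 127.0.0.1
#   proxyport = 3128

# iptables: redirect port-80 traffic arriving on the LAN-side interface
# to DansGuardian on 8080, so clients need no browser proxy settings.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-port 8080
```

The same limitation the paper notes applies here: the REDIRECT rule only matches plain HTTP on port 80, so encrypted HTTPS traffic bypasses the cache and filter.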
The number of detected end computers rose drastically, and therefore the need for high bandwidth also increased. Analysing loading times and bandwidth peaks confirmed that the system was stable. Subscribed Internet use rose to 100% at peak times and more than 50% off-peak, compared with the 80% and 10% recorded for the traditional proxy. User comments were also more positive than for the previous system, as users can now bring their devices and browse without consulting the IT helpdesk for proxy settings. The implementation of the transparent proxy at the University of Kelaniya was the first long-term transparent proxy installation in a Sri Lankan university, and it influenced other institutes to adopt the concept. The only downfall is that the implemented system cannot inspect or cache HTTPS traffic, which is encrypted. Web caching and content filtering are crucial when it comes to network bandwidth considerations, and in a university they must be implemented while preserving the advantages for education. The implemented system is a cost-effective and reliable solution to the problem in government and educational settings, and allows any authorized user to access the network with their own device without any major changes. Item The Staff Perception on the Effect of Virtual Learning Environment in Distance Education (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Gamage, A.W.M. Technology has removed distance barriers and has given many opportunities for education through human computer interaction. Today, the impact of technology has dramatically changed the distance education system, and many educational institutions are looking for different methods to expand their existing practices with regard to technology in order to create a student-centered environment (Florida Virtual School, 2006).
In order to respond to learners' requirements and overcome the geographical challenges of students spread across Sri Lanka, most Sri Lankan universities have introduced Virtual Learning Environment (VLE) systems for distance learners. The aim of this research paper is to investigate how the usage of the VLE contributes to the academic improvement of distance learners. Thus, this paper explores how the way VLEs are developed, and the ways teachers make use of them, affect learners' performance, and whether teachers are satisfied with the performance of distance learners. For this qualitative study, two Sri Lankan government universities from the Western Province were selected. The sample consists of 10 staff members from the two universities where the VLE is used for teaching distance learners. The research used both semi-structured and structured interviews to collect data. The data was collected via both primary and secondary sources: the interviews were used as the primary source, while websites and records of relevant institutions were used as secondary sources. The collected data was analyzed by considering the staff's perspective on the contribution of the VLE to distance education. The results of the research bring out various aspects of staff perceptions of the contribution, effectiveness, usefulness and convenience of the Virtual Learning Environment; in particular, they show that the VLE has a significant effect on the pass rate of all subjects in distance education. Item An Automated Solution for the Postal Service in Sri Lanka (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Piyawardana, M.D.S.N.; Wikcramarathna, R.M.D.; Vidanagama, D.U. The significant development in the field of e-commerce is becoming the only option for many business activities.
Even though most businesses use numerous technological advancements in the modern world, the postal centers in Sri Lanka still operate manually and face abundant challenges in dealing with postal activities such as parcel handling and selling postal products. The primary reason for this is the lack of human resources in the postal centers. The main objective of this research is to implement an automated solution to overcome the current loose ends in the traditional postal service in Sri Lanka. Based on requirements and feasibility analysis, the system mainly covers mail/parcel scheduling and a domestic tracking system that lets customers track the mail or packages they post through the postal service. An online shopping-cart function would allow service communication locales (Post Shops) to sell their products, and an e-postcard creator function would allow postcards to be sent via the Internet. Customers would be assured of secure online transactions through the online payment gateway. The research strategy shows the importance of automation for the Sri Lankan postal sector. The research therefore aims to address almost all the activities related to the postal sector in a user-friendly and accurate manner. Item Right to Privacy in Cyberspace: Comparative Perspectives from Sri Lanka and other Jurisdictions (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Madugalla, K.K. The right to privacy in cyberspace could be considered a critical issue with many implications for individual liberty in the modern world. In view of the incidence of violations of the right to privacy through abuse of personal information in cyberspace, it becomes necessary to explore the legal mechanisms that have been employed to address this issue and ensure the right to privacy in cyberspace in Sri Lanka and other comparative jurisdictions.
The first research objective is to identify and examine legal provisions regarding the right to privacy in cyberspace at the international level. The second research objective is to identify and analyze laws relating to the right to privacy in cyberspace in the United Kingdom and compare these legal provisions with Sri Lankan law. Thereby the strengths and weaknesses of the Sri Lankan legal framework regarding the right to privacy in cyberspace are examined, and options for necessary reforms are also suggested. Qualitative research methodology was employed in the research. The results of the research revealed that the right to privacy has been recognized at the international level under the UN Guidelines for the Regulation of Computerized Personal Data Files (1989) and, more recently, under the UN General Assembly Resolution on the Right to Privacy in the Digital Age (2013). In the UK, the Data Protection Act (1998), the Regulation of Investigatory Powers Act (2000) and a series of regulations enacted pursuant to EU Directives assume significance. In spite of the positive features of these statutes, the UK approach towards data privacy has been criticized for its inherent lacunae and inconsistencies (Raab & Goold, 2011). In fact, the need to reform UK law according to privacy principles, which would result in strengthening the right to privacy in cyberspace within the country, has been considered (Raab & Goold, 2011). On the other hand, Sri Lanka does not recognize the right to privacy in its Constitution or in any other specific legislation. In fact, relevant Sri Lankan statutes such as the Electronic Transactions Act (2006) and the Computer Crimes Act (2007) are devoid of specific provisions relating to the right to privacy in cyberspace. However, certain legislation in this area contains provisions which are relevant to the right to privacy in cyberspace.
For example, the Telecommunication Act (1991) provides that interception of telecommunication transmissions is a punishable offence, and preventing or obstructing the transmission of telecommunication messages, or intruding upon, interfering with or accessing telecommunication messages, has also been prohibited. Furthermore, this issue has been addressed to a certain extent through the Computer Crimes Act, which penalizes dealing with unlawfully obtained data, illegal interception of data and unauthorized disclosure of information. In addition, the right to privacy has been recognized by the judiciary under the common law of Sri Lanka, where actions have been brought under Roman-Dutch Law. Thus it is evident that, in spite of the absence of specific constitutional or legislative recognition of the right to privacy, it has been recognized under common law by the Sri Lankan judiciary in a variety of legal contexts. Therefore it remains to be seen whether the Sri Lankan courts would recognize this right in relation to the protection of personal information in cyberspace. However, it is asserted that there is a pressing need to reform Sri Lankan law in order to reflect recent trends at the international level. In spite of the availability of a remedy for violation of the right to privacy under the common law of Sri Lanka, statutory recognition of the right to privacy in cyberspace would provide clarity and certainty in this area of law and would ensure effective legal protection of the right to privacy in cyberspace in the country.

Item Advanced Real Time Traffic Controller System Based on Fuzzy Logic and Motion Detection Sensors(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Divaagar, P. Traffic congestion at intersections is becoming a major concern in metropolitan cities.
Customary traffic light signals (TLS) operate on predetermined traffic light patterns based on traffic weights calculated from previous statistics for particular junctions. This method is becoming inefficient given the day-to-day growth in automobile usage in a country like Sri Lanka. Another reason for inefficiency is that determining traffic-flow patterns through statistical analysis is unreliable. A sophisticated solution to this issue is recommended in this paper: controlling the TLS with respect to real-time traffic flows using motion detection sensors and fuzzy logic technology. The objective is to maximize the traffic flow rate and reduce waiting at junctions. The motion detection sensors count the flow rate on each path toward the junction from a reasonable distance. Fuzzy logic is the intelligence in the system, acting like a human traffic operator. The Matlab fuzzy logic toolbox is used to design the fuzzy logic. A model of a road junction installed with the advanced real-time traffic controller system is animated to display the results. The traffic lights act on decisions made by the fuzzy logic system according to the instantaneous traffic load on the roads approaching the junction. The sensors are installed twenty-five meters before the intersection on all paths approaching the junction; this helps the fuzzy logic system to decide the next signal change time efficiently, foresee incoming vehicles, make decisions in advance and reduce vehicle waiting latency. Sample traffic flows were applied in a simulation, the responses of the traffic light signals were observed, and these scenarios were compared with a customary traffic light controller system. This model is more efficient than the current traffic light controller system available in Sri Lanka.