International Conference on Advances in Computing and Technology (ICACT)
Permanent URI for this community: http://repository.kln.ac.lk/handle/123456789/15607
Item Detection of Vehicle License Plates Using Background Subtraction Method (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Ashan, M.K.B.; Dias, N.G.J.
The detection of a vehicle license plate can be considered the primary task of a License Plate Recognition System (LPRS). Detecting a vehicle, locating the license plate and the non-uniformity of license plates are a few of the challenges in license plate detection. This paper proposes a method for detecting the license plates used in Sri Lanka. The work consists of a prototype developed using MATLAB's predefined functions. The license plate detection process consists of two major phases: detection of a vehicle from video footage or from a real-time video stream, and isolation of the license plate area from the detected vehicle. By sending the isolated license plate image to an Optical Character Recognition (OCR) system, its contents can be recognized. The proposed detection process may depend on factors such as lighting and weather conditions, the speed of the vehicle, efficiency in real-time detection, non-uniformity of number plates, the video source device specifications and the fitted angle of the camera. In the first phase of the license plate detection process, the detection of a vehicle from a video source is accomplished by separating the input video source into frames and analysing these frames individually. A monitoring mask is applied at the beginning of processing to define the road area, so that the algorithm looks for vehicles in the selected area only. To identify the background, a foreground detection model based on an adaptive Gaussian mixture model is used.
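As a rough illustration of the frame-level motion detection this pipeline performs, a minimal frame-differencing sketch is shown below. The paper uses an adaptive Gaussian mixture model; here a fixed background frame stands in for it, and the threshold and minimum blob size are hypothetical values, not the authors' parameters.

```python
# A minimal, hypothetical sketch of the frame-differencing idea behind
# background subtraction. A fixed background frame stands in for the
# adaptive Gaussian mixture model; threshold and min_blob_size are
# illustrative values only.

def detect_vehicle(background, frame, threshold=30, min_blob_size=4):
    """Count pixels whose difference from the background exceeds the
    threshold; enough such foreground pixels suggests a vehicle blob."""
    foreground = 0
    for bg_row, fr_row in zip(background, frame):
        for bg_px, fr_px in zip(bg_row, fr_row):
            if abs(fr_px - bg_px) > threshold:
                foreground += 1
    return foreground >= min_blob_size

# 3x3 grayscale frames: a bright 2x2 patch enters an empty scene
background = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame      = [[10, 200, 200], [10, 200, 200], [10, 10, 10]]
print(detect_vehicle(background, frame))  # → True
```

A real implementation would update the background model adaptively and group foreground pixels into connected components before applying the blob-size check.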
The learning rate, the threshold value used to determine the background model and the number of Gaussian modes are the key parameters of the foreground detection model, and they have to be configured according to the environment of the video. The background subtraction approach is used to determine the moving vehicles. In this approach, a reference frame identified as the background in the previous step is subtracted from the current frame, and the blobs which are considered to be vehicles are detected. A blob is a collection of pixels, and the blob size has to be configured according to factors such as the angle of the camera to the road and the distance between the camera and the monitoring area. Even though a vehicle is identified in the above steps, each vehicle must be identified uniquely to prevent duplicates being processed in the next phase. As the final step of the first phase, a distinct number is generated using the Kalman filter for every vehicle detected in the previous steps. This distinct number serves as an identifier for a particular vehicle until it leaves the global window. Then, the second phase of license plate detection is initiated in order to isolate the license plate from the detected vehicle image. First, the input image is converted to grayscale to reduce the luminance of the colour image, and then it is dilated. Dilation is used to reduce the noise of an image, to fill unwanted holes and to improve object boundaries by filling broken lines in the image. Next, horizontal and vertical edge processing is carried out and histograms are drawn for both. The histograms are used to detect the probable candidates for the license plate location. The histogram values of edge processing can change drastically between consecutive columns and rows.
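The edge-histogram analysis described above can be sketched as follows: count edge pixels per row, smooth the histogram with a moving average, discard the low-valued (unwanted) regions, and keep the band with the maximum smoothed value. All numbers here are hypothetical.

```python
# Illustrative sketch of the histogram step: smooth the per-row edge
# counts, drop rows below a low threshold, and keep the row with the
# maximum smoothed value as the most probable plate band. A real
# implementation would do this for both row and column histograms.

def smooth(hist, window=3):
    half = window // 2
    out = []
    for i in range(len(hist)):
        lo, hi = max(0, i - half), min(len(hist), i + half + 1)
        out.append(sum(hist[lo:hi]) / (hi - lo))
    return out

def best_band(edge_hist, low=2.0):
    smoothed = smooth(edge_hist)
    # unwanted regions are the ones with low histogram values
    candidates = [i for i, v in enumerate(smoothed) if v >= low]
    return max(candidates, key=lambda i: smoothed[i])

edge_hist = [0, 1, 1, 9, 12, 10, 1, 0]  # edges cluster where letters sit
print(best_band(edge_hist))  # → 4
```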
These drastic changes are smoothed, and the unwanted regions are then detected using the low histogram values. By removing these unwanted regions, the candidate regions which may contain the license plate are identified. Since the license plate region is assumed to contain a few closely spaced letters on a plain-coloured background, the region with the maximum histogram value is considered the most probable candidate for the license plate. To demonstrate the algorithm, a prototype was developed using MATLAB R2014a. Additional plugins such as the Image Acquisition Toolbox Support Package for OS Generic Video Interface, the Computer Vision System Toolbox and the Image Acquisition Toolbox were used for the development. When the prototype is used on a particular video stream or file, the parameters of the foreground detector and the blob size first have to be configured according to the environment. Then the monitoring window and the hardware configuration can be set. The prototype developed using the algorithm discussed in this paper was tested using both video footage and static vehicle images. These data were first grouped considering factors such as the non-uniformity of number plates and the fitted angle of the camera. Vehicle detection showed an efficiency of around 85%, and license plate locating efficiency was around 60%; therefore, the algorithm showed an overall efficiency of around 60%. The objective of this work is to develop an algorithm which can detect vehicle license plates from a video source file or stream. Since the problem of detecting vehicle license plates is crucial for some complex systems, the proposed algorithm would fill the gap.
Item Resource Efficiency for Dedicated Protection in WDM Optical Networks (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Suthaharan, S.; Samarakkody, D.; Perera, W.A.S.C.
The ever-increasing demand for bandwidth is posing new challenges for transport network providers.
A viable solution to meet this challenge is to use optical networks based on wavelength division multiplexing (WDM) technology. WDM divides the huge transmission bandwidth available on a fiber into several non-overlapping wavelength channels and enables simultaneous data transmission over these channels. WDM is similar to frequency division multiplexing (FDM); however, instead of taking place at radio frequencies (RF), WDM operates on optical carriers in the electromagnetic spectrum. In this technique, optical signals with different wavelengths are combined, transmitted together, and separated again. A multiplexer at the transmitter joins the several signals together, and a demultiplexer at the receiver splits them apart. It is mostly used in optical fiber communications to transmit data in several channels with slightly different wavelengths. The technique enables bidirectional communication over one strand of fiber, as well as multiplication of capacity. In this way, the transmission capacity of optical fiber links can be increased substantially, and efficiency increases accordingly. WDM systems expand the capacity of the network without laying more fiber. Failure of an optical fiber, in the form of a fiber cut, causes the loss of a huge amount of data and can interrupt communication services. There are several approaches to ensuring mesh fiber network survivability. In survivability, the path through which transmission is actively realized is called the working or primary path, whereas the path reserved for recovery is called the backup or secondary path. In this paper we consider the traditional dedicated protection method, in which backup paths are configured at the time the primary paths of connections are established. If a primary path is brought down by a failure, it is guaranteed that there will be resources available to recover from the failure, assuming the backup resources have not also failed. Traffic is therefore rerouted through the backup path with a short recovery time.
In this paper, we investigate the performance by calculating the spectrum efficiency variation for traditional dedicated protection in WDM optical networks. To evaluate the pattern of the spectrum efficiency, we use various network topologies in which the number of fiber links differs. Spectrum efficiency is the optimized use of spectrum or bandwidth so that the maximum amount of data can be transmitted with the fewest transmission errors. Spectrum efficiency is calculated by dividing the total traffic bit rate by the total spectrum used in the particular network. The total traffic bit rate can be calculated by multiplying the data rate by the number of connections (lightpaths). The total spectrum is the product of the frequency used for a single wavelength and the total number of wavelengths (bandwidth slots) used in the network. We carry out the investigation with detailed simulation experiments on different single line rate (SLR) scenarios, namely 100 Gbps, 400 Gbps, and 1 Tbps. In addition, this paper focuses on four standard optical network topologies with different numbers of fiber links to identify how the spectrum efficiency varies for each network. To evaluate the performance, we considered the 21-link NSFNET, the 30-link Deutsche network, the 35-link Spanish network, and the 43-link US network as specimens. In our simulation study, the spectrum efficiency for each network is plotted in a separate graph and compared with the others. Our findings are as follows. (1) The spectrum efficiency for each SLR is similar and comparable across all the network topologies.
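The spectrum-efficiency definition above can be written out directly as a formula. The 50 GHz channel width and the connection counts below are illustrative assumptions, not figures from the study.

```python
# Spectrum efficiency = total traffic bit rate / total spectrum used,
# as defined in the abstract. All numeric inputs are hypothetical.

def spectrum_efficiency(data_rate_gbps, n_lightpaths, channel_ghz, n_wavelengths):
    total_traffic = data_rate_gbps * n_lightpaths  # total traffic bit rate (Gbps)
    total_spectrum = channel_ghz * n_wavelengths   # total spectrum used (GHz)
    return total_traffic / total_spectrum          # bits per second per Hz

# e.g. twenty 100 Gbps lightpaths over forty 50 GHz wavelength slots
print(spectrum_efficiency(100, 20, 50, 40))  # → 1.0
```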
(2) The spectrum efficiency of topologies with a high number of fiber links is higher than that of topologies with a low number of links; that is, the spectrum efficiency increases as the number of links increases.
Item Performing Iris Segmentation by Using Geodesic Active Contour (GAC) (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Yuan-Tsung Chang; Chih-Wen Ou; Jayasekara, J.M.N.D.B.; Yung-Hui Li
A novel iris segmentation technique based on active contours is proposed in this paper. Our approach addresses two important issues: pupil segmentation and iris circle calculation. If the correct centre position and radius of the pupil can be found in the test image, the iris can be precisely segmented in the result. The final accuracy reaches around 92% on the ICE dataset, and a high accuracy of 79% is also obtained on UBIRIS. Our results demonstrate that the proposed iris segmentation performs well, with high accuracy, on iris images.
Item Android smartphone operated Robot (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Thiwanka, U.S.; Weerasinghe, K.G.H.D.
At present, the open-source Android platform is widely used in smartphones. The Android platform is a complete software package consisting of an operating system, a middleware layer and core applications. Android-based smartphones are becoming more powerful and are equipped with several accessories that are useful for robotics. The purpose of this project is to provide a powerful, user-friendly computational Android platform with a simpler robot hardware architecture. This project describes a way of controlling robots using a smartphone and Bluetooth communication. Bluetooth has changed how people use digital devices at home and in the office, and has turned traditional wired digital devices into wireless ones. The project is mainly developed using the Google voice recognition feature, which can be used to send commands to the robot.
The motion of the robot can also be controlled using the accelerometer and the buttons in the Android app. Bluetooth communication is specifically used as a network interface controller. According to the commands received from the application, the robot's motion can be controlled. The consistent output of a robotic system, along with its quality and repeatability, is unmatched. This project aims at providing simple solutions for building robots at very low cost but with the high computational and sensing capabilities provided by the smartphone that is used as the control device. Using this concept, we can help disabled people to do their work easily, e.g., a motorized wheelchair, or remote control of equipment using the smartphone. The concept can also be used to build surveillance and reconnaissance robots, to design home automation systems, and to control any kind of device that can be controlled remotely. Many hardware components were used, such as an Arduino Uno, an Adafruit Motor Shield, a Bluetooth module and an ultrasonic distance measuring transducer sensor. The Uno is a microcontroller board based on the ATmega328P. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Arduino uses shield boards, which plug onto the top of the Arduino and make it easy to add functionality. The shield used here is the Adafruit Industries Motor/Stepper/Servo Shield. It has a very complete feature set, supporting servos, DC motors and stepper motors. The Bluetooth module is used to connect the smartphone to the robot; it uses AT commands. The HC-SR04 ultrasonic sensor uses sonar to determine the distance to an object, as bats and dolphins do. It offers excellent non-contact range detection with high accuracy and stable readings in an easy-to-use package, from 2 cm to 400 cm (about 1 inch to 13 feet).
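The HC-SR04 reports the round-trip time of an ultrasonic pulse, so distance follows as echo time times the speed of sound, halved because the pulse travels out and back. A small sketch of the conversion, under the usual assumption of ~343 m/s for the speed of sound at room temperature:

```python
# HC-SR04 distance conversion: the sensor's echo pulse width is the
# round-trip time, so distance = echo_time * speed_of_sound / 2.
# The datasheet's 58 µs-per-cm rule of thumb follows from this.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def distance_cm(echo_time_us):
    return echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2

print(round(distance_cm(5800)))  # → 99 (a 5.8 ms echo is roughly 1 m away)
```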
Its operation is not affected by sunlight or black materials. It comes with an ultrasonic transmitter and a receiver module. This system has two major parts: the Android application and the robot hardware device. In developing the Android application, new Android technologies such as Google Voice and phone motion sensing were used. To improve the security of the application, a voice login was added. In addition, programs were added to change the login PIN, to scan for the robot, and to control the robot in two ways: using buttons with the accelerometer, and using Google voice input. The Arduino IDE and the Arduino language were used to program the robot. Arduino has a simple methodology for running source code: it has a setup function and a loop function. Variables and other definitions can be placed inside the setup function, while the loop function runs continuously according to the content of its body. The AFMotor header is used in the code to access functions that control the motor shield and the motors, and the SoftwareSerial header file is used to make the connection between the Arduino and the Bluetooth module. Using the black-box testing method, the integrity, usability, reliability and correctness of the Android application were checked. Finally, user acceptance tests were done with different kinds of users. A field test was done to check whether the robot can identify an object in front of it, with the distance limit coded into the program. Today we are in the world of robotics: knowingly or unknowingly, we have been using different types of robots in our daily lives. The aim of this project is to evaluate whether we can design robots ourselves to do our work in a simple, low-budget way. We think this project will be helpful for students who are interested in these areas and will provide a good solution for everyday problems. The project has many applications and very good future scope; it also allows modification of its components and parameters to get the desired output.
This project allows customizing and automating the day-to-day things in our lives.
Item Low Cost Electronic Stethoscope (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Nilmini, K.A.C.; Illeperuma, G.D.
Among medical devices, the stethoscope is the most widely used device for medical diagnosis. Auscultation is a non-invasive, painless, quick procedure that can identify many symptoms and has been used since as early as the 18th century. However, the major drawbacks of the conventional stethoscope are its extremely low sound level and its inability to record or share heart and lung sounds. These problems can be overcome by an electronic stethoscope, which has the potential to save many lives. Although electronic stethoscopes are already available in the market, they are very expensive. Therefore, the objective of the project was to build a low-cost electronic stethoscope. At the basic level, it would facilitate listening to heart sounds more clearly. Other facilities include the ability to control the sound level, to record the sound and share it as digital information, and to display the sound as graphs for improved diagnostics. Recording and sharing facilities were included because of the importance of tracking a patient's medical history and of discussion among groups of physicians. It can also facilitate remote diagnostics where experts may not be readily available. The 50 mm chest piece of an acoustic stethoscope was used as the chest piece due to its optimized design. The chest piece's diaphragm was placed against the chest of the patient to capture heart sounds. Those sounds were converted to electronic signals by a microphone. An electret condenser microphone was selected over several other types of microphones due to its small size (radius 3 mm) and its ability to detect low-frequency sounds (~30 Hz). The electrical signals were amplified by the preamplifier. A TL072 integrated circuit was used as the preamplifier operational amplifier; it provided a gain of 3.8.
The output signal of the preamplifier circuit was sent to a Sallen-Key low-pass filter circuit. It passed the first heart sound (S1, from 30 Hz to 45 Hz) and the second heart sound (S2, from 50 Hz to 70 Hz). Filtering was done by setting the cut-off frequency to 100 Hz, a value given by the capacitor value of 0.047 μF and the resistor value of 33 kΩ. Taking advantage of the TL072 being a dual operational amplifier on a single die, the second operational amplifier was used for the filter circuit. The output signal of the filter circuit was amplified to an appropriate amplitude for headphones and speakers by an audio power amplifier. An LM386 integrated circuit was used as the audio power amplifier; it provided a gain of about 20. Speakers and headphones were used as the output, and provision was made to use any standard 3.5 mm headphones. The constructed circuit was validated by comparing the original heart sound and the amplified output on a digital oscilloscope. Once the implementation was completed, its sound quality was compared against an acoustic stethoscope by six independent observers. Five of them heard the heart sounds more clearly with the electronic stethoscope than with the acoustic stethoscope. The accuracy of the heart sound was confirmed by a person with a thorough knowledge of anatomy. The recording facility was provided using the open-source software Audacity, with the computer audio card capturing the sound. The saved heart-sound file can be used in several ways: it can be stored in a database, shared via e-mail, and played back for further examination during diagnosis. Heart sounds were visualized as a graph on the computer. An Arduino was used to digitize the audio signal (1024 levels of resolution) and send the data through a virtual COM port to the computer for graphing. It can also be used to record sounds to an SD card when a computer is not available.
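The quoted cut-off frequency can be checked against the stated component values using the standard equal-component Sallen-Key relation f_c = 1/(2πRC):

```python
# Verify the low-pass filter's cut-off frequency from the component
# values stated above, using f_c = 1 / (2*pi*R*C).

import math

R = 33e3      # 33 kΩ
C = 0.047e-6  # 0.047 µF

f_c = 1 / (2 * math.pi * R * C)
print(round(f_c, 1))  # → 102.6, consistent with the ~100 Hz design value
```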
As a result, similar sound quality was found when comparing direct listening with a recorded sound. In conclusion, the implemented system was considered a success due to its low cost, ease of implementation and ability to provide the most useful functions required of an electronic stethoscope.
Item Smart Meter - Multifunctional Electricity Usage Meter (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Bandara, K.R.S.N.; Weerakoon, W.A.C.
The Internet of Things (IoT) is a modern concept that provides a new approach to connecting people with people and people with devices via the Internet. This concept offers solutions to many practical problems, such as connecting people with each other easily, remote control, and easy management of people or devices. Combined with other technologies, it becomes even more useful. One such modern technology is multi-agent technology; connecting these two technologies yields good solutions to many human problems. Software agents are computer programs trained for certain tasks under different environmental conditions; they can act autonomously under sudden changes in an artificial environment. A Multi-Agent System (MAS) is a collection of software agents operating in an artificial environment. Applying IoT and MAS together is a good way of creating solutions to major problems. One such problem is uncontrolled power usage, and the Smart Meter (SM), which integrates both the IoT and MAS concepts, is a solution to it. Electricity is the major form of energy used for almost everything in the modern world, so it plays a major role in industry as well as in the domestic sector. More than 50% of households use electricity as their primary power source. With greater usage of electricity, wastage also grows. This wastage burdens household economies, so people need a better way to eliminate it; it also puts the world at risk.
Because the resources used to generate electricity are finite, this wastage hastens their depletion. The key to acting on this issue is that people must think about it and act themselves, so the wastage must be presented to them in a manner that lets them feel the problem. Presenting current usage in a representative manner and predicting future usage from past consumption details makes it much easier to understand how they must act, for themselves as well as for the world. Implementing methods to act according to the resulting plan is one important component of this concept, while remote access and automatic control add new value and provide a better, easier way to eliminate wastage. Developing countries struggle to eliminate electricity wastage because they spend a large portion of their budgets on generating electricity. If a wastage-elimination plan is expensive, it is not feasible for those countries; they need inexpensive power-saving equipment. The Smart Meter is such equipment, developed using low-cost components such as an Arduino and a Raspberry Pi. The complete SM system contains three parts. The physical system contains components connected to the home electricity system to collect consumption data for areas of the home; it comprises microcontrollers, sensors, etc. The processing unit contains the multi-agents and the software used to control the microcontrollers; this is the core of the SM, which performs all calculations, analytical processes and report generation. The UI component displays the analytical results generated by the processing unit and lets the user control the electricity system remotely. All three units of the SM can be built cost-effectively, which makes it appropriate for developing countries.
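The SM's prediction of future usage from past consumption, described above, can be illustrated with a simple moving-average forecast; the daily readings below are hypothetical figures, and this stands in for whatever forecasting model the processing unit actually runs.

```python
# A minimal illustration of predicting the next consumption reading
# from past usage: forecast the next value as the mean of the last
# few readings. Figures are hypothetical daily consumption in kWh.

def forecast_next(consumption, window=3):
    """Predict the next reading as the mean of the last `window` readings."""
    recent = consumption[-window:]
    return sum(recent) / len(recent)

daily_kwh = [12.0, 14.5, 13.0, 15.5, 14.0]
print(forecast_next(daily_kwh))  # mean of the last three readings
```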
Considering the situation in Sri Lanka, another major issue is that there is no good connection between domestic systems and the service provider, so a manual system is used to collect consumption details. It therefore takes longer for the service provider to process consumption data and produce analysis reports for households. The SM system is a better way of collecting and analysing domestic consumption data, and of making people conscious of saving power, because the SM predicts future consumption and highlights weaknesses in the user's power usage. The SM uses the analytical language R in its core module, programmed to generate forecasts of the user's future consumption and present them in an understandable manner. The SM concept can be extended to industrial users and to building a power grid for an area or even the whole country, giving a new interface to the power grid concept and leading the whole country to stand for power saving as one. The SM is a MAS integrated with the IoT concept to achieve the above tasks, implemented using the MadKit agent platform and the Java language. Each software agent is assigned a task, and the agents work together to accomplish one major task. All devices connected to the central system communicate with each other as well as with the user, realizing the IoT concept. Together, these two technologies make a complete solution for electricity wastage.
Item Analysis of Emotional Speech Recognition Using Artificial Neural Network (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Archana, A.F.C.; Thirukumaran, S.
This paper presents an artificial neural network based approach for classifying emotional human speech. Speech rate and energy are the most basic features of the speech signal, but they still differ significantly between emotions such as anger and sadness.
The pitch feature is used extensively in this work, and the autocorrelation method is used to detect the pitch in each frame. The speech samples used for the simulations are taken from the Emotional Prosody Speech and Transcripts dataset of the Linguistic Data Consortium (LDC). The LDC database has a set of acted emotional speeches voiced by males and females. Only the speech samples of four emotion categories in the LDC database, containing both male and female emotional speech, are used for the simulation. In the speech pre-processing phase, samples of the four basic types of emotional speech, sad, angry, happy and neutral, are used. Important features related to the different emotional states are extracted from the voice signal to recognize speech emotions; these features are fed into the input of a classifier, which produces the different emotions at its output. The analog speech samples are converted to digital signals for pre-processing. The normalized speech signals are segmented into frames so that the speech signal maintains its characteristics over short durations. 23 short-term audio signal features of 40 samples are selected and extracted from the speech signals to analyze the human emotions. Statistical values such as the mean and variance are derived from the features. These derived data, along with their associated emotion targets, are used to train and test the artificial neural network that makes up the classifier. A neural network pattern recognition algorithm is used to train and test the data and to perform the classification. A confusion matrix is generated to analyze the performance results. The accuracy of the neural network based approach to emotion recognition improves with repeated training: the overall correct classification rate for a network trained twice is 73.8%, rising to 83.8% when the number of training runs is increased to ten.
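The autocorrelation pitch detection used in the pre-processing above can be sketched on a single frame as follows. A synthetic 200 Hz tone sampled at 8 kHz stands in for a real speech frame; the frame length and search range are illustrative assumptions.

```python
# Sketch of autocorrelation pitch detection: correlate the frame with
# lagged copies of itself and take the lag with the strongest
# correlation within a plausible pitch range as the period.

import math

FS = 8000  # sampling rate in Hz
frame = [math.sin(2 * math.pi * 200 * n / FS) for n in range(400)]

def detect_pitch(frame, fs, f_min=80, f_max=400):
    lag_min, lag_max = fs // f_max, fs // f_min
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        # autocorrelation of the frame at this lag
        corr = sum(frame[n] * frame[n - lag] for n in range(lag, len(frame)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return fs / best_lag  # pitch estimate in Hz

print(detect_pitch(frame, FS))  # → 200.0
```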
The overall system provides reliable performance, correctly classifying more than 80% of emotions once properly trained.
Item An Emotion-Aware Music Playlist Generator for Music Therapy (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Dissanayaka, D.M.M.T.; Liyanage, S.R.
Music has the ability to influence both mental and physical health. Music therapy is the application of music for the rehabilitation of brain activity and the maintenance of both mental and physical health. Music therapy comes in two different forms: active and receptive. Receptive therapy takes place by having the patient listen to suitable music tracks. Normally, music therapy is used by people who suffer from disabilities or mental ailments, but the healing benefits of music can be experienced by anyone, at any age, through music therapy. This research proposes an Android music application with an automatically generated playlist matched to its user's emotional state, which can be used in telemedicine as well as in day-to-day life. Three categories of emotional conditions, happy, sad and angry, were considered in this study. Live images of the user are captured from an Android device. The Android face detection API available on the Android platform is used to detect human faces and eye positions. After the face is detected, the face area is cropped. The image is grey-scaled and converted to a standard size in order to reduce noise and compress the image. The image is then sent to the MATLAB-based image-recognition sub-system over a client-server socket connection. A Gaussian filter is used to further reduce noise in order to maintain high accuracy. Edges of the image are detected using Canny edge detection to identify the details of the facial features. The resulting images appear as a set of connected curves that indicate the surface boundaries.
Emotion recognition is carried out using training datasets of happy, sad and angry images that are input to the emotion recognition sub-system implemented in MATLAB, using eigenface-based pattern recognition. To create the eigenfaces, the average face of each of the three categories is computed by averaging the database images in that category pixel by pixel. Each database image is subtracted from the average image to obtain the differences between the images in the dataset and the average face. Each image is then reshaped into a column vector. The covariance matrix is calculated to find the eigenvectors and their associated eigenvalues, and the weights of the eigenfaces are calculated. To find the matching emotional label, the Euclidean distance between the weight vectors is calculated for each category. By comparing the Euclidean distances obtained for the input image against each category, the class with the lowest distance is identified. The identified label (sad, angry or happy) is sent back to the Android application. Songs pre-categorised as happy, sad and angry are stored in the Android application; when the emotional label of the perceived face image is received, songs relevant to that label are loaded into the Android music player. 200 face images were collected at the University of Kelaniya for validation. Another 100 happy, 100 sad and 100 angry images were collected for testing. Out of the 100 test cases with happy faces, 70 were detected as happy; out of the 100 sad faces, 61 were detected as sad; and out of the 100 angry faces, 67 were successfully detected. The overall accuracy of the developed system on the 300 test cases was 66%. This concept can be extended for use in telemedicine, and the system has to be made more robust to noise, different poses and structural components.
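The classification step described above can be sketched as follows: project an image onto the eigenfaces to get its weight vector, then pick the emotion whose stored weights are nearest in Euclidean distance. The tiny 4-pixel "faces", the basis vectors and the class weights below are hypothetical, not real face data.

```python
# Minimal sketch of eigenface-weight classification by Euclidean
# distance, with made-up 4-pixel images and a hypothetical 2-vector
# orthonormal basis standing in for the real eigenfaces.

import math

def project(image, eigenfaces, mean_face):
    """Weights of an image: dot products of (image - mean) with each eigenface."""
    diff = [p - m for p, m in zip(image, mean_face)]
    return [sum(e * d for e, d in zip(ef, diff)) for ef in eigenfaces]

def classify(weights, class_weights):
    """Label whose stored weight vector is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(class_weights, key=lambda label: dist(weights, class_weights[label]))

mean_face = [100.0, 100.0, 100.0, 100.0]
eigenfaces = [[0.5, 0.5, -0.5, -0.5],  # hypothetical orthonormal basis
              [0.5, -0.5, 0.5, -0.5]]
class_weights = {"happy": [40.0, 0.0], "sad": [-40.0, 0.0], "angry": [0.0, 40.0]}

w = project([140.0, 138.0, 62.0, 60.0], eigenfaces, mean_face)
print(classify(w, class_weights))  # → happy
```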
The system can be extended to include other emotions that are recognizable via facial expressions.
Item MySight – A New Vision for the Blind Community (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Nimalarathna, O.D.; Senanayake, S.H.D.
People read newspapers, books, articles and many other materials every day. It is a basic human need that nature has denied to blind people. Sometimes, braille education might be the ultimate solution for such a person. As stated by the National Federation of the Blind (NFB), less than 10% of blind children in the United States are learning braille. Every year, many people lose their vision fully or partially. According to the findings, diabetes is one of the major causes of blindness, and nowadays it is spreading rapidly, so the problem of blindness is significant and growing. According to the NFB, most blind people are unemployed, and many blind students give up their studies due to the difficulty of learning. The problem is that there are not enough teachers of the braille system, and learning braille is genuinely hard. Therefore, it is worthwhile to find a method by which a blind person could read newspapers, textbooks, bills, etc. without the braille system. There are several smart applications available which use an internet connection to find the Portable Document Format (PDF) version of a particular newspaper, book or document and then read that PDF document aloud. The problem is that they require the reading material to be available on the internet in PDF format, which is not practical because not all books have PDF versions. Currently, AIRS-LA, BARD MOBILE, iBlink Radio, NFB Newsline and Voice Dream Reader are the top five applications used by the blind community. Since they need an internet connection and use huge amounts of data, they are not suitable for real-time reading.
The problem is that if a blind person wants to read something in an area without an internet connection, none of the mentioned applications can be used. There is therefore a great need for an application that is portable, effective and works offline. This research is focused entirely on finding a way to expose blind people to a new kind of reading. The main objective was to develop an Android application that could identify words and various kinds of symbols written in a standard font in a given document, and then convert them into an audible format that a blind person could understand. It should also be easy for a blind person to use, providing voice notifications and smart touch techniques. "MySight" is a revolutionary application that could change the entire reading and learning techniques of a blind person. It would replace the braille system currently used by blind people and let them read and learn effectively and easily. The application was designed so that a blind person could easily master its functionalities and experience the maximum benefit; it can also be considered a method that lets them read like a sighted person. The first step was to find an appropriate and efficient Optical Character Recognition (OCR) technique compatible with the Android platform. To fulfil that requirement, the Tesseract OCR library was used; Tesseract was chosen because it has become a leading OCR engine on account of its accuracy. The next step was interface design. Fig. 2 shows how the application captures the text; Fig. 3 (a) shows the interface after detecting the text, and Fig. 3 (b) shows the text operations that can be performed once the text is detected. Once the OCR implementation was completed, the application could detect the text in the captured image.
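One planned enhancement of the app is guiding the user to frame the whole page before OCR runs. A minimal sketch (not from the paper) of such directional-guidance logic follows; the frame size, margin, and direction wording are hypothetical illustration values.

```python
# Sketch of directional voice guidance: given the bounding box of the detected
# page and the camera frame size, decide which instruction to announce.
# Directions are illustrative; a real app would calibrate them to the camera.
def guidance(page_box, frame_w, frame_h, margin=10):
    """page_box = (left, top, right, bottom) of the detected page edges."""
    left, top, right, bottom = page_box
    if left < margin:
        return "move left"      # page clipped at the left edge of the frame
    if right > frame_w - margin:
        return "move right"     # page clipped at the right edge
    if top < margin:
        return "move backward"  # page clipped at the top
    if bottom > frame_h - margin:
        return "move forward"   # page clipped at the bottom
    return "ready"              # all four borders visible: process the image

print(guidance((5, 50, 600, 400), 640, 480))   # left border clipped
print(guidance((50, 50, 600, 400), 640, 480))  # page fully in frame
```

In the actual application, the page bounding box would come from OpenCV edge detection on the camera preview, and the returned string would be spoken to the user.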
At the same time, it can output the text as sound through the mobile phone speaker or a headset. As a future improvement, the application should be enhanced to guide the blind person in capturing the image of a paper or a page of a book. If the four borders of the page are not captured, the application should tell the user to move in the corresponding direction. Fig. 1 (a) shows a page that is not fully visible to the mobile camera; in that situation the border of the page can be identified using edge detection. If the four borders of the page are not present in the camera preview, the page is not ready to be processed. The required adjustment can be determined from the distances to the edges, and the user can then be notified by a sound output asking them to move left, right, forward or backward until the image is ready to be processed, as in Fig. 1 (b). For these improvements, the OpenCV library is going to be used for edge detection, together with smart voice commands for giving instructions to the blind person. Furthermore, the application will be tested with the blind community to evaluate its applicability and effectiveness in a real environment.

Item Augmented Reality and its possibilities in Agriculture (In Sri Lankan Context) (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Musfira, F.A.; Linosh, N.E. Since Sri Lanka is an agricultural country, its economy is largely based on agriculture and agro-based industries, animal husbandry and other agriculture-based businesses. In Sri Lanka, agriculture continues to be the major occupation and way of life for more than half of the total population. Since information technology and the Internet have recently become an essential part of business processes, they have considerable potential to be used in the development of agriculture.
Nowadays, Internet-based applications are increasingly successful in agriculture and in different parts of the food industry, and the emerging trend in information technology is Augmented Reality (AR). The field of AR has existed for just over a decade, but its growth and rapid progress in the past few years have been remarkable. The basic goal of an AR system is to enhance the user's perception of, and interaction with, the real world through virtual objects. There are several application areas, such as extension services, precision agriculture, e-commerce and information services, where augmented reality can be used. Agriculture, however, is an area where advanced technology is always introduced with a delay. This research analyses how augmented reality can be used in agriculture. Certain AR applications in agriculture are already present in European countries, but the technology is still in its infancy in Asian countries, especially in South Asia. In Sri Lanka, many opportunities to use these techniques in agriculture can be foreseen; some possible applications follow. First, in research: Sri Lanka has many agricultural research centres, such as the Sri Lanka Council for Agricultural Research, the Gannoruwa Agricultural Complex and the Agricultural Research and Development Centre, where enrichment of an image becomes necessary. Here AR can be used to add dimension lines, coordinate systems, descriptions and other virtual objects to improve the investigation and analysis of captured images. Another area AR will probably reach in the near future is the cabin of modern agricultural machinery and mobile machinery. Some components of such a system already exist and are being used in the form of simple displays that show the use of GPS.
Adding large displays or special glasses, on which computer-drawn lines showing passage routes or plot boundaries are overlaid on the image of the field, is a logical development of existing solutions. The third area is animal husbandry and farming. An AR system can be developed and installed on Sri Lankan farms for farm monitoring; suitable software may allow individual animals to be identified on the screen, with simultaneous display of the relevant information about them, such as recorded data and the health status of the farm animals. Finally, in crop production it is possible to identify plants with a camera and an appropriate AR system, which gives the ability to detect pests and to plan appropriate protective procedures. From studying the use of Augmented Reality technology in agriculture, it can be concluded that different types of services offer different possibilities. Mobile systems are developing very dynamically, both in the speed of data transmission and in services. New devices such as tablets and new services such as cloud computing and Near Field Communication (NFC) have great potential in agriculture. Augmented Reality can be allied with all of those technologies, expanding the possibilities and helping Sri Lankan agro farms evolve towards a new era in agriculture.
However, the topic must not be assessed on the basis of the technology alone, taken out of its environment, since the whole area is very complex. This paper therefore focuses on finding and analysing what Augmented Reality is and tries to highlight its possibilities in agriculture.

Item Context-Aware Multimedia Services in Smart Homes (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Chih-Lin Hu; Kasthuri Arachchi, S.P.; Wimaladharma, S.T.C.I. The evolution of "smart home" technologies exposes a broad spectrum of modern personal computers (PCs), consumer electronics (CE), household appliances and mobile devices for intelligent control and services in residential environments. With the high penetration of broadband access networks, PC, CE and mobile device categories can be connected on home networks, providing a home computing context for novel service design and deployment. However, conventional home services are characterized by different operations and interactive usages among family members in different zones inside a house. It is promising to realize user-oriented and location-free home services with modern home-networked devices in smart home environments. This paper proposes a reference design for a novel context-aware multimedia system in home-based computing networks. The proposed system integrates two major functional mechanisms: intelligent media content distribution and multimedia convergence. The first mechanism performs intelligent control of services and media devices in a context-aware manner, integrating face recognition functions into home-based media content distribution services. Devices capable of capturing images can recognize the appearance of registered users and infer their changes of location inside the house.
Media content played in the last location can thus be distributed to the home-networked devices closest to the users in their current locations. The second mechanism offers multimedia convergence among multiple media channels and renders a uniform presentation of media content services in residential environments. It can provide not only local media files and streams from various devices on a home network but also Internet media content that can be fetched online, transported and played on multiple home-networked devices; the multimedia convergence mechanism can therefore introduce an unlimited volume of media content from the Internet to a home network. The development of the context-aware multimedia system can be described as follows. A conceptual system playground in a home network contains several Universal Plug and Play (UPnP) home-networked devices that are inter-connected on a single administrative network based on Ethernet or Wi-Fi infrastructure. According to UPnP specifications, home-networked devices are assigned IP addresses using auto-IP configuration or the DHCP protocol. UPnP-compatible devices can then advertise their presence on the network; when neighbouring devices discover them, they can collaborate on media content sharing services. In addition, some UPnP-compatible devices are capable of face recognition, capturing the front images of users inside the house. The captured images are sent to a user database and compared with the existing user profiles of individuals in the family community. Once a registered user is recognized, the system can refer to that user's stored details and offer personal media services in a smart manner. The components and functionalities of the proposed system support both the intelligent media content distribution and the multimedia convergence mechanisms.
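The UPnP discovery step mentioned above is performed over SSDP. As a sketch of that protocol step, the snippet below builds the M-SEARCH datagram a control point would multicast to find media renderers; actually sending it over UDP is omitted, and the search target shown is one standard UPnP device type.

```python
# Sketch of the SSDP M-SEARCH request a UPnP control point broadcasts to
# discover devices on the home network. Only the datagram is constructed;
# transmitting it to the SSDP multicast group over UDP is left out.
SSDP_ADDR = "239.255.255.250"  # standard SSDP multicast address
SSDP_PORT = 1900

def msearch(search_target, mx=2):
    """Build an SSDP M-SEARCH request for the given search target (ST)."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",                 # max seconds a device may wait to reply
        f"ST: {search_target}",
        "", "",                      # request ends with a blank line (CRLF CRLF)
    ]
    return "\r\n".join(lines).encode("ascii")

request = msearch("urn:schemas-upnp-org:device:MediaRenderer:1")
print(request.decode().splitlines()[0])  # M-SEARCH * HTTP/1.1
```

Devices matching the ST header reply with unicast HTTP responses containing their description URLs, which is how the control point learns what renderers are available for content distribution.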
Technically, the proposed system combines several components: a UPnP control point, a UPnP media renderer, a converged media proxy server, an image detector, and a profile database of registered users and family communities. Though there are diverse media sources and formats in a home network, users retain the same operational behaviour for sharing and playing media content according to common UPnP and Digital Living Network Alliance (DLNA) guidelines. Prototypical development achieved proof-of-concept software based on the Android SDK and JVM frameworks, integrating intelligent media content distribution and converged media services. The resulting software is platform-independent and application-level; it can be deployed on various home-networked devices that are compatible with UPnP standard device profiles, e.g., UPnP AV media servers, media players, and mobile phones. A real demonstration has been conducted with the software implementation running on various off-the-shelf home-networked devices. The proposed system is therefore able to offer a friendly user experience for context-aware multimedia services in residential environments.

Item Applying Smart User Interaction to a Log Analysis Tool (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Semini, K.A.H.; Wijegunasekara, M.C. A log file is a text file. People analyze log files to monitor system health, detect undesired behaviour in the system, recognize power failures, and so on. In general, log files are analyzed manually using log analysis tools such as Splunk, Loggly, LogRhythm, etc. All of these tools analyze a log file and then generate reports and graphs that represent the analyzed log data. Analyzing log files can be divided into two categories: analyzing history log files and analyzing online log files. In this work, only the analysis of history log files is considered, for an existing log file analysis framework.
Most log analysis tools can analyze history log files. To analyze a log file with any of the mentioned tools, it is necessary to select a time period. For example, if a user analyzes a log file with respect to system health, the analysis labels the system health as 'good' or 'bad' only for the given time period; in general these tools provide the average system health over that period. This is useful, but sometimes insufficient: people may need to know exactly what happens in the system each second in order to predict its future behaviour or to make decisions, and with these tools it is not possible to examine the log file in such detail. To do such analysis, a user has to go through the log file line by line manually. As a solution to this problem, this paper describes a new smart concept for log file analysis tools. The concept works through a set of widgets and can replay executed log files. First, the existing log file analysis framework was analyzed, which helped in understanding the data structure of the incoming log files. Next, log file analysis tools were studied to identify the main components and the features people like most. The new smart concept was designed using a replayable widget and graph widgets: the replayable widget replays the input log file, and the graph widgets graphically represent the analyzed log data. The replayable widget is the main part of the project and embodies the new concept. It is a simple widget that acts just like a player, with two components: a window and a button panel. The window shows the input log file, and the button panel contains play, forward, backward, stop and pause buttons. The log lines shown in the window of the replayable widget are held in a tree structure (Figure 1: left-most widget). The button panel contains an extra button to view the log lines.
These buttons are used to play the log lines, go to a requested log line, view a log line, and control the playback of log lines. It was important to select a suitable chart library for the graph widgets. A number of chart libraries were analyzed, and D3.js was finally selected because it provides chart source code, a free version without watermarks, and more than 200 chart types; it has numerous chart features and supports HTML5-based implementations. The following charts were implemented using the D3.js chart library: a bar chart of pass/failure counts, a timeline of when passes/failures occurred, a donut chart of the total execution count, and a donut chart of the total pass/fail count. Every graph widget is bound to the replayable widget, so updates are made with each action. The replayable widget and the graph widgets are implemented using D3.js, JavaScript, jQuery, CSS and HTML5. The replayable widget was successfully tested, and the implemented interface runs successfully in the Google Chrome web browser. Figure 1 shows a sample interface generated using a sample log file of about 100 log lines. The left-most widget is the replayable widget, which holds the log file as a tree structure; the top right widget is a graph widget, a bar chart of pass/failure counts; and the bottom right widget is another graph widget, a timeline of when passes/failures occurred in the given log file. In addition, the analyzed log file can be visualized using donut charts. This paper described the new smart concept for log file analysis tools; the existing analysis tools mentioned above do not contain this concept, although most log file analysis tools use graphs for data visualization. The system was successfully implemented and was evaluated by a number of users who work with log files.
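The aggregation behind the pass/fail bar and donut charts described above can be sketched as a simple count over log lines. This is an illustration only; the log format below is hypothetical, not the framework's actual format.

```python
# Sketch of the counting step that feeds a pass/failure bar chart: scan each
# line of a history log and tally PASS and FAIL status fields.
from collections import Counter

def pass_fail_counts(log_lines):
    """Count lines whose whitespace-separated fields contain PASS or FAIL."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if "PASS" in fields:
            counts["pass"] += 1
        elif "FAIL" in fields:
            counts["fail"] += 1
    return counts

sample_log = [
    "2016-08-01 10:00:01 testA PASS",
    "2016-08-01 10:00:02 testB FAIL",
    "2016-08-01 10:00:03 testC PASS",
]
print(pass_fail_counts(sample_log))  # Counter({'pass': 2, 'fail': 1})
```

In the actual tool, these totals would be handed to the D3.js bar and donut chart widgets, while the timestamps on each matching line would feed the timeline widget.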
This new concept will help log analysts, system analysts, data security teams, and top-level management to draw conclusions about the system and make predictions by studying the widgets. Furthermore, the analyzed data would be useful for collecting non-trivial data for data mining and machine learning procedures. As future work, the system could be enhanced with features such as zooming and a drill-down method to customize graphs, and a mechanism to filter data according to user requirements.

Item End-user Enable Database Design and Development Automation (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Uduwela, W.C.; Wijayarathna, G. An Information System (IS) is a combination of software, hardware, and network components working together to collect, process, create, and distribute data for business operations. It consists of "update forms" to collect data, "reports" to distribute data, and "databases" to store data. IS plays a major role in many businesses because it improves business competitiveness. Although SMEs are interested in adopting IS, they are often deterred by other factors: time, the underlying cost, and the availability of ICT experts. Hence, the ideal solution for them is to automate the process of IS design and development, without requiring ICT expertise, at an affordable cost. Software tools are available on the Web to generate the "update forms" and "reports" automatically for a given database model; however, there is no approach to generate the databases of an IS automatically. The relational database model (RDBM) is the most commonly used database model in IS due to its advantages over other data models. The reason for the model's advantages is its design, but it is not a natural way of representing data. The model is a collection of data organized into multiple tables/relations linked to one another using key fields; these links represent the associations between relations.
Typically, tables/relations represent entities in the domain. A table/relation has columns and rows, where the columns represent the attributes of the entity and the rows represent the records (data). Each row in a table should have a key that identifies it uniquely. Designers have to identify these elements from the given data requirements during RDBM design, which is difficult for non-technical people. The design of an RDBM has a few steps: collect the set of data requirements, develop the conceptual model, develop the logical model, and convert it to the physical model. Though there are approaches that automate some steps of RDBM design and development, they too require technical support. Thus, a mechanism is required that automates the database design and development process while overcoming the difficulties in existing RDBM automation approaches, so that non-technical end-users can develop their databases by themselves. Hence, a comprehensive literature survey was conducted to analyze the feasibility and difficulties of automating RDBM design and development. Uduwela et al. argue that the "form" is the best way to collect the data requirements of the database model for automation, because a form is semi-structured, unlike natural language (the most common way of presenting data requirements), and is very close to the underlying database. Approaches are available to automate the development of the conceptual model from the given data requirements. This is the most critical step in RDBM design, because it must identify the elements of the model (entities, their attributes, relationships among the entities, keys, and cardinalities). Form-based approaches were analyzed using the data available in the literature to recognize the places where user intervention is needed.
The analysis shows that all approaches need user support, and their outcomes need correcting, because the elements are not consistent across business domains; they differ from domain to domain and even within the same domain. Further, the approaches demand user support to prepare the initial input from the data requirements (a set of forms) in order to identify the elements of the conceptual model. The next step of the process is developing the logical model from the conceptual model. The outcome of the logical model should be a normalized database that eliminates insertion, update and deletion anomalies by reducing data redundancy. Data redundancy is often caused by functional dependencies (FDs), which are constraints between two sets of attributes in a relation. The database can be normalized by removing undesirable FDs (partial dependencies and transitive dependencies). We could not identify any approach that generates a normalized database design automatically and directly from the data requirements; existing approaches require the FDs as input in order to generate the normalized RDBM. A designer's keen perception and skill are needed to identify the correct FDs, because they too depend on the domain, which is a problem for automation. FDs can be found by data mining, but this generates an incorrect set of FDs if there are insufficient data combinations. Developing the physical model from the logical model is straightforward, and relational database management systems help automate it. According to the analysis, it can be concluded that the existing approaches to conceptual model development cannot produce accurate models, as a distinct model has to be developed for each problem; normalization approaches cannot be automated either, because FDs also vary among business domains and even within the same domain. This leads to the conclusion that there should be a database model that can be designed and developed by end-users without any expert knowledge.
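The FD test underlying both normalization and FD mining can be stated concisely: a dependency X -> Y holds in a relation if no two rows agree on X but disagree on Y. A small sketch (with made-up example rows) also illustrates why insufficient data can suggest spurious FDs, since a dependency that merely happens to hold in a small sample may not hold in the domain.

```python
# Sketch of checking whether a functional dependency X -> Y holds in a set of
# rows: if two rows agree on the X attributes, they must agree on Y.
def fd_holds(rows, lhs, rhs):
    """rows: list of dicts; lhs, rhs: tuples of attribute names."""
    seen = {}
    for row in rows:
        x = tuple(row[a] for a in lhs)
        y = tuple(row[a] for a in rhs)
        if x in seen and seen[x] != y:
            return False  # same X values but different Y values: FD violated
        seen[x] = y
    return True

# Made-up example rows for illustration.
rows = [
    {"emp_id": 1, "dept": "HR", "dept_head": "Silva"},
    {"emp_id": 2, "dept": "IT", "dept_head": "Perera"},
    {"emp_id": 3, "dept": "HR", "dept_head": "Silva"},
]
print(fd_holds(rows, ("dept",), ("dept_head",)))    # dept determines dept_head
print(fd_holds(rows, ("dept_head",), ("emp_id",)))  # violated: Silva -> 1 and 3
```

Here `dept -> dept_head` is a transitive dependency via `emp_id -> dept -> dept_head`, exactly the kind of FD that normalization to third normal form removes by splitting the table.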
The proposed model should be neither domain-specific nor problem-specific. It would be better if the approach could convert the data requirements into the database model directly, without intermediate steps like those in the RDBM design process. Further, it would be better still if the proposed model could run on existing database management systems.

Item Animal Behavior Video Classification by Spatial LSTM (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Huy Q. Nguyen; Kasthuri Arachchi, S.P.; Maduranga, M.W.P.; Timothy K. Shih Deep learning, the basis for building artificial intelligence systems, has become a very active research area in recent years. Current deep neural networks reach human-level recognition of natural images, even on huge datasets such as ImageNet. Among successful architectures, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are widely used to build complex models because of their advantages. A CNN reduces the number of parameters compared to a fully connected neural network; furthermore, it learns spatial features by sharing weights between convolution patches, which not only helps improve performance but also extracts similar features of the input. LSTM is an improvement on the vanilla Recurrent Neural Network (RNN). When processing time-series data, RNN gradients tend to vanish during training with backpropagation through time (BPTT); LSTM was proposed to solve this vanishing-gradient problem and is therefore well suited to managing long-term dependencies. In other words, LSTM learns the temporal features of time-series data. This study focused on creating an animal video dataset and investigating the way a deep learning system learns features from it. We propose a new dataset and experiments using two types of spatial-temporal LSTM, which allow us to discover latent information in animal videos. To the best of our knowledge, no previous study has used this method with animal activities.
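The LSTM gating that provides the long-term-dependency behaviour described above can be sketched in pure Python. This is a minimal illustration with a one-dimensional cell and made-up scalar weights, not trained values and not the Conv-LSTM used in the experiments (which replaces these scalar products with convolutions).

```python
# Minimal sketch of one LSTM step: forget, input and output gates control how
# the cell state mixes old memory with new input, which is what mitigates the
# vanishing-gradient problem of a vanilla RNN. Weights are illustrative only.
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step for scalar input/state; w holds (w_x, w_h, bias) per gate."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])  # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])  # input gate
    g = tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])     # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])  # output gate
    c = f * c_prev + i * g   # cell state: gated mix of old memory and new input
    h = o * tanh(c)          # hidden state exposed to the next layer
    return h, c

w = {"f": (0.5, 0.1, 1.0), "i": (0.9, 0.1, 0.0),
     "g": (1.0, 0.2, 0.0), "o": (0.8, 0.1, 0.0)}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:   # a toy 3-step input sequence
    h, c = lstm_step(x, h, c, w)
print(-1.0 < h < 1.0)  # hidden state stays bounded by the tanh/sigmoid gating
```

Because the cell state update `c = f * c_prev + i * g` is additive rather than repeatedly squashed, gradients can flow across many time steps, which is the property that makes LSTM suitable for video sequences.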
Our animal dataset was created under three conditions: the data must be videos, so that our network can learn spatial-temporal features; the subjects are popular animals such as cats and dogs, since it is easy to collect more data on them; and each video should contain one animal, without humans or any other moving objects. In the experiments, we performed the recognition task on the Animal Behavior Dataset with two types of models to investigate their differences. The first model is Conv-LSTM, an extended version of LSTM in which the input and state transformations are replaced by convolutional connections. The second model is the Long-term Recurrent Convolutional Network (LRCN), proposed by Jeff Donahue. More layers of LSTM units can easily be added to both models to make a deeper network. We performed classification using 900 training and 90 testing videos and reached a recognition accuracy of 66.7%. We did not apply any data augmentation; in the future, we hope to improve the accuracy using preprocessing steps such as flipping and rotating video clips, and by collecting more data for the dataset.

Item Development of a Location Based Smart Mobile Tourist Guide Application for Sri Lanka (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) de Silva, A.D.; Liyanage, S.R. Tourism plays a momentous role in achieving macroeconomic stability in Sri Lanka, and it is one of the main industries generating high earnings for the country. According to observations and collected data, foreign currency earnings from the tourism industry have decreased significantly during the past few years. This can be partially attributed to the lack of reliability of physical tour guides as well as the failure to update tour guide booklets regularly.
Considering the above issues, we propose a mobile application named "Live Tour Guide" to make travelling easier for tourists, thereby creating a positive impact on the economy of Sri Lanka. A meticulous investigation was carried out to establish the software and hardware requirements for developing this automated tour guide application. The feasibility analysis for the system covered three areas: operational, economic and technical. Since the application contains details of hotels, attractions, and the longitude/latitude of different locations, an external source was needed to collect these data; under the assumption that the relevant websites are updated regularly, dedicated websites were used to gather the required information. The direct-observation data collection method was also used to identify the work carried out by tour guides, their behaviour, the way they treat tourists, and so on. The system was developed around two main elements: the mobile application and the web server. The web server is used to access cached data and information through the mobile application. Information about different locations, such as longitude and latitude, was gathered using the Global Positioning System (GPS), and Google Maps was employed to access map-based services. The central web server can be accessed over the Internet using wireless connectivity or a 3G connection. The web server serves the current location information and provides details of the hotels and attractions situated close by, allowing tourists to plan their journey accurately in advance with minimal effort. An external database was developed using MySQL to maintain the details of the places of interest, and JavaScript Object Notation (JSON) objects are used to exchange location data between the internet and the application program.
The Google Maps Application Programming Interface is used to access Google Maps. The "Live Tour Guide" mobile application was developed to provide real-time location-based services according to the requirements of tourists, and has been tested to operate on any smartphone running Android version 4.2 or later. When a user enters a source and a destination, the application displays the route, the estimated journey time without traffic, and the distance between the origin and the destination. It also provides two options, "Locations" and "Hotels", which give details of all the available hotels and attractions located close by along the preferred route. Apart from the mobile application, a "Live Tour Guide" web application has also been developed so that travel agencies can maintain the database in a user-friendly manner. Using all the above technologies together with real data, the objective of developing this Android-based "Live Tour Guide" application was successfully achieved. Even though some tour guide solutions already exist, this application allows tourists to plan their tour before starting the journey, supporting various kinds of origins and destinations, and to choose the locations they prefer to visit, since it provides all the information, including prices. Any user equipped with an Android-based smartphone is able to use this application. In future, the system should be enhanced to display all the public places available within a selected route, and a way is needed to access the "Live Tour Guide" application accurately even without an internet connection.
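The basic computation behind distance estimates between GPS coordinates is the great-circle (haversine) distance; a minimal sketch follows. Note this is straight-line distance, not the road distance a routing API would return, and the coordinates below are approximate illustrative values.

```python
# Sketch of the haversine formula: great-circle distance between two
# latitude/longitude points, the building block of GPS distance estimates.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS coordinates."""
    R = 6371.0  # mean Earth radius in km
    p1, p2 = radians(lat1), radians(lat2)
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Approximate coordinates: Colombo to Kandy.
d = haversine_km(6.9271, 79.8612, 7.2906, 80.6337)
print(round(d))  # roughly 94 km in a straight line
```

An application like the one described would use this kind of calculation to rank nearby hotels and attractions by distance, while delegating actual route distances and travel times to the Google Maps API.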
Currently, the database is updated manually; it would be better to update it automatically at regular intervals so that it operates more accurately. Through this innovative application, more tourists can be attracted, which will have a positive impact on the economy of Sri Lanka.

Item Student Attendance Management System Based on Fingerprint Recognition (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Pushpakumara, D.C.; Weerakoon, W.A.C. During lectures at the university, students' attendance is traditionally taken manually, using attendance sheets distributed by the lecturer in class, rather than by an automated system. At the end of the semester, the attendance is used in calculating the final grade for each subject. Supporting staff manually enter attendance data into Excel sheets based on the signature lists collected during lecture hours. This manual method requires a lot of stationery; maintaining the records is tedious for the user, and retrieving information is equally difficult. Hence, the manual process consumes considerable human and physical resources and has many disadvantages: an expensive and time-consuming data entry process, large manual data insertions prone to errors, and lecturers sometimes finding it inconvenient to track and analyse the attendance registry owing to dishonest behaviour by students and the lack of an automated system. The lecturers are responsible for monitoring all students' attendance for the whole semester, and it is apparent that the current manual process is highly inefficient. Because of this problem, an automated system is needed to reduce the human intervention and physical resources involved in recording student attendance more accurately and efficiently, and to forward the attendance to the final grading process. The manual system was analysed, and the necessary features for the automated system were identified as functional and non-functional requirements.
As a result, the implemented system mainly comprises a student attendance management system using fingerprint authentication. A fingerprint device will be provided in each lecture hall at the faculty. The system records the attendance of students when the class begins. The main objectives of this study were to reduce paper usage, avoid human errors, compare the efficiency of the proposed system with the manual system, generate effective reports, and use various sensor technologies to enhance the user interface experience. The significance of the implemented system can be discussed from three perspectives: the faculty, the lecturers, and the students. The system keeps track of students, courses, timetable details, etc. It can only be accessed by authorized people, and there are different privilege categories. The Student Attendance Management System provides useful analysis graphs for lecturers, along with other calculation processes, and flexible reports for all students. To fulfil all the analysed requirements, the system consists of three modules. The first module allows the system administrator (admin) to log into his account and to perform functions such as adding new students, modifying students’ information and deleting students; adding new course modules and modifying/deleting course details; enrolling students in courses; marking student attendance; adding students’ tutorial, practical and final marks for each course module; and generating attendance reports and result sheets for each course and each student. Furthermore, reports can be printed or sent via email; the module also supports generating data analysis graphs for each course unit, managing timetable details, adding new users and modifying user information, and changing the login password. The second module of the system is used by the lecturers. Lecturers log into the system with their user name and password. 
The system provides lecturers with privileges such as viewing students’ details; entering students’ attendance, tutorial, practical and final marks (which can only be entered by the relevant lecturer); generating and printing attendance reports; viewing timetable details; changing the login password; and sending special notices to the admin. The last module is for the students. This module provides a web-based system for students with privileges such as viewing their personal details, timetables, attendance details and results. The Student Attendance Management System uses Java to implement the front end, with connectivity to MySQL. The implemented system is based on database, object-oriented and networking techniques. The fingerprint module can verify fingerprints and holds up to 600 fingerprint templates and 60,000 transaction records. Information can also be retrieved from the fingerprint machine, since it supports both a USB cable and a USB flash drive. NetBeans was used as the IDE. The main languages used are Java, JavaScript and PHP. The Student Attendance System consists of a server and a central database. The system administrator can access the database using the admin panel. Test cases were created for each criterion, and simple unit tests, system testing, integration testing, security testing and performance testing were successfully carried out to check all the functional requirements. After analysing the system’s goal and research direction, a set of objectives was established, such as implementing the attendance system with an N-tier architecture, testing the software in a real environment, and generating effective reports such as attendance reports and result sheets. The implemented system is faster and more accurate than the existing one. As future work, we plan to implement the fingerprint machine using a GT-511C1 fingerprint scanner and Arduino; with a Wi-Fi shield, the fingerprint machine can be passed among the students during the lecture. 
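The attendance-marking and report-generation functions described above can be sketched in miniature. All class and method names here are hypothetical; the real system is implemented in Java with a MySQL back end, not this in-memory Python sketch.

```python
from collections import defaultdict

class AttendanceRegister:
    """Minimal in-memory sketch of the attendance module
    (hypothetical names; the actual system is Java + MySQL)."""

    def __init__(self):
        self.sessions = defaultdict(int)   # course -> number of sessions held
        self.present = defaultdict(int)    # (student, course) -> sessions attended

    def mark(self, course, present_students):
        # One call per lecture: record the session and who attended it.
        self.sessions[course] += 1
        for student in present_students:
            self.present[(student, course)] += 1

    def percentage(self, student, course):
        # Attendance percentage used when forwarding to final grading.
        held = self.sessions[course]
        return 100.0 * self.present[(student, course)] / held if held else 0.0

reg = AttendanceRegister()
reg.mark("CS101", ["alice", "bob"])
reg.mark("CS101", ["alice"])
pct = reg.percentage("bob", "CS101")   # bob attended 1 of 2 sessions -> 50.0
```

In the deployed system, `mark` would be triggered by a successful fingerprint match rather than by an explicit list of names.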
Further, the efficiency of this automated system can be enhanced using Multi-Agent Technology, introducing different software agents that bear distinct responsibilities within the system. Finally, this system can be integrated into the current information systems used within the faculties of the university.Item Adopting SDLC in Actual Software Development Environment: A Sri Lankan IT Industry Experience(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Munasinghe, B.; Perera, P.L.M.The Systems Development Life Cycle (SDLC) and its variant forms have been around in the systems development arena as a steadfast and reliable development approach since the 1960s and are still widely used in the software development process in the information technology (IT) industry. The IT industry has been adopting SDLC models as a solution to minimize the issues that arose in a large number of failed projects. Though SDLC models powerfully capture the common phases software projects undergo during development, most software development organizations in the Sri Lankan IT industry today use SDLC models only as a token to show off their process quality but fail to adhere to them in practice, thus failing to grasp the real benefits of the SDLC approach. This study sought to find the causes behind the practical difficulties of a medium-sized Sri Lankan IT company in finding a fitting SDLC model for its development process, and the limitations on adhering to such a model-based approach. The research instruments were questionnaires administered to a sample frame consisting of employees, experts and managers. Interview schedules were also used. The findings of the study indicate that the main cause behind the difficulty in finding a fitting model is extreme customer involvement, which causes regular requirement changes. 
The company concentrates more on winning the customer than on following the proper requirements-definition approaches suggested by SDLC to define clear-cut requirement specifications, which results in inefficient customer interference throughout the development process, demanding inconvenient changes to be addressed down the line. Most software projects with version releases involve maintenance and bug fixing while developing the next release. As customers become system users, their demands become more insistent, making the maintenance process tedious and the development of subsequent phases more challenging. A lack of proper customer management approaches is strongly visible in all areas of development, and customer demands cause poor resource management and increased stress on the workforce. The study findings suggest that the main reasons behind the limitations in companies following a proper SDLC approach are: limitations in budget and human resources, unrealistic deadlines, frequent requirement changes, vague project scope definitions, the nature of the project (whether offshore or local), the need to use new technologies coupled with a lack of timely knowledge expertise, project team diversity, and the company’s own business model interfering with the project dynamics. Future work will focus on further investigations incorporating a number of Sri Lankan IT companies covering all ranges of business magnitudes.Item Two Tier Shield Unapparent Information Deliver along with the Visual Streams(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Eranga, D.M.S.; Weerasinghe, K.G.H.D.Hiding information for various security purposes has become a highly exciting topic in industry as well as in academia. Encryption provides the ability to hide data. With the development of technology, people tend to seek a technique that is capable not only of hiding a message, but also of hiding the very existence of the message. 
Steganography was introduced as a result of that research. The current study was conducted in order to hide a file inside a video file. Generally, the benefits of steganography are not exploited in industry or by students, even though it is a widely discussed topic in the modern information world. The major aim of this research is the ability to hide any type of file in a video file and to retrieve the hidden information. Few algorithms/systems have been developed to embed a file into video files, and it is a great challenge to extract secret information directly from a video in which it is already embedded. The existing applications require considerable time to embed even a small message, and some are not freeware. The focus areas of the research are confidentiality, authentication, increased hidden data size, integrity, assuring the perceptual transparency of the video file (the cover object), and sending/receiving video files. A video consists of frames called I, P and B frames. The least significant bit (LSB) technique is used in each frame to hide information. The original message can be of any file type, and almost all popular video file formats can serve as the carrier. The system identifies the type of the message and encrypts the message file using AES-256 with a given key. The encrypted message size is stored in four bytes, and the type of the message file in another four bytes. The proposed algorithm decides the number of frames required to hide the secret information according to the sizes of both the carrier video and the secret message. First, it reads the video header to retrieve important information and then skips the header. The video file is split into bitmap images with a pre-defined frame gap between two images, corresponding to the secret message size. Every bitmap image consists of red, green and blue colours; each colour of a pixel has 8 bits, for a total of 24 bits, called the bit depth. The message size, followed by the message type, is written into the bitmap images, and then the message itself is written. Each encoded image is added back into the original video file. 
In the retrieval process, the header frames are skipped and the images are fetched from the video according to the pre-defined gap between images. The first eight bytes are read to identify the message size and the type of the message, respectively. Then the encrypted message is decoded and decrypted with the same secret key that was used to encrypt it. The carrier video file can be watched during both the encoding and decoding processes. This method does not increase the size of the carrier, and the existence of the message cannot be detected. AES-256 encryption supports the dual-layer security of the classified information. The proposed solution supports a unique feature that can delete the hidden information concealed inside the video without affecting the video carrier. The encoded video is guaranteed to retain the original quality of the carrier. The proposed solution is delivered as an application called SilentVideo1.0. The system was tested to assure the quality of the final product. Testing focused on the accuracy of the proposed algorithm, that is, its ability to hide the existence of the information as well as to retrieve the information correctly using the application. Test results show that the success rate of the proposed algorithm goes up to 85 percent. Furthermore, the application was evaluated for the exactness of the input and output information by black-box tests using 200 samples from different video formats. The aim of this work was to propose a strong solution for steganography in digital media with multi-tier protection. The hidden file capacity will also be increased by using the sound track of the video file. 
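The LSB embed/retrieve round trip described above can be sketched on a flat list of colour bytes standing in for one bitmap frame. This is a simplified illustration, not the authors' implementation: the AES-256 layer, the 4-byte type field, and the frame-gap logic are omitted, and only the 4-byte length header plus payload is embedded.

```python
def embed(pixels, message):
    """Hide `message` (bytes) in the least significant bits of `pixels`
    (a flat list of 8-bit colour values). A 4-byte big-endian length
    header precedes the payload, mirroring the size field in the paper."""
    payload = len(message).to_bytes(4, "big") + message
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for this message")
    out = pixels[:]                      # carrier size is unchanged
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b     # overwrite only the LSB
    return out

def extract(pixels):
    """Recover the hidden bytes: read the 4-byte length, then the payload."""
    def read_bytes(start, n):
        val = 0
        for i in range(n * 8):
            val = (val << 1) | (pixels[start + i] & 1)
        return val.to_bytes(n, "big")
    size = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, size)          # payload begins after 32 header bits

carrier = list(range(256)) * 20          # stand-in for bitmap colour bytes
stego = embed(carrier, b"secret")
```

Because only the least significant bit of each colour byte changes, the visual difference per pixel is at most 1/255 per channel, which is the basis of the perceptual-transparency claim above.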
Upcoming versions of the system will be upgraded with the latest cryptographic techniques and will increase the concealed-message capacity while keeping encoding and decoding times as low as possible.Item Intelligent Recruitment Management Engine(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Chiththananda, H.K.I.C.L.; Perera, T.D.; Rathnayake, L.M.; Mahanama, M.G.G.D.D.P.; Prabashana, P.M.P.; Dias, D.P.N.P.A system that can be used to automate the recruitment process of an organization is proposed in this paper. The system is designed for the human resource department of an organization in order to simplify the massive process of data extraction from a large number of curricula vitae (CVs), to reduce the cost and time that have to be spent on the interview process, and to allocate the most suitable interviewers from the organization for each interview by analysing past data. Information extraction is used to retrieve data from CVs as well as from cover letters. An ontology map is created to analyse and categorize the extracted keywords through this system. The CVs are then sorted and prioritized according to the requirements of the organization. We propose a prediction component, embedded in the main system, to predict future values such as the cost and time of the recruitment process of a particular organization by analysing past records. Hence, we believe that this system will enhance the efficiency and effectiveness of the recruitment process of any organization.Item Students’ Perspective on Using the Audio-visual Aids to Teach English Literature and Its Effectiveness(Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Wijekoon, W.M.S.N.K.The field of education is being renewed every second, and Human Computer Interaction plays a vital role in it. 
Thus, the government authorities have paid more attention to this aspect in order to provide a quality education. According to reports published by the Ministry of Education, the government has conducted trainings, workshops and seminars all around the country on using modern technology, including modern audio and visual aids. Yet most teachers of English Literature still do not use them in the classroom, and students learn the subject in a conventional classroom environment. In this respect, this study explores how effective it is to use audio-visual aids to teach English Literature, which is considered a traditional subject, in order to enhance students’ literary competence, and examines the students’ perspective on using audio-visual aids to teach English Literature. For the study, forty-five students from Grade Ten who learn English Literature as an optional subject for the GCE Ordinary Level Examination were selected as the sample from four government schools in the Kandy and Matale districts. Data was collected through a questionnaire and participant observation. Through the questionnaire, the students’ preference for the subject and their views on teaching methods with and without modern audio-visual aids were studied. Learning behaviour and the students’ involvement with and without the audio-visual aids were studied through participant observation. The qualitative analysis of the data revealed that there is high involvement of the students when they learn this subject with modern audio-visual aids. The quantitative data analysis provides initial evidence that the teachers’ conventional teaching process is less productive and contributes less to reaching the expected goals of teaching and learning English Literature. 
The findings suggest that it is necessary to implement this pedagogical tool to teach English Literature, as it has the ability to create a highly constructive learning environment.