KICACT 2016

Permanent URI for this collection: http://repository.kln.ac.lk/handle/123456789/15608

  • Item
    Performing Iris Segmentation by Using Geodesic Active Contour (GAC)
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Yuan-Tsung Chang; Chih-Wen Ou; Jayasekara, J.M.N.D.B.; Yung-Hui Li
    A novel iris segmentation technique based on active contours is proposed in this paper. Our approach addresses two key problems: pupil segmentation and iris circle calculation. If the correct centre position and radius of the pupil can be found in the test image, the iris can then be segmented precisely. The final accuracy reaches around 92% on the ICE dataset, with a high accuracy of 79% on UBIRIS. Our results demonstrate that the proposed iris segmentation performs well, with high accuracy on iris images.
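To make the two-stage pipeline concrete (pupil first, then iris), here is a minimal sketch using scikit-image's morphological geodesic active contour. This is not the authors' code: the seed placement, iteration counts and the assumption of an RGB input are illustrative only.

```python
# Hypothetical sketch of GAC-based iris segmentation, following the
# two-stage pipeline in the abstract. Parameters are assumptions.
import numpy as np
from skimage import io, color
from skimage.filters import gaussian
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def segment_iris(path):
    gray = color.rgb2gray(io.imread(path))  # assumes an RGB eye image
    # Edge-stopping image: values drop near strong edges, halting the contour.
    gimage = inverse_gaussian_gradient(gaussian(gray, sigma=2))

    # Stage 1: evolve a small central seed outward to the pupil boundary.
    init = np.zeros(gray.shape, dtype=np.int8)
    cy, cx = np.array(gray.shape) // 2
    init[cy - 20:cy + 20, cx - 20:cx + 20] = 1
    pupil = morphological_geodesic_active_contour(
        gimage, num_iter=100, init_level_set=init, balloon=1)

    # Stage 2: grow from the pupil mask out to the iris/sclera boundary.
    iris = morphological_geodesic_active_contour(
        gimage, num_iter=150, init_level_set=pupil, balloon=1)
    return pupil, iris
```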
  • Item
    Students’ Perspective on Using the Audio-visual Aids to Teach English Literature and Its Effectiveness
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Wijekoon, W.M.S.N.K.
    The field of education is being renewed every second, and Human-Computer Interaction plays a vital role in it. Government authorities have therefore paid more attention to this aspect in order to provide quality education. According to reports published by the Ministry of Education, the government has conducted trainings, workshops and seminars throughout the country on using modern technology, including modern audio and visual aids. Yet most teachers of English Literature still do not use them in the classroom, and students learn the subject in a conventional classroom environment. In this respect, this study explores how effective audio-visual aids are for teaching English Literature, a subject considered traditional, in enhancing students' literary competence, and examines students' perspectives on using audio-visual aids to teach the subject. As the sample for the study, forty-five Grade Ten students who learn English Literature as an optional subject for the GCE Ordinary Level Examination were selected from four government schools in the Kandy and Matale districts. Data were collected through a questionnaire and participant observation. Through the questionnaire, students' preference for the subject and their views on teaching methods with and without modern audio-visual aids were studied; learning behaviour and student involvement with and without the aids were studied through participant observation. The qualitative analysis of the data revealed high student involvement when the subject is taught with modern audio-visual aids. The quantitative analysis provides initial evidence that the teachers' conventional teaching process is less productive and contributes less towards the expected goals of teaching and learning English Literature. The findings suggest that it is necessary to implement this pedagogical tool in teaching English Literature, as it can create a highly constructive learning environment.
  • Item
    An Application of Context Assured Ontology for Rule Based Cluster Selection in Psychotherapy
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Vidanage, K.; de Silva, O.
    Personality trait analysis is a very important requirement in psychotherapy: a consultant should have a sound awareness of the client's personality before commencing effective therapy sessions. In this research, the OCEAN model for personality trait analysis is implemented computationally. The OCEAN model is an effective model used in psychology to determine the composition of human temperament. Expert knowledge associated with the five dimensions of the OCEAN model is captured and stored in the form of rule-based expert clusters. Additionally, an upper ontology is designed to control the context associated with the OCEAN model. Ontologies are well suited to storing domain knowledge in the form of triples: various lexicon combinations depicting contexts can be grouped together and assigned as a specific object property, and the different properties of the same object depict the various contexts to which the object may be exposed. Here, the upper ontology acts as a navigator that points to a specific knowledge cluster. The knowledge clusters are used to determine the sub-facets of a particular trait as well as its intensity. Once the client enters the psychological discomfort they are experiencing as text through the interface, it is processed with natural language processing and important semantics are identified. Depending on the semantics captured, the entered text is sent to an established SPARQL query module. The SPARQL queries defined in the module are mapped to particular regions of the created ontology, so executing a particular query interrogates a specific region of the ontology. The end points of the ontology are further mapped to the different rule-based expert clusters, so the client's problem, entered as text, is ultimately directed to a particular rule-based expert cluster containing expert knowledge captured from psychologists. Eventually a similarity index is calculated, and the percentile composition of the personality traits is derived according to the dimensions of the OCEAN model. The developed prototype was evaluated in two ways. First, more than 30 expressed psychological inconveniences were captured from two well-known discussion forums, "Panic Center" and "Daily Strength", which are globally available for sharing psychological problems with the community. Each captured story was fed as input to the prototype and OCEAN reports were generated; the scenarios and the generated reports were then shared with a psychologist to evaluate the accuracy of the outcomes. After evaluating the prototype's outcomes against the psychologist's expert knowledge, more than 80% accuracy was observed. As the second mechanism, results were compared against Truity, a well-known questionnaire-based online trait evaluation site. A trait evaluation questionnaire designed using the OCEAN model was attempted on Truity and the final result sheet obtained. Next, an artificial story was created covering the same set of questions and the same answers given, and this story was provided as input to the prototype to generate another OCEAN report. When the Truity-generated report and the prototype-generated report were compared, small variations were visible in the percentile values, but the inflation and deflation patterns of the two reports were almost identical.
As discussed above, both validation mechanisms show that the prototype-generated OCEAN report achieves an acceptable level of accuracy. Although ample questionnaire-based online trait analysis tools are available, there are almost no text-based trait analysis approaches. A questionnaire-based mechanism limits the expressiveness of the user or patient, since the patient is restricted to a pre-defined set of questions; with this prototype, no such restrictions apply, and the user's free-flowing thoughts can be entered. Rather than asking a psychologically distressed patient to fill in a questionnaire, which is not fair, this prototype allows the user to express whatever comes to mind about their cognitions. The chances of misinterpreting questionnaire items and giving wrong answers are also addressed by this system. To get the best out of the system, it must be used under the supervision of a psychologist or psychiatrist: the prototype is intended to provide digital diagnostic assistance to consultants, so domestic use without a consultant's intermediation will not give the intended benefits. The ultimate intention of this research is to improve the interaction between consultant and patient through a computational intervention, because the active ingredients of therapy come from the live interaction between consultant and patient. As the literature shows, fully computational replacement of therapy has been an utter failure, but an effective blend of computing with live therapy has greatly improved the efficacy of psychotherapy.
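The routing step (captured semantics driving a SPARQL query against one region of the ontology) can be sketched with rdflib. The ontology file name, namespace and property names below are hypothetical, not the paper's actual vocabulary.

```python
# Minimal sketch (not the authors' code) of routing a client's text to a
# region of an OCEAN upper ontology via SPARQL.
import rdflib

g = rdflib.Graph()
g.parse("ocean_upper.owl", format="xml")  # hypothetical ontology file

def query_trait_region(keyword):
    """Question one region of the ontology: find trait facets whose
    lexicon group contains the captured semantic keyword."""
    q = """
        PREFIX ocean: <http://example.org/ocean#>
        SELECT ?facet ?cluster WHERE {
            ?facet  ocean:hasLexicon    ?lex ;
                    ocean:mapsToCluster ?cluster .
            FILTER (CONTAINS(LCASE(STR(?lex)), LCASE(?kw)))
        }
    """
    return list(g.query(q, initBindings={"kw": rdflib.Literal(keyword)}))

# Semantics captured by the NLP step would drive the routing:
for facet, cluster in query_trait_region("worry"):
    print(facet, "->", cluster)  # cluster names the rule-based expert cluster
```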
  • Item
    End-user Enable Database Design and Development Automation
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Uduwela, W.C.; Wijayarathna, G.
    An Information System (IS) is a combination of software, hardware, and network components working together to collect, process, create, and distribute data for business operations. It consists of "update forms" to collect data, "reports" to distribute data, and "databases" to store data. IS plays a major role in many businesses because it improves business competitiveness. Although SMEs are interested in adopting IS, they are often held back by other factors: time, the underlying cost, and the availability of ICT experts. Hence, the ideal solution for them is to automate the process of IS design and development, without ICT expertise, at an affordable cost. Software tools are available on the Web to generate the "update forms" and "reports" automatically for a given database model; however, there is no approach to generate the databases of an IS automatically. The relational database model (RDBM) is the most commonly used database model in IS due to its advantages over other data models. The reason for the model's advantages is its design, but it is not a natural way of representing data. The model is a collection of data organized into multiple tables/relations linked to one another by key fields; these links represent the associations between relations. Typically, tables/relations represent entities in the domain. A table/relation has columns and rows, where columns represent the attributes of the entity and rows represent the records (data). Each row in a table should have a key that identifies the row uniquely. Designers have to identify these elements from the given data requirements during RDBM design, which is difficult for non-technical people. The RDBM design process has a few steps: collect the set of data requirements, develop the conceptual model, develop the logical model, and convert it to the physical model. Although there are approaches to automate some steps of the RDBM design and development process, they too require technical support. Thus, a mechanism is needed to automate the database design and development process that overcomes the difficulties in existing RDBM automation approaches, so that non-technical end-users can develop their databases by themselves. Hence, a comprehensive literature survey was conducted to analyse the feasibility of, and the difficulties in, automating the RDBM design and development process. Uduwela, W. et al. state that the "form" is the best way to collect the data requirements of the database model for automation, because a form is semi-structured compared with natural language (the most common way to present data requirements) and is very close to the underlying database. Approaches were available to automate the development of the conceptual model from the given data requirements. This is the most critical step in the RDBM design process, because it needs to identify the elements of the model (entities, their attributes, relationships among the entities, keys, and cardinalities). Form-based approaches were analysed, using the data available in the literature, to recognize the points where user intervention is needed. The analysis shows that all approaches need user support and corrections to their outcome, because the elements are not consistent across business domains; they differ from domain to domain and even within the same domain.
Further, these approaches demand user support to prepare the initial input (the set of forms) according to the data requirements in order to identify the elements of the conceptual model. The next step of the process is developing the logical model based on the conceptual model. The outcome of the logical model should be a normalized database that eliminates data insertion, update and deletion anomalies by reducing data redundancy. Data redundancy is often caused by functional dependencies (FDs), which are sets of constraints between two sets of attributes in a relation. A database can be normalized by removing undesirable FDs (partial dependencies and transitive dependencies). We could not identify any approach that generates a normalized database diagram automatically and directly from the data requirements; existing approaches require the FDs as input to generate the normalized RDBM. A designer's keen perception and skill are needed to identify the correct FDs, because they too depend on the domain, which is a problem for automation. FDs can be found by data mining, but this generates an incorrect set of FDs if there are insufficient data combinations. Developing the physical model from the logical model is straightforward, and relational database management systems help to automate it. From this analysis it can be concluded that existing approaches to conceptual model development cannot produce accurate models, as a distinct model has to be developed for each problem; normalization approaches cannot be automated either, as FDs vary among business domains and even within a domain. This leads to the conclusion that there should be a database model that end-users can design and develop without any expert knowledge. The proposed model should be neither domain-specific nor problem-specific. It would be better if the approach could convert the data requirements to the database model directly, without intermediate steps like those in the RDBM design process, and better still if the proposed model could run on existing database management systems.
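Why identifying FDs is the crux of normalization can be shown with the standard attribute-closure computation, the step used to find keys and to spot partial or transitive dependencies. This small sketch is illustrative and not from the paper.

```python
# Attribute-set closure: the textbook building block of FD-based
# normalization (finding keys, detecting undesirable FDs).
def closure(attrs, fds):
    """attrs: iterable of attributes; fds: list of (lhs, rhs) frozensets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs       # apply the dependency
                changed = True
    return frozenset(result)

# Example: R(student, course, instructor) with
#   {student, course} -> instructor  and  instructor -> course.
# The first closure shows {student, course} is a key; the second shows
# instructor -> course holds with a non-key LHS -- exactly the kind of
# undesirable FD that normalization removes.
fds = [(frozenset({"student", "course"}), frozenset({"instructor"})),
       (frozenset({"instructor"}), frozenset({"course"}))]
print(closure({"student", "course"}, fds))  # all attributes -> a key
print(closure({"instructor"}, fds))         # {'instructor', 'course'}
```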
  • Item
    Braille Messenger: SMS Sending Mobile App for Blinds Using Braille
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Udapola, U.B.H.S.; Liyanage, S.R.
    The mobile phone is an essential device in people's day-to-day lives. People mostly use mobiles for communication, entertainment, scheduling tasks and so on. For communication, people use voice calls, online chatting and the Short Message Service (SMS), but typing a message is not easy for blind or Visually Impaired (VI) people. At the beginning of the mobile era, mobiles had tactile buttons (hard keyboards), so typing text with tactile buttons was much easier for blind users than using touch screens. As mobile technology advanced, the market moved to feature-rich smartphones with accessibility features (screen reading) such as VoiceOver in iOS, Narrator in Windows and TalkBack in Android, so blind users could also move to smartphones. At first, however, typing text on smartphones used the same QWERTY or 4x3 soft keyboards that sighted people use: the blind user moves a finger over the keyboard, the system speaks the touched key, and to input that key the user must double-tap it. For blind or VI people, the familiar way of reading and writing is the Braille system founded by the Frenchman Louis Braille, so designers have introduced Braille-to-text methods for typing. When designing an app around Braille input, the multi-touch capability of the device must be considered: although most mobile phones support multi-touch, the number of points they can detect simultaneously differs; it can be 2, 5, 10 and so on. A design that uses 6 multi-touch points is unsuitable for devices that detect fewer than 6 points and will not produce the expected output; conversely, a design that uses only the basic multi-touch capability (2 points) reduces efficiency and usability for users whose devices support the best multi-touch capability (10 points). I therefore propose a solution that offers different User Interface (UI) designs according to the multi-touch capability of the device, and I developed 3 UI designs to support devices with different capabilities. Design A: type a single Braille character using 2 fingers, tapping 3 times per character; targets devices with only the basic multi-touch capability of 2 points. Design B: type a single Braille character using 3 fingers, tapping 2 times per character; targets devices with more than 2 but fewer than 6 points. Design C: type a single Braille character using 6 fingers with a single tap per character; targets devices with the best multi-touch capability of 10, or more than 6, points. At first the user has to register reference points one by one, because the UI is user-customizable: there is no restriction on how fingers are placed on the screen. The user simply registers fingers for positions 1, 2, 3, 4, 5 and 6 respectively. I then use the K-NN algorithm to detect which finger produced an input, treating each reference point's (x, y) coordinates as the centre of a class. Here I assume the user does not reposition his or her hand on the device.
However, with repeated touching, users' touch points drift away from the initially registered reference points, which can increase the error rate, so I use the K-Means algorithm to update the reference point (centre) of each class with every tap. If the user repositions his or her hand, the reference points must be registered again, since there will then be a large variance between the registered reference points and the current touch points. I use the 6-bit Braille encoding method with voice and vibration feedback. Most apps use a Text-To-Speech (TTS) engine to read text; here I also include vibration rhythms so that deaf-blind people can identify Braille characters, although this feature is available only for the Grade 1 Braille system. Moreover, to make Braille Messenger more user-friendly, I use simple drawn patterns to run commands such as WHITE SPACE, BACKSPACE and ENTER: the coordinates of the drawn pattern are stored and a mathematical algorithm classifies the command. I also plan to offer the most frequently used words of more than 5 characters as predicted words, presenting a single word (the most frequently used) rather than a list of all predictions. When I tested the app with three blind participants, including one pseudo-blind participant, I obtained on average 92.3% accuracy in detecting entered Braille characters and 95% accuracy in detecting drawn pattern commands. Typing speed on designs A, B and C increased with the number of sessions tried, and with two hands I reached a maximum typing speed of 16 WPM.
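The two ideas combined here (nearest-reference classification of a touch, plus a running-mean drift correction of the matched centre) can be sketched as follows. This is an illustrative reconstruction, not the app's code; coordinates and class count are assumptions.

```python
# Sketch of the abstract's touch pipeline: classify a tap against the six
# registered finger positions, then drift-correct the matched centre.
import math

class TouchClassifier:
    def __init__(self, reference_points):
        # reference_points: list of (x, y), registered for positions 1-6
        self.centers = [list(p) for p in reference_points]
        self.counts = [1] * len(reference_points)

    def classify(self, x, y):
        """Nearest-neighbour step: index of the closest registered position."""
        dists = [math.hypot(x - cx, y - cy) for cx, cy in self.centers]
        return min(range(len(dists)), key=dists.__getitem__)

    def update(self, idx, x, y):
        """K-Means-style step: move the matched centre toward the new touch,
        so gradual drift is tracked tap by tap."""
        self.counts[idx] += 1
        n = self.counts[idx]
        cx, cy = self.centers[idx]
        self.centers[idx] = [cx + (x - cx) / n, cy + (y - cy) / n]

clf = TouchClassifier([(100, 200), (200, 200), (300, 200),
                       (100, 400), (200, 400), (300, 400)])
finger = clf.classify(105, 210)   # -> 0, i.e. Braille dot position 1
clf.update(finger, 105, 210)
```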
  • Item
    Android smartphone operated Robot
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Thiwanka, U.S.; Weerasinghe, K.G.H.D.
    At present, the open-source Android platform is widely used in smartphones. Android provides a complete software package consisting of an operating system, a middleware layer and core applications. Android-based smartphones are becoming more powerful and are equipped with several accessories that are useful for robotics. The purpose of this project is to provide a powerful, user-friendly computational Android platform with a simpler robot hardware architecture. This project describes a way of controlling robots using a smartphone and Bluetooth communication. Bluetooth has changed how people use digital devices at home or in the office, turning traditional wired digital devices into wireless ones. The project is mainly developed using the Google voice recognition feature, which can be used to send commands to the robot; the robot's motion can also be controlled using the accelerometer and the buttons in the Android app. Bluetooth communication is used as the network interface controller, and the robot's motion is controlled according to the commands received from the application. The consistent output of a robotic system, along with its quality and repeatability, is unmatched. This project aims to provide a simple framework for building robots at very low cost but with the high computational and sensing capabilities of the smartphone used as the control device. Using this concept, we can help disabled people do their work more easily (e.g. a motorized wheelchair, or remotely controlling equipment with the smartphone); we can also build surveillance and reconnaissance devices, design home automation, and control any kind of device that can be controlled remotely. Several hardware components were used, such as an Arduino Uno, an Adafruit Motor Shield, a Bluetooth module and an ultrasonic distance measuring transducer sensor. The Uno is a microcontroller board based on the ATmega328P; it contains everything needed to support the microcontroller: simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Arduino uses shield boards, which plug onto the top of the Arduino and make it easy to add functionality; this particular shield is the Adafruit Industries Motor/Stepper/Servo Shield, which has a very complete feature set supporting servos, DC motors and stepper motors. The Bluetooth module, which uses AT commands, connects the smartphone to the robot. The HC-SR04 ultrasonic sensor uses sonar to determine the distance to an object, as bats and dolphins do; it offers excellent non-contact range detection with high accuracy and stable readings, from 2 cm to 400 cm (1 inch to 13 feet), in an easy-to-use package. Its operation is not affected by sunlight or black materials, and it comes with an ultrasonic transmitter and a receiver module. The system has two major parts: the Android application and the robot hardware. When developing the Android application, new Android technologies such as Google Voice and phone motion were used. To improve the application's security a voice login was added, along with a program to change the login PIN, a robot scan program, and two control programs using buttons with the accelerometer and Google voice input. The Arduino IDE and the Arduino language were used to program the robot. Arduino has a simple methodology for running the source code.
It has a setup function and a loop function: variables and other initialisation go inside the setup function, and the loop function runs continuously, executing the content of its body. The AFMotor header is used in the code to access functions controlling the motor shield and the motors, and the SoftwareSerial header file is used to make the connection between the Arduino and the Bluetooth module. Using the black-box test method, the integrity, usability, reliability and correctness of the Android application were checked. Finally, user acceptance tests were done with different kinds of users, and a field test verified that the robot can identify an object in front of it within the distance limit coded into the program. Today we live in a world of robotics: knowingly or unknowingly, we use different types of robots in our daily lives. The aim of this project is to evaluate whether we can design robots ourselves to do our work in a low-budget, simple way. We believe this project will be helpful for students interested in these areas and offers a good solution for everyday human problems. The project has many applications and very good future scope, and it allows its components and parameters to be modified to obtain the desired output, customizing and automating the day-to-day things in our lives.
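The command link between phone and robot can be pictured with a short controller-side sketch over a Bluetooth serial port. The single-character command letters and the port name are assumptions for illustration, not the project's actual protocol.

```python
# Hypothetical controller-side sketch of the Bluetooth command link:
# one-character motion commands written to a serial port bridged to the
# HC-05-style module on the Arduino's SoftwareSerial pins.
import serial  # pyserial

COMMANDS = {"forward": "F", "backward": "B",
            "left": "L", "right": "R", "stop": "S"}

def send_command(port, action):
    # Open the Bluetooth serial port and write one command byte.
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(COMMANDS[action].encode("ascii"))

# e.g. send_command("/dev/rfcomm0", "forward")
```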
  • Item
    Resource Efficiency for Dedicated Protection in WDM Optical Networks
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Suthaharan, S.; Samarakkody, D.; Perera, W.A.S.C.
    The ever-increasing demand for bandwidth is posing new challenges for transport network providers. A viable solution to meet this challenge is to use optical networks based on wavelength division multiplexing (WDM) technology. WDM divides the huge transmission bandwidth available on a fiber into several non-overlapping wavelength channels and enables simultaneous data transmission over these channels. WDM is similar to frequency division multiplexing (FDM), but instead of taking place at radio frequencies (RF), WDM operates in the optical portion of the electromagnetic spectrum. In this technique, optical signals with different wavelengths are combined, transmitted together, and separated again: a multiplexer at the transmitter joins the signals, and a demultiplexer at the receiver splits them apart. It is mostly used in optical fiber communications to transmit data over several channels with slightly different wavelengths. This technique enables bidirectional communication over one strand of fiber as well as multiplication of capacity, so the transmission capacity of optical fiber links can be increased greatly and efficiency improved; WDM systems expand network capacity without laying more fiber. Failure of an optical fiber, such as a fiber cut, causes the loss of a huge amount of data and can interrupt communication services. There are several approaches to ensuring mesh fiber network survivability. In survivability, the path over which transmission actively takes place is called the working or primary path, whereas the path reserved for recovery is called the backup or secondary path. In this paper we consider the traditional dedicated protection method, in which backup paths are configured at the time the primary paths of connections are established. If a primary path is brought down by a failure, it is guaranteed that resources will be available to recover from the failure, assuming the backup resources have not also failed; traffic is therefore rerouted over the backup path with a short recovery time. In this paper, we investigate performance by calculating the variation in spectrum efficiency for traditional dedicated protection in WDM optical networks. To evaluate the pattern of spectrum efficiency we use various network topologies in which the number of fiber links differs. Spectrum efficiency is the optimized use of spectrum or bandwidth so that the maximum amount of data can be transmitted with the fewest transmission errors. It is calculated by dividing the total traffic bit rate by the total spectrum used in the network: the total traffic bit rate is the data rate multiplied by the number of connections (lightpaths), and the total spectrum is the frequency used for a single wavelength multiplied by the total number of wavelengths (bandwidth slots) used in the network. We carry out the investigation with detailed simulation experiments on different single line rate (SLR) scenarios: 100 Gbps, 400 Gbps, and 1 Tbps. In addition, this paper focuses on four standard optical network topologies with different numbers of fiber links to identify how spectrum efficiency varies across networks. To evaluate the performance, we considered the 21-link NFSNET, the 30-link Deutsche network, the 35-link Spanish network, and the 43-link US network as specimens.
In our simulation study, the spectrum efficiency of each network is plotted in a separate graph and the graphs are compared with each other. Our findings are as follows. (1) Spectrum efficiency for each SLR is almost similar and comparable across all the network topologies. (2) The spectrum efficiency of topologies with a high number of fiber links is higher than that of topologies with a low number of fiber links; that is, spectrum efficiency increases as the number of links increases.
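The abstract's definition of spectrum efficiency can be restated compactly; the notation below is ours, not the paper's:

```latex
% R        = single line rate (bit/s)
% N_c      = number of connections (lightpaths)
% \Delta f = spectrum of one wavelength channel (Hz)
% N_\lambda = number of wavelengths (bandwidth slots) used
\[
  \eta \;=\; \frac{\text{total traffic bit rate}}{\text{total spectrum used}}
       \;=\; \frac{R \, N_c}{\Delta f \, N_\lambda}
       \qquad \left[\frac{\text{bit/s}}{\text{Hz}}\right]
\]
```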
  • Item
    Use of Library and Internet Facilities for Seeking Information among Medical Students at Faculty of Medicine, University of Kelaniya
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Solangaarachchi, D.I.K.; Marasinghe, M.P.L.R.; Abeygunasekera, C.M.; Hewage, S.N.; Thulani, U.B.
    Information plays a vital role in education, and students are always seeking information as an aid to their studies. With the development of the internet, which is proving to be an incomparable information resource for learning and research, students are increasingly inclined to use it to find information. For medical students, many of the tools that support medical education and transmit health research are now available online: e-books, e-journals, subject-specific databases, and academic and professional websites with numerous educational resources. The internet is therefore considered a rich information resource that can support medical education worldwide. This study was conducted with the objective of assessing the frequency and purposes of use of the faculty library and internet facilities by medical students of the Faculty of Medicine, University of Kelaniya. A survey of MBBS students at the Faculty of Medicine, University of Kelaniya was carried out from May to June 2016. Students in their second to fifth academic years were included; first-year students were excluded, as they were considered still to be adjusting to the system. Data were collected using a self-administered questionnaire distributed among students who visited the Information and Communication Technology (ICT) centre and the medical library of the faculty. Two hundred and forty-six (85%) students responded to the questionnaire, consisting of 27% (n=67), 20% (n=48), 30% (n=75) and 23% (n=56) from years 2 to 5 respectively. According to the responses, medical students mainly seek the information they need from library material (70.3%), the internet (59.3%), personal textbooks (54.9%) and discussions with colleagues (37.4%). Only 13.9% of the students stated that they visited the library at least once a day, while 33.9% go there several times a week. Those who visit the library once a week or less, but more than once a month, represented 30.2% of responders, and a considerable proportion (22%) visit the library less than once a month or never. The main resources students accessed in the library were textbooks (92.7%), past papers (36.2%) and journals (4.9%). Regarding frequency of internet use, 82.8% of the medical students stated that they accessed it several times per day, while 11.9% accessed the internet only once a day and 5.3% less frequently than that. Devices used for accessing the internet included smartphones (55.7%), tablets (32.9%), laptops (32.9%) and desktops (13.0%). As for the data access method used to connect to the internet, mobile data (75.8%) and Wi-Fi (73.2%) featured most prominently, whereas dongle connections (20.3%) and wired connections (3.7%) were less popular. The most frequent reasons for accessing the internet were finding information related to studies (53.3%), emailing (30.1%) and using social media such as Facebook (37.0%). Based on the responses of the sampled students, the faculty internet facilities (Wi-Fi or wired) were used by 80.9%. For most students, the times of day for logging on to the faculty internet were the 12 noon to 2 pm period (47.5%) and the after-4 pm period (22.8%). When asked about problems faced while finding information via the internet, 55.3% noted that the connection was too slow, while 34.6% found the inability to access faculty network e-resources from outside the faculty a hindrance.
The other issues expressed were not having enough time (16.7%), lack of ICT knowledge (6.9%), inadequate information searching skills (6.9%) and not having a device with which to connect to the internet (2.4%). The results show that even though fewer than 50% of the sampled students are regular (at least several times a week) visitors to the library, over 70% seek information related to their studies from library material. In contrast, while nearly 95% of the students were daily internet users, only around 60% used the internet as an information source, and only about 53% used it for their academic requirements. The university's efforts in providing internet facilities appear to have been worthwhile, with over 80% stating that they use the faculty Wi-Fi and/or wired connections. Yet mobile data connections were the most frequently noted method of obtaining web access, which is reflected in the finding that smartphones and tablets were used more often than laptops and desktops for accessing the web. The finding that more than one fifth of the students rarely visit the library probably means that they rely on personal textbooks in their studies; it may also reflect the influence of ICT on students' academic activities. These findings can be explained by the ever-increasing influence of ICT on education and day-to-day life: in particular, the availability of Wi-Fi within the faculty, the affordability of mobile internet connections, and the fact that handheld devices like smartphones and tablets have become versatile and accessible to most people have clearly made an impact. Recent upgrades to the faculty internet facilities may alleviate the complaint of slow connections, and expanding the Wi-Fi network to the student hostels and the North Colombo Teaching Hospital at Ragama would help to address the unavailability of faculty network e-resources outside the faculty. Even though library-based information seeking still features prominently, the findings show a possible shift towards the internet becoming the main information source for medical students. The faculty medical library and ICT centre have to be sensitive to students' information source preferences; by working together and adapting to the changing landscape, these two departments could play an ever-increasing role in improving students' use of online educational resources.
  • Item
    Object Recognition Application - Mind Game
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Senanayake, H.M.I.M.; Weerasinghe, K.G.H.D.
    Visualizing is one of the main methods of remembering something: students can remember things as a story or as components of an image. This application is designed to develop this skill by presenting itself as a game. How is the game played? First, the app shows a sequence of images, and the user should remember what he sees: not only the image but also how it is drawn. The colour combinations, shape, angle and many more details are present in a single image, and the more the user remembers of a single image, the higher the score. The application then tests how well the user remembers. The user is given a drawing canvas and a pencil tool and is asked to draw the first image of the sequence from memory; a second canvas is then given for the second image, a third for the next, and so on. The application processes the drawings, matches them with the corresponding images of the sequence, and awards a score based on the details remembered accurately. How does the application work? The most important part is object recognition. Many algorithms exist for recognizing objects and patterns, such as feature-based, appearance-based and geometry-based methods; the most popular and widely used techniques are edge- and angle-based algorithms and pixel-based algorithms. Among these, appearance- and geometry-based techniques are rarely used to develop applications, so this research covers that area. My recognition algorithm identifies images by converting image details into a mathematical model. First the algorithm identifies the shapes in the image, and each shape is given a sequence of values based on the relative area, perimeter, position coordinates of shapes, and other special characteristics evaluated by a standard function. Each shape in an image has its own mathematical structure describing its role in the image, so after all shapes have been processed as mathematical points, the image can be saved as a mathematical structure, and each object has a unique mathematical model. When recognizing an object in a newly drawn image, the new image is converted into a mathematical model using the same algorithm and matched against the previously processed and saved models. The main advantage of this method is that the number of values that must be saved as image data in the mathematical model is massively lower than in other feature-based techniques, which increases speed efficiency; this approach is considerably more efficient than edge- and angle-based techniques for recognizing images with non-discrete lines. To match the models, I apply a nearest-neighbour algorithm to the mathematical models and select the most closely matching image. On the development side, the previously processed mathematical models representing the images are saved as a two-dimensional matrix. The rows of the matrix represent the image identity (image or object name) and the characteristics of the images, and each column represents a single image; thus the number of rows equals the number of image characteristics plus one, while the number of columns varies with the number of images saved. The matrix is saved in a .mat file (the binary data container format MATLAB uses to save data).
With this method, retrieving and reading data for matching images is very easy, because this single matrix represents the whole database of images. Accuracy depends on the growth of the matrix: the more details the matrix holds about objects, the more accurately the program can identify them. To further increase recognition accuracy, we can simply increase the number of saved images of the same object drawn at different angles or in different ways. For example, if the object to be recognized is a tree, we can save a set of drawings of mango trees, coconut trees, pine trees and so on in the matrix, so the program will accurately identify any tree as a tree, whatever its genre. In the game, these methods are used to define different levels and give the user a new experience. The primary objective of this research was to recognize non-discrete pencil-drawn objects accurately; secondly, the above techniques were used to develop an application that exercises the human brain while providing a gaming experience. The designed algorithm is flexible enough to process any number of images at once, convert them into mathematical models and save all of them in a single matrix, and the program accurately identifies pencil-drawn objects using this matrix. Later, by including more image processing techniques such as image segmentation, this method can be enhanced to process and recognize other, more complex images as well.
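The storage-and-matching scheme (one column per image, rows carrying an identity plus a fixed set of shape descriptors, matched by nearest neighbour) can be sketched as below. This is an illustrative reconstruction in Python, not the author's MATLAB code, and the descriptor values are invented.

```python
# One column per image, a fixed descriptor vector per column,
# nearest-neighbour matching against a newly drawn image.
import numpy as np

names = []        # column identities (object names)
features = []     # per-image descriptor vectors

def add_image(name, descriptor):
    """descriptor: e.g. [relative_area, perimeter, cx, cy, ...]."""
    names.append(name)
    features.append(np.asarray(descriptor, dtype=float))

def recognize(descriptor):
    """Return the name of the stored model nearest to the new drawing."""
    db = np.stack(features, axis=1)                       # descriptors as columns
    d = np.linalg.norm(db - np.asarray(descriptor)[:, None], axis=0)
    return names[int(np.argmin(d))]

add_image("tree",  [0.42, 3.1, 0.50, 0.55])
add_image("house", [0.60, 4.0, 0.48, 0.40])
print(recognize([0.44, 3.0, 0.51, 0.57]))                 # -> "tree"
```

Adding more drawings of the same object as extra columns (mango tree, coconut tree, pine tree, all named "tree") is exactly how the abstract proposes to raise accuracy.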
  • Item
    Applying Smart User Interaction to a Log Analysis Tool
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Semini, K.A.H.; Wijegunasekara, M.C.
    A log file is a text file. People analyse log files to monitor system health, detect undesired behaviour in the system, recognize power failures and so on. In general, log files are analysed using log analysis tools such as Splunk, Loggly, LogRhythm, etc., all of which analyse a log file and then generate reports and graphs representing the analysed log data. Log file analysis can be divided into two categories: analysing history log files and analysing online log files. This work considers only the analysis of history log files, for an existing log file analysis framework. Most log analysis tools can analyse history log files. To analyse a log file with any of the mentioned tools, it is necessary to select a time period. For example, if a user analyses a log file for system health, the analysis reports the system health as 'good' or 'bad' only for the given time period; in general these tools provide the average system health over a given period. This is useful but sometimes insufficient: people may need to know exactly what happens in the system each second in order to predict its future behaviour or to make decisions, and with these tools it is not possible to examine the log file at that level of detail. To do such analysis, a user has to go through the log file line by line manually. As a solution to this problem, this paper describes a new smart concept for log file analysis tools. The concept works through a set of widgets and can replay executed log files. First, the existing log file analysis framework was analysed, which helped in understanding the data structure of the incoming log files. Next, log file analysis tools were studied to identify the main components and the features people like most. The new smart concept was designed using a replayable widget and graph widgets: the replayable widget replays the inputted log file, and the graph widgets graphically represent the analysed log data. The replayable widget is the main part of the project and embodies the new concept. It is a simple widget that acts just like a player, with two components: a window and a button panel. The window shows the inputted log file, and the button panel contains play, forward, backward, stop and pause buttons. The log lines shown in the window of the replayable widget are held in a tree structure (Figure 1, left-most widget). The button panel contains an extra button to view the log lines; these buttons are used to play the log lines, go to a requested log line, view a log line and control the playback. Selecting a suitable chart library was important for designing the graph widgets. A number of chart libraries were analysed, and D3.js was finally selected because it provides the chart source, a free version without watermarks, and more than 200 chart types; it has numerous chart features and supports HTML5-based implementations. The following charts were implemented using the D3.js chart library: a bar chart of the pass/failure count; a timeline of when passes/failures occur; a donut chart of the total execution count; and a donut chart of the total pass/fail count. Every graph widget is bound to the replayable widget, so updates occur with each action. The replayable widget and the graph widgets are implemented using D3.js, JavaScript, jQuery, CSS and HTML5.
The replayable widget was successfully tested, and the implemented interface runs successfully in the Google Chrome web browser. Figure 1 shows a sample interface generated from a sample log file of about 100 log lines: the left-most widget is the replayable widget, holding the log file as a tree structure; the top-right widget is a graph widget, a bar chart of the pass/failure count; and the bottom-right widget is another graph widget, a timeline of when passes/failures occurred in the given log file. In addition, the analysed log file can be visualized using donut charts. This paper described a new smart concept for log file analysis tools, one that the existing analysis tools mentioned above do not contain; most log file analysis tools use graphs for data visualization. The system was successfully implemented and was evaluated by a number of users who work with log files. This new concept will help log analysts, system analysts, data security teams and top-level management to extract decisions about the system by analysing the widgets to make predictions. Furthermore, the analysed data would be useful for collecting non-trivial data for data mining and machine learning procedures. As future work, the system could be enhanced with features such as zooming and a drill-down method to customize graphs, and with a mechanism to filter data according to user requirements.
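The preprocessing the graph widgets need can be pictured with a short sketch that derives pass/fail counts (for the bar chart) and timestamped statuses (for the timeline) from a history log file. The log line format below is an assumption for illustration; the paper's framework defines its own structure.

```python
# Minimal sketch (assumed log format, not the paper's framework):
# derive the data series behind the bar-chart and timeline widgets.
import re
from collections import Counter

LINE = re.compile(r"^(\d{2}:\d{2}:\d{2})\s+.*\b(PASS|FAIL)\b")

def summarize(path):
    counts, timeline = Counter(), []
    with open(path) as f:
        for line in f:
            m = LINE.match(line)
            if m:
                timestamp, status = m.groups()
                counts[status] += 1            # feeds the pass/failure bar chart
                timeline.append((timestamp, status))  # feeds the timeline widget
    return counts, timeline
```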