DSpace Collection: http://repository.kln.ac.lk/handle/123456789/15608 (2024-03-28T18:09:07Z)

http://repository.kln.ac.lk/handle/123456789/15648 (issued 2016-01-01; record updated 2017-05-31T08:16:18Z)
Title: Performing Iris Segmentation by Using Geodesic Active Contour (GAC)
Authors: Yuan-Tsung Chang; Chih-Wen Ou; Jayasekara, J.M.N.D.B.; Yung-Hui Li
Abstract: A novel iris segmentation technique based on active contour is proposed in this paper. Our approach addresses two key problems: pupil segmentation and iris circle calculation. If the correct centre position and radius of the pupil can be found in a test image, the iris can then be segmented precisely. The final accuracy reaches around 92% on the ICE dataset, and a high accuracy of 79% is also obtained on UBIRIS. Our results demonstrate that the proposed iris segmentation performs well, with high accuracy, on iris images.

http://repository.kln.ac.lk/handle/123456789/15647 (issued 2016-01-01; record updated 2017-05-31T08:16:06Z)
Title: Students’ Perspective on Using the Audio-visual Aids to Teach English Literature and Its Effectiveness
Authors: Wijekoon, W.M.S.N.K.
Abstract: The field of education is being renewed constantly, and Human Computer Interaction plays a vital role in this renewal. Government authorities have therefore paid greater attention to this aspect in order to provide a quality education. According to reports published by the Ministry of Education, the government has conducted trainings, workshops and seminars around the country on using modern technology, including modern audio and visual aids. Yet most teachers of English Literature still do not use these aids in the classroom, and students learn the subject in a conventional classroom environment. This study therefore explores how effective audio-visual aids are for teaching English Literature, a subject regarded as traditional, in enhancing students’ literary competence, and examines the students’ perspective on their use. As the sample, forty-five Grade Ten students who learn English Literature as an optional subject for the GCE Ordinary Level Examination were selected from four government schools in the Kandy and Matale districts. Data were collected through a questionnaire and participant observation. Through the questionnaire, students’ preference for the subject and their views on teaching methods with and without modern audio-visual aids were studied; learning behaviour and student involvement with and without the aids were studied through participant observation. The qualitative analysis of the data revealed high student involvement when the subject is taught with modern audio-visual aids. The quantitative analysis provides initial evidence that the teachers’ conventional teaching process is less productive and contributes less towards the expected goals of teaching and learning English Literature.
The findings suggest that it is necessary to implement this pedagogical tool in teaching English Literature, as it can create a highly constructive learning environment.

http://repository.kln.ac.lk/handle/123456789/15646 (issued 2016-01-01; record updated 2017-05-31T08:15:53Z)
Title: An Application of Context Assured Ontology for Rule Based Cluster Selection in Psychotherapy
Authors: Vidanage, K.; de Silva, O.
Abstract: Personality trait analysis is a very important requirement in psychotherapy: a consultant should have a sound awareness of the client’s personality in order to commence effective therapy sessions. In this research, the OCEAN model for personality trait analysis is implemented computationally. The OCEAN model is an effective model used in psychology to determine the composition of human temperament. Expert knowledge associated with the five dimensions of the OCEAN model is captured and stored in the form of rule-based expert clusters. Additionally, an upper ontology is designed to control the context associated with the OCEAN model. Ontologies are well suited to storing domain knowledge in the form of triples: various lexicon combinations depicting contexts can be grouped together and assigned as a specific object property, and the different properties of the same object depict the various contexts that object can be exposed to. Here, the upper ontology acts as a navigator that points to a specific knowledge cluster. The knowledge clusters are used to determine the sub-facets of a particular trait as well as its intensity.
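In spirit, one such rule-based expert cluster might look like the minimal sketch below. The lexicon, weights, scoring scale, and function names are illustrative assumptions only, not the expert rules captured from psychologists in this study.

```python
# Illustrative sketch of a rule-based expert cluster for a single OCEAN
# trait. Signed weights: positive words raise the trait score, negative
# words lower it. All values here are invented for demonstration.
OPENNESS_RULES = {
    "curious": 0.9,
    "creative": 0.85,
    "imaginative": 0.8,
    "routine": -0.6,
}

def score_trait(text: str, rules: dict) -> float:
    """Map free text to a 0-100 percentile-style score for one trait."""
    hits = [rules[w] for w in text.lower().split() if w in rules]
    if not hits:
        return 50.0  # neutral score when no lexicon term matches
    polarity = sum(hits) / len(hits)   # average polarity in [-1, 1]
    return round(50 + 50 * polarity, 1)
```

In the prototype, one such cluster would exist per OCEAN dimension, and the similarity index would be computed over all five.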
Once the client enters the psychological discomfort they are experiencing as text through the interface, it is processed with natural language processing and the important semantics are identified. Depending on the semantics captured, the entered text is then sent to an established SPARQL query module. Each SPARQL query defined in the module is mapped to a particular region of the created ontology, so executing a particular query interrogates a specific region of the ontology. The end points of the ontology are further mapped to the different rule-based expert clusters.
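The routing from extracted semantics to a predefined SPARQL query could be sketched as follows. The keywords, query text, and property name (:hasFacet) are invented for illustration and are not the queries of the actual module.

```python
# Illustrative routing of extracted semantics to predefined SPARQL queries.
# Each query is mapped to one region of the ontology; all names here are
# assumptions, not the prototype's own query module.
SPARQL_MODULE = {
    "anxiety": "SELECT ?facet WHERE { :Neuroticism :hasFacet ?facet . }",
    "social":  "SELECT ?facet WHERE { :Extraversion :hasFacet ?facet . }",
}

def route_to_query(semantics: list):
    """Return the first predefined query whose keyword appears in the
    extracted semantics, or None when no ontology region matches."""
    for term in semantics:
        if term in SPARQL_MODULE:
            return SPARQL_MODULE[term]
    return None
```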
Ultimately, the client’s problem, entered as text, is directed to a particular rule-based expert cluster containing expert knowledge captured from psychologists. A similarity index is then calculated, and percentile compositions of the personality traits are derived for the dimensions of the OCEAN model.
The developed prototype was evaluated in two ways. First, more than 30 expressed psychological inconveniences were collected from two well-known discussion forums that are globally available for sharing psychological problems, namely “Panic Center” and “Daily Strength”. Each captured story was fed as input to the prototype and OCEAN reports were generated. The scenarios and the generated reports were then shared with a psychologist in order to evaluate the accuracy of the final outcomes; against the psychologist’s expert judgement, the prototype showed more than 80% accuracy.
Second, the results were compared against Truity, one of the best-known questionnaire-based online trait evaluation sites. A trait evaluation questionnaire designed using the OCEAN model was attempted on Truity and the final result sheet obtained. Next, an artificial story was created covering the same set of questions and the same answers provided in that questionnaire. This artificial story was given as input to the prototype and another OCEAN report was generated. Finally, the Truity-generated report and the prototype-generated report were compared: although small variations are visible in the percentile values, the inflation and deflation patterns of the two reports are almost identical.
As discussed above, both validation mechanisms show that the prototype-generated OCEAN report reaches an acceptable level of accuracy. Although there are ample questionnaire-based online trait analysis tools, there are almost no text-based trait analytics approaches. A questionnaire-based mechanism limits the expressiveness of the user/patient, since the patient is restricted to a pre-defined set of questions. With this prototype, no such restrictions apply: the user is at liberty to enter free-flowing thoughts.
Rather than asking a psychologically distressed patient to fill in a questionnaire, which is not fair on the patient, this prototype allows the user to express whatever comes to mind about their cognitions. The chance of misinterpreting the questions in a questionnaire and providing wrong answers is also addressed by this system.
To get the optimum from this system, it must be used under the supervision of a psychologist or psychiatrist. The prototype is intended to provide digital diagnostic assistance to consultants, and domestic use without the intermediation of a consultant will not give the intended benefits. The ultimate intention of this research is to improve the interaction between consultant and patient through a computational intervention, because the active ingredients of therapy come from the live interaction between consultant and patient. As the literature shows, fully computational replacements for therapy have been an utter failure, but an effective blend of computing with live therapy has greatly improved the efficacy of psychotherapy.

http://repository.kln.ac.lk/handle/123456789/15645 (issued 2016-01-01; record updated 2017-05-31T08:15:41Z)
Title: End-user Enable Database Design and Development Automation
Authors: Uduwela, W.C.; Wijayarathna, G.
Abstract: An Information System (IS) is a combination of software, hardware, and network components working together to collect, process, create, and distribute data for business operations. It consists of “update forms” to collect data, “reports” to distribute data, and “databases” to store data. IS plays a major role in many businesses because it improves business competitiveness. Although SMEs are interested in adopting IS, they are often held back by other factors: time, the underlying cost, and the availability of ICT experts. Hence, the ideal solution for them is to automate the process of IS design and development, without ICT expertise, for an affordable cost. Software tools are available on the Web to generate the “update forms” and “reports” automatically for a given database model; however, there is no approach for generating the databases of an IS automatically.
The relational database model (RDBM) is the most commonly used database model in IS owing to its advantages over other data models. The source of those advantages is its design, but it is not a natural way of representing data. The model is a collection of data organized into multiple tables/relations linked to one another by key fields; these links represent the associations between relations. Typically, tables/relations represent entities in the domain. A table/relation has columns and rows, where the columns represent the attributes of the entity and the rows represent the records (data); each row should have a key that identifies it uniquely. Designers have to identify these elements from the given data requirements during RDBM design, which is difficult for non-technical people. The design process has a few steps: collect the set of data requirements, develop the conceptual model, develop the logical model, and convert it to the physical model. Although there are approaches that automate some steps of this process, they too require technical support. Thus, a mechanism is needed to automate the database design and development process by overcoming the difficulties in the existing RDBM automation approaches, so that non-technical end-users can develop their databases by themselves. Hence, a comprehensive literature survey was conducted to analyse the feasibility and difficulties of automating RDBM design and development.
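The relational elements described above (tables for entities, columns for attributes, a key per row, and key-field links between tables) can be sketched with an in-memory SQLite database; the customer/order domain is an illustrative assumption, not one from the study.

```python
import sqlite3

# In-memory database; the customer/order schema is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the link between tables
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,   -- key identifying each row
        name        TEXT NOT NULL          -- attribute of the entity
    );
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total       REAL,
        -- the key field linking this relation to customer
        FOREIGN KEY (customer_id) REFERENCES customer(customer_id)
    );
""")
conn.execute("INSERT INTO customer VALUES (1, 'Alice')")
conn.execute("INSERT INTO customer_order VALUES (10, 1, 99.5)")
row = conn.execute("""
    SELECT c.name, o.total
    FROM customer c JOIN customer_order o ON o.customer_id = c.customer_id
""").fetchone()
```

Identifying exactly these elements (entities, attributes, keys, links) from raw data requirements is the step that is hard to automate for non-technical users.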
Uduwela et al. argue that the “form” is the best way to collect the data requirements of the database model for its automation, because a form is semi-structured compared with natural language (the most common way of presenting data requirements) and is very close to the underlying database.
Approaches are available to automate the development of the conceptual model from the given data requirements. This is the most critical step in the RDBM design process, because it must identify the elements of the model (entities, their attributes, the relationships among the entities, keys, and cardinalities). Form-based approaches were analysed using the data available in the literature to recognize where user intervention is needed. The analysis shows that all the approaches need user support, and that their outcomes need correcting, because the elements are not consistent across business domains; they differ from domain to domain and even within the same domain. The approaches also demand user support in preparing the initial input (the set of forms) according to the data requirements in order to identify the elements of the conceptual model.
The next step of the process is developing the logical model from the conceptual model. The outcome should be a normalized database that eliminates data insertion, update, and deletion anomalies by reducing data redundancy. Data redundancy is often caused by functional dependencies (FDs), which are constraints between two sets of attributes in a relation. The database can be normalized by removing undesirable FDs (partial dependencies and transitive dependencies). We could not identify any approach that generates a normalized database diagram automatically from the data requirements directly; existing approaches require the FDs as input in order to generate the normalized RDBM. A designer’s perceptiveness and skill are needed to identify the correct FDs, because they too depend on the domain, which is a problem for automation. FDs can be found by data mining, but that also generates an incorrect set of FDs if the data combinations are insufficient. Developing the physical model from the logical model is straightforward, and relational database management systems help to automate it.
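FD mining of the kind mentioned can be sketched naively: a candidate dependency X → Y holds in a data sample if no two rows agree on X but differ on Y. The employee/department rows below are invented sample data, and, as the abstract notes, a sparse sample can make spurious FDs appear to hold.

```python
# Naive functional-dependency check over sample rows: lhs -> rhs holds if
# every lhs value maps to exactly one rhs value. With too few rows, FDs
# that do not hold in the real domain may still pass this test.
def fd_holds(rows, lhs, rhs):
    seen = {}
    for r in rows:
        if r[lhs] in seen and seen[r[lhs]] != r[rhs]:
            return False  # one lhs value maps to two rhs values
        seen[r[lhs]] = r[rhs]
    return True

# Invented sample data for illustration.
rows = [
    {"emp": 1, "dept": "HR", "city": "Kandy"},
    {"emp": 2, "dept": "HR", "city": "Kandy"},
    {"emp": 3, "dept": "IT", "city": "Colombo"},
]
```

Here dept → city appears to hold while city → emp does not; with more rows, dept → city might fail as well, which is exactly the insufficient-data problem described above.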
From this analysis it can be concluded that the existing approaches to conceptual model development cannot produce accurate models, as a distinct model has to be developed for each problem, and that the normalization approaches cannot be automated either, as FDs vary between business domains and even within a domain. It follows that there should be a database model that end-users can design and develop without any expert knowledge. The proposed model should be neither domain-specific nor problem-specific. It would be better if the approach could convert the data requirements to the database model directly, without intermediate steps like those in the RDBM design process, and better still if the proposed model could run on existing database management systems.