KICACT 2016

Permanent URI for this collection: http://repository.kln.ac.lk/handle/123456789/15608

    A Working Group Construction Mechanism Based on Text Mining and Collaborative Filtering
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Kasthuri Arachchi, S.P.; Zhen-Rong Chen; Irugalbandara, T.C.; Timothy K. Shih
    Massive Open Online Courses (MOOCs) are popular in the e-learning research domain thanks to advances in internet technology (Sa'don, Alias, and Ohshima 2014). MOOCs readily provide higher education courses for registered users, and allow institutions or teachers to offer courses that reach more students than traditional education. However, producing high-quality learning materials increases time, cost, and effort. To reuse materials and reduce the cost of re-creating them, the Learning Object (LO) concept was introduced. Content management systems that manage these LOs are called Learning Object Repositories (LORs), and LOs stored in a repository can be easily searched by users. In this paper we introduce a working group construction mechanism for users of an LOR. The proposed mechanism uses a text mining technique to analyse the similarity of groups and construct prototypes of working groups. It then applies collaborative filtering to users' preferences for LOs to optimize the constructed prototypes. Users of the LOR can thus quickly and easily find the learning materials that interest them via relevant working groups. This mechanism reduces the time spent re-creating learning materials while improving production quality. This study is based on a Google MOOC FRA project (http://googleresearch.blogspot.tw/2015/03/announcing-google-mooc-focused-research.html). The system has three parts (Fig. 1 (a)): a conversion tool between ELO (http://edxpdrlab.ncu.cc/), Course Builder, Open edX, and SCORM 2004; an authoring tool for ELOs; and a repository for ELOs (Fig. 1 (b)). Users of the ELO repository can access working groups relevant to them, reducing the time spent re-creating learning materials and improving production quality.
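The abstract does not give implementation details for either stage; as a rough sketch under assumed toy data (all names, texts, and ratings below are hypothetical), the two stages — term-frequency cosine similarity for text mining, and user-based collaborative filtering over LO preferences — could look like this:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stage 1: text mining -- group LO descriptions by term-frequency similarity.
lo_texts = {  # hypothetical LO descriptions
    "lo1": "python programming introduction course",
    "lo2": "introduction to python programming",
    "lo3": "organic chemistry lab safety",
}
vecs = {k: Counter(v.split()) for k, v in lo_texts.items()}
sim12 = cosine(vecs["lo1"], vecs["lo2"])  # high: similar topics
sim13 = cosine(vecs["lo1"], vecs["lo3"])  # zero: no shared terms

# Stage 2: user-based collaborative filtering -- predict a user's preference
# for an LO as the similarity-weighted average of other users' ratings.
ratings = {  # hypothetical user -> {LO: rating}
    "alice": {"lo1": 5, "lo2": 4},
    "bob":   {"lo1": 4, "lo3": 2},
    "carol": {"lo2": 5, "lo3": 1},
}

def predict(user: str, lo: str) -> float:
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or lo not in r:
            continue
        shared = set(ratings[user]) & set(r)  # LOs both users rated
        if not shared:
            continue
        s = cosine(Counter({k: ratings[user][k] for k in shared}),
                   Counter({k: r[k] for k in shared}))
        num += s * r[lo]
        den += s
    return num / den if den else 0.0

print(round(sim12, 2), sim13, predict("alice", "lo3"))
```

A production system would replace raw term frequencies with TF-IDF weighting and cluster the resulting vectors into working-group prototypes, but the similarity and filtering steps keep this shape.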
    Animal Behavior Video Classification by Spatial LSTM
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Huy Q. Nguyen; Kasthuri Arachchi, S.P.; Maduranga, M.W.P.; Timothy K. Shih
    Deep learning, the basis for building artificial intelligence systems, has become a hot research area in recent years. Current deep neural networks approach human-level recognition of natural images, even on huge datasets such as ImageNet. Among successful architectures, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are widely used to build complex models because of their advantages. A CNN reduces the number of parameters compared to a fully connected neural network. Furthermore, it learns spatial features by sharing weights across convolution patches, which not only improves performance but also extracts similar features of the input. LSTM is an improvement on the vanilla Recurrent Neural Network (RNN). When processing time-series data, RNN gradients tend to vanish during training with backpropagation through time (BPTT); LSTM was proposed to solve this vanishing-gradient problem and is therefore well suited to managing long-term dependencies. In other words, LSTM learns temporal features of time-series data. This study focused on creating an animal video dataset and investigating how a deep learning system learns features from it. We propose a new dataset and experiments using two types of spatial-temporal LSTM, which allow us to discover latent information in animal videos. To the best of our knowledge, this method has not previously been applied to animal activities. Our animal dataset was created under three conditions: the data must be videos, so that our network can learn spatial-temporal features; the subjects are popular animals such as cats and dogs, since it is easy to collect more data for them; and each video should contain one animal, without humans or any other moving objects. In our experiments, we performed the recognition task on the Animal Behavior Dataset with two types of models to investigate their differences.
The first model is Conv-LSTM, an extended version of LSTM in which all input and output connections are replaced with convolutional connections. The second model is the Long-term Recurrent Convolutional Network (LRCN), proposed by Jeff Donahue. More layers of LSTM units can easily be added to both models to make a deeper network. We performed classification using 900 training and 90 testing videos and reached a recognition accuracy of 66.7%, without any data augmentation. In the future we hope to improve this accuracy using preprocessing steps such as flipping and rotating video clips, and by collecting more data for the dataset.
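The abstract gives no equations for either model; as a minimal illustration of the temporal component both share, here is one LSTM cell step in NumPy, run over a toy "video" of random per-frame feature vectors (all weights and data are made up; an LRCN would first extract the frame features with a CNN, and a Conv-LSTM would replace the matrix products with convolutions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order: input (i), forget (f), cell candidate (g), output (o)."""
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])
    f = sigmoid(z[H:2*H])
    g = np.tanh(z[2*H:3*H])
    o = sigmoid(z[3*H:4*H])
    c = f * c_prev + i * g   # forget part of old memory, write new content
    h = o * np.tanh(c)       # expose gated memory as the hidden state
    return h, c

# Toy run: one feature vector per frame of a short clip.
rng = np.random.default_rng(0)
D, H, T = 8, 4, 5                       # feature dim, hidden dim, frames
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # final hidden state summarizes the whole clip
```

The final hidden state `h` is what a classification head would consume; the forget gate `f` is what lets gradients survive long sequences during BPTT.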
    Context-Aware Multimedia Services in Smart Homes
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Chih-Lin Hu; Kasthuri Arachchi, S.P.; Wimaladharma, S.T.C.I.
    The evolution of “smart home” technologies exposes a broad spectrum of modern personal computers (PCs), consumer electronics (CE), household appliances, and mobile devices for intelligent control and services in residential environments. With the high penetration of broadband access networks, PC, CE, and mobile device categories can be connected over home networks, providing a home computing context for novel service design and deployment. However, conventional home services are characterized by different operations and interactive usages among family members in different zones inside a house. It is promising to realize user-oriented, location-free home services with modern home-networked devices in smart home environments. This paper proposes a reference design for a novel context-aware multimedia system in home-based computing networks. The proposed system integrates two major functional mechanisms: intelligent media content distribution and multimedia convergence. The first mechanism performs intelligent control of services and media devices in a context-aware manner. It integrates face recognition into home-based media content distribution services: devices capable of capturing images can recognize the appearances of registered users and infer their changes of location inside the house, so media content played in the last location can be distributed to home-networked devices closer to the users' current locations. The second mechanism offers multimedia convergence among multiple media channels and renders a uniform presentation of media content services in residential environments. It can provide not only local media files and streams from various devices on a home network but also Internet media content that can be fetched online, transported, and played on multiple home-networked devices.
Thus, the multimedia convergence mechanism can introduce an unlimited volume of media content from the Internet into a home network. The development of the context-aware multimedia system can be described as follows. A conceptual system playground in a home network contains several Universal Plug and Play (UPnP) home-networked devices that are inter-connected on a single administrative network over Ethernet or Wi-Fi infrastructure. According to the UPnP specifications, home-networked devices are assigned IP addresses using auto-IP configuration or DHCP. UPnP-compatible devices can then advertise their presence on the network; when neighboring devices discover them, they can collaborate on media content sharing services. In addition, some UPnP-compatible devices are capable of face recognition and capture the front images of users inside the house. Captured images are sent to a user database and compared with existing user profiles of individuals in the family community. Once a registered user is recognized, the system can refer to that user's stored details and offer personal media services in a smart manner. On the other hand, the components and functionalities of the proposed system support the intelligent media content distribution and multimedia convergence mechanisms. Technically, the proposed system combines several components, such as a UPnP control point, a UPnP media renderer, a converged media proxy server, an image detector, and a profile database of registered users and family communities. Though there are diverse media sources and formats in a home network, users retain the same operational behavior for sharing and playing media content, following the common UPnP and Digital Living Network Alliance (DLNA) guidelines.
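The UPnP advertisement-and-discovery step described above is carried out with SSDP: HTTP-formatted messages sent over UDP multicast to 239.255.255.250:1900. As an illustrative sketch (not the paper's implementation), a control point's M-SEARCH discovery request can be built like this, with the header values taken from the UPnP Device Architecture:

```python
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900  # SSDP multicast endpoint

def build_msearch(search_target: str = "ssdp:all", mx: int = 2) -> bytes:
    """Build an SSDP M-SEARCH discovery request.
    search_target: e.g. "urn:schemas-upnp-org:device:MediaRenderer:1"
    mx: max seconds a device may wait before responding."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',       # mandatory extension header
        f"MX: {mx}",
        f"ST: {search_target}",       # search target (device/service type)
        "", "",                       # request ends with a blank line
    ]
    return "\r\n".join(lines).encode("ascii")

msg = build_msearch("urn:schemas-upnp-org:device:MediaRenderer:1")
print(msg.decode().splitlines()[0])
```

A real control point would send this datagram over a UDP socket to the multicast address and parse the unicast HTTP responses, which carry each device's description URL; subsequent device control then goes through SOAP actions per the UPnP AV specifications.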
Prototype development produced proof-of-concept software based on the Android SDK and JVM frameworks, which integrates intelligent media content distribution and converged media services. The resulting software is platform-independent and application-level; it can be deployed on various home-networked devices that are compatible with UPnP standard device profiles, e.g., UPnP AV media servers, media players, and mobile phones. A real demonstration has been conducted with the software implementation running on various off-the-shelf home-networked devices. The proposed system thus offers a friendly user experience for context-aware multimedia services in residential environments.