
Browsing by Author "Wijegunasekara, M.C."

Now showing 1 - 9 of 9
  • Item
    Android Shopping Cart Application (ASCA)
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2017) Bandara, K.; Wijegunasekara, M.C.
    Because of today’s busy lifestyles, it has become more convenient for people to buy all of their daily shopping items in one place, and shopping for groceries in a supermarket has become a common activity in Sri Lanka as well. The major problems customers face when buying groceries in a supermarket are the effort required and the time-consuming tasks involved almost every day. One such difficulty is the need to visit the supermarket frequently to buy day-to-day items. When buying goods, a walk around the shop to select the necessary products is usually unavoidable, and even after buying, customers must stand in long queues at the counters to make payments. Using modern technology to build a suitable system to solve these problems is therefore valuable. There are two main approaches: a web shopping cart application and a mobile application. Since most people now carry smartphones, almost every business needs its own application for mobile users. This research project has two major parts: the mobile application and a website that acts as a content management system. Using the mobile application, customers can buy online and have the products delivered to their home by the shop’s delivery service, or send an order confirmation and collect the ordered items by paying at the shop. The mobile application was developed using Android Studio. The client side of the application is designed as a website through which the supermarket owners manage the online database that stores the content for the mobile application. To measure the effectiveness of the implementation, questionnaires were distributed to 50 people who buy their daily groceries at a supermarket and own an Android smartphone. In the analysis of the responses, 32% strongly agreed and 48% agreed that traditional shopping will be superseded by online shopping in the near future, while only 6% disagreed. Half of the respondents agreed and 16% strongly agreed that allowing only credit card holders to buy products online is a major drawback of a shopping cart application. As future enhancements, the application will be extended to run on mobile operating systems other than Android. Currently only the bank portal and a link to PayPal have been designed, and the payment gateway is to be developed further; the client side could also be built as a mobile application. In conclusion, the result of this research project is a user-friendly mobile application that runs on the Android operating system together with a content management system developed as a website that interacts with the database. ASCA succeeded in delivering an online mobile shopping cart that addresses the problems of customers who buy their daily groceries in a supermarket.
  • Item
    Android Tablet based Menu and Order Management System for restaurants
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Medhavi, Y.A.U.; Wijegunasekara, M.C.
    The traditional way of taking orders in a restaurant is that, once the customer selects food and beverages from a paper menu, the waiter uses pen and paper to take the order, passes it to the kitchen, and the customer waits until the order arrives. This process is unsatisfactory, inefficient and prone to mistakes. The customer may have to wait a long time for the order, and during peak times the wait is much longer. The waiter might also lose the paper, or the waiter’s handwriting might be difficult for others to read, which can cause the kitchen and the cashier to mix up orders and make calculation errors. The paper menu itself can be hard to navigate and may be outdated; when it contains a large number of items it becomes overwhelming to look through, so customers may not notice all the items they are interested in. When prices change or items are updated, the cost of reprinting and the associated environmental concerns also have to be considered. Several existing order service systems were studied. Some had attractive features, but their user interaction and friendliness were not satisfactory. These systems were analyzed, the attractive features for the order service were identified, and those features were implemented in a user-friendly way. The main objective of this work was to develop a tablet-based restaurant menu and order management system that automates the manual order service and overcomes the drawbacks of the studied systems. The implemented system contains four parts: a mobile application for customers and three web-based systems for the admin panel, the kitchen and the cashier. The order is taken by a mobile device, namely a tablet placed on the restaurant table, which acts as a waiter. The mobile application is started by a waiter logging into the system and assigning a table number and a waiter identification; these are stored in the application until that waiter logs out. The mobile application has four subsystems: a display subsystem, an assistance subsystem, a commenting subsystem and an ordering subsystem. The display subsystem shows the complete restaurant menu by category together with special-offer information and lets the customer browse all currently available menu items by category. The assistance subsystem allows the customer to call a waiter for any assistance needed. The commenting subsystem lets customers create user accounts to add comments and share their experience on Facebook/Twitter. The ordering subsystem lets the customer select the desired items and place the order. Once the customer places the order, they can first view the order information, including the payment with and without tax and service charge. After the customer confirms the order, it is transmitted to the kitchen department via the Internet for meal preparation. The kitchen web system displays all order information received from the tablets, including the customer details, table number, waiter identification and the details of the order. After the order is prepared, the waiter delivers it to the customer; at the same time, the cashier web system receives the details of the delivered order and the bill is prepared. The web-based admin panel allows the restaurant’s management to add, view, remove and update menu items and waiters, view reservation information and its cooking/payment status, update the service charge and tax, and view revenue information over a time period. The design artifacts produced in this work cover architecture, application behavior and the user interface; Figure 1 shows the architecture of the system. The implemented system consists of a server and a central database. Restaurant managers can access the database through the admin panel to make appropriate redeployments of food materials and evaluate the state of the business at any time. All ordering and expenditure information is stored in the database. The system is designed for Android tablets with a 7" screen, can also be used on smartphones with smaller screens, and is compatible with Android 2.3 and later. Eclipse and phpStorm were used as the IDEs, and the main languages used are HTML, JavaScript, PHP, Java and XML; PHP is used to build a web service that exchanges JSON data with the server. Different testing approaches were adopted to test the prototype software, and the bugs discovered during testing were corrected. The system provides a more convenient, more maintainable, more user-friendly and more accurate method of restaurant management. In addition, the tablet-based menu replaces paper waste, reduces waiting time and increases the efficiency of the food and beverage order service. Using this system, a restaurant can reduce running costs and human errors and provide a high-quality service while enhancing customer relationships. As future development, features such as paying the bill directly through the menu application should be added, and the application should be developed for other platforms such as BlackBerry and iOS.
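As a rough illustration of the order flow described above, the sketch below shows how a tablet client might package a confirmed order and post it to the server as JSON, which then reaches the kitchen web system. The abstract only states that a PHP web service exchanges JSON data between the tablet and the server; the endpoint URL and all field names here are hypothetical, and the sketch uses Python for brevity rather than the Java/PHP stack of the actual system.

```python
import requests  # assumes the third-party 'requests' package is installed

# Hypothetical order payload; field names are illustrative, not taken from the T-MOMS system.
order = {
    "table_number": 12,
    "waiter_id": "W-07",
    "items": [
        {"item_id": 301, "name": "Fried rice", "quantity": 2},
        {"item_id": 117, "name": "Iced tea", "quantity": 2},
    ],
    "service_charge": True,
    "tax": True,
}

# Hypothetical endpoint on the restaurant server that stores the order and
# forwards it to the kitchen display.
response = requests.post("http://restaurant.example/api/orders", json=order, timeout=5)
response.raise_for_status()
print(response.json())  # e.g. {"order_id": 1045, "status": "received"}
```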
  • Item
    Applying Smart User Interaction to a Log Analysis Tool
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Semini, K.A.H.; Wijegunasekara, M.C.
    A log file is a text file. People analyze log files to monitor system health, detect undesired behavior in a system, recognize power failures and so on. In general, log files are analyzed using log analysis tools such as Splunk, Loggly and LogRhythm. All of these tools analyze a log file and then generate reports and graphs that represent the analyzed log data. Log file analysis can be divided into two categories: analyzing historical log files and analyzing online log files. In this work, only the analysis of historical log files is considered, for an existing log file analysis framework. Most log analysis tools support analyzing historical log files. To analyze a log file with any of the mentioned tools, it is necessary to select a time period. For example, if a user analyzes a log file with respect to system health, the result reports the system health as ‘good’ or ‘bad’ only for the given time period; in general these tools provide the average system health over that period. Such analysis is useful, but it may not be sufficient, because people may need to know exactly what happens in the system each second in order to predict its future behavior or make decisions. With these tools it is not possible to examine the log file at that level of detail; to do so, a user has to go through the log file line by line manually. As a solution to this problem, this paper describes a new smart concept for log file analysis tools. The concept works through a set of widgets and can replay executed log files. First, the existing log file analysis framework was analyzed, which helped in understanding the data structure of the incoming log files. Next, log file analysis tools were studied to identify the main components and the features people like most; these features were then implemented in a user-friendly way. The new smart concept was designed using a replayable widget and graph widgets. The replayable widget is used to replay the input log file, and the graph widgets graphically represent the analyzed log data. The replayable widget is the main part of the project and embodies the new concept. It is a simple widget that acts like a player and has two components: a window and a button panel. The window shows the input log file, and the button panel contains play, forward, backward, stop and pause buttons. The log lines shown in the window of the replayable widget are held in a tree structure (Figure 1: left-most widget). The button panel also contains an extra button to view the log lines. These buttons are used to play the log lines, go to a requested log line, view a log line and control playback. It was important to select a suitable chart library for the graph widgets. A number of chart libraries were analyzed and D3.js was finally selected because it provides the chart source, a free version without watermarks and more than 200 chart types; it also has a large number of chart features and supports HTML5-based implementations. The following charts were implemented using the D3.js chart library: a bar chart of the pass/failure count, a timeline of when passes and failures occurred, a donut chart of the total execution count and a donut chart of the total pass/fail count. Every graph widget is bound to the replayable widget, so the graphs update according to each action. The replayable widget and the graph widgets are implemented using D3.js, JavaScript, jQuery, CSS and HTML5. The replayable widget was successfully tested and the implemented interface runs in the Google Chrome web browser. Figure 1 shows a sample interface generated from a sample log file of about 100 log lines. The left-most widget is the replayable widget, which holds the log file as a tree structure; the top-right widget is a graph widget, a bar chart showing the pass/failure count; and the bottom-right widget is another graph widget, a timeline showing when passes and failures occurred in the given log file. In addition, the analyzed log file can be visualized using donut charts. This paper described the new smart concept for log file analysis tools; the existing analysis tools mentioned above do not contain this concept, although most of them use graphs for data visualization. The system was successfully implemented and was evaluated by a number of users who work with log files. The new concept will help log analysts, system analysts, data security teams and top-level management to draw conclusions about a system and make predictions by studying the widgets. Furthermore, the analyzed data would be useful for collecting non-trivial data for data mining and machine learning procedures. As future work, the system could be enhanced with features such as zooming and a drill-down method to customize graphs, and a mechanism to filter data according to user requirements.
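The replayable widget itself was built with D3.js and JavaScript; the Python sketch below only illustrates the replay concept in language-neutral terms: log lines are stepped through one at a time (play, pause, forward, backward) and every step notifies the attached graph widgets so they can recompute and redraw. All class and method names are invented for illustration and are not taken from the described system.

```python
class LogReplayer:
    """Minimal sketch of the replay concept: step through log lines and notify
    attached graph widgets after every step (names are illustrative only)."""

    def __init__(self, log_lines):
        self.log_lines = log_lines
        self.position = 0
        self.listeners = []      # graph widgets register update callbacks here
        self.playing = False

    def attach(self, callback):
        self.listeners.append(callback)

    def _notify(self):
        current = self.log_lines[: self.position]
        for callback in self.listeners:
            callback(current)    # e.g. recompute pass/fail counts and redraw a chart

    def forward(self):
        if self.position < len(self.log_lines):
            self.position += 1
            self._notify()

    def backward(self):
        if self.position > 0:
            self.position -= 1
            self._notify()

    def play(self):
        self.playing = True
        while self.playing and self.position < len(self.log_lines):
            self.forward()

    def pause(self):
        self.playing = False


# Usage sketch: a "bar chart" listener that just counts PASS/FAIL lines so far.
replayer = LogReplayer(["step1 PASS", "step2 FAIL", "step3 PASS"])
replayer.attach(lambda lines: print(sum("PASS" in l for l in lines), "passes so far"))
replayer.play()
```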
  • Item
    Optimization of SpdK-means Algorithm
    (Faculty of Graduate Studies, University of Kelaniya, Sri Lanka, 2016) Gunasekara, R.P.T.H.; Wijegunasekara, M.C.; Dias, N.G.J.
    This study was carried out to enhance the performance of the k-means data mining algorithm by using parallel programming methodologies. As a result, the Speedup k-means (SpdK-means) algorithm, an extension of k-means, was implemented to reduce cluster building time. Although SpdK-means speeds up the cluster building process, its main drawback was that the cumulative density of the clusters it created differed from the initial population: some elements (data points) were missed in the clustering process, which reduces cluster quality. The aim of this paper is to discuss how this drawback was identified and how the SpdK-means algorithm was optimized to overcome it. The SpdK-means clustering algorithm was applied to three datasets gathered from a Ceylon Electricity Board dataset, varying the number of clusters k. For k = 2, 3 and 4 there was no significant difference between the cumulative cluster density and the initial dataset, but when the number of clusters was more than 4 (i.e., k >= 5) there was a significant difference in cluster densities. The density of each cluster was recorded, and it was identified that the cumulative density of all clusters differed from the initial population: about 1% of the elements of the total population were missing after the clusters were formed. To overcome this drawback, the SpdK-means clustering algorithm was studied carefully, and it was identified that elements at equal distances from several cluster centroids were being dropped in intermediate iterations. When an element is at an equal distance from two or more centroids, the SpdK-means algorithm was unable to decide which cluster the element should belong to, and as a result the element was not included in any cluster. If such an element were instead included in all the equally distant clusters, and this were repeated for every such element, the cumulative cluster density would greatly exceed the initial population. Therefore, SpdK-means was optimized by selecting exactly one of the equally distant cluster centroids for each such element. After studying several selection methods and their outcomes, the SpdK-means algorithm was modified to find a suitable cluster for an equidistant element. Since such an element could belong to any of the tied clusters, no priority can be given to a particular one, so the algorithm selects one of the equally distant centroids at random. The optimized SpdK-means algorithm successfully solved the identified problem by finding the missing elements and including them in the correct clusters. By analyzing the iterations on the datasets, the number of iterations was reduced by 20% compared with the original SpdK-means algorithm. After applying the optimized SpdK-means algorithm to the datasets mentioned above, it was found that cluster building time was reduced by 10% to 12% compared with SpdK-means, so the cluster building time was further reduced relative to the original SpdK-means algorithm.
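The tie-breaking step described above can be sketched as follows. This is not the authors’ SpdK-means implementation; it is a plain k-means assignment step, written in Python/NumPy purely for illustration, that assigns a point which is numerically equidistant from several centroids to one of those centroids at random rather than leaving it out of every cluster.

```python
import numpy as np

def assign_clusters(points, centroids, rng=None, tol=1e-12):
    """Assign each point to a centroid; when several centroids are (numerically)
    equally close, pick one of them at random instead of dropping the point."""
    if rng is None:
        rng = np.random.default_rng()
    # Squared Euclidean distance from every point to every centroid: shape (n, k).
    dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = np.empty(len(points), dtype=int)
    for i, row in enumerate(dists):
        ties = np.flatnonzero(np.abs(row - row.min()) <= tol)  # all centroids at the minimum distance
        labels[i] = rng.choice(ties)  # random choice keeps every element in exactly one cluster
    return labels

# Example: the third point is equidistant from both centroids and is still assigned.
points = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 0.0]])
centroids = np.array([[0.0, 0.0], [10.0, 0.0]])
print(assign_clusters(points, centroids))
```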
  • Item
    Performance of k-mean data mining algorithm with the use of WEKA-parallel
    (University of Kelaniya, 2013) Gunasekara, R.P.T.H.; Dias, N.G.J.; Wijegunasekara, M.C.
    This study is based on enhancing the performance of the k-means data mining algorithm by using parallel programming methodologies. To assess the effect of parallelization, a study was first done on the k-means algorithm using WEKA on a stand-alone machine, and the results were then compared with the performance of k-means under WEKA-parallel. Data mining is the process of discovering patterns in databases/datasets in areas such as finance, the retail industry, science, statistics, medical sciences, artificial intelligence and neuroscience. To discover patterns in large datasets, clustering algorithms such as k-means, k-medoids and balanced iterative reducing and clustering using hierarchies (BIRCH) are used. In data mining, k-means clustering is a method of cluster analysis that aims to partition n observations into k clusters (where k is the number of selected groups) such that each observation belongs to the cluster with the nearest mean. The grouping is done by minimizing the sum of squared (Euclidean) distances between items and the corresponding centroid (the center of mass of the cluster). As datasets grow exponentially, high-performance technologies are needed to analyze them and recognize patterns in the data. The applications and algorithms used for this must scan the data records iteratively many times, so the process is very time-consuming and uses a great deal of memory on very large datasets. While studying ways to enhance the performance of data mining algorithms, it was identified that the data mining algorithms developed for parallel processing were based on distributed, cluster or grid computing environments; nowadays, algorithms need to exploit multi-core processors to utilize the full computational power of the hardware. WEKA, a widely used machine learning and data mining package, was first chosen to analyze clusters and measure the performance of the k-means algorithm. The k-means clustering algorithm was applied to an electricity consumption dataset to generate k clusters; the dataset was partitioned into k clusters along with their mean values, and the time taken to build the clusters was recorded. (The dataset consists of 30,000 entries and was collected from the Ceylon Electricity Board.) Secondly, to reduce the time consumed, a parallel environment was set up using WEKA-parallel (machine learning software). This is a version of WEKA for multi-core and parallel processing that can connect several server machines and a client machine; threads are passed among the machines to carry out the task. WEKA-parallel was installed on several distributed server machines with one client machine, and the same electricity consumption dataset was used with k-means. The speed of building clusters increased when the parallel software was used, but the mean values of the clusters did not exactly match the previously obtained clusters. By visualizing both sets of clusters it was identified that some border elements of the first set of clusters had jumped to other clusters, and the cluster means changed because of those jumped elements. The experiment was carried out on a single Core i3 3.3 GHz machine running the Linux operating system to find the execution time taken to create k clusters using WEKA for several different datasets. The same experiment was repeated on a cluster of machines with similar specifications to measure the execution time taken to create k clusters in a parallel environment using WEKA-parallel, varying the number of machines in the cluster. According to the results, WEKA-parallel significantly improves the speed of k-means clustering. The results of the experiment for a dataset on the electricity consumption of consumers in the North Western Province are shown in Table 1. This study shows that the use of WEKA-parallel and parallel programming methodologies significantly improves the performance of the k-means data mining algorithm when building clusters.
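For reference, the criterion described in words above (partitioning n observations into k clusters so that the sum of squared Euclidean distances between items and their cluster centroids is minimized) is the standard k-means objective:

```latex
J = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2,
\qquad
\mu_i = \frac{1}{\lvert C_i \rvert} \sum_{x \in C_i} x
```

where C_i is the i-th cluster and mu_i is its centroid, the mean of the points assigned to C_i.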
  • Item
    Productive Web Application for Construction Guiding, Consulting and Providing Services
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2021) Padmasiri, K.M.P.U.; Wijegunasekara, M.C.
    Today, the Internet provides a vast platform for industries to expand their business opportunities. However, people do not obtain the maximum benefit from modern technology in the construction process. For example, purchasing land, selecting house plans and purchasing hardware items are still done in the traditional way. Furthermore, most people are unaware of the approvals and permissions that must be obtained from the appropriate government authorities before civil construction begins. At the same time, people have few facilities for obtaining the right service from the right professional, or the most suitable person, at the right time. In this research, a web application is proposed to solve these problems. Background research was conducted to discover available technologies and similar web applications in the market, and eleven websites related to the objectives of the proposed web application were reviewed. Some of them were for purchasing land [1], house plans [2] and hardware items [3], and some provided contact details of service providers [4]. However, users cannot access all the functionality related to home construction from a single website. Hence, the main aim of this study is to develop a productive web application for the construction industry that provides guidance, consultation and services for users and service providers. The business case for the web application is to generate financial benefit by hosting it on the Internet: the developed web application will enable someone to sell their land through paid advertisements, allow professionals to publish their details through paid advertisements, and support an online hardware store that gives a direct income to the person who maintains the web application. A further advantage is that professionals can use the web application as an advertising platform to publish their advertisements, which also helps address unemployment. HTML, CSS, jQuery and Bootstrap are used for the front-end development of the proposed web application, and PHP is used for the back end. phpMyAdmin is used for database creation and management, and SQL is used as the query language to fetch data from the database. An evaluation was carried out to verify and validate the developed web application. Domain experts and technical personnel participated in this process, in which bugs and errors were identified and fixed; feedback was received and relevant suggestions were implemented to maximize the usability of the web application.
  • Item
    A Study on Loan Performance Using Data Mining Techniques
    (Faculty of Graduate Studies, University of Kelaniya, 2015) Thisara, E.B.; Wijegunasekara, M.C.
    Most modern financial companies offer loans to customers in order to build up their business. Such companies face a major problem in recovering loans when customers do not pay the installments according to the signed contract. Because of the high potential for non-performing loans, it is crucial to devise appropriate strategies and to identify low-risk customers. To predict the risk factors that lead to non-performing loans, data mining techniques were considered. This research identified the factors and reasons behind non-performing loans using data from a reputed finance company. The study considered eighteen attributes regarded as factors affecting the non-performing loan state, and the dataset of 750 records was split into 70% training data and 30% test data. From those attributes, eleven key attributes were selected to create the data mining models: Age, Area, Branch Name, Customer Job, Income, Loan State, Mortgage, Number of Terms, Overdue Days, Product Type and Interest Rate. The mining models considered were Neural Networks (NN), Decision Trees (DT) and Clustering (CL). These models were created using the Business Intelligence tools, and the database was created in SQL Server Management Studio 2008 R2. The predicted probabilities (as percentages) for the non-performing loan state were 1.57% for the Neural Network, 0.44% for the Decision Tree and 10.46% for the Clustering model. As the Clustering model gave the highest value, the Microsoft Clustering method was chosen as the best algorithm for evaluating the loan state. The Clustering model produced ten clusters, numbered 1 to 10, and five clusters (3, 6, 8, 9 and 10) were identified through comparative analysis as the most inclined towards the non-performing loan state. The predicted probabilities of the selected clusters were 23%, 41%, 32%, 23% and 35% respectively; cluster 6 showed the highest value and cluster 10 the next highest. Based on cluster performance, clusters 1, 2, 4, 5 and 7 had a high probability of containing performing loans and were therefore not included in the analysis. According to the states of the attributes within each cluster profile, Product Type, Customer Job, Mortgage, Income, Number of Terms and Interest Rate were identified and shortlisted as the factors affecting the non-performing loan state most. The research identified that if the customer is self-employed or an individual, owns only a small property, or has a low income, then depending on the type of mortgage (building, vehicle or non-mortgage) the loan tends to become non-performing; a longer loan repayment period or a higher interest rate also makes a loan more likely to become non-performing. From these results it can be concluded that high-interest loans granted to unemployed or low-income customers have a higher potential of becoming non-performing, resulting in a monetary loss for the financial company. A financial company can therefore improve its profits by paying more attention to such customers and making suitable decisions. The model supports the financial sector in estimating the amount of loans that could move into the non-performing state, so the findings of this research will help the financial industry reduce the risk when granting loans in future.
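The original models were built with the SQL Server 2008 R2 Business Intelligence tools (Neural Networks, Decision Trees and Microsoft Clustering). As a rough, tool-agnostic illustration of the cluster-profiling step described above, the sketch below groups loan records by an already assigned cluster number and computes the share of non-performing loans per cluster; it uses pandas instead of the BI tools, and the column names and sample values are hypothetical.

```python
import pandas as pd

# Hypothetical loan records: 'cluster' is the assigned cluster number (1-10),
# 'loan_state' is 1 for a non-performing loan and 0 for a performing loan.
loans = pd.DataFrame({
    "cluster":    [1, 6, 6, 10, 3, 6, 1, 10, 8, 9],
    "loan_state": [0, 1, 1, 1,  0, 0, 0, 1,  1, 0],
})

# Share of non-performing loans in each cluster; clusters with the highest shares
# correspond to the clusters "most inclined towards the non-performing loan state".
non_performing_rate = loans.groupby("cluster")["loan_state"].mean().sort_values(ascending=False)
print(non_performing_rate)
```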
  • Item
    T-Moms for Restaurants
    (Faculty of Graduate Studies, University of Kelaniya, Sri Lanka, 2016) Medhavi, Y.A.U.; Wijegunasekara, M.C.
    The aim of this study was to identify the drawbacks of restaurant order management systems and suggest a solution. Several such systems were studied, and it was identified that the time customers wait to receive an order is considerably high. This is because during peak hours the waiting staff are insufficient and the service offered is not of the required standard. In addition, paper menus can be flimsy, hard to navigate and outdated. To reduce customer waiting times, management must ensure that sufficient staff are present during peak hours and that they are properly trained to provide excellent customer service; these staffing requirements can lead to substantial costs for the business. As a result, the Tablet-based Menu and Order Management System (T-MOMS) was implemented to resolve these problems using mobile devices. T-MOMS contains four systems: a mobile application for customers and three web-based systems for the admin panel, the kitchen and the cashier. The order is taken by a mobile device, namely a tablet placed on the restaurant table, which acts as a waiter. The mobile application is started by a waiter logging into the system and assigning a table number and a waiter identification; these are stored in the application until that waiter logs out. The mobile application has four subsystems: a display subsystem, an assistance subsystem, a commenting subsystem and an ordering subsystem. The display subsystem shows the complete restaurant menu by category together with special-offer information and lets the customer browse all currently available menu items by category. The assistance subsystem allows the customer to call a waiter for any assistance needed. The commenting subsystem lets customers create user accounts to add comments and share their experience on Facebook/Twitter. The ordering subsystem lets the customer select the desired items and place the order. Once the customer places the order, they can first view the order information, including the payment with and without tax and service charge. After the customer confirms the order, it is transmitted to the kitchen department via the Internet for meal preparation. The kitchen web system displays all order information received from the tablets, including the customer details, table number, waiter identification and the details of the order. After the order is prepared, the waiter delivers it to the customer; at the same time, the cashier web system receives the details of the delivered order and the bill is prepared. The web-based admin panel allows the restaurant’s management to add, view, remove and update menu items and waiters, view reservation information and its cooking/payment status, update the service charge and tax, and view revenue information over a time period. The T-MOMS system consists of a central server and a database, and all ordering and expenditure information is stored in the central database. Eclipse and phpStorm were used as the IDEs, and the main languages used are HTML, JavaScript, PHP, Java and XML. The menu application is designed for 7" tablets and is also supported on smaller screen sizes. As future development, features such as restaurant statistics and paying the bill directly through the menu application should be implemented.
  • Item
    A Virtual Dressing System
    (University of Kelaniya, 2013) Rajasinghe, R.M.C.N.A.; Wijegunasekara, M.C.
    Virtual dressing rooms are a relatively new concept that is slowly becoming a trend on various fashion websites. A virtual dressing room allows a customer at home to virtually try on dresses and other fashion items online, so the consumer can gauge whether the style and fit are an appropriate match before adding the item to the virtual shopping cart of a web store. Customers are nervous about purchasing garments electronically because they are unsure what size to order and how the clothes will look on them, and merchants are nervous about the high volume of apparel returns. For a merchant, handling an apparel return can cost up to four times what it costs to process the initial sale of the garment. Industry analysts have estimated that apparel returns for electronic merchants range from about 10% for very basic items to between 35% and 40% for high-end clothing, and the single biggest reason for returns of apparel purchased electronically is poor fit. The objective of this research is to address these issues: firstly, to improve the ability to make the right purchase, with better opportunities to experiment with dress styles, which is a competitive advantage; and secondly, to reduce the buying risk, time, effort, discomfort, queues at shops and the proportion of returned items. To address these issues, image processing technology was used, including template matching (for finding small parts of an image) and thresholding, the simplest method of image segmentation. .NET was the main framework for this application, C# and C++ were used as the development languages, and the OpenCV libraries were also used. The main functions implemented in the system can be categorized as follows: 1. loading the video stream into the form; 2. embedding textile images; 3. allowing the user to move the embedded textile image over the video as required. A user who is new to the system must select the given item and background; the selected values are written to a text file, which is read by the logic files so that the appropriate images are loaded into the forms. The OpenCV function cvtColor() converts an input image from one color space to another. For transformations to and from the RGB color space, the ordering of the channels is specified explicitly (RGB or BGR). For non-linear transformations, the input RGB image is normalized to the proper value range in order to get correct results, and the image is scaled before the transformation. Transformations within the RGB space include adding or removing an alpha channel, reversing the channel order, conversion to and from 16-bit RGB color (R5:G6:B5 or R5:G5:B5), conversion to and from grayscale, and conversion from an RGB image to gray. For 8-bit and 16-bit images, the R, G and B values are converted to floating-point format and scaled to fit a range between 0 and 1, and the values are then converted to the destination data type. The system operates on a threshold color, and all the detection functions work according to these threshold colors; the OpenCV threshold method is used for this. The Bayer pattern used in CCD and CMOS cameras allows color pictures to be produced from a single plane in which R, G and B pixels (sensors of a particular component) are interleaved; the output RGB components of a pixel are interpolated from 1, 2 or 4 neighbors of the pixel with the same color. The implemented system can be used to overcome the problems identified in this study, and it performed well under the illumination conditions used to test it.
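The original system was built on .NET with C#/C++ and the OpenCV libraries. The sketch below illustrates the same basic operations (color-space conversion, thresholding and overlaying a garment image onto a video frame) using OpenCV’s Python bindings; the file name, overlay size and fixed overlay position are placeholders, and a real system would move the overlay according to the user’s input.

```python
import cv2

# Placeholder inputs: one webcam frame and a garment image on a white background.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Could not read a frame from the camera")
garment = cv2.imread("textile.png")            # hypothetical garment/textile image
garment = cv2.resize(garment, (200, 300))      # width x height of the overlay region

# Color-space conversion and thresholding, as described above (cvtColor / threshold).
gray = cv2.cvtColor(garment, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)  # keep the non-white garment pixels

# Overlay the garment onto a fixed region of the frame; the position is illustrative only.
x, y = 220, 100
roi = frame[y:y + 300, x:x + 200]
background = cv2.bitwise_and(roi, roi, mask=cv2.bitwise_not(mask))
foreground = cv2.bitwise_and(garment, garment, mask=mask)
frame[y:y + 300, x:x + 200] = cv2.add(background, foreground)

cv2.imshow("Virtual dressing sketch", frame)
cv2.waitKey(0)
cap.release()
```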
