2019 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT) Program
Sunday, September 22
Sunday, September 22 8:00 - 9:00
R1: Registration-Day1
Sunday, September 22 9:00 - 9:05
OC-1: Opening Ceremony
Sunday, September 22 9:05 - 9:10
OC-2: Quran Recitation By
Sunday, September 22 9:10 - 9:20
OC-3: Talk by His Excellency President of the University of Bahrain
Sunday, September 22 9:20 - 9:25
OC-4: Presentation of accepted paper statistics by Dean of IT College
Sunday, September 22 9:25 - 9:35
OC-5: Appreciation for Keynote Speakers & Sponsors
Sunday, September 22 9:35 - 10:20
KS-1: Keynote Speaker-1 "Create a Sustainable Eco-System in the UAE Smart ICT Industry, Through Entrepreneurship & R&BD Projects"

The economic eco-system requires many players to participate in the advancement of global cities and countries. The main variables in the equation of advancement relate to business processes, business laws, management practices, innovation, big data, analytics, cloud computing, 5G, Artificial Intelligence (AI), the Internet of Things (IoT), blockchain and crypto-currencies, which serve the 4th Industrial Revolution (4IR) era. Although economic drivers depend on natural resources, human resources and technology, the main driver is technology above all. ICT should therefore be the main priority of economies seeking to advance their nations toward sustainable prosperity and a productive lifestyle, and entering the Information & Communication Technology (ICT) development field is the bright future for most economies of the world. This trend has been under way for a while; however, since the beginning of the 21st century it has been moving so dramatically that those who cannot keep up with this rapid pace will be left behind, facing the challenge of running fragile economies built on delicate foundations. The economic driving forces depend highly on technology, and specifically on ICT. ICT trends are so agile that it is very difficult to keep pace with these vast changes unless there are serious strategic plans and initiatives to implement the elements of the ICT value chain and create a knowledge-based eco-system. This value chain has three main elements that build the foundations of a knowledge-based society: creating a skillful workforce through educational activities, moving toward the intermediate stage of creating patents and intellectual property (IP) by encouraging R&D activities, and ending with knowledge creation through incubators and start-ups. Statistics on the present and future ICT sector show that this sector is, and will remain, the main driving force of economic development, which is the basis for establishing a smart, knowledge-based economy. In terms of national digitalization, the UAE is pioneering the drive in the region and the world; the ICT drive towards sustainable smart cities has been the main strategic agenda of the UAE to lead the world.
Sunday, September 22 10:20 - 10:50
KS-2: Keynote Speaker-2: CNS Infrastructure (Communication, Navigation, Surveillance) for Multimodal Autonomous Mobility

Autonomous mobility seems to be the next big thing. Unfortunately, public interest and academic attention on this topic are often narrowed down to autonomous driving only (Tesla, Google, etc.). For automobiles, as for any other mode of transportation such as aircraft, drones (RPAS), trains, etc., the ultimate goal is autonomous mobility, where a driver/pilot and even a steering wheel are obsolete (recognized as Level 5), as opposed to highly assisted and/or automated mobility, where a driver/pilot is still in command although heavily supported by high-tech sensors. In reality, the latest Audi A8 (2018), for instance, has more than 40 sensors such as radar, ultrasonic, microwave, cameras, etc. within the car, which allow a limited set of automation in specific situations, such as automatic driving on motorways in traffic jams up to a speed of 60 km/h. The reason why car manufacturers put all their efforts and sensors into the car is quite obvious: the industry wants to sell cars all over the world regardless of the ground-based infrastructure that might be available in any given country, and ultimately without any ground-based infrastructure other than streets at all. The flip side of this rationale, however, is that Level 5 autonomy cannot be achieved for sure. Similar challenges exist for all other modes of transportation, such as delivery drones and Urban Air Mobility. The most advanced sector in terms of infrastructure is aviation, where we see a quite sophisticated environment in terms of Communication, Navigation and Surveillance (CNS infrastructure). The author strongly promotes an integrated, multimodal and performance-based CNS infrastructure in defined areas, or even entire cities, suitable for all modes of transportation. This CNS infrastructure should be certified by governmental authorities and independently operated by providers other than the transportation companies, for the sake of safety and security. It includes, e.g., 5G technology for communication, Ground Based Augmentation Systems (GBAS) for navigation (GNSS), and blockchain technology to assure data integrity.
Sunday, September 22 10:50 - 11:10
SB-1: Short Break
Sunday, September 22 11:10 - 12:50
S1: Smart Cities-1
- 11:10 Concept for and Implementation of Wildlife Monitoring to Contribute Sustainable Development Goals
- With the increasing use of the Internet of Things (IoT), heterogeneous IoT services are offered to satisfy various needs. The Fed4IoT project aims to develop an IoT virtualization technology to realize a smart-city application that federates heterogeneous IoT services. In this project, wildlife monitoring was chosen as an example application. Wildlife damage is not only an agricultural problem but also disturbs the daily lives of people who live in rural areas, and it has been a universal problem for a long time. In this use case, an IoT platform with end devices consisting of a cage, camera, target counter, etc. is employed, and local offices and residents are informed of an animal's approach, the condition of the cage, etc. through an information-centric network. The obtained data are used in other Fed4IoT services. In this work, the IoT platform and its application to wildlife monitoring are discussed. Moreover, this study and the project aim to help in the achievement of the United Nations Sustainable Development Goals (17 goals and 169 targets to eradicate poverty and realize a sustainable world).
- 11:30 LOLP and LOLE Calculation for Smart Cities Power Plants
- Generation system reliability is an important factor in long-term planning for future capacity expansion, to make sure that the total installed capacity is enough to support demand; this is especially important for smart cities. The planning process uses reliability indices as criteria to decide on investments in new generation capacity. Generation system reliability is evaluated using different indices. In this project, the Loss of Load Probability (LOLP) and Loss of Load Expectation (LOLE) are simulated to evaluate system reliability, and the effect of system parameters such as the forced outage rate (FOR) is tested on both indices. The traditional use of LOLE is to determine the required installed capacity based on the expected capacity during peak periods, while LOLP measures the probability of outages relative to overall resource adequacy. To determine the required electric power capacity of a power plant, both indices are needed, and they support the analysis of the plant; by calculating the power capacity, the desired reliability target can be obtained. The LOLE and LOLP calculation combines the generation outages with the load profiles and requires the probability of generator forced outages (FOR). The expected number of days in the year on which a shortage might occur is obtained, using historical shortage-day data, with outages assumed to coincide with the daily peak load. The index LOLP refers to a probability of outages, whereas LOLE describes an expected value. The LOLE analysis forms the basis for calculating how much a particular generator, or group of generators, contributes towards the planning reserve; the capacity contribution obtained from this calculation is called the effective load-carrying capability. (Textbook-style definitions of these indices are sketched after this session's listings.)
- 11:50 Quick Optimal Lane Assignment Using Artificial Neural Networks
- Signalized intersections are major components of any transportation network, as traffic operation at these intersections significantly affects the operation of the whole network. Dynamic lane assignment (DLA) is an Intelligent Transport System (ITS) technique that can be used to enhance traffic operations at signalized intersections by utilizing space efficiently. In a DLA strategy, the number of lanes assigned to each movement (left, through and right) depends mainly on the real-time traffic demand for that movement. This study aims to develop an artificial neural network (ANN) model that can be used to predict the optimal lane assignment combinations at signalized intersections using the turning movement volumes of all intersection approaches. Developing an ANN model expedites the selection of the optimal lane assignment, since it does not require detailed delay calculations for all possible lane combinations to identify the optimum configuration for a given traffic movement. The optimum topology for the developed ANN model was found to be three hidden layers with 14 neurons each, with an average testing accuracy of 92%. (An illustrative sketch of such a model follows this session's listings.)
- 12:10 Systematic literature review of the smart city maturity model
- While there has been an increasing number of studies on smart cities, there are still issues in the assessment and governance of these cities. This study aims to analyze and provide insights into the research done on smart city maturity models by conducting a systematic review of the related literature. The study discusses and analyzes 22 maturity frameworks from peer-reviewed articles. The discussion also covers the alignment of the smart city with the United Nations Sustainable Development Goals. The analysis shows an increased emphasis on the development of maturity models over the years 2011-2018. However, only 9 of the 22 models were comprehensive; the other models focused on specific aspects such as citizens, data and IoT. In addition, 50% of the models from the literature included components related to technology, but most of the models lacked assessment of environmental aspects. This review indicates the need for a comprehensive smart city maturity model to assess cities in their smartness transformation.
- 12:30 Solutions to address the security issues of IoT-based smart grid networks
- The Smart Grid (SG) is the modern power system; it develops and integrates power grids with Information and Communication Technology (ICT) to ensure reliable and safe power delivery to customers. SG allows smart devices and smart meters to exchange information with power utilities. The Advanced Metering Infrastructure (AMI), the most common component in smart grid applications, acts as the communication center connecting to the smart grid. The Internet of Things (IoT) is a set of devices that contain electronics, software, sensors, actuators, and connectivity that enable them to connect and transmit data, and it has developed into one of the technologies that make smart grid networks possible. The main challenge of IoT is security: how to protect the smart grid and the devices connected over a network from cyber-attacks. A cyber-attack on a smart grid can disrupt the reliability of the infrastructure and shut down the electricity grid, and once a single device is compromised, the entire grid can become susceptible to attack. Security is therefore a serious factor that must be considered when implementing IoT-based smart grid networks. In this paper we review and explore the major security issues of IoT-based smart grid networks and the solutions to mitigate the impact of attacks on them.
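As a reading aid for the LOLP/LOLE paper above (S1, 11:30), the following are standard textbook-style definitions of the two indices, not necessarily the exact formulation used by the authors. Here p_k is the probability of capacity-outage state k (built from the generators' forced outage rates), C is the installed capacity, O_k the outage in state k, and L_d the peak load on day d:

```latex
\mathrm{LOLP} \;=\; \sum_{k} p_k \,\cdot\, \mathbf{1}\!\left[\,C - O_k < L_{\text{peak}}\,\right]
\qquad
\mathrm{LOLE} \;=\; \sum_{d=1}^{N_{\text{days}}} P\!\left(\,C_{\text{avail}} < L_d\,\right) \quad \text{[days/year]}
```

In words: LOLP is the probability that the available capacity falls short of the peak load, while LOLE accumulates that probability over the daily peaks of a year to give an expected number of shortage days.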
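For the dynamic lane assignment paper above (S1, 11:50), the sketch below shows, with synthetic data, how turning-movement volumes could feed a three-hidden-layer, 14-neuron-per-layer network of the kind the abstract describes. The feature layout, label encoding and data are illustrative assumptions, not the authors' dataset or code.

```python
# Hypothetical sketch: predict an optimal lane-assignment class from turning-movement volumes
# with an MLP of three hidden layers x 14 neurons, as described in the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(50, 800, size=(1000, 12)).astype(float)  # 4 approaches x (left, through, right) volumes
y = rng.integers(0, 6, size=1000)                          # index of the best lane-assignment combination

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(14, 14, 14), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))           # random labels here, so near chance
```

With real volume/optimal-assignment pairs (e.g. produced offline by exhaustive delay calculations), the same topology would be trained to stand in for those calculations at run time.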
S2: Informatics -1 & Advanced IT
- 11:10 Optimum Placement of Conformal Antenna Array Based on Path Loss Profile
- In this paper, the optimum placement of a conformal antenna array based on the path loss profile is discussed. The arrays are considered to be conformed on the wings and the cylindrical fuselage of an Unmanned Aerial Vehicle (UAV). Two types of feeding designs are presented: a rectangular waveguide (RWG) for the fuselage conformal array and a substrate integrated waveguide (SIW) for the wing conformal array. The optimum placement of the conformal arrays is proposed on the basis of the path loss profile for air-to-air (AA) and air-to-ground (AG) links. The proposed arrays have been designed and simulated in Ansys HFSS, and it has been observed that they offer gain values of 11.15 dB and 9.8 dB for the wing and fuselage, respectively.
- 11:30 Interoperable Framework to Enhance Citizen Services in the Kingdom of Bahrain
- Citizen records are scattered between different state organizations. It wastes time, effort, and resources for both citizens and organizations to collect, maintain, and update records to fulfill citizen services. Interoperability is a key element that enables seamless collaboration between different entities. It requires non-conventional methods to overcome interoperability challenges such as lack of trust, centralization, and policy and technology differences. Blockchain is a disruptive technology with the potential to overcome these challenges; it is designed to enable peer-to-peer transactions, eliminating intermediaries, in a trustless environment governed by consensus mechanisms. This research aims to explore the status of interoperability in Bahrain, design an interoperable framework, and then test the validity of the framework by implementing a prototype using blockchain technology. The research is divided into four phases: I, information collection; II, design and modeling of the framework; III, implementation of a prototype; and IV, measuring the performance of the prototype. This research is in progress and, once complete, is expected to enhance the e-government plan of the Kingdom of Bahrain to provide better services to citizens and help in the transition from e-government to seamless government, which will lead to sustainable citizen services. The findings of the study are also expected to improve social, economic, and environmental sustainability through increased process optimization and reduced cost and complexity.
- 11:50 Cost Contingency Modelling for Construction Projects: Insight from the Literature
- A review of cost contingency modelling in construction projects is presented to identify the application areas and limitations of existing models and to pinpoint the remaining research gap. Three broad approaches to contingency modelling are found: (1) data-intensive contingency prediction models without risk analysis; (2) contingency estimation following a knowledge-based risk assessment; and (3) contingency modelling integrated with a risk management strategy. However, the existing models share a basic limitation: the risks in contingency modelling are assumed to be independent. Ignoring the inter-relationships between risks leads to inaccuracy in risk assessment, which subsequently leads to inaccuracy in contingency modelling. In response to this issue, an integrated fuzzy Bayesian belief network model is a promising direction for future research to develop a risk-induced contingency model for complex projects.
- 12:10 Structural Behavior of Non-Uniform Mat Foundation on Weak Sandy Soil
- Rapid urban growth due to increases in population and economic prosperity has produced a large number of high-rise residential and commercial structures in cities like Abu Dhabi. The weak salty sand that covers many cities in the Middle East requires the construction of costly deep foundations. High-rise structures, especially in city areas, also require basement parking to provide the large number of parking spaces their users need, and underground floors on a mat foundation are considered a solution for vertical expansion in a safe and economic way. Mat foundations are often designed using simplified design methods, and a large number of piles associated with an over-designed, uniform-thickness mat foundation can significantly increase the total foundation cost of a high-rise building. Although non-uniform mat foundations can contribute to savings, the structural behavior of such foundations on soil types similar to those found in many parts of the Middle East is not well studied. To address this gap in knowledge on the behavior of both uniform and non-uniform mat foundations, a parametric study was conducted to explore the structural responses of different mat foundation schemes on typical soil conditions using finite element analysis. The results provide a better understanding of the behavior of typical uniform and non-uniform mat foundations on weak sandy soil, and suggest some material savings from changing the mat scheme from uniform to non-uniform thickness under weak sandy soil conditions.
S3: Artificial Intelligence
- 11:10 Incorporate Doubts in Fuzzy Logic
- In fuzzy logic we carefully design membership functions to obtain proper values that represent the degree of truthiness. In most cases, membership functions deal with the most important and common factors/parameters that affect the degree of truthiness, while possibly ignoring factors that could decrease the degree of truthiness and increase the degree of doubt. Ignoring doubts means the degree of truthiness is not obtained accurately. In this paper we explain what we mean by doubts and how they might affect the calculation of the degree of truthiness, and finally we propose an approach to compute the degree of doubt and incorporate it into membership functions.
- 11:30 Bat Algorithm with Different Initialization Approaches for Numerical Optimization
- In the field of applied engineering, function optimization has been extensively applied for the purpose of finding an optimal solution to a given problem. The Bat Algorithm (BA) is a population-based stochastic algorithm used to solve both discrete and continuous problems. Using a robust pattern for population initialization may enhance the performance of optimization algorithms and help prevent premature convergence. In this paper, we implement BA with probability-distribution-based initialization and introduce four new initialization variants: the Beta distribution (B-BAT), the Exponential distribution (E-BAT), the Gamma distribution (G-BAT) and the Weibull distribution (W-BAT). The empirical results show that the proposed techniques are more robust in terms of convergence rate. (An illustrative initialization sketch follows this session's listings.)
- 11:50 Recognition Bangla Sign Language using Convolutional Neural Network
- Sign language, considered the main language of the deaf and hard of hearing, uses manual communication and body language to convey expressions and plays a major role in developing an identity. Nowadays, sign language recognition is an emerging field of research aimed at improving interaction with the deaf community. Automatic recognition of American, British, and French sign languages with high accuracy has been reported in the literature. Even though Bangla is one of the most widely spoken languages in the world, no significant research on Bangla sign language recognition has been found in the literature; the main reason for this lag might be the unavailability of a Bangla sign language dataset. In this study, we present a large dataset of Bangla sign language consisting of both alphabets and numerals. The dataset comprises 7052 samples representing 10 numerals and 23864 samples corresponding to the 35 basic characters of the alphabet. Finally, the performance of a convolutional neural network in recognizing the numerals and the alphabet separately, and in combination, was evaluated on the developed dataset using 10-fold cross-validation. The proposed method provided average recognition accuracies of 99.83%, 100%, and 99.80% for the basic characters, the numerals, and their combined dataset, respectively.
- 12:10 Glucose Controller For Artificial Pancreas
- Type 1 diabetics have been helped by the "artificial pancreas" (AP), a closed-loop control of blood sugar regulation in diabetes. It is a system that combines blood glucose monitoring through an in vivo glucose sensor, an insulin pump, a computation unit that determines the insulin dosage and amount according to the blood glucose measurement, and a pump control unit that acts on the results of that computation. An optimal controller is needed that can keep up with sudden changes in type 1 diabetes (T1D), track the glucose reference closely, and maintain the glucose level within the normal zone (between 70 mg/dl and 120 mg/dl). In this paper, a state feedback control method based on a "Linear-Quadratic Regulator-Servo" (LQR-S) is proposed. A Kalman Filter (KF) is used to provide the state estimates required by the LQR-S design. Simulation results show considerable improvements in AP performance using the developed controller compared with a conventional PID controller. (A minimal LQR gain computation is sketched after this session's listings.)
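For the Bat Algorithm initialization paper above (S3, 11:30), this is a minimal sketch of drawing an initial population from the four distributions the abstract names (Beta, Exponential, Gamma, Weibull). The distribution parameters and the rescaling into the search bounds are assumptions for illustration, not the authors' settings.

```python
# Hypothetical sketch: probability-distribution-based population initialization for BA.
import numpy as np

def init_population(n_bats, dim, lower, upper, dist="beta", rng=None):
    rng = rng or np.random.default_rng()
    if dist == "beta":            # B-BAT
        u = rng.beta(2.0, 2.0, size=(n_bats, dim))
    elif dist == "exponential":   # E-BAT
        u = rng.exponential(1.0, size=(n_bats, dim))
    elif dist == "gamma":         # G-BAT
        u = rng.gamma(2.0, 1.0, size=(n_bats, dim))
    elif dist == "weibull":       # W-BAT
        u = rng.weibull(1.5, size=(n_bats, dim))
    else:                         # plain uniform initialization for comparison
        u = rng.random((n_bats, dim))
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)        # normalize samples to [0, 1]
    return lower + u * (upper - lower)                     # map into the search bounds

pop = init_population(n_bats=30, dim=10, lower=-5.0, upper=5.0, dist="weibull")
print(pop.shape)                                           # (30, 10) candidate solutions
```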
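For the artificial pancreas paper above (S3, 12:10), the sketch below computes a continuous-time LQR gain for a toy two-state linear model. The matrices are placeholders, not the authors' identified glucose-insulin model, and the servo (integral) action and the Kalman filter mentioned in the abstract are omitted for brevity.

```python
# Hypothetical sketch: LQR gain for a toy glucose-insulin state-space model.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-0.02, -1.0],
              [ 0.00, -0.10]])     # toy dynamics: glucose deviation and remote insulin action
B = np.array([[0.0],
              [0.05]])             # insulin infusion input
Q = np.diag([10.0, 0.1])           # penalize glucose deviation most heavily
R = np.array([[1.0]])              # penalize insulin effort

P = solve_continuous_are(A, B, Q, R)          # solve the continuous algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)               # optimal state-feedback gain: u = -K x
print("LQR gain K =", K)
```

In the closed loop, a Kalman filter would supply the state estimate x from the glucose sensor readings, as the abstract describes.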
Sunday, September 22 12:50 - 13:20
P1: Prayer
Sunday, September 22 13:20 - 15:00
S4: Informatics -2
- 13:20 Investigating the Role of Information System Quality and Managerial Competency on the Organization's Performance
- This research provides insight into the recent literature on the role of information system (IS) quality and its impact on the performance of organizations. A systematic review approach using content analysis was adopted. The review covers journal articles obtained from several databases, namely Science Direct, Springer, and Emerald; the search techniques specified by each database were used to obtain related research papers within the period covered by the study. Based on fifty-eight empirical studies, the researchers found that information system quality is a variable used in many studies and contributes strongly to the enhancement of organizational performance. Moreover, previous studies relate information system quality to the improvement of the organization's management functions, activities, problem-solving and decision-making, which in turn impact performance. The results of this review provide a roadmap for researchers on organizational performance and highlight useful variables and directions for research on the impact of information system quality and managerial competency on organizations' performance. The researchers conclude with a conceptual model which provides a justifiable snapshot of the relationship between information system quality, managerial competency and their impact on organizational performance.
- 13:40 The Association between Technological Readiness and Higher Education: The case of Middle East Countries
- This study examines the association between two sets of variables, "Technological readiness" and "Higher education and training", across twelve Middle East countries using the findings of the Global Competitiveness Index (GCI) 2012-2018. We employed Canonical Correlation Analysis (CCA) based on data published in the World Economic Forum reports (WEFs) from 2012 to 2018. CCA analyses the relationship between the two sets of variables, maximizes the correlation between linear composites of the two sets, and identifies the most influential variable in each set. The main finding of the study is that the two pillars, "Technological readiness" with six variables and "Higher education and training" with eight variables, are highly positively correlated. The most influential variable in the first set, "Technological readiness", is "Availability of latest technologies", followed by "Firm-level technology absorption", "Foreign direct investment and technology transfer", "Internet users", "Mobile-broadband subscriptions" and finally "Internet bandwidth". The most influential variable in the second set, "Higher education and training", is "Internet access in schools", followed by "Extent of staff training", "Local availability of specialized training services", "Secondary education enrolment rate", "Quality of management schools", "Quality of math and science education", "Tertiary education enrolment rate" and "Quality of the education system". The outcomes of the study can help decision-makers in Middle East countries focus on the most influential factors in each set to stimulate and boost economic growth. (An illustrative CCA sketch follows this session's listings.)
- 14:00 Comparative Evaluation of Short Read Alignment Tools for next Generation DNA Sequencing
- The recent advancements in Next-generation sequencing technologies have resulted in an enormous amount of sequencing data with newer properties. An important step in all genomic analysis is aligning the reads to a reference genome. Numerous short read alignment tools are available to perform this time-consuming task. However, these tools have different trade-offs in terms of accuracy and speed. Hence, it is crucial to evaluate these tools based on various aspects to have a clear understanding of their performance. BWA MEM and Bowtie2 are selected to perform this evaluation mainly because they are designed to handle longer reads. The performance was evaluated using throughput and mapping percentage. The tools were put under several tests such as their ability to handle longer reads. The results indicate that one tool cannot outperform the other tool in all aspects. Bowtie2 achieved higher throughput in some cases while in other cases BWA MEM achieved higher throughput. However, BWA MEM had higher mapping percentage in most cases and handled longer reads more efficiently than Bowtie2.
- 14:20 Employees' Productivity Measurement And Control - A Case Of A National University
- This experimental study investigates the impact of Internet access control on employees' productivity at a national university. The purpose of the study is to boost employee productivity through proper Internet access control. The main objectives are to find the web categories most used by staff, determine whether a relation exists between productivity and non-work-related Internet usage, and choose the best level of Internet access control. Before the experiment, employees' Internet usage was monitored and they were classified accordingly into appropriate Internet access control groups. Supervisors were then asked for a pre-test productivity measure for their staff, after which the experiment ran for 45 days and a post-test productivity measure was taken. Productivity changes were analyzed against the nature of each department, its Internet usage profile and its Internet access control group, and the best level of restriction was identified. The results show that the productivity of departments with low Internet usage was not affected by restricting or unrestricting Internet access. For departments with high Internet usage, however, a noticeable productivity improvement occurred when the Internet restriction policy did not affect work-related websites; when it did affect work-related websites, productivity decreased.
- 14:40 Does national culture and Information System Quality (ISQ) have an impact on empowering the transition toward cloud computing adoption by SME organizations in the GCC?
- This research provides insight into the recent literature on the role of information system quality and national culture and their impact on cloud computing adoption by SMEs in the GCC. A systematic review approach using content analysis was adopted. The review covers journal articles obtained from several databases, namely Science Direct, Springer, and Emerald; the search techniques specified by each database were used to obtain related research papers within the period covered by the study. Based on fifty-eight empirical studies, the researchers found that information system quality is a variable used in many studies and contributes strongly to the adoption of cloud computing. The results of this review provide a roadmap for researchers on cloud computing adoption by SMEs in the GCC and highlight useful variables and directions for research on the impact of national culture on cloud computing adoption. The researchers conclude with a conceptual model, which provides a justifiable snapshot of the impact of information system quality and national culture on cloud computing adoption by SMEs in the GCC.
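For the technological readiness / higher education paper above (S4, 13:40), this sketch shows the mechanics of canonical correlation analysis on two blocks of indicators with the same shapes as in the abstract (6 and 8 variables). The data are synthetic; the real analysis would use the GCI indicator values for the twelve countries over 2012-2018.

```python
# Hypothetical sketch: canonical correlation between two indicator blocks.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 84                                  # e.g. 12 countries x 7 report years
X = rng.normal(size=(n, 6))             # "Technological readiness" indicators
Y = rng.normal(size=(n, 8))             # "Higher education and training" indicators

cca = CCA(n_components=1)
X_c, Y_c = cca.fit_transform(X, Y)
r = np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1]
print("first canonical correlation:", round(r, 3))
print("X loadings:", cca.x_loadings_.ravel())   # larger loadings ~ more influential indicators
```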
S5: Software Engineering - 1
- 13:20 A Code Summarization Approach for Object Oriented Programs
- The software maintenance process is not an easy job, especially when the source code becomes more complex and less documented. Software developers and engineers may spend plenty of time to understand the structures and features of the source code in order to maintain their software projects. Using an automated code summarization technique is a key factor to save time and cost of the maintenance process. Generating descriptive documents from the source code helps the software developers to understand their software. In this paper, a code summarization framework is proposed to document the source code. A set of software features are extracted and displayed to the developers based on mapping the target source code segments to an XML representation. The proposed framework has been conducted on an open source project to evaluate its effectiveness. The generated results showed that the proposed approach is useful in understanding the different structural aspects of the source code.
- 13:40 Prototyping a Visual Academic Transcript
- In this paper we propose two visual academic transcript interface alternatives. Both have exactly the same features but with different layout design. While version (A) presents all of the required information in an overview page, version (B) adopts simplicity as a primary concern by minimizing the number of interface components. Both alternatives apply the main HCI design principles such as: Layout, Aesthetics, Information, Colors, Charts, and Metaphors. The paper also discusses the failure of basing the interface design on legacy processes and interface design, and concludes with the results of testing the two proposals on a sample of students.
- 14:00 Formal Verification of Cloud based Distributed System using UPPAAL
- Verifying a cloud-based distributed system is a very challenging task, which requires proper modeling and verification of its properties before it is deployed. Different synchronized system properties need to be verified with limited resources. This paper proposes a compositional approach intended to be reusable by others for building new verifications for future property-checking needs. The proposed approach targets distributed applications written in Visual Studio. To verify the target properties, timed automata are used to model different aspects of distributed systems and cloud computations.
- 14:20 Reverse engineering Approach for Classes' representations and interactions in Software Projects
- The software life cycle is an essential process for all software projects. The most costly stage is the maintenance process, which requires a lot of effort to understand the current software; therefore, software reverse engineering is widely used to deal with that challenge. This paper proposes a new approach to reverse engineer a modified UML class diagram that focuses on the internal structure of each class with detailed class metrics. Moreover, a class call graph is generated statically for the set of target classes. The proposed approach is applied to an open source project to demonstrate its effectiveness.
- 14:40 Generating test sequences from UML use-case diagrams
- Testing the software is an essential phase of the software development life-cycle. There are many ways to generate the test sequences required to perform testing tasks. UML use case diagrams describe the software specifications, which makes them one of the possible sources for generating test sequences. In this paper, UML use case diagrams are used to generate test sequences through a set of processes. In the first process, the UML use case diagrams are converted to activity diagrams, which make the use cases' relationships explicit so that all paths can be covered in testing. Then, the activity diagrams are simplified by converting them to activity graphs as a pre-processing step. The final process generates the test sequences by extracting the information from the activity graphs. The approach has been evaluated using three case studies and a comparison with other approaches from prior work.
S6: Cyber Security-1
- 13:20 Impact of Spectre/Meltdown Kernel Patches on Crypto-Algorithms on Windows Platforms
- This paper evaluates the performance impact of the kernel patches for two recently discovered catastrophic hardware vulnerabilities. These vulnerabilities can cause significant harm to security and privacy through memory leaks on a wide range of modern processors, including the Intel, AMD, and ARM processor families among others, and they can also affect iOS and Android mobile devices. Known as Spectre and Meltdown, they were publicly disclosed together in January 2018, and there are now multiple variants of exploits for these flaws. Barring the design of new hardware, the only fix since their discovery is software patching. Such patches can negate some of the speedups in computing gained through out-of-order execution and branch prediction. Since cryptography is at the heart of data security, it is especially important to study the performance impact of these patches on crypto-algorithms, because exploiting these vulnerabilities can disclose secret keys and other confidential information within a few minutes. The paper first explains these vulnerabilities and their exploits and reviews relevant related work. It then describes a large number of experiments conducted on the Windows platform with and without the kernel patches enabled while running various crypto-algorithms. The results are statistically analyzed to test whether there is a significant impact on performance for various data sizes and types. (A minimal timing-harness sketch follows this session's listings.)
- 13:40 A perspective study towards biometric-based rider authentication schemes for driverless taxis
- Driverless vehicles exploit Artificial Intelligence (AI) to offer rides to their users with minimal or no human input. Driverless vehicles used as taxis are evolving into an entirely new transportation concept, Transportation-As-A-Service (TAAS), steered by well-proven client-server infrastructures. Typically, the servers, managed by multinational companies (e.g., Waymo), operate driverless taxis for public transportation, whereas the clients are simply smartphone-based applications that customers can use for one-time registration and for booking rides with driverless taxis anytime, anywhere. In this paper, we perform a perspective study of secure and usable biometric-based rider authentication schemes for driverless taxis. Our study is complemented by an online survey comprising 75 responses from participants worldwide. We published the survey on social media platforms such as LinkedIn, Facebook and Twitter, and in various public transport user groups, to collect public opinion on rider authentication for driverless taxis. Approximately 90% of the participants either strongly agreed or agreed with the necessity of deploying biometric-based rider authentication for driverless taxis. Considering the survey's results, we then propose the design of a possible solution for riders' biometric authentication.
- 14:00 Detection of Malicious Emails through Regular Expressions and Databases
- Electronic mail, or email, is a very common means of communication, mostly for formal purposes such as with companies, firms and organizations, and it can be used from anywhere at any time. However, due to the increased usage of this service, black hats, or unethical hackers, have targeted it to cause harm by any means. The paper aims to find an optimal, logical method for reducing the probability of falling victim to such malicious emails by filtering them out. (An illustrative regular-expression filter is sketched after this session's listings.)
- 14:20 A secure Framework for Mobile Cloud Computing
- Mobile Cloud Computing (MCC) is a natural evolution that has emerged from the amalgamation of cloud computing, advancements in wireless technologies, and the mobile technologies that have come to dominate the scene of late. This merger has provided benefits, but it has also amplified the inherited security and privacy concerns. This paper presents an overview of the security and privacy concerns of mobile cloud computing and proposes an MCC security framework to deal with them.
- 14:40 Security Issues in IoT: A Survey
- The Internet of Things (IoT) can be found everywhere in daily life. It is used in smart cities (roads, hospitals) and in smart homes (controlling doors or air-conditioning units, preventing fires) and much more. IoT devices are connected to the Internet and send and receive large amounts of important data through the network, which whets attackers' appetite to invade IoT networks and steal valuable data. The problem with IoT devices is their limited-performance components, which make it difficult to apply existing security methods to them. This limitation creates a need for lightweight algorithms that suit IoT devices. The survey in this paper reviews security issues related to IoT, such as types of attacks, IoT security frameworks, encryption algorithms, authentication methods and hardware-based security support for IoT. The aim of the survey is to highlight security problems, increase user awareness, and thus help stop many kinds of attacks that exploit the limitations of IoT systems.
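For the Spectre/Meltdown paper above (S6, 13:20), the following is a minimal timing harness of the general kind such studies use, here timing one hash primitive. Comparing patched vs. unpatched behaviour is done outside the code, by toggling the OS-level mitigations between runs; the primitive, sizes and run counts are illustrative choices, not the authors' experimental setup.

```python
# Hypothetical sketch: measure crypto-primitive throughput for patch-on vs. patch-off runs.
import hashlib
import os
import time

def sha256_throughput(data_mib=64, runs=5):
    data = os.urandom(data_mib * 1024 * 1024)       # random input buffer
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        hashlib.sha256(data).hexdigest()
        best = min(best, time.perf_counter() - t0)  # keep the fastest run
    return data_mib / best                          # MiB per second

print("SHA-256 throughput: %.1f MiB/s" % sha256_throughput())
```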
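For the malicious email detection paper above (S6, 14:00), here is a small illustration of regular-expression filtering backed by a blacklist lookup. The patterns and the in-memory "database" are examples only; they are not the rule set or storage used in the paper.

```python
# Hypothetical sketch: flag suspicious emails with regular expressions plus a blacklist.
import re

BLACKLISTED_DOMAINS = {"malicious-example.com", "phish-example.net"}   # stand-in for a database

SUSPICIOUS_PATTERNS = [
    re.compile(r"verify your (account|password)", re.IGNORECASE),      # credential-phishing bait
    re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"),                     # links to raw IP addresses
    re.compile(r"\.(exe|scr|vbs|js)\b", re.IGNORECASE),                # risky attachment names
]

def is_suspicious(sender: str, body: str) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in BLACKLISTED_DOMAINS:
        return True
    return any(p.search(body) for p in SUSPICIOUS_PATTERNS)

print(is_suspicious("alerts@phish-example.net",
                    "Please verify your account at http://192.0.2.1/login"))   # True
```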
Sunday, September 22 15:00 - 16:00
LB-1: Lunch Break Day-1
Monday, September 23
Monday, September 23 8:00 - 9:00
R2: Registration-Day2
Monday, September 23 9:00 - 9:25
KS-3: Keynote Speaker-3: The Game of Empathy, Privacy and Ethics
Monday, September 23 9:25 - 9:50
KS-4: Keynote Speaker-4: On the difficulties of making The Internet of Things secure
We are witnessing increasing digitalization in many economic sectors, from automotive to healthcare and from manufacturing to smart buildings, to name just a few examples. All of this moves in the direction of realizing the Internet of Things vision: a world of connected devices with which we interact to, hopefully, enhance our life experiences. Security and privacy are two important problems that need to be addressed to make such a connected world a nice and safe place to live. In this talk, I will present the reasons that make these problems complex and difficult to solve, also presenting several examples of recent attacks on the Internet of Things. I will conclude by presenting some of the work my group and I are doing to contribute to making the Internet of Things more secure.
Monday, September 23 9:50 - 10:15
KS-5: Keynote Speaker-5: Arabic Text Processing

Indexing text documents consists of analyzing the content of the text in order to retrieve its subject. In this work, we propose a new model to enhance the auto-indexing of Arabic texts. Our model extracts new relevant words by relating those chosen by previous classical methods to new words using data-mining rules. The model uses an association rule algorithm for extracting frequent sets of related items, in order to extract relations between words in the texts to be indexed and words from texts belonging to the same category. The extracted word associations are represented as sets of words that frequently appear together. Our results show significant improvement in terms of accuracy, efficiency and reliability when compared to previous works.
Monday, September 23 10:15 - 10:40
KS-6: Keynote Speaker-6: Demystifying the Social Internet of Things (SIoT)
Advancements in the Internet of Things (IoT) allowed "Things" or "objects" to have virtual identities, exchange data, and discover and consume offered services. The Social Internet of Things (SIoT) is deemed the next evolutionary step of IoT where interacting objects autonomously establish social relationships analogous to humans' social networking. Hence, SIoT promotes an ecosystem where humans and objects can interact within a social framework. Such social structure allows efficiently coping with a large number of objects within SIoT but raises concerns in terms of architectural design, security and trustworthy interactions. In this keynote, the SIoT paradigm will be explored including research endeavours.
Monday, September 23 10:40 - 11:00
SB-2: Short Break
Monday, September 23 11:00 - 12:20
S7: Cloud Computing & Big Data
- 11:00 Predictive analytics to improve outcome based funding for public universities in South Africa through Big Data
- The development of the 4th industrial revolution has brought many opportunities for organisations to improve their competitive advantage, as their way of doing things has changed and their operational processes have moved towards digitization. This innovation leads organizations to silently generate voluminous data that adds further potential to improve business value and competitive advantage, and universities are one of the areas that can gain valuable insights from this data. However, advanced analytics is required, since this data is generated in heterogeneous formats from different sources, which is where the concept of big data emerged. The paper aims to discover the unstructured and structured data generated by universities and its substantial relevance to supporting decision making towards outcome-based funding through predictive analytics.
- 11:20 Comparative Study between Web Services Technologies: REST & WSDL
- The field of web services has become one of the hottest points of discussion during the last decade, along with evolving web technology. The possibility of having diverse systems connected to each other has been successfully achieved on a large scale with the aid of web services. Although web services are built on the concepts of web programming, developers must keep in mind that there are substantial comparison aspects relevant to creating and hosting a web service; knowing the procedure for constructing a web service, as well as its architecture and the standards being used, is important. This paper is intended to provide web programmers with some of the basic concepts of two of the most commonly used web service styles, WSDL and REST. It first presents their architecture, standards, and the steps for creating them, and it concludes with some findings and observations.
- 11:40 Reducing Cloud provisioning Cost Using Spot Instances hopping
- Cost reduction is one of the attractive features offered by the cloud, and spot leasing is one way to achieve it: unused excess instances are leased at a low price. However, spot instances face risks that reduce their reliability and desirability, including instance reclamation and dynamic price changes. Minimizing the risks associated with spot leasing helps increase the utilization of spot instances, which in turn attracts more users. In this paper, a framework is proposed to mitigate the instance-reclamation risk while reducing the leasing cost as much as possible. This is done by monitoring many markets and hopping between instances. The proposed framework was evaluated by simulating randomly generated data and actual data collected from Amazon Web Services, and it achieved a cost reduction of 9% to 42% compared with the actual cost. (A toy hopping simulation follows this session's listings.)
- 12:00 Utilizing excess cloud resources to reduce the cost and time of distributed processing
- Distributed computing requires large resources to perform well, and its performance increases proportionally with the available resources: work that takes a single computer one hundred hours can be achieved within one hour by integrating a hundred computers. This makes distributed computing a preferred choice for applications that work on big data or require intensive parallel processing. Maintaining a large number of resources is costly, however, especially for applications that operate only infrequently. On the other hand, cloud service providers have invested in building data centers prepared to handle dynamically changing loads; by adding extra resources, they provide scalability to their customers. Such excess resources can be leased at relatively low cost compared with other services, but they come with some risks. Deploying distributed computing on the cloud using these excess resources helps on two fronts: it minimizes the total provisioning costs that distributed computing suffers from, and it exploits the large resources cloud providers maintain to make processing faster, while relying on the distributed computing system to manage the risks associated with excess-resource allocation. In this paper, a survey of attempts to implement distributed computing on the cloud is provided, showing some benefits and problems associated with such integration.
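For the spot-instance hopping paper above (S7, 11:40), the toy simulation below conveys the basic idea of hopping to the cheapest market each period versus staying in one market. The price model is random and purely illustrative; it is not the authors' framework or Amazon Web Services data.

```python
# Hypothetical sketch: cost of staying in one spot market vs. hopping to the cheapest one.
import random

random.seed(1)
HOURS, MARKETS = 48, 4
prices = [[round(random.uniform(0.02, 0.10), 3) for _ in range(MARKETS)] for _ in range(HOURS)]

static_cost = sum(hour[0] for hour in prices)      # always lease from market 0
hopping_cost = sum(min(hour) for hour in prices)   # hop to the cheapest market each hour

print(f"static: ${static_cost:.2f}  hopping: ${hopping_cost:.2f}  "
      f"saving: {100 * (1 - hopping_cost / static_cost):.0f}%")
```

A real framework would also account for reclamation risk and the cost of migrating the workload between instances.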
S8: Robotics, Computer Vision, and HCI
- 11:00 Gender Identification Using 3D Touch Sensor Extended Version
- Gender identification has immense value in science and technology; it helps optimize software applications according to users' gender. In this study, we show how the 3D touch sensor can be used for gender identification. The 3D touch sensor, newly introduced in handheld mobile devices, measures how much force the user applies to the touch screen, so these biometric measurements can be exploited for gender identification. This study develops a special iOS application to collect 3D touch sensor data from a user. Discrete wavelet transform based features extracted from such data are then used to train an ensemble bagged-tree classifier for gender identification. The developed iOS application and proposed classifier are tested using data collected from 185 males and 181 females. The results are evaluated in terms of sensitivity and specificity, giving averaged values of 78.2% and 79.1% respectively; the sensitivity/specificity can exceed 97% in some cases. Therefore, the 3D touch sensor has potential value in providing biometric data useful for gender identification. (An illustrative feature-extraction and classification sketch follows this session's listings.)
- 11:20 Design and Analysis of Multicore Matrix Multiplier for Image Processing
- The need for increased processing speed has been growing lately. Parallel computing and compound instructions are techniques that have been employed to increase the speed of processing devices. In this paper, a high-performance multi-core matrix multiplier using parallel computing techniques has been designed and implemented. Simulation results are promising and indicate that the area and power consumed by the proposed matrix multiplier increase linearly with the number of processors, while latency decreases by a factor of 1/p, where p is the number of processors. Due to its salient features, the proposed multi-core multiplier can be utilized to increase processing speeds in graphics processing units.
- 11:40 Phi-descriptor based fuzzy modeling for more spatial relations
- In numerous fields of computer vision and pattern recognition, spatial relations between image objects play an important role, and a descriptor serves as the basis for modelling these spatial relationships. Numerous descriptors have been proposed, with applications in areas such as image recognition, extraction, image indexing, image comparison, scene description, and map-to-image conflation, but these descriptors are not mature enough to capture complex relations like "between", "surround" and "among". The recently proposed Phi-descriptor, which has many good features compared to other position descriptors, is extended here to these relations. The Phi-descriptor allows the extraction of a wide range of spatial relations, including distance and directional relations and set and non-set topological relations. An important property of the Phi-descriptor is its linear time complexity for raster objects, which makes it computationally fast. In this paper, fuzzy functions are designed for the extraction of the "between", "surround" and "among" relations using the Phi-descriptor. The novelty of this model is the use of the Phi-descriptor to extract more complex relations that have not been considered before. Results of this model are compared with those of existing models on standard image data and are found satisfactory.
- 12:00 Electroencephalography Features Extraction and Deep Patterns Analysis for Robotics Learning and Control through Brain-Computer Interface
- Electroencephalography (EEG) has recently been the subject of various medical, clinical and non-clinical uses and applications, due to the extensive amount of information hidden within human EEG brainwaves and brain neural activity. In this article, we present a novel approach to mining healthy, clinically approved EEG brainwaves related to human-vision events, and show how a robotic system can be controlled and moved through such a Brain-Computer Interface (BCI). The adopted approach relies on time-domain features and time-spectral (wavelet) feature extraction to recognize the complicated eye-thought EEG patterns through a recognition algorithm. Two approaches are used for pattern recognition: SVM and a PAC-based Random Forest with 30 trees. Results also show how a robotic system can be controlled using the EEG brainwaves.
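For the 3D-touch gender identification paper above (S8, 11:00), the sketch below strings together discrete-wavelet-transform features and a bagged-tree classifier, as the abstract outlines. The force traces, the wavelet settings and the summary statistics are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: DWT features from touch-force traces feeding a bagged-tree classifier.
import numpy as np
import pywt
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def dwt_features(signal):
    coeffs = pywt.wavedec(signal, "db4", level=3)            # multi-level discrete wavelet transform
    return np.array([np.mean(np.abs(c)) for c in coeffs] +   # magnitude summaries per sub-band
                    [np.std(c) for c in coeffs])

signals = rng.normal(size=(366, 128))          # synthetic stand-in for 185 male + 181 female traces
labels = rng.integers(0, 2, size=366)          # 0 = male, 1 = female (synthetic)
X = np.array([dwt_features(s) for s in signals])

clf = BaggingClassifier(n_estimators=50, random_state=0)     # bagging over decision trees
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```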
S9: Cyber Security-2
- 11:00 A Novel Honeynet Architecture using Software Agents
- As security risks increase worldwide, hosts in any computer network may be taken over by attackers to build a group of compromised machines called a botnet. One of the best technologies for monitoring and analyzing the new techniques and tools of attackers is honeynet technology, which allows security researchers around the world to discover the most recent threats and vulnerabilities of computer networks. Moreover, developments in virtualization technology and software agents make the configuration of a honeynet more productive, flexible and intelligent. In this paper, a new honeynet architecture is proposed, designed around the concepts of software agents and virtualization technology. Its main functions are to capture and monitor attacker activities. A proof-of-concept prototype has been implemented and validated, and the proposed architecture has been tested in a large-scale experiment against a botnet attack, namely a distributed denial of service (DDoS). The validation of the prototype achieved a success rate of 98%, and the large-scale experiment achieved a success rate of 79%.
- 11:20 The Effect of the Choice of Financing on SMES' Post-Entry Performance - Case of Bahrain
- This study investigates the effects of the choice of financing on SMEs in the context of Bahrain. SMEs play a major role in the private sector and have the power to influence countries' economies, especially in emerging markets which have recently witnessed the globalization trend opening their economies to capital inflows. However, information asymmetries between lenders and entrepreneurs raise the issue of the different financing implications on SMEs' post-entry performance. Furthermore, scholars have argued that theories developed through studies conducted on markets in the Western context are challenged by contexts from non-Western settings. While numerous works have studied the implication of financing structures on firm performance in developed markets, especially the U.S.A. and Europe, little work has been done on the effects of the choice of financing on firms in contexts outside the Western world. This investigation is tackled by conducting one-on-one in depth semi-structured interviews with entrepreneurs in the Kingdom of Bahrain. The findings identify three main methods of financing namely debt, subsidy and personal, friends and family financing, and highlight that a flexible and adaptive approach to startup financing is linked to better post-entry performance thereby aiding entrepreneurs, lenders and practitioners to better understand the financing intricacies involved.
- 11:40 Malicious Relay Node Detection with Unsupervised Learning in Amplify-Forward Cooperative Networks
- This paper presents malicious relay node detection in a cooperative network using unsupervised learning based on the received signal samples over the source-to-destination (S-D) link at the destination node. We consider situations in which the relay's possible maliciousness takes the form of regenerative, injection or garbling attacks on the source signal, according to the attack model of the communication. The proposed approach to this attack detection problem is to apply unsupervised machine learning using one-class classifier (OCC) algorithms. Among the algorithms compared, the One-Class Support Vector Machine (OSVM) with a radial basis function (RBF) kernel achieves the best accuracy in detecting certain types of malicious node attacks, and also in identifying a trustworthy relay, using specific features of the received signal's symbol constellation. Results show detection accuracy of about 99% with the SVM-RBF and k-NN learning algorithms for garbling-type relay attacks. The results also suggest that the OCC algorithms considered in this study, with different feature selections, could be effective in detecting other types of relay attacks. (An illustrative one-class SVM sketch follows this session's listings.)
- 12:00 Cloud Multi-tenant Security Architecture
- To meet business needs, organizations need to invest time and budget to scale up their IT infrastructure, including hardware, software and services, yet they are frequently unable to make optimal use of that infrastructure. Cloud computing is a shift that provides computing over the Internet as a service, consisting of highly optimized virtual data centers that provide hardware, software and information resources. Organizations can simply connect to the cloud and use the available resources on a pay-per-use basis, which helps companies avoid capital expenditure. Cloud computing is deployed in a service model consisting of Infrastructure (IaaS), Platform (PaaS) and Software (SaaS). Software-as-a-Service (SaaS) employs a multi-tenancy architecture in which multiple tenant applications can be developed using components of the SaaS infrastructure. However, along with the many benefits come challenges and issues related to the architecture of this model.
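For the malicious relay detection paper above (S9, 11:40), this is a minimal one-class SVM example in the spirit of the abstract: train on features of signals received through a trusted relay and flag departures from that distribution. The constellation features and the attack data are synthetic assumptions, not the authors' signal model.

```python
# Hypothetical sketch: RBF one-class SVM trained on "trusted relay" features, flagging outliers.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
trusted = rng.normal(loc=0.0, scale=1.0, size=(500, 4))    # e.g. I/Q constellation statistics, trusted link
attacked = rng.normal(loc=3.0, scale=2.0, size=(100, 4))   # garbled/injected symbols shift the statistics

occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(trusted)
pred = occ.predict(attacked)                               # +1 = looks normal, -1 = anomaly
print("fraction flagged as attacks:", float(np.mean(pred == -1)))
```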
Monday, September 23 12:20 - 12:40
P2: Prayer
Monday, September 23 12:40 - 14:20
S10: Smart Cities-2
- 12:40 A New Paradigm for a Marketplace of Services: Smart Communities in the IoT Era
- We propose a vision and a new paradigm for a Marketplace of Services as an integral part of a Smart Community Infrastructure. The Smart Communities of the (near) future will provide a large number of services to be offered as utilities and sold on a metered basis. These services will be aggregated and synthesized from a hierarchy of resources produced and shared by the community itself. Smart Community members or visitors will purchase as much or as little of these services as they find suitable to their needs and are billed accordingly. The context and condition of the members and the environment play a major role in service offerings and adaptations. Smart communities have four fundamental characteristics that can be derived immediately from the way they are built: sustainability, resilience, empathy-driven proactive intelligence, and emergent behavior. These fundamental characteristics are a direct consequence of the underlying platform construction and management of the marketplace and its underlying IoT infrastructure. We illustrate our vision using examples of the services that the marketplace may offer. We also highlight some major research challenges that need to be resolved to make our vision a reality.
- 13:00 Smart Library as Learning Hub in Kingdom of Bahrain: A Sustainable Approach
- In smart cities, cyber life is the future of our everyday experience, and there are high expectations for changing the roles of urban design and architecture. Researchers in socio-technical fields face various challenges in implementing smart city initiatives, and designers have to be ready to keep pace with rapid technological change and the requirements of modern life. The emphasis on socio-economic development has to be the priority when planning a smart city. This research therefore presents the concept of a cyber hub for university students in the Kingdom of Bahrain, offering the different facilities students may need for learning, social, economic and artistic activities. It is a place where users can gather and socialize, learn, and carry out their work or activities during the day on a flexible schedule, with the primary aim of developing students' learning skills at different cyber levels and through different approaches. The concept of creating a cyber hub is related to social and informal learning and development; the idea is to assist the users who occupy the spaces and to promote social interaction within the learning environment by providing facilities that make learning and working easier in a better environment. The aim of the hub is thus to offer students an intimate workspace where they can work at flexible times, independent of formal university teaching hours. Smart facilities will support students' needs during the day, for example ordering food and drinks while working. Students will also have various opportunities to find information and knowledge in different forms, whether traditional or digital, such as printed or electronic books in the library. But what can be expected of this hub, and what will it look like? The answers are presented in this research.
- 13:20 A Fault Location System Using GIS and Smart Meters for the LV Distribution System
- This paper presents a proposed fault locating system for the low voltage distribution system in Bahrain using the existing installed smart meters and the geographical information system (GIS). The methods currently used for locating faults in the low voltage distribution system in Bahrain are very time consuming, unreliable and potentially very harmful to the cable connections. Therefore, this paper applies smart meters to fault location, utilizing the existing functionalities of the smart meters in order to make the proposed system cost effective. Simulation using MATLAB/SIMULINK and calculation of the fault location are conducted in order to investigate the possibility of finding the fault location based on the voltage sag readings that the smart meter would measure at each house. During a power failure, the smart meter would use a GSM connection to report the failure to a server, which compares it with all other switched-off smart meters, locates the faulty feeder, and displays it on the geographical map using custom Python code integrated with the program. Initial results from practical testing and implementation of the proposed fault location system are presented.
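The feeder identification step described above can be illustrated with a minimal sketch (not the paper's implementation): offline smart-meter reports are grouped by feeder, and a feeder whose meters have all reported a power failure is flagged; the meter-to-feeder mapping and the report format are assumptions for illustration.

```python
# Minimal sketch: infer the faulty feeder from smart-meter power-failure reports.
from collections import defaultdict

meter_to_feeder = {            # hypothetical mapping, in practice derived from GIS
    "M1": "F1", "M2": "F1", "M3": "F1",
    "M4": "F2", "M5": "F2",
}

def locate_faulty_feeders(offline_meters, meter_to_feeder):
    """Return feeders on which every connected meter reported a power failure."""
    feeder_meters = defaultdict(set)
    for meter, feeder in meter_to_feeder.items():
        feeder_meters[feeder].add(meter)
    offline = set(offline_meters)
    return [f for f, meters in feeder_meters.items() if meters <= offline]

# Example: meters M1-M3 reported a failure over GSM => feeder F1 is suspected.
print(locate_faulty_feeders({"M1", "M2", "M3"}, meter_to_feeder))  # ['F1']
```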
- 13:40 Neural Network-based Control for Wind Turbine System with PMSG
- As the renewable energy market advances, wind energy is being used as a clean source of power generation. PMSG-based wind turbine generation has emerged as a useful technology for wind power harvesting. Due to the intermittency of the wind, an effective MPPT controller is needed that continuously tracks the maximum power and generates a reference angular velocity at the shaft at which the PMSG rotor should operate. This paper proposes a neural network-based control scheme for a wind turbine system with a PMSG. The technique efficiently tracks the maximum power at the optimum turbine speed using the neural network model, which also generates the optimum reference shaft speed for the PMSG rotor. A speed control method is adopted in which a PI controller is fed with the error between the reference shaft speed and the PMSG rotor speed feedback. This makes the controller robust: the PMSG rotor angular speed tracks the reference speed, and the speed and electromagnetic torque follow the variation in wind speed.
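The speed loop described above can be sketched as follows; this is a toy illustration (not the paper's controller), with the gains, time step and first-order plant stand-in chosen arbitrarily.

```python
# Minimal sketch: PI speed loop driving a toy rotor model toward a reference speed.
def pi_speed_controller(kp=2.0, ki=0.5, dt=0.01):
    integral = 0.0
    def step(omega_ref, omega_meas):
        nonlocal integral
        error = omega_ref - omega_meas
        integral += error * dt
        return kp * error + ki * integral   # electromagnetic torque command
    return step

controller = pi_speed_controller()
omega = 0.0                                 # measured rotor speed (toy model)
for _ in range(1000):
    torque_cmd = controller(omega_ref=15.0, omega_meas=omega)
    omega += 0.01 * torque_cmd              # crude first-order plant stand-in
print(round(omega, 2))                      # approaches the 15 rad/s reference
```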
- 14:00 Performance Based Design of Diagrid Tall Buildings for Earthquake Loads
- A triangulated exoskeleton, or diagrid, structural system has emerged as a structurally efficient and architecturally viable solution for tall buildings. The diagrid creates a highly efficient and redundant tube structure by providing a structural network that allows multiple load paths, and it has higher inherent torsional rigidity than most other structural systems. Although engineers have recently started using this structural system for tall buildings, almost all of its applications have been in areas of low seismicity. The main goal of this study is to examine the potential of this highly efficient and economical structural system for tall buildings in high-seismic regions. Three buildings with different heights (82, 64 and 38 stories) and footprints are selected. Detailed performance-based assessments are carried out for earthquake action at various hazard levels for serviceability, survivability, and collapse prevention. The results of the analyses show superior performance of the diagrid structures under seismic loading.
S11: Software Engineering - 2
- 12:40 Web 2.0 Testing Tools: A Compendium
- Providing clients with high quality web applications has long been a major concern for web developers, especially with the increasing diversity of web frameworks and functionalities. In past years, several tools and methodologies have been used for evaluating and measuring the quality of web services. One way of guaranteeing high quality applications is through testing. Testing is an important aspect of every software development process; companies rely on it to bring all their products up to a standardized level of reliable software while ensuring that all client specifications are met. Accordingly, a battery of testing tools, techniques and frameworks has been developed to ensure the quality of the web applications built to serve clients around the clock. In this paper, we survey some of the most widely known tools and models for web testing.
- 13:00 Validating Software Security using Regular Expressions
- In modern society, software security has become an essential part of most software systems. However, validating software security is a challenging task. This paper presents an approach to test and validate software security through static analysis using regular expressions. The aim is to show a complete view of testing software security, which covers all possible issues that may exist in the source code. Moreover, a prototype has been designed and deployed on a server and evaluated using a case study.
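As an illustration of regex-based static checking of this kind, the following is a minimal sketch with a few assumed rules; these patterns are illustrative and are not the paper's rule set or prototype.

```python
# Minimal sketch: scan source text line by line against simple security regexes.
import re

RULES = {
    "hard-coded password": re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
    "use of eval": re.compile(r"\beval\s*\("),
    "SQL built by concatenation": re.compile(
        r"(SELECT|INSERT|UPDATE|DELETE).*(\+|%)\s*\w+", re.IGNORECASE),
}

def scan_source(text):
    """Return (line number, rule name) pairs for every matching line."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = 'password = "secret123"\nquery = "SELECT * FROM users WHERE id=" + user_id\n'
print(scan_source(sample))
# [(1, 'hard-coded password'), (2, 'SQL built by concatenation')]
```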
- 13:20 Machine Learning Models for Software Cost Estimation
- Software cost estimation is a critical task in software project development. It assists project managers and software engineers in planning and managing their resources. However, developing an accurate cost estimation model for software projects is a challenging process. This paper builds a software cost estimation model using a machine learning approach. Different machine learning algorithms are applied to two public datasets to predict software cost in the early stages. Results show that machine learning methods can be used to predict software cost with a high accuracy rate.
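A minimal sketch of the general approach is shown below, fitting standard regression models to predict effort from early project attributes; the synthetic data and feature names are illustrative assumptions, not the paper's datasets or models.

```python
# Minimal sketch: regression-based effort prediction on synthetic project data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.uniform(10, 500, n),   # e.g. adjusted function points (assumed feature)
    rng.integers(2, 20, n),    # e.g. team size (assumed feature)
])
y = 5.0 * X[:, 0] + 30.0 * X[:, 1] + rng.normal(0, 50, n)  # synthetic effort

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, mean_absolute_error(y_te, model.predict(X_te)))
```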
- 13:40 Ground Operations Management using a Data Governance Dashboard
- An incident involving the use of chemical, biological, radiological, and nuclear (CBRN) materials can represent a significant challenge for the emergency services. Physical examination of the scene by hand may not be possible or could be severely restricted due to the presence of hazardous material or the risk of building collapse. This paper presents how the ROCSAFE project addresses these challenges. ROCSAFE is a multidisciplinary research project with advances made across diverse topics, from robotics to sensor technology to analytical and situation awareness software. Its main goal is to fundamentally change how such incidents are assessed and to ensure the safety of crime scene investigators by reducing the need for them to enter dangerous scenes to gather evidence. The paper describes the project architecture, providing a comprehensive and engaging view of the critical data that need to be monitored to support the decision-making process. A customizable situational awareness system is described, using a geo-dashboard interface with data visualization features that provides users with at-a-glance awareness of current performance. In this research, the geo-dashboard is a visualization tool that integrates information from multiple components into a unified display. It enables users to respond to unfolding situations more quickly, initiate alerts and associated response plans more effectively, and escalate or notify others as necessary. The perspective of the end-users is essential to the successful outcome of ROCSAFE; therefore, a significant effort was made to gather and analyze end-user data requirements. A further consideration in the choice of operational scenarios was the viewpoint of the individuals who will use the system.
- 14:00 A Novel Architecture to Verify Offline Hand-written Signature using Convolutional Neural Network
- The handwritten signature plays an important role in legal life for authentication and verification. In offline signature verification the dynamic information is lost, and it is difficult to design an accurate feature extractor that can distinguish between skilled forgeries and genuine signatures. In this paper, a convolutional neural network (CNN) is proposed for signature verification (SV). Unlike the state of the art, the paper considers an authentication-based training model in which the signatures are trained in pairs. Simulation results on the datasets used show that the proposed method achieves 27% (relative) better results than the benchmark schemes. The paper also integrates different data augmentation techniques for the signature data, which further improve the performance of the proposed scheme by 14% (relative). Keywords: handwritten signature, authentication, verification, convolutional neural networks, signature verification, dataset, data augmentation.
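The pair-based training idea can be sketched as follows; the architecture, image size and contrastive-style loss are illustrative assumptions (a small PyTorch embedder trained on signature pairs), not the network proposed in the paper.

```python
# Minimal sketch: pair-based training of a small CNN embedder for signatures.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignatureEmbedder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 16 * 16, 64)  # assumes 64x64 grayscale inputs

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def contrastive_loss(e1, e2, same, margin=1.0):
    """same=1 for genuine-genuine pairs, 0 for genuine-forgery pairs."""
    d = F.pairwise_distance(e1, e2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

model = SignatureEmbedder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch of 8 random image pairs with random labels, standing in for real data.
x1, x2 = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(model(x1), model(x2), same)
loss.backward()
opt.step()
print(float(loss))
```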
S12: Internet of Things & Informatics -3
- 12:40 Optimal Resource and Task Scheduling for IoT
- The Internet of Things (IoT) is a system of interrelated computing devices with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. These devices, whether objects, animals or people provided with unique identifiers (UIDs), are embedded with electronics, Internet connectivity and other forms of hardware, and are generally supported by a Wireless Sensor Network (WSN). Each sensor node is connected through the internet to a gateway node, which acts as a repository of user queries; the queries are preprocessed and separated into spatial and temporal tasks. Each preprocessed task should be executed on a suitable sensor node or group of sensor nodes in order to be effective for the application. We study resource and task scheduling for IoT systems with the aim of minimizing the average data rate and the minimum task-performing rates of IoT devices. To achieve this goal, we propose a hybrid resource and task scheduling algorithm that combines a priority-based non-preemptive algorithm with Ant Colony Optimization (ACO) to prioritize the tasks and find an optimal path for directing each task to the sensor node responsible for its execution. We concentrate on tasks that do not have real-time requirements and can therefore be stored in the task queues of the IoT devices and performed later.
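A simplified sketch of the scheduling step is given below: tasks are dispatched from a priority queue non-preemptively, while the ACO-based path and node selection described in the abstract is abstracted into a placeholder; all names and the load-based selection rule are assumptions, not the authors' algorithm.

```python
# Minimal sketch: priority-based, non-preemptive dispatch of queued IoT tasks.
import heapq
import itertools

counter = itertools.count()  # tie-breaker so equal-priority tasks keep FIFO order

def submit(queue, priority, task):
    heapq.heappush(queue, (priority, next(counter), task))

def choose_node(task, nodes):
    """Placeholder for the ACO path/node selection described in the abstract."""
    return min(nodes, key=lambda n: nodes[n])  # here: pick the least-loaded node

def run(queue, nodes):
    while queue:
        priority, _, task = heapq.heappop(queue)   # lowest value = highest priority
        node = choose_node(task, nodes)
        nodes[node] += 1                           # non-preemptive: runs to completion
        print(f"task {task!r} (priority {priority}) -> {node}")

q, sensor_load = [], {"node-A": 0, "node-B": 0}
submit(q, 2, "temporal: aggregate readings")
submit(q, 1, "spatial: query zone 3")
run(q, sensor_load)
```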
- 14:20 The Perspectives of the Integrated IoT Devices in the Improvement of the Industrial Lab Experience
- IoT devices have a wide spectrum of applications in real-life environments. While these applications vary according to the area covered, finding the optimal scenario in which the devices cover the optimal area is a challenge. In this work, we consider improving the industrial laboratory by transforming it into a smart lab through the deployment of IoT devices. We analyze the trade-offs between different smart lab scenarios, focusing on security and network congestion and their effects on overall performance. For the smart lab case study, we conclude that enabling the security feature (encryption) does not significantly affect the performance of the smart lab when weighed against the benefits of IoT device integration for the overall lab experience, given that the traditional lab suffered from significant time delays.
- 16:00 eGovernment service quality measurement scales: literature review
- The aim of this paper is to investigate the service quality dimensions discussed in the previous literature. Service quality is important nowadays because it enables organizations to understand users' expectations of their services. Using a literature review methodology, the researchers identify the fundamental dimensions that are common to most studies of e-government service quality, such as information quality, interaction quality, efficiency, reliability, transaction transparency, privacy and security.
- 17:40 The Effects of Information Technology and E-Learning Systems on Translation Pedagogy and Productivity of EFL Learners
- This study addresses the increasing demand for reliable translation services, driven by factors including the advent of the electronic text, the prolific production of texts in different disciplines, the increasing communication between individuals from different languages and cultures, and the unprecedented development of new technologies, software and applications (referred to in the literature as CAT tools). It has become clear that human translation alone cannot meet these increasing needs of customers and businesses. In the face of this problem, the study is based on the hypothesis that translation technologies and machine translation systems can be usefully integrated into translation classrooms to improve the translation quality and performance of learners. For the purposes of the study, an experiment was carried out in which 20 Level 6 participants attending the translation course at Prince Sattam Bin Abdulaziz University volunteered to take part in an optional legal translation course. Participants were divided into two groups: an experimental group, whose instruction was supported by CAT tools, and a control group, which was taught using only conventional teaching methods. Results indicate clearly that there was a significant difference between CAT users and non-users in favour of the experimental group (p < .05). It can be concluded that translation systems and software can be usefully employed to improve the translation performance of EFL students. The implications of the study are useful for the translation industry. Integrating translation technologies (CAT) into the learning environment not only imparts the practical and professional skills that qualify graduates well for the translation industry, but also creates an environment that enables them to acquire skills in related disciplines such as terminology, assessment of translation techniques, human-machine interaction, and text analysis. It is therefore suggested that translation instructors adopt blended learning models in which they integrate translation technology with conventional teaching methods, and that they encourage their students to use technology as a way of improving their translation performance. Educational institutions are thus recommended to provide professional training for instructors on the use of language software and translation systems and to make these available to students.
- 19:20 Tourism Recommendation System based on User Reviews
- Recommendation systems have recently become an active research topic. Studies indicate that existing tourism recommendation systems provide misleading recommendations that do not actually meet tourists' expectations. One of the main reasons for this problem is that most of these systems neglect previous user reviews. This paper proposes a tourism recommendation system that integrates the user review element. Based on three factors (number of reviews, rating, and sentiment), the user reviews are analyzed and then used to recommend hotels to tourists.
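One simple way to combine the three factors is a weighted score per hotel, as in the following minimal sketch; the weights, normalization and data fields are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: rank hotels by a weighted mix of review count, rating and sentiment.
hotels = [  # hypothetical aggregated review data
    {"name": "Hotel A", "reviews": 320, "rating": 4.2, "sentiment": 0.71},
    {"name": "Hotel B", "reviews": 45,  "rating": 4.8, "sentiment": 0.55},
    {"name": "Hotel C", "reviews": 980, "rating": 3.9, "sentiment": 0.64},
]

def score(hotel, w_reviews=0.2, w_rating=0.4, w_sentiment=0.4):
    max_reviews = max(h["reviews"] for h in hotels)          # normalize review counts
    return (w_reviews * hotel["reviews"] / max_reviews
            + w_rating * hotel["rating"] / 5.0                # ratings assumed on a 0-5 scale
            + w_sentiment * hotel["sentiment"])               # sentiment assumed in [0, 1]

for h in sorted(hotels, key=score, reverse=True):
    print(h["name"], round(score(h), 3))
```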