2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT) Program

Program with Presenters' Biodata (PDF)
Short Program (PDF)
Speaker: MS Teams Background-1 (Download)
Speaker: MS Teams Background-2 (Download)
Speaker: MS Teams Background-3 (Download)

Sunday, December 20

12:00-12:05 OC-1: Opening Ceremony
12:05-12:10 OC-2: Quran Recitation
12:10-12:20 OC-3: Talk by His Excellency President of the University of Bahrain
12:20-12:40 K-1: Keynote Speech-1: "Innovation toward digital payment"
12:40-12:50 OC-4: Presentation of accepted-paper statistics by the Dean of the IT College
12:50-13:00 K-2: Keynote Speech-2: The role of 5G in a post-pandemic economy
13:00-13:20 B-1: Break 1
13:20-13:50 KS-3: Keynote Speaker-3: "Artificial Intelligence in Cloud Computing and Internet-of-Things"
13:50-14:20 KS-4: Keynote Speaker-4: From Cloud to Fog Computing: Scheduling Real-Time Applications
14:20-15:20 LB: Lunch Break Day-1
15:20-16:40 S1-A: Machine Learning in Finance,
S1-B: Convolutional Neural Network-1,
S1-C: Smart Cities,
S1-D: E-Learning; Multimedia; Educational Technology,
S1-E: Informatics-1
17:00-17:30 P: Prayer
17:30-18:50 S2-A: Machine Learning for Big Data Analytics,
S2-B: Convolutional Neural Network-2,
S2-C: Cyber Security-1,
S2-D: Software Engineering,
S2-E: Informatics -2
19:10-19:30 B2: Break 2
19:30-20:00 VT: Virtual Tour to Bahrain

Monday, December 21

12:00-12:50 PD: Panel Discussion: "Women in Tech"
12:50-13:20 B3: Break 3
13:20-15:00 S3-A: Internet of Things,
S3-B: Cloud Computing & Machine Learning,
S3-C: Telecommunication and Networking,
S3-D: Robotics, Computer Vision, and HCI
15:00-16:00 LB-2: Lunch Break Day-2
16:00-16:30 KS-5: Keynote Speaker-5: Preserving Data/Query Privacy Using Searchable Symmetric Encryption
16:30-17:00 KS-6: Keynote Speaker-6: Network Automation: Challenges and Opportunities
17:00-17:30 P: Prayer
17:30-19:10 S4-A: Cyber Security & Machine Learning,
S4-B: Wireless Sensor Network,
S4-C: Blockchain & Cyber Security-2,
S4-D: Deep & Machine Learning
19:10-19:30 CS: Closing Session

Sunday, December 20

Sunday, December 20 12:00 - 12:05 (Asia/Bahrain)

OC-1: Opening Ceremony

Chair: Faisal Hammad

Sunday, December 20 12:05 - 12:10 (Asia/Bahrain)

OC-2: Quran Recitation

Dr. Ahmed Zeki
Chair: Faisal Hammad

Sunday, December 20 12:10 - 12:20 (Asia/Bahrain)

OC-3: Talk by His Excellency President of the University of Bahrain

Prof. Riyad Hamzah
Chair: Faisal Hammad

Sunday, December 20 12:20 - 12:40 (Asia/Bahrain)

K-1: Keynote Speech-1: "Innovation toward digital payment"

Mr. Abdulwahed Janahi, Chief Executive, The Benefit Company
Chair: Faisal Hammad

Sunday, December 20 12:40 - 12:50 (Asia/Bahrain)

OC-4: Presentation of accepted-paper statistics by the Dean of the IT College

Dr. Lamya Al jasmi
Chair: Faisal Hammad

Sunday, December 20 12:50 - 13:00 (Asia/Bahrain)

K-2: Keynote Speech-2: The role of 5G in a post-pandemic economy

Mr. Anas Shahadi
Chair: Faisal Hammad

This talk covers the characteristics and promise of 5G technology; the macroeconomic benefits of 5G; the 5G value chain and its stakeholders, including the importance of educational institutions in preparing talent for the next wave of economic development; and the potential contribution of 5G use cases to the UN Sustainable Development Goals (SDGs). Stakeholders must align and cooperate to fully realize the socio-economic value that 5G can deliver through its defining key features and to unlock use cases across multiple industry sectors, using the 5G Ecosystem Cycle as a framework that defines key actions stakeholders can take to contribute to the successful deployment of 5G.

Sunday, December 20 13:00 - 13:20 (Asia/Bahrain)

B-1: Break 1

Sunday, December 20 13:20 - 13:50 (Asia/Bahrain)

KS-3: Keynote Speaker-3: "Artificial Intelligence in Cloud Computing and Internet-of-Things"

Prof. Vincenzo Piuri - Professor at the Università degli Studi di Milano, Italy
Chair: Noora Alghatam

Recent years have seen growing interest among users in migrating their applications to Cloud computing and Internet-of-Things environments. However, due to their high complexity, Cloud-based and Internet-of-Things infrastructures need advanced components to support applications and advanced management techniques to increase efficiency. Adaptivity and autonomous learning abilities become extremely useful in supporting the configuration and dynamic adaptation of these infrastructures to the changing needs of users, as well as in creating adaptable applications. This self-adaptation ability is increasingly essential, especially for non-expert managers and for application designers and developers with limited competence in the tools for achieving it. Artificial intelligence is a set of techniques which can greatly improve both the creation of applications and the management of these infrastructures. This talk will discuss the use of artificial intelligence in supporting the creation of applications in cloud and IoT infrastructures, as well as its use in the various aspects of infrastructure management.

Sunday, December 20 13:50 - 14:20 (Asia/Bahrain)

KS-4: Keynote Speaker-4: From Cloud to Fog Computing: Scheduling Real-Time Applications

Prof. Helen Karatza - Professor Emeritus at Aristotle University of Thessaloniki, Greece
Chair: Ahmed Fahad

Cloud computing has been an important area of research for many years now. Running delay-sensitive applications is particularly important in cloud computing, and effective scheduling techniques must be utilized to ensure timeliness. This is achievable thanks to the cloud's high-performance computing capacity for real-time processing. Recent years have seen an expansion of the Internet of Things (IoT). IoT applications generate huge amounts of data, and it is critical to process these data in real time and provide immediate decisions. As a result, fog computing has been introduced as a computing paradigm extending the cloud to the edge of the network, thus reducing the latency of IoT data transmission. The potential of research on cloud and fog computing is strong due to the challenges of dealing with real-time applications in the IoT domain. However, the computational capacity of fog servers is usually restricted, so it is necessary to explore alternative techniques that involve collaboration between cloud and fog resources. Consequently, appropriate scheduling of time-sensitive applications is required to fully exploit the capabilities of cloud and fog computing so that deadlines are met. In this keynote we will present various aspects of cloud and fog computing from the perspective of scheduling real-time applications, and we will conclude with future research directions in the cloud and fog computing areas.

Sunday, December 20 14:20 - 15:20 (Asia/Bahrain)

LB: Lunch Break Day-1

Sunday, December 20 15:20 - 17:00 (Asia/Bahrain)

S1-A: Machine Learning in Finance

Chairs: Ebrahim Abdulla Mattar, Athraa Almosawi
15:20 Fraudulent Transaction Detection in FinTech using Machine Learning Algorithms
With the advancement of e-commerce, online purchases using credit and debit cards have drastically increased. This has caused a surge in credit and debit card fraud, which has become a profoundly significant global issue. Fraud touches every area of our lives and is a growing concern that affects both businesses and customers. Machine learning techniques provide unique and efficient solutions and are applicable to various types of problems. Recently, machine learning algorithms have been widely applied as a data mining technique for classification problems. In this paper, a binary classification problem is considered in which a transaction is classified as either fraudulent or legitimate. The goal is to classify the transactions using five different machine learning algorithms. The transaction dataset (Task1 and Task2) is preprocessed, and then the SGD, DT, RF, J48, and IBk machine learning classifiers are applied. After applying the classifiers, the results are compared to analyze which classifier performs best. Based on the experimental results, the accuracy of all five classifiers on the Task1 and Task2 datasets ranges between 97.78% and 98.1%, with no major difference. As the dataset is highly imbalanced, the kappa statistic is also considered. For both datasets, the RF classifier had the highest kappa statistic, whereas SGD and J48 had the lowest for Task1 and Task2, respectively. Other evaluation metrics were also considered for evaluating the performance of the applied classifiers. Overall, these classifiers achieved similar results on the Task2 dataset. As negative kappa statistic and MCC values were obtained, the SGD classifier had the worst results on the Task2 dataset. Based on evaluation criteria such as the kappa statistic and MCC values, RF outperformed the others on both datasets.
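As a hedged illustration (not the authors' exact setup, which used Weka), the sketch below reproduces this kind of comparison in Python with scikit-learn analogues of the five classifiers and scores them with accuracy plus Cohen's kappa and MCC, the metrics the abstract leans on for imbalanced data; the dataset here is synthetic, not the paper's Task1/Task2 data.

    # Sketch: compare classifier analogues on an imbalanced synthetic dataset.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import SGDClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score, matthews_corrcoef

    # Imbalanced stand-in for the transaction data: roughly 2% "fraud".
    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    classifiers = {
        "SGD": SGDClassifier(random_state=0),
        "DT/J48-like": DecisionTreeClassifier(random_state=0),
        "RF": RandomForestClassifier(n_estimators=100, random_state=0),
        "IBk (kNN)": KNeighborsClassifier(n_neighbors=5),
    }
    for name, clf in classifiers.items():
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        # On imbalanced data, kappa and MCC are more informative than raw accuracy.
        print(f"{name}: acc={accuracy_score(y_te, pred):.4f} "
              f"kappa={cohen_kappa_score(y_te, pred):.4f} "
              f"mcc={matthews_corrcoef(y_te, pred):.4f}")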
15:40 Intrusion Detection System using Feature Selection With Clustering and Classification Machine Learning Algorithms on the UNSW-NB15 dataset
The identification of malicious network traffic through intrusion detection systems (IDS) is very challenging, since malicious traffic can masquerade as normal protocol traffic or legitimate access. In this paper, four different algorithms are used for the classification of cyberattacks on the UNSW-NB15 dataset: naive Bayes (NB), Random Forest (RF), J48, and ZeroR. In addition, the K-means and Expectation Maximization (EM) clustering algorithms are used to cluster the UNSW-NB15 dataset into two clusters depending on the target attribute: attack or normal network traffic. To develop an optimal subset of features, Correlation-based Feature Selection (CFS) is applied before the above classification and clustering techniques. These methods provide an efficient tool for studying and analyzing intrusion detection in large networks. The results show that the RF and J48 algorithms performed best, with accuracies of 97.59% and 93.78%, respectively.
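scikit-learn does not ship a CFS implementation, so the hedged sketch below substitutes a simple correlation ranking against the label before clustering a synthetic stand-in feature matrix into two groups, with KMeans mirroring the K-means step and GaussianMixture playing the role of EM clustering.

    # Sketch: correlation-ranked feature selection (a stand-in for CFS),
    # then two-cluster K-means and EM (Gaussian mixture) clustering.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_all = rng.standard_normal((500, 40))            # stand-in for UNSW-NB15 features
    y = (X_all[:, 0] + X_all[:, 1] > 0).astype(int)   # stand-in attack/normal label

    # Rank features by absolute correlation with the label and keep the top 10.
    corr = np.array([abs(np.corrcoef(X_all[:, j], y)[0, 1]) for j in range(X_all.shape[1])])
    X = StandardScaler().fit_transform(X_all[:, np.argsort(corr)[-10:]])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    em = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
    # Naive agreement check; cluster labels are only defined up to permutation.
    print("K-means/EM cluster agreement:", np.mean(km == em))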
16:00 A Novel Design of a Fully Seamless Payment Experience
A novel design of a fully seamless and cashless payment gateway is outlined, along with the challenges that arise from establishing a seamless payment experience within the existing monetary and e-payment systems in the Kingdom of Bahrain. The proposed design integrates various existing technologies such as RFID, cloud computing, and in-memory data structures to build a user experience that is seamless and requires no physical interaction to complete the payment. The gateway also uses two-factor authentication, applied seamlessly, to verify the buyer's identity. After a discussion of the design principles, the underlying hardware and software architecture, and the means of integrating internal and external systems, a list of challenges and lessons learned is discussed.
16:20 Evaluation of Graphical Password Schemes in Terms of Attack Resistance and Usability
User authentication is an important component of security. Several mechanisms for authentication are in use, such as alphanumeric usernames and passwords. However, due to the well-known weaknesses of this method, graphics-based passwords were suggested as an alternative, owing to humans' ability to remember images more quickly and for longer periods. This study comprises comprehensive research into graphical password schemes and evaluates each of the available schemes in two main areas: attack resistance and usability. In the end, it also provides an answer to the question "Are graphical passwords more secure than alphanumeric passwords?"
16:40 Predicting Price of Daily Commodities using Machine Learning
Daily commodities are necessities that take up a large part of the product market. Fluctuation in the price of daily commodities is an apparent phenomenon that has a strong bearing on the cost of living. Early price prediction can help control prices by monitoring and adjusting the market beforehand so that the commodity market runs stably, and suppliers and manufacturers can choose to produce or supply commodities accordingly. It helps balance inventory and profitability as well as improve availability to consumers; thus, it is directly related to the interests of consumers and producers. The selection of suitable and favorable algorithms is one of the most practiced studies in the field of data forecasting: it takes advantage of the empirical evidence at hand to choose the most appropriate model, since no single model can be regarded as the best. In this study, we performed an extensive and comprehensive experimental evaluation of commodity price prediction using state-of-the-art machine learning algorithms.

S1-B: Convolutional Neural Network-1

Chairs: Alauddin Yousif Al-Omary, Ahmed M. Zeki
15:20 Real Time AI-Based Pipeline Inspection using Drone for Oil and Gas Industries in Bahrain
The inspection process of oil and gas platforms is highly crucial, and multiple factors need to be considered in such a process: cost, safety, and environmental impact are all crucial. Current processes for conducting such inspections in Bahrain rely mainly on human interaction. These processes are considered costly and risky, and may harm the environment at some point. Furthermore, there are conditions in which a human cannot visualize some types of faults with the naked eye. This research aims to employ an unmanned system in the inspection process of oil and gas platforms in Bahrain. The proposed system is a drone equipped with a thermal camera that can monitor oil and gas pipelines to detect leakages and cracks in pipelines in remote and risky areas. The system uses AI-based on-board processing for leakage detection. Moreover, the implemented system has a high-accuracy classifier implemented on-board. This classifier is accelerated to achieve real-time processing performance and is therefore implemented using parallel processing in hardware. The system provides real-time alerts with less than 100 ms delay. This research aims to reduce the total current inspection cost and the time required for alerting on pipeline leakage.
15:40 Reinforcement Learning for Physics-Based Competitive Games
Physics-based games are vast in terms of possible state spaces. There are many strategies that can be implemented in competitive games such as playing passively and waiting for the opponent to make a mistake, playing aggressively to force mistakes from the opponent, or even using environmental objects to an agent's advantage. The vastness of possibilities makes it difficult for a programmer to account for all these situations and create a rule-based intelligent and believable hard-coded AI agent. This project seeks to take advantage of reinforcement learning to create agents that can adapt to dynamically changing physics-based environments such as the example of competitive vehicular soccer games. It seeks to produce believable agents that perform intrinsic behaviors such as defending their goal and attacking the ball using reward functions. Through trial-and-error, the reward function is modified to progressively form behavioral patterns that improve in performance. The performance tests prove that a reward function that considers different state space parameters can produce better performing agents compared to ones with a less defined reward function and state space. Moreover, the final agent trained through the experiments has proved to be believable and hard to distinguish from a human player.
16:00 Prediction of Traffic Crash Severity Using Deep Neural Networks: A Comparative Study
The World Health Organization (WHO) reports that millions of people are killed or injured in road traffic crashes (RTCs). The consequences of the increasing rate of traffic crashes include significant social and economic welfare loss. The severity of an RTC is an important element for investigating and addressing this welfare loss. Accurate prediction of RTC severity is beneficial to trauma centers, as it generates crucial information that can be used to take the actions required to reduce the aftermath of crashes. This study aims to evaluate the performance of a deep neural network (DNN) in predicting the severity of traffic crashes using attributes that can be identified quickly at crash sites. Moreover, the DNN model's performance is compared with that of a support vector machine (SVM) model, which is widely used for traffic crash severity prediction. Compared to the SVM, the DNN was found to be superior in predicting RTC severity, with a prediction accuracy and F1 score of 95% and 93%, respectively.
16:20 Compression Techniques for Handwritten Digit Recognition
Compressing images before recognition brings many benefits, including efficient computation, compact models, and optimal memory utilization. Several techniques for the compression of handwritten digits have been investigated and implemented. This paper presents three compression techniques used in signal processing for compressing handwritten digit images: the Discrete Cosine Transform (DCT), the Discrete Sine Transform (DST), and the Wavelet Transform (WT). These techniques are evaluated for their ability to compress the digit images while retaining the useful information needed for subsequently classifying them. Experiments conducted on the publicly available MNIST dataset show the effectiveness of the techniques. With the presented techniques, we were able to compress the original images by 48.98%, 71.30%, and 87.24% while reducing accuracy by only 1.413%, 3.187%, and 7.238%, respectively, on an independent test set.
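As an illustration of the idea, not the authors' exact pipeline, the sketch below compresses a 28x28 digit image by keeping only a low-frequency block of its 2-D DCT coefficients and reconstructing from them; the size of the retained block controls the compression/accuracy trade-off reported above.

    # Sketch: DCT-based compression of a 28x28 grayscale digit image.
    import numpy as np
    from scipy.fft import dctn, idctn

    def compress_digit(img, keep=14):
        """Keep only the top-left keep x keep low-frequency DCT coefficients."""
        coeffs = dctn(img, norm="ortho")
        truncated = np.zeros_like(coeffs)
        truncated[:keep, :keep] = coeffs[:keep, :keep]
        return idctn(truncated, norm="ortho")

    img = np.random.rand(28, 28)            # stand-in for an MNIST digit
    recon = compress_digit(img, keep=14)    # 14x14 of 28x28 = 25% of coefficients kept
    print("reconstruction error:", np.linalg.norm(img - recon))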
16:40 Neural Networks Representation For Semantic Networks
The semantic network is one of the best-known knowledge representation approaches. In many cases knowledge is inaccurate or incomplete; such problems can be overcome through machine learning. In this paper we present an approach for converting semantic networks to neural networks so that they are ready for the machine learning process. The conversion process considers all possible concepts and relations in the semantic network, whether they already exist or not. The proposed neural network structure has several layers, such as a concepts layer, a relations layer, and a combined concepts-and-relations layer. For the learning process, independent training of several neural networks is proposed.

S1-C: Smart Cities

Chairs: Isa Salman Qamber, Ehab Juma Adwan
15:20 Capacity Margin Probabilities Neuro-Fuzzy Model Development and LOLE Calculation
Calculating an appropriate loss-of-load term is necessary for assessing power system reliability. The current study discusses the Loss of Load Expectation (LOLE) of the targeted system, which is assessed using the Adaptive Neuro-Fuzzy Inference System (ANFIS) method. The LOLE is calculated from the capacity margin probabilities of the considered power system. The considered system has a very low LOLE, which means it is a reliable system. The novelty of the present study lies in avoiding blackouts in a power system and obtaining a highly reliable system, which helps meet the energy demand of different sectors at any required period. Furthermore, the developed model will support national economic development: in the country under consideration, the results obtained help reduce capital investment and limit the equipment installed and the load expectation. In conclusion, the neuro-fuzzy approach applied to power system development and planning is one of the most accurate methods.
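For background (standard reliability-textbook notation, not necessarily the paper's exact formulation), the loss of load expectation is conventionally computed from the capacity outage probability table as

    LOLE = \sum_{k=1}^{N} p_k \, t_k

where p_k is the probability of capacity outage state k and t_k is the time during which that outage leaves the capacity margin negative (load exceeding available capacity); a small LOLE therefore corresponds to the highly reliable system described above.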
15:40 Monetary Benefits of Solar Energy for Smart Cities Development
As cities expand, so does their energy use, and as cities strive to become "smart," renewable solar and wind power can play a crucial role in helping them achieve their goals. This research paper therefore analyzes how renewable energy affects household energy costs, which in turn contributes toward smart city development. For this purpose, data was collected from 100 households using a structured questionnaire through purposive sampling. A linear regression model was applied to test the hypotheses, and the findings indicate that the use of solar energy has a negative and significant effect on household energy cost in Bahrain. Moreover, the empirical findings confirm that total household income significantly and positively moderates the relation between solar energy usage and household energy cost, whereas family size and average education moderate the relationship negatively. The study therefore suggests that the government should encourage future buildings, especially public buildings and housing developments, to be fitted with rooftop solar panels to cut energy costs and address environmental issues.
16:00 Design and Implementation of Smart Home using WSN and IoT Technologies
Internet of Things (IoT) and Wireless Sensor Network (WSN) technologies can be used to implement a smart home. IoT connects home devices to the internet through Wi-Fi so that they can be remotely controlled and monitored. WSN technology clusters the sensors and actuators that sense and collect data from different parts of the smart home and gathers the data at a central location. In this paper, a smart home design and implementation are introduced using IoT and WSN technologies. The design was first simulated using the Cisco Packet Tracer simulation software, and then the Raspberry Pi mini-computer board was used for the hardware implementation. Different devices and sensors used to implement smart home safety, security, and control functions were simulated, implemented, and tested to check their reliability in implementing the smart home.
16:20 A Secured and Authenticated State Estimation Approach to Protect Measurements in Smart Grids
Many sectors depend vitally on the electric power supply, so any interruption in electric power affects operations as well as the wider ecosystem. Cyber-attacks on power systems focus on smart grid (SG) vulnerabilities to cause a partial or total blackout. The main security challenge in the network security of the SG is the false data injection (FDI) attack, in which the attacker tries to modify the transmitted measurements via SG objects like smart meters and buses. A well-designed protection scheme for SG authentication is still a daunting task. State estimation (SE) is a significant feature for detecting errors in modern SGs, contributing to both the management and control of power grids. A new architecture to address FDI attacks is proposed in this study. First, the sensors capture measurements from the power grid and encrypt them using an elliptic curve cryptography algorithm; the measurements are then transmitted to a centralised aggregator, which is responsible for the SE results. Second, after obtaining the measurements, the aggregator estimates the state using the weighted least-squares SE method with the aid of an improved particle swarm optimisation algorithm. All requests received by the control server are authenticated to ensure that they are sent from an approved aggregator. The proposed architecture was evaluated on the IEEE 14-bus system, and the findings show positive performance in terms of minimising the estimation error.
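As background on the estimation step (standard notation, not the paper's), the weighted least-squares state estimator chooses the state vector that minimizes the weighted residual between the measurement vector z and the measurement model h(x):

    J(x) = [z - h(x)]^{\top} R^{-1} [z - h(x)], \qquad \hat{x} = \arg\min_x J(x)

where R is the covariance matrix of the measurement errors. The improved particle swarm optimisation mentioned in the abstract would search for this minimizer in place of (or alongside) the classical Gauss-Newton iteration.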
16:40 Multi-Input Multi-Output DC-DC Converter Network For Hybrid Renewable Energy Applications
To meet increasing energy demands using eco-friendly generation methods instead of conventional hazardous methods that have caused severe damage globally, hybrid renewable energy sources (HRES) are utilized for energy generation, developing smart cities and making power systems efficient. HRES systems are designed considering advanced energy-consumption requirements. As technology evolves, power-saving appliances that operate on low-level DC voltage have been developed. Since different appliances have different DC voltage ratings, multi-level DC voltages are often required to operate them efficiently. Conventional power systems offer a single-input single-output (SISO) configuration that cannot fulfill the power requirements of domestic and commercial appliances, whereas a multi-input multi-output (MIMO) configuration can meet multi-level voltage requirements. DC-DC converters are responsible for delivering stable output power in HRES systems. This paper presents the implementation of a MIMO network using buck-boost converters. The proposed system is simulated in MATLAB Simulink under all possible scenarios. To control the operation of each buck-boost converter, an individual control technique is integrated to sustain the desired output value during intermittent conditions: a proportional-integral (PI) controller regulates the operation of the buck-boost converter under different input and output settings.
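For reference, the ideal steady-state relation that each PI loop regulates around in a buck-boost converter is the standard (inverting) conversion ratio, stated here as background rather than taken from the paper:

    V_{out} = -\frac{D}{1-D} \, V_{in}

where D is the switch duty cycle; |V_out| < V_in for D < 0.5 (buck) and |V_out| > V_in for D > 0.5 (boost), which is what allows a single converter topology to serve the multi-level DC outputs described above.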

Sunday, December 20 15:20 - 16:40 (Asia/Bahrain)

S1-D: E-Learning; Multimedia; Educational Technology

Chair: Ali H Zolait
15:20 Barriers to the Adoption of Technology in Learning and Assessment of Undergraduate Architecture Students
The current study examines the barriers to the adoption of technology in the learning and assessment of architectural courses in an architectural programme approved by the Council of Architecture, India. The research identifies and validates five barriers, namely technological barriers, interaction barriers, evaluation constraints, time risks, and psychological barriers. Data was collected through a self-administered, structured questionnaire targeting 311 students pursuing an undergraduate programme in reputable architecture schools of two popular private universities in north India. Confirmatory Factor Analysis (CFA) was applied to assess validity and composite reliability. To examine the hypothesized relationships, path analysis was carried out using Structural Equation Modelling (SEM). The findings reveal that time risk emerged as the strongest barrier, followed by interaction and technology risks, respectively. In contrast, evaluation risk had the least influence on the intention to adopt online teaching and assessment and, surprisingly, psychological risk had an insignificant relationship. This research aims to understand the factors hindering the adoption and assessment of online learning in the wake of COVID-19, and it provides valuable insights for architecture schools to overcome these barriers and adopt online teaching and learning effectively.
15:40 Digital Media and Students' AP Improvement: An Empirical Investigation of Social TV
Social media has been integrated into traditional TV, improving the learning, entertainment, and communication process. Social TV (STV) is a new learning and connectivity medium built on interactive media platforms. This study highlights the impact of STV on students' academic performance (AP) in Jordan using a quantitative approach and a selected sample (n=516) of university students. The findings reveal a robust and significant relationship between STV and students' AP: STV significantly improved their interaction and learning experiences. Students were able to learn new things from watching STV and shared their experiences with others through online platforms. The study proposes a research model and assesses it using the PLS-SEM technique.
16:00 Arab Film TV School
In the world of distance education and electronic exams, I proposed the idea of establishing a school to teach the arts of cinema and television with the Egyptian Ministry of Culture's Cultural Development Fund, as a pioneering Egyptian experience in distance learning addressed to all Arabic speakers around the world, especially young people in the governorates who dream of studying these arts but whose social and material circumstances mostly prevent them from moving even to Cairo. It was important that its materials be taught in Arabic, so that any citizen of the Arab world who is interested in cinematic or television work, or intends to join it, can obtain the information easily and conveniently. I consider its creation a cinematic and televisual cultural development for every connoisseur, hobbyist, and professional of these two arts, with importance and influence for the large numbers who work in these two fields and have so far received no formal study. The first free virtual school on the Internet was opened to teach the cinematic and television arts: available at any hour of the day, without a building or a lecture hall, the information reaches the student over modem lines and contains everything of interest in this field, equipping him with sufficient information about the technical and technological aspects of these arts. The school focuses on teaching the arts of scriptwriting, directing, production, set design, cinematography, editing, sound, and animation, and every piece of information is explained in multimedia (still images, moving images, audio, text, and computer graphics) to ensure that the information is communicated simply to all students and those interested in these arts.
16:20 Influence of Work-based Learning on Students' Ethical Orientation
Cooperative education programs often provide college students' first interaction with the professional workplace, where they develop a pre-professional identity and, in particular, develop and conceptualize knowledge of the accountability, expectations, attitudes, beliefs, and ethical values of their future profession. This study scrutinizes the influence of cooperative education, as a means of work-based learning, on the ethical perceptions of Saudi business students, by investigating differences in students' perceptions according to their work-based experience, academic major, and formal ethics education. The study hypotheses were tested on a sample of 234 students at the KFUPM Business School in Saudi Arabia. The results show an evolution of ethical perceptions, indicating a maturation of ethical behavior due to exposure to the business environment rather than practical training or any other single factor alone. They also show the insignificance of formal ethics education, as students' ethical perceptions develop during college life toward society's expectations.

S1-E: Informatics-1

Chair: Jihene Kaabi
15:20 Parametric Modeling of the Cost of Power Plant Projects
Estimating a realistic budget for complex power plant projects is always challenging, which results in cost overruns of such projects worldwide. Many factors affect the cost of power plant projects; among others, the effects of a project's specific attributes and a country's economic parameters are largely ignored in the existing literature. Thus, the broader aim of this study is to analyze the effects of a project's specific attributes and the country's economic parameters on the costs of different power plant projects. Accordingly, this study first analyzes the development trends of power plant projects in Bangladesh and determines the factors affecting power plant project costs using parametric analysis. Following that, the effects of project attributes and the country's economic parameters on project budgets are quantified using multiple linear regression models. Finally, a parametric model is developed for predicting the cost of a power plant project. The study outcomes will help policy planners and top-level decision-makers prepare budgets for future power plant projects in Bangladesh and economically similar countries.
15:40 Future Job Market of Information Technology in the Kingdom of Bahrain
The College of Information Technology at the University of Bahrain conducted a job market study to investigate the information technology job market needs of the Kingdom of Bahrain. Twelve fields were identified as the most needed by the global IT job market, and the most important job roles were identified for each field. An online survey was distributed to employers at different key organizations in Bahrain. The study revealed that the fields of IT Systems and Projects Management; Security; and System Analysis, Design, and Development are the fields most in demand for employment in Bahrain. The required job roles for each of those fields were also identified.
16:00 A Software trigger based synchronization for multipurpose distributed acquisition systems
Synchronization is one of the key aspects of, and a requirement for, modern-day technologies. It is a requirement for almost every sector, including power, health, retail, navigation, and banking. Synchronization is also important when using distributed acquisition systems: synchronous acquisition is strictly needed in applications like Phasor Measurement Units (PMUs) and wide-area sensor networks. Since the data from the sensors are all collectively processed by a central unit, it is important that the data from all sensors be synchronized. The concept of synchronized sampling in distributed acquisition systems therefore forms an interesting research area. Numerous research papers have discussed different methods for achieving it, but in most of them the technology is specific to a particular application. This paper presents an initial attempt at a solution based on a PTP-driven software trigger that could potentially be used for multiple applications.
16:20 Generating Object Placements for Optimum Exploration and Unpredictability in Medium-Coupling Educational Games
Educational games are powerful tools in the era of Education 4.0. However, their applications are hampered by many issues, including high development costs. Procedural content generation (PCG) and medium coupling are two potent methods for reducing an educational game's development cost; however, their joint application is under-studied. In this study, we explored the topic through a PCG application for generating placements of objects representing elements of learning content in a game map. The content consisted of correct answers, which the player must gather in the correct order, and wrong answers, which the player must avoid. We employed an evolutionary algorithm in three stages of the generation: it first generated a minimal set of correct answer element objects (CAEOs), followed by generating a copy of the set, and then a set of wrong answer element objects (WAEOs). We employed and tested a fitness function that calculated the mean and standard deviation of object pair distances. The results show that the generation algorithm is capable of generating CAEOs and WAEOs distributions that are unpredictable and encourage exploration. We find that multiplying the fitness function's standard deviation variable by a specific value, which depends on how many objects are to be generated, is crucial to the algorithm's success. We also discuss the limitations of this study and directions for future ones.
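A minimal sketch of the kind of fitness function described above, assuming placements are 2-D coordinates; the weight on the standard-deviation term stands in for the paper's object-count-dependent multiplier and is purely illustrative.

    # Sketch: fitness of an object placement as mean and weighted standard
    # deviation of pairwise distances, as described in the abstract.
    from itertools import combinations
    import math

    def fitness(placements, std_weight):
        dists = [math.dist(a, b) for a, b in combinations(placements, 2)]
        mean = sum(dists) / len(dists)
        std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
        # Larger mean spacing encourages exploration; the weighted std term
        # penalizes overly regular (hence predictable) layouts.
        return mean - std_weight * std

    print(fitness([(0, 0), (3, 4), (6, 1), (2, 7)], std_weight=2.0))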

Sunday, December 20 17:00 - 17:30 (Asia/Bahrain)

P: Prayer

Sunday, December 20 17:30 - 19:10 (Asia/Bahrain)

S2-A: Machine Learning for Big Data Analytics

Chairs: Riyadh Ksantini, Nabil Benamar
17:30 Towards harnessing based learning algorithms for tweets sentiment analysis
In the digital era of information, the number of web users and the volume of data generated and exchanged over the internet have drastically increased. Sentiment analysis and opinion mining are open research fields with various everyday-life applications; Twitter, for example, is flooded with opinions, emotions, views, and discussions among different communities across the world, and tweets relate to various domains such as the stock market, product opinions, and political elections. In this study, a classification problem is considered over a given set of tweets with various features and diverse opinions. The goal is to obtain expressions of opinion representing a target feature and to categorize these tweets as positive, neutral, or negative in sentiment using different machine learning techniques. The Sentiment140 (STS-Test) dataset, very commonly used for research purposes, is preprocessed. The studied machine learning classifiers are Random Forest, libSVM, IBk (kNN), decision table, and AdaBoostM1. Several evaluation measures are used to assess the performance of each studied classifier on the selected dataset, including accuracy, precision, recall, and F-measure; the confusion matrix is also obtained for each classifier. The results reveal that the RF classifier obtained the best accuracy, whereas the AdaBoostM1 classifier obtained the worst accuracy when tested on the STS-Test dataset.
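A hedged, minimal stand-in for such a pipeline: TF-IDF features and two of the classifiers named above (Random Forest and AdaBoost, the scikit-learn counterpart of AdaBoostM1), evaluated with a confusion matrix and the same precision/recall/F-measure report; the tweets and labels are placeholders, not the STS-Test data.

    # Sketch: three-class tweet sentiment classification with TF-IDF features.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
    from sklearn.metrics import classification_report, confusion_matrix

    tweets = ["great product, love it!", "worst service ever", "meeting moved to noon",
              "so happy with this phone", "terrible update, very slow", "new episode airs today"]
    labels = ["positive", "negative", "neutral",
              "positive", "negative", "neutral"]

    X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(tweets)
    for clf in (RandomForestClassifier(random_state=0), AdaBoostClassifier(random_state=0)):
        pred = clf.fit(X, labels).predict(X)      # train-set predictions, for illustration
        print(type(clf).__name__)
        print(confusion_matrix(labels, pred))     # per-class errors
        print(classification_report(labels, pred, zero_division=0))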
17:50 Support Vector Regression based Direction of Arrival Estimation of an Acoustic Source
The direction-of-arrival (DOA) estimation of an acoustic source is instrumental in many applications such as surveillance, robotics, and defense. This paper proposes a DOA estimation technique using a support vector regression (SVR) machine-learning model trained on signals acquired from a uniform linear array (ULA) of microphones. The SVR model is trained using the correlation coefficients of the signals at different microphones as its features. The root-mean-square angular error (RMSAE) is used to compare the performance of SVR with that of Delay-and-Sum (DAS) beamforming, multivariate linear regression (MLR), and multivariate curvilinear regression (MCR). The results show that the SVR model outperforms the DAS beamforming method as well as the other regression models, viz. MLR and MCR.
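A minimal sketch of the regression step under stated assumptions: the features are the pairwise correlation coefficients of (here, random stand-in) microphone signals and the target is the arrival angle; the array size, signals, and kernel settings are illustrative, not the paper's.

    # Sketch: SVR maps pairwise microphone correlation coefficients to a DOA.
    import numpy as np
    from itertools import combinations
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    def features(signals):
        """Correlation coefficient of every microphone pair (the SVR features)."""
        return np.array([np.corrcoef(signals[i], signals[j])[0, 1]
                         for i, j in combinations(range(len(signals)), 2)])

    # Toy training data: random 4-microphone snapshots with placeholder angles.
    X = np.array([features(rng.standard_normal((4, 256))) for _ in range(50)])
    y = rng.uniform(-90, 90, size=50)           # placeholder DOA labels in degrees

    model = SVR(kernel="rbf", C=10.0).fit(X, y)
    print(model.predict(X[:1]))                 # estimated angle for one snapshot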
18:10 DBMS, NoSQL and Securing Data: the relationship and the recommendation
In modern times, database system usage keeps increasing, and databases have become an essential part of our lives for storing records. As technology advances, databases receive updates and new, improved methods; alongside these, risks emerge and increase if they are left unresolved or unnoticed. Such risks can lead to data availability, integrity, and confidentiality being broken and data being leaked into the hands of cybercriminals. This paper examines two types of databases, analyzes their security, and proposes a solution for securing and maintaining the availability, integrity, and confidentiality of the data stored in them.
18:30 An overview on Big Data Mining Using Evolutionary Techniques
Big Data processing suffers from several limitations due to its magnitude, making it time-consuming to run any kind of analysis, such as data mining. Evolutionary Algorithms (EAs) are metaheuristic optimization algorithms inspired by the natural evolution of populations, such as the Genetic Algorithm, Artificial Bee Colony, Artificial Ant Colony, and other swarm intelligence algorithms. EAs have recently been used to overcome Big Data limitations, especially memory consumption and long execution times. This paper provides an overview of recent research papers that utilize evolutionary algorithms to deal with optimization problems related to Big Data mining, such as clustering, classification, and feature selection.
18:50 Measuring Performance Portability of Stencil Kernels on CPUs and GPUs
Heterogeneous computing systems are common nowadays in high-performance computing farms, where different architectures are used to accelerate applications with demanding computation requirements. However, these different architectures introduce new programming models that require new implementations and optimisations. Performance portability frameworks are therefore used to increase the productivity of application developers in heterogeneous environments: they allow users to create one implementation that can be compiled to target different architectures while preserving acceptable performance. In this work, we measure several performance-portable implementations of the stencil algorithm on CPUs and GPUs. We first choose different performance portability frameworks to implement the algorithm, then choose suitable methodologies to measure the performance portability of the implementations. The measurements indicate which architecture is more suitable for the stencil algorithm and allow the different performance portability frameworks to be compared against each other.
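One widely used formalization in this literature, possibly among the methodologies the authors adopted (the abstract does not say), is the harmonic-mean performance portability metric of Pennycook et al.:

    PP(a, p, H) = \frac{|H|}{\sum_{i \in H} \frac{1}{e_i(a, p)}} if application a is supported on every platform i \in H, and 0 otherwise,

where H is the set of platforms (here, the CPUs and GPUs) and e_i(a, p) is the efficiency achieved by application a solving problem p on platform i, measured either as architectural efficiency or as a fraction of the best-known performance.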

S2-B: Convolutional Neural Network-2

Chairs: Hala Hatoum, Salah Al-Majeed
17:30 Toward Hybrid Deep Convolutional Neural Network Architectures For Medical Image Processing
Artificial Intelligence (AI) has attracted great interest for improving systems and mechanisms. The power of AI is evident in its latest technologies, including Deep Learning (DL), which has proven its value in image processing. The enhancement of image treatment and analysis, particularly medical imaging, has become one of the most important steps toward improving systems in applications such as medical treatment, analysis, and prognosis. The deep Convolutional Neural Network (CNN) is one of the most widely applied DL approaches in medical imaging, and covers more than one architecture with historically proven performance. In this paper, an analytical review of selected CNN architectures, including ResNet, DenseNet, and wider networks, particularly Inception-V4, is presented. Toward the need for a hybrid algorithm, a proposed architecture composed of two different CNNs is defined, focusing mainly on the respective strengths of the DenseNet and Inception-V4 networks.
17:50 Convolutional Neural Network with Attention Modules for Pneumonia Detection
In 2017, pneumonia was the primary diagnosis in 1.3 million visits to the Emergency Department (ED) in the United States. The mortality rate was estimated at 5%-10% of hospitalized patients, rising to 30% for severe cases admitted to the Intensive Care Unit (ICU). Among all cases admitted to the ED, 30% were misdiagnosed and did not suffer from pneumonia, which flags the need for a more accurate diagnostic method. Several methods for pneumonia detection have recently been developed using Artificial Intelligence (AI) in general and deep neural networks in particular. However, the significant limitations and concerns about the generalizability of such models, and the barriers facing the employment of this technology in clinical practice, are worth acknowledging. In this paper, an attention model is used with a Convolutional Neural Network (CNN) for lung pneumonia diagnosis. The backbone of the model is a ResNet50 architecture with an added dual attention layer. The model was trained on a chest X-ray dataset with the aim of chest pneumonia classification. The model achieved an average validation accuracy of 97.82% and an AUROC of 0.98842 on our split with cross-validation. On the original split, accuracy was 77.63% and AUROC was 0.7967 on the official test set. In summary, the incorporation of established computer vision techniques such as attention modules appears to be a promising approach for advancing medical image analysis.
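The abstract does not specify the dual attention layer's design; as a hedged illustration of attaching attention to a ResNet50 backbone, the Keras sketch below inserts a simple squeeze-and-excitation-style channel attention block before the classification head (weights=None keeps the sketch self-contained; in practice ImageNet weights would be loaded for transfer learning).

    # Sketch: ResNet50 backbone + channel-attention block for binary pneumonia
    # classification (illustrative, not the paper's dual-attention design).
    import tensorflow as tf
    from tensorflow.keras import layers

    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights=None, input_shape=(224, 224, 3))
    x = backbone.output                                # (batch, 7, 7, 2048) features

    # Squeeze-and-excitation: reweight channels by learned gates in [0, 1].
    se = layers.GlobalAveragePooling2D()(x)            # squeeze to (batch, 2048)
    se = layers.Dense(128, activation="relu")(se)      # bottleneck
    se = layers.Dense(2048, activation="sigmoid")(se)  # per-channel gate
    x = layers.Multiply()([x, layers.Reshape((1, 1, 2048))(se)])

    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)     # pneumonia vs. normal
    model = tf.keras.Model(backbone.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auroc")])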
18:10 Classification of Chicken Meat Freshness using Convolutional Neural Network Algorithms
Broiler chicken meat is one of the most widely consumed meat types in Indonesia, and this high level of consumption creates substantial consumer demand in the market. However, some sellers have been found selling rotten broiler chicken meat. In this study, we develop a chicken meat freshness identification system using a convolutional neural network algorithm. The study uses an image dataset of broiler chicken breasts with two categories of chicken meat: fresh and rotten. The meat images were acquired using a smartphone camera. To crop the chicken meat images, we use thresholding with the Otsu method and conversion of the RGB images to binary images to select the region of interest before cropping. The chicken meat images were cropped into three sizes and then used as the study dataset. The dataset was trained using a simple self-designed architecture called Ayam6Net; we also used the AlexNet, VGGNet, and GoogLeNet architectures as comparisons. Ayam6Net achieved the highest accuracy, 92.9%. From the experimental results, we conclude that the Ayam6Net architecture with the 400x400-pixel dataset yields better accuracy than the other architectures and image sizes.
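A hedged sketch of the described cropping step: Otsu thresholding yields a binary mask whose bounding box crops the meat region before resizing. The input here is a synthetic image standing in for a smartphone photo.

    # Sketch: Otsu-threshold an RGB meat image and crop to the foreground.
    import cv2
    import numpy as np

    img = np.full((600, 800, 3), 30, np.uint8)            # dark background
    cv2.ellipse(img, (400, 300), (180, 120), 0, 0, 360, (80, 120, 200), -1)  # "meat"

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu chooses the threshold automatically from the grayscale histogram.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    x, y, w, h = cv2.boundingRect(mask)                   # box around non-zero pixels
    crop = cv2.resize(img[y:y + h, x:x + w], (400, 400))  # one of the three sizes
    print(crop.shape)                                     # (400, 400, 3)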
18:30 Evaluation of CNN Models with Transfer Learning for Recognition of Sign Language Alphabets with Complex Background
Sign language acts as a mediator for the deaf and speech-impaired community to communicate visually with one another and engage with their environment. As a state-of-the-art recognition technique in computer vision, deep learning models have demonstrated success in several tasks. In this paper, we present a Convolutional Neural Network (CNN) approach with transfer learning to recognize Arabic and American Sign Language alphabets against complex backgrounds. The underlying concept of transfer learning is to adopt a model pretrained on a massive annotated dataset and fine-tune its later layers on the target dataset. We applied different techniques to improve the accuracy of the proposed approach, such as data augmentation, batch normalization, and early stopping. The proposed model is evaluated on three datasets, and experiments reveal improved results with high recognition rates.
18:50 Visual Drone Terrain Classification: A Manual Classification Approach
This research investigates a method for manually classifying the terrain in imagery from the on-board camera of an unmanned aerial vehicle, in order to develop classifiers for systematic terrain classification. Drone images were captured across rural County Donegal in Ireland, and software was developed to manually label the terrain in these images in a lattice of 30 x 30-pixel tiles. This dataset was used to train both a classic computer vision model and a Convolutional Neural Network model to classify the type of terrain under the UAV. The accuracy of the computer vision approach was compared to that of a Convolutional Neural Network trained using the semantic segmentation approach. The Convolutional Neural Network classifier was found to be the more accurate approach, achieving an F1 score of 0.95.

S2-C: Cyber Security-1

Chairs: Abdulla Alasaadi, Hosam Alamleh
17:30 Privacy Engineering Methodologies: A survey
Software engineers continue to struggle to create privacy-friendly systems despite guidelines such as Privacy by Design (PbD) and the European Union's (EU) General Data Protection Regulation (GDPR). These guidelines highlight what is needed to achieve greater privacy, but they do not specify how to do so. In this paper, we survey 56 academic publications from 2007 to May 2020 discussing current privacy engineering methodologies and propose a taxonomy based on their theoretical backgrounds and origins (security-based or privacy-friendly). We found a significant increase in publications after the official implementation of the GDPR. Despite an increasing number of solution proposals, there are substantial opportunities for researchers to produce empirical research that validates and evaluates current methodology proposals, thus delivering better approaches to privacy engineering and protecting individuals' privacy in IT systems.
17:50 Architecture for Continuous Authentication in Location-Based Services
Location-based services are widely used today. Their huge growth has been enabled by the growing use of mobile devices, which can sense several types of signals over the air using different radio frequency technologies (e.g., Wi-Fi, Bluetooth, cellular signals, and GPS satellites) and can calculate their location using any of these technologies. Some applications use the location generated by mobile devices as a factor in providing services; this is known as location-based services. We believe that the location generated by mobile devices can also be utilized for continuous authentication. In this paper, we propose an architecture for continuous location-based authentication utilizing location data generated by mobile devices, and we present an experiment to test the proposed system.
18:10 Evolution of the Security Models in Cognitive Radio Networks: Challenges and Open Issues
The inclusion of Cognitive Radio Networks (CRNs) in different networking technologies has proven to offer an efficient communication system, thanks to CRN's ability to manage transmission and reception attributes, especially in wireless environments. CRN harnesses the capability of resisting interference between licensed and unlicensed users. However, countering the threats present in the wireless medium remains challenging for CRN. Despite various research-based solutions for securing CRN, existing approaches still suffer security breaches when CRN is deployed over complex networking technologies. Therefore, this paper provides technical insight into the effectiveness of existing security approaches and equips upcoming researchers with information about the current research gaps and possible solutions for improving security.
18:30 RPL rank attack detection using Deep Learning
The Internet of Things (IoT) is a network of interconnected smart devices that provides a set of services in different domains to improve the quality of human daily life. However, protecting information systems and transmitted data from attacks is critical in IoT, especially for devices running over Low-Power and Lossy Networks (LLNs) and using the RPL routing protocol. The enormous network traffic generated every second is now difficult to analyze with traditional rule-based approaches, so intrusion detection systems (IDS) are seen as the most important tool for this role. The proposed work focuses on 1) creating a misbehavior of the RPL protocol by implementing a rank attack in the network, and 2) proposing an IDS based on a multi-layer perceptron (MLP) neural network that verifies and classifies normal and abnormal network traffic. The experiment achieved high training accuracy, F1 score, and recall of up to 94.57%, 98%, and 100%, respectively.
18:50 A Systematic Literature Review of ECC Security Schemes for IoT Healthcare Applications
Recent advancements in biosensors, wireless communication, and embedded systems have greatly promoted the development of the medical sector. This has had positive impacts, including increased reliability and efficiency, time and money savings, improved quality of patient health, improved health standards for the mass population, greater convenience for medical workers, and optimized allocation of medical resources. However, these applications have become susceptible to security attacks due to the high sensitivity of patients' health data, which can threaten patients' lives and well-being. In addition, resource-constrained healthcare devices and untrusted cloud servers pose a challenge to securing these applications. Researchers have therefore been motivated to implement security schemes for IoT healthcare applications and overcome the security challenges they pose. Security schemes based on the Elliptic Curve Cryptosystem (ECC) have been pursued for their small key sizes and minimal storage and communication requirements, while providing the same security as other public-key cryptosystems, like Rivest-Shamir-Adleman (RSA), that use large key sizes. This paper provides a systematic literature review of current ECC security schemes for IoT healthcare applications, focusing on the security features these schemes provide and their performance analysis.
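To make the key-size advantage concrete, the sketch below runs an ECDH key agreement with Python's cryptography package on the 256-bit P-256 curve, whose security is comparable to roughly 3072-bit RSA; this is a generic illustration, not a scheme from the surveyed literature, and the party names are hypothetical.

    # Sketch: ECDH key agreement on P-256 between a sensor and a server.
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    device_key = ec.generate_private_key(ec.SECP256R1())   # e.g. a wearable sensor
    server_key = ec.generate_private_key(ec.SECP256R1())   # e.g. a healthcare server

    # Each side combines its private key with the other's public key.
    shared = device_key.exchange(ec.ECDH(), server_key.public_key())
    assert shared == server_key.exchange(ec.ECDH(), device_key.public_key())

    # Derive a symmetric session key from the shared secret.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"iot-health-session").derive(shared)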

S2-D: Software Engineering

Chairs: Lamya Al jasmi, Fawzi Albalooshi
17:30 Word2Vec Duplicate Bug Records Identification Prediction Using Tensorflow
Duplicate bug reporting is one of the most widespread software problems causing inconvenience for internal software stakeholders. Eliminating redundant bug records is useful for developers: the fewer duplicated records in the bug-report documentation, the more efficiently resources can be allocated to fixing and enhancing software features. In this paper, the word embedding (Word2Vec) approach is applied to four different software components from the Mozilla Core dataset, with different sentence types drawn from the duplicated-bug category, to determine whether two given bug record descriptions should be categorized as related bug records. In addition, this paper proposes three different similarity measures and explores the accuracy of each. The study results show that the approach's accuracy is proportional to the presence of similar words in the two given bug record descriptions. Additionally, we found that similarity accuracy improves when the closest word is found using the Euclidean distance method rather than by traversing adjacent index values within the trained word-vector array.
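A hedged sketch of the core comparison with gensim: train Word2Vec on tokenized bug descriptions, average each description's word vectors, and score similarity by both cosine and Euclidean distance (the measure the authors found beneficial); the toy corpus and whitespace tokenization are placeholders.

    # Sketch: Word2Vec-based similarity between two bug-report descriptions.
    import numpy as np
    from gensim.models import Word2Vec

    reports = [
        "crash when opening http connection".split(),
        "application crashes opening an http connection".split(),
        "font rendering is blurry on linux".split(),
    ]
    model = Word2Vec(sentences=reports, vector_size=50, window=3, min_count=1, seed=0)

    def embed(tokens):
        """Average the word vectors of a description."""
        return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

    a, b = embed(reports[0]), embed(reports[1])
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    euclid = float(np.linalg.norm(a - b))
    # Higher cosine / lower distance suggests the pair are duplicates.
    print(f"cosine={cosine:.3f}  euclidean={euclid:.3f}")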
17:50 An Automatic Approach to Measure and Visualize Coupling in Object-Oriented Programs
The task of measuring coupling in software systems is important for evaluating software quality attributes such as maintainability and reusability, and software engineers and developers spend much time and effort accomplishing it. Measuring coupling automatically helps eliminate the inadequacies of manual measurement. In this paper, we propose an approach to automatically measure and visualize the coupling among classes of object-oriented programs; the generated representations provide an overall picture of the coupling in an effective and interactive way. The approach parses the program source code, using an existing tool, into an XML file and extracts the class tokens according to the definitions of the coupling metrics. It then determines the coupling relationships by matching these tokens against other classes' tokens. Finally, it generates interactive visualizations of the coupling using several charts. A case study was conducted to validate the proposed approach. The results indicate that the generated visualizations facilitate a comprehensive understanding of the program's coupling and lead to a proper estimation of software quality in terms of its coupling measure.
18:10 Software Risk Estimation Through Bug Reports Analysis and Bug-fix Time Predictions
Categorizing the risk level of software components is very important for software developers, as it allows them to increase software availability and security and to manage projects better. This research proposes a novel risk estimation system that aims to help internal software stakeholders evaluate existing software risk by predicting a quantitative risk value. This risk value is estimated from earlier software bug reports based on a comparison of current and upcoming bug-fix times, duplicated bug records, and the software component's priority level. The risk value is obtained by applying machine learning, implemented with TensorFlow, to a Mozilla Core dataset (the Networking: HTTP software component) to predict a risk-level value for specific software bugs. The total risk results ranged from 27.4% to 84%, with a maximum bug-fix time prediction accuracy of 35%. The results also showed a strong relationship for the risk values obtained from bug-fix time prediction and a weak relationship for the risk values from duplicated bug records.
18:30 Improving Software Reuse Prediction Using Feature Selection Algorithms
Software engineering is witnessing a market struggle in delivering functional products due to time constraints. Software developers are reusing already existing components instead of developing the whole software from scratch. Predicting successful reuse can help before any reuse attempt. In this paper, feature selection was utilized to extract the most relevant attributes from a public dataset. Results showed that feature selection improved software reuse experience prediction.
18:50 Software Change Proneness Prediction Using Machine Learning
Software change-proneness is one of the vital quality metrics; it represents the extent to which a class changes across versions of a system. Changes may occur due to evolving requirements, bug fixing, or code refactoring, and change-proneness can therefore have a negative impact on software evolution: change-prone modules tend to produce more defects and accumulate more technical debt. This research work applies different Machine Learning (ML) techniques to a large dataset from a wide commercial software system to investigate the relationships between object-oriented (OO) metrics and change-proneness, and to determine which OO metrics are needed to predict change-prone classes. Moreover, several state-of-the-art combining methods were evaluated, constructed by combining heterogeneous single and ensemble classifiers with voting, Select-Best, and stacking schemes. The results indicate high prediction performance for many of the ensemble classifiers as well as for the selected combining methods, showing that ML methods are very useful for predicting change-prone classes. The study also shows that software metrics are significant indicators of class change-proneness and should be monitored regularly during software development and maintenance.
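A small sketch of the voting and stacking combiners with scikit-learn; the synthetic data stands in for the OO-metric dataset, and the base learners are illustrative choices:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for OO-metric features labeled change-prone / not change-prone.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    base = [("rf", RandomForestClassifier(random_state=0)),
            ("nb", GaussianNB()),
            ("dt", DecisionTreeClassifier(random_state=0))]
    voting = VotingClassifier(base, voting="soft")
    stacking = StackingClassifier(base, final_estimator=LogisticRegression(max_iter=1000))

    for name, clf in (("voting", voting), ("stacking", stacking)):
        print(name, cross_val_score(clf, X, y, cv=5).mean())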

Sunday, December 20 17:30 - 18:50 (Asia/Bahrain)

S2-E: Informatics -2

Chairs: Muain Aljamlan, Jafla Al-Ammari
17:30 The PANDA approach as a method for creating female STEMpreneurs
Entrepreneurship in the industries of science, technology, engineering, and mathematics (STEM) has been found to be gender biased, with women underrepresented. But how can more female entrepreneurs be created in STEM industries (STEMpreneurs)? The German university Hochschule Fresenius has developed a project, named PANDA, which connects established companies of the STEM industry and their business ideas with students on STEM study paths. The target of PANDA is, on the one hand, to enable companies to innovate like a startup and, on the other, to inspire students on STEM study paths to become entrepreneurs. Motivated by the discussion about female influence and the small share of female entrepreneurs in STEM industries, this study analyzes whether an approach like PANDA can help increase that share. This is achieved through a description of the approach within the Open Innovation framework and an analysis of the share of female students (42%) in 9 executed PANDA projects.
17:50 Research trends in Sentiment Analysis and Opinion Mining from Knowledge Management approach: A science mapping from 2007 to 2020
The internationalization of markets and the increasing intensification of competition force managers of organizations and companies to make precise decisions in the shortest possible time. When an organization promotes the exchange of knowledge, it is oriented towards the sustained improvement of its processes and internal communication, and its relations with the outside environment (customers and suppliers) improve as well. The birth of the Internet, and later of social networks, made ideas, debates, and opinions public, constituting an interesting source for detecting opinion trends among users. This complex data analysis is carried out by different disciplines of artificial intelligence through the implementation of algorithms. One of those disciplines is Sentiment Analysis, which analyzes and classifies the opinion, emotion, or attitude inferred mainly from a text. The relevance of these methodologies motivates this work on extracting information for later use in organizational decision-making. Finally, to learn more about its evolution, trends, research areas, authors, and publications, a bibliometric analysis of Sentiment Analysis and Opinion Mining from a Knowledge Management approach was carried out covering 2007 to date.
18:10 Vision-based Approach for Automated Social Distance Violators Detection
Social distancing is a necessary precaution for controlling the outbreak of infectious diseases such as COVID-19. Most social distancing monitoring approaches are based on Bluetooth and mobile phones and require an app to be downloaded on every phone. This paper proposes a different approach: monitoring social distancing with cameras by combining different computer vision algorithms. The approach uses inverse perspective mapping (IPM) together with the camera's intrinsic parameters to produce a bird's eye view, with real-world coordinates, of each frame processed from a video source. The process starts with image enhancement, followed by foreground detection using Gaussian Mixture Model (GMM) background subtraction, tracking with a Kalman filter, computing real-world distances between individuals, and flagging those who come within less than 2 meters of each other, as they are considered to be in contact. This tool could assist government efforts to contain the virus: it can be deployed in closed areas or institutions, monitor people's compliance, and provide analysis and a faster way to detect possible COVID-19 suspect cases. The approach is tested on the task decomposition data set, which includes frames of closed areas and the camera's intrinsic parameters; another data set with different scenarios was created to increase confidence in our algorithm. The results show that our approach successfully detects social distancing violations with accurate real-world coordinate measurements.
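A minimal sketch of the IPM distance check with OpenCV; the calibration points and pixel coordinates are invented, and a real deployment would derive them from the camera's calibration:

    import numpy as np
    import cv2

    # Four image points of a known ground rectangle and their real-world coordinates
    # in meters (values are illustrative; in practice they come from calibration).
    img_pts = np.float32([[100, 400], [540, 400], [620, 80], [20, 80]])
    world_pts = np.float32([[0, 0], [5, 0], [5, 10], [0, 10]])
    H = cv2.getPerspectiveTransform(img_pts, world_pts)

    def to_world(p):
        # Map an image pixel (e.g. a tracked person's foot point) to ground coordinates.
        q = cv2.perspectiveTransform(np.float32([[p]]), H)
        return q[0, 0]

    a, b = to_world((200, 300)), to_world((260, 300))
    dist = np.linalg.norm(a - b)
    print(f"{dist:.2f} m", "VIOLATION" if dist < 2.0 else "ok")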
18:30 Virtual Reality Street-Crossing Training for Children with Autism in Arabic Language
Unintentional injuries pose a serious, life-threatening risk to children with Autism Spectrum Disorder (ASD), so providing them with safety training is essential; and while providing this training in a natural environment can put lives at risk, virtual reality offers a safe alternative. This paper presents the design and application of a Head Mounted Display (HMD) immersive virtual reality system that improves the street-crossing skills of children with ASD. To create the most suitable learning environment, both qualitative and quantitative research methods are used for data collection: a structured questionnaire is employed for all stakeholders involved in the design process, including teachers, specialists, and parents, and semi-structured interviews are used to obtain information from specialists, who identify various design principles. These design principles are then assessed and discussed to provide insight into how they might be used in future research.

Sunday, December 20 19:10 - 19:30 (Asia/Bahrain)

B2: Break 2

Sunday, December 20 19:30 - 20:00 (Asia/Bahrain)

VT: Virtual Tour to Bahrain

Chair: Abdulla Alqaddoumi

Monday, December 21

Monday, December 21 12:00 - 12:50 (Asia/Bahrain)

PD -: Panel Discussion: "Women in Tech"

Mrs. Mariam Jumaan, Mrs. Muna Al Hashemi, Mrs. Najwa Abdul Rahim, and Dr. Lamya Al jasmi
Chair: Hessa Al-Junaid

Monday, December 21 12:50 - 13:20 (Asia/Bahrain)

B3: Break 3

Monday, December 21 13:20 - 15:00 (Asia/Bahrain)

S3-A: Internet of Things

Chairs: Aisha Bushager, Ala Khalifeh
13:20 Internet of Things Based Environment Monitoring and PM10 Prediction for Smart Home
People spend 80 to 90% of their time in built environments, including homes and offices, so maintaining a comfortable indoor living environment is extremely relevant. Among hazardous indoor air pollutants, particulate matter is considered to have a considerable impact on human health and well-being; it has been observed to be a main cause of several chronic health issues such as respiratory illness, lung cancer, allergies, and asthma. In this study, the framework of an Internet of Things based environment monitoring system is presented to address the rising threat of indoor air pollution. Furthermore, PM10 concentration is predicted using the Random Forest algorithm; predicting PM10 levels can help building occupants take adequate ventilation measures ahead of time. The overall performance of the proposed prediction system is measured using five parameters: RMSE = 0.594, MSE = 0.353, MAE = 0.337, R2 score = 0.996, and MAPE = 3.90%. The overall accuracy of the system on the test dataset is 97.72%. This system can work as an integral part of a smart home while ensuring real-time pollution monitoring, and the guided, controlled switching of heating and ventilation systems can reduce overall energy consumption.
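A minimal sketch of the Random Forest prediction step with scikit-learn; the synthetic sensor features stand in for the real monitoring data:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for sensor readings (temperature, humidity, CO2, ...) -> PM10.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("RMSE", mean_squared_error(y_te, pred) ** 0.5,
          "MAE", mean_absolute_error(y_te, pred),
          "R2", r2_score(y_te, pred))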
13:40 Modelling Industrial IoT System Complexity
The complexity of an Industrial Internet of Things (IIoT) system is captured via a static Euclidean complexity space, in which information complexity boundaries expand over time and serve as an indicator of system instability. Model-based static and dynamic conceptions of complexity are introduced. The necessary capabilities are demonstrated theoretically, alongside a set of assumptions about the behaviour of industrial system complexity and its functions, as the core foundation of the proposed model. First ideas on the practical implications for industrial IoT system complexity analysis are deduced and briefly discussed.
14:00 Collaborative Data Anonymization for Privacy-Preserving Vehicular Ad-hoc Network
Optimizing both data usability and privacy protection is a challenging goal. Even when released data are sanitized with an upgraded data-semantics-changing model, they still face privacy and scalability issues. On the other hand, local anonymization may not be suitable in a multi-nodal environment such as a vehicular network, where hiding sensitive attributes is essential alongside discreet data privacy. In this paper, we propose a collaborative privacy-preserving data anonymization technique for the vehicular network. The method is capable of achieving a desirable level of data sanitization within a group, and the use of k-anonymity, l-diversity, and t-closeness makes the proposed technique stronger at protecting data privacy in the different stages of communication. Furthermore, the analysis of the proposed scheme confirms its usability and practicality. Lastly, future directions for research on collaborative data anonymization in the vehicular environment are outlined.
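As a small illustration of the k-anonymity ingredient, the sketch below checks whether a toy set of vehicular records satisfies k-anonymity over assumed quasi-identifiers; l-diversity and t-closeness checks would follow the same grouping pattern:

    import pandas as pd

    # Toy vehicular records; the quasi-identifiers are illustrative.
    df = pd.DataFrame({
        "speed_band": ["60-80", "60-80", "60-80", "80-100", "80-100"],
        "zone":       ["A", "A", "A", "B", "B"],
        "route":      ["r1", "r1", "r1", "r2", "r2"],  # sensitive attribute
    })

    def is_k_anonymous(df, quasi_ids, k):
        # Every combination of quasi-identifier values must occur at least k times.
        return df.groupby(quasi_ids).size().min() >= k

    print(is_k_anonymous(df, ["speed_band", "zone"], k=2))  # True
    print(is_k_anonymous(df, ["speed_band", "zone"], k=4))  # False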
14:20 IoT Based Intelligent Control System for Smart Building
Internet of Things (IoT) enabled systems allow users to achieve deeper automation, integration, and analysis within a system, and find applications across many fields, such as healthcare, industry, smart homes, safety, and government sectors, thanks to their flexibility and suitability for almost any environment. Within this context, building automation plays an important role in maintaining people's living standards by providing a secure and convenient environment. The objective of our work is to develop an automatic air conditioner (AC) control and intruder detection surveillance system for smart buildings. The system automatically controls the AC compressor cycle based on the values of a temperature sensor and a motion sensor read by a NodeMCU microcontroller: once motion is detected, the temperature sensor senses the room temperature, and if it exceeds a threshold value, the AC is turned on and set to a predefined value. Additionally, if an unknown person entering the house is detected, the system recognizes the intrusion and alerts the authorized user via short message service and email, along with a picture of the intruder. The intrusion detection and subsequent actions are implemented with the aid of the OpenCV library, a Haar cascade classifier, and Python.
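A simplified sketch of the AC control logic described above, with the sensors stubbed out; the threshold and sampling loop are illustrative, and a real NodeMCU would read actual sensor pins:

    import random, time

    THRESHOLD_C = 24.0  # hypothetical set point

    def read_motion():       # stand-in for the motion sensor
        return random.random() < 0.5

    def read_temperature():  # stand-in for the temperature sensor
        return random.uniform(20.0, 30.0)

    def control_loop(cycles=5):
        for _ in range(cycles):
            if read_motion():
                t = read_temperature()
                ac_on = t > THRESHOLD_C        # compressor runs only above the set point
                print(f"motion, {t:.1f} C -> AC {'ON' if ac_on else 'OFF'}")
            else:
                print("no motion -> AC OFF")   # nobody present, save energy
            time.sleep(0.1)

    control_loop()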
14:40 A Novel Low-Energy CNTFET-Based Ternary Half-Adder Design using Unary Operators
Energy consumption is a critical factor to be reduced when designing embedded systems and IoT devices. Multiple-valued logic (MVL) circuits decrease interconnection complexity and energy consumption in comparison to binary systems. This paper uses MVL circuits to present a ternary half-adder (THA) with reduced energy consumption that preserves battery life in nano-scale embedded systems and IoT devices. The proposed CNTFET-based circuit uses dual supply voltages (Vdd and Vdd/2) and novel unary operators to improve performance. Extensive HSPICE simulations show impressive improvements in transistor count, energy consumption, noise tolerance, and robustness to process variations compared to previous circuits.

S3-B: Cloud Computing & Machine Learning

Chairs: Mazen Ali, Ayman A. Abdel-Hamid
13:20 Combining Spot Instances Hopping with Vertical Auto-scaling To Reduce Cloud Leasing Cost
Cost reduction is one of the main goals of many organizations, which is why cloud computing, with its flexibility, is considered a valid option. The way cloud platforms are built requires keeping some extra resources available for the elasticity feature. Such resources are offered for leasing at a reduced cost, but with the possibility of being reassigned to a different user at any moment. Eliminating this interruption issue allows the extra resources to be used to reduce the leasing cost even further, and the auto-scaling feature is another way to minimize costs. In this paper, a framework is proposed to minimize leasing costs by utilizing the cloud's unused resources while also implementing the vertical auto-scaling feature. The proposed framework eliminates the risks associated with spot instances (another name for these extra cloud resources) by migrating between different markets. The framework was evaluated by simulation using real data taken from Amazon AWS. The results show a cost reduction of up to 78% and 52% compared with the on-demand and spot plans, respectively.
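A toy illustration of the hopping idea: each hour the framework leases the cheapest spot market, keeping cost well under the on-demand rate; all prices are invented:

    # Illustrative hourly prices (USD) for one on-demand instance and two spot markets.
    on_demand = 0.40
    spot = {"market-a": [0.12, 0.15, 0.30, 0.11], "market-b": [0.14, 0.10, 0.09, 0.20]}

    hours = len(next(iter(spot.values())))
    # Hop to whichever market is cheapest in each hour.
    hopping_cost = sum(min(prices[h] for prices in spot.values()) for h in range(hours))
    print("on-demand:", on_demand * hours, "hopping:", round(hopping_cost, 2))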
13:40 An Incentive Mechanism for Computing Resource Allocation in Vehicular Fog Computing Environment
Vehicular Fog Computing (VFC) has recently become a promising research field. The Fog Computing (FC) paradigm enhances the quality of cloud computing services by extending them to the edge of the network using collaborative devices near the end users. VFC deploys the computing resources of vehicles situated at the network edge to serve local on-demand mobile applications. Most previous research assumes that vehicles can serve as fog nodes unconditionally, which is not always true. In this paper, we investigate the problem of resource allocation in a VFC environment where resource sharing is needed: Data Service Operators (DSOs) and fog nodes (FNs), i.e., vehicles, participate in computation offloading and serve User Equipment (UE) demands. A joint optimization approach is designed to model the interactions among DSOs, FNs, and UEs, and we present a new incentive mechanism to stimulate vehicles to serve as fog nodes and share their computing resources based on UE demands. The simulation results show that the proposed approach significantly improves VFC resource-sharing performance by dedicating the computational resources of vehicles to UE demands.
14:00 A Cloud-based Mobile Healthcare Monitoring Framework with Location Privacy Preservation
Nowadays, ubiquitous healthcare monitoring applications are becoming a necessity. In a pervasive smart healthcare system, the user's location is transmitted periodically to healthcare providers to increase the quality of the service provided, but revealing the user's location affects the user's privacy. This paper presents a novel cloud-based, secure, location-privacy-preserving mobile healthcare framework with decision-making capabilities. A user's vital signs are sensed, possibly through a wearable healthcare device, and transmitted to a cloud server for secure storage, processing, and decision making. The proposed framework integrates several features: crowdsensing, to collect privacy preferences for given places and apply them to users who have not set their own; machine learning (ML), to classify the health state of the user; and location privacy preservation methods (LPPMs) such as obfuscation, perturbation, and encryption, to protect the user's location and provide a secure monitoring framework. The framework detects clear emergency cases and quickly makes a decision before sending data to the cloud server. To validate its efficiency, a prototype was developed and tested, and the obtained results prove its feasibility and utility. Compared to the state of the art, the proposed framework gives an adaptive, context-based decision for location-sharing privacy and controls the trade-off between location privacy and service utility.
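A minimal sketch of one LPPM mentioned above, perturbation of coordinates with Laplace noise; the scale and meter-to-degree conversion are rough illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def obfuscate(lat, lon, scale_m=200.0):
        # Add Laplace-distributed noise (in meters) to each axis; a larger scale_m
        # means more privacy and less service utility.
        dy, dx = rng.laplace(0.0, scale_m, size=2)
        return lat + dy / 111_000, lon + dx / 100_000  # rough meters -> degrees

    print(obfuscate(26.2285, 50.5860))  # a perturbed location near the true one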
14:20 Detecting Malicious DNS over HTTPS Traffic Using Machine Learning
The Internet has grown faster than any other technology in the world, and from its beginning the Domain Name System (DNS) has been an integral and important part of it. The primary task of DNS is to direct users to the correct computers, applications, and files by mapping IP addresses and domain names. Due to certain security flaws, DNS is a major attack target, e.g., for DNS-based malware, DNS amplification, false-positive triggering, and DNS tunneling. DNS over TLS (DoT) and DNS over HTTPS (DoH) were recently developed and deployed by Google and Cloudflare to prevent these types of attacks: they are standard protocols designed for privacy and security that encrypt the DNS traffic between users and DNS resolver servers. This paper uses various machine learning classifiers, namely (i) Naive Bayes (NB), (ii) Logistic Regression (LR), (iii) Random Forest (RF), (iv) K-Nearest Neighbor (KNN), and (v) Gradient Boosting (GB), to detect malicious activity at the DNS level in a DoH environment. The experiments are conducted on a benchmark DoH dataset (CIRA-CIC-DoHBrw-2020), and several features are used to develop a robust model. The experimental outcome confirms that the RF and GB classifiers are the better choices for this problem; since the developed model detects the majority of the malicious activity, ML-based algorithms appear to be a good option for preventing DNS attacks on DoH traffic.
14:40 Forensic Gender Discrimination in Malaysian Population Using Machine Learning Methods
Latent fingerprints are among the most frequently encountered evidence at crime scenes and are useful for personal identification. In forensic investigation, the reliability of the identification process is greatly affected by the visibility of the minutiae features, because known and unknown fingerprints are matched according to the types and locations of those features. However, most fingerprints recovered from crime scenes are of low quality, i.e., they have incomplete prints or minutiae. Under such circumstances, a forensic analyst can try to determine the gender of the latent fingerprint's donor; this information helps narrow the search for a suspect. Forensic discrimination of gender based on ridge counts was proposed decades ago, and although multiple works have reported the use of machine learning for gender classification using fingerprints, the fingerprint features were mainly extracted from images. In this work, the diagonal ridge counts were calculated manually within a well-defined region of 25 square centimetres, an approach more relevant to real crime scene investigation. Two well-known machine learning algorithms, naïve Bayes (NB) and Classification and Regression Trees (CART), were employed to discriminate gender based on the ridge counts, and the predictive models were assessed by ethnicity and finger digit via a bootstrapping-without-replacement approach. The results show that one-digit samples can perform as well as five-digit or ten-digit samples. Ethnicity-specific models for Indian and Malay subjects, respectively, showed improvements over the global predictive model. Moreover, when all five digits of a particular hand were considered as input data, NB tended to outperform CART, whereas the relative performance reversed when only one digit was considered. In conclusion, fingerprint ridge counts can be a potential indicator of gender in the Malaysian population.

S3-C: Telecommunication and Networking

Chair: Aysha Ebrahim
13:20 Delay Tolerant Network protocols for an Expanding Network on a Railway
This paper presents an analysis of Delay Tolerant Network (DTN) methods in an expanding communication network on a railway line. Trains act as moving objects that send and receive messages or collected telemetry data to or from an external network; they can also exchange data with each other, delegating delivery. The paper compares various DTN protocols' ability to reduce data delivery delay and the additional buffer load needed. The analysis is based on a real European railway line example (map and train schedule) with various loads and different mobile network coverage. The effectiveness of DTN during migration to higher-data-rate networks (for example, with 5G capabilities) is discussed.
13:40 Overlay Convergence Analysis in P2P Networks: An Assessment of the 2PC Algorithm
Peer-to-Peer (P2P) networks for live streaming require low latency and low discontinuity in media transmission among peers. When latency is high, users watch the video at different times, and with high discontinuity, parts of the media are not viewed by network users. Other factors also compromise the quality of service of a live streaming P2P network, such as a large number of peers that do not contribute to distributing the media, known as free riders, and the constant arrival and departure of peers during transmission, known as peer churn. One way to preserve the quality of a live streaming P2P network is to use algorithms for constructing and maintaining the overlay network. One such algorithm is Peer Classification for Partnership Constraints (2PC), proposed to tolerate a large number of free riders in the network; 2PC imposes constraints on partnerships between peers according to their contributions to the media transmission. 2PC was successfully tested on PlanetLab, and its authors state that the algorithm attracts high-contribution peers close to the server while pushing low-contribution peers to the edge of the overlay. However, the authors had not demonstrated that this peer organization in the overlay actually happens. In this work, by analyzing the logs of 2PC executions together with graph structures, we evaluate the algorithm and confirm that the partnership relationships it imposes organize the overlay as expected.
14:00 RPL Assessment using the Rank Attack in Static and Mobile Environments
The routing protocol for low-power and lossy networks (RPL) is currently one of the main routing protocols for the Internet of Things (IoT). This protocol has vulnerabilities that attackers can exploit to change its behavior and deteriorate its performance. In the RPL rank attack, a malicious node announces a wrong rank, which leads neighboring nodes to choose it as their preferred parent. In this study, we used several metrics to assess the RPL protocol in the presence of misbehaving nodes, namely overhead, convergence time, energy consumption, preferred parent changes, and network lifetime. Our simulation results show that a mobile environment is more damaged by the rank attack than a static environment.
14:20 Scene Change Based Video Watermarking Algorithm
In this work, a fully digital video watermarking algorithm is proposed: a novel hybrid DWT-based digital and blind video watermarking algorithm with error correcting codes (HDWT-DB-VW-ECC). The proposed algorithm partitions the watermark according to the number of scenes in the cover video and embeds each part of the watermark in the frames of a different scene in the wavelet domain. To maintain robustness, the error correcting code is incorporated into the watermark and embedded in the audio channel. The HDWT-DB-VW-ECC algorithm takes into consideration virtually all types of attacks to which the video frames could be prone, including intentional, unintentional, geometric, and statistical attacks. Furthermore, it is a blind scheme, that is, it retrieves the embedded watermark without needing the original video, and the embedded watermark is perceptually invisible. The proposed algorithm's design, experimental results, and comparisons with many other video watermarking schemes are fully described.
14:40 Security Concerns in Smart Traffic Routing System
Intelligent Transportation Systems (ITS) are considered one of the emerging technologies receiving significant attention from researchers today. Much research focuses on ITS itself, while some also addresses the related security and privacy challenges. Owing to their numerous benefits, the number of IoT devices has been increasing rapidly, currently estimated at around 10 billion and forecast to reach approximately 20 billion connected devices by the end of 2020. IoT is expanding into every aspect of our lives, from industry to commerce, transportation to healthcare, and the military to our households. One such usage is seen in ITS on the streets in the form of the IoT-based Smart Traffic Routing System (STRS). An STRS controls the traffic flows at a junction without requiring substantial traffic policing there. Nevertheless, security concerns prevent its widespread implementation around the world. In this paper, we discuss the basic functioning of the STRS, followed by the related security concerns and their impacts, and finally some solutions to prevent these security concerns, as well as future work in this regard. The scope of this paper is limited to proposing a solution and does not include its implementation.

S3-D: Robotics, Computer Vision, and HCI

Chairs: Fatema Albalooshi, Resala Aladraj
13:20 Deep Learning Enhanced Electromagnetic Imaging Scheme
An inverse electromagnetic imaging scheme based on projected nonlinear Landweber (PNLW) iterations with sparsity constraints is introduced. The scheme is enhanced by three neural networks that (i) predict the Landweber step sizes at each iteration, (ii) predict the sparsity projection level, which is based on a first-norm constraint, and (iii) enhance the images recovered by the PNLW. The proposed machine-learning-supported scheme has an accelerated rate of convergence and finer reconstruction resolution compared to the original PNLW scheme, and it does not require user intervention to inspect different levels of the inversion parameters. The numerical results shown in this paper demonstrate the superiority of the proposed scheme.
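A small numerical sketch of a projected Landweber iteration; for brevity it uses a hard sparsity projection (keep the k largest entries) where the paper's projection is based on the first norm, and the operator and scene are random stand-ins:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 100))          # forward operator (illustrative)
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.0, -0.5, 0.8]  # sparse scene
    y = A @ x_true

    def projected_landweber(A, y, k=3, iters=200):
        # Landweber gradient steps on ||Ax - y||^2 with a hard sparsity projection.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # ensures the plain iteration converges
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + step * A.T @ (y - A @ x)     # gradient step
            keep = np.argsort(np.abs(x))[-k:]    # sparsity projection
            z = np.zeros_like(x)
            z[keep] = x[keep]
            x = z
        return x

    x_hat = projected_landweber(A, y)
    print(np.nonzero(x_hat)[0], np.round(x_hat[np.nonzero(x_hat)], 2))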
13:40 Interactive Manipulator Arm
This paper presents an intelligent 4-DOF manipulator arm that supports different industrial trends, together with a motion planning algorithm based on image processing. Virtual animation software is used to design different scenarios that visualize and mimic real-world situations. The manipulator arm is programmed with three switching modes. The first is the image processing (automated) mode, in which the camera detects and recognizes the color and shape of objects, using contour approximation to recognize each object's shape and decide accordingly. The second is the voice control mode, in which the arm is controlled by specific voice commands. Finally, in the hardware-in-the-loop (computerized) mode, the robot arm is controlled by instantaneous or programmed computer commands. Hundreds of tests were run to check the arm's response and movement to the required commands, and many modifications were made to adjust and calibrate it. The results show that the proposed manipulator arm can carry out different tasks in the different modes with good accuracy.
14:00 Real-time Shadow Detection and Removal by Illumination Drop Point Analysis
The existence of shadows in natural scenes causes challenges in computer vision applications such as object detection. In this paper, an approach is introduced for shadow detection in single images. Unlike other approaches, our algorithm uses the variation in the RGB components to locate and analyze the drop in intensity. The input image is first subjected to noise reduction using a Gaussian filter; vertical scanning is then applied, in which the pixels in every column of the image are collected, grouped using a threshold to obtain smooth knee points across the intensity-level variation, and analyzed. Shadow removal is performed, and horizontal scanning is then carried out following the same procedure. The analysis and optimization process was performed on 300 images, and the final testing on 4000 images from the SBU shadow images dataset. The results show that our algorithm detects shadows with higher accuracy than other approaches.
14:20 An Exploratory Pilot Study on Human Emotions during Horror Game Playing
Enhancements in technology have allowed games to expand the variety of ways to deliver new and better experiences. These experiences are often expressed by players through their emotions, internally, externally, or both. Nevertheless, it is crucial for developers to make sure players are not so overwhelmed with emotions that the experience is ruined. With the introduction of immersive modalities, games can read players' emotions, allowing developers to ensure the game does not overwhelm players in any way: the game adapts to readings of the player's emotions, allowing players to cope with and regulate their emotions frequently. Such technologies can also allow players suffering from emotional illnesses to regulate their emotions, and can support the study of methods for regaining control over them. In this project, an immersive modality for emotional valence recognition, facial emotion reading through a camera, is integrated into a horror game that adapts to the emotions of the players. The project also aims to determine how good the readings from the modality are by comparing them with participants' self-evaluation reports: the results of a self-evaluated questionnaire of personal experiences are compared with the modality's readings using Pearson's correlation coefficient to indicate their correlation strength. Moreover, the research explores the temporal effect of induced emotional states in situations where the gameplay experience varies. The results show a moderately reliable consistency between the facial emotion recognition modality and the self-evaluated emotional states. Finally, while the initial results indicate that induced emotions may have a temporal effect, more data is needed to validate this observation.
14:40 Self-Driving Car Lane-keeping Assist using PID and Pure Pursuit Control
Detecting lane boundaries is the primary step in monitoring an autonomous car's trajectory. Three lane identification methodologies are explored in this paper with experimental illustration: edge detection, the Hough transform, and the bird's eye view. The next step after obtaining the boundary points is to add a control law that effectively regulates the steering and velocity commands sent to the motors. A comparative analysis is made between different steering controllers for the Lane Keeping Assist (LKA) system: PID alone and PID combined with a pure pursuit controller. A camera that sends wireless data to ROS via an Nvidia Jetson Nano is used to obtain environmental information; the processor interprets the data and transmits the desired control output to an Arduino via rosserial communication.
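A minimal sketch of the pure pursuit control law; the look-ahead point and wheelbase are illustrative values:

    import math

    def pure_pursuit_steering(goal_x, goal_y, wheelbase=0.3):
        # Steering angle to reach a look-ahead point given in the car frame
        # (x forward, y left); wheelbase in meters.
        ld = math.hypot(goal_x, goal_y)        # look-ahead distance
        alpha = math.atan2(goal_y, goal_x)     # angle to the look-ahead point
        return math.atan2(2 * wheelbase * math.sin(alpha), ld)

    print(math.degrees(pure_pursuit_steering(1.0, 0.2)))  # gentle left turn, ~6.6 deg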

Monday, December 21 15:00 - 16:00 (Asia/Bahrain)

LB-2: Lunch Break Day-2

Monday, December 21 16:00 - 16:30 (Asia/Bahrain)

KS-5: Keynote Speaker-5: Preserving Data/Query Privacy Using Searchable Symmetric Encryption

Prof. Kevin Curran - Professor of Cyber Security at Ulster University
Chair: Abdulla Alqaddoumi

The benefits of Cloud computing include reduced costs, high reliability, and the immediate availability of additional computing resources as needed. Despite such advantages, Cloud Service Provider (CSP) consumers need to be aware that the Cloud poses its own set of unique risks not typically associated with storing and processing one's own data internally on privately owned infrastructure. Recent years have seen a number of incidents in which customer data hosted on the Cloud has been leaked. The ideal solution for an optimal balance of data security and functionality within the Cloud is for the CSP to be able to search and operate on data while it is in encrypted form. New techniques such as Fully Homomorphic Encryption and Searchable Encryption have arisen to make this a reality. Fully Homomorphic Encryption supports computations over data in encrypted form, but an efficient Fully Homomorphic Encryption scheme remains some way off. Searchable Encryption, however, despite being a relatively obscure form of cryptography, is now at the point where it can be deployed and used within the Cloud: it allows CSP customers to store their data in encrypted form while retaining the ability to search that data without disclosing the associated decryption keys to CSPs. Searchable Symmetric Encryption (SSE) represents one of the few forms of Searchable Encryption achievable using established, standardised encryption algorithms. This talk will discuss a Searchable Symmetric Encryption scheme that is efficient enough to be deployed in a Cloud environment, achieving industry-acceptable search speeds while maintaining data privacy.
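A toy single-keyword SSE index illustrating the idea of searching without revealing keywords; it omits payload encryption and the leakage and efficiency machinery of a real scheme, and all names are invented:

    import hmac, hashlib
    from collections import defaultdict

    KEY = b"client-secret-key"  # never shared with the cloud provider

    def trapdoor(word):
        # Deterministic keyword token; the server sees only this, not the word.
        return hmac.new(KEY, word.encode(), hashlib.sha256).hexdigest()

    # Client builds an encrypted index: token -> document ids (the documents
    # themselves would be stored encrypted separately).
    index = defaultdict(list)
    for doc_id, words in {"doc1": ["cloud", "privacy"], "doc2": ["privacy"]}.items():
        for w in set(words):
            index[trapdoor(w)].append(doc_id)

    # Server-side search: match the submitted trapdoor without learning the keyword.
    print(index.get(trapdoor("privacy"), []))  # ['doc1', 'doc2']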

Monday, December 21 16:30 - 17:00 (Asia/Bahrain)

KS-6: Keynote Speaker-6: Network Automation: Challenges and Opportunities

Prof. Raouf Boutaba, The University of Waterloo.
Chair: Hessa Al-Junaid

Automation has been the holy grail of network management research for decades; it aims at achieving autonomous networks, i.e., networks capable of autonomously monitoring their status, analyzing problems, making decisions, and executing corrective actions. Despite several past attempts to achieve autonomous networks, their practical deployments have largely remained unrealized. Several factors are responsible, including the existence of many stakeholders with conflicting goals, reliance on proprietary solutions, the inability to process network monitoring data at scale, and the lack of global visibility restricting network-wide optimizations. The stars are now aligned to realize the vision of network automation thanks to (i) advances in network softwarization; (ii) recent breakthroughs in machine learning; and (iii) the availability of large-scale data processing platforms. However, a number of challenges must be addressed to create synergy between these technology domains and achieve autonomous networks. This talk will discuss some of these challenges, with particular focus on programmable network monitoring leveraging network softwarization, predictive machine learning for automated management decision making, and on-demand orchestration of network services.

Monday, December 21 17:00 - 17:30 (Asia/Bahrain)

P: Prayer

Monday, December 21 17:30 - 19:10 (Asia/Bahrain)

S4-A: Cyber Security & Machine Learning

Chairs: Abdul Fattah Salman, Wael Farag
17:30 Multi-Agent Reinforcement Learning using the Deep Distributed Distributional Deterministic Policy Gradients Algorithm
In this paper, the Deep Distributed Distributional Deterministic Policy Gradients (D4PG) reinforcement learning algorithm is adopted to train multi-agent actions in a cooperative game environment. The algorithm is used to train agents to play a game of tennis against each other; the architectures of the actor and critic networks are meticulously designed, and the D4PG hyperparameters are carefully tuned. The trained agents are successfully tested in the Unity Machine Learning Agents environment, and the testing demonstrates the strong performance of the D4PG algorithm in training multiple agents in complex environments.
17:50 A Review of Various Attack Methods on Air-Gapped Systems
In the past, air-gapped systems, which are isolated from networks, were considered very secure. Yet there have been reports of such systems being breached. These breaches have been shown to use unconventional means of communication, known as covert channels, such as acoustic, electromagnetic, magnetic, electric, optical, and thermal channels, to transfer data. In this paper, a review of various attack methods that can compromise an air-gapped system is presented, along with a summary of how efficient and dangerous each method can be. The capabilities of each covert channel are listed to better understand the threat it poses, and some countermeasures to safeguard against such attacks are mentioned. These attack methods have already been proven to work, and awareness of such covert channels for data exfiltration is crucial in many industries.
18:10 An Experimental Evaluation of the Advanced Encryption Standard Algorithm and its Impact on Wireless Sensor Energy Consumption
This paper investigates the effect of changing various parameters of the Advanced Encryption Standard (AES) on the energy consumption of a Wireless Sensor Network (WSN). In particular, the effects of the Electronic Code Book (ECB) and Cipher Block Chaining (CBC) modes are investigated with different key sizes, in a non-line-of-sight environment, and at different distances. An experiment was conducted using the Waspmote microcontroller, which utilizes XBee wireless modules. Our experimental evaluation showed that for highly sensitive data it is recommended to use CBC with the largest key size (i.e., 192 or 256 bits) while switching the wireless module's encryption ON, giving two layers of encryption at the cost of higher energy consumption. For moderately sensitive data, it is recommended to switch OFF the radio encryption and use the largest encryption key in the microcontroller, which offers a good compromise between the attained security level and the consumed energy. For less sensitive data, it is recommended to use ECB with the smallest key size (i.e., 128 bits) while switching the radio module's encryption OFF, since this configuration leads to the lowest energy consumption.
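A minimal sketch of the two AES modes compared in the paper, assuming the pycryptodome library; the plaintext and key size are illustrative:

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad

    key = get_random_bytes(24)           # 192-bit key; 16 or 32 bytes give AES-128/256
    data = pad(b"sensor reading: 23.5C", AES.block_size)

    ecb = AES.new(key, AES.MODE_ECB).encrypt(data)
    iv = get_random_bytes(16)
    cbc = AES.new(key, AES.MODE_CBC, iv).encrypt(data)
    print(ecb.hex(), cbc.hex(), sep="\n")
    # CBC chains blocks through the IV, hiding patterns that ECB leaks, at the
    # price of slightly more per-packet work (and energy) on the sensor node.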
18:30 Technology Acceptance Model Based on Needs, Social Influence and Recognized Benefits
This study aims to show that relying on the Technology Acceptance Model (TAM) is no longer suitable for modern-day technology, especially its ease-of-use perception, and identifies more objective foundations and traits that could enhance users' satisfaction as well as the usage and value of an Information Technology (IT) system. TAM adoption studies were all based on forming hypotheses and then evaluating whether or not to accept them; this study instead evaluates adoption after the fact and measures the factors that affected it. Therefore, no assumptions are made and no hypotheses were developed or tested. The study formed an adoption model based on an instrument for collecting facts from technology users. The collected data were used to determine the relationships between the instrument's different constructs and to identify the factors that encouraged users to use and then adopt the technology. We found Needs, Social Influence, Recognized Benefits, and System Reliability to be the key factors that motivate users to use and continue using a new technology, and an adoption model was derived from these findings. The study shows that ease of use is no longer the main factor influencing the adoption of IT products or services; there are more important factors, and this study examined them based on real outcomes rather than user perceptions.
18:50 Ali: The Intelligence Agent for E-government Services with Framework for Privacy and Security
"Tawasul" the National Suggestion & Complaint system, is the communication channel in Bahrain for citizens and residents to express suggestions, questions and complaints about 40+ government entity and their service. And like any call center have its limitation and workload issues. This paper demonstrates an Intelligent Agent 'Ali" powered by Artificial Intelligent that engage with eGovernment customer's in a conversational manner. "Ali" is trained to understand questions and respond (NLP) and deep learning, "Ali" can respond directly to all received calls, he can answer questions, take actions by integrating to any Call center system like CRM, directing requests to the appropriate area within government, filling out forms and Scheduling appointments. He is a Bahraini agent talks two languages (Arabic and English) and with intelligent capabilities. "Ali" will do a Cyber Risk Assessment to use a security framework that assure customers privacy and security with regularity aspect in Bahrain.

S4-C: Blockchain & Cyber Security-2

Chairs: Maan Aljawder, Mohamed Abdeazeem
17:30 Blockchain Decentralized IoT Trust Management
IoT adds flexibility in many application areas, making it easy to monitor and manage data instantaneously. However, IoT has many challenges regarding security and storage, and the third-party agents that IoT devices must trust do not provide a sufficient level of security between network peers. This paper proposes improving the trust, processing power, and storage capability of IoT in a distributed system topology by adopting the blockchain approach. An application, IoT Trust Management (ITM), is proposed to manage the trust of content shared through the blockchain network, e.g., in a supply chain. The essential point of ITM is that the trust management of IoT device data is done peer-to-peer (P2P), i.e., with no third party. ITM runs on individual Python nodes and interacts with front-end applications, creating decentralized applications (DApps). The IoT data are shared and stored in a ledger that holds the published details and data of each IoT device. ITM provides a higher security level for the IoT data shared on the network, offering strong security, speed, transparency, cost reduction, data verification, and adaptability.
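A toy hash-chained ledger showing why tampering with stored IoT data is detectable; it omits consensus, signatures, and networking:

    import hashlib, json, time

    def make_block(data, prev_hash):
        block = {"time": time.time(), "data": data, "prev": prev_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    # A device publishes readings; each block commits to the previous one, so any
    # tampering with stored IoT data breaks the chain and is detectable by peers.
    chain = [make_block({"device": "sensor-1", "temp": 23.5}, prev_hash="0" * 64)]
    chain.append(make_block({"device": "sensor-1", "temp": 23.9}, chain[-1]["hash"]))
    print(chain[-1]["hash"])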
17:50 Energy Trading Based on Smart Contract Blockchain Application
Energy, and clean energy in particular, is a topic of great concern and interest, and as needs differ from one area to another, different solutions emerge. Energy cost, availability, reliability, and trading rules are key factors in the energy market. Energy sharing is a hot topic, as a consumer can become part of a sustainable distributed system and profit from it as a prosumer. Blockchain technology provides a more secure, distributed, and fast way to transact financial payments between clients. This paper provides a simulation of the energy sharing concept that uses a smart contract to govern the sharing process on an isolated network, aiming for optimum performance and efficiency of the service; it creates a base for more complex conditions and scenarios and for linking the sharing process to financial transactions.
18:10 Privacy-Preserving Blockchain Framework Based on Ring Signatures (RSs) and Zero-Knowledge Proofs(ZKPs)
Blockchain emerged as the technology of choice for many applications because of its immutability property, which promotes network resilience against any removal or modification of data stored on the ledger. However, large-scale networks such as the Internet of Things (IoT), which may contain millions of nodes, can significantly increase blockchain size and raise serious privacy challenges. Moreover, the General Data Protection Regulation (GDPR), enforced since May 2018 in European countries, allows individuals and organizations to take control of their data through stringent rules that include "the right to be forgotten": where there are ethical or legal reasons, individuals and companies may want their stored personal data to be erased, whether the data is stored locally or on the ledger. The philosophy underpinning blockchains is that forcing nodes to erase data is akin to denying them a role as full nodes in the blockchain ecosystem. In this paper, we attempt to challenge this notion by proposing a model framework for enhanced privacy that is akin to blockchain erasure. Our approach combines ring signatures (long used to generate anonymous signatures) and Zero-Knowledge Proofs (ZKPs), which can help disguise users' wallet addresses.
18:30 Cyber-Physical Systems as Sources of Dynamic Complexity in Cyber-Physical-Systems of Systems
Cyber-Physical Systems (CPS) connect the physical world and the virtual world to intelligently enhance their operational environment. CPS are complex systems that can offer a variety of benefits, such as improved reliability, cost efficiency, or maintainability of physical systems. Nevertheless, these improvements are still accompanied by unprecedented complexity, high uncertainty, and overall vulnerability of the CPS. This paper provides an analysis of CPS as sources of dynamic complexity in Cyber-Physical Systems of Systems (CPSS), modeled as networks of agents that transfer, store, and generate information. Dynamic system complexity and its impact on CPS and CPSS are explored via Shannon entropy and joint entropy. As a final step, a trade-off between the benefits and drawbacks of increased dynamic complexity in CPS and CPSS is presented, leading to the conclusion that both are effective tools for value generation, while dynamic complexity also leads to decreased system efficiency.
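A minimal sketch of the Shannon entropy measure used here; the state distributions are illustrative:

    import numpy as np

    def shannon_entropy(p):
        # H(X) = -sum p log2 p for a discrete distribution p.
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # A system whose state distribution flattens over time becomes less predictable:
    print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits (max for 4 states)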
18:50 Revolutionising Higher Education by Adopting Blockchain Technology in the Certification Process
Distributed ledger technologies (DLT), such as blockchain, have recently gained prominence as one of the latest technological revolutions, and many current studies explore blockchain adoption in various fields, including the higher education sector. Education has emerged as one of the fields in which investment in blockchain-based systems and services is desirable. However, the extant literature lacks a guiding framework for integrating blockchain and other relevant technologies into certificating systems that issue authentic and sharable student credentials. Existing credentialing systems use analogue operations to manage certificate generation; these systems are slow, in some cases unreliable, and may raise other cultural and social issues depending on the context of the education system. Consequently, this paper presents an analysis of blockchain adoption in this field, specifically for the process of generating and sharing higher education student certificates. The paper outlines the first phase of an ongoing research project by proposing a certificate validating and sharing framework that guarantees the authenticity of shared higher education certificates while providing strong privacy and security within a blockchain network, and it includes the design of a blockchain-based certificating system architecture to address issues and solutions in higher education systems. Deploying blockchain in the higher education sector is thus expected to be beneficial, as it solves some existing issues with the certificating process.

S4-D: Deep & Machine Learning

Chairs: Qasem Obeidat, Ayman Al-khazraji
17:30 Elastic Net to Forecast COVID-19 Cases
Forecasting new daily cases of COVID-19 is crucial for medical, political, and other officials who handle day-to-day COVID-19-related logistics. Current machine learning approaches, though robust in accuracy, can be black boxes, specific to one region, and/or hard to apply for users with only nominal knowledge of machine learning and programming. This weakens the usefulness of otherwise robust machine learning methods, preventing them from being utilized to their full potential. The presented Elastic Net COVID-19 Forecaster, or EN-CoF for short, is therefore designed as an intuitive, generic, and easy-to-apply forecaster. EN-CoF is a multi-linear regressor trained on time series data to forecast the number of new daily COVID-19 cases. It maintains accuracy on par with more complex models such as ARIMA and Bi-LSTM, while gaining the advantages of transparency, generalization, and accessibility.
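A minimal sketch of an Elastic Net forecaster over lagged daily counts; the series, lag window, and regularization settings are illustrative assumptions, not EN-CoF's configuration:

    import numpy as np
    from sklearn.linear_model import ElasticNet

    # Toy daily case counts; the model regresses today's count on the previous 7 days.
    cases = np.array([int(100 + 5 * t + 20 * np.sin(t / 3)) for t in range(60)])
    LAGS = 7
    X = np.array([cases[t - LAGS:t] for t in range(LAGS, len(cases))])
    y = cases[LAGS:]

    model = ElasticNet(alpha=1.0, l1_ratio=0.5, max_iter=10_000).fit(X, y)
    tomorrow = model.predict(cases[-LAGS:].reshape(1, -1))[0]
    print(f"forecast for next day: {tomorrow:.0f} cases")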
17:50 Comparison of Naive Bayes and Decision Tree for Classifying Hepatocellular Carcinoma (HCC)
Cancer is a disease that causes abnormal cell growth in the body. One example is liver cancer, which has several types; Hepatocellular Carcinoma (HCC) is the most common of them. HCC usually affects people with cirrhosis and hepatitis B or C. Affected people sometimes show no specific signs or symptoms at an early stage, so it is often diagnosed only when it has reached a critical stage. Accurate classification is therefore needed to help the medical field identify people with HCC. This research aims to classify HCC patients using supervised machine learning: an HCC dataset from Al-Islam Hospital, Bandung, Indonesia, was classified using Naive Bayes and Decision Tree, and the two methods were compared to determine which works best in terms of accuracy. The result showed that Naive Bayes and Decision Tree achieved best accuracies of 98.25% and 100% respectively; considering this result, it is reasonable to conclude that Decision Tree performs better for HCC classification.
18:10 Effect of Mindfulness Meditation toward Improvement of Concentration based on Heart Rate Variability
Mindfulness meditation is a type of therapy for psychological conditions such as depression and anxiety that can significantly increase people's ability to concentrate and focus. This paper analyzes the effect of mindfulness meditation on concentration in terms of the heart rate variability (HRV) signal. A memory test is used as a medium to test the concentration level of 20 participants, whose electrocardiogram (ECG) signals were recorded. A peak detection method and the Pan-Tompkins method are used to extract features such as the PQRST peaks and the R-R interval from the ECG signal, and the extracted features are then classified using the KNN method for the before- and after-meditation memory tests. The results show that mindfulness meditation can improve participants' concentration level: the highest accuracy, sensitivity, and specificity (84.58%, 88.77%, and 80.39%) are obtained from the combination of all six features (P, Q, R, S, T peaks, and the R-R interval). The memory test analysis shows a higher memory test score (69.2%), fewer missed selections (60.8%), and a shorter completion time (2.268 minutes) after mindfulness meditation than before. The R-R interval, which represents heart rate variability (HRV), is important evidence that most participants are more relaxed and can handle their stress better after mindfulness meditation.
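A small sketch of the R-peak and R-R interval extraction step on a synthetic ECG-like trace using SciPy; a real pipeline would apply the Pan-Tompkins filtering stages first, and all parameters here are illustrative:

    import numpy as np
    from scipy.signal import find_peaks

    fs = 250  # sampling rate in Hz (illustrative)
    t = np.arange(0, 10, 1 / fs)
    # Synthetic ECG-like trace: sharp "R peaks" roughly every 0.8 s plus noise.
    ecg = (np.exp(-((t % 0.8 - 0.4) ** 2) / 0.0005)
           + 0.05 * np.random.default_rng(0).normal(size=t.size))

    peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs                # R-R intervals in seconds
    print("mean HR:", 60 / rr.mean(), "bpm | HRV (SDNN):", rr.std() * 1000, "ms")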
18:30 Detection of Parkinson's Disease (PD) Based On Speech Recordings using Machine Learning Techniques
Some neurodegenerative diseases, such as Parkinson's disease (PD) and Huntington's disease, cannot be cured because of the death of certain parts of the brain, and they mostly affect older adults. PD is an appalling neurodegenerative health disorder linked to the nervous system that affects motor functions. PD is often called an idiopathic disorder: environmental and genetic factors are implicated, but its causes remain unidentified. To diagnose PD, clinicians must take a history of the patient's brain condition and run various motor skills examinations, so accurate detection of PD plays a crucial role in providing proper treatment to patients. There has recently been interest in studying speech-based PD diagnosis, where extracted acoustic attributes are the most important requirement for predicting PD. The experiments were conducted on a speech recording dataset of 240 samples. This work studies the Least Absolute Shrinkage and Selection Operator (LASSO) feature selection method with multiple machine learning classifiers, namely Random Forest (RF), Deep Neural Network (DNN), Gradient Boosting Machine (GBM), and Support Vector Machine (SVM). Both the train-test split method and k-fold cross-validation were implemented to evaluate the performance of the classifiers. With LASSO, a Support Vector Machine with grid search cross-validation (SVM GSCV) outperformed the other 7 models, with 100.00% accuracy, 97.87% recall, 65.00% specificity, and 97.10% AUC under 10-fold cross-validation. Finally, a Graphical User Interface (GUI) was developed and validated through prediction over the UCI speech recording dataset, achieving 96.67% accuracy for binary classification with 30 samples.
18:50 Detecting Medical Rumors on Twitter Using Machine Learning
Twitter is a platform used extensively to share medical-related information, yet distinguishing rumors from trustworthy medical tweets is a challenge. The purpose of this paper is to develop an automated solution that detects medical rumors on Twitter using machine learning. The system streams real-time Twitter data related to the health field using a list of medical keywords that is automatically updated through the Wikipedia API. Assertions, a type of speech act that can be characterized as true or false, are detected through a classifier built for this project. The assertions provided by the classifier are then ranked by credibility, with verified sources ranked higher than non-verified ones. Non-verified sources are compared to trusted tweets by context and sentiment; if the tweets are dissimilar, they go through a machine learning classifier that analyzes user-based, content-based, and network-based features with an accuracy of 90%. The ranked tweets are then used to monitor the credibility of health-related tweets in real time. The system has been shown to be operational, and the algorithms, web application, and database integrate successfully to provide an effective, user-friendly interface.

Monday, December 21 19:10 - 19:30 (Asia/Bahrain)

CS: Closing Session

Dr. Lamya Al jasmi
Chair: Abdulla Alqaddoumi

Monday, December 21 17:30 - 19:10 (Asia/Bahrain)

S4-B: Wireless Sensor Network

Chair: Mohamed Baqer
17:30 Proposition of Low-Cost Wireless Sensor Network for Real-Time Monitoring and Early Wildfire Detection in Lebanon's Forests
Wildfires have been destroying Lebanon's forests, damaging its natural resources and ecological system. Recently, wildfires have harmed the economy and even caused losses of human lives. The absence of a national forest management policy and the lack of human and technical resources contribute to the degradation of Lebanon's forests. Early detection can help control fires before they spread over wide areas. In this paper, a low-cost, real-time wireless sensor network dedicated to the early detection of fires is proposed, taking into account flexibility, scalability, and power consumption requirements. The system architecture and the implementation of the proposed system are described, and the applied testing and verification processes demonstrate the system's effectiveness and feasibility.
17:50 Study of Attenuation of Centimetric Waves of the Fifth Generation of Cellular Telephony in Vegetation Areas using the Kriging Model
This article presents a study of a radiofrequency signal attenuation model for areas covered by vegetation (trees) using the Kriging technique, to be applied to the Fifth Generation (5G) of cellular telephony. The model was built on the data obtained by Ko, Junghoon, et al. in their 28 GHz outdoor measurement campaign held on the Korea Advanced Institute of Science and Technology (KAIST) campus in Daejeon, South Korea. The results show that the Kriging technique performed better than the models proposed by Ko, Junghoon, et al. and by fuzzy logic, achieving the lowest error.
18:10 A New technique for Underwater Wireless Sensor Network: Modified-Slotted-ALOHA Protocol
Increasing throughput, saving time, and decreasing energy consumption are vital issues in the Underwater Wireless Sensor Networks (UWSNs) research field, so there is a strong need to improve MAC protocol performance in UWSNs. In this paper, a modified Slotted-ALOHA protocol based on a back-off technique is proposed. The simulation results show that the proposed protocol achieves good throughput, energy consumption, and average delay in comparison with three other protocols: Pure ALOHA, Slotted ALOHA, and the Time-Saving ALOHA protocol with Slotted Carrier Sense.
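A toy simulation contrasting slotted ALOHA with a random back-off after collisions, in the spirit of the proposed modification; node count, offered load, and back-off window are invented:

    import random

    random.seed(1)

    def slotted_aloha(n_nodes=10, slots=10_000, max_backoff=16):
        backoff = [0] * n_nodes          # slots each node still waits after a collision
        delivered = 0
        for _ in range(slots):
            senders = [i for i in range(n_nodes)
                       if backoff[i] == 0 and random.random() < 0.2]
            backoff = [max(0, b - 1) for b in backoff]
            if len(senders) == 1:
                delivered += 1           # exactly one transmission: success
            elif len(senders) > 1:
                for i in senders:        # collision: draw a random back-off window
                    backoff[i] = random.randint(1, max_backoff)
        return delivered / slots

    print("throughput with back-off:", slotted_aloha())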
18:30 A Novel Genetic Model for Drone Positioning in Wireless Sensor Networks
Society is investing in Wireless Sensor Networks (WSN) to protect the environment and agriculture in rural areas and to increase security and enhance mobility in urban areas. In this context, Unmanned Aerial Vehicles (UAVs) can be used to collect the data recorded by sensors, but their use introduces new challenges to be mitigated. In particular, in the WSN field, UAVs must follow strategies that increase the lifetime of the network while decreasing the energy spent on the data transfer. Thus, in this work, we propose and evaluate a novel genetic algorithm that takes into account the most important requirements listed in the literature. Experimental evaluations with different WSN setups show that the proposed genetic model is more effective in optimizing the network lifetime than classical strategies presented in the literature.
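A compact sketch of a genetic algorithm for choosing a drone hover point, here minimizing the total distance to the sensors; the fitness function, operators, and parameters are illustrative assumptions, not the paper's model:

    import random

    random.seed(0)
    SENSORS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

    def cost(p):  # total distance from the drone hover point to all sensors
        return sum(((p[0] - x) ** 2 + (p[1] - y) ** 2) ** 0.5 for x, y in SENSORS)

    def evolve(pop_size=40, gens=100, sigma=5.0):
        pop = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=cost)
            elite = pop[: pop_size // 4]                          # selection
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = random.sample(elite, 2)
                cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2     # crossover
                children.append((cx + random.gauss(0, sigma),     # mutation
                                 cy + random.gauss(0, sigma)))
            pop = elite + children
        return min(pop, key=cost)

    best = evolve()
    print("best hover point:", best, "cost:", round(cost(best), 1))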
18:50 Co-Design Approach and Co-Simulation Tools for Networked Cyber-Physical Control Systems
Networked cyber-physical systems, which are formed by the synthesis of computational strategies, communication techniques as well as control theory, have recently received a great deal of attention in the control engineering framework. However, the design, analysis, and synthesis of networked control systems bring a number of challenges due to the network-induced imperfections. With this motivation in mind, the objectives of the current research are twofold. Firstly, an overview of the most important network-induced constraints is provided. Then, the co-design approach and co-simulation tools devoted to networked control system applications are discussed. The problems between the communication medium and the control method are elaborated. From this aspect, the present research provides a practical guide of simulation platforms devoted to networked control system applications.