2025 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT) Program

All times are Bahrain time (Asia/Bahrain).

Monday, November 17
9:00 - 9:40    OC: Opening Ceremony
9:40 - 10:10   KS1: Keynote-1
10:10 - 10:40  K2: Keynote-2
10:40 - 11:00  BD-1: Break Day-1
11:00 - 12:40  Parallel sessions:
               S1-A: Machine Learning & Big Data Analytics
               S1-B: Deep Learning and Vision Systems
               S1-C: Cybersecurity-1
               S1-D: Cybersecurity and Privacy
               S1-E: Smart and Sustainable Systems
12:40 - 13:00  CD-1: Closing Day-1

Tuesday, November 18
9:00 - 10:40   Parallel sessions:
               S2-A: Smart cities-1
               S2-B: Deep Learning-1
               S2-C: Cyber security-2
               S2-D: Telecommunication and Networking-1
               S2-E: Financial Technology & Artificial Intelligence
               S3-F: Educational Technology & Learning Analytics
10:40 - 11:00  BD-2: Break Day-2
11:00 - 12:40  Parallel sessions:
               S3-A: Artificial Intelligence
               S3-B: Deep Learning-2
               S3-C: AI & Analytics in Finance
               S3-D: e-Business; Technology's Impacts on Society
               S3-E: Big Data Analytics
12:40 - 13:00  CD-2: Closing Day-2

Wednesday, November 19
9:00 - 10:40   Parallel sessions:
               S4-A: Telecommunication and Networking-2
               S4-B: Deep Learning; Pattern Recognition
               S4-C: Robot Vision; Human Computer Interaction; Motion detection
               S4-D: Deep Learning; Machine Learning
               S4-E: e-Government; e-Health; e-Business
               S4-F: Machine Learning and Smart Applications
10:40 - 11:00  BD-3: Break Day-3
11:00 - 12:40  Parallel sessions:
               S5-A: Smart Cities-2
               S5-B: Cyber security-3
               S5-C: Deep Learning; Knowledge Representation; Pattern Recognition
               S5-D: Deep Learning; Image Processing
               S5-E: Internet of Things
12:40 - 13:00  CS: Closing Session

Monday, November 17

Monday, November 17 9:00 - 9:40 (Asia/Bahrain)

OC: Opening Ceremony

Monday, November 17 9:40 - 10:10 (Asia/Bahrain)

KS1: Keynote-1:

Monday, November 17 10:10 - 10:40 (Asia/Bahrain)

K2: Keynote-2:

Monday, November 17 10:40 - 11:00 (Asia/Bahrain)

BD-1: Break Day-1

Monday, November 17 11:00 - 12:40 (Asia/Bahrain)

S1-A: Machine Learning & Big Data Analytics

Chair: Ahmed Zeki
11:00 Understanding Changing Attitudes to Plastics Consumption in the Gulf using Big Data Analytics
Stuart J Barnes and Richard Rutter

Plastic comes from oil, and the majority of the world's oil reserves are in the Middle East. Much of the research into plastics consumption has focused on Western countries. The Gulf Cooperation Council (GCC) countries are also avid consumers of plastics, yet little empirical research has been done to understand the drivers of this consumption - an important input into any policy development to manage the future impact of plastics. This research uses the power of big data and text analytics to shed light on the drivers of plastics consumption in the six GCC countries. Using unsupervised and supervised machine learning on more than a million articles and tweets, we find that newspaper coverage focuses on social awareness, governance, and private/commercial topics, whilst social media is focused on plastic items and plastics issues. A longitudinal analysis of the six GCC countries finds remarkable differences in attitudes and policy approaches to plastics consumption, and distinct differences in the effect of different approaches on sentiment towards the topic of plastics. The paper provides some unique and important insights into perceptions of plastics consumption and pollution in the Islamic world. The paper includes policy recommendations and directions for future research.

11:20 Developing a Tailored Framework for Systematic Literature Reviews in Machine Learning (ML-SLR): Addressing Challenges and Advancing Research
Hasan Abdulla, Fatima Aljazeeri, Nabil Hewahi and Wael M El-Medany

The interdisciplinary nature, rapid evolution, and heterogeneity of machine learning (ML) research present significant challenges for conducting Systematic Literature Reviews (SLRs) effectively. This paper proposes a comprehensive framework tailored to these challenges for performing SLRs in ML research. The framework supports researchers through three essential stages. The planning stage focuses on defining precise research questions, establishing inclusion and exclusion criteria, and formulating a rigorous review protocol. It is followed by the conducting stage, which involves implementing a systematic search, selecting studies, extracting data, performing quality assessment, and synthesizing results. The final reporting stage emphasizes presenting findings comprehensively and sharing SLR artifacts to promote transparency, replicability, and reuse. The framework incorporates methodologies adapted for ML, including selecting relevant digital libraries, applying discipline-specific quality criteria, and adopting synthesis techniques suitable for ML contexts. The framework was validated in a real-world ML scenario to confirm practicality and effectiveness. The study contributes a robust tool that enhances the reliability, reproducibility, and insightfulness of Machine Learning Systematic Literature Reviews (ML-SLRs), thereby advancing the consolidation of knowledge and progress in ML research.

11:40 Business Registers Data Processing for Entity Resolution and Relationship Mapping Using Graph Database
Richard Marko and Lukas Grejtak

We evaluate the current state of international business registers by analyzing their accessibility, completeness, level of digitalization, and associated costs across jurisdictions. We introduce a unified data pipeline designed to collect, clean, tokenize, normalize, and integrate heterogeneous records, with a focus on resolving entities and mapping cross-border relationships between individuals and legal entities. The system automates the acquisition of structured data from Slovak and Czech business registers and currently maintains information on over 500,000 individuals. Our methodology combines string normalization, fuzzy and phonetic matching, and graph database techniques to improve the accuracy of entity resolution. In a case study involving 84 companies, the system achieved a deduplication rate of 69.2%, reducing 1,952 person records to 601 unique individuals. The deduplication logic is applied not only to persons, but also to legal entities, addresses, and positions, enabling consistent and reliable record consolidation across the entire data model. The system constructs large-scale corporate graphs with hundreds of thousands of nodes and relationships, uncovering previously unlinked ownership and management connections. The tool is accessible via a web interface and REST API, and provides a foundation for scalable, automated corporate network analysis for financial analysts, regulators, and investigative researchers.
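
To illustrate one ingredient of the entity-resolution step described above, the following minimal Python sketch groups person records whose normalized names are close under a fuzzy similarity threshold. It uses only the standard library; the example names, threshold, and greedy clustering are illustrative assumptions and do not reproduce the authors' pipeline, which also employs phonetic matching and a graph database.

```python
# Illustrative sketch (not the authors' pipeline): grouping person records whose
# normalized names are close under a fuzzy similarity threshold.
from difflib import SequenceMatcher
import unicodedata

def normalize(name: str) -> str:
    """Lowercase, strip accents, and collapse whitespace."""
    stripped = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return " ".join(stripped.lower().split())

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

def deduplicate(names: list[str]) -> list[list[str]]:
    """Greedy clustering: each record joins the first cluster it matches."""
    clusters: list[list[str]] = []
    for name in names:
        for cluster in clusters:
            if similar(name, cluster[0]):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

records = ["Ján Novák", "Jan Novak", "JAN NOVAK", "Lukas Grejtak"]
print(deduplicate(records))  # [['Ján Novák', 'Jan Novak', 'JAN NOVAK'], ['Lukas Grejtak']]
```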

12:00 Automated Data Collection and Registration System for Politically Exposed Persons: A Solution for Financial Innovation
Richard Marko and Ctibor Kovalcik

The publication focuses on the automated collection of data concerning politically exposed persons (PEPs) from publicly accessible web sources. Its primary objective was to design and implement a system that efficiently acquires, processes, and stores information about PEPs while adhering to anti-money laundering (AML) regulations and best practices. The work involved analyzing the legal and regulatory importance of PEPs and identifying relevant, though often unstructured and inconsistent, data sources. A robust system architecture was developed to automate data extraction, validation, and normalization, ensuring reliable handling of diverse formats and sources. Beyond basic identity information, the system gathers additional data such as photographs, biographical details, and contextual metadata to create a comprehensive profile. Multiple external sources were integrated to facilitate cross-referencing and verification of the collected data. During implementation, appropriate technologies and tools were chosen to support scalable processing and structured storage within a database environment. Finally, the solution was critically assessed in comparison to existing methods, highlighting its benefits, limitations, and practical applicability in the AML field.

12:20 Drone Super Resolution for Computer Vision Tasks: A Brief Survey
Aysha Aljawder and Alauddin Yousif Al-Omary

With the increased usage of drones in critical fields such as security and agriculture, it is imperative that the images and videos captured for Computer Vision (CV) tasks are of high quality. Super Resolution (SR) has emerged as a major technique to enhance the quality of visual data. Due to this, there have been numerous applications of Drone Super Resolution (DSR) in the literature. To aggregate these applications, this paper conducts a brief survey on the use of DSR to improve CV tasks. Recent literature from 2020 to 2025 that directly addresses DSR has been selected for this survey. Notable findings include the current research's focus on images and its relative neglect of videos, the utilization of Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) for DSR, the main fields in which DSR is applied, and the main CV tasks used with DSR as well as its main evaluation metrics. Key challenges identified in this domain concern the utilization of datasets for DSR, the quality trade-offs of different DSR models, and the lack of video DSR. The information collected is intended to direct further research in DSR to address these challenges and refine State-of-the-Art (SOTA) results.

Monday, November 17 11:00 - 12:40 (Asia/Bahrain)

S1-B: Deep Learning and Vision Systems

Chair: Sameh Foulad
11:00 LightViT: Lightweight Vision Transformers for Real-Time Microcrack and Scratch Detection in Glass
Fatema Albalooshi, Mohammed Redha Qader, Yasser Ismail and Fred Lacy

This paper presents LightViT, a novel lightweight Vision Transformer (ViT) architecture tailored for real-time microcrack and scratch detection in glass. While conventional ViTs offer impressive accuracy in visual recognition tasks, their high computational complexity poses significant challenges for deployment on resource-constrained platforms. LightViT addresses this limitation by integrating efficient attention mechanisms with optimized network pruning techniques, resulting in a substantial reduction in model size and computational overhead without sacrificing detection performance. Experimental evaluations on a publicly available glass defect dataset reveal that LightViT achieves a detection accuracy of 98.5% for microcracks and 97.2% for scratches, while maintaining an inference speed of 30 frames per second (FPS) on typical embedded hardware. This corresponds to a 75% reduction in latency compared to state-of-the-art, heavy-weight ViT models. The proposed framework offers a practical and efficient solution for automated quality control in glass manufacturing, enabling robust, real-time defect detection in resource-limited environments.

11:20 Accelerating MobileNetV2's Initial Convolution with a Hardware-Efficient Verilog Core
Eman Mersal Yaqoob and Fatema Albalooshi

This paper introduces an optimized hardware accelerator for the first 3x3 convolution layer of MobileNetV2, implemented as a modular Verilog design. The accelerator leverages parallel 8-bit multipliers that generate 16-bit products, which are efficiently summed using a hardware summation tree. All outputs are registered as 32-bit values, while the input consists of 72-bit patches and kernels. Operating at 150 MHz, the design achieves a low latency of 13 ns per convolution with registered output. To support real-time processing of 30 frames per second for 224x224 resolution video, the system requires seven parallel accelerator cores. The architecture maintains strict bit-level equivalence with TensorFlow Lite's depthwise convolution, ensuring compatibility with established deep learning workflows. Targeted at edge AI applications such as smartphone cameras and IoT devices, the hardware prioritizes efficiency and predictability, delivering consistent low-latency performance essential for real-time vision systems. The modular Verilog implementation facilitates seamless integration with commercial neural network accelerators, enabling scalable deployment in embedded platforms. When used alongside standard deep learning toolchains, this accelerator preserves full compatibility with MobileNetV2 inference pipelines, making it a practical solution for hardware-constrained, real-time AI applications.
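
As a rough software analogue of the arithmetic described above (not the Verilog RTL itself), the following numpy sketch multiplies nine signed 8-bit activations and weights (a 72-bit patch and kernel), forms 16-bit products, and accumulates them through a pairwise summation tree into a 32-bit result; quantization scaling and the TensorFlow Lite equivalence checks are omitted.

```python
# Illustrative numpy sketch of the bit widths named in the abstract: 8-bit inputs,
# 16-bit products, a summation tree, and a 32-bit registered output.
import numpy as np

def conv3x3_fixed_point(patch: np.ndarray, kernel: np.ndarray) -> np.int32:
    """One output of a 3x3 convolution in 8-bit fixed point."""
    assert patch.shape == (3, 3) and kernel.shape == (3, 3)
    a = patch.astype(np.int16).ravel()        # widen before multiplying
    w = kernel.astype(np.int16).ravel()
    products = (a * w).astype(np.int16)       # 8b x 8b fits in 16 bits
    acc = products.astype(np.int32)           # pairwise summation tree below
    while acc.size > 1:
        if acc.size % 2:                      # carry the odd element forward
            acc = np.append(acc, np.int32(0))
        acc = acc[0::2] + acc[1::2]
    return acc[0]                             # 32-bit accumulated result

rng = np.random.default_rng(0)
patch = rng.integers(-128, 128, size=(3, 3), dtype=np.int8)
kernel = rng.integers(-128, 128, size=(3, 3), dtype=np.int8)
print(conv3x3_fixed_point(patch, kernel))
```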

11:40 Sub-problem Reduction Using Hypervolume Difference for Decomposition-based Many-objective Optimization
Makoto Ohki

This paper proposes novel algorithms to solve a many-objective optimization problem (MaOP) by decomposing the MaOP into two- or three-objective optimization problems (MOPs) and reducing the number of these sub-problems. First, all combinations of two or three objectives are enumerated from all objectives, and each objective subset is defined as a sub-problem. Next, for an external non-dominated solution set, each sub-problem performs the non-dominated selection and obtains a sub-problem non-dominated solution set. The conventional method employs the size of the sub-problem non-dominated solution set as the contribution of the sub-problem and repeatedly selects the sub-problem with the greatest contribution, in descending order of contribution based on the set covering model, until the non-dominated solution set corresponding to the selected sub-problems sufficiently covers the original non-dominated solution set. When certain non-dominated solutions are unevenly distributed in the objective space, those concentrated solutions are associated with specific sub-problems, and the sub-problems are not necessarily ranked properly. In other words, the domain of the objective space covered by the surviving sub-problems that are not removed loses diversity. In order to reduce as many sub-problems as possible while maintaining as much diversity as possible in the objective space, this paper proposes to employ the sum of hypervolume differences (HVDs) of the non-dominated solutions of each sub-problem as the sub-problem contribution. Furthermore, two termination conditions are prepared: the coverage rate of the non-dominated solution set of the selected sub-problems, as in the conventional method, and the HVD rate of the selected sub-problems. The proposed algorithm, the conventional method, and NSGA-III, which is considered effective for solving MaOPs, are applied to test problems to verify their effectiveness.

12:00 Advancements in Image Segmentation and Classification: A Survey
Fatema Sayed Hasan Alawi and Alauddin Yousif Al-Omary

Image segmentation and classification are fundamental tasks in computer vision, forming the backbone of many real-world applications from autonomous driving to medical diagnostics. With the rise of deep learning and attention-based architectures, these two tasks have witnessed rapid advancements in accuracy, efficiency, and adaptability. This survey reviews recent progress in image segmentation and classification, highlighting benchmark models, datasets, applications, challenges, and future directions.

12:20 The Power of Deep Learning in Intrusion Detection
Zainab Salman and Alauddin Yousif Al-Omary

Intrusion Detection Systems (IDSs) are an essential part of cybersecurity, aimed at detecting and preventing unauthorized access or anomalies in computer networks. This paper presents an improved Convolutional Neural Network (Enhanced CNN) as a deep learning model for intrusion detection that uses the NSL-KDD dataset, a widely used benchmark dataset. Data preparation techniques such as normalization, one-hot encoding, and K-fold cross-validation have been performed on the dataset before processing. The results show that the model has achieved a high test accuracy of 99.08%, which is superior to other research in this context, along with high results in precision, recall, and F1-score of 99.4%, 99.4%, and 99.1%, respectively. The mentioned results indicate that the proposed model outperforms existing related works, not only in terms of accuracy but also in terms of other evaluation metrics.
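
The preprocessing steps named in the abstract (normalization, one-hot encoding, K-fold cross-validation) could look like the following scikit-learn sketch on NSL-KDD-style data; the file name and column names are assumptions, and the authors' Enhanced CNN itself is not shown.

```python
# Sketch of the data-preparation steps only; the classifier trained inside the
# folds is a placeholder for the authors' Enhanced CNN.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import StratifiedKFold

df = pd.read_csv("KDDTrain+.csv")              # assumed local copy of NSL-KDD
categorical = ["protocol_type", "service", "flag"]
numeric = [c for c in df.columns if c not in categorical + ["label"]]

preprocess = ColumnTransformer([
    ("num", MinMaxScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

X = preprocess.fit_transform(df.drop(columns=["label"]))
y = (df["label"] != "normal").astype(int)      # binary: attack vs. normal

for fold, (tr, te) in enumerate(StratifiedKFold(n_splits=5, shuffle=True,
                                                random_state=42).split(X, y)):
    # train the CNN on X[tr], y[tr] and evaluate on X[te], y[te] here
    print(f"fold {fold}: {len(tr)} train / {len(te)} test samples")
```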

Monday, November 17 11:00 - 12:40 (Asia/Bahrain)

S1-C: Cybersecurity-1

Chair: Wael M El-Medany

Monday, November 17 11:00 - 12:40 (Asia/Bahrain)

S1-D: Cybersecurity and Privacy

Chairs: Yaqoob S Al-Slais, Faisal A Alghamdi
11:00 The Hacker View: Significance of External Attack Surface Management to Organizations
Mohamed Sami Ragheb and Alauddin Yousif Al-Omary

Organizations often struggle to adequately identify and assess their Attack Surface (AS), leaving numerous critical digital assets unsecured and vulnerable to adversarial exploitation. As these digital assets play a pivotal role in projecting an organization's reputation and presence on the Internet, ensuring their security is paramount. This paper contributes a structured analysis of External Attack Surface Management (EASM), clarifying its processes (asset discovery, categorization, mapping, and risk scoring) and demonstrating their practical application through a commercial platform example. Unlike existing surveys, the paper highlights operational insights from applying EASM to real-world contexts and proposes future research directions to improve detection accuracy and asset validation.

11:20 Lite-FedMask: Lightweight Secure Aggregation for Federated Learning in Resource-Constrained IoT Environments
Walla Khalaifat, Jalal Khlaifat, Wael M El-Medany and Haroun Alryalat

Recently, the number of Internet of Things (IoT) devices has been on the rise, generating vast amounts of data. However, traditional machine learning methods require this data to be sent to a central server for training, which not only leads to bandwidth inefficiencies but also raises significant privacy concerns. This growing challenge highlights the need for alternative approaches that can better handle data locally. Federated learning has emerged as a preferred solution, allowing for local training on devices, sending only model updates to the central server for aggregation to accommodate the growing number of IoT devices. Thus, ensuring the security of these model updates during the aggregation process is vital. Current methods mainly depend on various forms of encryption or differential privacy, which can be complicated and may lead to reduced accuracy. Additionally, these approaches are often resource-intensive, posing challenges for resource-constrained IoT devices. This research aims to propose a lightweight secure aggregation technique for federated learning tailored to resource-constrained IoT environments.
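
The abstract does not detail the masking scheme, so the sketch below illustrates one common lightweight idea, pairwise additive masking, in which each pair of clients derives a shared random mask that cancels when the server sums all masked updates. It is an assumption-laden illustration, not Lite-FedMask itself.

```python
# Pairwise additive masking sketch (an assumed scheme, not the authors' method):
# masks derived from pairwise shared seeds cancel out in the aggregate.
import numpy as np

def masked_update(client_id, update, all_ids, seed_matrix):
    masked = update.copy()
    for other in all_ids:
        if other == client_id:
            continue
        rng = np.random.default_rng(seed_matrix[min(client_id, other),
                                                 max(client_id, other)])
        mask = rng.standard_normal(update.shape)
        masked += mask if client_id < other else -mask   # signs cancel pairwise
    return masked

n_clients, dim = 4, 6
rng = np.random.default_rng(0)
updates = [rng.standard_normal(dim) for _ in range(n_clients)]
seeds = rng.integers(0, 2**31, size=(n_clients, n_clients))  # stand-in for shared secrets

masked = [masked_update(i, updates[i], range(n_clients), seeds) for i in range(n_clients)]
print(np.allclose(sum(masked), sum(updates)))   # True: masks cancel in the aggregate
```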

11:40 Cybersecurity Assurance from Personal Viewpoints: Examining the Role of Cookies and Web Privacy
Ali H Zolait, Nawraa Alzaki, Sakeena Althahba and Zainab Naser

Cybersecurity and online privacy are crucial in the digital era, as more people use the internet daily. To ensure that users are aware of the privacy and security implications of cookies, it is essential to educate them on controlling and removing cookies, to promote safe and private cookie storage and usage procedures, and to teach companies about the risks associated with cookies and how to protect their privacy. This will help users make informed decisions regarding their usage and protect their privacy. The researchers adopted a quantitative method and conducted a survey, covering data collection and analysis, sample size, and the selection process, to assess the knowledge and maturity levels of citizens and residents in the Kingdom of Bahrain. This study explored the negative effects of cookies on cybersecurity, highlighting the need for increased awareness and education about the issue through literature and case studies. The study investigates the impact of cookies on user experience, privacy, and security, focusing on tracking, data collection, user permission, and their effects on cybersecurity. The researchers discovered that it was difficult to identify and accurately measure the variables affecting user awareness. Cookies pose privacy concerns as they track user activities, provide targeted advertisements, and gather private data, potentially leading to privacy violations and misuse without user consent. It is crucial to raise public awareness about the consequences of cookies and Internet privacy on cybersecurity.

12:00 Toward a Centralized Solution for Aggregating Penetration Testing and Risk Mitigation
Maryam Moosa Alkhunaizi, Taif Nabeel Shehada and Abdulla Aldoseri

As cybersecurity threats escalate, organizations struggle to track vulnerabilities and manage them efficiently. Existing tools produce fragmented reports that complicate decision-making for both technical and non-technical users. This research proposes a centralized, dockerized solution that consolidates multi-source vulnerability data into a single interface. Leveraging Large Language Models (LLMs), the solution generates tailored recommendations and training to address specific security issues. Designed for adaptability and scalability, the proposed solution aims to simplify cybersecurity operations and enhance response to evolving threats.

12:20 Cybersecurity Challenges in Remote Work: A Study on Employee Awareness and Adoption
Ali H Zolait, Ali Hasan and Ammar Saif

The purpose of this study is to investigate the acceptance and awareness of remote work by employees and its security challenges. In addition, it discusses the challenges, risks, and causes of security breaches in remote work. A quantitative approach was adopted to collect data and achieve research objectives. A research questionnaire was developed, pretested, and then randomly distributed to a sample of Bahraini employees. An Extended Technology Acceptance Model (TAM) was adopted to determine, validate, and test factors that affect the use of remote work by employees. The findings showed that the factor that had the most effect on the intention to use was perceived usefulness, which in turn had the greatest effect on the actual use of remote work. In addition, Bahraini employees are accepting and aware of remote work and its security methods. The user knowledge and the people affecting each other make the use of remote work more secure. Therefore, organizations must offer more programs and courses about information security and its best practices. In recent years, remote work has gained significant prominence among policymakers, individuals, and the academic community. To investigate this phenomenon, a novel research framework grounded in the Technology Acceptance Model (TAM) was employed, leading to the development of an integrated conceptual model.

Monday, November 17 11:00 - 12:40 (Asia/Bahrain)

S1-E: Smart and Sustainable Systems

Chair: Sami Mohd Dagash
11:00 A Framework for Digital Contact Tracing for COVID-19 Pandemic based on the Kingdom of Bahrain Smartphone Application
Mohamed Baqer, Ahmed Boung and Hala Hatoum

During the COVID-19 pandemic, smartphones and digital contact tracing applications played a pivotal role in controlling the spread of the virus by implementing social distancing and quarantine enforcement. Numerous countries launched their own national contact tracing apps, each tailored to local needs but sharing core functionalities. This study proposes a framework intended as a blueprint for the design and development of such applications. The proposed framework identifies key building blocks for mobile-based contact tracing, relevant to both the COVID-19 context and potential future pandemics requiring similar interventions. This framework is informed by an analysis of the BeAware app, developed by the Kingdom of Bahrain, with particular attention to its key features and design choices.

11:20 Intelligent Watering System for Crop Fields Using IoT-Based Automation and Environmental Sensing
Mohammad Maroof Siddiqui

Efficient use of water is crucial for agricultural productivity, especially in regions with water scarcity and erratic rainfall. Traditional irrigation often results in water wastage, poor crop yield, and high operating costs. This paper describes the development and implementation of an intelligent watering system for crop fields based on the use of Internet of Things (IoT) technologies and environmental sensors. The system monitors soil moisture, temperature, and humidity, and controls watering through data analytics. The aim was not only to optimize water use but also to minimize human intervention and conserve resources. The proposed system is scalable and low cost, and has potential implications for sustainable farming by combining IoT platforms, wireless communication modules, and decision algorithms.

Keywords: IoT-based Irrigation, Smart Agriculture, Environmental Sensing, Automated Watering System, Precision Farming.

11:40 Efficient Video Transmission over LoRaWAN Using Metadata and Generative AI
Abdelhak Heroucha, Thiago Abreu and Abdelhamid Mellouk

Transmitting video data over low-power, long-range networks remains a major challenge. To address this, we introduce a lightweight IoT architecture that optimizes video transmission using edge computing and metadata extraction. The system utilizes the Luckfox Pico Max RV1106 to perform real-time object detection with YOLOv5 Tiny at the edge, extracting metadata such as object classes, bounding boxes, and confidence scores. Rather than transmitting full video, only key metadata and selected frames are sent via LoRaWAN (RYLR998), significantly reducing data size. A server-side pipeline, integrating large language models (LLMs) and video generation tools, reconstructs contextual visualizations from this metadata, offering an efficient alternative to traditional streaming. Experimental results show up to 94% reduction in bandwidth usage while maintaining perceptual video quality. This architecture supports scalable, energy-efficient deployments in applications like remote surveillance, smart cities, and autonomous systems.

12:00 Exploring the Influence of AI-Powered Tools on English-Medium Instruction: A Case Study of Teachers' Perspectives on Magic School AI
Dhikra Amel Bouzid, Rim Gasmi, Abdellatif Boudiaf, Abdelgheni Benhamed, Khathir Chine and Moussa Boubekeur

This study explores the role of Artificial Intelligence (AI) tools in enhancing the effectiveness of English as a Medium of Instruction (EMI) in higher education. With the increasing integration of English in non-native academic contexts, the research investigates how AI technologies-such as language models, speech recognition systems, and intelligent tutoring platforms-support both instructors and students in overcoming linguistic and pedagogical challenges. Through a qualitative methodology involving literature review and survey analysis, the study identifies AI's potential to personalize learning, improve language proficiency, and foster learner autonomy. It further examines how AI tools facilitate real-time feedback, pronunciation accuracy, and vocabulary development, thereby strengthening the overall EMI experience. The findings suggest that AI integration in EMI not only enhances comprehension and communication but also bridges the gap between language competence and subject mastery. However, the study also acknowledges challenges such as technological accessibility, ethical concerns, and the need for instructor training. The paper concludes that while AI tools are not a panacea, they offer valuable support in EMI environments when integrated thoughtfully and strategically. Recommendations are provided for future research and pedagogical practices aimed at optimizing AI use in multilingual academic settings.

12:20 Decelerative Capabilities of Eddy Currents in Electromagnetic Braking Systems as a Reliable Revolutionary Technology in Automotive Industry
Mansoor AbdulRedha Ahmed, Sayed Munther Radhi Hasan, Jaffar Hasan Abdulla and Salwa Baserrah

Eddy current braking is a method of electromagnetic braking that uses eddy currents to produce a resistive force without any direct mechanical contact. The proposed setup produces a magnetic brake that uses an electromagnet to produce a magnetic field which penetrates a rotating brake disc, generating eddy currents in that disc, which push against the rotating motion without any physical contact. By eliminating friction, this process reduces maintenance demands and enhances the longevity of halting systems. The effect of eddy currents has been investigated on different materials. The prototype was designed, developed, and assessed under IEEE, IEC, and ASME standards and guidelines to analyze the braking torque. The experiments clearly illustrated the system's capability to complement and supplement current traditional deceleration mechanisms, especially in high-speed applications, where the system would perform most effectively. This validated experimental research strongly supports the current trend in the vehicular industry to adopt sustainable, economical, affordable, and efficient braking solutions, boosting non-contact braking, and contributing to integrating this highly effective technology into modern vehicles, while complying with engineering safety and performance standards.

12:40 Comparative Analysis to Assess YOLO Model Performance in AI-Based Dynamic Fire Protection System
Mohammed Majid M Al Khalidy and Ahmed Mohammed Majid Al Khalidi

Ensuring public safety is paramount, particularly in fire protection systems, to mitigate property damage and, most critically, to preserve human life. Conventional fire detection systems, especially those deployed in large venues and double-volume halls, often exhibit limitations in detection accuracy, range, and response time. This study explores object detection performance by integrating deep learning-based object detection models. Four YOLO (You Only Look Once) versions, 8, 9, 10, and 11, were used and compared, with an intelligent dynamic fire hose nozzle to enhance fire detection capabilities and facilitate real-time emergency response. A system prototype incorporating the YOLO models was developed to assess the efficacy of each approach. The performance of the integrated system was systematically evaluated to determine its effectiveness in improving detection accuracy and response efficiency. The findings highlight the potential of model-based fire detection systems for large-scale implementation, offering a promising advancement in intelligent fire protection technology.

Monday, November 17 12:40 - 13:00 (Asia/Bahrain)

CD-1: Closing Day-1

Tuesday, November 18

Tuesday, November 18 9:00 - 10:40 (Asia/Bahrain)

S2-A: Smart cities-1

Chairs: Ali H Zolait, Christina Georgantopoulou
9:00 Compressive Strength Prediction of Reactive Powder Concrete Through Artificial Neural Networks
Muhammad Ajmal, Danish Ahmed, Md Shah Alam, Nuha Alzayani, Rashed Abdulrahman Ismaeel and Sani I. Abba

In this study, an artificial neural network (ANN) model was employed to predict the 28-day compressive strength of reactive powder concrete (RPC), based on critical mix design variables, which include the water-to-binder ratio, cement content, silica fume dosage, and superplasticizer content. A feedforward neural network, applying the Levenberg-Marquardt algorithm for its training process, was investigated through four separate architectures (30, 40, 50, and 60 hidden neurons) to identify optimal performance outcomes. Correlation analysis disclosed that the water-cement ratio exerted the most significant negative influence on strength (r = -0.71), whereas superplasticizer content demonstrated the highest positive correlation (r = 0.69). The model comprising 60 neurons exhibited superior predictive accuracy, achieving an ideal training fit (R² = 1.0) and robust generalization on test data (R² = 0.90), accompanied by the minimal validation mean squared error (MSE = 6.51). Smaller networks (30-50 neurons) displayed expedited convergence but slightly diminished generalization capability. The findings underscore the efficacy of ANNs in modeling the intricate behavior of RPC, thus providing a dependable instrument for mix optimization with reduced experimental trials. This study presents pragmatic insights into the advancement of high-performance concrete design through data-driven methodologies.
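
A hedged sketch of this kind of setup is shown below: a small feedforward regressor mapping the four mix-design variables to 28-day strength. The paper trains with the Levenberg-Marquardt algorithm, which scikit-learn does not offer, so the stand-in uses the default Adam solver; the file name, column names, and hidden size are illustrative assumptions.

```python
# Feedforward regression sketch for the four mix-design inputs described in the
# abstract; not the authors' MATLAB/Levenberg-Marquardt setup.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("rpc_mixes.csv")   # assumed dataset of RPC mix designs
features = ["water_binder_ratio", "cement", "silica_fume", "superplasticizer"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["strength_28d"], test_size=0.2, random_state=1)

model = MLPRegressor(hidden_layer_sizes=(60,), max_iter=5000, random_state=1)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("R2 :", r2_score(y_test, pred))
print("MSE:", mean_squared_error(y_test, pred))
```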

9:20 Optimizing Network Bandwidth Allocation in Smart Cities: A Bibliometric and Scoping Review
Erick Sorongan, Arif Djunaidy and Tony Dwi Susanto

The efficient allocation of communication bandwidth is crucial for the effective management of resources in smart cities. This study employs a comprehensive approach that combines a scoping review and bibliometric analysis to systematically map and synthesize existing research on bandwidth allocation within the context of smart cities. Adhering to the PRISMA 2020 guidelines, a structured search was conducted in the Scopus database (2018-2024) employing field-restricted Boolean queries (TITLE-ABS-KEY) with language and document-type filters. From a total of 843 records, 736 publications were selected for inclusion after rigorous screening. The bibliometric analysis subsequently normalized 2,267 keywords into 81 distinct terms, revealing that the five most frequently occurring terms-‘smart city' (201), ‘machine learning' (177), ‘deep learning' (157), ‘Internet of Things' (145), and ‘traffic prediction' (49)-account for approximately 32% of all co-occurrence links. A country-level analysis further identifies China as the leading contributor with 6,427 citations, followed by the United States and India, reflecting their substantial research output. Three distinct research clusters were subsequently identified: (i) digital sustainability and e-government, (ii) AI-driven optimization and network management, and (iii) intelligent transportation. These findings directly inform the development of adaptive and scalable bandwidth allocation strategies, highlight existing gaps in the areas of explainability, privacy, and cost-awareness, and provide a reproducible research agenda for municipal planners and scholars.

9:40 Software-Defined Radio Implementation of M-PSK Transceiver for Narrowband PLC Using MATLAB and USRP
Kenneth Chauke, Akintunde Oluremi Iyiola and Thokozani Calvin Shongwe

This paper presents a real-time implementation and experimental evaluation of a single-carrier M-ary Phase-Shift Keying (M-PSK) (4/8/16-PSK) transceiver for narrowband power line communication (NB-PLC). MATLAB/Simulink and Universal Software Radio Peripheral (USRP) hardware were used, with a galvanic isolation coupling circuit. Signal-to-noise ratio (SNR) levels of 10, 15, and 20 dB are emulated via digital scaling. A "Hello world" payload is transmitted across 1,000 frames under three controlled test conditions: no active load, dimmer-only, and dimmer-plus-heater. System performance is assessed through frame synchronization and bit error rate (BER) analysis. Synchronization misalignments account for up to 80% of errors in high-noise conditions. Detection probability dropped from 99.5% to 78%, while root mean square error (RMSE) rose from 0.15 to 1.20 samples. BER dropped below 8×10⁻⁴ for 4-PSK at 20 dB but reached 0.40 for 16-PSK at 10 dB. These results highlight NB-PLC's potential and constraints for reliable, low-cost smart city communications, including smart metering, street lighting, and distributed IoT sensing.
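
The core measurement idea can be sketched in a few lines of Python: generate M-PSK symbols, pass them through an AWGN channel at a target SNR, and count symbol errors by nearest-phase detection. Frame synchronization, the galvanic coupling circuit, and the USRP hardware path are deliberately omitted, so this is only a toy baseline, not the paper's testbed.

```python
# Toy M-PSK-over-AWGN symbol error rate measurement (baseline sketch only).
import numpy as np

def psk_ser(M: int, snr_db: float, n_symbols: int = 100_000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    symbols = rng.integers(0, M, n_symbols)
    tx = np.exp(2j * np.pi * symbols / M)                 # unit-energy M-PSK
    noise_power = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n_symbols)
                                        + 1j * rng.standard_normal(n_symbols))
    rx = tx + noise
    detected = np.mod(np.round(np.angle(rx) * M / (2 * np.pi)), M).astype(int)
    return float(np.mean(detected != symbols))            # symbol error rate

for M in (4, 8, 16):
    for snr in (10, 15, 20):
        print(f"{M}-PSK @ {snr} dB: SER ≈ {psk_ser(M, snr):.4f}")
```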

10:00 Real-Time Causal Pattern Discovery in Structure Fires: A Cloud-Based AI Framework for Smart Emergency Response
Swarnamouli Majumdar, Lorant Szolga, Abhishek Bhardwaj and Sélim Gawad

Structure fires pose significant risks to human life and infrastructure, yet real-time analytical capabilities for fire incident data remain limited. This paper presents a novel cloud-based artificial intelligence framework that enables interpretable, data-driven pattern discovery from large-scale fire incident records in near real time. Unlike prior studies that focus primarily on predictive accuracy or post-event analytics, this work emphasizes causal understanding and operational integration. The framework combines unsupervised clustering and association rule mining with a scalable AWS-native deployment. Measurable contributions include identifying geotemporal clusters of structure-fire causes across U.S. regions, quantifying rule-based relationships between ignition sources and contributing factors, and demonstrating a reduction in data-to-decision latency through automated daily dashboard updates. Results from the 2024 U.S. Structure Fire Incident dataset show that the system effectively transforms raw NFIRS records into proactive, explainable insights-offering a benchmark for real-time, AI-enabled fire intelligence in smart cities.

10:20 Dynamic Feature Modules for Scalable and Modular Mobile Applications: A Case Study in Smart Agriculture
Yan Watequlis Syaifudin, Muhamad Syaroful Anam, Triana Fatmawati, Nobuo Funabiki, Noprianto Noprianto, Chandrasena Setiadi, Rokhimatul Wakhidah, Amar Alpabet Fuaduzakiah and M. Hasyim Ratsanjani

The increasing demand for efficient and scalable digital tools in agriculture has led to the development of modular mobile applications capable of dynamically adapting to user needs and environmental constraints. This paper presents a mobile smart farming application built using Dynamic Feature Modules (DFMs), enabling on-demand feature delivery to reduce initial download size, optimize resource usage, and enhance performance. The system integrates IoT-based environmental monitoring, AI-driven plant disease detection, and market analytics through a modular architecture that supports seamless feature expansion. Performance evaluation was conducted under controlled 4G network conditions (15 Mbps average throughput) on mid-range Android devices, measuring execution latency, memory footprint, and module delivery efficiency. The technical evaluation shows that the base module size was reduced to 15.1 MB from 48.7 MB in a monolithic build, with average load times of 1.2 seconds for the core module and 1.1 to 2.3 seconds for on-demand modules. Stability tests confirmed consistent CPU and memory usage, while backend API response times averaged 0.8 seconds. User engagement also improved due to contextual feature prompting and personalized access. These results demonstrate that DFM-based modularization enhances scalability, maintainability, and user experience for future smart agriculture platforms.

Tuesday, November 18 9:00 - 10:40 (Asia/Bahrain)

S2-B: Deep Learning-1

Chairs: Ayman Al-khazraji, Zainab Salman
9:00 Self-Supervised and Domain-Adaptive Deep Learning for Early Detection of Cyber-Attacks in Healthcare IoT Systems
Md Iftekhar Monzur Tanvir, Nusrat Yasmin Nadia, Habibor R Rabby, Md Habibul Arif, Sheikh Razia Sultana and Kamruddin Nur

The proliferation of Internet of Things (IoT) devices in healthcare environments has enhanced patient monitoring, diagnostics, and operational efficiency, but it has also expanded the attack surface of critical health infrastructure. Conventional intrusion detection systems often struggle to generalize across heterogeneous network domains and to detect attacks in their earliest stages. This paper proposes a self-supervised and domain-adaptive deep learning framework that integrates contrastive self-supervised pretraining, domain-adversarial feature alignment, and temporal attention to achieve early and accurate detection of cyber-attacks in healthcare IoT systems. The approach leverages a large-scale network intrusion dataset for source-domain training and an IoT healthcare traffic dataset for target-domain evaluation. Experimental results demonstrate that the proposed model consistently outperforms CNN, LSTM, CNN-BiLSTM, Transformer, and baseline DANN architectures across accuracy, F1-score, ROC-AUC, PR-AUC, early detection rate, and domain generalization gap. The framework not only improves cross-domain generalization but also reduces mean detection time, enabling proactive defense against both known and emerging threats in critical healthcare environments.

9:20 Advancing Bangla NLP: A Dual-Headed Transformer Model for News Topic and Sentiment Classification
Israk Hasan Jone, Badrul Alam, Kazi Tamim and Omar Faruq Shikdar

Language is the most effective means of human communication, yet machines still struggle to understand its context and sentiment. While NLP research has made great strides in Latin-based languages, Bangla remains under-resourced despite its wide use. Text classification, a core NLP task, is essential for organizing unstructured content like digital news. In practice, readers often infer both topic and sentiment from headlines-but automating this process in Bangla remains a challenge due to the lack of robust tools. To address this, we propose a dual-headed deep learning model for Bangla news headline classification and sentiment analysis. We introduce a novel dataset for this study comprising 5,549 headlines, pre-processed using normalization and tokenization. Leveraging the BanglaBERT transformer, our model categorizes headlines by theme (e.g., politics, religion, sports, other) and predicts sentiment (positive, negative, neutral). Compared with LSTM and XLM-RoBERTa, BanglaBERT consistently delivered superior performance, achieving an accuracy of 84.38% for category classification and 73.26% for sentiment analysis. A web application has also been developed for this purpose.

9:40 Preserving the Past with AI: Classifying Architectural Heritage Images Using MobileNetV2 and DenseNet
Khalid Alemerien, Mutaz Almahadin and Sadeq Al-Suhemat

Preservation and documentation of architecturally significant heritage are required to safeguard cultural identity. Architectural heritage image classification plays a critical role in the automated recognition of historical sites and in the digital preservation and documentation of cultures. Deep learning (DL)-based models with convolutional neural networks (CNNs) have been demonstrated to perform very accurately on image classification tasks. In this paper, we examined the effectiveness of MobileNetV2 and DenseNet models on the architectural heritage image classification problem. MobileNetV2 is a lightweight and efficient architecture that is ideal for mobile and embedded systems, while DenseNet offers greater feature propagation and reuse to improve image classification accuracy. We trained and tested both models using a dataset of diverse architectural heritage images and contrasted their performance with respect to accuracy, computational expense, and model size. Experimental findings disclose that both architectures are very accurate in image classification. DenseNet achieved a higher accuracy rate (96%) because it excels in feature extraction, while MobileNetV2 achieved an accuracy rate of 94.68% and offers a trade-off between accuracy and computational expense. These outcomes suggest that CNN architectures can contribute to digital heritage preservation by facilitating automatic detection and classification of architectural monuments.

10:00 Flexible Upsampling for Fractional Scaling in Single Image Super-Resolution
Fikri Yoma Rosyidan, Rarasmaya Indraswari and Urbano B Patayon

We present a Flexible Upsampling Block (FUB) for single-image super-resolution (SISR) that natively supports fractional scaling factors (e.g., 1.25×, 1.5×, 1.75×) within a SwinIR-based backbone. FUB dynamically applies PixelShuffle for integer scales and in-network bicubic interpolation for non-integer scales, while a dimension-consistent data protocol (mod-cropping and bicubic downsampling) ensures exact LR-HR alignment. Trained on DF2K with L1 loss and Adam-family optimization, at 1.25x, the model achieves up to 40.95 dB PSNR and 0.98 SSIM on Set5, 37.12 dB / 0.96 on Set14, 36.03 dB / 0.96 on B100, and 34.47 dB / 0.96 on Urban100. Qualitative inspection shows sharper edges and more faithful textures compared to the low-resolution inputs and closely approximates the ground truth. The fractional path introduces only modest computational overhead relative to the integer path, making FUB a practical choice when precise non-integer magnification is required.
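
A PyTorch sketch of the dispatch described above follows: a PixelShuffle path for integer scale factors and in-network bicubic interpolation for fractional ones. The SwinIR backbone, training protocol, and exact layer sizes are not shown; this is an interpretation of the abstract, not the authors' code.

```python
# Interpretation of the FUB dispatch: PixelShuffle for integer scales, bicubic
# interpolation inside the network for fractional scales.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlexibleUpsample(nn.Module):
    def __init__(self, channels: int, scale: float):
        super().__init__()
        self.scale = scale
        self.is_integer = float(scale).is_integer()
        if self.is_integer:
            s = int(scale)
            self.conv = nn.Conv2d(channels, channels * s * s, 3, padding=1)
            self.shuffle = nn.PixelShuffle(s)
        else:
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.is_integer:
            return self.shuffle(self.conv(x))
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        return self.conv(x)

feat = torch.randn(1, 64, 48, 48)
print(FlexibleUpsample(64, 2.0)(feat).shape)    # torch.Size([1, 64, 96, 96])
print(FlexibleUpsample(64, 1.25)(feat).shape)   # torch.Size([1, 64, 60, 60])
```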

10:20 CNN-Based System for Plant Disease Identification from Leaf Images
Rarasmaya Indraswari, Isna Aulia Fadilla Norsi, Rika Rokhana, Wiwiet Herulambang, Urbano B Patayon and Odo Nelle R. Balaga

The demand for fruits and vegetables in Indonesia has been steadily increasing in recent years. However, production growth is often hindered by plant diseases, which account for an estimated 20% annual yield loss. Therefore, early monitoring and identification of diseases during the planting process are necessary to prevent the impact. Traditionally, farmers rely on manual visual inspection on the plant leaf to identify diseases. However, this method is both labor-intensive and limited in its ability to cover large cultivation areas or provide actionable decision-making insights. This study presents the development of a web-based plant disease identification system using Convolutional Neural Networks (CNN) and transfer learning. The proposed model adopts EfficientNet-B0, pretrained on ImageNet, as the base architecture, followed by a customized head model and an output layer for classifying images into sixteen plant disease categories. Experimental findings indicate that the proposed model, trained and validated on a dataset of 30,038 images from apple, grape, corn, and tomato leaves, attained a maximum accuracy of 99%.

Tuesday, November 18 9:00 - 10:40 (Asia/Bahrain)

S2-C: Cyber security-2

Chairs: Orlando Catuiran, Sarah Al-Shareeda
9:00 Adopting Zero Trust Architecture to Secure Microservices in Healthcare Insurance: A Qualitative Study Using the TOE Framework
Chandra Prakash and Mary Lind

The healthcare insurance industry continues to struggle with outdated legacy systems, prompting a shift toward microservices to modernize its application landscape. However, the distributed nature of microservices and external exposure make the current perimeter-based security models insufficient, risking data breaches and regulatory non-compliance. Though existing studies have examined the challenges with microservices architecture in general or Zero Trust Architecture (ZTA) adoption, no prior studies have examined the microservices security challenges in the healthcare insurance industry. This qualitative study utilized the Technology-Organization-Environment (TOE) framework to build the theoretical foundation to develop the research questions, interview questions, and analysis of the collected data. Insights were collected from 12 practitioners through in-depth semi-structured interviews to understand real-world ZTA adoption experiences. Identified themes from data analysis revealed the importance of policy-based access control, identity and access management, encryption, and proactive monitoring. The findings highlight the ZTA's critical role in risk mitigation and regulatory alignment, offering actionable guidance for healthcare insurers while transitioning to secure, compliant, and modernized architecture like MSA. The study contributes to academic and industry discourse by presenting ZTA implementation implications for governance, architecture, and cybersecurity strategy.

9:20 Adaptive Rate Limiting Against DDoS HTTP Flood Attacks Using Machine Learning
Salem Omar Sati, Abobaker Elgasaier, Islam Alsigoutri and Mahmud Milud Mansour

Distributed Denial-of-Service (DDoS) attacks, particularly HTTP Flood Attacks, pose significant threats to web service availability by exploiting legitimate HTTP protocols to overwhelm servers. Traditional detection methods often fail to adapt to evolving attack patterns, necessitating intelligent and explainable machine learning (ML)-based solutions. This paper evaluates three ML models-Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbors (KNN)-for detecting HTTP Flood Attacks and assesses model performance through precision-recall curves and comprehensive metrics, including accuracy, recall, F1-score, and Average Precision (AP). Results demonstrate that all models achieve perfect AP, indicating optimal separability between attack and normal traffic. Further analysis reveals that RF and SVM exhibit superior robustness, with balanced precision and recall, while KNN shows marginally lower recall at high decision thresholds. Our findings underscore that RF and SVM are ideal for real-time deployment, combining high detection accuracy (AP = 1.00) with actionable insights for adaptive rate limiting. KNN, despite its precision, may lack recall consistency in adversarial scenarios.
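
The evaluation protocol named in the abstract (precision-recall curves and Average Precision for SVM, RF, and KNN) might be set up as in the scikit-learn sketch below; synthetic data stands in for the authors' HTTP traffic features.

```python
# Evaluation-protocol sketch: PR curves and AP for the three classifiers named
# in the abstract, on synthetic stand-in data (1 = flood, 0 = normal).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score, precision_recall_curve

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.7, 0.3],
                           random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=7)
models = {
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(n_estimators=200),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    precision, recall, _ = precision_recall_curve(y_test, scores)
    ap = average_precision_score(y_test, scores)
    print(f"{name}: AP = {ap:.3f} ({len(recall)} points on the PR curve)")
```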

9:40 Comprehensive Security Assessment and Forensic Investigation of a WordPress-Based Website
Mansoor Khan, Athirah Mohd Ramly, Saeed Sharif and Abdul Basit Ibrahim

WordPress, powering over 40% of websites globally, is a primary cyberattack target due to core, plugin, and theme vulnerabilities. This study addresses the underdeveloped areas of forensic investigation and post-breach mitigation in WordPress environments. It employs vulnerability assessment, penetration testing, and forensic analysis with tools like WPScan, Metasploit, Autopsy, and FTK Imager to evaluate risks, simulate attacks, and reconstruct breach scenarios. Findings will enhance understanding of attack methodologies, forensic readiness, and recovery strategies. A post-breach security framework will be developed, offering a structured approach to secure WordPress, respond to breaches, and improve forensic capabilities. This research aims to close the gap between prevention and post-breach investigation, improving WordPress website security and resilience.

10:00 An Intelligent Deep Learning Framework for Real-Time DDoS Attack Mitigation
Mahmud Milud Mansour, Najia Ben Saud and Salem Omar Sati

This research introduces an advanced deep learning framework for detecting Distributed Denial of Service (DDoS) attacks in web application firewalls through multi-modal feature analysis. Our methodology integrates intelligent feature selection with optimized deep learning architectures to address evolving cyber threats. By employing Mutual Information, Pearson Correlation, and Principal Component Analysis, we identify critical protocol-specific signatures across HTTP, SYN, UDP, and ICMP flood attacks. Evaluation on the CICIoT-2023 dataset demonstrates exceptional performance with 99.2% detection accuracy and 98.7% F1-score for binary classification, alongside 97.3% accuracy for attack-type identification. The framework reduces false positives by 32% compared to conventional methods while maintaining real-time processing capabilities under 2ms per packet latency, providing robust protection against sophisticated DDoS attacks.
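
The three feature-analysis steps named above (mutual information, Pearson correlation, and PCA) can be illustrated with scikit-learn as follows; synthetic data stands in for the CICIoT-2023 flow features, and the deep detection model is omitted.

```python
# Feature-analysis sketch only: MI and Pearson scores for relevance, then PCA
# compression; the downstream deep learning detector is not shown.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=5000, n_features=30, n_informative=10,
                           random_state=0)

mi = mutual_info_classif(X, y, random_state=0)             # relevance of each feature
pearson = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])

keep = np.argsort(mi)[-15:]                                # keep the 15 most informative
X_reduced = PCA(n_components=8).fit_transform(X[:, keep])  # then compress to 8 components
print("selected features:", sorted(keep.tolist()))
print("max |Pearson r|  :", pearson.max().round(3))
print("reduced shape    :", X_reduced.shape)
```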

12:20 Detection and prediction of insider threats: A Survey
Sami Qada, Alauddin Yousif Al-Omary, Jafla Al-Ammari, Saud Almubarak and Husain Meleeh

The main objective of this paper is to thoroughly explore the detection and prediction of various insider threats that pose substantial risks to cybersecurity. With the rapid digitalization of modern workplaces, it is essential to safeguard computer systems, networks, and digital devices from the unethical conduct and detrimental actions of workers inside the firm. The current research on insider threats covers a wide range of subjects, including preventive and security strategies, psychological factors that contribute to the emergence of insider threats, suitable data sources for effective utilization, and the essential role of early detection in mitigating insider threats. This study provides a comprehensive comparison of various results from researchers who have worked significantly in this subject, including their methodology and tactics for identifying and managing these risks. This comprehensive comparison seeks to augment the discussion on insider threats and improve detection and prevention strategies, serving as a significant reference for future research.

Tuesday, November 18 9:00 - 10:40 (Asia/Bahrain)

S2-D: Telecommunication and Networking-1

Chairs: Luisella Balbis, Noora Saad Alromaihi
9:00 AI-SEFAS: An AI-Driven Framework for Spectral Efficiency Optimization in Fluid Antenna Systems for 6G Networks
El Miloud Ar-Reyouchi, Asmaa Kharouaa, Ayoub Hadj Sadek and Kamal Ghoumid

This paper introduces AI-SEFAS (Artificial Intelligence-powered Spectral Efficiency Fluid Antenna System), an artificial intelligence-driven framework designed to optimize spectral efficiency (SE) in sixth-generation (6G) wireless networks utilizing Fluid Antenna Systems (FAS). By integrating Convolutional Neural Networks (CNNs) for fast beamforming weight prediction and Deep Reinforcement Learning (DRL) for adaptive antenna repositioning, the proposed approach jointly addresses the challenges of dynamic user mobility, channel variations, and multi-user coordination. Unlike conventional methods that treat beamforming and antenna placement as separate problems, AI-SEFAS performs end-to-end optimization to maximize spectral utilization (bps/Hz) while effectively mitigating inter-cell interference in ultra-dense deployments. Extensive simulations under realistic 6G scenarios demonstrate that AI-SEFAS achieves substantial gains in spectral efficiency, adaptability, and scalability compared to state-of-the-art MIMO and RIS-assisted systems. These results highlight the potential of AI-SEFAS as a key enabler for intelligent and flexible next-generation wireless networks.

9:20 MuLTEcast-Bandit: Lightweight Adaptive Multicast/Unicast Selection for Massive Event Streaming in 5G/6G Edge Networks
Omar Dario Delgado Brito

Unicast adaptive bitrate (ABR) streaming maximizes individual Quality of Experience (QoE) but wastes radio resources when thousands request the same content simultaneously. Conversely, static multicast saves spectrum yet degrades QoE because it ignores time-varying context. We present MuLTEcast-Bandit, a lightweight contextual bandit (LinUCB) that runs at the mobile edge and decides, per group and decision epoch, whether to transmit via multicast or unicast and at which bitrate, using features such as group size, channel quality, cell load, and recent stall events. Across massive-event scenarios, MuLTEcast-Bandit consistently approaches the QoE of pure unicast while substantially reducing spectrum consumption, and it clearly outperforms static multicast and threshold-based heuristics without manual tuning. These findings highlight a practical trade-off: near-unicast QoE at a fraction of the resource cost, enabled by a deployable, computation-light policy.
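
A minimal LinUCB sketch for the decision described above is given below: per epoch, choose multicast or unicast from a context vector (group size, channel quality, cell load, recent stalls). The reward used here is a made-up QoE-minus-spectrum-cost proxy for illustration only, not the paper's simulator.

```python
# Minimal disjoint LinUCB over two actions (unicast vs. multicast); the reward
# model is an illustrative placeholder, not the paper's evaluation.
import numpy as np

class LinUCB:
    def __init__(self, n_actions: int, dim: int, alpha: float = 1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_actions)]      # per-action design matrix
        self.b = [np.zeros(dim) for _ in range(n_actions)]

    def choose(self, x: np.ndarray) -> int:
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, action: int, x: np.ndarray, reward: float) -> None:
        self.A[action] += np.outer(x, x)
        self.b[action] += reward * x

rng = np.random.default_rng(0)
bandit = LinUCB(n_actions=2, dim=4)            # 0 = unicast, 1 = multicast
for _ in range(2000):
    x = rng.uniform(0, 1, 4)                   # group size, CQI, load, stall rate
    a = bandit.choose(x)
    qoe = 0.9 if a == 0 else 0.7 + 0.3 * x[1]  # toy model: multicast QoE tracks CQI
    cost = x[0] if a == 0 else 0.2             # unicast cost grows with group size
    bandit.update(a, x, qoe - cost)
```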

9:40 Performance Analysis of RIS-aided Synchronized Systems
Hadi Hariri and Nazih Salhab

The uncontrollable nature of radio wave propagation poses a significant challenge to wireless network performance. As a promising solution, Reconfigurable Intelligent Surfaces (RIS) can intelligently reconfigure the propagation environment by shaping and directing radio waves. In this paper, we investigate an RIS-assisted uplink communication system where a user communicates with a base station (BS) via an RIS. We propose a mathematical model for the Signal-to-Noise Ratio (SNR), outage probability, and data throughput under Nakagami-m fading, with a specific focus on the critical impact of phase synchronization. We then formulate our optimization problem with the aim of maximizing the achievable rate. Our results demonstrate that synchronized RIS phase shifts lead to a notable improvement in received SNR and a significant reduction in outage probability. Specifically, a remarkably low outage probability is achievable at a transmit power of 15 dBm with phase synchronization, highlighting the substantial gains over both non-synchronized RIS and conventional communication without an RIS. Our analysis of these three scenarios (synchronized RIS, non-synchronized RIS, and no RIS) confirms the pivotal role of RIS phase synchronization in optimizing performance.
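
The effect of phase synchronization can be illustrated numerically: with N reflecting elements, setting each phase shift to cancel the cascaded channel phase makes the element contributions add coherently. The numpy sketch below draws Nakagami-m channel magnitudes via a Gamma variate and compares the resulting average SNR with and without synchronization; the paper's exact link budget and optimization are not reproduced.

```python
# Synchronized vs. non-synchronized RIS phases under Nakagami-m magnitudes;
# a qualitative illustration, not the paper's link budget.
import numpy as np

def nakagami(rng, m, omega, size):
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

def mean_snr_db(n_elements=64, m=2.0, snr0_db=0.0, synced=True, trials=2000, seed=1):
    rng = np.random.default_rng(seed)
    snrs = []
    for _ in range(trials):
        h = nakagami(rng, m, 1.0, n_elements) * np.exp(1j * rng.uniform(0, 2*np.pi, n_elements))
        g = nakagami(rng, m, 1.0, n_elements) * np.exp(1j * rng.uniform(0, 2*np.pi, n_elements))
        if synced:
            theta = -np.angle(h * g)              # cancel the cascaded channel phase
        else:
            theta = rng.uniform(0, 2*np.pi, n_elements)
        gain = np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2
        snrs.append(gain * 10 ** (snr0_db / 10))
    return 10 * np.log10(np.mean(snrs))

print("synchronized RIS    :", round(mean_snr_db(synced=True), 1), "dB")
print("non-synchronized RIS:", round(mean_snr_db(synced=False), 1), "dB")
```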

10:00 Fake Base Station Detection in Interference-Rich ISAC Networks Using Machine Learning Approaches
Sümeye Nur Karahan, Muhammad Bilal Janjua and Ibrahim Yazici

This study proposes a comprehensive framework for the detection of fake base stations (FBS) within integrated sensing and communication (ISAC) networks, characterized by multiple sources of interference. We develop a signal model that incorporates a mix of self, mutual, and clutter interference to facilitate practical realization of ISAC systems. Utilizing a one-class support vector machine, we achieve efficient detection of FBSs with robust recall through the optimization of radial basis function kernels. Additionally, we introduce an artificial intelligence-driven multi-criteria detection algorithm, incorporating a hierarchical machine learning framework to ensure zero false negatives. Furthermore, we present an aggressive anomaly detection strategy that prioritizes threat detection over classification precision by employing Chebyshev inequality-based adaptive thresholding. Comprehensive simulations demonstrate that the system achieves an overall accuracy of 64.15% while maintaining an average confidence level of 93.84%, indicating a high degree of certainty in decision-making processes.

10:20 Design and Evaluation of Polar Coders for FPGA using High Level Synthesis
Varsharani S and Manikandan J

Channel coding plays a crucial role in 6G, ensuring reliable and efficient data transmission by adding data redundancy to combat noise and interference. Polar coding is considered one of the most advanced techniques and a significant candidate for 6G technology due to its ability to approach Shannon's capacity limit with low complexity. Polar codes are being used for various applications, including 6G channel coding. In this paper, the study, evaluation, and implementation of polar coders for FPGAs using High Level Synthesis (HLS) are presented for 1024-, 2048-, and 4096-bit coders, and the results are reported. The proposed design consumes a maximum of 17% of the available resources with a power consumption of less than 150 mW, which is achieved through the HLS-based design. The performance of the decoder in correcting errors was also analyzed and the results are reported. The main advantage of the proposed work is that the design can be easily scaled to larger block sizes, beyond 4096 bits, with a negligible increase in resource utilization and power consumption.
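
For reference, the underlying polar encoding operation can be written compactly in software; the Python sketch below shows only the butterfly structure of the non-systematic encoder and leaves frozen-bit selection and the HLS/FPGA mapping aside. The example block length and information-bit positions are assumptions.

import numpy as np

def polar_encode(u):
    """Non-systematic polar encoding: x = u * (F kron ... kron F) over GF(2), F = [[1,0],[1,1]]."""
    x = np.array(u, dtype=np.uint8).copy()
    n = len(x)
    assert n and (n & (n - 1)) == 0, "block length must be a power of two"
    step = 1
    while step < n:
        for start in range(0, n, 2 * step):
            # upper branch = upper XOR lower, lower branch unchanged
            x[start:start + step] ^= x[start + step:start + 2 * step]
        step *= 2
    return x

# Example: N = 8 block with assumed frozen positions set to 0 and 4 information bits.
u = np.zeros(8, dtype=np.uint8)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]
print(polar_encode(u))

The same butterfly network is what an HLS tool unrolls or pipelines into FPGA logic, which is why resource usage grows only modestly with block length.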

Tuesday, November 18 9:00 - 10:40 (Asia/Bahrain)

S2-E: Financial Technology & Artificial Intelligence

Chairs: Maan Aljawder, Sahar Ismail
9:00 From Wall Street to Heartbeat: AI Forecasting for Financial Volatility and Investor Well-being
Yara Ibrahim, Saeed Sharif and Nancy Diaa

This study proposes an explainable deep learning framework for stock price forecasting that addresses both predictive performance and the health implications of financial volatility, particularly its link to cardiovascular risk. The model integrates Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and one-dimensional Convolutional Neural Network (1D-CNN) architectures, trained on multivariate OHLCV data from the S&P 500 (2019-2023) using a 30-day lookback and one-step-ahead prediction. To improve interpretability and behavioral usability, the framework incorporates SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). These tools enhance transparency, reduce ambiguity aversion, and promote trust, which is critical for users facing emotionally charged financial decisions that may exacerbate cardiovascular stress. Empirical results show that LSTM achieves the best performance (RMSE = 0.0215, R² = 0.904, MAPE = 2.06%, directional accuracy = 52.2%), with statistically significant outperformance over GRU and CNN validated via Diebold-Mariano tests. SHAP identifies 'Close' price and 'Volume' as dominant features, while LIME confirms consistent local interpretability across recurrent models. By combining accurate forecasting with interpretable AI, the proposed framework not only enhances decision-making in algorithmic trading and portfolio management, but also contributes to reducing stress-induced health risks associated with market uncertainty.
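
The Diebold-Mariano comparison mentioned above can be sketched in a few lines; this version uses a squared-error loss and a normal approximation without small-sample correction, and the forecast arrays are placeholders rather than the paper's data.

import numpy as np
from scipy import stats

def diebold_mariano(y_true, f1, f2, h=1):
    """DM test of equal predictive accuracy, squared-error loss, forecast horizon h."""
    d = (np.asarray(y_true) - np.asarray(f1)) ** 2 - (np.asarray(y_true) - np.asarray(f2)) ** 2
    T = len(d)
    # long-run variance of the loss differential with h-1 autocovariance lags
    gamma = [np.cov(d[:T - k], d[k:])[0, 1] if k > 0 else d.var(ddof=0) for k in range(h)]
    var_d = (gamma[0] + 2 * sum(gamma[1:])) / T
    dm = d.mean() / np.sqrt(var_d)
    p_value = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, p_value

# Placeholder arrays: actual values, LSTM forecasts, GRU forecasts.
y = np.random.default_rng(0).normal(size=200)
f_lstm = y + np.random.default_rng(1).normal(scale=0.10, size=200)
f_gru = y + np.random.default_rng(2).normal(scale=0.15, size=200)
print(diebold_mariano(y, f_lstm, f_gru, h=1))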

9:20 A Bilingual Augmented Generative AI Chatbot Architecture for Inclusive Retail Banking in Egypt
Marwan El-Baily, Yousef Ayman, Fares El-Sayed Mahmoud, Fathy Mahfouz, Ahmed Yasser and Yara Ibrahim

This paper presents a bilingual Modern Standard Arabic-English generative AI chatbot architecture designed to advance ethical financial inclusion in Egyptian retail banking. The system delivers real-time, context-aware guidance on core deposit, credit, card, savings, and basic investment products while scaffolding foundational financial literacy for underserved and first-time users. Ethical design principles drive every architectural layer: privacy by design (local redaction and minimal data retention), fairness and bias mitigation (balanced training snippets, dialect normalization, adversarial intent audits), transparency (explainable retrieval-augmented generation with rationale snippets), accountability (immutable interaction logging and audit flags), accessibility (low-bandwidth optimization, screen-reader-friendly dialogue formatting, adjustable explanation depth), and user protection (suitability filters, risk and cost disclosures, rate source provenance, anti-hallucination guards). The modular stack integrates intent and entity parsing, a curated banking knowledge base, retrieval-augmented generation, compliance and suitability filtering, and adaptive dialogue management that calibrates tone and complexity to inferred literacy and device constraints. A hybrid dense-sparse index plus deterministic fallback templates reduces latency and constrains unsupported content under rural connectivity limitations. Evaluation covers intent accuracy, entity F1, hallucination incidence, response relevance, bilingual clarity, and pre/post comprehension uplift, alongside ethical performance indicators (bias dispersion metrics, disclosure completeness rate, privacy leak tests, accessibility success rate). Results indicate that a retrieval-grounded, compliance-aware, ethically aligned generative framework can operationalize trustworthy, inclusive, and auditably governed banking advisory services in emerging market contexts while maintaining robustness and scalability.

9:40 Digital Transformation in Auditing: Key Drivers of Information Technology Adoption by Government External Auditors
Anita Carolina, Nur Aini Rakhmawati and Bambang Setiawan

This research investigates the impacts of information technology competency, accounting information system complexity, organizational support, and external pressure on the adoption of information technology in auditing, drawing on the concepts of task-technology fit theory. A survey of government external auditors in Indonesia was conducted. Partial least squares structural equation modeling was utilized to examine the relationships between variables. The findings revealed that information technology competency, accounting information system complexity, organizational support, and external pressure influence technology adoption. This study extends task-technology fit theory by highlighting internal and external factors in technology adoption in a high-risk, regulated profession. Practical recommendations are provided for organizations to enhance their information technology adoption mechanisms, consequently enhancing the effectiveness and efficiency of the audit process, which eventually results in higher-quality audit reports.

10:00 AI-Driven Forecasting of Food Waste in Institutional Settings for Sustainable Resource Planning
Huda Zain El Abdin, Ms, Ranyah Ghaleb Taha and Tala Fuad Musleh

Food waste remains a critical economic and environmental issue, particularly in large-scale food service environments where fluctuating daily demand complicates planning. These inconsistencies often result in overproduction, leading to unnecessary waste, increased operational costs, and environmental harm such as resource depletion and greenhouse gas emissions. This study introduces a data-driven approach that leverages supervised machine learning models, including Random Forest (RF), Extreme Gradient Boosting (XGBoost), K-Nearest Neighbors (KNN), Gradient Boosting Regressor (GBR), Support Vector Regression (SVR), and Linear Regression (LR), to accurately predict daily food waste. These models outperform traditional statistical methods and offer insights into the influence of key operational factors, aiding informed decisions around staffing and procurement. Among all models, RF showed superior performance with a root mean square error (RMSE) of 6.95 kg and an R-squared coefficient of determination (R²) score of 0.91. Partial dependence analysis revealed that variables like meals served and prior waste levels are strong predictors of future waste. The novelty of this research lies in its integration of predictive modeling with practical food service strategies, forming a decision-support system that enhances operational efficiency. By connecting machine learning with real-world management, the study provides a scalable and interpretable framework that empowers institutional food providers to minimize waste, optimize resource use, and advance sustainability through smart, data-informed planning.
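
A minimal sketch of the Random Forest regression and the RMSE/R² evaluation reported above is shown below; the column names and synthetic data are assumptions standing in for the study's institutional records.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical daily records; the real features and target belong to the authors' dataset.
df = pd.DataFrame({
    "meals_served":   np.random.default_rng(0).integers(200, 800, size=365),
    "prior_waste_kg": np.random.default_rng(1).uniform(10, 80, size=365),
    "day_of_week":    np.tile(np.arange(7), 53)[:365],
})
df["waste_kg"] = (0.08 * df["meals_served"] + 0.3 * df["prior_waste_kg"]
                  + np.random.default_rng(2).normal(0, 5, 365))

X, y = df.drop(columns="waste_kg"), df["waste_kg"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.2f} kg, R2 = {r2_score(y_te, pred):.3f}")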

10:20 NLP Interfaces for Climate Risk Models: Bridging Policy Questions and Quantitative Analysis in Banking
Rohit Nimmala and Jagrut Nimmala

When California announced its EV mandate, banks needed rapid analysis of cascading market impacts. Traditional climate scenarios, updated annually, couldn't help. This gap between static projections and dynamic policy reality motivated our work on natural language interfaces for economic climate models. Building on recent advances including BIS Project Gaia, we enable banks to ask questions like "What if California bans gas cars by 2030?" and receive quantified impact analysis with uncertainty bounds. The system integrates LLM capabilities with validated economic models to deliver insights about cascading effects through feedback mechanisms: reinforcing loops, balancing loops, and tipping points. Our models achieve an R-squared of 0.85 +/- 0.03 in historical validation against five major climate policies (EU ETS, Germany Feed-in Tariff, China NEV mandate, California Cap-and-Trade, UK Carbon Price Floor) with 89% directional accuracy. We implement eigenvalue analysis for mathematical feedback identification, model calibration using BEA Input-Output data, Monte Carlo convergence analysis demonstrating that 500 draws are sufficient, and framework design compatible with TCFD/ECB reporting.
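
The eigenvalue-based feedback identification mentioned above can be illustrated on a small linearized system: eigenvalues with positive real parts correspond to reinforcing loops, negative real parts to balancing loops, and nonzero imaginary parts to oscillatory dynamics. The Jacobian below is made up for illustration and is not the calibrated model.

import numpy as np

# Made-up Jacobian with one reinforcing mode and one oscillatory balancing pair.
A = np.array([[0.15,  0.0,  0.0],
              [0.0,  -0.2, -0.5],
              [0.0,   0.5, -0.2]])

for lam in np.linalg.eigvals(A):
    kind = "reinforcing (unstable)" if lam.real > 0 else "balancing (stable)"
    cyclic = " with oscillation" if abs(lam.imag) > 1e-9 else ""
    print(f"eigenvalue {lam.real:+.3f}{lam.imag:+.3f}j: {kind}{cyclic}")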

Tuesday, November 18 9:00 - 10:40 (Asia/Bahrain)

S3-F: Educational Technology & Learning Analytics

Chairs: Amal Alrayes, Ali Tarhini
9:00 Engaging Primary School Students in Iban Language Through Game-Based Learning
Azlin Dahlan, Zamlina Abdullah, Nurazian Binti Mior Dahalan, Muhammad Hamiz Mohd Radzi and Iman Zuladha Romaizi

The Iban language is an indigenous language of Sarawak, Malaysia, taught in schools at various levels. However, traditional teaching methods, especially in primary schools, can be less engaging for young students. Recognizing this challenge, there is a need for innovative educational tools that make learning the Iban language enjoyable and accessible while aligning with school curriculums. This paper introduces IILL (Indigenous Interactive Language Learning), a game-based platform designed specifically for seven-year-old primary students and based on the Bahasa Iban Tahun 1 curriculum. The tool was developed using the ADDIE model, ensuring a structured approach to instructional design. To evaluate its effectiveness and usability, the researchers conducted tests using the System Usability Scale (SUS) questionnaire. The results revealed an average score of 47.5, indicating that while many users found the game useful, there were areas for improvement, such as navigation, interface design, and overall user experience. Despite these challenges, IILL shows promise as a valuable educational resource. Future enhancements will focus on expanding its content and making it more accessible across multiple platforms, ensuring that learning the Iban language remains fun and interactive for young students.

9:20 Bridging the Learning Gap: The DIGIT Framework for Data-Driven E-Learning Transformation
Shalini Rastogi, Deepika Pandita and Madhura Bedarkar

In the modern educational landscape, e-learning systems must adapt to harness the power of data-driven decision-making for sustainable learning outcomes. This research aims to fill the gap in digital education strategy by proposing the DIGIT Framework, which combines traditional instructional principles with data analytics to guide educators and administrators through digital transformation. The study uses a qualitative approach, integrating interviews and focus group discussions with academic professionals across diverse institutions. The findings reveal that data-driven e-learning enhances institutional efficiency, instructional decision-making, and pedagogical innovation, but faces challenges such as data silos, resistance to digital adoption, and ethical concerns. The study introduces the DIGIT Framework, which includes Data Integration, Innovation Enablement, Goal Alignment, Information Sharing, and Technology Leverage as essential pillars for transforming educational delivery in the digital age. The originality of this research lies in its holistic approach, bridging the gap between traditional teaching models and contemporary data-driven e-learning strategies.

9:40 Enhancing Self-Learning Experiences: Multimedia Integration Topic Implementation in Flutter-Based Learning Assistance System
Triana Fatmawati, Yan Watequlis Syaifudin, Akhmadheta Hafid Prasetyawan, Endah Septa Sintiya, Nobuo Funabiki, Andi Baso Kaswar, Pramana Yoga Saputra, Yuri Ariyanto and Rokhimatul Wakhidah

The rapid rise in smartphone usage, particularly Android devices, has created a robust market for mobile applications, driving the demand for skilled mobile developers proficient in various programming languages and frameworks. Mobile programming has become increasingly complex due to the need to navigate multiple operating systems and user experiences. The Flutter framework has gained popularity for its capability to develop high-performing applications for both iOS and Android from a single codebase, attracting educational institutions to include it in their programs. This study focuses on the implementation of the Multimedia Integration topic within the Flutter Programming Learning Assistance System (FPLAS), designed to facilitate self-learning in mobile application development by providing structured materials and automated testing resources. An evaluation with 40 students demonstrated the effectiveness of this approach: all participants completed all tasks associated with the multimedia application MediaPlayer within the allotted time, gave positive feedback on the clarity of the learning guidance, and suggested further improvements.

10:00 Challenges and Strategies for Integrating Quantum Computing into Learning Analytics
Alessandro Pagano, Veronica Rossano and Francesca Pia Travisani

Quantum computing (QC) is rapidly emerging as a transformative technology capable of significantly advancing learning analytics (LA). This paper investigates how QC can effectively address the challenges associated with the processing and analysis of complex educational datasets. It explores applications of quantum methods in personalized recommendation systems, student performance prediction, educational content optimization, and adaptive learning platforms. In addition, the study examines critical integration issues, including compatibility between classical and quantum frameworks, algorithmic complexity, and performance evaluation. By highlighting both the advantages and limitations of quantum approaches, this research offers insight into how QC can complement and enhance current LA practices, ultimately contributing to the development of more precise, efficient, and personalized educational analytics tools. These insights may serve as a foundation for future research and practical deployments of quantum-enhanced educational analytics.

10:20 Conceptualizing Authenticity in Generative AI-Driven E-Learning: A Critical Examination of Generative Artificial Intelligence in Educational Systems
Najiburrahman Najiburrahman, Hasan Baharun, Siti Aimah, Riza Faisol, Febri Qushwa and Ayatullah Maulidy

This study aims to explore the ambiguity of authenticity in online learning systems based on generative AI and examine the implications of using this technology for academic integrity and students' cognitive quality. The method used is a qualitative study with an exploratory case study design at two higher education institutions that have integrated generative AI into e-learning practices. Data were obtained through in-depth interviews, observations of digital interactions, and analysis of academic documentation. The results of the study show three main findings: the ambiguity of the meaning of academic authenticity due to the use of AI, the limitations of the e-learning system in detecting and managing AI output, and the tendency of students to depend on AI, which reduces their critical thinking skills. The implications of this study emphasize the urgency of redefining the concept of authenticity, reformulating a process-based evaluation system, and developing ethical policies for the use of AI in higher education.

Tuesday, November 18 10:40 - 11:00 (Asia/Bahrain)

BD-2: Break Day-2

Tuesday, November 18 11:00 - 12:40 (Asia/Bahrain)

S3-A: Artificial Intelligence

Chairs: Orlando Catuiran, Yasser Ismail
11:00 Reinforcement Learning for Robust Explosive Demolition: Overcoming Collapse Failures through Adaptive Decision Making
Marios Tsaousidis, Theofanis Kalampokas, Anastasia Moutafidou, Eleni Vrochidou and George A Papakostas

To ensure cost-effective, safe, and efficient demolition of a building, careful planning of the process, as well as simulations to ensure safe collapse, are necessary. This work deals with the planning and simulation of building demolition under controlled explosion using Reinforcement Learning (RL) in a Unity simulation environment. RL offers a powerful framework for adaptive decision-making in dynamic and uncertain environments. In the realm of controlled building demolition using explosives, where precision, safety, and reliability are paramount, traditional methods fall short. The aim of this work is to define the optimal placement locations and amount of explosive material for the building to follow the desired demolition plan. The implementation includes the design of a building in Unity with realistic properties and dimensions, so that the strength and behavior of the materials during the explosion can be efficiently simulated. RL is used to determine the optimal location and quantity of explosives, while the proposed approach allows for the visualization of the simulation results, i.e., the progressive collapse of the building throughout its total demolition.

11:20 Hybrid Model ARIMA-Random Forest for Energy Forecasting of Smart Energy Monitoring System
Norith Chealy, Channareth Srun, Mengseu Pheng, Virbora Ny, Sros Nhek and Chivon Choeung

This contribution presents a hybrid model that combines ARIMA (AutoRegressive Integrated Moving Average) with a Random Forest for energy consumption forecasting within a proposed smart energy monitoring system. ARIMA is used to capture the periodic structure of the time series, while the Random Forest refines these periodic patterns in the energy consumption trends. Embedding both models within the smart energy monitoring system improves forecasting accuracy and characterizes consumption behavior effectively, thereby supporting energy optimization. We validate the methodology on actual household data and find that its performance is significantly better than standalone ARIMA or Random Forest. As the dataset size increases, the hybrid model reaches an accuracy of 0.98, with the MSE falling from 0.34 to 0.03. ARIMA yields fairly steady accuracy (0.56-0.61) with only a slight increase in MSE (0.46-0.48). For the Random Forest, MSE drops from 0.44 to 0.16 and accuracy improves from 0.68 to 0.90 as the dataset size increases. These results indicate that the proposed approach is an applicable and reliable alternative for improving the efficiency of energy management, especially in domestic settings.
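
One common way to realize such a hybrid is to let ARIMA fit the periodic component and train a Random Forest on lagged residuals; the sketch below follows that pattern with an assumed model order, lag depth, and synthetic consumption series, and is not the paper's exact configuration.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor

# Hypothetical hourly consumption series with a daily cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 60)
y = pd.Series(2.0 + np.sin(2 * np.pi * t / 24) + 0.3 * rng.normal(size=t.size))

train, test = y.iloc[:-24], y.iloc[-24:]

# Step 1: ARIMA models the linear/periodic structure.
arima = ARIMA(train, order=(2, 0, 1)).fit()
arima_fc = arima.forecast(steps=len(test))

# Step 2: a Random Forest learns the residual pattern from lagged residuals.
resid = train - arima.fittedvalues
lags = 24
X = np.column_stack([resid.shift(k).to_numpy() for k in range(1, lags + 1)])[lags:]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, resid.iloc[lags:])

# Residual correction from the most recent lags, applied to the ARIMA forecast
# (kept constant over the horizon here for brevity).
last_resid = resid.iloc[-lags:].to_numpy()[::-1].reshape(1, -1)
hybrid_fc = arima_fc + rf.predict(last_resid)[0]

mse = np.mean((test.to_numpy() - hybrid_fc.to_numpy()) ** 2)
print(f"hybrid MSE on the held-out day: {mse:.3f}")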

11:40 SmartTicket: Chatbot-Based Ticketing System for Museums
Meheraab Chothia, Arshaan Iqbal, Shanmugam S and Jothikumar C

Conversational AI is becoming a widely used tool in end-user applications. Through advances in its technology, it is capable of interpreting intent and extracting structured information from a user, which makes it ideal for systems like ticket booking, where the accuracy and clarity of the information are essential. This paper presents the development of an LLM-powered chatbot designed to handle user interactions concerning booking, updating, or deleting a museum ticket, along with general queries about the museums. Since the performance of the application depends on the accuracy of the LLM being used, a comparison of different models is conducted in this paper. We primarily compare three instruction-tuned variants of Meta's Llama family of models: Llama-3.1-70B-Instruct, Llama-3.2-90B-Vision-Instruct, and Llama-3.3-70B-Instruct. Additionally, we compare the results of two smaller models from the same family: Llama-3.1-8B-Instruct and Llama-3.2-3B-Instruct. This is done to understand how well the smaller models perform relative to the larger models on the same task. Results indicate that, as expected, the larger models (70B and 90B) outperform the smaller models and extract entities with higher accuracy. However, when it comes to classifying the intent of the user, returning responses in a valid JSON format, and identifying missing elements, the performance of the smaller models closely matched that of the larger models. Among the larger models, Llama 3.3 narrowly outperforms Llama 3.2, despite Llama 3.2 having 90 billion parameters and Llama 3.3 having 70 billion.

12:00 AwalTiraWhisper: A Fine-tuned Version of OpenAI's Whisper Model to Translate Spoken Tamazight into Written Arabic and English
Ayman Ait Achour, Youssef Souati and Asmaa Mourhir

Translating Tamazight speech into written text is important to support communication, preserve cultural identity, and improve access to digital services for Tamazight-speaking communities. The goal of this research is to build a system that automatically translates spoken Tamazight into Arabic and English using deep learning techniques. We collected and prepared a dataset of 46.6 hours of Tamazight audio from multiple dialects and used transfer learning to fine-tune OpenAI's Whisper model. The results show that the best-performing model achieved a BLEU score of 9.68% for Tamazight-to-English translation. Performance by dialect showed BLEU scores of 10.46% for Tashelhit, 9.53% for Tamazight, and 9.22% for Tarifit. These results demonstrate that Whisper can be adapted to support zero-resource languages like Tamazight and help bridge the digital language gap.
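
A hedged sketch of the inference and scoring side of such a pipeline is shown below, using the public openai/whisper-small checkpoint as a stand-in for the fine-tuned AwalTiraWhisper weights and sacrebleu for the BLEU computation; the audio path and reference translation are placeholders, and Tamazight is not among Whisper's built-in language tokens, which is precisely what fine-tuning addresses.

import librosa
import sacrebleu
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Public checkpoint used as a placeholder; the authors' fine-tuned weights are not public here.
model_name = "openai/whisper-small"
processor = WhisperProcessor.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name)

# Load one Tamazight utterance (placeholder path) at Whisper's expected 16 kHz sampling rate.
audio, _ = librosa.load("tamazight_utterance.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# Ask the model to perform speech translation into English.
with torch.no_grad():
    generated = model.generate(inputs.input_features, task="translate")
hypothesis = processor.batch_decode(generated, skip_special_tokens=True)[0]

# Corpus-level BLEU against reference translations, as reported in the abstract.
references = [["The weather is nice today."]]      # placeholder reference
print(sacrebleu.corpus_bleu([hypothesis], references).score)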

12:20 Aspect-Based Sentiment Analysis for Arabic Business Insights with Generative Language Models
Asma Mohammed Shayea and Mohab A Mangoud M

With the growth of online reviews across social media and e-commerce platforms, businesses face increasing difficulty in monitoring and leveraging customer feedback. Traditional sentiment analysis lacks the depth required for actionable insights. Research on Aspect-Based Sentiment Analysis (ABSA), with generative large language models (LLMs), and efficient fine-tuning for the Arabic language remains limited. This paper presents a scalable Arabic ABSA framework and introduces a practical approach to fine-tune LLMs such as T5, GPT, DeepSeek, and Allam using Low-Rank Adaptation (LoRA) and prompt-based optimization to achieve high accuracy on limited Arabic datasets with reduced computational cost. Among the evaluated models, GPT-4o-mini achieved the best performance in aspect extraction (F1-score 0.80), while Allam excelled in sentiment classification (F1-score 0.83). Finally, the paper demonstrates how these results enable businesses to extract actionable insights through a real-world analytics platform built upon the proposed framework.
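
The parameter-efficient fine-tuning step can be sketched with the Hugging Face peft library as below; the base checkpoint, LoRA rank, target modules, and the prompt format for generating aspect-sentiment pairs are assumptions rather than the paper's exact setup.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "google/mt5-small"                      # assumed multilingual T5-style base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q", "v"],                 # attention projections in T5-style blocks
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()             # only a small fraction of weights are trainable

# Prompt-style ABSA formulation: the model is trained to emit "aspect | sentiment" pairs (illustrative).
prompt = "استخرج الجوانب والمشاعر: الخدمة ممتازة لكن الأسعار مرتفعة"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))

Because only the low-rank adapter matrices are updated, fine-tuning of this kind can fit limited Arabic datasets and modest hardware, which is the cost argument the abstract makes.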

12:40 Barriers to Sustained Engagement with Diabetes Self-Management Apps: A Focus Group Study
Md Rownak Alam, Suthashini Subramaniam, Chen Kang Lee, Jaspaljeet Singh Ranjit Singh and Ramesh Kumar Ayyasamy

Diabetes self-management (DSM) applications are widely available, but long-term user engagement is low because of usability, motivational, and structural barriers. This research investigates these challenges through five focus group discussions (FGDs) involving 25 respondents of various ages, nationalities, and diabetes statuses. Thematic analysis revealed a strong user preference for manual tracking or general wellness gadgets over diabetes-specific apps, citing barriers such as complex interfaces, data inaccuracy, and intrusive paywalls. Key desired features included seamless integration with glucose monitors, clear data visualization, and clinician-sharing capabilities. The results indicate a preference for easy-to-use, trustworthy, and customizable tools that fit existing health routines and medical care, and high trust in applications recommended by medical professionals. Findings underscored that sustained engagement is driven by visible health improvements and app reliability, while adoption is heavily influenced by clinician recommendations. Other structural factors, including paywalls, the cost of devices, and poor digital literacy (particularly among older adults), also act as barriers to adoption. Recommendations include user-centered design with flexible interfaces, low-cost access models, easy correction of data inaccuracies, and culturally sensitive features. The findings underscore the importance of collaborative approaches between developers, clinicians, and communities to develop DSM solutions that deliver quantifiable health impact while reflecting user preferences, supporting sustainable self-management of diabetes.

13:00 AI Agents in Mental Health Treatment: a Systematic Review
Juswaldi Nur, Kimberly Tan and Richard Wiputra

The rapid integration of artificial intelligence (AI) tools such as chatbots, large language models (LLMs), and conversational systems has improved mental health therapies, especially for depression and anxiety. This systematic review assesses the characteristics, therapeutic efficacy, and effectiveness of AI agents in mental health care. Using the PRISMA methodology, we review 21 studies conducted between 2020 and 2024 that examine AI-based mental health tools. The reviewed studies broadly agree that AI agents have increased user engagement and therapeutic potential. Several studies report symptom reductions ranging from 19% to 42%, along with increased compliance compared to traditional control groups. Emotional adaptation mechanisms and personalized feedback significantly impact user retention and productivity. However, evaluating long-term efficacy, ethical transparency, and engagement with therapeutic systems under constraints remains essential. This study emphasizes the need for careful development and rigorous clinical validation as it highlights the transformative potential of AI agents in mental health care.

Tuesday, November 18 11:00 - 12:40 (Asia/Bahrain)

S3-B: Deep Learning-2

Chairs: El-Sayed M El-Alfy, Khadija Ateya Almohsen
11:00 EDRL-DS: Energy-Aware Deep Reinforcement Learning with Dynamic Scaling for Latency-Efficient Task Scheduling in Medical IoT Edge Computing
El Miloud Ar-Reyouchi, Asmaa Kharouaa, Ayoub Hadj Sadek and Kamal Ghoumid

This paper introduces EDRL-DS, a novel energy-aware scheduling framework that integrates Deep Reinforcement Learning (DRL) with Dynamic Voltage and Frequency Scaling (DVFS) to enable efficient and adaptive task management in Medical Internet of Things (MIoT) edge environments. The proposed framework utilizes a Deep Q-Network (DQN) to dynamically optimize processor configurations and task scheduling in response to real-time workload variations, battery levels, and application demands. By intelligently adjusting voltage and frequency settings, EDRL-DS minimizes overall energy consumption and latency while maintaining strict quality of service requirements critical in healthcare-related IoT applications. Extensive simulations demonstrate that EDRL-DS achieves up to 35% energy savings compared to baseline approaches and significantly outperforms existing DRL-based methods in both energy efficiency and deadline compliance. These results underscore the potential of EDRL-DS for deployment in latency-sensitive, energy-constrained IoT systems, supporting scalable and sustainable operation across edge devices and cloud infrastructures.

11:20 A Focused Survey on Multimodal Recipe Extraction from Cooking Videos
Khadija Salih Alj, Kevin Smith and Yassine Salih-Alj

The explosion of short-form cooking videos on platforms such as TikTok, YouTube, and Instagram presents new challenges for extracting structured culinary knowledge. These videos are fast-paced, informal, and highly multimodal, blending visuals, speech, ambient audio, and overlaid text in unstructured and asynchronous ways. Existing surveys in food computing primarily address static images or written recipes, offering limited guidance for video-centric multimodal extraction. To address this gap, this survey presents a focused review of recent advances in multimodal artificial intelligence techniques for extracting recipes from social media cooking videos. Methodologically, this work proposes a modular taxonomy encompassing five core stages: video segmentation, modality specific feature extraction, cross-modal alignment, instruction generation, and structured output formatting. Using this framework, representative models, including vision-language transformers, audio-aware encoders, and cross-modal fusion networks, are analyzed in terms of accuracy, scalability, and robustness. Benchmark datasets are also compared along dimensions of modality coverage, annotation richness, and cultural inclusivity. The findings identify key research challenges, including noisy input handling, cultural adaptation, and ethical concerns. Hence, this work lays the groundwork for future research on multimodal recipe extraction and its deployment in intelligent food computing systems, such as dietary tools, kitchen assistants, and content-aware search engines.

11:40 Towards Transparent Diagnostics: Explainable AI Across Biomedical Signals
Ali Mohammad Alqudah and Zahra Moussavi

This paper delves into the pivotal role of Explainable Artificial Intelligence (XAI) within the realm of biomedical signal analysis. As AI systems become increasingly integrated into clinical workflows, the imperative for transparency and interpretability in their decision-making processes has become paramount. This work explores the foundational principles of XAI, its diverse applications across various biomedical signals, including tracheal breathing sounds, and the profound clinical advantages derived from its implementation. We emphasize how XAI can significantly enhance diagnostic accuracy, foster trust among healthcare professionals, and facilitate more informed patient management by providing clear insights into AI-driven predictions. Furthermore, the paper addresses the inherent challenges and outlines future trajectories for XAI in this rapidly evolving and critical domain.

12:00 AI Mobile Health Applications: Real-Time Monitoring and Support
Tharun Anand Reddy Sure and Akshay Mittal

Mobile health (mHealth) applications increasingly leverage artificial intelligence (AI) to transform healthcare delivery on smartphones and tablets. This comprehensive survey examines the role of AI in health-related mobile apps across four key domains: personalized health monitoring, diagnostic decision support, mental health coaching, and clinical triage. Through systematic analysis of current implementations, technical frameworks, and emerging trends, we review how machine learning (ML) and natural language processing (NLP) techniques are embedded in mHealth apps. We present four detailed case studies demonstrating AI capabilities: an insulin dose calculator achieving 90% food recognition accuracy, skin cancer detection matching dermatologist-level performance (87% agreement), cough analysis for pediatric illness diagnosis with 87% pneumonia detection accuracy, and AI triage chatbots providing clinically appropriate recommendations comparable to human doctors. Our analysis of system architectures compares on-device ML frameworks (TensorFlow Lite, Core ML) versus cloud-supported models, highlighting privacy-performance trade-offs. Key findings indicate that AI enables significant improvements in predictive analytics accuracy (15-30% over traditional methods), personalized intervention effectiveness, and healthcare accessibility in underserved regions. However, critical challenges persist, including data privacy concerns affecting 78% of users, algorithmic bias in 23% of diagnostic tools, high app abandonment rates (68% within 30 days), and evolving regulatory frameworks. Emerging innovations, including federated learning, TinyML for edge computing, explainable AI for clinical trust, and large language model-powered conversational agents, are positioned to address these limitations and enable the next generation of intelligent mHealth applications.

12:20 A Systematic Mapping Review of Flight Delay Prediction: Models, Data Sources, and Evaluation Metrics
Pedro Victor Brasil Ribeiro, Dimas Betioli Ribeiro and Ronaldo Martins Da Costa

Flight delays have become a significant challenge in the global aviation industry, leading to financial losses, operational disruptions, and passenger dissatisfaction. In response, an increasing number of studies have explored predictive modeling techniques to forecast delays. However, the literature remains fragmented, with varied methodologies, data sources, and evaluation metrics. This paper presents a systematic mapping review of flight delay prediction studies published over the past five years. A total of 59 articles were selected following PRISMA guidelines and screened across major academic databases (ACM DL, Scopus, and Web of Science). Each study was categorized according to the modeling approach-Statistical Models (SM), Machine Learning (ML), or Deep Learning (DL)-as well as the types of data employed: Flight Data (FD), Meteorological Data (MD), and Spatial Data (SD). The results indicate that ML and DL models dominate recent research, particularly for classification tasks, while SM approaches maintain relevance in regression contexts. Additionally, the most frequently used data source was FD, present in all studies, with MD and SD appearing less consistently. Performance metrics such as Accuracy, Recall, Precision, MAE, MAPE, and RMSE were statistically analyzed across model categories. The findings reveal that DL models outperform others in classification tasks, whereas SM and ML models offer more stable results in regression scenarios. This review contributes to consolidating the field, identifying research gaps, and guiding future work on robust and interpretable flight delay prediction models.

Tuesday, November 18 11:00 - 12:40 (Asia/Bahrain)

S3-C: AI & Analytics in Finance

Chairs: Ehab Juma Adwan, Mmatshuene Anna MA Segooa
11:00 A Hybrid Deep Learning Architecture for Multi-Task Financial Forecasting: Integrating Bi-LSTM and Hierarchical Attention
Hoang-Huu-To Nguyen, Lam Mai, Thuy-Thuy Tran and Thi-Thanh-Thuy Nguyen

This paper introduces a novel hybrid ensemble model for forecasting the Vietnamese VN-Index, achieving a significant R² of 0.0980 and a directional accuracy (DA) of 67.03% on historical data from 2010-2023. Our approach integrates Bidirectional Long Short-Term Memory (Bi-LSTM) networks with a Hierarchical Attention Network (HAN) to capture complex intra- and inter-timeframe relationships. Unlike prior works, the proposed architecture is a multi-task model that simultaneously predicts returns and trends. The final ensemble, which combines a baseline Bi-LSTM with two specialized HAN variants, surpasses the baseline's 0.0618 R² and 58.78% DA. The results demonstrate that integrating multi-timeframe attention within a multi-task ensemble setup provides substantial gains in both predictive accuracy and robustness for a volatile emerging market index.

11:20 Forecasting Gold Futures Using Attention-Enhanced LSTM Networks Incorporating Market Signals
Sa'eed Serwan Abdulsattar, Mohab A Mangoud M and Ayman Al-khazraji

This study proposes a novel deep learning framework for gold futures price forecasting, combining Long Short-Term Memory (LSTM) networks with a dual attention mechanism and a comprehensive set of technical indicators. While traditional LSTMs struggle with feature and temporal relevance in volatile markets, our attention-enhanced architecture dynamically weights both critical time steps and predictive indicators, including momentum, volatility, and trend signals, to improve accuracy and interpretability. Empirical evaluation on gold futures data (2010-2025) demonstrates the model's superiority, achieving a Mean Absolute Percentage Error (MAPE) of 1.13% and an R² of 0.991, with a 34.7% reduction in MAE after calibration. The attention mechanism further provides actionable insights by identifying dominant market regimes, while a lag-aware calibration module enhances directional forecasting. By bridging sequential modeling with adaptive feature selection, this work advances financial time-series forecasting, offering a robust tool for traders and policymakers navigating commodity market uncertainty.

11:40 Achieving Canadian Housing Project Success through Artificial Intelligence, Risk Monitoring and Controls
Usman Ahmad, Muhammad Abrar-ul-Haq, Abdul Rauf, Muhammad Ashfaq and Subeika Rizvi

This study aims to examine the role of internal project risks and project risk monitoring and controls in the success of housing projects in St. John's, Newfoundland and Labrador, Canada. The sample was drawn from the project team members involved in housing projects in St. John's, Newfoundland and Labrador, Canada. The data were collected from 246 participants using a purposive sampling technique. The data were analysed by employing the PLS-SEM method. The results revealed that project risk has a significant negative effect on both project success and project risk monitoring and control. However, project risk monitoring and control insignificantly mediates the impact of project risks on project success, while project risk monitoring and control has a positive but insignificant influence on project success. PRM principles and guidelines insignificantly moderate the effect of project risks on project success. Likewise, awareness of AI in project risk analysis insignificantly moderates this effect. The study recommends that project managers prioritise the identification and mitigation of internal project risks to minimise their adverse impact on project outcomes.

12:00 Unlocking Adoption: Determinants of Behavioral Intention to Use App-Based Feeder Transportation
Rafid Ikbar Athallah and Mudjahidin Mudjahidin

Application-Based Feeder Transportation (AFT) provides shared mobility that bridges last-mile gaps, promotes sustainable transit, and supports e-government initiatives. However, ridership remains far lower than that of private-vehicle trips, and research on societal acceptance of AFT is still limited, particularly in Indonesia. Therefore, the goal of this study is to investigate the determinants of behavioral intention to use AFT, the factors that influence perceived usefulness, and the role that social influence plays. The research does this by adding new constructs and linkages to the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). In July 2025, 210 users participated in our online survey using a Likert scale that measured ten constructs: Perceived Ease of Use (PEU), Perceived Usefulness (PU), Social Influence (SI), Perceived Risk (PR), Environmental Awareness (EA), Facilitating Conditions (FC), Hedonic Motivation (HM), Price Value (PV), Habit (HB), and Behavioral Intention (BI). Data analysis was carried out using partial least squares structural equation modeling (PLS-SEM). According to the findings, EA, SI, PU, and HB have a major beneficial impact on BI. PEU and SI have a major beneficial impact on PU, but PR has a detrimental one. SI exerts a major detrimental impact on PR and a beneficial impact on PEU and PU. However, PEU, PR, FC, HM, and PV showed only a minor impact on BI. This study extends the literature on acceptance of application-based feeder transportation and offers practical recommendations for AFT providers to enhance adoption.

12:20 Role of Tech Factors on Entrepreneurial Intentions among Women in Tech
Supriya Srivastava, Ardhendu Shekhar Singh and Vishal Soodan

Despite a surge in digital inclusion, women's representation in tech-driven entrepreneurship remains disproportionately low. The gender gap in intention to start a venture among women in technology is often attributed to systemic barriers, socio-cultural conditioning, limited access to mentorship, digital confidence, and institutional support. This study explores the underlying cognitive and contextual enablers influencing women's entrepreneurial intention (EI) in the digital era. The study identifies key determinants, namely Tech Self-Efficacy (TSE), Digital Risk Appetite (DRA), and a Growth-Centric Mindset (GCM), which drive EI among women in technology fields. The study used a thematic analysis method to derive themes from semi-structured interviews. Thematic analysis revealed consistent narratives about TSE, DRA, and GCM as key drivers of EI. Policy support and social capital were identified as moderating variables. The authors also propose a conceptual model that maps the key enablers derived from the study leading to EI among women. The study draws on qualitative thematic analysis of 24 semi-structured interviews with aspiring women entrepreneurs who have an educational tech background and come from Indian metropolitan tech hubs such as Pune and Bengaluru. The study has practical implications for industries, educational institutions, policymakers, incubators, and tech educators aiming to design a more inclusive, gender-sensitive entrepreneurial ecosystem. This study contributes to the domain of women entrepreneurship and gender studies, with specific relevance to emerging economies undergoing digital transformation.

Tuesday, November 18 11:00 - 12:40 (Asia/Bahrain)

S3-D: e-Business; Technology's Impacts on Society

Chairs: Yaqoob S Al-Slais, Saqib Saeed
11:00 Social Media Marketing and Brand Loyalty in Online Travel Agent Platforms: A Satisfaction-Based Approach
Adhe Muflihatul Ulya, Jili Gartia Hapsari Uni Hadiati, Ridho Bramulya Ikhsan, Hartiwi Prabowo, Ahmad Fikron Maulida and Enina Putri

Rapid technological developments have made life easier, including in business. Social media, which was initially used as a means of communication for the public, now also plays an important role in business marketing by increasing brand loyalty through customer satisfaction. Online travel agents are one type of business that uses social media for marketing. This study aims to analyze the effect of social media marketing on brand satisfaction and loyalty among online travel agents in Indonesia. This study involved 218 respondents selected using purposive sampling. The collected data were analyzed using Partial Least Squares-Structural Equation Modeling. The results show that social media marketing positively and significantly influences satisfaction but does not influence brand loyalty. Customer satisfaction also has a positive and significant influence on the brand loyalty of online travel agents. The findings provide insights for online travel agent companies to optimize their social media marketing strategies to enhance customer loyalty through customer satisfaction.

11:20 From feeds to impulsive buying: How social media activities drive impulsive buying among Indonesian Gen-Z consumers, moderated by self-control
Excel Favour Yosafat Tomboelu, Shaera Nailo Fitria, Novian Abby Sumarsono, Ridho Bramulya Ikhsan, Hartiwi Prabowo and Kartika Aprilia Benhardy

Social media and e-commerce encourage the creation of impulsive behavior in Generation Z. However, self-control studies are still rarely carried out in the context of impulsive behavior. Therefore, this study aims to investigate the effect of social media community and advertising on impulsive buying intention and buying behavior. In addition, this study also examines the effect of impulsive buying intention on impulsive buying behavior as moderated by self-control. This research uses quantitative analysis, with data collected through a questionnaire distributed to 260 respondents using purposive sampling. Data were analyzed using partial least squares-structural equation modeling. The results showed that social media community and advertising positively and significantly affect impulsive buying intention. Impulsive buying intention has a positive and significant effect on impulsive buying behavior. Self-control is shown to negatively and significantly moderate the relationship between impulsive buying intention and buying behavior. This research signals to e-commerce players that social media positively impacts consumers' impulsive behavior.

11:40 Drivers of E-Recruitment Adoption in Indonesia's Automated Businesses
Rini Sari, Aisyah Qanita, Marita T. Salandanan and Baghas Budi Wicaksono

Digital transformation in various fields has driven the implementation of information technology, including in the employee recruitment process. E-recruitment, as part of the human resource information system (HRIS), is increasingly used due to its efficiency in reaching candidates widely. In Indonesia, around 55% of companies had adopted HRIS to manage the recruitment process by 2023. However, although the implementation of e-recruitment continues to increase, challenges remain, especially in building user trust and reducing stress when using digital platforms. The objective of this study is to investigate the variables that affect behavioral intentions to adopt e-recruitment in automation-based businesses in Indonesia. Using the TAM approach, this study explores the role of Perceived Usefulness (PU), Perceived Ease of Use (PEU), Vividness (VIV), and Perceived Internet Stress (PIS) in users' intentions to use e-recruitment. A total of 204 respondents contributed to this study, and the data were analyzed using the SEM-PLS method. The results showed that PEU had the greatest influence on user intention, followed by PU, indicating that the easier and more useful a system is, the higher the likelihood of users adopting it. The vividness and PIS factors also have an influence, although their impacts are smaller. The practical implications highlight the importance of a user-friendly interface, attractive system design, and technical support features that reduce user stress in encouraging the adoption of e-recruitment in Indonesia.

12:00 Towards Trustless Academic Records in Higher Education: Integrating Blockchain and IPFS for Verifiable Student Credentials
Yan Watequlis Syaifudin, Abiyasa Putra Prasetya, Raihan Hidayatullah Djunaedi, Pramana Yoga Saputra, Hein Htet, Asep Sunandar, Salies Apriliyanto, Vipkas Al Hadid Firdaus and Maskur Maskur

The management of academic credentials in higher education faces persistent challenges related to fraud, administrative inefficiency, and lack of interoperability, particularly as institutions increasingly recognize non-curricular and micro-credentials that lack standardized validation mechanisms. This paper presents a decentralized system for issuing, storing, and verifying student credentials using Hyperledger Besu and the InterPlanetary File System (IPFS). By leveraging a permissioned blockchain, the system ensures tamper-proof recording of credential metadata, while IPFS provides secure, decentralized storage of certificate files through content-based addressing. Smart contracts automate key processes with role-based access control and introduce automatic tracking of non-curricular achievement credits. The frontend, built with ReactJS and integrated via Thirdweb SDK, enables intuitive user interaction. Functional testing confirms system correctness, while security and performance evaluations demonstrate robustness, transparency, and scalability. The proposed solution empowers students with lifelong ownership of their academic records and enables trustless, instant verification by employers and institutions, paving the way for a more secure, efficient, and learner-centric credentialing ecosystem in higher education.

12:20 Addressing Employee Disengagement using Artificial Intelligence in E-Businesses: Application of a Data-Driven Approach
Deepika Pandita, Debosmita Saha, Varnalika Vuyyuru and Fatima Vapiwala

Employee disengagement affects the stability and productivity of an e-business and is a significant contributor to turnover. Conventional HR practices depend mainly on employee surveys, assessments, and performance reviews, but these approaches take time and effort and are not bias-free. On the other hand, AI revolutionises these processes by introducing real-time behavioural analytics, sentiment analysis, and predictive modelling to detect subtle shifts in employee motivation and emotional well-being. This study investigates how AI can revolutionise HR by identifying patterns of disengagement through behavioural analytics and how well it can stop employee attrition. This study employed a qualitative research design by conducting semi-structured interviews with 47 managers from the cities of Mumbai and Pune based on their professional backgrounds and exposure to AI-driven HR tools. The thematic analysis reveals the causes of employee disengagement as well as how AI can be leveraged to foster employee engagement levels. Based on the findings, an AER Model is proposed, highlighting a systematic method combining managerial assistance, tailored engagement tactics, and AI monitoring. The AER model focuses more on balancing out the AI and human components and ensures that both complement rather than compete with each other, ultimately leading to employee engagement and retention.

Tuesday, November 18 11:00 - 12:40 (Asia/Bahrain)

S3-E: Big Data Analytics

Chair: Rashed Bahlool
11:00 Comparative Performance Analysis of String Matching Algorithms and Data Matching Frameworks Using Python Libraries on Academic Datasets
Muhammad Ridho Waradana, Venia Anisya Rakhmanda, Ganendra Aby Bhamakerti, Fikri Yoma Rosyidan and Nur Aini Rakhmawati

This study conducts a comparative analysis of string matching algorithms and data matching frameworks using Python libraries applied to academic datasets. It examines key algorithms (Jaro-Winkler, Edit Distance, Jaccard, and Hamming Distance) alongside frameworks such as Splink, Record Linkage, FuzzyWuzzy, RapidFuzz, NLTK, DiffLib, RLTK, and Data Matching. The findings indicate that no single method consistently outperforms the others across all scenarios. Algorithms like Jaro-Winkler and frameworks such as Splink are well-suited for tasks requiring high precision. In contrast, methods like Edit Distance and RLTK are more effective when the goal is to maximize recall. Implementing a tiered, adaptive pipeline helps balance these trade-offs, supporting reliable and cost-effective entity resolution across varied academic records.
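
A small sketch comparing several of the metrics and libraries named above on a pair of academic-style strings is given below; the example strings are illustrative, and no matching threshold from the study is implied.

import jellyfish
from rapidfuzz import fuzz
from rapidfuzz.distance import Levenshtein, JaroWinkler

a = "Department of Computer Science, Univ. of Bahrain"
b = "Dept. of Computer Science, University of Bahrain"

def jaccard_tokens(x, y):
    # Token-level Jaccard similarity over lowercased whitespace tokens.
    sx, sy = set(x.lower().split()), set(y.lower().split())
    return len(sx & sy) / len(sx | sy)

print("Jaro-Winkler (rapidfuzz):", JaroWinkler.similarity(a, b))
print("Jaro-Winkler (jellyfish):", jellyfish.jaro_winkler_similarity(a, b))
print("Edit distance (Levenshtein):", Levenshtein.distance(a, b))
print("Token-set ratio (FuzzyWuzzy-style):", fuzz.token_set_ratio(a, b))
print("Jaccard over tokens:", round(jaccard_tokens(a, b), 3))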

11:20 Forecasting ASEAN Stock Returns Using Support Vector Regression
Tran Trong Huynh and Bui Thanh Khoa

The ASEAN financial markets play a vital role in the region's economic development, offering high growth potential and attracting global investors. However, these emerging markets remain susceptible to both domestic and external shocks, including fluctuations in commodity prices, exchange rates, and geopolitical tensions. Understanding the drivers of stock return variation is essential for investors and policymakers alike. This study examines the impact of macro-financial and geopolitical variables, such as exchange rate returns, gold and oil prices, global equity performance (S&P 500), and the Geopolitical Risk Index (GPR), on monthly stock returns across six ASEAN countries from 2000 to 2025. While traditional econometric models like pooled Ordinary Least Squares (OLS) are commonly used, this study also explores the forecasting power of Support Vector Regression (SVR), a machine learning technique known for its robustness in nonlinear and high-dimensional environments. The results indicate that exchange rates, oil prices, and US equity markets are key determinants of ASEAN stock returns. SVR outperforms both OLS and the Random Walk benchmark in out-of-sample prediction, especially during periods of heightened volatility. These findings challenge the strict form of the Efficient Market Hypothesis and highlight the added value of incorporating macro-financial and political uncertainty into predictive modeling through advanced machine learning methods.
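
A minimal sketch of an SVR forecasting setup of this kind, with standardized predictors and a naive persistence benchmark, is shown below; the synthetic predictors stand in for the macro-financial and geopolitical variables used in the study, and the benchmark here is a simple last-value forecast rather than the paper's exact Random Walk specification.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
T = 300                                   # monthly observations (placeholder)
# Placeholder predictors: FX return, oil return, gold return, S&P 500 return, GPR change.
X = rng.normal(size=(T, 5))
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(scale=0.5, size=T)

split = int(0.8 * T)
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.01)).fit(X_tr, y_tr)
rmse_svr = mean_squared_error(y_te, svr.predict(X_te)) ** 0.5
rmse_naive = mean_squared_error(y_te[1:], y_te[:-1]) ** 0.5   # last-value persistence benchmark
print(f"SVR RMSE = {rmse_svr:.3f}, naive benchmark RMSE = {rmse_naive:.3f}")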

11:40 Short-Form Video Content and User Engagement Factors in Social Media: A Descriptive Analysis in Indonesia
Rayhan Juniano Rachman and Eka Miranda

Short-form videos have taken over most social media platforms, yet sustaining user engagement remains challenging. This study explores the relationship between video content characteristics, such as video length, time of posting, content style, and platform-specific features, and their influence on engagement metrics, namely likes, shares, and comments. Unlike past studies that relied on predictive modeling, this research uses descriptive statistics based on responses from Gen Z respondents in Indonesia, specifically in the JABODETABEK area, gathered via an online survey form. The findings show that shorter videos, posting at peak times, entertaining content, and using platform features can all boost engagement. The study provides insights for content creators, marketers, and social media platform users by confirming these relationships. It also encourages future studies to explore wider audiences, more advanced data analysis, and how social media platform algorithms affect engagement.

12:00 Analysis of Distributed Government Data Using Artificial Intelligence Techniques
Abdul-Razak Almohamed Almahmoud and Mohamad-Bassam Kurdy

The analysis of government data contributes to monitoring government performance, participating in decision-making, and tracking financial and administrative activities to enhance transparency and accountability in government institutions. Despite the anticipated benefits of government data analysis, these data face several challenges due to their distributed and heterogeneous nature. Knowledge graphs can provide an effective solution for representing information in an organized and logical manner. However, integrating knowledge graphs requires syntactic homogeneity among the vocabularies of RDFS graph schemas, which poses a significant obstacle to their integration. To address this issue, we conducted an alignment between semantic web techniques and artificial intelligence techniques to retrieve information and uncover implicit relationships based on context and shared semantics among texts. For this purpose, we utilized BERT embeddings. The cosine similarity algorithm achieved good results on long texts, but its performance on single words and short texts was extremely poor. We improved the cosine similarity calculation method on BERT embeddings, which enhanced the efficiency of the BERT model by 95%. Ultimately, we succeeded in developing an integrated pipeline for analyzing distributed government data, starting from data collection and organization, through processing and analysis, and concluding with visualization. The model's test results achieved an average accuracy of F1-Score = 96%.
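
The core similarity computation can be sketched as mean-pooled BERT embeddings compared with cosine similarity, as below; the multilingual checkpoint and example texts are assumptions, and the paper's improved scoring for short texts is not reproduced here.

import torch
from transformers import AutoTokenizer, AutoModel

# Assumed multilingual checkpoint; the authors' exact model and improved scoring are not shown here.
name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name)

def embed(texts):
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state              # (batch, seq, hidden)
    mask = enc.attention_mask.unsqueeze(-1)              # ignore padding when mean-pooling
    return (out * mask).sum(1) / mask.sum(1)

texts = ["Ministry of Finance annual budget report",
         "Annual budgetary report issued by the finance ministry"]
e = embed(texts)
cos = torch.nn.functional.cosine_similarity(e[0:1], e[1:2])
print(f"cosine similarity: {cos.item():.3f}")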

12:20 Object Detection for Smart Retail Systems using OpenCV and Deep Learning
Sai Charan Yarrangari, Saeed Sharif, Halima Kure and Hamed Balogun

Artificial Intelligence continues to be pivotal in recent transformations of smart retail systems through inventory tracking, automated checkout processes, shelf management, and related tasks. We propose an AI-powered object detection pipeline for smart retail environments using two state-of-the-art deep learning models for object detection, YOLOv5 and Faster R-CNN. Our system enables automatic identification and localization of retail products on supermarket shelves, thus reducing manual interaction, improving operational workflows, and enabling real-time decision-making. We used a curated dataset of annotated shelf images, processed with OpenCV for image normalization, augmentation, and annotation. Both YOLOv5 and Faster R-CNN show better performance than YOLOv2, YOLOv3, YOLOv7, and SSD trained on Pascal VOC and MS COCO. The single-stage architecture of YOLOv5 makes it excel in real-time inference scenarios because it converges fast, while Faster R-CNN offers the best precision for densely packed or partially occluded objects through its two-stage process. We export our models to ONNX to show that they can be deployed on edge devices and serve as a scalable, efficient AI-enabled solution for small-to-medium-sized retailers.
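
A minimal sketch of the YOLOv5 plus OpenCV inference path is given below, using COCO-pretrained weights and a placeholder image as stand-ins for the retail-trained model and shelf data.

import cv2
import torch

# COCO-pretrained weights serve as a placeholder for a retail-trained checkpoint.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

img_bgr = cv2.imread("shelf.jpg")                 # placeholder shelf image
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

results = model(img_rgb)                          # forward pass; resizing handled internally
det = results.pandas().xyxy[0]                    # DataFrame: xmin, ymin, xmax, ymax, confidence, class, name

for _, row in det.iterrows():
    p1, p2 = (int(row.xmin), int(row.ymin)), (int(row.xmax), int(row.ymax))
    cv2.rectangle(img_bgr, p1, p2, (0, 255, 0), 2)
    cv2.putText(img_bgr, f"{row['name']} {row.confidence:.2f}", (p1[0], p1[1] - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite("shelf_detections.jpg", img_bgr)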

Tuesday, November 18 12:40 - 13:00 (Asia/Bahrain)

CD-2: Closing Day-2

Wednesday, November 19

Wednesday, November 19 9:00 - 10:40 (Asia/Bahrain)

S4-A: Telecommunication and Networking-2

Chairs: Luisella Balbis, Gerald P Arada
9:00 An Intelligent IDS/IPS Framework to Fortify Industry 4.0 Applications
Samir Qurbani, Athirah Mohd Ramly, Pardha Saradhi Chakolthi and Saeed Sharif

The transformation of manufacturing operations in modern industrial facilities under Industry 4.0 has been made possible by the adoption of Industrial IoT and cyber-physical system technologies, which bring technical and economic benefits such as automation, remote management, self-organization, and smart real-time decision-making. At the same time, the specific nature of the industry and its increased reliance on network communication introduce significant cybersecurity risks, and signature-based IDS/IPS solutions struggle to keep pace with the growing complexity of modern networks. Sophisticated attacks such as APTs, zero-day exploits, and insider threats can no longer be addressed by traditional Intrusion Detection Systems alone. The objective of this research was to propose and build a deep learning IDS/IPS for the smart factory that can effectively detect these and other types of cyberattacks with a low false positive rate. The IDS/IPS uses a BiLSTM deep learning model with an attention mechanism to improve temporal pattern recognition for anomaly classification. The KDDTrain+ and KDDTest+ datasets were used for model training and testing, while SMOTE was applied for class balancing. Bayesian optimization through Keras Tuner was used for hyperparameter tuning. The best model's validation accuracy was 89.2%. The model was evaluated using the F1-score, ROC-AUC, and confusion matrix. The results support the effectiveness of the proposed IDS/IPS solution in detecting a wide range of attack types with high efficiency, as well as its practicality for scalable real-time cybersecurity in industrial settings.
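
A minimal sketch of the described pipeline, assuming a Keras implementation with illustrative shapes and layer sizes (not the authors' tuned architecture):

```python
# Minimal sketch: SMOTE class balancing followed by a BiLSTM with attention.
# Feature count, layer sizes, and training settings are illustrative assumptions.
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE

n_features, n_classes = 41, 5                     # assumed NSL-KDD-style dimensions
X = np.random.rand(1000, n_features)
y = np.random.randint(0, n_classes, 1000)

X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)
X_seq = X_bal.reshape(-1, n_features, 1)          # treat features as a sequence

inputs = tf.keras.Input(shape=(n_features, 1))
h = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(inputs)
att = tf.keras.layers.Attention()([h, h])         # self-attention over BiLSTM outputs
pooled = tf.keras.layers.GlobalAveragePooling1D()(att)
outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(pooled)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_seq, y_bal, epochs=3, batch_size=64, validation_split=0.2)
```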

9:20 Latency Management In IP Networks Using Forecasting Techniques and Two-way Active Measurement Protocol Data
Bongani Kubayi, Irvine Mapfumo and Thokozani Calvin Shongwe

Fifth-generation (5G) networks have become the backbone of time-critical applications across domains such as the internet of things (IoT), industrial automation, and healthcare, intensifying the need for reliable, low-latency communication. Accurate latency forecasting is essential to meet the stringent performance demands of these systems. This study evaluates two statistical time series models, seasonal autoregressive integrated moving average (SARIMA) and error, trend, and seasonal (ETS), for predicting latency in internet protocol/multiprotocol label switching (IP/MPLS) transport networks supporting 5G services. Latency data were collected over 15 days from a 10-gigabit per second (Gbps) IP/MPLS ring, encompassing six network links and aggregated into two representative routes. SARIMA effectively modeled stable, seasonal latency patterns on Route 1, while ETS showed strong generalization across both routes. However, both models faced challenges when applied to volatile latency behavior, particularly on Route 2. The findings emphasize the importance of aligning model selection with the underlying variability of network conditions. SARIMA is better suited for predictable, structured traffic, while ETS performs reliably in environments with less consistent patterns. This research contributes to proactive latency management in 5G networks and highlights the potential of hybrid or machine learning approaches for improving forecasting accuracy in future studies.
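
For readers unfamiliar with these models, the sketch below fits SARIMA and a Holt-Winters stand-in for ETS using statsmodels on a synthetic hourly latency series; the sampling rate, model orders, and seasonal period are assumptions, not the study's settings:

```python
# Rough sketch of SARIMA vs. ETS-style forecasting on a synthetic latency series.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical latency series (ms): one sample per hour for 15 days.
idx = pd.date_range("2024-01-01", periods=15 * 24, freq="h")
latency = pd.Series(5 + 0.5 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
                    + 0.1 * np.random.randn(len(idx)), index=idx)

train, test = latency[:-24], latency[-24:]        # hold out the final day

sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
ets = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=24).fit()   # Holt-Winters stand-in for ETS

for name, fc in [("SARIMA", sarima.forecast(steps=24)), ("ETS", ets.forecast(24))]:
    rmse = float(np.sqrt(((fc.values - test.values) ** 2).mean()))
    print(name, "RMSE:", round(rmse, 3))
```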

9:40 Semantic Communication Architectures Using Transformers for 6G systems
Karrar Imran Ogaili, Hasan Abdulameer Hasan and Ali Z.K. Matloob

This work presents a CSI-adaptive transformer-based semantic communication framework for 6G networks aimed at maintaining high semantic fidelity and task accuracy under diverse channel conditions. The framework combines semantic feature extraction, transformer-based contextual encoding, channel state information-aware symbol mapping, and device-edge collaborative processing, optimized through a joint loss function balancing task accuracy, semantic similarity, and channel efficiency. Experiments on text, image, speech, and multimodal datasets under Rayleigh and Rician fading demonstrated superior precision, recall, F1-score, and accuracy, with the lowest mean squared and root mean squared errors. The system sustained robust performance in low-SNR conditions while reducing latency and energy consumption, making it suitable for mission-critical and resource-constrained 6G applications.

10:00 Spectrum Occupancy Assessment of the UHF TV Band in the University of the Philippines Los Baños
Annie Liza C. Pintor and Gerald P Arada

As a solution to spectrum scarcity, TV white space is being explored since the very high frequency (VHF) and ultra-high frequency (UHF) TV broadcast bands remain underutilized in many regions. The first step towards the utilization of TV white space is identification of available frequencies or channels that could be deployed for different applications. This study investigates TV white space in six locations in the University of the Philippines Los Baños using a handheld spectrum analyzer connected to a rubber duck UHF antenna and a laptop running the spectrum analyzer software to identify the noise threshold. Determination of the noise threshold in each location is based on the 80% method suggested by Recommendation ITU-R SM.1753. The highest and lowest noise threshold values are -84.5 dBm and -100.9 dBm, respectively. Parameters evaluated from the noise threshold include total occupied channels (TOC), total available channels (TAC), and spectrum resource occupancy (SRO). Results show that in all locations, the UHF TV band is underutilized, with a maximum and minimum SRO of 39.47% and 23.68%, respectively.

10:20 Catheter Motion Control and Detection in Robot-Assisted Cardiac Catheterization
Naman Gupta, Ranjan Jha, Deepak Kumar and Sanjeev Kumar

Robot-assisted cardiac catheterization addresses the limitations of manual catheter manipulation, which traditionally exposes the medical team to continuous x-ray radiation, posing significant health risks. This innovative system enables surgeons to remotely control catheter movements, drastically reducing x-ray exposure while enhancing procedural precision and safety. The system facilitates two primary motions: translational (forward and backward) and rotational (clockwise and counterclockwise), allowing for highly precise navigation of the catheter within the patient's cardiovascular system. Control over these motions is achieved through advanced controllers and actuators, integrated with real-time feedback mechanisms for accurate displacement, velocity, and angular monitoring. Experimental results confirm the system's capability to simultaneously perform translational and rotational movements. As demonstrated, the system achieves a translational speed of approximately 15 RPM, with a minimum displacement precision of 1 mm. Also, it achieves a minimum rotational angle of 0.225 degrees, ensuring exceptional accuracy in rotational motion. These capabilities underscore the system's precision in catheter manipulation, meeting the stringent requirements of interventional cardiology procedures. By automating these critical motions, the system eliminates the need for direct manual handling, thereby improving procedural safety and reducing the dependency on continuous x-ray imaging. This technological advancement not only minimizes health risks for the medical team but also enhances the precision and reliability of cardiac interventions, ultimately contributing to better patient outcomes. The experimental findings validate the system's potential to revolutionize interventional cardiology by offering safer, more precise, and efficient solutions for complex cardiac procedures.

Wednesday, November 19 9:00 - 10:40 (Asia/Bahrain)

S4-B: Deep Learning; Pattern Recognition

Chair: Thamer Muhammed Jamel
9:00 AI-Powered Driver Safety: Integration of Real-Time Drowsiness Detection with GPS Navigation System for Accident Prevention
Khakam Maruf, Rizal Justian Setiawan and Almer Makasa Wibawa

Driver safety represents a critical component of sustainable transportation systems; nevertheless, driver drowsiness continues to account for a substantial proportion of fatal traffic incidents worldwide. The present study seeks to design and implement an AI-driven safety mechanism that fuses real-time drowsiness detection with GPS-based navigation to mitigate accident risk. Employing a Research and Development (R&D) methodology, the investigation proceeds through sequential phases of problem analysis, system design and planning, prototype fabrication, empirical testing, and comprehensive evaluation. The hardware architecture comprises an ESP32 microcontroller, an OLED display, a NEO-6M V3 GPS module, an audible buzzer, a user-interface push button, and a webcam. Drowsiness detection is realized via computer-vision algorithms, wherein the Eye Aspect Ratio (EAR) serves to quantify blink dynamics and the Mouth Aspect Ratio (MAR) is utilized to identify yawning events. Experimental validation yielded a detection accuracy of 85 percent under optimal illumination, 68 percent under low-light conditions, and 72 percent when evaluating lateral facial profiles. Upon detection of drowsiness, the system issues an immediate auditory alert and transmits geolocation data via the Telegram messaging platform. These findings demonstrate the system's potential to provide timely warnings and facilitate rapid response, thereby substantially contributing to the prevention of drowsiness-related traffic accidents.
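
As background, the widely used EAR formulation divides the summed vertical eye-landmark distances by twice the horizontal distance; the sketch below illustrates that computation with toy landmarks and assumed thresholds (the system's exact landmark layout and tuning are not specified in the abstract):

```python
# Sketch of the Eye Aspect Ratio (EAR) / Mouth Aspect Ratio (MAR) signals.
# Landmark indices and thresholds are illustrative assumptions.
import numpy as np

def aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Generic 6-point aspect ratio: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p = [np.asarray(v, dtype=float) for v in (p1, p2, p3, p4, p5, p6)]
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.21   # eyes considered closed below this (assumed value)
MAR_THRESHOLD = 0.60   # mouth considered yawning above this (assumed value)

eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]   # toy eye landmarks
ear = aspect_ratio(*eye)
print("EAR:", round(ear, 3), "drowsy" if ear < EAR_THRESHOLD else "alert")
```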

9:20 Artificial Intelligence-Powered Agriculture: Bridging Data, Automation, and Sustainability
Renato R. Maaliw III, Carl B. Monterey, Felino Jr. J. Gutierrez and Kathreena G. Engay-Gutierrez

Agriculture faces urgent challenges, including the need to feed a growing global population with fewer resources, adapt to climate change, and ensure environmental sustainability. This article examines how artificial intelligence (AI) can help address these issues, drawing on a narrative literature review of publications from 2020 onward identified through scholarly databases with keyword-based selection. The review highlights that machine learning-driven precision farming systems can help improve crop yields and soil fertility and optimize irrigation to use water efficiently. We find that deep learning and computer vision enable monitoring and detection of diseases and pests, allowing proactive interventions. Robotic and autonomous systems are automating labor-intensive tasks in farm operations, improving productivity, addressing labor shortages, and enhancing safety. In addition, big data and predictive analytics power decision support tools that help stakeholders make informed decisions. However, the adoption of AI in agriculture faces challenges such as data scarcity, the digital divide, limited rural infrastructure, and ethical concerns. Overall, our study underscores AI's transformative potential to make agriculture more sustainable and productive, emphasizing the need for inclusive, responsible innovation to ensure these benefits reach all stakeholders.

9:40 Opposition Learning Integrated Red Panda Optimization for Deep Neural Feature in Menopausal Nutrition Intelligence
Logapriya E, Surendran R, M Mohana and Sundara Rajulu Navaneethakrishnan

The growing prevalence of menopausal health issues highlights the need for personalized nutrition recommendation systems beyond generalized clinical guidelines. This study proposes an Opposition Learning-Based Red Panda Optimization (OL-RPO) framework combined with a deep learning classifier for feature selection and nutrition prediction in menopausal women. Using a dataset of 4910 records covering dietary patterns, symptoms, and health indicators, OL-RPO enhances exploration and exploitation by merging opposition learning with Red Panda Optimization, effectively avoiding premature convergence. Experimental results show that OL-RPO surpasses Genetic Algorithm, Particle Swarm Optimization, Whale Optimization Algorithm, and standard RPO, achieving 98.1% accuracy with higher precision, recall, and F1-score. Additionally, irrelevant features are reduced by 43%, ensuring robust high-dimensional analysis. The study contributions include OL-RPO-based feature selection, integration with deep learning, and clinical validation of nutrition-related factors, demonstrating its value as a decision-support tool for personalized menopausal healthcare.
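
As background on the opposition-learning step (the full Red Panda Optimization algorithm is not reproduced here), a common formulation evaluates, for each candidate x within bounds [lb, ub], the opposite point lb + ub - x and keeps the better of the two; a minimal sketch with placeholder fitness and bounds:

```python
# Minimal sketch of an opposition-learning step for a population-based optimizer.
# The fitness function, bounds, and population size are placeholders, not the
# paper's OL-RPO configuration.
import numpy as np

def opposition_step(population, lb, ub, fitness):
    opposites = lb + ub - population                      # opposite of each candidate
    combined = np.vstack([population, opposites])
    scores = np.apply_along_axis(fitness, 1, combined)
    best = np.argsort(scores)[: len(population)]          # keep the best half (minimisation)
    return combined[best]

rng = np.random.default_rng(0)
lb, ub = np.zeros(10), np.ones(10)
pop = rng.uniform(lb, ub, size=(20, 10))
pop = opposition_step(pop, lb, ub, fitness=lambda x: np.sum((x - 0.3) ** 2))
print(pop.shape)   # (20, 10): population after the opposition step
```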

10:00 Hybrid Machine Learning Approaches for Agronomic Price Prediction Using Multi-Source Data
Shanmugam S, Shail Tejas Shah, Jothikumar C and Aditya Raj Raj Singh

In the agricultural sector, farmers can greatly benefit from accurate price forecasting, which helps them make well-informed decisions. Forecasting agronomic commodity prices, however, comes with various challenges, as the price of agricultural goods depends mainly on historical prices, weather patterns, economic indicators, and satellite imagery. This paper proposes a hybrid machine learning framework that uses ensemble learning to integrate the outputs of specialized models, each trained on one factor that contributes to the future price of the commodity. A Ridge Regression model predicts the economic trend, LightGBM, a gradient boosting framework, forecasts weather, and LSTM networks capture temporal trends in past prices. According to experiments on multi-source datasets, this hybrid approach outperforms traditional models on metrics such as RMSE, MAE, and R2. The results indicate that the multi-model approach is more effective and adaptive than traditional approaches because it accounts for multiple contributing factors.

10:20 Cone-Rod Dystrophy Multimodal Prediction
Seif Wael, Caroline Atef Tawfik and Ahmed Maged Fahmy

Cone-Rod Dystrophy (CRD) is a congenital and uncommon neuro-retinal disease characterized by gradual loss of both cone and rod photoreceptors, which leads to progressive deterioration of visual acuity and colour vision and, eventually, of peripheral vision. Optical Coherence Tomography (OCT) and Electroretinography (ERG) are the methods used in current clinical practice to observe pathophysiological processes in the retina, providing complementary information on the structural integrity and functional state of the retina, respectively. The inherent subtlety of the clinical manifestations in the early stages of the disease, combined with the interpretative limitations of unimodal assessment, underscores the need for innovative diagnostic methods. This paper examines a multimodal artificial intelligence framework that utilizes both deep learning and machine learning to support the diagnosis of CRD. High-resolution OCT images are processed using a set of convolutional neural networks, residual networks, and vision transformers. At the same time, the tabular ERG data are classified by Logistic Regression, Support Vector Machine, Random Forests, and XGBoost. A late-fusion network then combines the image features extracted from OCT with the ERG classification outputs in a single model. Moreover, the system can produce organized diagnostic reports that combine visual, tabular, and textual information. Extensive testing shows that the multimodal framework is more effective than independent unimodal analyses, supporting early CRD detection under real clinical conditions and paving the way for future large-population screening.

Wednesday, November 19 9:00 - 10:40 (Asia/Bahrain)

S4-C: Robot Vision; Human Computer Interaction; Motion detection

Chairs: Ghadeer I Khalil, Sahar Ismail
9:00 Robotic System for Electronic Component Identification Based on Visual Object Localization
Nguyet Giap, Hung Vu Van, Nam Nguyen Nhat, Tan Nghia Duong and Dzung Nguyen Tien

This paper presents the design and implementation of an automated robotic system for classifying electronic components based on object localization. The system integrates a SCARA robotic arm (IR-S4 series), a vision camera, and a programmable logic controller (PLC) to replace manual labor in electronic manufacturing lines. The proposed solution addresses three main challenges: hardware integration, real-time communication among devices, and robot programming for accurate component classification. The system operates with three coordinated robotic arms, each responsible for sorting specific modules from mixed input pallets. A camera system detects the coordinates of components and transmits them to the robots, which perform pick-and-place operations with vacuum tools. The control architecture is implemented using InoRobotLab and IRCB500 controllers, ensuring stable, synchronous operation. Experimental results demonstrate that the system achieves a classification accuracy of over 97.8%, with average processing time per component under 2.5 seconds. Communication between the PLC, robots, and camera modules remained stable throughout continuous testing cycles. The system successfully classified over 500 components per hour with negligible error rate, proving its practical applicability in industrial settings. Future improvements include expanding classification to a broader range of component types, enhancing vision accuracy through AI-based object detection, and developing adaptive learning mechanisms for improved autonomy. These enhancements will further increase system flexibility and deployment potential in Industry 4.0 environments.

9:20 Omnidirectional Recognition Of Medical Emergency Situations Through Body Gesture Using 360 Degree Camera
Cherry Joy L. Escoña, Anne Jeline R. Lazaga, Ma. Aubrey D. Mensote, Lea Grace C Obtinalla, Sherwin Vasquez and Roselito E. Tolentino

This paper presents an omnidirectional system for recognizing medical emergency situations through body gestures. The research addresses the significant limitations of previous systems, which often rely on traditional cameras with a restricted field of view. The system uses a 360-degree camera for omnidirectional recognition of body gestures. The researchers employed MediaPipe for real-time pose estimation and a Random Forest classifier trained on five emergency gestures: headache, cough, chest pain, vomiting, and fall. The system achieved a gesture recognition reliability of 95.37% across multiple angles and distances, and a combined recognition-response success rate of 96.2% in robot-assisted medicine dispensing and alert triggering. These results demonstrate the effectiveness of omnidirectional vision in enabling robust, angle-independent gesture recognition suitable for medical monitoring applications.

9:40 Reimagining Knee Rehabilitation: A Conceptual Breakthrough in Personalised Physiotherapy
Samir Morad, Kunal Dinesh Keswani and Saeed Sharif

This article presents an innovative concept for a next-generation knee rehabilitation brace designed to transform traditional physiotherapy practices. Rooted in the integration of cutting-edge technology with proven therapeutic principles, this device aims to accelerate recovery, enhance treatment precision, and deliver personalized care. Unlike conventional methods that often lack adaptability and demand prolonged recovery periods, this novel design addresses these limitations by offering a smarter, more efficient solution for both patients and physiotherapists. While the brace remains at the conceptual stage, its potential to improve clinical outcomes, reduce rehabilitation time, and alleviate pressure on healthcare systems is substantial. This forward-thinking approach marks a promising shift in the future of knee rehabilitation, where individualized, tech-enabled care could soon become the standard for physiotherapists and patients.

10:00 Sense of Blindness: A Fully Non-Visual VR Experience to Foster Empathy Toward Urban Blindness
Lorenzo Appice, Pietro De Giglio, Alessandro Pagano, Veronica Rossano and Francesca Pia Travisani

In recent years, Virtual Reality (VR) has emerged as a powerful medium for simulating complex human experiences and promoting empathy through immersive learning. This paper presents Sense of Blindness, a VR-based serious game designed to raise awareness about the everyday challenges faced by blind individuals in urban environments. Unlike traditional empathy-oriented experiences, our system deliberately removes all visual input during gameplay, relying exclusively on spatial audio and haptic feedback to simulate non-visual navigation. The user interacts with the environment through a virtual white cane, receiving directional and obstacle-related cues via audio and controller vibrations. The game unfolds within a realistic urban setting inspired by an Italian pedestrian street and includes a structured sequence of obstacles, crosswalks, and auditory landmarks. The design prioritizes accessibility and user experience, providing both visual and auditory menu systems and a progressive audio-based tutorial. A preliminary evaluation involving 24 sighted participants was conducted using the Game Experience Questionnaire (Gamex). Results indicate a significant increase in user-reported empathy, awareness of visual impairment, and engagement. Participants particularly appreciated the immersive quality of spatial audio, which was rated as the most impactful sensory component. The study contributes to the field of Human-Computer Interaction (HCI) by exploring the design space of fully non-visual VR experiences and by demonstrating the potential of sensory-only interaction models to foster inclusion and reflection. Future developments include expanding the scenario catalog, longitudinal impact analysis, and application in educational and public contexts.

10:20 A 3D Camera-Based Controller Using Vector Mathematics for 2-DOF Forearm Angle Tracking
Johnnur P Alcasid, Shaina Rove C. Almira, Mariella E. Buenaventura, Regine B. Icawat, Kiana Mae Y. Lumagui and Roselito E. Tolentino

The human forearm provides a natural and intuitive source for controlling robotic systems yet accurately tracking its two-degree-of-freedom motion without physical instrumentation presents a significant challenge. This paper presents a non-intrusive algorithm that mathematically models forearm motion using skeletal joint data. Skeletal tracking, facilitated by a 3D depth camera, acquires the spatial coordinates of key body joints in real-time. The Orbbec Astra Pro Plus camera was selected for its high-precision depth sensing, enabled by structured light technology, making it ideal for accurate joint detection. Tracked elbow and wrist joints are used to construct a forearm vector, and vector analysis via the dot product is applied to compute its pitch and yaw angular displacements. The accuracy of these computed angles is validated against measurements from a wearable sensor module (MPU6050 and HMC5883L). The algorithm demonstrated high precision, achieving an average absolute angle difference error of 2.14° for pitch and 1.66° for yaw movements. This confirms the algorithm's effectiveness for high-fidelity control and suggests promising applications in defense, advanced robotics, and enhancing motion capture systems in robotics.
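
A hedged sketch of the dot-product angle computation described above, assuming a camera frame with x to the right, y up, and z forward (the paper's exact axis convention is not stated):

```python
# Sketch: derive pitch and yaw of the forearm vector from elbow/wrist joints.
# The axis convention and sign handling below are illustrative assumptions.
import numpy as np

def forearm_angles(elbow, wrist):
    v = np.asarray(wrist, float) - np.asarray(elbow, float)   # forearm vector
    v = v / np.linalg.norm(v)
    horiz = np.array([v[0], 0.0, v[2]])                       # projection onto the x-z plane
    horiz = horiz / np.linalg.norm(horiz)
    # Pitch: angle between the forearm and its horizontal projection.
    pitch = np.degrees(np.arccos(np.clip(np.dot(v, horiz), -1.0, 1.0)))
    # Yaw: angle between the horizontal projection and the camera's forward axis.
    forward = np.array([0.0, 0.0, 1.0])
    yaw = np.degrees(np.arccos(np.clip(np.dot(horiz, forward), -1.0, 1.0)))
    return np.copysign(pitch, v[1]), np.copysign(yaw, v[0])

print(forearm_angles(elbow=(0.0, 0.0, 1.0), wrist=(0.1, 0.2, 1.3)))
```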

Wednesday, November 19 9:00 - 10:40 (Asia/Bahrain)

S4-D: Deep Learning; Machine Learning

Chairs: Sankaranarayanan Suresh, Bashayer Hussain Ahmed, Bh
9:00 Enhancing Glioblastoma Survival Forecasting via Ensemble Learning of Clinical and Genomic Features
Ichraq El Hachimy, Nabil Benamar and Nabil Hajji

Brain cancer is the uncontrolled growth of malignant cells in the central nervous system, and the most malignant primary brain tumors are gliomas. Glioblastoma (GBM), a World Health Organization (WHO) grade IV astrocytoma, represents the most malignant histologic grade and accounts for approximately 54% of all gliomas, with an incidence rate of 3.19 per 100,000 population per year. GBM diagnosis requires a multidisciplinary approach involving clinical evaluation and neuroimaging together with histopathology and molecular profiling. Due to the aggressiveness and heterogeneity of GBM, accurate survival prediction remains challenging. In this work, we constructed a stacked ensemble learning model comprising Ridge Regression, Lasso Regression, XGBoost, and RandomForestRegressor and applied it to survival outcome prediction in glioblastoma. The model achieved a Concordance Index (C-index) of 0.73 on the test data set and ranked patients well according to survival time. By integrating linear models that handle multicollinearity and feature selection well with tree-based models that capture non-linear relations and interactions, the ensemble leverages the complementary strengths of its members to generalize better and overfit less. This performance is on par with usual single-model methods, even given the high-dimensional and highly complex clinical-genomic interactions involved in glioblastoma data. Clinically, a C-index above 0.70 suggests the model's potential value in stratifying patients and predicting prognosis, supporting individualized treatment planning and informed clinical decisions. Future work will include additional omics data, hyperparameter optimization through cross-validation, and validation on independent datasets to confirm robustness and translatability.
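
A compact sketch of this kind of stacked ensemble and C-index evaluation, using scikit-learn, XGBoost, and lifelines on synthetic data; the features, survival times, and hyperparameters are placeholders rather than the study's setup:

```python
# Illustrative stacking of the four base regressors named above, scored with
# the concordance index; the data are synthetic placeholders.
import numpy as np
from lifelines.utils import concordance_index
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                     # clinical + genomic features (synthetic)
surv_months = np.exp(1.5 + 0.4 * X[:, 0] + rng.normal(scale=0.3, size=300)) * 6
event = rng.integers(0, 2, size=300)               # 1 = death observed, 0 = censored

X_tr, X_te, y_tr, y_te, e_tr, e_te = train_test_split(X, surv_months, event, random_state=0)

stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("lasso", Lasso(alpha=0.01)),
                ("xgb", XGBRegressor(n_estimators=200, max_depth=3)),
                ("rf", RandomForestRegressor(n_estimators=200))],
    final_estimator=LinearRegression(),
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("C-index:", round(concordance_index(y_te, pred, e_te), 3))
```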

9:20 Multi-Task Hybrid Deep Learning with Cross-Domain Alignment for Real-Time Sentiment Analysis in Clinical Decision Support Systems
Habibor R Rabby, Md Habibul Arif, Md Iftekhar Monzur Tanvir, Nusrat Yasmin Nadia, Sheikh Razia Sultana and Kamruddin Nur

Sentiment analysis in healthcare enables early detection of mental health risks, assessment of patient satisfaction, and support for clinical decision-making. However, existing approaches often address medical sentiment classification and mental health status detection as separate tasks, limiting their ability to leverage shared linguistic cues and adapt across heterogeneous data sources. This paper presents a Multi-Task Hybrid Deep Learning with Cross-Domain Alignment framework for real-time sentiment analysis in clinical decision support systems. The proposed architecture integrates a shared transformer-based encoder, a domain alignment module, and task-specific classification heads to jointly model two complementary datasets: a multi-class mental health status corpus and a medical sentiment dataset. Experiments on both datasets demonstrate that the proposed model consistently outperforms state-of-the-art baselines in accuracy, macro-F1, and other evaluation metrics while maintaining low inference latency suitable for real-time deployment. These results highlight the potential of cross-domain multi-task learning to improve sentiment analysis performance in diverse healthcare contexts, enabling more responsive and context-aware clinical decision support.

9:40 Monotonic Learning: Guaranteed Improvement in Neural Network Training
Sa'eed Serwan Abdulsattar, Fayzeh Abdulkareem Jaber, Hani Ibraheem Al-Balasmeh, Rahmeh AbdulKareem Jaber, Ahmad Abdulkareem Jaber and Nour Abdulkareem Jaber, Ph

This paper presents Monotonic Neural Training, a novel approach to neural network optimization that eliminates performance regressions while improving model quality. Unlike conventional methods that accept all parameter updates, our method enforces strict improvement criteria, ensuring each step advances validation performance. Traditional training often suffers from unpredictable dynamics, with sudden deterioration requiring constant monitoring and manual intervention. Monotonic Neural Training addresses this via an "improve-or-reject" mechanism: updates are only committed if they demonstrate measurable improvement. The approach treats optimization as a quality-controlled process rather than blind descent. Candidate updates are validated, and failing updates trigger automatic retries with varied batch sampling and intelligent perturbations, forming a self-correcting training loop that guarantees monotonic progress. Experimental results on standard regression tasks show a 21% R² improvement (0.585 vs. 0.483) over conventional training. While computationally intensive, requiring 3,068 retries for 25 successful improvements, the method produces substantially higher-quality models with complete training stability. Its selectivity is striking: only 0.8% of updates are accepted, effectively rejecting 99.2% of potentially harmful updates. Beyond performance gains, Monotonic Neural Training navigates complex loss landscapes autonomously, adapting to training difficulties without human supervision. Even steps requiring over 1,100 retries ultimately succeed, demonstrating that patient, quality-focused optimization outperforms rapid but unstable approaches. Ultimately, the fastest path to superior models is not accelerating updates but ensuring that every update genuinely improves performance.
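
A minimal sketch of the improve-or-reject idea in PyTorch, with an illustrative model, data, and retry budget (the paper's perturbation heuristics are not reproduced):

```python
# Sketch: a candidate update is kept only if validation loss improves;
# otherwise parameters are restored and a new batch is retried.
import copy
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

X, y = torch.randn(512, 10), torch.randn(512, 1)
X_val, y_val = torch.randn(128, 10), torch.randn(128, 1)

def val_loss():
    with torch.no_grad():
        return loss_fn(model(X_val), y_val).item()

best = val_loss()
for step in range(50):
    accepted = False
    for retry in range(20):                        # retry budget per step (assumed)
        snapshot = copy.deepcopy(model.state_dict())
        idx = torch.randint(0, len(X), (64,))      # fresh batch on every attempt
        opt.zero_grad()
        loss_fn(model(X[idx]), y[idx]).backward()
        opt.step()
        new = val_loss()
        if new < best:                             # commit only on improvement
            best, accepted = new, True
            break
        model.load_state_dict(snapshot)            # reject: roll back the update
    if not accepted:
        break
print("final validation loss:", round(best, 4))
```

In this simplified form the only retry mechanism is resampling the mini-batch; the paper additionally describes intelligent perturbations, which are omitted here.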

10:00 AI Copilots for Machine Learning Pipelines: A Comparative Study on Code Quality and Model Performance
Nikos Flamourtzoglou, Eleni Vrochidou and George A Papakostas

Artificial Intelligence (AI) continues to reshape the field of software development, among others, and its integration into coding workflows opens up both opportunities and critical questions. The rise of AI-powered coding assistants has introduced a paradigm shift in the way developers approach programming tasks. This work investigates the efficacy and quality of AI-powered coding in the development of end-to-end machine learning pipelines. More specifically, a comparative study of Amazon Q, GitHub Copilot, Windsurf and Gemini is conducted by developing two machine learning pipelines, for customer churn prediction and Titanic survival prediction. Each assistant was prompted with the same instructions to ensure a fair evaluation, and their outputs were assessed using static and dynamic code analysis tools, such as Pylint, Radon, SonarQube and MLflow. The evaluation focuses on code quality, model performance, feature importance, explainability and fairness. These findings provide practical insights into the coding assistants' strengths and limitations, offering guidance for developers and researchers in selecting the appropriate tool for machine learning tasks.

10:20 AI-Powered Brain Tumor Detection Using Convolutional Neural Networks
Saeed Sharif, Madhav Raj Theeng Tamang and Pardha Saradhi Chakolthi

Timely and precise detection of brain tumours plays a vital role in enhancing patient care. However, conventional diagnostic techniques that rely on manual interpretation of MRI scans are often slow and susceptible to human mistakes. This research focuses on building an intelligent system for automated brain tumour recognition through the use of Convolutional Neural Networks (CNNs). The proposed method utilizes a pre-trained AlexNet model, which was refined using MATLAB's Deep Learning Toolbox and tested on a publicly accessible brain tumour MRI dataset from Kaggle. To ensure consistency and boost the model's generalization capability, image preprocessing steps such as resizing, normalization, and data augmentation were applied. The dataset was split into 70% for training and 30% for testing. The modified model reached a validation accuracy of 86.3%, along with high F1-scores across different tumour types, reflecting stable and dependable classification results. Performance was also benchmarked against other architectures like VGG16 and ResNet50, with AlexNet demonstrating superior computational efficiency while preserving accuracy. This work underlines the potential of CNN-based approaches in clinical diagnostics and lays the groundwork for advancements in model transparency, broader datasets, and clinical implementation.

Wednesday, November 19 9:00 - 10:40 (Asia/Bahrain)

S4-E: e-Government; e-Health; e-Business

Chairs: Ali H Zolait, Ali Tarhini
9:00 AI for Employee Wellness: A Dual-Framework Approach to E-Health Transformation
Deepika Pandita, Alisha Verma, Pranav Mishra and Shalini Rastogi

This study explores the integration of AI-powered solutions in e-health to address rising healthcare demands and the limitations of traditional support care systems. Using a mixed-methods approach, the research gathers qualitative and quantitative data from IT-sector employees to assess AI readiness, trust, and effectiveness. The findings reveal that while employees are open to AI-driven interventions, concerns regarding privacy and empathy persist. The study proposes a dual-framework model: the AI E-Health Implementation Roadmap and the AI-Enhanced E-Health Intervention Framework, offering a structured approach for organizations to adopt scalable, ethical, and personalized AI solutions. The research highlights AI's potential to provide real-time support, enhance employees' well-being, and promote proactive intervention. It offers practical recommendations for HR professionals, emphasizing the need for transparent governance and human-AI collaboration. This study contributes to the growing literature on AI adoption in digital e-health practices, providing a novel, data-driven model for e-health enhancement.

9:20 Exploring E-Governance for Sustainable Grazing Administration in Nigeria: Legal Viewpoint, Challenges and Prospects
Clement Chidi-Ife Chigbo, Joseph Ochayi Sunday, Udosen Jacob Idem, Ebenezer Tunde Yebisi and Abiodun Ayodele Ojo

The persistent crisis between herders and farmers in Nigeria has underscored the urgent need for sustainable grazing management system solutions. Despite various policy efforts, challenges remain due to poor implementation and a lack of effective coordination. This study explores the potential of E-Governance as a tool to enhance grazing management sustainably through legal frameworks. The study adopts a doctrinal and policy analysis approach, examining relevant Nigerian laws, digital governance strategies, and international best practices. This work reveals that while Nigeria's E-Governance infrastructure is evolving, its application in grazing management remains limited, particularly since there is no specific law to address the intersection of grazing and agricultural sectors. The absence of a single specific legal framework linking E-Governance to grazing limits its implementation. A technique for reforming legislative regimes to improve stakeholder roles and ICT artefacts for successful E-Governance for sustainable grazing management in Nigeria was proposed, in line with worldwide best practices.

9:40 Blood Resource Availability and Compatibility Locator for Real Time Management System
Clark Raven S Dalauta, Ian Emmanuele Generalao, Disimy P Simenahan, Jayrhom R. Almonteros, Jenie L. Plender-Nabas and Jesterlyn Q Timosan

Access to blood availability is essential during a medical emergency; the faster a patient finds sufficient blood resources, the more lives can be saved. The current practice of finding available blood and donors relies on hospitals and blood banks such as the Red Cross; however, no inventory system is available to the public, which still requires contacting or visiting the nearest office. This study developed a Blood Resource Availability and Compatibility Locator and Management System that aims for better coordination and real-time checking of available resources, with map integration to save time in finding appropriate blood resources. The system underwent a System Usability Scale (SUS) test with one hundred (100) respondents, receiving a total score of 80.53, which translates to Good on the acceptability scale. Key findings include 87.5% ease of navigation and 75.85% donor engagement improvement. Challenges such as real-time update reliability (64.4%) highlight areas for refinement. The project demonstrates the potential of technology to optimize healthcare responsiveness and save lives through improved blood donation processes and compatibility location.

10:00 Real-Time Climatic Suitability Modelling for Ixodes hexagonus Using Presence-Only Data in Urban UK Environments
Saeed Sharif, Pardha Saradhi Chakolthi, Samantha Lansdell and Sally Cutler

Ticks are becoming increasingly recognized as disease vectors in urban environments due to their interactions with climate change, urban green spaces, and urban wildlife. With a notable presence in residential and peri-urban environments, Ixodes hexagonus is however understudied and underrepresented in ecological modelling literature. This work aims to develop a real-time tick risk prediction system for I. hexagonus with a One-Class Support Vector Machine (OC-SVM) model trained on presence-only data. Climate data (temperature, humidity, dew point, and precipitation) was sourced in real-time from the Visual Crossing Weather API and combined with occurrence data from the NBN Atlas of the United Kingdom. The software architecture is built around microservices with a React frontend, a Spring Boot backend, and a Python FastAPI microservice for machine learning (ML) inference. An OC-SVM model was chosen for its adaptability to ecological presence-only datasets and was benchmarked against three other unsupervised machine learning models: Isolation Forest, Local Outlier Factor, and Kernel Density Estimation. Models were evaluated across three main criteria: seasonal simulations, spatial variation over 8 UK cities, and climatic threshold validation. OC-SVM outperformed alternative models in terms of biological realism, seasonality, and reactions to biologically plausible climatic challenges such as decreased organism activity below 5°C and during precipitation. A new spatial proximity analysis was performed between I. hexagonus and the European hedgehog (Erinaceus europaeus) occurrence records as a corollary. 96.1% of tick observations were within 2 km of hedgehog records. These results not only corroborate ecological assumptions about host association, but also offer new insights to support future data incorporation of host information into predictive models. Public health agencies and ecologists can derive actionable insights from the system through real-time tick risk predictions specific to location. The work shows how interpretable ML can be used to improve urban vector monitoring and how a flexible software framework can be applied to multiple species and regions
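
A hedged sketch of a presence-only One-Class SVM over the four climate variables listed above; the synthetic presence records and nu/gamma settings are assumptions, not the deployed configuration:

```python
# Sketch: presence-only One-Class SVM over climate features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Columns: temperature (°C), relative humidity (%), dew point (°C), precipitation (mm)
presence = np.column_stack([rng.normal(12, 3, 200),
                            rng.normal(85, 5, 200),
                            rng.normal(9, 2, 200),
                            rng.exponential(0.2, 200)])   # synthetic presence records

model = make_pipeline(StandardScaler(),
                      OneClassSVM(kernel="rbf", nu=0.1, gamma="scale"))
model.fit(presence)

queries = np.array([[14.0, 80.0, 10.5, 0.0],      # mild and humid: plausibly suitable
                    [2.0, 60.0, -1.0, 5.0]])      # cold and wet: likely unsuitable
print(model.decision_function(queries))           # higher score = more "presence-like"
```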

10:20 A Theoretical Model Explaining the Impact on Satisfaction, Engagement, and Repurchase Intention in Modern Retailing
Do Bui Xuan Cuong and Bui Thanh Khoa

This study aims to test a theoretical model that explains how an omnichannel customer experience can improve satisfaction, engagement, and repurchase intention in modern retail. Using a quantitative research approach with a survey of 533 customers, the data were analyzed with Partial Least Squares Structural Equation Modeling (PLS-SEM). The findings confirm that all proposed hypotheses were substantiated with a high degree of statistical significance, demonstrating a strong model fit. The results show that omnichannel customer experience positively affects satisfaction (SAT), engagement (CE), and repurchase intention (RI). Additionally, SAT and CE both positively influence RI and act as partial mediators between the omnichannel experience and repurchase behavior. These findings emphasize the importance of creating seamless experiences across all channels to encourage customer loyalty and repeat purchases. The study strengthens the theoretical foundation of the Stimulus-Organism-Response (SOR) framework and offers practical insights for firms undergoing digital transformation

Wednesday, November 19 9:00 - 10:40 (Asia/Bahrain)

S4-F: Machine Learning and Smart Applications

Chair: Maizatul Akmar Ismail
9:00 Comparative Evaluation of Machine Learning And Deep Learning Classifiers Used For Exploring Action Recognition In Basketball Videos
Aliga Paul Aliga, Kingsley Eghonghon Ukhurebor and Joshua Nehinbe

Action recognition in realistic basketball videos continues to be a significant challenge in computer vision, especially when it involves imbalanced datasets and multimodal inputs. This study proposes a comprehensive benchmarking framework that systematically evaluates the performance of classical machine learning algorithms, notably Logistic Regression, Random Forest, Decision Tree, Gradient Boosting, and Support Vector Machines (SVM), and deep learning models (i.e., Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), CNN-LSTM hybrids, and Temporal Convolutional Networks (TCN)) on a binary action recognition task. The task involves classifying 80 basketball video clips into "play" (unsuccessful) or "win" (scoring) actions. We used the Python programming language to implement the models and incorporated stratified k-fold cross-validation into the architecture to ensure robust performance estimation. The evaluation metrics include accuracy, weighted F1-score, confusion matrices, and ROC curves. Results reveal considerable variation in model behavior under class imbalance, arising from more "win" than "play" clips. For instance, classical machine learning models (i.e., Decision Tree, Gradient Boosting, and SVM) achieved accuracies of 70%, 62.5%, and 60-62%, respectively. Deep learning models (i.e., CNN, LSTM, and TCN) achieved accuracies between 60% and 65%, while Random Forest showed more modest performance (50-57%). Notably, several models frequently failed to predict the minority class (play), leading to undefined precision and highlighting the inadequacy of traditional evaluation metrics in skewed classification settings. Despite similar aggregate accuracies, the models exhibited divergent strengths in recognizing underrepresented classes, emphasizing trade-offs between overall performance and class-specific recall. This work contributes a reproducible evaluation framework for action recognition tasks and provides critical insights into the sensitivity of different classifiers to class imbalance, a crucial factor often overlooked in comparative studies, specifically in basketball sports video analytics.

9:20 YOLOv11-Based Early Warning System for Violence and Weapon Detection
Muhammad Syarif Januriansyah and Febriliyan Samopa

The increasing occurrence of physical violence and weapon misuse in public spaces has created a demand for automatic detection and early warning systems. Advances in deep learning have enabled surveillance systems to process video streams in real time, providing opportunities for early warning mechanisms. This study presents a YOLOv11-based early warning system that integrates with a PyQt5 graphical user interface and instant notifications via Telegram API. The dataset used in this study was constructed from Kaggle, Roboflow Universe, and manually collected samples, covering nine classes that include physical violence, firearms, melee weapons, and traditional Indonesian weapons such as celurit and keris. The system processes images and video streams to identify violent actions and multiple weapon types, delivering alerts to designated users within 1-2 seconds. Experimental results on a test dataset achieved an average precision of 90.1%, recall of 81.5%, and mAP of 87.9%, with stable real-time performance at 35 FPS across different input formats. In addition to controlled testing, the system was also validated using CCTV footage to assess its applicability in practical surveillance conditions. These findings highlight the potential of combining object detection, a desktop GUI, and real-time communication into a unified tool for security monitoring. The proposed system demonstrates strong potential for practical surveillance applications, though further validation under diverse real-world conditions would enhance its robustness.

9:40 Implementation of The YOLOv11 and MobileNetV3 in a Real-Time System for Detecting and Estimating the Gender and Age of Visitors
Muhammad Farid Hakim, Faisal Rahutomo and Joko Hariyono

Understanding visitor demographics in real-time is crucial for optimizing services and strategic decision-making in various sectors. However, existing data collection methods are often manual, inefficient, and limited in scope. This study proposes a fully automated system for real-time visitor detection and demographic analysis (gender and age) by leveraging existing CCTV infrastructure. The system employs a two-stage deep learning pipeline integrating YOLOv11 for efficient detection of visitors and faces, followed by a fine-tuned MobileNetV3-Small model trained on the FairFace dataset for simultaneous gender classification and age group estimation. The YOLOv11n-face model achieved an mAP@50-95 of 0.55 for face detection, while the MobileNetV3 model reached 86.6% accuracy for gender classification and 55.8% for age estimation. Integrated system tests demonstrated real-time operation at ~6.5 FPS on limited hardware, with a counting error rate of 9.52% due to occlusion. These results validate the feasibility and efficiency of combining YOLOv11 and MobileNetV3 for real-time visitor demographic analytics.
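
A rough sketch of the two-stage idea with stand-in components: a generic ultralytics YOLO checkpoint in place of the fine-tuned face detector, and an ImageNet-pretrained MobileNetV3 backbone whose classifier would be replaced by the paper's FairFace-trained gender and age heads:

```python
# Sketch of the detect-then-classify pipeline. The weight file name
# ("yolo11n.pt", assuming a recent ultralytics release), the image path, and
# the ImageNet MobileNetV3 backbone are placeholders, not the trained system.
import cv2
import torch
from torchvision import models, transforms
from ultralytics import YOLO

detector = YOLO("yolo11n.pt")                       # generic detector, not the face model
backbone = models.mobilenet_v3_small(weights="IMAGENET1K_V1").eval()
prep = transforms.Compose([transforms.ToPILImage(), transforms.Resize((224, 224)),
                           transforms.ToTensor()])

frame = cv2.imread("cctv_frame.jpg")                # hypothetical CCTV frame
for box in detector(frame)[0].boxes.xyxy.int().tolist():
    x1, y1, x2, y2 = box
    crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        feats = backbone(prep(crop).unsqueeze(0))   # replace classifier with gender/age heads
    print("crop", box, "feature vector shape", feats.shape)
```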

10:00 Comparative Analysis of Transformer-Based Embedding Models for Clinical Trial Retrieval
Sarthak Srivastava, Arti Devi, Neha Kriti, Shashank Uttrani, Rajeshwari S. Punekar, Krishna Pudari, Shruti Kaushik and Varun Dutt

Identifying similar clinical trials is vital for improving research efficiency and patient access to appropriate treatments. Traditional keyword-based search methods are limited in capturing complex semantic relationships between trials. This study explores the effectiveness of two transformer-based embedding models, all-mpnet-base-v2 (from Sentence Transformers) and text-embedding-3-small (from OpenAI), in retrieving similar clinical trials. A dataset of 2,222 obesity-related trials was used to evaluate their performance through semantic similarity search and exclusion-based filtering using GPT-4. The models were compared based on Euclidean distance, cosine similarity, and statistical significance testing. The results show that OpenAI's model produced more compact embeddings, with a lower average Euclidean distance of 0.84, while the Sentence Transformer model captured broader variation, with an average Euclidean distance of 1.07. This research highlights the potential of embedding-based approaches for improving clinical trial retrieval and offers valuable insights into model behavior, with core implications for future AI-driven healthcare tools.
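
A minimal sketch of similarity search with the first of the two models, via the sentence-transformers library; the trial texts are invented examples, not records from the obesity dataset:

```python
# Sketch: semantic similarity search with all-mpnet-base-v2.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")

trials = [
    "A randomized trial of semaglutide for weight management in adults with obesity.",
    "Behavioral lifestyle intervention for adolescent obesity.",
    "Phase II study of a checkpoint inhibitor in metastatic melanoma.",
]
query = "GLP-1 receptor agonist trial for adult obesity"

trial_emb = model.encode(trials, convert_to_tensor=True, normalize_embeddings=True)
query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

scores = util.cos_sim(query_emb, trial_emb)[0]
for trial, score in sorted(zip(trials, scores.tolist()), key=lambda t: -t[1]):
    print(round(score, 3), trial)   # most semantically similar trials first
```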

10:20 AERIS: Interpretable Stacked Ensemble with IoT Sensing for Proactive Humidity Prediction in Medical Logistics
Tushar Vasudev, Surbhi Ranga, Praveen Kumar, Sahil Sankhyan, Venkata Uday Kala and Varun Dutt

Accurate environmental condition prediction is essential for the safe transportation of temperature- and humidity-sensitive medical products. In this work, we introduce Adaptive Ensemble Regression with Integrated Sensing (AERIS), an interpretable real-time humidity prediction model for proactive medical cold-chain logistics. AERIS is a heterogeneous ensemble of base models, Random Forest, Support Vector Regression, XGBoost, k-Nearest Neighbors, Multi-Layer Perceptron, and deep learning models (LSTM, GRU, CNN), combined by a Lasso regression meta-learner that promotes sparsity and interpretability. AERIS achieved 2.112% RMSE and a 0.906 R² score on 10-minute-ahead humidity prediction on a real-world dataset of 2,865 IoT sensor readings from 17 medical transportation operations and outperformed all base models. Unlike standard black-box ensembles, AERIS enables model interpretability and selection, making it well suited for regulated, resource-constrained environments. These results make AERIS a scalable and resilient tool, enabling a shift from reactive monitoring to proactive environmental control in medical logistics.
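
A simplified sketch of a Lasso-stacked ensemble (with the deep LSTM/GRU/CNN members omitted), showing how the sparse meta-learner coefficients expose which base models contribute; data and hyperparameters are placeholders:

```python
# Sketch: heterogeneous base regressors combined by a sparse Lasso meta-learner.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                      # e.g. temperature, past humidity readings
y = 60 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=1.0, size=500)   # humidity (%)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100)),
                ("svr", SVR()), ("knn", KNeighborsRegressor())],
    final_estimator=Lasso(alpha=0.05),
)
stack.fit(X, y)
# Sparse meta-weights: near-zero entries indicate base models the ensemble ignores.
print(dict(zip(["rf", "svr", "knn"], stack.final_estimator_.coef_.round(3))))
```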

Wednesday, November 19 10:40 - 11:00 (Asia/Bahrain)

BD-3: Break Day-3

Wednesday, November 19 11:00 - 12:40 (Asia/Bahrain)

S5-A: Smart Cities-2

Chairs: Fatema Albalooshi, Gerald P Arada
11:00 Deploying and Scheduling Urban Roadside Unit Based on Road Network Dynamics
Elmer R Magsino, Gerald P Arada and Catherine Manuela Lee Ramos

Efficient delivery of geographical area information between nearby vehicles and infrastructures that is still relevant and up-to-date has now become a critical component of an Intelligent Transportation System (ITS). Characterizing road networks based on the high degree of vehicular mobility, density, and speed captures the dynamics of an urban road topology that can be utilized to describe network availability, connectivity, and data dissemination among vehicles and infrastructures. Since the positions and speeds of vehicles are instantaneously changing, deploying roadside units (RSUs) can facilitate real-time data storage, offloaded computations, and data exchange. In this work, we propose and evaluate an RSU deployment scheme based on clustering empirical mobility traces, irregularly partitioning the urban area using Voronoi diagrams, and implementing the set covering principle to locate the candidate RSU positions. We performed extensive simulations on three urban mobility datasets and analyzed the results obtained from area size variation, scalability, and coverage.
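
A hedged sketch of the deployment pipeline on synthetic traces: k-means clustering, Voronoi vertices as candidate RSU sites, and a greedy set cover under an assumed coverage radius (the paper's datasets and parameters are not reproduced):

```python
# Sketch: cluster mobility traces, derive candidate RSU sites from the Voronoi
# diagram of the cluster centres, then greedily cover demand points.
import numpy as np
from scipy.spatial import Voronoi
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
traces = rng.uniform(0, 10, size=(2000, 2))        # synthetic vehicle positions (km)

centres = KMeans(n_clusters=15, n_init=10, random_state=0).fit(traces).cluster_centers_
candidates = Voronoi(centres).vertices             # candidate RSU sites
radius = 1.0                                       # assumed RSU coverage radius (km)

uncovered = set(range(len(traces)))
chosen = []
while uncovered:
    # Pick the candidate covering the most still-uncovered demand points.
    gains = [(sum(1 for i in uncovered
                  if np.linalg.norm(traces[i] - c) <= radius), idx)
             for idx, c in enumerate(candidates)]
    best_gain, best_idx = max(gains)
    if best_gain == 0:
        break                                      # remaining points are unreachable
    chosen.append(best_idx)
    uncovered -= {i for i in uncovered
                  if np.linalg.norm(traces[i] - candidates[best_idx]) <= radius}
print("RSUs deployed:", len(chosen))
```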

11:20 SmartTransit PH: An Android App for Public Vehicle Navigation
Jenie L. Plender-Nabas, Jayrhom R. Almonteros, Jennierose C. Pascual, Neil Anthony B. Dayday and Jesterlyn Q Timosan

This study presents the development of SmartTransit PH, an Android-based mobile application designed to assist commuters in navigating public transportation routes within Butuan City, Philippines. The rapid urbanization and growing number of tourists in the city have highlighted the need for accessible and reliable transit information, particularly for individuals unfamiliar with local routes and modes of public transport. Leveraging the capabilities of GPS and Android's Network Location Provider, the application provides real-time, location-aware services that offer commuters essential information such as optimal public utility vehicle (PUV) routes, estimated travel time, fare costs, distance between locations, and the number of transfers required. Developed using the Android SDK in Eclipse IDE with Java, SmartTransit PH integrates an intuitive user interface to ensure ease of use, even for first-time visitors. The app also recommends alternative routes for flexibility and allows users to save trip details through screenshots for offline access, addressing connectivity limitations in certain areas. This research contributes to smart mobility solutions by offering a scalable, user-centric navigation tool tailored to enhance the commuting experience through public transport in emerging urban centers like Butuan City

11:40 The Effects of Dynamic Pricing in the Search for Parking Availability and Economics
Elmer R Magsino, Gerald P Arada and Catherine Manuela Lee Ramos

Parking fees play a crucial role in urban mobility, as they affect cruising time when searching for parking availability, walking time from the parking slot to the destination, and daily living expenses. In this study, we assess the economics of temporal dynamic parking pricing methods, Linear and Min-Max Rates, by incorporating empirical vehicular spatial coordinates and parking duration behavior. Utilizing urban car mobility movements as possible parkers, we investigate the behavior of drivers when selecting a parking space based on their allowance dictated by the Fixed Rate benchmark. We observed the space occupancy of chosen urban establishments and the revenues/losses they generated. Our findings show that parking pricing schemes are implemented to regulate the space occupancy of each establishment. Evaluating the Fixed, Linear, and Min-Max Rate pricing schemes for a given default parking duration, the Fixed Rate benchmark tends to be more expensive than Min-Max Rate pricing. The Linear Rate is always found to be the most expensive of the three for customers but the most profitable for commercial establishments.

12:00 A Vehicular Neural Network for Self-Driving Cars in the Philippine Setting
Rafael Dominic Lim Montaño and Gerald P Arada

The advancement of autonomous vehicles has been significantly propelled by the incorporation of deep learning, especially with vehicular neural networks (VNNs). Data from several sensors (such as lidar, radar, and cameras) is processed by these networks to facilitate activities like path planning, object detection, and decision-making. Autonomous driving systems generate an internal representation of their environment, and one of the many promises of their implementation is the reduction of road-related incidents and providing driver assistance in vehicle operation. This study developed an imaging neural network model accustomed to visual road safety classification, road type and marking identification, and vehicle detection for self-driving cars on Philippine roads. Using GoogLeNet, the simulation results have shown that the deep learning models achieved high validation accuracies of 90% in road safety and 96.43% in road identification. This study provided insights into the further development of self-driving technology and its possible implementation in developing countries such as the Philippines.

12:20 Smart Sustainable Technology to Enhance the Neighbourhood Experience: A Way Forward
Prateek Dhasmana, Niva Turna and Parminder Kaur

The needs of residents living in urban areas are always changing, and responding to these changing needs requires the development of Smart Neighbourhoods. The most sustainable neighbourhoods typically have good walkability, a sense of identity, social cohesion, stability, and resilience to economic and socio-political changes. This study explores the concept of smart neighbourhoods, analyzing efforts to define and provide implementation strategies for "smart, sustainable" communities in the context of urban Indian cities. A systematic review was conducted using three prominent databases: Emerald Insight, ScienceDirect, and Scopus. Moreover, the study also explored scholarly literature on neighbourhood planning as well as the role of smart technologies in the making and shaping of communities. For smart and sustainable cities to be successful, the data suggest that they will need infill and mixed-use development, well-designed streetscapes, traffic management, accessible open spaces, and, perhaps most importantly, the use of technology in urban services. The findings suggest that smart technologies should be applied in areas including asset and water management, emergency response systems, and crime prevention to create a healthier and safer environment for residents. The study concludes that smart urban spaces leverage data analysis for informed decision-making, anticipate and proactively address challenges, and optimize resource allocation. This ultimately improves the quality of neighbourhoods, promoting healthier living conditions and enhancing the overall well-being of residents.

Wednesday, November 19 11:00 - 12:40 (Asia/Bahrain)

S5-B: Cyber security-3

Chairs: Ali H Zolait, Nedal Ababneh
11:00 Real-Time Adaptive Camera-Attack Detection and Correction for Autonomous Vehicle Platoons in Cyber-Physical Systems via a MADRL Approach
Mohamed El jbari

In autonomous vehicle platooning, members of the platoon not only use their own sensor data for making driving decisions but also rely on data shared by other members of the platoon. This research proposes a novel framework for real-time detection and correction of camera-based attacks on platoon-based autonomous vehicles within Cyber-Physical Systems (CPS) using a Multi-Agent Deep Reinforcement Learning (MADRL) approach. Camera systems, critical for perception in autonomous vehicles, are vulnerable to adversarial attacks such as image manipulation, occlusion, or sensor jamming, which can disrupt platoon coordination and safety. The proposed system leverages MADRL to enable adaptive, decentralized decision-making among vehicles, ensuring robust attack detection and mitigation while maintaining platoon stability. By integrating real-time camera sensor data fusion, anomaly detection, and corrective action policies, the framework enhances resilience against cyber-attacks. Experimental results demonstrate improved detection accuracy, reduced response time, and enhanced platoon performance under various attack scenarios compared to traditional methods.

11:20 DevSecOps in Practice: A Systematic Review of Challenges, Best Practices and Tools
Osama Erfaei Mohamed, Yusuf Mothanna and Saeed Sharif

Integrating security into agile software delivery pipelines is crucial to address the evolving landscape of cybersecurity threats. DevSecOps extends the DevOps model by embedding security at every stage of the software development lifecycle. However, adopting DevSecOps in practice presents significant challenges due to organizational and technical obstacles. This study aims to synthesize current knowledge on the challenges faced by practitioners during DevSecOps adoption, alongside the best practices and tools that support successful implementation. The study also identifies areas for future research in this rapidly developing field. We conducted a systematic literature review (SLR) of 15 peer-reviewed papers published between 2019 and 2025, focusing on the challenges, best practices, and tools associated with DevSecOps implementation. The review highlights key challenges, including cultural resistance, limited security expertise, and difficulties in toolchain integration. In response, best practices such as shift-left security, automation, and cross-functional collaboration are identified as key enablers. Furthermore, emerging tools, including AI-driven security scanners and policy-as-code frameworks, are shown to offer promising solutions for scalable security integration. This study contributes to the field by synthesizing empirical findings, categorizing actionable best practices and tools, and pinpointing gaps for further research. The insights aim to support industry teams in aligning their DevSecOps strategies with organizational needs while providing a foundation for future academic exploration in this domain.

11:40 A Multi-Stage Hybrid AI Framework for Early Detection of Ransomware in Smart Hospital IoT Networks
Farhan Shakil, Md Nahian Suhaimee, Md Omar Faruq, Md Rifat Al Amin Khan, Mumtahina Ahmed and Kamruddin Nur

The increasing integration of Internet of Things (IoT) devices into healthcare infrastructures has enabled continuous patient monitoring and automated clinical workflows but has also expanded the attack surface for cyber threats. Ransomware attacks targeting smart hospital IoT networks pose severe risks to patient safety by disrupting critical services and encrypting sensitive medical data. This paper proposes a multistage hybrid AI framework for early ransomware detection in IoT-based healthcare environments, combining an unsupervised Variational Autoencoder (VAE) to model benign traffic patterns and detect anomalies with a supervised CNN-LSTM classifier to accurately distinguish ransomware from normal network flows by leveraging both spatial and temporal dependencies. Evaluated on the IoT Healthcare Security Dataset, which simulates an ICU environment with patient monitoring and environmental sensors, the proposed framework achieved 98.73% accuracy, 98.28% F1-score, and 99.12% AUC, outperforming both traditional machine learning and single-stage deep learning baselines. Early detection experiments further show that the system can identify ransomware with 88.57% detection accuracy after observing only 10% of malicious packets, maintaining minimal false positives and an average inference latency of just 5.2 ms. These results demonstrate the framework's robustness, real-time suitability, and potential for deployment in smart hospital networks to protect critical healthcare services against evolving ransomware threats.

12:00 Forensic Timeline Construction for the Burner Application on an Android Device
Hudan Studiawan, Mohammad Kamal Nawaf, Radhiyan Muhammad Hisan and Bintang Nuralamsyah

Communication applications are frequently examined in mobile forensic investigations. However, several applications, such as Burner, are designed specifically for anonymous communication. These applications are not natively supported in forensic timeline tools such as Plaso. This study presents the development of a custom Plaso SQLite plugin that parses communication data from the Burner app database on an Android device. The plugin extracts communication metadata such as numbers and timestamps and integrates them into forensic timelines. The proposed plugin was evaluated on an Android 13 forensic image and successfully contributed additional events to the timeline.
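
For illustration only, the following Python sketch shows the general idea behind such a parser: querying a messaging table in the app's SQLite database and turning each row into a timestamped event record. The table and column names ("messages", "number", "date_created", "direction") and the millisecond timestamp format are assumptions, and the sketch does not use the actual Plaso plugin API described in the paper.

    # Illustrative sketch (not the authors' Plaso plugin): extract message
    # metadata from a hypothetical Burner SQLite database and emit simple
    # timeline events. Table/column names are assumptions for demonstration.
    import sqlite3
    from datetime import datetime, timezone

    def extract_events(db_path):
        conn = sqlite3.connect(db_path)
        cursor = conn.execute(
            "SELECT number, date_created, direction FROM messages "
            "ORDER BY date_created"
        )
        events = []
        for number, ts_ms, direction in cursor:
            # Assume the app stores timestamps as milliseconds since the Unix epoch.
            when = datetime.fromtimestamp(ts_ms / 1000.0, tz=timezone.utc)
            events.append({
                "timestamp": when.isoformat(),
                "source": "Burner",
                "event": f"{direction} message with {number}",
            })
        conn.close()
        return events

    if __name__ == "__main__":
        for event in extract_events("burner.db"):
            print(event["timestamp"], event["event"])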

12:20 Cross-Border Data Transfer and Cyber Risk: A Comparative Legal Exposure Framework for FinTech Transaction Security
Md Omar Faruq, Md Rifat Al Amin Khan, Farhan Shakil, Md Nahian Suhaimee, Mumtahina Ahmed and Kamruddin Nur

Cross-border financial transactions introduce complex cybersecurity risks due to inconsistent legal frameworks and regulatory enforcement across jurisdictions. Traditional fraud detection models often overlook this legal dimension, focusing instead on behavioral or transactional features. This paper proposes a novel deep learning-based framework that integrates a Legal Exposure Index (LEI), derived from global data breach records, into a supervised fraud detection pipeline. Using a synthetic credit card transaction dataset enriched with simulated country identifiers, we classify transactions as either domestic or cross-border and embed jurisdiction-specific LEI values into the model's feature space. Experimental results show that the proposed method outperforms Logistic Regression, Random Forest, XGBoost, and a standard deep neural network, achieving an overall F1 score of 0.817 and AUROC of 0.912, with particularly strong gains in cross-border scenarios (F1 score of 0.782). Additionally, a country-pair fraud risk matrix reveals patterns of elevated cyber risk in weakly regulated corridors. This work highlights the importance of incorporating legal context into FinTech cybersecurity models and provides a scalable foundation for compliance-aware fraud detection in global financial systems.
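
As a rough illustration of the feature-engineering idea (not the paper's pipeline or dataset), the sketch below attaches a per-country Legal Exposure Index to each transaction, flags cross-border flows, and trains a simple classifier. The LEI values, column names, and the choice of a gradient boosting model are assumptions made for brevity.

    # Illustrative sketch: jurisdiction-aware fraud features. LEI scores and
    # the DataFrame schema (amount, origin_country, dest_country, is_fraud)
    # are hypothetical.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical per-country legal exposure scores (higher = weaker protection).
    LEI = {"US": 0.2, "DE": 0.1, "KY": 0.8, "PA": 0.7}

    def add_legal_features(df):
        df = df.copy()
        df["cross_border"] = (df["origin_country"] != df["dest_country"]).astype(int)
        # Unknown jurisdictions fall back to a neutral score of 0.5.
        df["origin_lei"] = df["origin_country"].map(LEI).fillna(0.5)
        df["dest_lei"] = df["dest_country"].map(LEI).fillna(0.5)
        return df

    def train(df):
        df = add_legal_features(df)
        features = ["amount", "cross_border", "origin_lei", "dest_lei"]
        X_train, X_test, y_train, y_test = train_test_split(
            df[features], df["is_fraud"], test_size=0.3,
            stratify=df["is_fraud"], random_state=0)
        model = GradientBoostingClassifier().fit(X_train, y_train)
        proba = model.predict_proba(X_test)[:, 1]
        print("F1:", f1_score(y_test, proba > 0.5),
              "AUROC:", roc_auc_score(y_test, proba))
        return model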

Wednesday, November 19 11:00 - 12:40 (Asia/Bahrain)

S5-C: Deep Learning; Knowledge Representation; Pattern Recognition

Chairs: Alauddin Yousif Al-Omary, Ramzi A. Haraty
11:00 Comparative Analysis of Spectral and Bispectral Analyses for Obstructive Sleep Apnea Severity Classification
Ali Mohammad Alqudah and Zahra Moussavi

Obstructive Sleep Apnea (OSA) is a prevalent sleep disorder characterized by recurrent episodes of upper airway obstruction during sleep. Accurate and timely diagnosis, along with severity classification, is crucial for effective management and preventing associated health complications. This paper presents a comparative analysis of power spectrum and bispectrum features extracted from physiological signals for the classification of OSA severity. Utilizing a comprehensive dataset and a robust 5-fold stratified cross-validation approach, we evaluate the discriminative power of these features through statistical tests (t-test, ranksum), effect size (Cohen's d), and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) analysis. Furthermore, top-N feature selection and correlation heatmaps are employed to identify the most salient and non-redundant features. Our findings indicate that while power spectral features provide valuable insights into signal energy distribution, bispectral features, which capture non-linear phase coupling and quadratic phase coupling, offer complementary and, in some cases, superior information for distinguishing between different OSA severity levels. This research highlights the potential of higher-order spectral analysis in enhancing the accuracy and robustness of OSA severity classification, paving the way for more effective diagnostic tools.
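
The kind of per-feature screening described above can be sketched as follows; the data here are synthetic placeholders standing in for one spectral or bispectral feature measured in two severity groups, and the statistics mirror the tests named in the abstract (Cohen's d, rank-sum, single-feature ROC-AUC).

    # Illustrative sketch of discriminative-power screening for one feature.
    import numpy as np
    from scipy.stats import ranksums
    from sklearn.metrics import roc_auc_score

    def cohens_d(a, b):
        pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                          (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / pooled

    rng = np.random.default_rng(0)
    mild = rng.normal(1.0, 0.3, 60)      # feature values, mild OSA group (synthetic)
    severe = rng.normal(1.4, 0.3, 60)    # feature values, severe OSA group (synthetic)

    labels = np.r_[np.zeros(60), np.ones(60)]
    values = np.r_[mild, severe]
    print("Cohen's d:", cohens_d(severe, mild))
    print("rank-sum p-value:", ranksums(severe, mild).pvalue)
    print("single-feature ROC-AUC:", roc_auc_score(labels, values))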

11:20 Analyzing Topic Dynamics of Islamophobia on X Around Specific Incidents
Aqsa Iftikhar

This study investigates the temporal evolution of online discourse related to Islamophobia on X (formerly Twitter), with a focus on how significant real-world events shape the themes and structure of public discussions. Leveraging a curated dataset of tweets collected between late March and early April 2023, the research employs advanced topic modeling techniques, including BERTopic, to analyze thematic patterns before and after a documented incident that prompted heightened online engagement. The methodology combines rigorous data preprocessing, event identification, and comparative analysis of topic distributions, supported by keyword salience and network visualization. Results reveal a marked shift from broad, general discussions to incident-specific narratives and hashtag activism following the event. Temporal analysis demonstrates a pronounced increase in tweet frequency immediately after the incident, while network analysis highlights the centrality of event-related hashtags in post-incident discussions. These findings underscore the responsiveness of online discourse to external stimuli and provide empirical insights for understanding the dynamics of public opinion and the spread of hate speech on social media platforms. The study contributes methodological guidance for the application of topic modeling to time-sensitive social media data and offers recommendations for future research in digital discourse analysis.
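
A minimal sketch of the before/after comparison with BERTopic is shown below. The tweet lists are placeholders, and the model parameters are assumptions; the study's actual preprocessing, event window, and dataset are not reproduced here.

    # Illustrative sketch: fit BERTopic separately on pre- and post-event tweets
    # and print the top words of the largest topics. Replace the placeholder
    # lists with real, preprocessed tweet text.
    from bertopic import BERTopic

    pre_event_tweets = ["..."]   # tweets collected before the incident (placeholder)
    post_event_tweets = ["..."]  # tweets collected after the incident (placeholder)

    def top_topics(docs, label):
        model = BERTopic(min_topic_size=10)
        model.fit_transform(docs)
        info = model.get_topic_info().head(5)      # five largest topics
        print(label)
        for topic_id in info["Topic"]:
            if topic_id == -1:                     # skip the outlier topic
                continue
            words = [w for w, _ in model.get_topic(topic_id)[:5]]
            print("  topic", topic_id, ":", ", ".join(words))

    top_topics(pre_event_tweets, "Before incident:")
    top_topics(post_event_tweets, "After incident:")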

11:40 Alzheimer's Disease Analysis Using Machine Learning Approach
Abdul Basat, Athirah Mohd Ramly, Pardha Saradhi Chakolthi and Saeed Sharif

Alzheimer's disease is a progressive neurological disorder marked by memory loss, cognitive decline, and functional impairment, which can hinder daily activities and independence. Early diagnosis of Alzheimer's disease is crucial for timely intervention and improved outcomes for patients. In this paper, several supervised ML algorithms are employed for Alzheimer's disease stage classification using a clinical dataset of 2,149 records that includes cognitive, behavioral, and lifestyle variables. The dataset undergoes preprocessing through normalization, encoding, and feature selection methods. The models are trained, evaluated, and the results are reported. Logistic Regression, Random Forest, and XGBoost models are trained, their performances are compared using accuracy, precision, recall, F1-score, and ROC-AUC metrics, and XGBoost was found to have the best accuracy at 96.05%, closely followed by Random Forest. A feature correlation matrix is calculated to assess the multicollinearity and redundancy of the dataset features. Furthermore, a confusion matrix is used to evaluate the models' performance on class-specific predictions. In addition to prediction accuracy, the models' explainability is also explored using SHAP and LIME techniques. These methods reveal the impact of specific features such as MMSE scores, memory loss symptoms, and lifestyle habits on the models' decisions. This information can help provide insights to clinical decision-makers and build a bridge between black-box models and clinical practice. This study showed that high-performing and interpretable ML models can be used to build an automatic or assistive system for Alzheimer's disease diagnosis and integrate it into healthcare. The paper discusses its contributions to the field of explainable artificial intelligence (XAI).
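
As a generic illustration of the XGBoost-plus-SHAP workflow (using synthetic data rather than the clinical dataset), the sketch below trains a classifier and summarizes per-feature contributions with a tree explainer; hyperparameters and feature names are assumptions.

    # Illustrative sketch: XGBoost classification with SHAP-based explainability.
    import numpy as np
    import shap
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))      # synthetic stand-ins for MMSE, age, lifestyle scores
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
    model.fit(X_train, y_train)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    # Mean absolute SHAP value per feature gives a simple global importance ranking.
    print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))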

12:00 Hybrid Vision-Logic Model for Fiber Optic Risk-level Detection
Alissa Velia Royhatul Jannah and Shintami Chusnul Hidayati

The reliability of fiber optic networks is critical to maintaining high-speed Internet services in today's digital society. However, these networks are frequently exposed to external risks that can lead to severe service disruptions and financial losses if not promptly addressed. Traditional monitoring methods rely heavily on manual inspections and internal signal analysis, which are often time-consuming, lack real-time responsiveness, and may fail to detect temporary or external threats.

This paper proposes a hybrid risk-level classification framework that integrates Convolutional Neural Networks (CNNs) with a rule-based reasoning layer to enhance external risk detection for fiber optic cables. The approach leverages visual data collected from routine patrol inspections labeled into three risk categories: low, medium, and high. A CNN model was optimized to classify visual risk indicators. To further improve prediction reliability and interpretability, the model's outputs were refined through expert-defined IF-THEN rules that incorporate historical inspection data and domain knowledge.

Experimental results demonstrate that the CNN achieved high accuracy in detecting potential risk factors, while the integration of rule-based reasoning improved context-awareness and decision consistency. Expert evaluations confirmed that the hybrid approach outperformed CNN-only predictions in both accuracy and practical reliability.
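
To make the hybrid idea concrete, the sketch below shows one way a rule layer could refine a CNN's predicted risk level; the IF-THEN rules, thresholds, and inputs (prior findings, proximity to construction) are invented for demonstration and are not the experts' actual rules.

    # Illustrative sketch: rule-based refinement of CNN risk predictions.
    LEVELS = ["low", "medium", "high"]

    def refine(cnn_probs, prior_high_risk_findings, near_construction):
        """cnn_probs: dict like {"low": 0.2, "medium": 0.6, "high": 0.2}."""
        level = max(cnn_probs, key=cnn_probs.get)
        # Rule 1: escalate one level if the segment had recent high-risk findings.
        if prior_high_risk_findings and level != "high":
            level = LEVELS[LEVELS.index(level) + 1]
        # Rule 2: never report "low" for segments near active construction.
        if near_construction and level == "low":
            level = "medium"
        return level

    print(refine({"low": 0.5, "medium": 0.3, "high": 0.2},
                 prior_high_risk_findings=True, near_construction=False))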

12:20 Directly Processing Raw ECG Signal Classification Using 1D CNN with Noise-Robust
Omar Al-Abdali and Abubaker Elbayoudi

Cardiovascular diseases, particularly arrhythmias, are a leading cause of global mortality, necessitating accurate diagnostic tools. This study addressed the classification of five types of cardiac arrhythmias using a one-dimensional convolutional neural network (1D CNN) that directly processes raw electrocardiogram (ECG) signals, thereby eliminating the need for noise removal and enhancing the model's accuracy and suitability for real clinical environments. Data from the MITDB and INCART databases were combined to create a more diverse and balanced dataset, utilizing the SMOTE-Tomek technique to address class imbalance, particularly for the underrepresented APB class. The model relied on short signal segments (only 121 samples) to reduce computational load while maintaining classification accuracy, achieving an accuracy of up to 96.9% after applying SMOTE-Tomek, with a notable improvement in the performance of the less balanced classes. The results demonstrate the effectiveness of combining databases and resampling techniques in improving generalization and reducing model bias, making the proposed approach well-suited for real-world applications in diagnosing cardiac arrhythmias.
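
A minimal sketch of the resampling-plus-1D-CNN setup is given below, using synthetic 121-sample segments rather than MITDB/INCART data; the network architecture and training settings are assumptions, not the authors' configuration.

    # Illustrative sketch: SMOTE-Tomek rebalancing followed by a small 1D CNN
    # on raw (un-denoised) 121-sample segments with 5 classes.
    import numpy as np
    from imblearn.combine import SMOTETomek
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 121))                                  # synthetic raw segments
    y = rng.choice(5, size=1000, p=[0.5, 0.2, 0.15, 0.1, 0.05])       # imbalanced labels

    X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
    X_res = X_res[..., np.newaxis]                                    # (samples, 121, 1)

    model = keras.Sequential([
        keras.layers.Conv1D(32, 5, activation="relu", input_shape=(121, 1)),
        keras.layers.MaxPooling1D(2),
        keras.layers.Conv1D(64, 5, activation="relu"),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_res, y_res, epochs=5, batch_size=64, verbose=0)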

Wednesday, November 19 11:00 - 12:40 (Asia/Bahrain)

S5-D: Deep Learning; Image Processing

11:00 Detection and Classification of Diabetic Retinopathy Using EfficientNetB0: A Lightweight Model With Transfer Learning Approach
Saiful Islam Oni, Meheraj Hasan, Arpon Paul Amit, Shahnaj Parvin, Farzana Bente Alam and Kamruddin Nur

Diabetic retinopathy (DR) is a progressive ocular disorder caused by chronic hyperglycemia, leading to retinal vascular damage and eventual vision impairment. As a predominant cause of blindness among diabetic patients, early and precise detection is crucial, as symptoms often manifest only after irreversible vision loss occurs. To address this challenge, we propose a deep learning-based application, leveraging a customized EfficientNetB0 architecture, optimized through transfer learning and data augmentation. The APTOS dataset from Kaggle, initially imbalanced, was preprocessed using augmentation techniques to create a balanced dataset comprising 5,000 images across five DR severity grades. Further image enhancements, including erosion and histogram equalization, were applied to improve feature extraction. The model utilized transfer learning by fine-tuning EfficientNetB0 with ImageNet pretrained weights, while early stopping mechanisms were implemented to mitigate overfitting. The proposed model outperformed the state-of-the-art models with an accuracy of 96.00% and precision, recall, and F1-scores of 96.09%, 96.00%, and 95.98%, respectively.
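
A generic transfer-learning sketch in this style is shown below: EfficientNetB0 with ImageNet weights as a feature extractor plus a new five-class head and early stopping. The input size, head layers, and optimizer are assumptions rather than the paper's exact configuration.

    # Illustrative sketch: EfficientNetB0 transfer learning for 5 DR grades.
    from tensorflow import keras

    base = keras.applications.EfficientNetB0(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                      # freeze; optionally fine-tune later

    model = keras.Sequential([
        base,
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(5, activation="softmax"),   # five DR severity grades
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True)
    # model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])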

11:20 Early Alzheimer's Detection Using a Novel CNN Model with Class Imbalance Handling
Farah Shaheen and Mohab A Mangoud M

Alzheimer's is a brain disorder that impairs memory and cognition, disrupting daily life and relationships. Because of its significant impact on a person's well-being, and because neurodegeneration is not reversible, it is essential to develop a solution that allows medical professionals to diagnose patients easily. A customized CNN-driven deep learning model is developed in this paper to enable early detection and accurate classification of Alzheimer's disease (AD) from MRI scans. Since the MRI dataset is class imbalanced, the Synthetic Minority Oversampling Technique (SMOTE) is applied to rebalance the training distribution before model fitting. The dataset contains four diagnostic categories that reflect progressive impairment: Moderate, Mild, Very Mild, and Non-demented cases. The proposed framework was compared against widely used CNN baselines and demonstrated higher diagnostic accuracy and reliability in differentiating stages of dementia.

11:40 Action Recognition in Basketball Videos Using Deep Learning, Spatiotemporal Feature Extraction and Sequence Modeling
Aliga Paul Aliga, Kingsley Eghonghon Ukhurebor and Joshua Nehinbe

Basketball is a fast-paced sport characterized by several tactical actions like passes, dribbles, dunks, isolations, transitions and pick-and-rolls. Automatically recognizing these actions in basketball videos offers immense benefits for performance analysis, strategy development and training optimization. However, the current reliance of some coaches and players on manual annotation of basketball videos results in time-consuming and inefficient workflows that can cause inconsistencies and tactical oversights. This study addresses these limitations by proposing an advanced deep learning-based model that leverages spatiotemporal feature extraction and sequence modeling to recognize actions in 30 annotated basketball videos. We further perform a comparative evaluation of six deep learning classifiers (i.e. RNN, BiLSTM, BiGRU, TransformerRNN, CapsuleNet and a Softmax-based baseline) to ascertain their performance in recognizing six core basketball actions. A pre-trained R(2+1)D ResNet-18 is employed for feature extraction, with each model performing classification on the extracted features. Performance is assessed using classification accuracy. The results show that BiLSTM and BiGRU outperform other classifiers in capturing complex temporal dependencies inherent in basketball gameplay. CapsuleNet appears promising in concept but has the lowest performance, possibly due to its limitations in handling sequential basketball videos. This research highlights the practical potential of integrating advanced deep learning models into sports analytics platforms. By automating the recognition of tactical actions, these models reduce the need for exhaustive manual video labeling, offering coaches, analysts and teams a more efficient, objective and scalable solution for interpreting basketball game footage. Ultimately, this work bridges Artificial Intelligence with sports strategy, paving a novel way for more intelligent, data-driven decision-making in competitive basketball environments.
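
The sketch below illustrates the general pattern of pairing a pre-trained R(2+1)D ResNet-18 feature extractor with a BiLSTM classifier; clip shapes, hidden sizes, and the weights argument (which can differ across torchvision versions) are assumptions, not the paper's exact setup.

    # Illustrative sketch: clip features from R(2+1)D ResNet-18 fed to a BiLSTM head.
    import torch
    import torch.nn as nn
    from torchvision.models.video import r2plus1d_18

    backbone = r2plus1d_18(weights="DEFAULT")
    backbone.fc = nn.Identity()          # keep 512-d clip features, drop the classifier
    backbone.eval()

    class BiLSTMHead(nn.Module):
        def __init__(self, num_actions=6, feat_dim=512, hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * hidden, num_actions)

        def forward(self, clip_feats):       # (batch, clips_per_video, 512)
            out, _ = self.lstm(clip_feats)
            return self.fc(out[:, -1])       # classify from the final time step

    with torch.no_grad():
        clips = torch.randn(2, 4, 3, 16, 112, 112)          # 2 videos x 4 clips each
        feats = torch.stack([backbone(v) for v in clips])    # (2, 4, 512)
    logits = BiLSTMHead()(feats)
    print(logits.shape)                      # torch.Size([2, 6])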

12:00 ShrimpXNet: A Transfer Learning Framework for Shrimp Disease Classification with Augmented Regularization, Adversarial Training, and Explainable AI
Israk Hasan Jone, DM Rafiun Bin Masud, Promit Sarker, Sayed Fuad Al Labib Fuad, Nazmul Islam and Farhad Billah

Shrimp is one of the most widely consumed aquatic species globally, valued for both its nutritional content and economic importance. Shrimp farming represents a significant source of income in many regions; however, like other forms of aquaculture, it is severely impacted by disease outbreaks. These diseases pose a major challenge to sustainable shrimp production. To address this issue, automated disease classification methods can offer timely and accurate detection. This research proposes a deep learning-based approach for the automated classification of shrimp diseases. A dataset comprising 1,149 images across four disease classes was utilized. Six pretrained deep learning models, ResNet50, EfficientNet, DenseNet201, MobileNet, ConvNeXt-Tiny, and Xception, were deployed and evaluated for performance. The image backgrounds were removed, followed by standardized preprocessing through the Keras image pipeline. The Fast Gradient Sign Method (FGSM) was used to enhance model robustness through adversarial training, while advanced augmentation strategies, including CutMix and MixUp, were implemented to mitigate overfitting and improve generalization. To support interpretability and to visualize regions of model attention, post-hoc explanation methods such as Grad-CAM, Grad-CAM++, and XGrad-CAM were applied. Experimental results demonstrated that ConvNeXt-Tiny achieved the highest performance, attaining 96.88% accuracy on the test dataset. After 1000 iterations, the 99% confidence interval for the model is [0.953, 0.971].
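
For readers unfamiliar with FGSM, the sketch below shows the standard formulation in a generic Keras setting: perturb inputs along the sign of the input gradient and mix the adversarial examples into training. The epsilon value, loss, and model are placeholders, not the authors' training loop.

    # Illustrative FGSM sketch for adversarial training (generic, hypothetical model).
    import tensorflow as tf

    loss_fn = tf.keras.losses.CategoricalCrossentropy()

    def fgsm_examples(model, images, labels, epsilon=0.01):
        images = tf.convert_to_tensor(images)
        with tf.GradientTape() as tape:
            tape.watch(images)
            loss = loss_fn(labels, model(images, training=False))
        grad = tape.gradient(loss, images)
        adversarial = images + epsilon * tf.sign(grad)   # FGSM perturbation
        return tf.clip_by_value(adversarial, 0.0, 1.0)   # keep pixels in valid range

    # Usage inside a training step (sketch):
    # x_adv = fgsm_examples(model, x_batch, y_batch)
    # model.train_on_batch(tf.concat([x_batch, x_adv], 0),
    #                      tf.concat([y_batch, y_batch], 0))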

12:20 Unsupervised AI-Based Morphological Analysis of Kidney Tissue from CT Imaging
Adedolapo Abdulazeez Sokoya, Charlotte Maughan Jones, Saeed Sharif and Jessica Smart

This study introduces a pioneering framework for comprehensive, label-free analysis of kidney tissue using high-resolution Computed Tomography (CT) images. By integrating classical morphological and texture descriptors with deep feature embeddings, the proposed hybrid pipeline enables robust assessment of renal size, shape, density, and fluid distribution. A dataset of 592 CT slices underwent meticulous preprocessing, featuring ring artifact suppression and targeted region masking, to optimize tissue representation. Dimensionality reduction via Principal Component Analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE) preceded unsupervised clustering using K-Means and Gaussian Mixture Models (GMM). The optimal clustering performance (silhouette score: 0.73) was achieved with K=2 using handcrafted features, while convolutional neural network (CNN) embeddings revealed additional, previously unrecognized tissue patterns. To our knowledge, this is the first framework to synergistically employ classical and deep unsupervised learning techniques for renal CT analysis. The findings highlight its promise as a scalable tool for early detection of renal abnormalities and tissue-level phenotyping, without the need for labeled data.
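
The unsupervised clustering stage can be sketched generically as follows, with synthetic feature vectors standing in for the handcrafted descriptors; the number of PCA components and candidate cluster counts are assumptions.

    # Illustrative sketch: PCA reduction, K-Means clustering, silhouette scoring.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    features = rng.normal(size=(592, 40))     # synthetic shape/texture descriptors per slice

    X = StandardScaler().fit_transform(features)
    X_pca = PCA(n_components=10).fit_transform(X)

    for k in (2, 3, 4):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_pca)
        print(k, "clusters -> silhouette:", round(silhouette_score(X_pca, labels), 3))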

Wednesday, November 19 11:00 - 12:40 (Asia/Bahrain)

S5-E: Internet of Things

Chairs: Orlando Catuiran, Sankaranarayanan Suresh
11:00 Harnessing IoT and AI Chatbots in HRM: A Tech-Driven Approach to Employee Engagement and Well-Being
Deepika Pandita, Deepansh Dikshit, Gurkanwar ' and Himani Choudhary

The integration of Artificial Intelligence (AI) and the Internet of Things (IoT) is transforming employee engagement and well-being in modern workplaces, particularly in hybrid work settings. This study adopts a mixed-method approach, combining survey data from 39 employees across multiple industries with in-depth interviews with 14 employees to assess the effectiveness, challenges, and employee perceptions of AI-driven HR chatbots and IoT-based health monitoring systems. Findings indicate that AI chatbots improve administrative efficiency but face limitations in accuracy and handling sensitive HR matters, while IoT-enabled health tracking is met with both interest and privacy concerns. The study proposes the AI-IoT Driven Employee Engagement & Well-Being (AIDEW) Model to address these insights. This three-tier framework enhances digital HR accessibility, AI-powered well-being initiatives, and data-driven performance management. This model emphasises ethical AI deployment, bias mitigation, and privacy-first IoT adoption to create a responsive, employee-centric HR ecosystem. The study concludes that organisations must balance technological advancements with human interaction to foster a more inclusive, efficient, and well-being-oriented workplace.

11:20 Tropical Smart Intravenous (IV) Drip Monitoring and Regulation System for Resource-Constrained Medical Settings
Denesse Blasurca Packay, Nico Sebastian Elepaño Roque, Andy Wilson Tan, Rosche Myrhe T Wania and Gerald P Arada

This study details the development and evaluation of a tropicalized smart intravenous (IV) drip system, designed specifically for deployment in resource-constrained medical settings in the Philippines. The system incorporates a strain gauge-based load cell for real-time IV fluid volume and flow rate measurement, complemented by a servo-actuated pinch valve mechanism for flow modulation. A browser-based user interface, locally served by the ESP8266 microcontroller, enables platform-agnostic visualization, while time-series data is logged to a cloud-based InfluxDB backend for long-term archival and analysis. Environmental resilience is ensured through a custom-engineered 3D-printed polylactic acid (PLA) enclosure equipped with passive ventilation and conformal coating insulating internal components, thereby mitigating moisture and thermal stressors. The load sensor maintained an observed accuracy of ±2% during empirical testing conducted in a clinical ward, which was well within the ±5% error limit.
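
To illustrate how load-cell readings can be turned into the quantities the system reports, the short sketch below converts periodic weight samples into remaining fluid volume and an estimated flow rate; the 1 g per mL assumption, sampling interval, and empty-bag weight are placeholders, not values from the paper.

    # Illustrative sketch (not the deployed firmware): derive flow rate and
    # remaining volume from periodic load-cell weight readings.
    GRAMS_PER_ML = 1.0          # assumed fluid density
    SAMPLE_INTERVAL_S = 10.0    # assumed sampling period

    def flow_rate_ml_per_hr(prev_weight_g, curr_weight_g):
        delta_ml = (prev_weight_g - curr_weight_g) / GRAMS_PER_ML
        return max(delta_ml, 0.0) * 3600.0 / SAMPLE_INTERVAL_S

    def remaining_volume_ml(curr_weight_g, empty_bag_weight_g):
        return max(curr_weight_g - empty_bag_weight_g, 0.0) / GRAMS_PER_ML

    print(flow_rate_ml_per_hr(523.4, 523.1))     # ~108 mL/h at this interval
    print(remaining_volume_ml(523.1, 45.0))      # ~478 mL remaining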

11:40 Smart Home Technology Usage Among Aging and Midlife American Populations: A Systematic Literature Review
Srinivas Goud Thadakapally, Jr, Elyson A De La Cruz and James O. Webb, Jr.

The concept of Ageing-in-Place is slowly gaining popularity among seniors owing to the accelerating growth of ageing populations, paving the way for smart home technology adoption to enhance independence, comfort, and convenience, and to alleviate the burden of care. The main objective of this systematic literature review is to synthesize insights on smart home technology adoption from the literature. The literature forming the basis of this systematic review was retrieved from academic and research databases, including Google Scholar, PubMed, ScienceDirect, ACM Digital Library, SpringerLink and IEEE Xplore. Articles were selected from peer-reviewed journals (2020-2024) that focused on smart home technologies and their impact on older adults. The articles were scrutinized to ensure their credibility and eligibility, and utilized to extract important data and findings. A total of 11 articles were included in the synthesis of the systematic review after thorough screening, data extraction, review, and assessment phases. The review highlighted important knowledge and insights on smart home device adoption among older American citizens. SHTs are important tools in alleviating the burden of care among older populations, providing benefits such as enhanced convenience, independence, safety, and remote health monitoring. Index Terms: Internet of Things (IoT), smart home, smart home technologies (SHT), wireless sensor networks, virtual home assistant devices, sensors, assisted living, Ageing-in-Place, remote networks, data privacy, convenience, independence, health monitoring.

12:00 IoT-Enabled Assistive Technology for Patients with Disabilities: A Case Study in Oman
Hafsa Omer AL Ansari, Aayad AlHajj, Shatha Hamid Alkharusi, Khadija Alloughani and Ohood Albalushi

This research presents the development of an IoT-based assistive device designed to enhance communication capabilities and improve the quality of life for individuals with disabilities in Oman as a case study. The study adopts a multi-phase approach, including a comprehensive literature review, user needs analysis, device prototyping, and testing. A survey of over 80 participants, primarily caregivers, was conducted to identify key communication challenges faced by patients with disabilities. Based on these findings, we developed a patient-monitoring glove equipped with multiple sensors and powered by an Arduino Uno and ESP32 microcontroller. The glove integrates vital sign monitoring sensors (LM35 for body temperature, MAX30100 for heart rate, and ADXL335 for fall detection) along with five flex sensors to detect finger movements. Each movement corresponds to a predefined request (e.g., medication, ventilation, bathroom assistance, food, or discomfort alert). Data is transmitted via GPRS and displayed through multiple channels, including an LCD screen, a web dashboard on ThingSpeak, and a mobile application developed using MIT App Inventor. This IoT-driven solution provides an efficient communication bridge between patients and caregivers, offering real-time monitoring and timely support for elderly, paralyzed, and other physically disabled individuals.

12:20 Evaluating IoT-Enabled Waste Management Readiness in Kenyan Urban Slums: A Systematic Review Towards Achieving SDG 6 and 11
Treza Nyanchama Ogeto, Ms, Reny Nadlifatin and Apol Pribadi Subriadi

This study examines the potential of Internet of Things (IoT) technologies to improve waste management in Kenya's informal settlements. Despite the promise of smart bins, RFID, and low-power networks, their implementation in areas like Kibera and Mathare often falls short of addressing local challenges. Using a PRISMA-based systematic literature review of 34 peer-reviewed studies from 2019 to 2025, this research reveals key barriers such as unreliable infrastructure, digital exclusion, and policy gaps that hinder sustainable adoption. Addressing a gap in the existing literature, the study proposes a holistic evaluation framework integrating technical readiness, community trust, and policy alignment to assess the feasibility of IoT systems in slum contexts, drawing on the Technology Acceptance Model (TAM) and Diffusion of Innovations (DOI) theories. The approach aims to help policymakers, technologists, and urban planners design context-sensitive, inclusive solutions that advance Sustainable Development Goals 6 and 11.

Wednesday, November 19 12:40 - 13:00 (Asia/Bahrain)

CS: Closing Session