List papers
Seq | PDF Download | Title | Abstract | Keywords | DOI | Authors with affiliation and country |
---|---|---|---|---|---|---|
1 | 1570964473 | Real-Time Car Parking Detection with Deep Learning in Different Lighting Scenarios | This paper presents an intelligent parking system utilizing image processing and deep learning to address parking challenges amidst varying lighting conditions. The escalating number of vehicles on the road has increased the difficulty and time spent in finding available parking spaces, resulting in more traffic congestion in modern cities. To alleviate this issue, we propose an efficient real-time camera-based system capable of detecting open parking slots using deep learning methodologies. Initially, we introduce a simple parking detection technique utilizing image processing; however, it proves ineffective in dim lighting. Subsequently, we introduce our AI-powered system, trained on the COCO dataset using the YOLO deep learning object detection algorithm. This dataset is a large-scale collection of images annotated with object categories, bounding boxes, segmentation masks, and captions. We show that this solution accurately identifies available and occupied parking slots by detecting vehicles within the parking area. We propose a strategically positioned webcam that provides comprehensive coverage of the parking area, whose initial image serves as a reference for identifying all parking slots. During operation, the webcam records real-time video footage of the parking area, enabling continuous updates for an accurate count of free and occupied parking slots. The paper details the step-by-step implementation of the system and showcases the results achieved under diverse lighting conditions. In conclusion, this research demonstrates the system's effectiveness in mitigating parking challenges through the combination of image processing, deep learning, and real-time video analysis. Additionally, we highlight the potential for future research to further enhance this innovative system. | Smart Parking; Deep Learning; Image Signal Processing; Real-Time Video; Lighting Environment | http://dx.doi.org/10.12785/ijcds/1570964473 | Mohab A. Mangoud (UoB, Bahrain); Fatema Hasan Yusuf (University of Bahrain, Bahrain) |
2 | 1570990891 | NomadicBTS-2: A Network-in-a-Box with Software-Defined Radio and Web-Based App for Multiband Cellular Communication | The proliferation of mobile communication technologies has significantly contributed to the growth of emerging economies. However, a digital divide still exists in several remote and hard-to-reach places, owing to the high CAPital EXpenditure (CAPEX) and OPerating EXpenditure (OPEX) of mobile network operators. In this study, a cost-effective software-defined base station named NomadicBTS-2 is developed and prototyped based on open-source technologies and the Software-Defined Radio (SDR) paradigm. NomadicBTS-2 comprises a Universal Software Radio Peripheral (USRP) B200 as the Radio Frequency (RF) hardware front-end. The software backend comprises open-source software such as the USRP Hardware Driver (UHD) and services (i.e., OpenBTS, Asterisk, SIPAuthserve and SMQueue). In addition, we developed new software (named NomadicBTS WebApp) to configure and monitor the UHD and software services through a web-based Graphical User Interface (GUI). NomadicBTS-2 was tested using two mobile stations (MSs) for simplex and duplex communication while the network link quality parameters were evaluated to determine users' Quality of Experience (QoE). Experimental results showed that within a pico-cell, the link quality is sufficient for call routing and Short Messaging Services (SMSs) for user-to-user and network-to-user communication. The prototype provides a basis for a Network-in-a-Box that can be deployed for short-range communication in rural areas, hard-to-reach places, emergency situations, and IoT sensor networks, and to augment existing base stations to mitigate network congestion. It can also be a viable testbed in teaching and research laboratories to explore new frontiers in SDR, cognitive radio, and other wireless communication domains. | NomadicBTS-2; Software-Defined Radio; Multiband Cellular Communication; Network-in-a-Box | http://dx.doi.org/10.12785/ijcds/1570990891 | Emmanuel Adetiba (Covenant University, Ota, Nigeria); Petrina C. Uzoatuegwu and Ayodele Ifijeh (Covenant University, Nigeria); Abayomi Abdultaofeek (Mangosuthu University of Technology, South Africa); Obiseye O Obiyemi (Durban University of Technology, South Africa); Emmanuel O. Owolabi (Smartyou Integral PTY LTD, South Africa); Katleho Moloi and Surendra Thakur (Durban University of Technology, South Africa); Sibusiso Moyo (Stellenbosch University, South Africa) |
3 | 1570995462 | Leaf Disease Identification through Transfer Learning: Unveiling the Potential of a Deep Neural Network Model | Grape is one of the world's most crucial and widely consumed crops. The yield of grapes varies depending on the method of fertilisation; however, other factors also impact crop production and quality. One of the major elements affecting crop quality and output is leaf disease. Therefore, it is necessary to diagnose and classify diseases early. Grape production is affected by a variety of diseases, and identifying a disease earlier could reduce its impact on grapevines, resulting in higher crop output. There has been a lot of experimentation with new approaches to diagnosing and classifying diseases. This endeavour aims to assist farmers in accurately analysing and informing themselves about illnesses in their early stages. The Convolutional Neural Network (CNN) is a powerful tool for identifying and classifying the diseases of grapes. A dataset of 3297 photos of grape leaves affected by four distinct diseases, plus healthy leaves, was used to conduct the entire experiment using Python and Orange software. The procedure runs as follows: input photos are first pre-processed before the actual segmentation of the images begins; the images are then subjected to a second processing round using several CNN hyper-parameters; finally, the CNN analyses the images for details such as colour, texture, and edges. According to the results, the proposed model's predictions are 99.3% correct. | Deep Neural Network; Transfer Learning; Leaf Diseases; Classification; VGG; SqueezeNet | http://dx.doi.org/10.12785/ijcds/1570995462 | Naresh Kumar Trivedi (Chitkara University, India); Deden Witarsyah (Nusa Putra University, Indonesia); Raj Gaurang Tiwari and Vinay Gautam (Chitkara University, India); Alok Misra (Lovely Professional University, India) |
4 | 1570999111 | Detection of Soil Nitrogen Levels via Grayscale Conversion: A Full-Factorial Design of Experiment Approach | In smart agriculture, detecting the level of soil nitrogen is essential for assessing soil fertility and productivity, which correlate with crop yields and fertilizer recommendations. With the advancement of technology, such levels are easily identified using devices that capture images, which are affected by two factors: the tilting angle of the test tube and the lighting condition. The device produces images containing three color values, namely red, green, and blue. Using grayscale conversion, these values are converted into a single value for appropriate analysis (an illustrative conversion is sketched after this table). This study aims to determine the optimal combination of the factors to obtain a correct nitrogen reading using a full-factorial design of experiments with four replications designed in Minitab®. The results are analyzed using Analysis of Variance and show that the determined factors, tilting angle and lighting condition, as well as their interactions, are significant. The developed regression model explains 76.43% of the variability among factors. The optimal setting for the tilting angle is 90°, while the lighting condition should be indoors. Design of Experiments is a valuable statistical tool that promotes optimization and efficient experimentation by scientists and engineers. | Automation; Design of Experiments; Full-Factorial; Grayscale conversion; NPK level; Optimization | http://dx.doi.org/10.12785/ijcds/1570999111 | John Joshua F. Montañez (Bicol State College of Applied Sciences and Technology, Philippines) |
5 | 1570999256 | Leveraging ALBERT for Sentiment Classification of Long-Form ChatGPT Reviews on Twitter | Sentiment analysis of content created by users on social media sites reveals important information about public attitudes toward upcoming technologies. Researchers face challenges in understanding these impressions, which range from cursory evaluations to in-depth analyses. Focusing on detailed, long-form reviews exacerbates the difficulty of achieving accurate sentiment analysis. This research addresses the challenge of accurately analyzing sentiments in lengthy and unstructured social media texts, specifically focusing on ChatGPT reviews on Twitter. The study introduces advanced natural language processing (NLP) methodologies, including fine-tuning, Easy Data Augmentation (EDA), and Back Translation, to enhance the accuracy of sentiment analysis in such texts (two EDA operations are sketched after this table). The primary objective is to evaluate the effectiveness of the ALBERT transformer-based language model in sentiment classification. Results demonstrate that ALBERT, when augmented with EDA and Back Translation, achieves significant performance improvements, with 81% and 80.1% accuracy, respectively. This research contributes to sentiment analysis by showcasing the efficacy of the ALBERT model, especially when combined with data augmentation techniques like EDA and Back Translation. The findings highlight the model's capability to accurately gauge public sentiment towards ChatGPT in the complex landscape of lengthy and nuanced social media content. This advancement has implications for understanding public attitudes towards emerging technologies, with potential applications in various domains. | Sentiment Analysis; ALBERT; Natural Language Processing; ChatGPT; Long-Form Review | http://dx.doi.org/10.12785/ijcds/1570999256 | Wanda Safira and Benedictus Prabaswara (Bina Nusantara University, Indonesia); Andrea Stevens Karnyoto (Binus University & BDSRC, Indonesia); Bens Pardamean (Bina Nusantara University, Indonesia) |
6 | 1570999297 | Interconnected Stocks Examination for Predicting the Next Day's High on the Indonesian Stock Exchange | We observed many Indonesian stock market groups on WhatsApp/Telegram but did not find any stock prediction method that utilizes interconnectivity between stocks. In this paper, we examine interconnected stock dynamics on the IDX and use them to predict the next day's high. We employ a novel method called the "Connected Stocks + Rolling Window Method", which uses both the temporal dynamics of the stock market and the interconnectedness of IDX stocks (a sketch of such a feature matrix follows this table). We explored the characteristics of the interconnected stocks by implementing three machine learning algorithms - K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Random Forest (RF) - and found valuable insights. The experiments showed that several factors, including a balanced threshold model and an increased stock input size, helped model performance, while other factors, including window size, additional features, and using specific sectors as training data, did not. The results also showed that several stocks, like ANTM and ERAA, show signs of interconnectedness and are influenceable, while some, like KLBF, are hard to influence and show no sign of interconnectedness. This research contributes to a deeper understanding of stock market dynamics on the IDX, especially the characteristics of interconnected stocks. | Stock prediction; machine learning; support vector machine; random forest; Indonesian stock market | http://dx.doi.org/10.12785/ijcds/1570999297 | Andreas Werner Sihotang (Bina Nusantara University, Indonesia); Andrea Stevens Karnyoto (Binus University & BDSRC, Indonesia); Bens Pardamean (Bina Nusantara University, Indonesia) |
7 | 1570999725 | Development of LoRa Multipoint Network Integrated with MQTT-SN Protocol for Microclimate Data Logging in UB Forest | The UB Forest, located on the slopes of Mount Arjuno, is a significant educational and research area with rich agricultural lands and diverse plant species. Traditional methods of microclimate data collection in this area have relied on manual sensor inspection by local farmers. This study introduces a novel approach by integrating Internet of Things (IoT) technology, particularly Long Range (LoRa) communication, to overcome the limitations of conventional WiFi networks in remote data access. The implementation uses ESP32 modules for data transmission and reception, focusing on establishing a LoRa network compatible with the Message Queuing Telemetry Transport for Sensor Networks (MQTT-SN) protocol, which enhances data exchange efficiency and reliability. The system is engineered to transmit 11 distinct microclimate data parameters every two minutes from two nodes. Preliminary testing reveals a maximum transmission range of 300 meters. However, the data loss rate is significant, averaging 50%, although it reduces to 15% at a distance of 100 meters. Signal strength is strongest at -94 dBm at 100 meters and weakens to -121 dBm at 300 meters. These results, while promising, fall short of the LoRa Alliance's expected performance metrics, which suggest effective operational distances of up to 2 km under optimal conditions. This research demonstrates the potential and challenges of integrating IoT and LoRa technology in agricultural and environmental monitoring. The findings underscore the need for further optimization to achieve the range and reliability required for effective remote monitoring in rural and forested environments. This study sets a foundation for future enhancements in sensor network design and deployment strategies, aiming to improve data accuracy and accessibility for agricultural and environmental research. | Internet of Things (IoT); LoRa; MQTT-SN; ESP32; Microclimate data | http://dx.doi.org/10.12785/ijcds/1570999725 | Heru Nurwarsito and Mohammad Ali Syaugi Alkaf (University of Brawijaya, Indonesia) |
8 | 1570999881 | A Protected Data Transfer through Audio Signals by Quantization combined with Blowfish Encryption: The Genetic Algorithm Approach | Information is a very potent asset. Data is commonly transferred over the internet, and everyone is responsible for ensuring its security. Data loss, data manipulation, and theft of confidential information are all consequences of security incidents. Information security is a set of practices and protocols that help to secure this information, and such sensitive data can be secured using a variety of methods. Information security includes two important subfields: cryptography and steganography. With the help of cryptography and steganography, information is altered into an unintelligible state and kept secret, respectively. The purpose of this paper is to preserve impenetrability and improve invulnerability. Its objectives are realized through an enhancement of Blowfish, which is regarded as a highly secure algorithm, and the implementation of a chaotic sequence-quantization method for audio samples. The proposed work's performance is contrasted with that of the existing Blowfish method and the standard audio LSB algorithm. The analysis of the work is demonstrated using the following criteria: entropy values, avalanche effect, attack scenario, execution time, PSNR value, embedding capacity, structural similarity index, etc. The suggested system is the most effective method for intensifying protection and preserving the high quality of the original entity. | Symmetric Encryption; Quantization; Blowfish; Avalanche Effect; Entropy Value; Attack Scenario | http://dx.doi.org/10.12785/ijcds/1570999881 | Rashmi P Shirole (Visvesvaraya Technological University, Belgaum & NMAMIT, Nitte, India); Shivamurthy G (VTURC, India) |
9 | 1571000399 | Incorporating Transfer Learning Strategy for improving Semantic Segmentation of Epizootic Ulcerative Syndrome Disease Using Deep Learning Model | Automated fish disease detection can eliminate the need for manual labor and provide earlier detection of fish diseases such as EUS (Epizootic Ulcerative Syndrome) before they spread further through the water. One of the problems faced when implementing a semantic segmentation fish disease detection system is the limited size of semantic segmentation datasets. On the other hand, classification datasets for fish disease detection are more common and available in larger sizes, but they cannot be used directly in segmentation tasks since they lack the necessary labels. In this paper, we propose a training strategy based on transfer learning that learns from both ImageNet and a classification dataset before being trained on the segmentation dataset. Specifically, we first train the ImageNet pre-trained VGG16 and ResNet50 on a classification task, then we transfer the weights into semantic segmentation architectures such as U-Net and SegNet, and finally train the segmentation network on a segmentation task. We introduce two different modified U-Net architectures to allow the respective pre-trained VGG16 and ResNet50 weights to be transferred into the architecture. We used a classification dataset containing 304 images of fish diseases for the classification task and a segmentation dataset containing 25 images of EUS-affected fish for the segmentation task. The proposed training strategy is then compared with alternative training strategies such as training VGG16 and ResNet50 on ImageNet alone or the classification dataset alone. When applied to SegNet and U-Net, the proposed training strategy surpasses the respective architectures trained on ImageNet or the classification dataset alone. Among these architectures and all compared training strategies, the U-Net+VGG16 architecture trained with our proposed strategy achieves the best performance, with validation and testing mIoU of 57.80% and 60.43%, respectively. The training code is available at https://github.com/RealOfficialTurf/FishDiseaseSegmentation. | Fish Disease Detection; Semantic Segmentation; Transfer Learning; U-Net Model; SegNet Model | http://dx.doi.org/10.12785/ijcds/1571000399 | Anbang Anbang and I Gede Putra Kusuma Negara (Bina Nusantara University, Indonesia) |
10 | 1571000835 | Strengthening Android Malware Detection: from Machine Learning to Deep Learning | In the modern era, Android malware continues to escalate, and the challenges associated with it are growing at an unprecedented rate. This rapid growth in Android malware infections points to an alarming and swift rise in their prevalence, signaling cause for concern. Traditional anti-malware systems, reliant on signature-based detection, prove inadequate in addressing the expanding scope of newly developed malware. Various strategies have been introduced to counter the escalating threat in the Android mobile field, with many leaning towards machine learning (ML) models limited by a constrained set of features. This paper introduces a novel approach employing a deep learning (DL) framework that incorporates a significant number of diverse features. The proposed framework applies Deep Neural Network (DNN) techniques to the static OmniDroid dataset, comprising 25,999 features extracted from 22,000 Android Package Kits (APKs). Of these, 16,380 features are meticulously selected for analysis, encompassing permissions, opcodes, Application Programming Interface (API) calls, system commands, activities, and services. Additionally, the data is partitioned feature-wise and subjected to feature selection on each feature set to ensure equitable consideration of all features. A comparative analysis is presented by comparing the framework's accuracy with the accuracies produced by existing ML models. The presented framework demonstrates notable enhancements in detection accuracy, achieving 89.04% accuracy, attributed to the incorporation of a substantial number of features. | Android malware; malware detection; deep learning; artificial neural network; feature selection; machine learning | http://dx.doi.org/10.12785/ijcds/1571000835 | Diptimayee Sahu, Satya Narayan Tripathy and Sisira Kumar Kapat (Berhampur University, India) |
11 | 1571006545 | AI-based Intelligent Window System for Hospitals in GCC Countries | Climate change stands as a formidable challenge on a global scale, manifesting through alterations in weather patterns and regional ecosystems, with the Gulf region, in particular, facing pronounced shifts due to its distinct climate and high vulnerability to such changes. These environmental shifts have far-reaching effects, not least on the operational dynamics and internal conditions of hospitals, establishments pivotal to the delivery of critical healthcare services. Recognizing the urgent need to address these climatic impacts within healthcare settings, this paper introduces a cutting-edge solution: an artificial intelligence (AI)-powered intelligent window system specifically designed to enhance the hospital environment by mitigating the adverse effects of climate change. These smart windows are engineered to process and react to real-time weather data alongside a variety of relevant environmental inputs, enabling them to dynamically modify their functional properties, ranging from filtering capabilities to ventilation mechanisms. This adaptive functionality aims to maintain or improve indoor environmental quality, ensuring that it remains conducive to patient care and staff well-being. Beyond mere environmental control, the system is innovatively tailored to integrate patient-specific health information and preferences, allowing for the customization of key environmental parameters such as lighting levels, ambient temperature, and air quality. This level of personalization is intended to foster an atmosphere that not only promotes healing and comfort but also significantly enhances the patient experience by supporting the overall recovery process. Through this comprehensive approach, our proposed intelligent window system aspires to bridge the gap between technological innovation and healthcare service enhancement, offering a proactive response to the challenges posed by climate change in the healthcare sector. | Intelligent Windows; Air Quality; Artificial Intelligence; Hospital; Sensors | http://dx.doi.org/10.12785/ijcds/1571006545 | Ahmed Jedidi (Ahlia University, Saudi Arabia) |
12 | 1571010950 | Early Autism Spectrum Disorder Screening in Toddlers: A Comprehensive Stacked Machine Learning Approach | In this paper, we introduce a study that addresses the critical need for early detection of Autism Spectrum Disorder (ASD) in toddlers. ASD is characterized by its profound impact on early childhood development, emphasizing the urgency of identifying it as early as possible. To achieve this, the study employs a diverse set of base models, including Logistic Regression, KNN, Decision Trees (DT), Support Vector Machines (SVM), and Neural Networks (NN), among others, as part of its methodology. One key aspect of the methodology is the meticulous execution of feature selection using these models; the focus is on identifying the top four features that are most indicative of ASD for subsequent training. By leveraging various machine learning algorithms, the study aims to develop accurate predictive models for early ASD detection. The results of the study are promising, with the models achieving high levels of accuracy. The models with the highest accuracy are identified, and a stacking technique is systematically applied, combining the strengths of different classifiers to further enhance performance. The most significant finding of the study is the exceptional accuracy rate of 99.148% achieved by the proposed approach. This high accuracy underscores the efficacy of the methodology in early ASD detection. By accurately identifying ASD in toddlers at an early stage, the study demonstrates the potential for timely intervention and support for affected children, ultimately improving their long-term outcomes and quality of life. | Machine learning; Preference algorithm; Stacking; Feature selection; Classification; Confusion Matrix | http://dx.doi.org/10.12785/ijcds/1571010950 | Anupam Das and Prasant Kumar Pattanaik (Kalinga Institute of Industrial Technology, India); Suchetan Mukherjee and Sapthak Mohajon Turjya (KIIT, India); Anjan Bandyopadhyay (Kalinga Institute of Industrial Technology, India) |
13 | 1571012424 | Adopting Complex Networks to Detect Cheat Cases in Electronic Exams | For electronic education, it is sometimes crucial to rely on certain solutions even though they have more negative than positive results. One of the most sensitive areas in remote studies is the morals and honesty of students, precisely when they perform online tests or exams. This study suggests a monitoring system for detecting cheating in electronic exams based on the distributed geo-information of students' devices and the integration of complex network analysis. The investigation was conducted in a class with an equal gender distribution: 34 female and 34 male students attended. The number of e-learning and e-test sessions varied for every student. Some students had only e-test sessions rather than e-learning sessions; these students were removed in order to obtain a typical distribution of honesty scores. Following the computation of each student's honesty percentage, the results were distributed according to the total number of students. The findings indicate that, when considering the differences in honesty scores between the two genders, remote e-tests perform better with female students than with male students. There are several possible explanations for this, one of which is the social structure of the students: in Middle Eastern cultures, it is common knowledge that men enjoy greater freedom and space than women. This had an impact on the ability of male students to congregate in one place, as this study demonstrated when the physical locations of IP addresses were compared. It was discovered that many students showed anomalies in their Electronic-Study-Profile when taking the e-test while showing similarities with the E-Test-Profile. Compared to the male students, female students also showed anomalies in their E-Test-Profiles. | Electronic learning; electronic exams; COVID-19; Networks | http://dx.doi.org/10.12785/ijcds/1571012424 | Mahmood Alfathe (Ninevah University, Iraq); Ali Othman Al Janaby (Ninevah University, Iraq & Electronics Engineering, Iraq); Azhar Abdulaziz (Ninevah University, Iraq); Manar Kashmoola (Mosul University, Iraq) |
14 | 1571012545 | Verbal Question and Answer System for Early Childhood Using Dense Neural Network Method | Question answering is a well-known task in Natural Language Processing (NLP). This capability is very suitable for use in kindergarten learning activities to help train social interaction. The challenge in this research is that the developed system must be able to understand questions from young children. This is complex, given that their questions are often not formulated correctly due to their limited ability to phrase questions appropriately. Therefore, this research proposes the Dense Neural Network (DNN) method, which can handle questions with non-linear word order, using an Indonesian corpus of 5000 questions and answers. Experimental results show that the proposed DNN approach is superior to the Long Short-Term Memory (LSTM) method in understanding and answering questions from young children, especially questions that are poorly structured and formulated but have a clear context. DNN also achieved the highest accuracy in the training process at 0.9356, whereas the LSTM method showed a lower accuracy of only 0.8824. In a test of 2000 questions with different question patterns, the best accuracy was obtained by the DNN method at 93.1%. The results of this study make an essential contribution to the development of NLP systems for early childhood learning. | Dense Neural Network (DNN); Long Short-Term Memory (LSTM); Natural Language Processing (NLP); Question and answer | http://dx.doi.org/10.12785/ijcds/1571012545 | La Ode Fefli, Yarlin (Universitas Hasanuddin, Indonesia); Zahir Zainuddin and Ingrid Nurtanio (Hasanuddin University, Indonesia) |
15 | 1571013400 | A New Semantic Search Approach for the Holy Quran Based On Discourse Analysis and Advanced Word Representation Models | Semantic search is an information retrieval technique that seeks to understand the contextual meaning of words to find more accurate results. It remains an open challenge, especially for the Holy Quran, as this sacred book encodes crucial religious meanings with a level of semantics and eloquence beyond human capacities. This paper presents a new semantic search approach for the Holy Quran. The presented approach leverages the power of contextualized word representation models and discourse analysis to retrieve verses semantically relevant to the user's query, which do not necessarily appear verbatim in the Quranic text. It consists of three crucial modules. The first module concerns the discourse segmentation of the Quranic text into discourse units. The second module aims to identify the most effective word representation model for mapping the Quranic discourse units into semantic vectors. To this end, the performance of five cutting-edge word representation models in assessing semantic relatedness in the Holy Quran at the verse level is investigated. The third module concerns the semantic search model. Evaluation results of the proposed approach are very promising: the average precision and recall are 90.79% and 79.57%, respectively, which demonstrates the strength of the proposed approach and the ability of contextualized word representation models to capture Quranic semantic information. | Information retrieval; Natural Language Processing; contextualized word representation models; discourse analysis; semantic relatedness; Holy Quran | http://dx.doi.org/10.12785/ijcds/1571013400 | Samira Lagrini (University of Annaba, Algeria); Amina Debbah (University of Badji Mokhtar, Algeria) |
16 | 1571015706 | Big Data and Predictive Analytics for Strategic Human Resource Management: A Systematic Literature Review | In the digital transformation era, businesses generate vast amounts of data from various internal and external sources. This data explosion has not only led to the emergence of big data (BD) and predictive analytics (PA) but has also revolutionized the way we approach strategic human resource management (SHRM). With the exponential growth of organizational data in volume, velocity, and diversity, there is a notable opportunity to investigate BD and PA methods that provide executives with future-oriented insights into talent dynamics. This study presents a concise overview of the main themes and patterns using a systematic literature review (SLR). Several studies have been conducted on adopting BD and PA techniques, yielding excellent results and offering valuable insights for SHRM experts and future researchers. The search was restricted to articles written in English and published between 2016 and 2023. After an initial search, approximately 50 articles were identified and screened for relevance using a set of inclusion and exclusion criteria. In the final sample, 21 articles published between 2016 and 2023 met the inclusion criteria. The SLR summary describes the essential findings and limitations. The SLR also evaluated the status of existing research on the topic and identified areas for future research. | big data; predictive analytics; systematic literature review; strategic human resource management | http://dx.doi.org/10.12785/ijcds/1571015706 | Minwir Al-Shammari, Fatema Al bin ali, Mariam AlRashidi and Muneera Albuainin (University of Bahrain, Bahrain) |
17 | 1571016089 | Advanced Heterogeneous Ensemble Voting Mechanism with GRFOA-based Feature Selection for Emotion Recognition from EEG Signal Analysis | Important features of the electroencephalogram (EEG) that underlie emotional brain processes include high temporal resolution and asymmetric spatial activations. Unlike voice signals or facial expressions, which are easily duplicated, EEG-based emotion recognition has been shown to be a reliable option. Because people react emotionally differently to the same stimulus, EEG signals of emotion are not universal and can vary greatly from one individual to the next. As a consequence, EEG signals are highly reliant on the individual and have shown promising results in subject-dependent emotion identification. This research suggests using ensemble learning with an advanced voting mechanism to capture the spatial asymmetry and temporal dynamics of EEG for accurate and generalizable emotion identification. Two feature extraction techniques, Variational Mode Decomposition (VMD) and Empirical Mode Decomposition (EMD), are applied to the pre-processed EEG data. For feature selection, the Garra Rufa Fish optimization algorithm (GRFOA) is employed. The ensemble model includes a Temporal Convolutional Network (TCNN), an Extreme Learning Machine (ELM), and a Multi-Layer Perceptron (MLP). The proposed method utilizes EEG data from individual subjects for training classifiers, enabling the identification of emotions. The result is then derived via a voting classifier based on heterogeneous ensemble learning. Two publicly available datasets, DEAP and MAHNOB-HCI, are used to validate the proposed approach using broader cross-validation settings. | Empirical Mode decomposition; Electroencephalogram; Garra Rufa Fish optimization algorithm; Emotion analysis | http://dx.doi.org/10.12785/ijcds/1571016089 | Rajanikanth Aluvalu (Symbiosis International University, India); Asha V (New Horizon College of Engineering, India); Anandhi Jayadharmarajan (Visvesharaya Technological University & The Oxford College of Engineering, India); MVV Prasad Kantipudi (Symbiosis International Deemed University, India); Mousumi Bhanja (Symbiosis Institute of Technology Pune, India); Jyoti Bali (MIT Vishwaprayag University, India) |
18 | 1571016257 | Adaptive Exercise Meticulousness in Pose Detection and Monitoring via Machine Learning | This machine learning-based fitness monitoring system revolutionizes the industry through advanced computer vision and pose recognition technologies. Sophisticated algorithms, including MoveNet and dense neural networks, identify body poses during exercises with high accuracy. The system analyses joint angles to provide precise form feedback beyond pose identification alone. An interactive voice assistant translates poses into contextual exercise instructions, repetition counting, and personalized coaching delivered audibly. Modules for exercise recognition, environmental adaptation, and customization accommodate diverse workouts, conditions, and preferences. Cloud-based training with GPU acceleration drives continual evolution. By integrating detected poses with voice-assisted commands, the system creates a dynamic, engaging workout experience. This pioneering fusion of machine learning and computer vision is a quantum leap rather than merely a step ahead: it establishes new frontiers for intelligent fitness technologies, changes our perspective on exercise, and points toward a healthier and more empowered future. | Dense Neural Network; Dynamic Pose Monitoring; Exercise; Fitness; Machine Learning | http://dx.doi.org/10.12785/ijcds/1571016257 | N. Palanivel, G. Naveen and C. Sunilprasanna (India); S. Nimalan (Manakula Vinayagar Institute of Technology, India) |
19 | 1571016609 | A Hybrid Approach to Enhancing Personal Sensitive Information Protection in the Context of Cloud Storage | The growing use of cloud computing and the increasing popularity of digital technologies have made it essential to store and process personal data in cloud environments. As organizations and individuals continue to adopt cloud services, the security of sensitive personal information in this dynamic environment has become a top priority. Ensuring the confidentiality, integrity, and availability of personal data in the cloud is critical for mitigating the risks associated with cyber threats. This study examines security issues related to personal information in cloud systems and proposes a new approach that leverages machine learning (ML) classification and data tokenization techniques using serverless and secret-vault services provided by cloud service providers (CSPs). Supervised learning algorithms, including Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Multilayer Perceptron (MLP), are used for data label prediction. Notably, we found that the CNN achieved a remarkable 100% accuracy on a large dataset, ensuring perfect classification with double validation using pattern matching. Additionally, natural language processing (NLP) techniques are employed to clean and prepare data content, whereas data tokenization is used to ensure data confidentiality and integrity. Furthermore, an analysis of both model overhead and cloud performance revealed that our model is scalable and that data handling using our approach has no significant impact on time costs. This study also provides an overview of cloud computing, its service models, and the main security threats inherent in cloud infrastructure. The experimental design and results based on specific datasets validated the effectiveness of the proposed hybrid approach in enhancing the protection of sensitive personal information in cloud storage. | Personally Identifiable Information; Data Security; Cloud Storage; Data Masking; Data Leakage; Cloud Computing | http://dx.doi.org/10.12785/ijcds/1571016609 | Mohammed El Moudni and Houssaine Ziyati (C3S Lab, Higher School of Technology, UH2C, Morocco) |
20 | 1571016631 | Biosignals Based Smoking Status Prediction Using Standard Autoencoder and Artificial Neural Network | Smoking remains a major global health concern, causing a host of illnesses and early deaths around the globe. Utilizing biosignals to predict smoking status can yield insightful information for tailored interventions and smoking cessation programs. This work presents a novel method that combines an artificial neural network (ANN) and a standard autoencoder to predict smoking status from biosignals. The proposed method involves preprocessing biosignal data to extract relevant features, which are then input into an autoencoder for dimensionality reduction. The output of the autoencoder is used as input for predicting smoking status using an ANN. The model is trained and evaluated using a dataset containing biosignal data from individuals with a known smoking status. The experimental results demonstrate the strategy's effectiveness, showing a high degree of prediction accuracy for smoking status. The model's performance is further validated through comparisons with existing methods, showing superior accuracy and robustness. The developed model is integrated into a user-friendly application aimed at promoting smoking cessation. In addition to dedicated web pages aimed at educating users about the negative consequences of smoking and the advantages of quitting, the application offers users individualized insights into their smoking status based on biosignals. Additionally, a menu-based chatbot is included to address user queries and provide support for smoking cessation efforts. The implemented deep learning model achieves the desired level of accuracy in predicting smoking status, and the user-friendly application offers a convenient platform for public health and personalized healthcare interventions. | Biosignals; Smoking status; Autoencoder; Artificial Neural Network | http://dx.doi.org/10.12785/ijcds/1571016631 | N. Palanivel (India); S. Deivanai, G. Lakshmi Priya and B. Sindhuja (Manakula Vinayagar Institute of Technology, India) |
21 | 1571016767 | Encryption Technique Using a Mixture of Hill Cipher and Modified DNA for Secure Data Transmission | The 21st century has seen an explosion of information due to the quick development of technology, making information a far more crucial strategic resource. At the same time, hackers' ability to steal information has grown in both force and sophistication. Consequently, transmitting information secretly has become a main concern of all agencies. Further, as classical cryptographic methods are now exposed to attacks, protecting data with a combination of steganography and cryptography techniques is becoming increasingly popular and widely adopted. It has therefore been determined that using DNA in cryptography could lead to new technological advancements by converting original text into an unintelligible format. In this paper, a new cryptographic technique that combines a Modified DNA sequence with the Hill cipher is proposed. The proposed technique includes four phases. In the first phase, the Hill cipher encrypts plain text into n-bit binary numbers. Second, XOR operations are performed on the result, and a 32-bit key is added to the XOR output (these first two phases are sketched after this table). Third, Modified DNA cryptography is applied to introduce ambiguity and steganography. The final phase, decryption, recovers the original message on the receiver side. The proposed scheme provides higher data security when compared to several existing schemes. | Hill Cipher; Modified DNA; Cryptography; Steganography | http://dx.doi.org/10.12785/ijcds/1571016767 | Kameran Ali Ameen, Walled Khalid Abdulwahab and Yalmaz Najm Aldeen Taher (University of Kirkuk, Iraq) |
22 | 1571017038 | Efficient Neuro-Fuzzy based Relay Selection in IoT-enabled SDWSN | The Internet of Things (IoT) is made up of wireless sensor devices (nodes) that work together to create a dynamic network without central management or continuous assistance. Highly mobile sensor nodes cause periodic topological changes in the network and link failures, which frequently force nodes to rediscover routes for efficient data transmission in IoT; this brings attention to the issues of energy management and improving network lifetime. Relay selection is one method to reduce node energy consumption in the IoT network. However, designing communication protocols for relay selection, especially for dynamic networks, is a big challenge for researchers. To overcome these challenges, a Software Defined Networking (SDN) architecture is used to minimize the overhead of sensor nodes by managing topology control and routing decisions through artificial intelligence algorithms. Fuzzy logic and neural networks are combined to solve more complex problems such as decision-making and optimization. This paper presents an Energy-aware Relay Selection Technique using an adaptive neuro-fuzzy model (ERST) to optimize overall energy usage and improve the lifespan of the network; a relay node is selected depending on remaining energy, signal strength, and expected transmission ratio. The proposed ERST uses a fuzzy logic inference system to make intelligent decisions based on fuzzy rules. The neural network can be trained to fine-tune the fuzzy system using feedback concepts to select the optimal relay node. In addition, the simulation results show that the suggested work outperforms previous protocols, with an 8% improvement in packet delivery ratio, a 5% reduction in end-to-end delay, a 4% reduction in energy usage, and an 8% increase in average throughput and overall network lifetime. | Internet of Things; Relay Selection; Energy Efficiency; Fuzzy-Logic; Neural networks | http://dx.doi.org/10.12785/ijcds/1571017038 | Poornima M R (UVCE, Bengaluru, India); VImala H S (Professor, India); Shreyas J (Manipal Institute of Technology, India) |
23 | 1571017778 | Design and Development of Novel AXI Interconnect based NoC Architecture for SoC with Reduced Latency and Improved Throughput | A novel AXI interconnect-based Network-on-Chip (NoC) architecture is presented in this research. The purpose of the architecture is to make System-on-Chip (SoC) designs more efficient by reducing latency and improving throughput. Because of its high performance and bandwidth capabilities, the Advanced eXtensible Interface (AXI), a component of ARM's Advanced Microcontroller Bus Architecture (AMBA), is used; this configuration makes it possible to communicate effectively inside the chip. The proposed architecture overcomes the scalability limits inherent in conventional bus systems by integrating AXI with NoC principles, which enables more efficient data transmission across a greater number of linked modules. By introducing an effective routing system and an improved network interface, this research work enables packet transfer to occur without interruption. A 2x2 mesh topology is used to simulate the proposed architecture, and an XY routing algorithm is incorporated into the simulation to guarantee deadlock- and livelock-free operation (the routing rule is sketched after this table). This highlights the potential of the proposed architecture in high-performance computing applications that require rapid data exchange and minimal response times. The simulation results demonstrate significant improvements over traditional interconnect approaches, yielding a lower latency of 0.99 microseconds and a higher throughput of 4.363 flits per cycle, which demonstrates the potential of the proposed architecture. | AXI Interconnect; NoC; SoC; Router; Mesh Topology | http://dx.doi.org/10.12785/ijcds/1571017778 | M Nagarjuna (Vardhaman College of Engineering, India); Girish V Attimarad (K S School of Engineering and Management Bangalore, India) |
24 | 1571018142 | Machine Learning based Material Demand Prediction of Construction Equipment for Maintenance | Construction managers frequently face Construction Equipment (CE) challenges related to running repairs and the replacement of spare-part materials, as well as material shortages, sudden spare-part damage, and the unavailability of necessary materials at job sites. Regular follow-up and tracking of material availability and usage at each stage of the requirement phase therefore become essential. This study presents Machine Learning (ML) based material demand prediction. Training of the ML models utilizes historical maintenance and procurement periodic data related to CE materials. The study highlights the use of Multiple Linear Regression (MLR), Support Vector Regression (SVR), and Decision Tree (DT) Regressor, along with the ensemble boosting models Random Forest (RF) Regressor and Gradient Boosting Regressor (GBR). According to the performance measurement of each model, RF performs best and is used for prediction. Material demand prediction helps in the maintenance and operational planning of CE. Subsequently, the approach assists in addressing issues early by involving operators and site owners, enabling preventive actions to be taken before the scheduled procurement process. This study addresses the corrective measurement of the model using periodic data. The model performance results indicate that early prediction of maintenance costs, based on the quantity of essential materials withdrawn from demand, is helpful for budgeting expenditures. | Construction Equipment; Machine Learning; Material Demand; Maintenance | http://dx.doi.org/10.12785/ijcds/1571018142 | Poonam Prashant Katyare and Shubhalaxmi S. Joshi (Vishwanath Karad MIT World Peace University, India) |
25 | 1571020029 | A Comparative Analysis and Review of Techniques for African Facial Image Processing | Facial recognition algorithms power various applications, demanding representative and diverse datasets. However, developing reliable models for African populations is hindered by the scarcity of African facial image databases. This study addresses this gap by analyzing the state and potential of African facial image collections. The methodology involves collecting and analyzing indigenous African datasets and evaluating factors like temporal relevance, geographic coverage, and demographic representation. We evaluate the quality and diversity of existing datasets, as well as the ethical and cultural issues of data collection. We also apply machine-learning techniques, namely Principal Component Analysis (PCA) and Support Vector Machines (SVM), to analyze and classify the facial features of three African ethnic groups. The study shows that PCA can capture facial variations and that SVM can achieve 55% accuracy, with differences between groups. Findings highlight the potential of machine learning for inclusive facial recognition but also reveal challenges, including data imbalance and limitations in the chosen features. To achieve fair and reliable facial recognition, future directions advocate for a culturally sensitive approach and highlight the importance of representative datasets collected in Africa, with a concentration on data from underrepresented regions and ethnic groups. The collection of diverse and culturally sensitive datasets can be facilitated by collaborative activities between researchers and local communities. | Facial Image Processing; Bias; Facial Image Datasets; Machine Learning; Classification and Clustering; Digital Signal Processing | http://dx.doi.org/10.12785/ijcds/1571020029 | Amarachi Udefi (Obafemi Awolowo University Ile Ife, Osun State & Grundtvig Polytechnic Oba, Anambra State, Nigeria); Segun Aina (Obafemi Awolowo University Ile-Ife, Osun State, Nigeria); Aderonke Lawal (Obafemi Awolowo University Ile Ife, Osun State, Nigeria); Niran Oluwaranti (Obafemi Awolowo University, Ile-Ife, Nigeria) |
26 | 1571020041 | Detecting Cyber Threats in IoT Networks: A Machine Learning Approach | Internet of Things (IoT) network security challenges are among the key cybersecurity demands oriented towards the safety of data distribution and storage. Prior research has left significant gaps in tackling this danger, especially in real-world IoT setups. In this study, we address this previously unfilled gap with an innovative method for detecting cyber threats in IoT networks. The technique is based on merging machine learning and neural network algorithms trained on vast historical IoT datasets. Several diverse methods, particularly gradient boosting, convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and recurrent neural networks (RNNs), are used to detect and categorize network traffic aspects that potentially suggest cyber risks. The evaluation of each algorithm's performance is based on conventional metrics such as accuracy, precision, recall, and F1-score. Through rigorous testing, we illustrate the applicability of our technique: our solution recognized and curbed cyber threats in IoT networks, offering the most accurate results of 93% using gradient boosting. This work confirms the advancement of machine learning and deep learning techniques in increasing cybersecurity in IoT environments, and our findings may serve as the starting point for future, more refined investigations. | Internet of Things; Cybersecurity; Machine learning; Network security | http://dx.doi.org/10.12785/ijcds/1571020041 | Atheer Alaa Hammad, Sr (Ministry of Education & Directorate of Education, Iraq); May Adnan Falih (Southern Technical University, Iraq); Senan Ali Abd (University of Anbar, Iraq); Saadaldeen Rashid Ahmed (Artificial Intelligence Engineering, Iraq) |
27 | 1571020131 | Feature Engineering for Epileptic Seizure Classification Using SeqBoostNet | Epileptic seizure, a severe neurological condition, profoundly impacts patients' social lives, necessitating precise diagnosis for classification and prediction. This research addresses the critical gap in automated seizure detection for epilepsy patients, aiming to improve diagnostic accuracy and prediction capabilities through Artificial Intelligence-driven analysis of Electroencephalography (EEG) signals. The system employs an innovative combination of spectral and temporal features, pairing Uniform Manifold Approximation and Projection (UMAP) with the Fast Fourier Transform (FFT) (an FFT band-power sketch follows this table), and a classification technique called Sequential Boosting Network (SeqBoostNet). SeqBoostNet is a stacked model that integrates machine learning (ML) and deep learning (DL) approaches, leveraging the strengths of both methodologies to swiftly differentiate seizure onsets, seizure events, and healthy brain activity. The method's efficacy is validated on the benchmark BONN dataset from the UCI repository and on real-time data from the Bangalore EEG Epilepsy Dataset (BEED), achieving remarkable accuracy rates of 98.40% for BONN and 99.66% for BEED. The practical significance of this study lies in its potential to transform epilepsy care by providing a precise automated seizure detection system, ultimately enhancing diagnostic accuracy and patient outcomes. Furthermore, it underscores the importance of integrating advanced AI techniques with EEG analysis for more effective neurological diagnostics and treatment strategies. | Epileptic Seizure; UMAP; Machine Learning; Deep Learning; FFT; LSTM | http://dx.doi.org/10.12785/ijcds/1571020131 | Najmusseher and Nizar Banu P K (Christ University, India) |
28 | 1571020530 | Modified YOLOv5-based License Plate Detection: an Efficient Approach for Automatic Vehicle Identification | Indonesia witnesses a continual annual surge in its vehicle count, with the Central Statistics Agency (BPS) projecting a total of 148.2 million vehicles in 2022, marking a 6.3 million increase from the preceding year. This growth underscores the escalating challenges associated with traffic management and violations. Hence, the development of a robust vehicle number plate image recognition system becomes paramount for effective traffic control, accurate parking records, and streamlined identification of vehicle owners. In this study, the YOLOv5 algorithm is employed in conjunction with the AOLP dataset, encompassing diverse vehicle images under challenging conditions such as low lighting, intricate viewing angles, and blurred license plates. The YOLOv5 algorithm exhibits noteworthy performance metrics, boasting a recall value of 99.7%, precision reaching 99.1%, mAP50 of 99.4%, and mAP50-95 of 84.8%. The elevated precision signifies the model's proficiency in minimizing identification errors, while the commendable recall highlights its adeptness in accurately locating existing number plates. Concurrently, the Optical Character Recognition (OCR) model, dedicated to character recognition on number plates, attains an accuracy level of 92.85%, underscoring its efficacy in deciphering alphanumeric characters. This integrated approach leverages advanced algorithms to tackle the intricacies of real-world scenarios, affirming its viability for enhancing traffic management systems and bolstering the efficiency of vehicle-related processes. | YOLO; OCR; Object detection; Plate number; Character recognition | http://dx.doi.org/10.12785/ijcds/1571020530 | Rifqi Alfinnur Charisma and Suharjito Suharjito (Bina Nusantara University, Indonesia) |
29 | 1571021663 | Design and Implementation of an Advanced Digital Communication System Based on SDR | Simplicity, flexibility, and high scalability are mandatory for modern digital communication systems. This can be achieved using software-defined radio (SDR) technology, which depends on digital signal processing (DSP) software algorithms. This paper considers designing the modulation and demodulation parts of a single-carrier digital communication system based on a microcontroller (MC). The signal is modulated digitally using a look-up table (LUT) module, while the receiver demodulates the signal using a DSP algorithm based on a single-carrier discrete Fourier transform (DFT). Both the receiver and the transmitter employ a Teensy 4.0 microcontroller, which can be programmed in C++. The target data rate used as a test in this paper is 10 kilosymbols/sec (KS/s), and the system supports multiple modulation types. At the transmitter, modulation schemes such as BPSK, QPSK, 8QAM, 8PSK, and 16QAM are generated, while at the receiver the symbols' phases, rather than their amplitudes, are exploited to detect the signal; this method is suitable for any type of modulation scheme. In summary, this paper achieves the design of two new ideas: modulating the signal using the MC and demodulating the signal using the MC. | Digital modulation; microcontroller; C++; MPSK; SDR; QAM | http://dx.doi.org/10.12785/ijcds/1571021663 | Sadeem Mohameed (University of Ninevah, Iraq); Mohamad A. Ahmed, Mahmod Ahmed Mahmod and Abdullah B. Al-Qazzaz (Ninevah University, Iraq)
30 | 1571023677 | Variance Adaptive Optimization for the Deep Learning Applications | Deep learning, a branch of artificial intelligence, learns by training a deep neural network. Optimization is an iterative process of improving the overall performance of a deep neural network by lowering the loss or error in the network. However, optimizing deep neural networks is a non-trivial and time-consuming task. Deep learning has been utilized in many applications ranging from object detection, computer vision, and image classification to natural language processing. Hence, carefully optimizing deep neural networks becomes an essential part of application development. In the literature, many optimization algorithms, such as stochastic gradient descent, adaptive moment estimation, adaptive gradients, and root mean square propagation, have been employed to optimize deep neural networks. However, optimal convergence and generalization on unseen data remain issues for most conventional approaches. In this paper, we propose a variance adaptive optimization (VAdam) technique based on the Adaptive Moment Estimation (ADAM) optimizer to enhance convergence and generalization during deep learning. We utilize gradient variance as a useful insight to adaptively change the learning rate, resulting in improved convergence time and generalization accuracy. Experimentation performed on various datasets demonstrates the effectiveness of the proposed optimizer in terms of convergence and generalization compared to existing optimizers. | Deep Neural Networks; Deep Learning; Optimization; Variance; Convergence | http://dx.doi.org/10.12785/ijcds/1571023677 | Nagesh Jadhav and Rekha Sugandhi (MIT Art, Design and Technology University); Rajendra Pawar (MIT Art Design and Technology University Pune India, India); Swati Shirke (Pimpri Chinchwad University, India); Jagannath E. Nalavade (MIT Art, Design and Technology University)
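The abstract of entry 30 states only that VAdam adapts the learning rate using gradient variance on top of ADAM. The sketch below is one plausible reading of that idea, not the authors' algorithm: the variance proxy and the damping rule are assumptions made for illustration.

```python
import numpy as np

def vadam_step(w, grad, state, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style update whose step size shrinks as gradient variance grows.

    The variance proxy max(v_hat - m_hat**2, 0) and the damping rule below are
    illustrative assumptions; the abstract does not specify VAdam's exact form.
    """
    state["t"] += 1
    t = state["t"]
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad        # first moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2   # second moment
    m_hat = state["m"] / (1 - beta1 ** t)                       # bias correction
    v_hat = state["v"] / (1 - beta2 ** t)
    variance = np.maximum(v_hat - m_hat ** 2, 0.0)              # gradient-noise estimate
    adaptive_lr = lr / (1.0 + variance.mean())                  # damp lr when gradients are noisy
    return w - adaptive_lr * m_hat / (np.sqrt(v_hat) + eps)

# usage: minimize f(w) = ||w||^2, whose gradient is 2w
w = np.random.randn(5)
state = {"m": np.zeros(5), "v": np.zeros(5), "t": 0}
for _ in range(300):
    w = vadam_step(w, 2 * w, state)
print(np.round(w, 3))  # entries end up close to zero
```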
31 | 1571024157 | New Ensemble Model for Diagnosing Retinal Diseases from Optical Coherence Tomography Images | Vision depends greatly on the retina; unfortunately, it may be exposed to many diseases that lead to poor vision or blindness. This research aims to diagnose retinal diseases through OCT images, focusing on Drusen, diabetic macular edema (DME), and choroidal neovascularization (CNV). A new ensemble model is proposed that introduces new methods and combines them with soft and hard voting; it is based on three sub-models (Custom-model, Xception, and MobileNet). Because we noticed that some sub-models are better than others at classifying a particular category, each sub-model was assigned to the category it classifies best. We also used a method to correct final misclassifications through a list of negative predictions, created to contain categories to which the sub-model is reasonably certain an image does not belong. The proposed ensemble model achieved a state-of-the-art accuracy of 100%, and the Custom model obtained an accuracy of 99.79% on the UCSD-v2 dataset. The Duke dataset was also employed to verify the performance efficiency of the model, with the ensemble model again achieving an accuracy of 100% and the Custom model recording an accuracy of 99.69%. In the first dataset, the Custom model specializes in Drusen and Normal, Xception in DME, and MobileNet in CNV; in the second dataset, the Custom model specializes in AMD, Xception in DME, and MobileNet in Normal. The results of this research emphasize the effectiveness of ensemble learning techniques in analyzing medical images, especially in diagnosing retinal diseases. | Ensemble Learning; Deep Learning; OCT Images; Retinal Diseases; Drusen; DME | http://dx.doi.org/10.12785/ijcds/1571024157 | Shibly Hameed Al-Amiry (University of Al-Qadisiyah, Iraq); Ali Mohsin Al-juboori (Al-Qadisiyah University, Iraq)
32 | 1571024236 | Convolutional Neural Network with Extreme Learning for the Classification of Plant Leaf Diseases from Images | Artificial Intelligence and Deep Learning have assumed significance in the contemporary era due to their ability to analyze data and discover latent trends or patterns that were not known earlier. Agriculture is the foundation of the entire world, and India in particular is highly dependent on it. Farmers face many difficulties, from seed selection to fertilizer usage, disease control, harvesting, and selling the agricultural yield. Technological innovations should be used to help farmers achieve the highest yield possible with minimal expenditure. The prime motivation behind this research stems from the idea that the ability to detect leaf issues and implement corrective measures can mitigate the decrease in crop productivity. Existing Deep Learning methods like the Convolutional Neural Network show high efficiency regarding the modification and use of acquired knowledge. A novel framework has been developed by incorporating a Convolutional Neural Network and tuning the hyperparameters. Training has been performed using an Extreme Learning process, which yielded better results. The Convolutional Neural Network - Extreme Learning algorithm is the underlying algorithm. The suggested model's performance is contrasted with many Deep Learning models. The empirical study makes use of the Plant Village dataset. The leaf disease categories considered in this research are early blight, black rot, bacterial spot, apple scab, cercospora leaf spot, and healthy. Convolutional Neural Network - Extreme Learning achieved 94.28% precision, 95.63% accuracy, 94.68% recall, and a 96.23% F1-score using the Plant Village dataset, outperforming other classifiers. The research outcomes reflect that the proposed Deep Learning model and algorithm can be used in real-world computer vision applications pertaining to agriculture. | Convolutional Neural Networks; Deep Learning; Extreme Learning; Hyperparameters; Plant Leaf Disease Classification | http://dx.doi.org/10.12785/ijcds/1571024236 | Swapna Jamal and John Edwin Judith (Noorul Islam Centre for Higher Education, India)
33 | 1571024502 | Artificial-Intelligent-Enhanced Adaptive Vertical Beamforming Techniques for 5G Networks | The advances in 5G-era systems and technology throughout the years suggest new uses for adaptive beamforming and Digital Signal Processing (DSP) strategies in communication systems to realize the transformational capacity of 5G wireless technology. This article evaluates the performance metrics of phase-shift beamforming in a phased Uniform Rectangular Array (URA) system aided with Artificial Intelligence (AI) to improve link and communication quality in dense-user urban environments. We use conventional Quadrature Amplitude Modulation (QAM) and evaluate its robustness through a series of simulations of Bit Error Rate (BER) under different Signal-to-Noise Ratio (SNR) values. This study compares theoretical and empirical BER to confirm the beamforming algorithm's operation in the communication system. We propose a spatial spectrum technique for a clear visualization of the Direction of Arrival (DoA), giving details of the signal movement of users in the network and of array behavior at the base station (BS). These results not only confirm the proposed method's effectiveness in the mobile network, but also highlight the importance of an AI system embedded with beamforming in achieving the expected performance metrics and reliability for future 5G communication networks and beyond. | A.I.; Beamforming; BER; Throughput; Vertical; 5G | http://dx.doi.org/10.12785/ijcds/1571024502 | Yousif Maher Alhatim (University of Ninevah, Iraq); Ali Othman Al Janaby (Ninevah University, Iraq & Electronics Engineering, Iraq)
34 | 1571024840 | Intelligent Approaches for Alzheimer's Disease Diagnosis from EEG Signals: Systematic Review | This systematic review explores the emerging field of Alzheimer's disease (AD) diagnosis using recent advances in machine learning (ML) and deep learning (DL) methods applied to EEG signals. This review focuses on 38 key articles published between January 2020 and February 2024, critically examining the integration of computational intelligence with neuroimaging to improve diagnostic accuracy and early detection of AD. AD poses significant diagnostic and treatment challenges, which are exacerbated by the aging of the global population. Traditional diagnostic methods, while comprehensive, are often limited by their time-consuming nature, reliance on expert interpretation, and limited accessibility. EEG is emerging as a promising alternative, providing a non-invasive, cost-effective way to record the brain's electrical activity and identify neurophysiological markers indicative of AD. The review highlights the shift towards automated diagnostic processes, where ML and DL techniques play a crucial role in analyzing EEG data, extracting relevant features, and classifying AD stages with very high accuracy. It describes different methods for preprocessing EEG signals, extracting features, and applying different classifier models, demonstrating the complexity of the field and the nuanced understanding of EEG signals in the context of AD. Although the review demonstrates several advantageous developments, it also highlights critical challenges and limitations. For example, the field needs more extensive and more diverse datasets to increase model generalizability, and multi-modal data integration to achieve a more comprehensive AD diagnosis. Preprocessing and classification techniques must also be further developed because of the complex nature of EEG data and AD pathology. To conclude, this review portrays EEG-based AD diagnosis as a promising field fueled by computational breakthroughs, yet the still-limited literature calls for additional scientific inquiry and further research. Promising outlooks include co-investigating EEG with complementary biomarkers and investigating innovative ML/DL approaches. | Alzheimer's Disease; EEG; Machine Learning; Deep Learning; AD Diagnosis | http://dx.doi.org/10.12785/ijcds/1571024840 | Nigar M. Shafiq Surameery (University of Garmian, Iraq); Abdulbasit Kadhim Alazzawi (Diyala & College of Science, Turkey); Aras Asaad (University of Buckingham, United Kingdom (Great Britain))
35 | 1571024871 | RFM-T Model Clustering Analysis in Improving Customer Segmentation | In the dynamic landscape of business, understanding and identifying customers are paramount for effective marketing strategies. This study delves into the realm of customer segmentation, a crucial component of robust marketing strategies, particularly focusing on the widely adopted RFM (Recency, Frequency, and Monetary) model. Various new models of RFM have been explored, with a notable extension being the RFM-T model, introducing the "T" variable to represent Time. This study aims to compare the performance of the traditional RFM model with the innovative RFM-T model, assessing their efficacy in customer segmentation. Utilizing a dataset sourced from a US-based online retail platform, the study employs the K-Means algorithm for segmentation, a method commonly utilized for partitioning data points into distinct clusters. To ascertain the optimal number of clusters, the Elbow Curve approach is employed, offering insight into the granularity of segmentation. Subsequently, the Silhouette Score, a metric used to assess the cohesion and separation of clusters, is leveraged to evaluate the quality and effectiveness of both models. By conducting a comparative analysis of the traditional RFM model and its enhanced RFM-T counterpart, the study endeavors to shed light on their respective contributions to the refinement of customer profiling and segmentation strategies within the online retail industry. Through this exploration, businesses can glean valuable insights into the evolving landscape of customer segmentation, thereby enabling them to tailor their marketing efforts more precisely and effectively to meet the dynamic needs and preferences of their target audience. | RFM; RFM-T; Time; K-Means algorithm; Customer segmentation | http://dx.doi.org/10.12785/ijcds/1571024871 | Astrid Dewi Rana, Quezvanya Chloe Milano Hadisantoso and Abba Suganda Girsang (Bina Nusantara University, Indonesia) |
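Entry 35's pipeline (RFM-T features, K-Means, Elbow Curve, Silhouette Score) maps onto standard tooling; the sketch below is a minimal illustration. The toy transaction table stands in for the US online-retail data, and defining "T" as the span between a customer's first and last purchase is an assumption, since the abstract does not pin the variable down.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

np.random.seed(0)
tx = pd.DataFrame({                      # toy transactions
    "customer": np.random.randint(0, 50, 500),
    "amount": np.random.exponential(40, 500),
    "day": np.random.randint(0, 365, 500),
})
now = tx["day"].max() + 1
rfmt = tx.groupby("customer").agg(
    recency=("day", lambda d: now - d.max()),       # R
    frequency=("day", "count"),                     # F
    monetary=("amount", "sum"),                     # M
    tenure=("day", lambda d: d.max() - d.min()),    # T (assumed definition)
)
X = StandardScaler().fit_transform(rfmt)
for k in range(2, 7):                    # elbow-style scan over cluster counts
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1), round(silhouette_score(X, km.labels_), 3))
```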
36 | 1571024987 | Investigational Study for Overcoming Security Challenges in Implantable Medical Devices | Implantable Medical Devices (IMDs) have gained significant popularity due to their telemetry capabilities, making them a preferred choice for patients and medical professionals alike. However, like any networked device, IMDs are vulnerable to security breaches, which can pose serious risks to human life. Consequently, ensuring robust security measures for these devices is of utmost importance. While researchers have made efforts to address these vulnerabilities, many proposed solutions are impractical due to the inherent constraints associated with IMDs, particularly their limited battery life. This paper presents a comprehensive review of battery-efficient security solutions for IMDs by surveying extensive research literature in the field. By exploring innovative approaches that provide both strong security and optimized energy consumption, this study aims to strike a balance between safeguarding IMDs and prolonging their operational lifespan. The paper consolidates existing research, highlighting promising avenues for practical and effective security solutions in the face of evolving threats. Serving as a valuable reference for future research endeavors, this work emphasizes the criticality of continuous advancements in this field to ensure the well-being of patients who rely on these life-saving devices. Ultimately, it underscores the need to overcome the unique challenges posed by limited battery life in order to enhance the security of IMDs and mitigate potential risks to human health. | Implantable Medical Devices; Security; Privacy; battery-efficient | http://dx.doi.org/10.12785/ijcds/1571024987 | Muawya Al-Dalaien (Princess Sumaya University for Technology, Jordan); Hussein Al bazar (Arab Open University Saudi Arabia, Saudi Arabia); Hussein El-jaber (Arab Open University, Kingdom of Saudi Arabia, Malaysia) |
37 | 1571026011 | Enhancing Bitcoin Forecast Accuracy by Integrating AI, Sentiment Analysis, and Financial Models | This study explores the application of advanced AI models, namely Long Short-Term Memory (LSTM), Prophet, and SARIMAX, in predicting Bitcoin prices. It assesses the impact of incorporating sentiment analysis from sources like Twitter and Yahoo, processed through Large Language Models. The research aims to understand how sentiment analysis, reflecting investor sentiments and market perceptions, can enhance the accuracy of these forecasting models. The paper investigates the potential synergies and challenges in improving predictive performance by integrating qualitative sentiment data with quantitative financial models. The analysis compares the models' accuracy with and without sentiment inputs, utilizing historical Bitcoin price data and sentiment indicators. This study's motivation is the growing recognition of investor sentiment's impact on market fluctuations, particularly in the highly speculative and sentiment-driven cryptocurrency markets. Many studies claim that traditional financial models, while robust in handling quantitative data, often fail to incorporate market sentiment. This paper also contributes to the financial forecasting literature by offering insights into the benefits and complexities of combining traditional econometric models with sentiment analysis, providing a unique understanding of market dynamics influenced by investor behavior. The findings suggest that sentiment analysis can significantly refine forecasting accuracy, underscoring the importance of incorporating human sentiment and market perceptions in predictive models. | Bitcoin forecasting; LSTM; sentiment analysis; predictive accuracy | http://dx.doi.org/10.12785/ijcds/1571026011 | Mohamad El Abaji and Ramzi A. Haraty (Lebanese American University, Lebanon)
38 | 1571027104 | Synergistic Exploration Combining Traditional And Evolutionary Methods To Improve Supervised Satellite Images Classification | This paper aims to enhance the performance of supervised classification of satellite images by adopting a spectral classification approach, which often encounters the issue of class confusion due to its reliance solely on spectral information. The proposed approach, EAMD (Evolutionary Algorithm and Minimum Distance), integrates a Genetic Algorithm-based evolutionary method with the Minimum Distance method. During the training phase, the Genetic Algorithm generates an optimal set of subcategories to represent the different object classes present in the image and identifies an optimal representative set of pixels for class assignment. Experimental tests conducted on various satellite images yield promising results, demonstrating the capability of Genetic Algorithms to enhance classification accuracy and effectively identify and exclude misleading pixels responsible for class confusion. This aspect is crucial, as the effectiveness of supervised classification heavily depends on the quality of the training samples. Validation of the approach was further reinforced by intentionally injecting erroneous data into the training data. The proposed approach successfully detects and avoids the erroneous pixels, a task the Minimum Distance method alone fails to accomplish. The obtained results demonstrate that the proposed hybrid approach offers significant potential for improving the accuracy and reliability of satellite image classification techniques. | Genetic Algorithm; Minimum Distance; Satellite Images; Supervised Classification | http://dx.doi.org/10.12785/ijcds/1571027104 | Ismahane Kariche (University of Sciences and Technologie of Oran-Mohamed Boudiaf (USTOMB), Algeria); Hadria Fizazi (University of Science and Technology Mohamed Boudiaf, Algeria)
39 | 1571027526 | Advancing Text Classification: A Systematic Review of Few-Shot Learning Approaches | Few-shot learning, a specialized branch of machine learning, tackles the challenge of constructing accurate models with minimal labeled data. This is particularly pertinent in text classification, where annotated samples are often scarce, especially in niche domains or certain languages. Our survey offers an updated synthesis of the latest developments in few-shot learning for text classification, delving into core techniques such as metric-based, model-based, and optimization-based approaches, and their suitability for textual data. We pay special attention to transfer learning and pre-trained language models, which have demonstrated exceptional capabilities in comprehending and categorizing text with few examples. Additionally, our review extends to the exploration of few-shot learning in Arabic text classification, including both datasets and existing research efforts. We evaluated 32 studies that met our inclusion criteria, summarizing benchmarks and datasets, discussing few-shot learning's real-world impacts, and suggesting future research avenues. Our survey aims to provide a thorough groundwork for those at the nexus of few-shot learning and text classification, with an added focus on Arabic text, emphasizing the creation of versatile models that can effectively learn from limited data and sustain high performance. We also identify key challenges in applying Few-Shot Learning (FSL), including data sparsity, domain specificity, and language constraints, which necessitate innovative solutions for robust model adaptation and generalization across diverse textual domains. | Few-shot learning; Text classification; Transfer learning; Machine Learning; Pre-trained Language Models | http://dx.doi.org/10.12785/ijcds/1571027526 | Amani Aljehani and Syed Hamid Hasan (King Abdulaziz University, Saudi Arabia); Usman Ali Khan (King Abdulaziz University, Jeddah, Saudi Arabia)
40 | 1571029493 | Deep Learning in Plant Stress Phenomics Studies - A Review | Efficient crop management and treatment rely on early detection of plant stress. Imaging sensors provide a non-destructive and commonly used method for detecting stress in large farm fields. With machine learning and image processing, several automated plant stress detection methods have been developed. This technology can analyze large sets of plant images, identifying even the most subtle spectral and morphological characteristics that indicate stress. This can help categorize plants as either stressed or not, with significant implications for farmers and agriculture managers. Deep learning has shown great potential in vision tasks, making it an ideal candidate for plant stress detection. This comprehensive review paper explores the use of deep learning for detecting biotic and abiotic plant stress using various imaging techniques. A systematic bibliometric review of the Scopus database was conducted, using keywords to shortlist and identify significant contributions in the literature. The review also presents details of public and private datasets used in plant stress detection studies. The insights gained from this study will significantly contribute to developing more profound deep-learning applications in plant stress research, leading to more sustainable crop production systems. Additionally, this study will assist researchers and botanists in developing plant types resilient to various stresses. | Deep learning; Imaging techniques; Machine vision; Machine learning; Plant stress | http://dx.doi.org/10.12785/ijcds/1571029493 | Sanjyot Patil (Symbiosis Institute of Technology, India); Shrikrishna Kolhar (Symbiosis Institute of Technology, Symbiosis International (Deemed) University (SIU), Pune, India); Jayant Jagtap (NIMS Institute of Computing, Artificial Intelligence and Machine Learning, NIMS University Rajasthan, Jaipur, India) |
41 | 1571031682 | A machine learning-based optimization algorithm for wearable wireless sensor networks | In an Internet of Things (IoT) setting, a Wireless Sensor Network (WSN) effectively collects and transmits data. Using the distributed characteristics of the network, machine learning techniques may reduce data transmission rates. This paper offers a unique cluster-based data-gathering approach, the Machine Learning-based Optimization Algorithm for WSN (MLOA-WSN), designed for assessing networks in terms of power, latency, height, and length. The data-gathering technique is put into action using the cluster head: data collected from comparable groups is transmitted to the mobile sink, where machine learning methods are then applied for routing and data optimization. As a result of the time-distributed transmission period, each node across the cluster can begin sensing and sending data again to the cluster head. The cluster-head node performs data fusion, aggregation, and compression, and sends the generated statistics to the base station. Consequently, the suggested strategy yields promising outcomes, as it considerably improves network performance and minimizes packet loss due to a reduced number of aggregating procedures. The findings for the MLOA-WSN system are a value of 2.43, a packet loss rate of 7.6, and an average delay of 224 for the optimizers. The method was evaluated under various settings, and the outcomes indicated that the suggested algorithm outperformed previous techniques in terms of decreased delay and solution precision. | Wireless Sensor Network; Data Transmission; Machine Learning; Internet of Things; Optimisation Algorithm; Cluster | http://dx.doi.org/10.12785/ijcds/1571031682 | Sudhakar Yadav N and Uma Maheswari V (Chaitanya Bharathi Institute of Technology, India); Rajanikanth Aluvalu (Symbiosis International University, India); Sai Prashanth Mallellu (Vardhaman College of Engineering, India); Vaibhav Saini (Verolt Technologies Pvt Ltd, India); MVV Prasad Kantipudi (Symbiosis International Deemed University, India)
42 | 1571031764 | Classification of Road Features Using Transfer Learning and Convolutional Neural Network (CNN) | Efficient and accurate classification of road features, such as crosswalks, intersections, overpasses, and roundabouts, is crucial for enhancing road safety and optimizing traffic management. In this study, we propose a classification approach that utilizes the power of transfer learning and convolutional neural networks (CNNs) to address the road feature classification problem. By leveraging advancements in deep learning and employing state-of-the-art CNN architectures, the proposed system aims to achieve robust and real-time classification of road features. The dataset contains 7616 images of roundabouts, crosswalks, overpasses, and intersections, merged from the MLRSNet dataset and satellite images of Malaysia manually extracted using Google Earth Pro. We designed a CNN architecture that consists of 24 convolution layers and eight fully connected layers. Transfer learning models such as ResNet50, MobileNetV2, VGG19 and InceptionV3 were also explored for road feature classification. The best-performing model during the validation phase is InceptionV3, with an accuracy of 98.9777%, whereas the best-performing models during the test phase are the ResNet50 and VGG-19 models, with an accuracy of 98.7132%. The proposed CNN model achieved 95.1208% and 94.4852% accuracy during the validation and test stages. From the evaluation, the best-performing models for road feature classification are ResNet50 and VGG-19, with an accuracy of 98.7132%. | Deep learning; Transfer Learning; Road feature; Classification | http://dx.doi.org/10.12785/ijcds/1571031764 | Mustafa Majeed Abd Zaid and Ahmed Abed Mohammed (Islamic University, Najaf, Iraq); Putra Sumari (Universiti Sains Malaysia (USM) & School of Computer Science, Malaysia)
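As an illustration of the transfer-learning setup in entry 42, the sketch below freezes an ImageNet-pretrained ResNet50 and adds a small classification head for the four road-feature classes. The head's layer sizes, dropout rate, and the data-loading hint are assumptions; the authors' custom 24-layer CNN is not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # crosswalk, intersection, overpass, roundabout

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained features; optionally fine-tune later

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# train with e.g. tf.keras.utils.image_dataset_from_directory("data/train", image_size=(224, 224))
```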
43 | 1571032062 | Automatic Detection of Sewage Defects, Traffic Lights Malfunctioning, and Deformed Traffic Signs Using Deep Learning | Effective urban management relies on timely detection and resolution of infrastructural anomalies such as sewage defects, malfunctioning traffic lights, and deformed traffic signs. Traditional methods of inspection often prove inefficient and time-consuming. For automatic detection of these urban infrastructural issues, this paper presents a multi-task convolutional neural network architecture capable of simultaneously identifying sewage defects, malfunctioning traffic lights, and deformed traffic signs from street-level imagery. The model is trained on a diverse dataset comprising annotated images of urban scenes captured under various environmental conditions. We demonstrate the effectiveness of our approach through extensive experimentation and evaluation on real-world datasets. Results indicate that the model achieves high accuracy and robustness in detecting the specified anomalies, outperforming existing methods. Furthermore, we discuss the potential implications of our research for urban management, including improved efficiency in maintenance operations, enhanced safety for commuters, and cost savings for municipal authorities. About 2438 images were collected across 6 categories and were augmented twice. The first augmentation enlarged the data ninefold (x9) using Keras data generation. The second augmentation was carried out on the training data only, threefold (x3), using the Roboflow tool, where we defined the angles of the shape and gave it a class name. An overall accuracy of 86% was obtained based on the F1-measure across all classes, while individual classes show different F1 values depending on the available training samples. Overall, this research contributes to the advancement of automated infrastructure inspection systems, facilitating smarter and more sustainable urban environments. | Convolutional Neural Network (CNN); Deep Learning; Sewage Defects; Traffic Lights Malfunctioning; Manhole Damage; YOLO v5 | http://dx.doi.org/10.12785/ijcds/1571032062 | Khalid Mohamed Nahar (Yarmouk University, Jordan); Firas Ibrahim Alzobi (The World Islamic Sciences & Education University, Jordan)
44 | 1571032285 | Machine Learning-Based Real-Time Detection of Apple Leaf Diseases: An Enhanced Pre-processing Perspective | Cedar, rust, spot, frogeye, and healthy are the five general categories considered in apple leaf disease (ALD) detection. Early-phase diagnosis and precise detection of ALDs can limit the extent of infection and ensure the healthy growth of apple production. Previous analyses utilize complex digital image processing (DIP) and cannot guarantee a high accuracy rate for ALDs. This article introduces a precise detection method for ALDs based on deep learning (DL). It involves creating efficient pathological images and proposing a new DL framework to detect ALDs. Utilizing a database of 3,174 images of ALDs, the proposed DL model is trained to detect the five general ALD categories. This work shows that the investigated segmentation, transformation, and feature extraction methods give enhanced outcomes in disease handling for ALDs with maximum detection performance. This article implements an approach that can detect apple leaf disease using different pre-processing methods. The ALD framework is designed for filtration and color space transformation using Median, Gaussian, HSI, and HSV models. The Grey Level Co-occurrence Matrix (GLCM) is used for texture-based feature extraction (FE), and the image creation method implemented in this article can improve the robustness of the improved feature extraction method. | Apple leaf diseases; Deep Learning; Feature Extraction (GLCM) Method; Segmentation; Transformation | http://dx.doi.org/10.12785/ijcds/1571032285 | Anupam Bonkra (Maharishi Markandeshwar University, India); Priya Jindal (Chitkara Business School, Chitkara University, Punjab, India); Ekkarat Boonchieng (Chiang Mai University, Thailand); Mandeep Kaur (Maharishi Markandeshwar University, India); Naveen Kumar (Chitkara University, India)
45 | 1571032872 | Blind Image Separation based on Meta-heuristic Optimization Methods and Mutual Information | Blind image separation is one of the modern disciplines in digital signal processing (DSP). The core of this problem is that two images are mixed into one image, and the task is to separate these images and recover the originals. There are many methods and strategies used to solve this problem. One of these solutions is unsupervised machine learning, as in Independent Component Analysis (ICA), which uses the statistical properties of the latent images. This method essentially depends on the statistical characteristics of the observed signals and on non-Gaussianity conditions between the mixed images. For many applications, ICA needs enhancement, and many optimization methods have been used for that purpose. Swarm intelligence methods are one of many techniques utilized to enhance ICA's efficiency. For this purpose, three swarm optimization methods are used in this paper: Quantum Particle Swarm Optimization (QPSO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). These methods were implemented separately on nine gray-scale images with seven mixing cases. The results were evaluated under three assessment metrics: the Structural Similarity Index Measure, Peak Signal-to-Noise Ratio, and Normalized Cross-Correlation. Applying this system gave optimal results under the specified measurements. | Blind Image Separation; BSS; ICA; Cocktail Party Problem | http://dx.doi.org/10.12785/ijcds/1571032872 | Hussein Mohammed Salman (University of Babylon, Iraq); Ali Kadhum M. Al-Qurabat (University of Babylon & College of Science for Women, Iraq); Abd alnasir Riyadh Finjan (Supreme Commission for Hajj and Umrah, Iraq)
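Entry 45 enhances ICA with swarm optimizers (QPSO, PSO, ABC), which the abstract does not specify at code level. The sketch below therefore shows only the baseline ICA separation step on two synthetic mixtures, using scikit-learn's FastICA; the mixing matrix and the non-Gaussian source statistics are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# two synthetic 64x64 "source images" with non-Gaussian pixel statistics
s1 = rng.laplace(size=(64, 64))
s2 = np.sign(rng.standard_normal((64, 64)))
S = np.column_stack([s1.ravel(), s2.ravel()])

A = np.array([[0.7, 0.3], [0.4, 0.6]])  # unknown mixing matrix (assumed here)
X = S @ A.T                              # two observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)             # recovered sources, up to scale/order

# correlation with the true sources confirms separation despite sign/order ambiguity
for i in range(2):
    corrs = [abs(np.corrcoef(S_hat[:, i], S[:, j])[0, 1]) for j in range(2)]
    print(f"estimated source {i}: best match corr = {max(corrs):.3f}")
```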
46 | 1571033189 | Collaborative multi-agent deep reinforcement learning approach for enhanced attitude control in quadrotors | Unmanned Aerial Vehicles (UAVs), particularly quadrotors, have become highly versatile platforms for various applications and missions. In this study, the employment of Multi-Agent Reinforcement Learning (MARL) in quadrotor control systems is investigated, expanding its conventional usage beyond multi-UAV path planning and obstacle avoidance tasks. While traditional single-agent control techniques face limitations in effectively managing the coupled dynamics associated with attitude control, especially when exposed to complex scenarios and trajectories, this paper presents a novel method to enhance the adaptability and generalization capabilities of Reinforcement Learning (RL) low-level control agents in quadrotors. We propose a framework consisting of collaborative MARL to control the Roll, Pitch, and Yaw of the quadrotor, aiming to stabilize the system and efficiently track various predefined trajectories. Along with the overall system architecture of the MARL-based attitude control system, we elucidate the training framework, collaborative interactions among agents, neural network structures, and reward functions implemented. While experimental validation is pending, theoretical analyses and simulations illustrate the envisioned benefits of employing MARL for quadrotor control in terms of stability, responsiveness, and adaptability. Central to our approach is the employment of multiple actor-critic algorithms within the proposed control architecture, and through a comparative study, we evaluate the performance of the advocated technique against a single-agent RL controller and established linear and nonlinear methodologies, including Proportional-Integral-Derivative (PID) and Backstepping control, highlighting the advantages of collaborative intelligence in enhancing quadrotor control in complex environments. | Quadrotors; Attitude Control; Multi-Agent Deep Reinforcement Learning; Collaborative Intelligence | http://dx.doi.org/10.12785/ijcds/1571033189 | Trad Taha Yacine and Choutri Kheireddine (Aeronautical Sciences Laboratory Aeronautical and Spatial Studies Institute, Algeria); Mohand Lagha (Aeronautical Science Laboratory Aeronautical and Spatial Studies Institute, Algeria); Khenfri Fouad (Energy and Embedded Systems for Transport, ESTACA'Lab, Laval, France) |
47 | 1571033487 | An Efficient IoT-based Prediction and Diagnosis of Cardiovascular Diseases for Healthcare Using Machine Learning Models | The Internet of Things (IoT) and Machine Learning (ML) models are emerging technologies that are changing our daily lives. They are also considered game-changing technologies of recent years, catalyzing a paradigm shift in traditional healthcare practices. Cardiovascular disease (CVD) is considered a major reason for the high death rate around the world. Cardiovascular disease is caused by several risk factors like an unhealthy diet, sugar, high blood pressure (BP), smoking, etc. Preventive treatment and early intervention for those at risk depend heavily on the prompt and accurate prediction of illnesses. Developing prediction models with improved accuracy is essential given the increasing use of electronic health records. Recurrent neural network variants of deep learning are capable of handling sequential time-series data. Remote places often lack access to a skilled cardiologist. Our proposal aims to develop an efficient community-based recommender system using IoT technology to detect and classify heart diseases. To address this issue, machine learning techniques are applied to a dataset to predict patients with cardiovascular disease, because it is difficult for the medical team to identify CVD effectively. A public dataset is used that contains data of 70000 patients gathered at the time of medical examination, where each row has 13 attributes. The risk groups were determined by their likelihood of developing cardiovascular disease, and the system works successfully in forecasting the disease utilizing the support system. | Internet of things; Machine Learning; heart disease; Decision Tree; Disease Detection | http://dx.doi.org/10.12785/ijcds/1571033487 | Hamza Aldabbas (Albalqa Applied University, Jordan); Zaid Mustafa (Al-Balqa Applied University, Jordan)
48 | 1571037323 | Enhancing Smartphone Motion Sensing with Embedded Deep Learning | Embedded systems and smartphones are vital in real-time applications, shaping our interaction with technology. Smartphones possess various sensors like accelerometers, gyroscopes, and magnetometers. Deep Learning (DL) models enhance the capabilities of these sensors, enabling real-time analysis and decision-making with accuracy and speed. This study demonstrates an intelligent system that detects smartphone movements using DL techniques such as convolutional neural networks (CNNs) and stacked autoencoders (SAEs). The dataset has six smartphone movements, with 921 samples split into 695 for training and 226 for testing. The best training performance was achieved by Auto-Encoder 1 and Auto-Encoder 2. The SAEs had high classification accuracy (CA) and AUC values of 0.996 and 1.0, respectively. Similarly, the CNN performed well with CA and AUC values of 0.991 and 0.998. These results show that CNNs and SAEs are effective in accurately identifying smartphone movements. The findings help improve smartphone apps and the understanding of how well they can identify movement. Future research can improve motion detection by integrating more sensor data and advanced models. Using advanced deep learning architectures like RNNs or transformers can enhance the understanding and accuracy of predicting smartphone movements. | Auto-Encoder; CNN; Embedded Systems; Machine Learning; Smart Phone; Sensor Data | http://dx.doi.org/10.12785/ijcds/1571037323 | Terlapu Panduranga Vital (jntuK, India & Aditya Institute of Engineering Technology and Management, India); Jayaram D (Osmania University, India); Vijaya Bendalam (Jntugv, India & Aditya Institute Of Technology And Management, India); Ramesh Bandaru and Ramesh Yegireddi (Aditya Institute of Technology and Management, India); G Stalin Babu (GMR Institute of Technology, India); Vishnu Murthy Sivala and Ravikumar T (Aditya Institute of Technology and Management, India)
49 | 1571038124 | Enhancing Diabetes Prediction Using Ensemble Machine Learning Model | Diabetes is an incurable disease with adverse effects on health; hence, it has to be detected early to avoid further damage to the body. This study establishes the use of machine learning for diabetes prediction based on factors such as glucose levels, blood pressure, skin fold thickness, and insulin. The purpose of this study is to identify the potential of machine learning techniques, such as Support Vector Machine (SVM), Logistic Regression, and the proposed Ensemble Model, for the prediction of diabetes. To this aim, a dataset including the fundamental medical features of a general population of patients was employed. Data pre-processing was done to handle missing data, normalize the data, and extract features, with a view to enhancing the performance of the proposed model. All the models were developed, and the data was split to perform k-fold cross-validation to make the predictions more accurate. From the evaluation metrics, it is evident that the proposed Ensemble Model is the most appropriate, since it has a higher accuracy rate than the Support Vector Machine and Logistic Regression models. To compare the performance of each model, the metrics used include accuracy, precision, recall, and F1-score. The analysis therefore shows that the proposed Ensemble Model is effective in the prediction of diabetes, which is why data mining should be considered in order to improve healthcare delivery systems. | Diabetes Prediction; Machine Learning; Ensemble Learning; Predictive Modeling; Health Informatics; Diabetes Risk Factor | http://dx.doi.org/10.12785/ijcds/1571038124 | Aniket Kailas Shahade and Priyanka Vinayakrao Deshmukh (Symbiosis Institute of Technology, Pune, India)
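The model comparison in entry 49 (SVM and Logistic Regression against a combined ensemble, scored by k-fold cross-validation) can be sketched with scikit-learn as below. The synthetic data stands in for the medical features named in the abstract, and soft voting is an assumed combination rule, since the abstract does not state how the ensemble is built.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for features like glucose, blood pressure, skin fold, insulin
X, y = make_classification(n_samples=768, n_features=8, n_informative=5, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
ensemble = VotingClassifier([("svm", svm), ("lr", logreg)], voting="soft")

for name, clf in [("SVM", svm), ("LogReg", logreg), ("Ensemble", ensemble)]:
    scores = cross_val_score(clf, X, y, cv=5)   # k-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```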
50 | 1571038958 | Shortest Path Optimization for Determining Nearest Full Node from a Light Node in Blockchain IoT Networks | In a blockchain IoT network, there exists a diversity of devices, including full nodes and light nodes, each with varying capacities and roles. Full nodes have the capability to store the entire ledger, whereas light nodes, constrained by limited memory capacity, cannot store it. However, light nodes can efficiently retrieve data from full nodes and actively participate in network transaction approvals, especially in critical applications such as the military and healthcare sectors. To enable light nodes to approve transactions by verifying blockchain ledgers, determining the nearest full node from a light node is imperative. While several algorithms exist for this purpose, the Routing Protocol for Low-Power and Lossy Networks (RPL) emerges as the optimal choice. In comparison to other algorithms like Dijkstra's Algorithm, the Floyd-Warshall Algorithm, Genetic Algorithms (GA), and Ant Colony Optimization (ACO), RPL stands out with distinct advantages. While Dijkstra's Algorithm and the Floyd-Warshall Algorithm excel in finding shortest paths, they may not be optimized for the unique constraints and dynamics of IoT networks. Genetic Algorithms (GA) offer heuristic solutions but may lack adaptability to real-time changes in network topology, while Ant Colony Optimization (ACO) may face scalability and resource constraints in IoT environments. Conversely, RPL is meticulously tailored for the low-power and lossy networks inherent to IoT settings. Its capability to form Directed Acyclic Graphs (DAGs) and dynamically adjust routes based on metrics like hop count and energy efficiency positions it as an ideal choice for determining the nearest distance between light nodes and full nodes in a blockchain IoT network. By capitalizing on its adaptability and efficiency, RPL surpasses other algorithms in enabling efficient data retrieval and facilitating network transaction approvals, thereby ensuring the seamless operation of blockchain IoT systems. | IoT networks; Blockchain IoT; Destination Advertisement Object (DAO) Messages; Directed Acyclic Graph (DAG) topology; DODAG Information Object (DIO) Messages | http://dx.doi.org/10.12785/ijcds/1571038958 | Vivek Anand M (Galgotias University, India & Kumaraguru College of Technology, India); Srinivasan S (Galgotias University, India)
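Entry 50 frames the task as finding the nearest full node from a light node. For contrast with RPL, the baseline graph formulation the abstract discusses looks like the following NetworkX sketch using Dijkstra's algorithm; the toy topology, node names, and edge weights are invented for illustration.

```python
import networkx as nx

# toy IoT topology: L1 is a light node, F1/F2 are full nodes; weights are link costs
G = nx.Graph()
G.add_weighted_edges_from([
    ("L1", "A", 1), ("A", "F1", 3), ("L1", "B", 2),
    ("B", "F2", 1), ("A", "B", 1),
])
full_nodes = {"F1", "F2"}

lengths, paths = nx.single_source_dijkstra(G, "L1")    # costs/paths from the light node
candidates = {f: lengths[f] for f in full_nodes if f in lengths}
nearest = min(candidates, key=candidates.get)          # nearest reachable full node
print(nearest, candidates[nearest], paths[nearest])    # F2 at cost 3
```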
51 | 1571050022 | Instruction-Level Customization and Automatic Generation of Embedded Systems Cores for FPGA | Reducing power consumption and improving performance are crucial requirements for many applications, especially power-hungry and time-consuming ones. This is particularly true when these applications run in power- or time-constrained environments like battery-operated embedded systems or Internet of Things (IoT) devices. A general-purpose processor is not promising for this kind of application, as it cannot provide optimized performance and power consumption for specific applications. That is why domain-specific architectures (DSA) are gaining popularity, as they promise optimized performance for these types of applications in terms of throughput, power consumption, and overall cost. Unfortunately, the use of DSA presents inherent limitations, as it requires a custom design for each group of applications and cannot offer optimized performance for each specific application. This paper explains how to take advantage of the open standard Instruction Set Architecture (ISA) of the fifth generation of Reduced Instruction Set Computer (RISC-V) to automate the generation of a uni-processor core customized for a certain application such that the processor supports only the very specific instructions needed by this application. The proposed generator is capable of producing the Register Transfer Level (RTL) description of a processor core for any desired application given its source code. This work targets Field Programmable Gate Arrays (FPGAs) due to their re-configurability. When compared with general-purpose processors, the conducted experiments show that application-specific cores generated by our approach achieved energy and execution time reductions reaching 8% and 5%, respectively, on some of the used benchmarks. The proposed methodology also offers added flexibility stemming from the possibility to automatically re-configure the FPGA when a new or upgraded software application that would benefit from modifying the set of supported instructions is deployed. | Embedded Systems; Performance; Power Consumption; Instruction-Level Processor Customization; RISC-V; FPGA | http://dx.doi.org/10.12785/ijcds/1571050022 | Omar Yehia and Sandra Raafat (Ain Shams University, Cairo, Egypt); M. Watheq El-Kharashi (Ain Shams University, Egypt); Ayman M. Wahba (Ain Shams University & Faculty of Engineering, Egypt); Cherif R. Salama (The American University in Cairo & Ain Shams University, Egypt)
52 | 1571020059 | A Generative Encoder-Decoder Model for Automated Quality Control Inspections | This paper introduces a novel generative model based on an encoder-decoder architecture for defect detection within Industry 4.0 frameworks, focusing on the escalating need for automated quality control in manufacturing settings. Precision and efficiency, crucial in such environments, are significantly enhanced by our approach. At the core of our methodology is the strategic incorporation of random Gaussian noise early in the image processing sequence. This deliberate interference disrupts the model's ability to reconstruct images of defective parts, thereby enhancing both the accuracy and robustness of defect detection. The model further integrates skip connections during the decoding phase, with a special emphasis on the first two connections. These are augmented with multi-head attention mechanisms and spatial reduction techniques, followed by targeted convolutions. This intricate configuration helps preserve vital local features while filtering out superfluous data, facilitating precise image reconstruction and effectively addressing the often problematic issue of locality loss during the upsampling process. Moreover, our model excels in maintaining contextual integrity and capturing multi-scale features, which is crucial for detailed defect detection. Each block of the architecture connects to a scaled version of the original image, allowing for nuanced feature analysis. Extensive testing and validation on real-world datasets have proven the model's high efficiency and accuracy in identifying defects, marking a significant advancement in automated quality control systems. | Anomaly Detection; Vision Transformer; Quality Control; Industry 4.0 | http://dx.doi.org/10.12785/ijcds/1571020059 | Khedidja Mekhilef, Fayçal Abbas and Mounir Hemam (Abbes Laghrour University of Khenchela, Algeria)
53 | 1571029446 | Digital Intelligence for University Students Using Artificial Intelligence Techniques | The research problem arose from the researchers' sense of the importance of Digital Intelligence (DI), as it is a basic requirement for helping students engage in the digital world and be disciplined in using technology and digital techniques; students' ideas are particularly susceptible to influence at this stage in light of modern technology. The research aims to determine the level of DI among university students using Artificial Intelligence (AI) techniques. To verify this, the researchers built a measure of DI. The measure in its final form consisted of (24) items distributed among (8) main skills, and the validity and reliability of the tool were confirmed. It was applied to a sample of 139 male and female students chosen in a stratified random manner from students at the University of Baghdad, College of Education for Pure Sciences/Ibn Al-Haitham - Department of Computer Science. The proposed AI model used two artificial intelligence techniques: Decision Tree (DT) and Random Forest (RF). The classification accuracy using DT was 92.85%. The RF technique was applied to find the essential features, and the Pearson correlation was used to find the correlation between the features. The results showed that the students possess DI, which can contribute to developing plans and programs to improve their skills. | Digital Intelligence; Artificial Intelligence; University; Students; DT; RF | http://dx.doi.org/10.12785/ijcds/1571029446 | Ban Hassan Majeed, Wisal Hashim Abdulsalam, Zainab Hazim Ibrahim, Rasha H. Ali and Shahlaa Mashhadani (University of Baghdad, Iraq)
54 | 1571032389 | Enhancing Productivity: Design and Implementation of an Automated Elevator for the Rohs Wave Machine | Automation technologies, particularly programmable logic controllers (PLCs), play a crucial role in optimizing processes and improving efficiency. Integrating automation into industrial environments enhances productivity and responsiveness to market dynamics by emphasizing control, monitoring, and analysis of production processes. With this in mind, our project aimed to develop an automated elevator into the Rohs Wave Machine to eliminate manual labor and boost productivity. The project commenced with a comprehensive needs assessment to delineate specific requirements, ensuring that all operational needs were thoroughly understood. Following this, the technical phase began with meticulous design and simulation using Siemens Step 7 software, focusing on both functionality and reliability. This phase also involved detailed engineering of the robust metal support structure, essential for the system's stability and durability. The next crucial step was programming the TSX Micro PLC with SCHNEIDER PL7pro software, which allowed for seamless integration and precise control of the elevator's operations. Finally, with careful attention to detail, all components were systematically assembled, installed, and calibrated, ensuring optimal performance. This rigorous process culminated in the successful operationalization of the automated elevator system. This project epitomizes a synergy of meticulous planning, advanced technical skills, and thorough execution, resulting in a transformative solution that significantly enhances productivity and operational efficiency. | Programmable Logic Controller; Automation; LADDER language; Step7; PL7pro; Automated Elevator | http://dx.doi.org/10.12785/ijcds/1571032389 | Brahim Zraibi and Mohamed Mansouri (Hassan First University of Settat, Morocco)
55 | 1571032811 | A Systematic Review on Effectiveness and Contributions of Machine Learning and Deep Learning Methods in Lung Cancer Diagnosis and Classifications | In the current scenario of people's health, lung cancer is the principal cause of cancer-related deaths, and its death rates are steadily rising. At the same time, radiologists are understaffed and under pressure to work overtime, making it difficult to appropriately assess the increasing volume of image data. Consequently, several researchers have developed automated techniques for quickly and accurately predicting the development of cancer cells using medical imaging and machine learning techniques. As advances in computer-aided systems have been made, deep learning techniques have been thoroughly investigated to aid in understanding the results of computer-aided diagnosis (CADx) and computer-aided detection (CADe) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray for the identification of lung cancer. This study provides a thorough review of the deep learning methods created for lung cancer diagnosis and detection. It presents an overview of deep learning (DL) and machine learning (ML) approaches for applications centered on lung cancer diagnosis and the advancements of the methods being studied. The study focuses on segmentation and classification, the two primary deep learning tasks for lung cancer detection and screening. The benefits and drawbacks of the deep learning models currently in use are also covered. DL technologies can deliver accurate and efficient computer-assisted lung tumor detection and diagnosis, as shown by the subsequent analysis of the scan data. This study ends with a description of potential future studies that might enhance the use of deep learning in the creation of computer-assisted lung cancer detection systems. | Lung Cancer; Diagnosis; Classification; Medical Imaging; Machine Learning; Deep Learning | http://dx.doi.org/10.12785/ijcds/1571032811 | Jayapradha J (SRM Institute of Science and Technology & Kattankulathur, India); Su-Cheng Haw (MMU, Malaysia); Naveen Palanichamy and Elham Anaam (Multimedia University, Malaysia); Senthil kumar Thillaigovindhan (SRM Institute of Science and Technology, India)
56 | 1571037888 | Cataract Detection and Classification Using Deep Learning Techniques | Detecting eye diseases at an early stage can reduce damage to the eye and improve the chance of a cure. Using artificial intelligence techniques in medical applications, eye diseases can be detected and classified by deep learning models applied to color images of the eye. In this paper, cataract detection, recognition, and classification have been achieved using Convolutional Neural Network (CNN) deep-learning models applied to retinal fundus color images. A dataset of 400 color images has been classified into 300 normal images and 100 cataract images. These datasets were pre-processed automatically using histogram equalization (HE) and contrast limited adaptive histogram equalization (CLAHE), in addition to a segmentation process. Three models have been used in this work, GoogleNet, ResNet-101, and Densenet201, applied in three cases: the first case uses the original images without pre-processing, the second uses HE pre-processed images, and the third uses HE and CLAHE pre-processed images. The work achieved a high testing accuracy exceeding 98% with the Densenet201 model and a classification accuracy of 90% with the GoogleNet model. The experimental results are evaluated using common performance metrics such as accuracy, precision, sensitivity, specificity, and F1-score for both the cataract detection and classification cases. The performance of the proposed work means this model can be used to improve eye health through accurate early detection, training, and future education, and it represents a considerable step toward the automatic detection and classification of cataracts. | Artificial Intelligence; Cataract; CNN; Deep Learning; Densenet201 | http://dx.doi.org/10.12785/ijcds/1571037888 | Abdullah Ali Jabber (Al-Furat Al-Awsat Technical University & Engineering Technical College Najaf, Iraq); Ahmed Hadi (Al-Furat Al-Awsat Technical University, Iraq); Salim Wadi, S. (AL-Furat AlAwsat Technical University, Iraq); Ghada Ali Shadeed (Un, Iraq)
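The pre-processing cascade in entry 56 (HE followed by CLAHE before segmentation and CNN training) can be sketched with OpenCV as below. Applying the enhancement on the LAB luminance channel and the CLAHE parameters are common-practice assumptions, not the authors' stated settings.

```python
import cv2
import numpy as np

def preprocess_fundus(img_bgr):
    """HE then CLAHE on the luminance channel, mirroring the paper's third case.

    Clip limit and tile grid are common defaults, assumed for illustration.
    """
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.equalizeHist(l)                                    # global HE
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)                                         # local CLAHE
    return cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)

# usage on a synthetic image; replace with cv2.imread("fundus.jpg")
demo = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
out = preprocess_fundus(demo)
print(out.shape, out.dtype)
```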
57 | 1571039804 | Dynamic Fast Convergence Improvement using Predictive Network Analysis | In today's digital age, the smooth operation of organizations heavily relies on the proper functioning of the network infrastructure. Imagine a situation in which a major change in the structure of a network interrupts vital services. Consequently, network convergence optimization is a vital consideration in practical settings. The aim of our study is to tackle the existing issues through a comprehensive approach that integrates predictive analysis, adaptive adjustment strategies, and effectiveness improvements to the Spanning Tree Protocol (STP). Our goal was to decrease the duration of convergence and improve the network's stability. The study combines several machine learning techniques, including ARIMA, link prediction, and graph embedding. We performed real-time network monitoring, utilizing predictive analysis to direct a process of adaptive convergence adjustments. The outcomes were positive: the upgraded STP solution considerably decreases convergence times, forecasting low convergence times with 70% accuracy and high convergence times with 80% accuracy. Additionally, it delivers a large reduction in network disturbances, correctly anticipating low disruptions with 80% accuracy and high disruptions with 85% accuracy. Moreover, the approach maximizes resource use, successfully forecasting low utilization with 75% accuracy and high utilization with 70% accuracy. Diagonal components of the confusion matrix indicate correct forecasts, whereas off-diagonal components indicate misclassifications; overall, the matrix underscores the solution's resilience and its strong positive influence on network stability and efficiency. | Predictive; Network Analysis; Spanning Tree Protocol (STP); dynamic environments; vital real-world issue | http://dx.doi.org/10.12785/ijcds/1571039804 | Mohammed Hussein Ali (Aliraqia University, Iraq) |
58 | 1571047611 | Forecasting Trends in Cryptocurrencies through the Application of Association Rule Mining Techniques | Data mining is widely applied to the stock market and cryptocurrencies. In this paper, we applied a data mining approach to implement association rules. Our significant contribution is to ascertain a robust correlation between four cryptocurrencies: Bitcoin, Litecoin, Ethereum, and Monero. Specifically, this paper used data mining techniques to predict and discover association rules between these four cryptocurrencies in order to identify optimal points for selling and buying. Our models utilized the apriori algorithm to forecast and determine association rules in our datasets. We aim to ascertain the link between Bitcoin and the other currencies over the next 24 hours; for instance, given a current buy or sell of Bitcoin, we can forecast the movement of Litecoin over the next three hours, and we carried out the same prediction for the other currencies. We evaluated the quality of the generated rules using two metrics: support and lift. Experimental analysis shows that our method successfully generates strong association rules. | Cryptocurrency; Bitcoin; Apriori; Association rules; Ethereum; Litecoin | http://dx.doi.org/10.12785/ijcds/1571047611 | Mohamed EL Mahjouby (Sidi Mohamed Ben Abdellah University, Morocco); Bennani M Taj (Université Sidi Mohamed Ben Abdellah, Faculté des Sciences Dhar El Mahraz, Fes, Morocco); Khalid El Fahssi (Sidi Mohamed Ben Abdellah University, Morocco); Said Elgarouani (Department of Computer Science Laboratory (LISAC), Faculty of Science Dhar El Mahraz, USMBA, Fez, Morocco); Mohamed Lamrini (Sidi Mohamed Ben Abdellah University & Faculty of Sciences Dhar El Mahraz, Morocco); Mohamed EL Far (Sidi Mohammed Ben Abdellah University, Morocco) |
59 | 1571048308 | Development of Cyber Security Awareness and Education Framework | All barriers have been removed by the internet, which has completely changed how we interact with one another, watch movies, make friends, work, play games, shop, listen to music, place food orders, pay bills, and wish people happy anniversaries and birthdays. Key services and infrastructures in our increasingly networked world are based on digital information [1]. There is no doubt about the benefits offered by the internet, but alongside these benefits come many disadvantages, one of which is cybercrime, which costs millions of dollars every year. User awareness is considered one of the most significant factors in dealing with cyber-attacks. Although numerous efforts (such as emails, posters, in-class instruction, web seminars, and games) are made to raise awareness, these efforts often fail to achieve their purpose because the initiatives are implemented without proper planning, as there are no standard guidelines to follow when developing awareness initiatives. Therefore, this research aims to fill the gap left by the absence of a structured approach to designing awareness programs. This study proposes a framework that can serve as a benchmark, providing guidance on how to successfully plan and implement cyber security awareness campaigns and programs by identifying all the key factors. | Cybersecurity; Cybercrimes; User Awareness; Awareness Program | http://dx.doi.org/10.12785/ijcds/1571048308 | Amreen Ashraf M. Sharif and Jaflah Alammary (University of Bahrain, Bahrain) |
60 | 1571015450 | Deep Learning Algorithm using CSRNet and Unet for Enhanced Behavioral Crowd Counting in Video | In crowd analysis, video data poses challenges due to occlusion, high crowd densities, and dynamic environmental conditions. To address these challenges and enhance accuracy, we propose Behavioral Crowd Counting (BCC), which combines the Congested Scene Recognition Network (CSRNet) with Unet on video data. CSRNet comprises two networks: (1) a frontend for feature extraction and (2) a backend for density map generation. It effectively tallies individuals within densely populated regions, offering a solution to the constraints imposed by high crowd densities. The Unet builds the semantic map and refines the semantic and density maps produced by CSRNet. It unravels complex patterns and connections among individuals in crowded settings, capturing spatial dependencies within densely populated scenes, and offers the flexibility to incorporate attention maps as optional inputs to differentiate crowd regions from the background. We also developed a new video dataset, the Behavioral Video Dataset, derived from the fine-grained crowd-counting image dataset, to evaluate the BCC model. The dataset includes standing vs sitting, waiting vs non-waiting, towards vs away, and violent vs non-violent videos, offering insights into posture, activity, directional movement, and aggression in various environments. Empirical findings illustrate that our approach outperforms others in behavioral crowd counting on video datasets of congested scenes, as indicated by the MSE, MAE, and CMAE metrics. | Congested Scene Recognition Network (CSRNet); Unet; Feature Extraction; Behaviour; Crowd Analysis | http://dx.doi.org/10.12785/ijcds/1571015450 | B Ganga (University of Visvesvaraya College of Engineering); Lata B T, K Rajshekar and KR Venugopal (UVCE, India) |
61 | 1571025378 | AI-Based Disaster Classification using Cloud Computing | The combination of cloud computing and artificial intelligence (AI) offers a potent remedy for disaster management and response systems in this age of rapidly advancing technology. Using text and image data gathered from social media sites, this project makes use of the collective intelligence present in the data. We carefully trained a bidirectional LSTM model for textual analysis and a Convolutional Neural Network (CNN) model for image classification using Kaggle datasets. Our system's fundamental component is an API deployed on an Amazon Web Services (AWS) EC2 instance. To improve performance and stability, the API is strengthened with load balancing, auto-scaling features, and multi-AZ redundancy. The API integrates with the trained models to determine whether incoming content is relevant to a disaster scenario. When a positive classification is made from the processed text or image, an alert mechanism sends out an email notification with important information about the detected disaster. The abundance of user-generated content on social media sites like Facebook, Instagram, and Twitter presents a unique opportunity to improve the efficacy and efficiency of disaster relief operations. The main objective of this project is to use cutting-edge technologies to sort through massive amounts of social media data and derive useful insights in emergency situations. | Machine Learning; Deep Learning; Long Short-Term Memory; Elastic Compute Cloud; Artificial Intelligence | http://dx.doi.org/10.12785/ijcds/1571025378 | Rathna R (Vellore Institute of Technology, India); Aryan Purohit (Vellore Institute of Technology - Chennai Campus, India); Allen Stanley (Vellore Institute of Technology Chennai, India) |
62 | 1571028796 | Security of SDDN based on Adaptive Clustering Using Representatives (CURE) Algorithm | In the current landscape of data center networking, the software-defined data center network (SDDN) has emerged as a transformational solution to the inherent complexities of network control. Nonetheless, despite its many advantages, critically important issues complicate its implementation, chief among them security, performance, reliability, and fault tolerance. Security is especially vital, since SDDNs are exposed to many Distributed Denial of Service (DDoS) attacks. In this regard, a new machine-learning-based CURE algorithm framework is proposed in this paper to overcome these security challenges. It uses an Adaptive CURE algorithm to minimize the effect of DDoS attacks; the algorithm is designed with adaptive input, depending on the available processing resources. The controller, acting as a central coordinator, captures suspicious traffic and, if a traffic anomaly is detected, forwards a copy of the suspicious traffic to the processing and analysis unit. The adopted approach applies the Adaptive CURE algorithm to distinguish anomalous traffic indicating potential DDoS attacks with great accuracy, through a comprehensive study of traffic patterns. The algorithm's intelligence facilitates the identification of DDoS attacks and allows the controller to update switches with suitable flow entries. Such response mechanisms further improve the security posture of SDDN networks, providing a strong defense against DDoS attacks. The experimental results show that the proposed framework achieves an accuracy of up to 96.2% across various DDoS attacks. | Software-Defined Data Center Network; DDoS Attack; CURE algorithm; Datacenter | http://dx.doi.org/10.12785/ijcds/1571028796 | Mohammed Swefee and Alharith A. Abdullah (University of Babylon, Iraq) |
63 | 1571087751 | A Multi-Radio Channel Hopping Rendezvous Scheme in Cognitive Radio Networks for Internet of Things | With the rapid expansion of the Internet of Things (IoT), the demand for wireless spectrum is increasing exponentially in both licensed and unlicensed bands. The existing fixed spectrum assignment policy creates a bottleneck, as assigned spectrum often remains unutilized or underutilized. To overcome this issue, cognitive radio technology has emerged as a promising solution to spectrum assignment problems. In a Cognitive Radio Network (CRN), unlicensed users or secondary users (SUs) must meet on an available channel to establish a communication link for the necessary information exchange; this process is known as rendezvous. However, SUs are unaware of each other when no centralized controller is involved. Channel Hopping (CH) is a rendezvous technique that requires no centralized controller. Most existing CH algorithms assume single-radio SUs. As the cost of wireless transceivers declines, multiple radios can be employed to improve rendezvous performance. This paper proposes a multi-radio matrix-based CH algorithm that equips each SU with two radios instead of one. Compared with existing single-radio algorithms, the proposed CH algorithm performs better by lowering the upper bounds on time to rendezvous. Our paper presents a comprehensive analysis of the benefits of incorporating an additional radio, demonstrating how this innovation leads to more efficient and timely rendezvous, thereby enhancing the overall communication capabilities within CRNs. | Cognitive Radio; Rendezvous; Common Control Channel; Channel Hopping | http://dx.doi.org/10.12785/ijcds/1571087751 | Mohd Asifuddola (Aligarh Muslim University, India & Aligarh Muslim University, Aligarh, India); Mumtaz Ahmed (Jamia Millia Islamia, India); Liyaqat Nazir (National Institute of Technology, Srinagar, India); Mohd Ahsan Siddiqui (NITTTR, Chandigarh, India); Shakeel Ahmad Malik (Islamic University of Science and Technology Awantipora, Kashmir, India); Mohammad Ahsan Chishti (National Institute of Technology Srinagar, India) |
64 | 1571009176 | A Systematic Framework To Enhance Reusability In Microservice Architecture | Microservices Architecture (MSA) has gained substantial traction in the software industry in recent years due to its promise of greater scalability, flexibility, and resilience compared to traditional monolithic architectures. This architectural style decomposes applications into small, loosely coupled services that can be developed, deployed, and scaled independently. The rise of cloud computing and DevOps practices has further propelled the adoption of MSA, making it a cornerstone of modern software engineering. Within this context, software reuse and MSA represent pivotal aspects of contemporary software engineering, offering profound implications for development practices and project outcomes. In this work, our objective is to enrich reusability practices within the context of MSA, recognizing and addressing five key challenges: Code Duplication, Technology Heterogeneity, Service Boundaries, Versioning, and Decision-Making. To tackle these challenges systematically, we propose the Reusable Microservices Framework (RMF), a comprehensive development process designed to optimize reusability in MSA environments. Developed in collaboration with MSA practitioners keen on advancing reusability, the RMF integrates expert recommendations and industry best practices. To validate the efficacy of our proposed framework, we conducted a simulation and real-world implementation, including adoption in a software company. Our findings reveal that the RMF can significantly enhance reusability in MSA, with observed improvements exceeding threefold. This study offers actionable insights and a practical framework for leveraging reusability to maximize the potential of MSA in contemporary software development endeavors. | Microservice Architecture; RMF; DDD; MDE; Reusability | http://dx.doi.org/10.12785/ijcds/1571009176 | Mehdi Ait Said, Sr. and Lahcen Belouaddane (Hassan First University, Morocco); Soukaina Mihi (University Hassan first of Settat, Morocco); Abdellah Ezzati (FST SETTAT, Morocco) |
65 | 1571016597 | Modulith Architecture: Adoption Patterns, Challenges, and Emerging Trends | Over the past year, the software architecture field has been dominated by the contrast between microservices and monolithic architectures, driven by the demand for scalable solutions for modern applications. Microservices, with their focus on modularity and independence, have become popular for large-scale systems, offering benefits like enhanced scalability and simplified maintenance. Conversely, monolithic architectures, known for their cohesive design, have been a traditional choice, favored for their simplicity in development. However, they may struggle with scalability as applications grow in complexity. Amidst this, Modulith Architecture (MDA) has emerged in recent years as a solution to the complexities of microservices and the limitations of traditional monolithic architectures. Combining the structural integrity of monolithic systems with the modularity of microservices, MDA offers a holistic approach to software design and development. This study investigates the adoption of MDA through a comprehensive analysis of 32 practitioners' insights. Our objective is to explore the motivations, challenges, and trends surrounding MDA adoption. Employing a qualitative approach through in-depth interviews, we uncover nuanced adoption patterns and identify key factors influencing practitioners' choices. Results indicate a varied adoption spectrum, with motivations ranging from simplicity to cost-effectiveness advantages. Technical challenges, including module dependencies and communication overhead, highlight the intricacies of MDA integration. Emerging trends, such as dynamic module loading, underscore the evolving practices within the field. This study contributes to a deeper understanding of MDA adoption dynamics, offering insights for both researchers and practitioners. | Modulith; Microservices; Modular Monolith; Software Architecture; Qualitative Study; Industrial Inquiry | http://dx.doi.org/10.12785/ijcds/1571016597 | Mehdi Ait Said, Sr. and Lahcen Belouaddane (Hassan First University, Morocco); Soukaina Mihi (University Hassan first of Settat, Morocco); Abdellah Ezzati (FST SETTAT, Morocco) |
66 | 1571017581 | Efficient Task Scheduling in Cloud using Double Deep Q-Network | Cloud computing has transformed data management with its scale and flexibility. However, cloud resources are transient and diversified, making task scheduling difficult. This paper proposes a Double Deep Q-Network (DDQN) reinforcement learning model to solve the cloud computing task scheduling problem. DDQN is a powerful reinforcement learning method that improves on Deep Q-Networks (DQN) by introducing two distinct neural networks: an online network and a target network. The target network is updated periodically to track the Q-value estimates of the online network, creating a more consistent and less unpredictable learning process. This dual-network architecture helps alleviate the overestimation bias that traditional DQN can suffer from. DDQN is a reliable and efficient tool for solving complex reinforcement learning problems, excelling at learning optimal strategies by iteratively improving its Q-value estimates. It presents a robust framework for addressing the challenges inherent in cloud computing task scheduling: its dual-network architecture and iterative learning process offer a promising avenue for enhancing the efficiency and effectiveness of resource allocation in cloud environments. Through its continuous refinement of Q-value estimates, DDQN emerges as a valuable asset in navigating the complexities of modern data management within cloud infrastructures. | Cloud computing; Data management; Task scheduling; Double Deep Q-Network (DDQN); Reinforcement learning; Deep Q-Networks (DQN) | http://dx.doi.org/10.12785/ijcds/1571017581 | Radhika Senapathi (Centurion University of Technology and Management & Anil Neerukonda Institute of Technology and Sciences, India); Sangram Keshari Swain (Centurion University of Technology and Management, India); Adinarayana Salina (Andhra University, India); Bssv Ramesh Babu (Raghu Engineering College, India) |
67 | 1571018374 | Resolving the Ozone Dilemma: An Integration of Game Theory and Time Series Forecasting | The main cause of the ozone layer's depletion, a serious environmental problem, is human activity, such as the emission of ozone-depleting chemicals like chlorofluorocarbons (CFCs). The combination of machine learning (ML) and game theory methods appears to be a novel and promising way to better anticipate and address ozone layer depletion. Game theory provides a framework for modeling the interactions between different stakeholders, such as nations or industries, that affect the dynamics of the ozone layer. Meanwhile, large-scale dataset analysis through time series forecasting, along with correlation, allows for more precise forecasts and well-informed decision-making. This study's main goal is to improve the accuracy of ozone layer depletion predictions by utilizing ARIMA time series forecasting and correlation with the Air Quality Index, along with the science of strategy via game theory for better decision-making. The proposed methodology creates a more realistic and comprehensive model by taking into account the strategic interactions among the various entities that contribute to the depletion of the ozone layer. Through this interdisciplinary approach, we hope to aid the creation of practical plans for environmental sustainability and ozone layer protection. ARIMA predicted the values for the upcoming years with a root mean squared error of 5.04. The game theory approach generates a report tailored to the needs of the user, suggesting the protocols to be followed. Finally, the authors also correlated the Air Quality Index with ozone layer depletion, achieving an accuracy of 82% with gradient boosting. | Game theory; Machine Learning (ML); Ozone layer depletion; Sustainable Artificial Intelligence | http://dx.doi.org/10.12785/ijcds/1571018374 | Prutha Annadate and Neha Aher (Dr Vishwanath Karad MIT World Peace University, India); Pradnya Vaibhav Kulkarni (Dr Vishwanath Karad MITWPU Pune, India & MITADT Pune, India); Renuka Suryawanshi (Bharati Vidyapeeth College of Engineering, India) |
68 | 1571020535 | GPR Signal Processing for Landmine Detection: A Comparative Study of Feature Extraction and Classification Algorithms | Landmine detection remains a critical challenge due to the difficulty of identifying buried threats. These hidden explosives pose a significant danger to human lives, hindering economic growth and development efforts. Traditional landmine detection methods often fall short, relying on time-consuming manual techniques or lacking the ability to identify non-metallic mines. Fortunately, advancements in technology offer various methods for locating buried landmines. Ground penetrating radar (GPR) has emerged as a powerful tool for subsurface exploration, emitting electromagnetic waves and recording reflections to create an image of buried objects. However, GPR data presents a complex picture, containing reflections from various underground features besides landmines, so effective landmine detection hinges on distinguishing these targets from background clutter. This paper delves into a comparative analysis of the feature extraction and classification techniques employed in GPR-based landmine detection. The initial stage involves feature extraction, where algorithms identify and quantify characteristics within the GPR data that discriminate landmines from other objects. Various approaches exist, including image processing techniques like edge detection and statistical methods that analyze signal intensity variations. Machine learning algorithms, such as Support Vector Machines (SVMs) and k-nearest neighbors (k-NN), can learn these discriminatory features from labeled GPR data sets containing confirmed landmine locations. This paper meticulously compares the effectiveness of these techniques using performance metrics such as probability of detection (Pd), accuracy, and false alarm rate (FAR). By evaluating these metrics across different feature extraction and classification algorithms, the paper aims to identify the optimal approach for accurate landmine detection: one that maximizes Pd while minimizing FAR, ensuring the safe and efficient identification of landmines for humanitarian demining efforts. | Ground Penetrating Radar; Buried Object Detection; Clutter Reduction; Feature Extraction; Classification; Landmine Detection | http://dx.doi.org/10.12785/ijcds/1571020535 | Kalaichelvi T (Pondicherry University, India); Ravi Subban (Pondicherry University, Pondicherry, India) |
69 | 1571023809 | Fortifying Organizational Cyber Resilience: An Integrated Framework for Business Continuity and Growth amidst Escalating Threat Landscapes | In the face of mounting cyber threats disrupting enterprises, this study emphasizes the critical role of organizational resilience in safeguarding business development and continuity. It proposes an integrated framework comprising proactive security policies, resilience testing, collaborative engagement, and the integration of emerging technologies. Employing a meticulous methodology blending literature analysis and framework development, the study identifies key components for a comprehensive cyber resilience framework. This analysis delves into evolving threat landscapes, digital ecosystems, resource constraints, and ethical obligations, surpassing established frameworks by emphasizing customization, collaboration, and proactive measures. The resulting framework is not only robust but also adaptable and ethical, offering strategic guidance for organizations seeking to embed cyber resilience within digital transformation initiatives. While acknowledging limitations and varying applicability based on organizational contexts, the study encourages further validation through field applications to enhance adaptability within diverse cybersecurity ecosystems. The practical implications extend to organizations aiming to fortify cybersecurity measures amid digital transformation. By addressing the dynamic nature of cyber threats and offering practical insights for implementation, the proposed framework supports innovation and growth. It provides a roadmap for organizations navigating the complexities of cybersecurity in the digital age, ensuring they remain resilient in the face of evolving threats. Ultimately, the study advocates for a proactive approach to cybersecurity, recognizing its pivotal role in sustaining business operations and fostering long-term success in today's interconnected world. | Digital Age; Business Continuity; Sustainable Development; Evolving Threats; Cyber Resilience; Future Emerging Technologies | http://dx.doi.org/10.12785/ijcds/1571023809 | Anas Kanaan (University of Petra, Jordan); Ahmad Mtair AL-Hawamleh (Institute of Public Administration IPA, Saudi Arabia); Mohammad Aloun (Irbid National University, Jordan); Almuhannad Alorfi (King Abdulaziz University - Rabigh, Saudi Arabia); Mohammed Abdalwahab Alrawashdeh (Imam Abdulrahman Bin Faisal University, Saudi Arabia) |
70 | 1571024445 | Lane Change Prediction of Surrounding Vehicles using Video Vision Transformers | Anticipating lane changes of surrounding vehicles is paramount for the safe and efficient operation of autonomous vehicles. Previous works rely on physical variables, which do not contain contextual information, while more recent methodologies rely on action recognition models such as 3D CNNs and RNNs and thus deal with complex architectures. Despite the advent of transformers in action recognition, few works employ transformer architectures. This research addresses the critical challenge of Lane Change Prediction (LCP) for autonomous vehicles, employing video action prediction with a focus on the integration of Video Vision Transformers (ViViT). Utilizing the PREVENTION dataset, which provides detailed annotations of vehicle trajectories and critical events, the proposed approach outperforms prior methods, achieving over 85% test accuracy in predicting lane changes with a horizon of 1 second. Comparative analyses underscore ViViT's superiority in capturing spatio-temporal dependencies in video data while requiring fewer parameters, enhancing computational efficiency. By demonstrating notable performance improvements over existing methodologies, this study contributes to advancing autonomous driving technology, showcasing ViViT's efficacy in real-world applications and advocating for its further exploration in enhancing vehicle safety and efficiency. | Lane Change Prediction; Video Vision Transformers; Computer Vision; Tubelet Embedding; Autonomous Vehicles | http://dx.doi.org/10.12785/ijcds/1571024445 | Muhmmad Abrar Raja Mohamed (Vellore Institute of Technology, India); Srinath NS and Maria Anu V (Vellore Institute of Technology, Chennai Campus, India); Joshua Thomas John Victor (UOW KDU Penang University, Malaysia); Rathna R (Vellore Institute of Technology, India); Monica K M (Vellore Institute of Technology, Chennai, India) |
71 | 1571024952 | A Hybrid Recommendation System: Improving User Experience and Personalization with Ensemble Learning Model and Sentiment Analysis | Recommendation systems have been built over the years using various machine learning (ML), deep learning (DL), and natural language processing (NLP) techniques. In this research, we introduce a novel hybrid recommendation system that incorporates sentiment analysis (using NLTK), item-based filtering algorithms, and user-based recommendations. By exploiting ensemble models, the system aims to outperform previous systems in suggestion quality and robustness. The study makes use of a proprietary dataset compiled from various sources, including Amazon, TMDb, and Google reviews. The Synthetic Minority Oversampling Technique (SMOTE) is used to alleviate class imbalance, and textual inputs are subsequently converted into numerical representations for modeling using feature extraction techniques. The ensemble incorporates supervised machine learning methods such as logistic regression (LR), Naive Bayes (NB), Gini decision trees (DT), random forest (RF), and XGBoost. The system provides personalized recommendation outputs by analyzing the input of each model. Our hybrid system attains a commendable accuracy of 96%, achieved by the XGBoost algorithm. Furthermore, our findings emphasize the significance of benchmark datasets and evaluation measures, particularly in deep learning-based recommendation systems, giving useful insights to both researchers and practitioners. Overall, our study adds a new viewpoint to the literature by focusing on the fast-growing domain of deep learning-based recommendation systems, providing a nuanced understanding of the advances, problems, and prospects in this crucial field of research. | Sentiment analysis; Ensemble learning; Recommendation System; Item-Based Filtering | http://dx.doi.org/10.12785/ijcds/1571024952 | Kulvinder Singh (UIET, Kurukshetra University, India); Sanjeev Dhawan (Faculty of Computer Science & Engineering, India); Nisha Bali (Kurukshetra University, India); Ethan Choi and Anthony Choi (Mercer University, USA) |
72 | 1571027149 | Mobile Computer-Assisted Application for Stress Detection Based on Facial Expression Using Modified Convolutional Neural Network | In this challenging digital era, stress has become an inseparable part of daily life, affecting all ages. Although researchers have discussed stress detection extensively, there are few practical and accessible applications for users. This research aims to develop a mobile application utilizing a modified Convolutional Neural Network (CNN) for stress detection based on facial expression, thereby enabling more effective and efficient stress detection and management. Well-known CNN architectures, i.e., DenseNet201, MobileNetV2, and ResNet50, proved suboptimal for detecting stress from facial expressions. Hence, the CNN architectures were modified to enhance accuracy on the task by adding dropout layers, 2D pooling layers, and ReLU activations. The research was conducted through data collection, image pre-processing, training the model with the modified CNN architectures, and developing a mobile application for stress detection. With the modifications made, this research succeeded in increasing the model's accuracy in detecting stress from facial expressions, with the modified DenseNet201 achieving the highest accuracy, improving from 75.90% to 77.83%. The mobile application can detect stress based on a facial expression image obtained from a file or the camera. In conclusion, using artificial intelligence technology, especially through modification of the CNN architecture, enhances the accuracy of stress detection from facial expressions, and the developed mobile application offers a practical solution. | convolutional neural network; stress detection; facial expression; mobile application | http://dx.doi.org/10.12785/ijcds/1571027149 | Slamet Riyadi, Naufal Rozan and Cahya Damarjati (Universitas Muhammadiyah Yogyakarta, Indonesia) |
73 | 1571031205 | Harnessing Deep Learning for Early Detection of Cardiac Abnormalities | Sudden Cardiac Arrests (SCAs) are potentially fatal events that strike suddenly, frequently without warning, and can have dire repercussions if left untreated. These incidents result in an abrupt loss of heart function caused by an electrical malfunction in the heart. Early detection and intervention are essential for increasing survival rates and reducing long-term damage. In this context, there is great potential to improve response mechanisms and deepen our understanding of SCA by combining fog computing and Deep Learning (DL) with Internet of Things (IoT) devices. This study's main goal is to investigate how DL algorithms and fog computing can be used with IoT devices to better understand and anticipate sudden cardiac arrests. The goal is to create a reliable, real-time system that can recognize possible SCA events, examine pertinent data, and enable prompt intervention. The study uses a multidisciplinary methodology, combining fog computing for IoT devices with machine learning techniques. With fog computing, real-time data from wearables, such as smartwatches and health monitors, is gathered and processed at the edge. Patterns and anomalies in the data are then analyzed using DL to find possible signs of an approaching SCA; this work utilizes a multilayer perceptron with the ReLU activation function for faster convergence. The model achieved an average accuracy of 98.65%, outperforming previous models and converging faster. Another novel feature is the alert system, which sends out an alert message whenever an SCA is predicted. The study's findings show that the comprehension of SCA is greatly improved when DL and fog computing are combined with IoT devices. The system's real-time data processing and analysis capabilities enable prompt and focused interventions that may even save lives. | Artificial Intelligence; Deep Learning; Cardiac Abnormalities; AI in Medicine | http://dx.doi.org/10.12785/ijcds/1571031205 | Prutha Annadate (Dr Vishwanath Karad MIT World Peace University, India); Mangesh V Bedekar (School of Computer Engineering and Technology & MIT World Peace University, India); Mrunal Annadate (Dr Vishwanath Karad MIT World Peace University, India) |
74 | 1571031447 | Hybrid Intelligent Technique between Supervised and Unsupervised Machine Learning to Predict Water Quality | Water is the secret of life and covers almost 70% of the Earth's surface. It has become necessary to protect the water resources around us from pollution and neglect, which can result in the loss of life and health. Artificial intelligence (AI) has the potential to improve water quality analysis, forecasting, and monitoring systems for sustainable and environmentally friendly water resource management. This work therefore focuses on building an accurate and sustainable water quality prediction model by hybridizing supervised and unsupervised machine learning techniques. A set of multi-model learning features was used to represent the state of the water and determine its suitability category (i.e., safe or unsafe). This is done by building a hybrid model between a supervised algorithm (LGBM) and unsupervised algorithms (COPOD, IForest, and CBLOF) after fusing their outlier outputs; the proposed model is called HLGBM+Fusion CIC. In addition, the camel herd swarm optimization algorithm was applied to find the optimal hyper-parameters. The models were evaluated with and without class balancing and compared in terms of accuracy, recall, precision, F1-score, and area under the curve (AUC). The results showed that the proposed HLGBM+Fusion CIC model outperformed the other models, reaching 99.2% in accuracy, AUC, and F1-score, along with 99% precision and 99.3% recall. Finally, this paper presents a framework for researchers using hybrid machine learning to forecast water quality. | Prediction; Water Quality; Artificial Intelligence; Machine Learning | http://dx.doi.org/10.12785/ijcds/1571031447 | Hanan A Aldabagh and Ruba Talal Ibrahim (University of Mosul, Iraq) |
75 | 1571033989 | Bl-Boost: A blockchain-based XG-Boost EHR scheme in Healthcare 5.0 ecosystems | Healthcare 5.0 focuses on a personalized, patient-centric approach and combines advanced technologies like artificial intelligence (AI), blockchain, Internet-of-Things (IoT), and big data to deliver preventive, proactive, and emotive healthcare. To assure the privacy of electronic health records (EHRs) in Healthcare 5.0, blockchain has emerged as a disruptive technology owing to its assured immutability, chronology, and transparency. Recent research has integrated blockchain technology with deep learning (DL) models to enhance the predictive capabilities for future disease occurrences. Nonetheless, DL models often necessitate a substantial volume of labeled data, a resource that may not be readily available in all scenarios. Boosting mechanisms can overcome this limitation by leveraging small labelled datasets and improving the model's generalization capability. Motivated by this, we propose a scheme, Bl-Boost, which combines extreme gradient boosting (XGBoost) with a long short-term memory (LSTM) model to make accurate predictions on EHR data. We store the model predictions on a local InterPlanetary File System (IPFS) server, and the hash information is published on the main blockchain. Via smart contracts (SCs), we enforce privacy-preserving access control on the data. The experimental validation is performed on the benchmark heart failure prediction dataset in terms of accuracy, loss, and precision for the LSTM and XG-Boost LSTM models. We present sample contracts for data sharing and, for blockchain metrics, validate performance in terms of scalability, IPFS cost, and trust probability against collusion attacks. The proposed outcomes indicate that the scheme has strong potential for viability in real-world deployment scenarios. | Blockchain; Deep Learning; Healthcare Analytics; Extreme Gradient Boosting; Long Short Term Memory | http://dx.doi.org/10.12785/ijcds/1571033989 | Varun Deshmukh and Sunil Pathak (Amity University, Jaipur, Rajasthan, India); Pronaya Bhattacharya (Amity University, Kolkata, India) |
75 papers.
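To make some of the techniques summarized above concrete, a few illustrative sketches follow. Each is a simplified stand-in under stated assumptions, not the authors' code. First, the HE and CLAHE preprocessing stages described in entry 56 can be reproduced with OpenCV roughly as follows; the image here is synthetic, whereas the paper works on retinal fundus photographs loaded with `cv2.imread(path, cv2.IMREAD_GRAYSCALE)`.

```python
# Sketch of the HE / CLAHE preprocessing cases in entry 56 (synthetic image).
import cv2
import numpy as np

# Synthetic low-contrast 8-bit grayscale "fundus" image.
rng = np.random.default_rng(0)
img = rng.normal(120, 10, (256, 256)).clip(0, 255).astype(np.uint8)

# Case 2 in the paper: plain histogram equalization.
he = cv2.equalizeHist(img)

# Case 3: HE followed by CLAHE (contrast limited adaptive histogram equalization).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
he_clahe = clahe.apply(he)

for name, im in [("original", img), ("HE", he), ("HE+CLAHE", he_clahe)]:
    print(f"{name}: min={im.min()}, max={im.max()}, std={im.std():.1f}")
```

CLAHE caps per-tile contrast amplification via `clipLimit`, which is why it is often applied after global equalization to recover local detail without amplifying noise.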
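For entry 58, a minimal pure-Python apriori pass shows how support and lift, the two metrics the paper reports, score rules such as "BTC up → LTC up". The transactions below are hypothetical hourly movement labels, not the paper's dataset.

```python
# Toy apriori with support and lift, illustrating entry 58 (hypothetical data).
from itertools import combinations

# Each transaction: the set of "coin went up" events observed in one window.
transactions = [
    {"BTC_up", "LTC_up", "ETH_up"},
    {"BTC_up", "LTC_up"},
    {"BTC_up", "ETH_up", "XMR_up"},
    {"LTC_up", "ETH_up"},
    {"BTC_up", "LTC_up", "XMR_up"},
    {"BTC_up", "LTC_up", "ETH_up", "XMR_up"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support = 0.5
items = sorted({i for t in transactions for i in t})

# Apriori pruning: only grow itemsets whose subsets are already frequent.
frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
all_frequent = list(frequent)
while frequent:
    candidates = {a | b for a in frequent for b in frequent if len(a | b) == len(a) + 1}
    frequent = [c for c in candidates if support(c) >= min_support]
    all_frequent.extend(frequent)

# Derive rules X -> Y and score them: lift = sup(X∪Y) / (sup(X) * sup(Y));
# lift > 1 indicates a positive association between antecedent and consequent.
for itemset in all_frequent:
    if len(itemset) < 2:
        continue
    for r in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, r)):
            consequent = itemset - antecedent
            lift = support(itemset) / (support(antecedent) * support(consequent))
            print(f"{set(antecedent)} -> {set(consequent)}: "
                  f"support={support(itemset):.2f}, lift={lift:.2f}")
```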
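Entry 63's matrix-based construction is not reproduced here, but a toy stride-based channel-hopping model illustrates the effect the paper quantifies: giving each SU a second radio shrinks the worst-case time to rendezvous (TTR). The stride assignment below is an assumption chosen so that, over a prime-sized channel set, every A/B radio pair uses distinct strides and is therefore guaranteed to coincide.

```python
# Generic two-radio channel-hopping illustration (not the paper's algorithm).
import itertools

N = 5  # number of available channels (prime, so distinct strides always meet)

def channel(start, stride, t):
    """Channel visited at slot t by a radio hopping with a fixed stride."""
    return (start + stride * t) % N

def time_to_rendezvous(start_a, start_b, radios):
    # SU A's radios use strides 1..r, SU B's use strides r+1..2r, so every
    # A/B radio pair has distinct strides modulo the prime N.
    for t in itertools.count():
        a = {channel(start_a, s, t) for s in range(1, radios + 1)}
        b = {channel(start_b, s, t) for s in range(radios + 1, 2 * radios + 1)}
        if a & b:  # rendezvous: some pair of radios landed on the same channel
            return t

for radios in (1, 2):
    ttrs = [time_to_rendezvous(sa, sb, radios) for sa in range(N) for sb in range(N)]
    print(f"{radios} radio(s): max TTR={max(ttrs)}, mean TTR={sum(ttrs)/len(ttrs):.2f}")
```

Running this shows the two-radio case meeting strictly faster on average, the same qualitative bound-lowering effect the paper proves for its matrix-based scheme.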
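The core of entry 66 is the double-Q target. A small numpy sketch contrasts it with the vanilla DQN target; the Q vectors stand in for network outputs, and the reward semantics (a per-step scheduling signal) are hypothetical.

```python
# Numpy sketch of the DDQN vs DQN targets described in entry 66.
import numpy as np

rng = np.random.default_rng(1)
n_actions = 4   # e.g. "assign task to VM k" (hypothetical action meaning)
gamma = 0.95
reward = 1.0    # stand-in for a scheduling reward signal

# Q-value estimates for the *next* state from both networks.
q_online_next = rng.normal(size=n_actions)
q_target_next = q_online_next + rng.normal(scale=0.1, size=n_actions)

# Vanilla DQN: the target network both selects and evaluates the action,
# which is what produces the overestimation bias.
dqn_target = reward + gamma * q_target_next.max()

# DDQN: the online network selects the action, the target network evaluates it.
best_action = int(q_online_next.argmax())
ddqn_target = reward + gamma * q_target_next[best_action]

print(f"DQN target:  {dqn_target:.3f}")
print(f"DDQN target: {ddqn_target:.3f}")
# Periodically the target network is synced to the online one: q_target <- q_online.
```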
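Entry 67's forecasting step follows the standard statsmodels ARIMA workflow; the series and the (1, 1, 1) order below are placeholders rather than the authors' data or configuration.

```python
# Hedged ARIMA forecasting sketch for entry 67 (synthetic series, assumed order).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
# Synthetic yearly "ozone indicator": downward trend plus noise.
years = np.arange(1990, 2020)
series = 300 - 0.8 * (years - 1990) + rng.normal(scale=3, size=years.size)

model = ARIMA(series, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=5)  # values for the next five years
rmse_in_sample = np.sqrt(np.mean(model.resid[1:] ** 2))  # skip the differencing artifact

print("5-year forecast:", np.round(forecast, 2))
print(f"in-sample RMSE: {rmse_in_sample:.2f}")
```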
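Entry 71's balancing-plus-ensemble stage maps onto standard scikit-learn and imbalanced-learn components. This reduced sketch uses three of the paper's five base learners and synthetic features in place of the review-derived ones.

```python
# SMOTE balancing feeding a soft-voting ensemble, illustrating entry 71.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Imbalanced stand-in for vectorized review sentiment features.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Reduced ensemble (LR, NB, RF); the paper also includes DT and XGBoost.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB()),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_bal, y_bal)
print(f"accuracy: {accuracy_score(y_te, ensemble.predict(X_te)):.3f}")
```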
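Entry 73's classifier, a multilayer perceptron with ReLU activations, corresponds directly to scikit-learn's MLPClassifier; the features, layer sizes, and the alert hook below are stand-ins for the paper's wearable vitals and email alerts.

```python
# MLP-with-ReLU classifier plus a hypothetical alert hook, after entry 73.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in features for wearable vitals (heart rate, HRV, etc.).
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=300, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")

# Hypothetical alert hook: trigger a notification on a positive prediction.
if clf.predict(X_te[:1])[0] == 1:
    print("ALERT: possible SCA predicted - notify caregiver")
```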
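Finally, the supervised/unsupervised hybridization in entry 74 can be approximated by fusing unsupervised outlier scores into the feature set of a supervised booster. Here scikit-learn's IsolationForest and GradientBoostingClassifier stand in for the paper's IForest/COPOD/CBLOF ensemble and LGBM.

```python
# Simplified outlier-score fusion into a boosted classifier, after entry 74.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, IsolationForest
from sklearn.model_selection import train_test_split

# Stand-in for a table of water-quality measurements labeled safe/unsafe.
X, y = make_classification(n_samples=1500, n_features=9, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

iso = IsolationForest(random_state=0).fit(X_tr)
# The "fusion" step: append the unsupervised outlier score as an extra feature.
X_tr_f = np.column_stack([X_tr, iso.score_samples(X_tr)])
X_te_f = np.column_stack([X_te, iso.score_samples(X_te)])

clf = GradientBoostingClassifier(random_state=0).fit(X_tr_f, y_tr)
print(f"safe/unsafe accuracy: {clf.score(X_te_f, y_te):.3f}")
```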