List papers
Seq | PDF Download | Title | Abstract | DOI | Keywords | Authors with affiliation and country |
---|---|---|---|---|---|---|
1 | 1570892842 | Evolution in Children Fingerprint Recognition Approaches: A Review | The use of biometrics as an identification tool for children was pioneered by Sir Galton in the late 19th century; however, the field is still in a developing stage more than a century later. The main hurdles are the small size and non-uniform growth of children's biometric traits and the lack of public databases of children's biometrics. In this paper, the authors cover all aspects of fingerprint recognition for children. Childhood is a crucial period of human life: the most important vaccinations are given in these years, and because children cannot take care of themselves, swapping, abduction, and disappearances occur at this age. The main objective is to trace the progression of research on recognizing children in the age group of 0 to 5 years. The combination of transform-domain features and machine learning classifiers gives good identification accuracy for children, and multimodal fusion and deep learning approaches can increase this accuracy further. The paper presents a complete survey of studies on children's recognition using physiological biometrics, with a detailed discussion of database availability, scanning devices, feature extraction techniques, growth models, and matching algorithms. The fingerprint modality is explored through its trends and challenges, and its effectiveness for recognizing children is discussed. | http://dx.doi.org/10.12785/ijcds/160173 | Biometrics; children; convolutional neural network; fingerprint; Recognition | Vaishali Hanumant Kamble (Savitribai Phule Pune University, India); Manisha Dale Dale (SPPU, Pune, India); Priyanka Sheshrao Tondewad (India & AISSMS IOIT Pune, India); Pravin Chopade (Pune University & MES's College of Engineering, India) |
2 | 1570898489 | A Classification of Quran Verses Using Deep Learning | Understanding the topics of Quran verses is a main interest of Islamic scholars, specialists in Quran studies, and others. The traditional classification of Quran verses can be simplified and improved using automated techniques such as Natural Language Processing (NLP) and Machine Learning (ML). While the majority of current studies have used traditional ML approaches with small datasets, we used Deep Learning (DL) algorithms with a larger dataset for classifying Quran verses. This paper proposes a multi-label classification method for accurately classifying Quran verses into 12 predefined main topics using DL. We follow a structured method consisting of multiple steps. Firstly, a dataset of labeled Quran verses is collected, organized, and converted to sequences of numbers that can be understood by the DL models. Word embedding vectors are created using Word2Vec with the skip-gram algorithm to capture the semantics of words in their contexts and improve the models' performance. The sequences of numbers and the word embedding vectors are fed to two different DL models for classification: RNN and CNN models are used and evaluated on accuracy, precision, recall, F1-score, and hamming loss. For a more reliable assessment, cross-validation is adopted, achieving 90.38%, 96.98%, 92.49%, 93.81%, and 0.0126 for accuracy, precision, recall, F1-score, and hamming loss, respectively. The findings of this study help specialists in Quran studies gain insight and knowledge into the topics discussed by Quran verses. | http://dx.doi.org/10.12785/ijcds/160176 | Natural Language Processing; Multi-label classification; Deep learning; Quran; Word2Vec; RNN | Abdelkareem M. Alashqar (Islamic University of Gaza, Palestine) |
3 | 1570901629 | Image Steganography Technique based on Lorenz Chaotic System and Bloom Filter | Steganography is the study of invisible communication, which typically focuses on methods of concealing the existence of the communicated message; it is now widely used as a means of protecting sensitive data. One of the most common methods of information hiding is the Least Significant Bit (LSB) technique, which is extremely vulnerable to statistical attacks that remain a serious challenge. This article presents a new method that uses the Lorenz chaotic system to generate a series of pseudo-random positions for embedding secret information within the cover image. Moreover, a Bloom filter is used to prevent repeated embedding in the same pixel and the resulting data loss. The similarity metric results show how powerful and secure the proposed method is against both visual and analytical attacks. | http://dx.doi.org/10.12785/ijcds/160161 | Information security; Steganography | Ahmad Salim (Middle Technical University & TIA, Iraq); Khitam Abdulbasit Mohammed and Farah Maath Jasem (University of Anbar, Iraq); Ali Makki Sagheer (Anbar University & College of CS & IT, Iraq) |
4 | 1570906411 | Cloud Forensic Artifacts: Digital Forensics Registry Artifacts discovered from Cloud Storage Application | Cloud storage drives have become very popular worldwide, yet understanding how to locate, retrieve, and acquire cloud-based data can be complex and time-consuming. Standard digital forensic concepts and thorough chain-of-custody methods are the main discussion topics in most contemporary academic forensic publications. The traditional approach to computer forensics emphasizes physical access to the media that houses the information of interest; in a cloud computing environment, however, accessing the physical media is practically infeasible, as data for a given client may be kept decentralized, spanning several data centers and countries, across various virtual servers and physical devices. Because cloud-based applications can suffer data breaches, the research proposed in this paper focuses on gathering evidence from the Windows 11 operating system to discover and collect leftover registry artifacts created by OneDrive, one of the main cloud storage applications, and to show that these artifacts persist even after the cloud drive application is unlinked and uninstalled. The research shows what types of data remnants exist and where they can be found through the analysis of a digital forensic investigator. Because collecting registry artifacts and their essential values is time-consuming, a bash script is built to gather them, illustrating how the data is stored within the Windows 11 registry. The research follows two main approaches: the first takes a snapshot of the Windows registry after installing the cloud storage application and linking an account, then performs a digital forensic investigation on the machine to discover related registry artifacts. The second approach unlinks the account, uninstalls the OneDrive application, restarts the machine, and takes another snapshot for a second forensic investigation, comparing the evidence gathered in the second approach with that gathered in the first. | http://dx.doi.org/10.12785/ijcds/160102 | Cloud Forensic; Digital Evidence; Registry Artifacts; Windows 11 Forensics; Cybersecurity; Forensic Computing | Mohammed A. Bajahzar and Shailendra Mishra (Majmaah University, Saudi Arabia) |
5 | 1570908404 | Machine learning-based security mechanism to detect and prevent cyber-attack in IoT networks | As Internet of Things (IoT) systems become more prevalent, their security problems grow significant. Denial-of-service attacks, malware, and phishing attacks can compromise data and services on networks. For comprehensive protection, machine learning-based security measures in IoT systems should be developed with more robust models and integrated with multiple security mechanisms. In this study, a Ridge Classifier is used to detect anomalies. With this approach, the proposed system can accurately detect and predict cyber-attacks in smart networks using secure real-time information; in IoT systems, it detected and mitigated network threats with a 97% accuracy rate. In addition to improving the security and resilience of government and business networks, this system can also protect data from malicious threats. | http://dx.doi.org/10.12785/ijcds/160148 | Cyber Attacks; Network Threats; Network Security; Security Countermeasure | Abdullah Alomiri, Shailendra Mishra and Mohammed Alshehri (Majmaah University, Saudi Arabia) |
6 | 1570908738 | DIC2FBA: Distributed Incremental Clustering with Closeness Factor Based Algorithm for Analysis of Smart Meter Data | With increasing urbanization, smart cities, and the advent of new technology, the number of commercial, residential, and other buildings has grown rapidly in recent years, and electricity consumption is rising with the increased occupancy of these buildings. Technology is extremely useful for analysing electricity consumption patterns, and such analysis helps both consumers and electricity generation units understand consumption and future electricity requirements. An incremental clustering algorithm is the best choice for handling ever-increasing data. In this research work, electricity consumption data was first extracted from smart meter images; in the second phase, the extracted .csv files were merged with data from various sources. This research proposes the Distributed Incremental Clustering with Closeness Factor Based Algorithm (DIC2FBA) to update load patterns without re-clustering the overall daily load curve. The proposed DIC2FBA uses Amazon Web Services (AWS) and the Microsoft Azure HDInsight service: an AWS EC2 instance, an AWS S3 bucket, and HDInsight cluster data from multiple sites in iterative and incremental mode. DIC2FBA first extracts load patterns from new data and then integrates the existing load patterns with the new ones. Further, we compared the findings of DIC2FBA with IK-means based on time, features, silhouette score, and Davies-Bouldin index; the results indicate that our method can provide efficient analysis of electricity consumption patterns for end consumers via smart meters. | http://dx.doi.org/10.12785/ijcds/160103 | Distributed Incremental Clustering; CFBA; Smart Meter Analysis | Archana Yashodip Chaudhari (Symbiosis International Deemed University Pune, India & Symbiosis Institute of Technology Pune, India); Preeti Mulay (Symbiosis Institute of Technology- SIT, India); Ayushi Agarwal, Krithika Iyer and Saloni Sarbhai (Symbiosis Institute of Technology Pune India, India) |
7 | 1570912712 | Fake News Detection Datasets: A Review and Research Opportunities | The impact of fake news is far-reaching, affecting journalism, the economy, and democracy. In response, there has been a surge in research focused on detecting and combating fake news, resulting in the development of datasets, techniques, and fact-verification methods. One crucial aspect of this effort is the creation of diverse and representative datasets for training and evaluating machine learning models for fake news detection. This review paper examines the available datasets relevant to detecting fake news, with a particular emphasis on those available in the Indian context, where few resources exist. By identifying research opportunities and highlighting existing corpora, this paper aims to assist researchers in improving their fake news detection studies and contributing to more comprehensive research on the topic. To the best of our knowledge, no survey has specifically focused on accessible corpora in the Indian context, making this review a valuable resource for researchers in the field. | http://dx.doi.org/10.12785/ijcds/160104 | Fake news; Misinformation; Rumor; Satire dataset; Cyber security; Fake propaganda | Yasir Hamid (Abu Dhabi Polytechnic, United Arab Emirates); Nedal Ababneh (Abu Dhabi Polytechnic, Australia); Amandeep Kaur (Chitkara University Punjab, India); Pummy Dhiman (Chitkara University, India) |
8 | 1570913221 | A Review on NLP Techniques and Associated Challenges in Extracting Features from Education Data | In recent years, there has been a substantial surge in academic efforts to ensure the quality of educational resources, including curricula, examinations, and content. This heightened focus has prompted growing research interest in using automated analytical tools, particularly natural language processing (NLP), to interpret and evaluate the quality of these educational materials. This study employs a methodical approach to comprehensively survey NLP techniques for extracting syntactic and semantic features to analyze and comprehend educational content. Through this investigation, the study identifies and explores the challenges and strengths associated with traditional and advanced feature extraction methods. The findings of this review hold substantial benefits for various stakeholders, including education regulatory bodies, researchers, higher education institutions, and NLP researchers. Notably, the study provides NLP researchers with valuable insights into the current strengths and weaknesses of document analysis. It also equips NLP-based application developers with essential skills, enhancing their capacity to design and implement the most suitable algorithms and techniques for various NLP tasks. | http://dx.doi.org/10.12785/ijcds/160170 | NLP; syntactic features; semantic features; question classification | Elia Ahidi Elisante Lukwaro (Nelson Mandela African Institute of Science and Technology, Tanzania); Khamisi Kalegele (The Open University of Tanzania, Tanzania); Devotha Nyambo (The Nelson Mandela African Instituion Of Science and Technology, Tanzania) |
9 | 1570914933 | Neutrosophic Clustering: A Solution for Handling Indeterminacy in Medical Image Analysis | The need for further innovation in the healthcare industry has become more apparent as the world recovers from the ravages of the pandemic. While computational intelligence has quietly become integrated into more and more fields, its applications were not something the average person discussed until recently. Computational intelligence is becoming increasingly applicable in health, industrial, and commercial sectors around the world, and healthcare workers believe its ability to provide faster and improved functionality will significantly advance healthcare delivery and patient care. One of the major applications of A.I. in healthcare is pattern mapping of medical images, which mainly involves image processing and seeks to extract significant structures from the image through clustering. Choosing a suitable clustering method for a specific data set is therefore a crucial step in image segmentation. Numerous modifications of clustering algorithms, such as the fuzzy k-means algorithm, have been presented to date. The data mining techniques currently in use can handle the uncertainty brought on by numerical deviations or unpredictable natural phenomena, but real-world data mining challenges may also include indeterminacy components. Neutrosophic logic can be used to resolve this conundrum. This article discusses how neutrosophic reasoning is used in clustering, and the clustering algorithms are compared for clustering accuracy on distinct medical image cluster architectures. | http://dx.doi.org/10.12785/ijcds/160111 | Image processing; uncertainty; silhouette coefficient; neutrosophic logic; Fuzzy k-means; indeterminacy | Sitikantha Mallik (KIIT University, Bhubaneswar, India); Suneeta Mohanty (KIIT University, India); Bhabani Mishra (Kiit University, India) |
10 | 1570915535 | Deduplication using Modified Dynamic File Chunking for Big Data Mining | The unpredictability of data growth necessitates data management that makes optimum use of storage capacity. This research study proposes an innovative strategy for data deduplication. A fixed-size deduplication algorithm splits data into blocks of a predefined size; its main drawback is that if additional sections are inserted at the front or middle of a file, the following sections are shifted from their original positions, so the generated chunks acquire new hash values and the deduplication ratio drops. To overcome this drawback, this study suggests using multiple characters as content-defined chunking breakpoints, which depend mostly on the file's internal representation and yield variable chunk sizes. The experimental results show a significant improvement in the redundancy removal ratio on a Linux dataset. A comparison between the proposed fixed and dynamic deduplication shows that double-character chunking has a smaller average chunk size and can achieve a much higher deduplication ratio. | http://dx.doi.org/10.12785/ijcds/160105 | big data; data mining; deduplication; dynamic chunking; fixed chunking; hashing | Saja Taha Ahmed (Ministry of Education, Iraq) |
11 | 1570915707 | Enhanced Multipath TCP to Improve the Mobile Device Network Resiliency | Mobile devices consume a significant amount of internet traffic, and they can utilize multiple interfaces such as Wi-Fi and cellular networks to share traffic between networks; such a device is known as a multi-homed host. This approach enhances the resilience of the internet connection by allowing traffic to flow through multiple paths. The Multipath Transmission Control Protocol (MPTCP) supports this type of connection, but a fairness issue emerges when a multi-path host shares a bottleneck link with regular single-path hosts. To deal with this issue, this paper proposes an enhanced MPTCP (eMPTCP) that uses a throughput adjustment to estimate bandwidth based on TCP Westwood+ congestion control. By decreasing each subflow's traffic on the multi-path, the proposed eMPTCP achieves fairness on shared links. The simulation, conducted using network simulator 2 (ns2), represents the conditions of the mobile data air interface, and the results demonstrate that eMPTCP outperforms standard congestion control in achieving connection resilience. | http://dx.doi.org/10.12785/ijcds/160106 | Multi-path TCP; Fairness; Congestion Control; Bandwidth Estimation | Hilal H. Nuha and Fazmah Arif Yulianto (Telkom University, Indonesia); Hendrawan Hendrawan (Bandung Institute of Technology, Indonesia) |
12 | 1570915914 | Two-Stage Gene Selection Tactic For Identifying Significant Prognosis Biomarkers In Breast Cancer | Mining a subset of informative genes from microarray gene expression data is a significant data preparation task in the classification of breast cancer. Among the algorithms developed, CFS-BFS and CONSISTENCY-BFS are the two best for gene selection. For reliable prognostication of breast cancer subtypes, a ground-breaking 2-Stage Gene Selection (GeS) algorithm has been developed: using CFS-BFS in the first stage and CONSISTENCY-BFS in the second, the majority of distracting, inappropriate, and redundant genes are removed. To improve efficacy, the 2-Stage GeS strategy circumvents the uncertainty problem of CFS-BFS. Notably, establishing the 2-Stage GeS with Hidden Weight Naive Bayes yields more accurate and reliable results. The recall, precision, F-score, and fallout values show encouraging results. The top four genes, E2F3, PSMC3IP, GINS1, and PLAGL2, were further verified by applying the Kaplan-Meier survival model; E2F3 and GINS1 are likely targets for precision therapy. | http://dx.doi.org/10.12785/ijcds/160107 | CFS-BFS; Consistency-BFS; gene selection; micro-array gene expression dataset; breast cancer; Kaplan Meier Survival | Monika Lamba (Assistant Professor & The Northcap University, India); Geetika Munjal (Amity University, India); Yogita Gigras (Assistant Professor, India) |
13 | 1570917350 | Improvement In Depth-Of-Return Loss & Augmentation Of Gain-Bandwidth With Defected Ground Structure For Low Cost Single Element mm-Wave Antenna | In this work, a low-cost wide-band mm-Wave antenna prototype is designed, fabricated, and tested. The FR4 structure comprises a rectangular radiating patch (RRP) etched with a circle, a concentric circle at the lower edges, and a Defected Ground Structure (DGS). The DGS is formed by etching a dumbbell at the center and two semi-elliptical slots at its upper edges to enhance gain over a wider spectrum. A -10 dB bandwidth of 3.89 GHz (30.26-34.15 GHz) is achieved, and a mismatch loss below 0.044 dB for the 30-31.59 GHz band results in 99% through power. The antenna also provides a 3-dBi gain-bandwidth of 3.89 GHz (30.26-34.15 GHz) and a large 5-dBi gain-bandwidth of 2.50 GHz (30.47-32.97 GHz). Four gain peaks are achieved: 9.83 dBi at 31.3 GHz, 8.15 dBi at 32.64 GHz, 6.16 dBi at 33.47 GHz, and 6.05 dBi at 33.74 GHz. Other quality parameters such as directivity, group delay, and axial ratio are also analyzed. The antenna parameters are measured using a Rohde & Schwarz ZNB 40 Vector Network Analyzer (VNA) bench and an anechoic chamber. The fabricated structure is suitable for intelligent-network-based next-generation (5G and beyond) communications, futuristic robotic control devices, and Local Multipoint Distribution Services (LMDS) 5G applications in the mm-Wave range. Finally, an equivalent circuit of the proposed structure is also developed and investigated. | http://dx.doi.org/10.12785/ijcds/160108 | Concentric circle; Ka-band; Semi-elliptical; mm-Wave; Dumbbell | Simerpreet Singh (Lovely Professional University, Phagwara Jalandhar & Bhai Gurdas Institute of Engineering & Technology, India); Gaurav Sethi (Lovely Professional University, India); Jaspal Singh Khinda (Bhai Gurdas Institute of Engineering & Technology Sangrur Punjab, India) |
14 | 1570918247 | AES-32: An FPGA implementation of lightweight AES for IoT Devices | IoT is marked by resource-constrained devices, and information security is the main challenge arising from the wireless transmission of data by ubiquitous sensors. The phenomenal growth of resource-constrained devices in IoT setups has motivated research into lightweight solutions for information security. This work presents an implementation of AES optimized for high throughput, with the AES data path compressed to 32 bits. The implementation has been carried out on different FPGA families; data path compression and the use of BRAMs lead to improved throughput with savings in resource consumption. A loop-unrolled AES consumes 2669 slices, 12 times as many as this design, while a 32-bit AES with a 128-bit data path consumes 4 times more resources than the proposed design, which uses 223 slices and 5 BRAMs on an Artix-7 FPGA. The proposed design delivers throughput in the range of 2.2 to 3.5 Gbps and achieves efficiency of 1.75-7.8 Mbps per slice on different FPGAs, outperforming various lightweight ciphers and constrained AES implementations in the existing literature. | http://dx.doi.org/10.12785/ijcds/160167 | Internet of Things (IoT); Data path; Advanced Encryption Standard (AES); Field Programmable Gate Arrays (FPGA); Information security | Sumit Singh Dhanda, Sr. (National Institute of Technology Kurukshetra & Kurukshetra University Kurukshetra, India); Brahmjit Singh (National Institute of Technology Kurukshetra, India); Poonam Jindal (NIT Kurukshetra, India) |
15 | 1570918693 | Enhancing Data Consistency via a Context-Aware Dynamic Adaptive Model | This paper introduces a new model for managing data consistency in large-scale, geo-distributed storage systems, pivoting on the concept of dynamic adaptive consistency. Recognizing the challenges in balancing consistency and availability in such systems, we propose an innovative, context-aware model that categorizes operations into "consistent blocs." This categorization allows for a more granular and efficient management of consistency levels, representing a notable improvement over adaptive approaches that employ a uniform consistency model across all operations or apply a single consistency level to each operation independently. Our model dynamically adapts these blocs' consistency levels in response to real-time changes in the operational context; this can include, for example, variations in network latency, data access patterns, and workload intensity, ensuring optimal data consistency tailored to current conditions. Our approach extends traditional adaptive consistency models by introducing more flexibility. A middleware architecture achieves this goal by introducing an Adaptation Manager that dynamically adjusts consistency levels. We implement this model and evaluate its performance using the YCSB benchmark on a Cassandra cluster. Our results reveal significant flexibility in expressing users' requirements and prompt responsiveness in the dynamic adaptation of the policy. Our proposition holds significant benefits for applications where rapid adaptation to context is crucial. | http://dx.doi.org/10.12785/ijcds/160141 | | Zohra Mahfoud (University of Science and Technology Houari Boumediene & University of Bouira Akli Mohand Oulhadj, Algeria); Nadia Nouali-Taboudjemat (CERIST, Algeria) |
16 | 1570927726 | A Unique Approach for Ex-filtering Sensitive Data through an Audio Covert Channel in Android Application | With technology evolving rapidly worldwide, cybersecurity has recently undergone an enormous revolution. Technologies such as Artificial Intelligence, Machine Learning, Blockchain, and Data Science have made cybersecurity a demanding and challenging area of research, and researchers continuously work to improve current implementations. The internet has let organizations grow enormously, and many small-scale organizations have become large-scale ones; but this growth cuts in two directions: researchers use the internet to evolve technologies, while others continuously search for ways to subvert them. Alongside the internet, mobile phones have become smart, and with Android people have started using them in new ways. As Android evolves, however, some people continuously work to identify ways to leak information from Android devices through covert channels. In this paper, the author presents a unique approach to exfiltrating sensitive information, such as contacts and SMS messages, with the help of a covert audio channel. The author performs experiments on the latest Android devices and concludes that it is possible to exfiltrate sensitive information from Android devices with this approach. The author also tests the experiment with different detection tools and concludes that none can detect such a covert channel. | http://dx.doi.org/10.12785/ijcds/160112 | Covert Channel; Android Application; Cyber Security | Abhinav K Shah and Digvijaysinh M Rathod (National Forensic Sciences University, India); Bharat Buddhadev (Malaviya National Institute of Technology, India); Jeet Rami (Microsoft, India) |
17 | 1570927767 | A Crop Adaptive Irrigation System for Improving Farm Yield in Rural Communities | Inefficient irrigation is one of the major causes of low crop yield and poverty among rural farmers in developing countries. Existing smart irrigation systems are often complicated, expensive, and require a high level of literacy, hindering their use by rural farmers. Thus, we propose an adaptive irrigation system that is easy to operate and accessible to farmers in rural communities. The proposed system utilizes sensors to monitor soil moisture levels, temperature, and humidity and delivers water to plants only when necessary, increasing efficiency and reducing water waste. An Arduino UNO board is used to analyze the data from sensors to determine the correct amount of water for each crop. A two-level feedback control system is used to activate the water pump, ensuring that the moisture level falls within the desired range. The system also has a user-friendly interface, with an SMS feature that allows users to control and monitor the system remotely. The system was tested for three distinct crops: beans, maize, and tomatoes, and its efficacy, efficiency, and sensor accuracy were evaluated. Results indicate that the system conserves water and increases crop yields by delivering the correct amount of water to each crop. The proposed adaptive smart irrigation system is effective in optimizing water consumption, reducing waste, and boosting yields, resulting in significant energy and water savings for farmers. This research has significant implications for agriculture in rural communities. It can reduce the burden on farmers by providing an automated irrigation system that requires minimal human intervention. The system can also be controlled remotely, making it easier to monitor and modify. Additionally, the system is cost-effective, making it accessible to small-scale farmers with limited resources. | http://dx.doi.org/10.12785/ijcds/160134 | Crop Adaptive; Smart Irrigation; SDGs; Arduino Microcontroller | Roseline A Obatimehin, Micheal Ogayemi, Martins Osifeko, Ajibola Oyedeji, Abisola Olayiwola and Olatilewa Abolade (Olabisi Onabanjo University, Nigeria) |
18 | 1570927955 | Decision Tree Analysis Approaches to Classify Sensors Data in a Water Pumping Station | Water pumping stations play a vital role in citizens' lives, where a failure in the pumping schedule or in pumping quality can affect them directly. Data from a water pumping station can expose weak points in the station's system, which can then be addressed using machine learning approaches. In this paper, six decision tree algorithms are examined to find the optimal one for classifying the data of water pumping stations. The main goal is to determine faults in the sensors in order to control the pumping process and prevent future failures. Six algorithms (J48, REP Tree, Random Forest, Decision Stump, Hoeffding Tree, and Random Tree) are examined before and after implementing a feature selection (FS) process. FS is implemented to find the most correlated sensors and remove the less correlated ones, and it enhances the resulting accuracies of the algorithms. After implementing FS and removing the less correlated sensor data, the Random Forest and Random Tree algorithms achieve 100% classification accuracy. The model can be used as an assistant tool for classifying and predicting failures in water pumping stations. | http://dx.doi.org/10.12785/ijcds/160142 | Decision Tree; Sensors Data; Water Pumping Station; Supervised Machine Learning | Mostafa Adnan Hadi, Alaa Khalaf Hamoud, Ahmed Monther Abbood, Ahmed Naji Abdullah and Ahmed Khaled Abdullatif (University of Basrah, Iraq) |
19 | 1570929419 | A survey of Blockchain integration with IoT: benefits, challenges and future directions | The Internet of Things (IoT) stands at the forefront of the latest generation of information technology advancements, signifying a crucial juncture in the integration of digital and physical domains. This integration is pivotal, facilitating a plethora of transformative digital services that substantially enhance user experiences by seamlessly melding virtual and physical elements. However, the rapid proliferation of IoT also brings to the fore a range of challenges, particularly in the realms of security, simplicity, and data integration. These challenges pose significant obstacles to the full realization of IoT's potential. This paper provides a comprehensive analysis of these challenges within the IoT framework. We delve into the intricacies of IoT environments, examining the security risks and integration complexities that arise. The paper proposes effective strategies to address these challenges, aiming to establish a more secure and reliable IoT infrastructure. We discuss innovative approaches for enhancing data integration and simplifying IoT systems while ensuring robust security measures are in place. Through this exploration, the paper aims to contribute to the ongoing development of IoT, offering insights into overcoming its current limitations and outlining a path for its future evolution. Our analysis underscores the need for continuous research and innovation in this field, setting the stage for the emergence of more advanced, secure, and user-friendly IoT applications across various industries. | http://dx.doi.org/10.12785/ijcds/160143 | Blockchain; Internet of Things (IoT); IoT challenges; Network security; Data privacy; consensus protocol | Yassine Maadallah, Nassira Kassimi and Younès EL Bouzekri El Idrissi (ENSA of Ibn tofail university, Kenitra, Morocco); Youssef Baddi (EST of Sidi bennour, Chouaib Doukkali University) |
20 | 1570930419 | Taming Existing Satellite and 5G Systems to Next Generation Networks | To update any existing wireless or cellular network, operators must replace old-fashioned equipment with newly developed equipment. This problem arises whenever operators have to modify the network or release a new generation. This paper proposes a new integrated satellite and 5G system for the next generation network (NGN). The proposed system utilizes 5G wireless cellular networks integrated with satellite communication systems. Many downlink channels of the satellite link are trained to manage the new 6G system. With the terrestrial satellite terminals, the satellite receivers will be directed to the LEO system, which communicates with the 5G systems or base stations. The new system will be able to support all user services and applications. Moreover, the system will help monitor and track all objects, rockets, airplanes, vehicles, and flying drones. The monitoring and tracking of vehicles will be very simple, as the system will integrate the satellite and 5G mobile systems using high-definition cameras and wireless sensor networks (WSNs) to view and track the vehicles. Based on this future proposal, 6G will integrate 5G with the satellite system for global internet and mobile coverage. | http://dx.doi.org/10.12785/ijcds/1601108 | 5G; 6G; mmWave; Taming; Satellite | Ali Othman Al Janaby (Ninevah University, Iraq); Alauddin Yousif Al-Omary (University of Bahrain, Bahrain); Siddeeq Y. Ameen (Duhok Polytechnic University, Iraq) |
21 | 1570947175 | Facial-Based Autism Classification Using Support Vector Machine Method | Autism Spectrum Disorder (ASD) is a complex neural developmental condition characterized by difficulties in communication, social interaction, and delayed brain development. Despite previous studies, there is a need to explore and enhance autism classification techniques using facial data. This research aims to classify individuals with autism based on facial images using the Support Vector Machine (SVM) method. It also evaluates the performance of SVM-based classification with HOG and SURF feature extraction, contributing to the identification of autism through facial features. A dataset of 200 facial images of students, including individuals with and without autism, was analyzed. The data was divided into 80:20 and 70:30 splits for training and testing purposes. SVM models with HOG and SURF feature extractions were evaluated using accuracy, precision, recall, and F1-Score metrics. The HOG-SVM and SURF-SVM models showed consistent performance in both data splitting scenarios. Accuracy values exceeded 0.88, and precision, recall, and F1-Score values were above 0.9. The 80:20 data split demonstrated improved performance, especially for the HOG-SVM model. Both HOG and SURF feature extraction methods showed good performance in classifying autism data. The SVM model with HOG achieved an accuracy of 0.95 in the 80:20 data split, while the SURF model achieved 0.9. Early autism detection based on facial data holds potential for use in student selection in elementary schools. However, the study has limitations due to limited data and the focus on accuracy alone. Future research can expand the data size, explore other feature extraction methods, and implement advanced deep learning techniques to improve classification performance and contribute further to autism detection based on facial data. | http://dx.doi.org/10.12785/ijcds/160163 | Autism; HOG; SURF; SVM | Muhathir Muhathir and Maqhfirah Dr (Universitas Medan Area, Indonesia); Mukhaira El Akmal (Indonesia Prima University, Indonesia); Mutammimul Ula and Ilham Sahputra (Universitas Malikussaleh, Indonesia) |
22 | 1570952791 | Detection of Automatic Girl Child Trafficking | Girl child trafficking has become a matter of serious concern for human society. There are different manual approaches to stop and prevent it. In this work, we propose a two-stage computational model for detecting girl child trafficking by analyzing images. Due to the unavailability of girl child trafficking images, we constructed a dataset of 1,496 images. Three features (age, emotion, and gender) were considered for the development of our proposed computational model. In the first stage, a ResNet-50 deep neural network was used to determine the three feature values from an image. In the second stage, a Support Vector Machine (SVM) was used to determine whether there is a possibility of girl child trafficking. It has been observed that our proposed model can detect girl child trafficking with an accuracy of 93.13%. The high accuracy observed in our study indicates the suitability of our model for real-time child trafficking detection. | http://dx.doi.org/10.12785/ijcds/160199 | girl child trafficking; deep learning; machine learning; image processing | Dhrubajit Kakati (Central Institute of Technology Kokrajhar, India); Rajarshi Seal, Tania Sarkar and Ranjan Maity (CIT Kokrajhar, India) |
23 | 1570953091 | Comparative Analysis of Naive Bayes and K-NN Approaches to Predict Timely Graduation using Academic History | Graduation is a pivotal moment in higher education, significantly impacting institutional accreditation and public perception. This study aims to ensure that all students graduate punctually, recognizing the critical role of higher education in achieving this goal. Central to this effort are comprehensive datasets that capture academic performance throughout both undergraduate and graduate studies. These datasets include details such as university and program specifics, undergraduate and master's GPAs, TOEFL scores, and the duration of study. By leveraging classification techniques within data mining, particularly K-NN and Naive Bayes, a comparative analysis was conducted to precisely predict the on-time completion of graduate students. The process of predicting graduation involves several stages, including data preprocessing, transformation, and the segmentation of data into training and testing sets. Subsequently, the selected methods are applied, and analyses are undertaken to accurately forecast graduation outcomes. Experimental findings reveal an 80% accuracy rate for Naive Bayes and 73% for K-NN. Notably, Naive Bayes demonstrates superior efficacy in predicting on-time graduation. However, to further refine accuracy, it is necessary to expand datasets and diversify the variables used in the analysis, such as incorporating additional academic and non-academic factors that may influence graduation timelines. The insights derived from this research hold significant implications for academic institutions, offering valuable guidance for implementing proactive measures to support students in completing their studies within the expected timeframe. By utilizing the findings of this study, educational institutions can develop tailored strategies and interventions to address potential barriers to timely graduation, such as enhancing academic advising, providing targeted support services, and optimizing course scheduling. These efforts will ultimately foster student success, improve institutional outcomes, and contribute to the overall excellence of higher education institutions. | http://dx.doi.org/10.12785/ijcds/160185 | Prediction; Graduation; Naive Bayes; K-Nearest Neighbor; Academic History; Confusion Matrix | Imam Riadi (Universitas Ahmad Dahlan, Indonesia); Rusydi Umar (University of Ahmad Dahlan, Indonesia); Rio Anggara (Universitas Ahmad Dahlan, Indonesia) |
24 | 1570953261 | Detection of Lung and Colon Cancer from Histopathological Images: Using Convolutional Networks and Transfer Learning | Analyzing histopathology images to detect the presence of cancer cells is a very important task during cancer treatment. This task has traditionally been done largely by manual methods. The results of these analyses are therefore highly dependent on the pathologist's skills and professional experience, and the process wastes time and manpower. Automating this task using deep learning techniques will speed up the early detection of cancer cells. Interestingly, these techniques have led to impressive advances in image processing in various fields, including the medical field. In this paper, we first highlight the importance of using deep learning techniques to classify histopathological images, citing studies that use LC2500 datasets to accomplish this task. We then compare twelve models based on pretrained VGG-16, ResNet, DenseNet and NasNet models. The overall accuracy in this study ranged from 95.99% to 99.98%, reaching 100% for some categories. The purpose of this article is to compare pretrained models, examine the impact of the number of layers on the performance of the built models, and highlight the importance of using transfer learning techniques. | http://dx.doi.org/10.12785/ijcds/160144 | Histopathological images; Deep learning; Classification; Convolutional neural networks; Lung and colon cancer detection; Transfer learning | Abdelwahid Oubaalla (ENSA, Sidi Mohamed Ben Abdellah University, Morocco); Hicham El Moubtahij (IBN ZOHR University, Morocco); Nabil El akkad (ENSA of Fez, Sidi Mohamed Ben Abdellah University, Morocco) |
25 | 1570956963 | Build a Secure Network using Segmentation and Micro-segmentation Techniques | Due to the increasing number of threats and attacks that have targeted networks in recent years, novel methods and techniques have been developed to secure the network infrastructure and the data transmitted within it. Micro-segmentation and segmentation techniques are popularly used in computer networks to strengthen defenses against cyberattacks. These techniques aim to minimize the damage caused by attackers by segmenting the network into many clusters or sections and limiting the communications among them. Thus, each cluster or segment within the network becomes isolated from the others, which increases the security of highly sensitive data networks and prevents unauthorized people and attackers from reaching this sensitive data. In this paper, micro-segmentation and segmentation techniques have been studied. Further, two scenarios for implementing micro-segmentation and segmentation within networks have been examined. Then, an enhanced scenario has been suggested to overcome the limitations of those two scenarios. The suggested scenario integrates NSX-T micro-segmentation with Sky API and a policy enforcer to enhance the security and performance of the network. Finally, a comparison between all scenarios has been carried out to identify the best one. | http://dx.doi.org/10.12785/ijcds/1601111 | Micro-segmentation; Segmentation; cyberattack; NSX-T | Rafat Mousa Alshorman (Yarmouk University, Jordan); Hussein Ahmad Al ofeishat (Al-Balqa Applied University, Jordan) |
26 | 1570957194 | Detection of spatter signature for streaming data in the laser metal deposition process | In recent years, Laser Metal Deposition (LMD) has experienced significant advancements. For process monitoring purposes, in-situ sensors are often used, which tend to produce noisy data; due to the short processing window, these data need to be automatically analyzed in real-time to ensure their reliability for further processing. A simple Moving Average (MA) is commonly used to reduce signal peaks, which could otherwise skew the statistical properties of the data. Destabilization of the LMD process can be ascribed to the occurrence of spatters, which exhibit concept drift characteristics and are closely related to signal peaks. In this respect, this study aims to differentiate between two types of anomalies in data streams, point anomalies and concept drift, to eliminate the peaks that could mask the true behavior of the signals during the process. To solve this issue, a two-step approach is proposed. A differencing method is first applied to identify potential point outliers, which are then verified with a density-distance approach to check whether these identified observations are indeed peaks resulting from spatter generation. The method's reliability and robustness were tested with overhang structures (3-axis printing) and impeller blade structures (5-axis printing). Results show that the existing method, the Drift Streaming Peaks-Over-Threshold method, is inferior to the proposed method in terms of F1-score, although performance decreases as the inclination angle increases. These experiments ascertain the pertinence of the proposed method for processing incoming sensor data in LMD. | http://dx.doi.org/10.12785/ijcds/1601105 | Metal Additive Manufacturing; Reasoning-based; Spatter; Statistical Approach; Streaming | Muhammad Mu'az Imran (Sungkyunkwan University, Korea (South) & Universiti Brunei Darussalam, Brunei Darussalam); Gisun Jung and Young Kim (Sungkyunkwan University, Korea (South)); Azam Che Idris, Liyanage Chandratilake De Silva and Pg Emeroylariffion Abas (Universiti Brunei Darussalam, Brunei Darussalam); Yun Bae Kim (Sungkyunkwan University, Korea (South)) |
27 | 1570958700 | Graph-Based Rumor Detection on Social Media Using Posts and Reactions | In this article, we offer a unique technique for identifying rumors on online social media that utilizes graph-based contextual and semantic learning. The basic premise is that social media entities are connected, and if an event takes place, then comparable stories or user responses with shared interests spread across the network. The proposed method makes use of tweets and people's replies to them in order to comprehend the fundamental interaction patterns and exploit the textual and hidden information. The primary emphasis of this effort is developing a reliable graph-based analyzer that can identify rumors spread on social media. Modeling the textual data as a word co-occurrence graph produces two prominent groups of words: significant words and bridge words. Using these words as building blocks, contextual patterns for rumor detection may be constructed and detected using node-level statistical measurements. The identification of unpleasant feelings and inquisitive components in the responses further enriches the contextual patterns. Finally, the patterns are ranked, and only the top k check-worthy patterns are selected for feature creation. We employ a word-level GloVe embedding trained on a Twitter dataset in order to ensure that the semantic relations are maintained. The suggested method is assessed using the publicly available PHEME dataset and contrasted with a variety of baselines as well as our suggested approaches. The results of the experiments are encouraging, and the suggested strategy seems to be helpful for rumor identification on online social media platforms. | http://dx.doi.org/10.12785/ijcds/160114 | Neural network; rumors; social media; NLP | Nareshkumar R (SRM Institute of Science & Technology, Kattankulathur, India); Nimala K and Sujatha R (SRM Institute of Science and Technology, Kattankulathur, India); Shakila Banu S (CARE College of Engineering, Trichy, India); Sasikumar P (Sphoorthy Engineering College, Hyderabad, Telangana, India); Balamurugan P (MLR Institute of Technology, Dundigal, Hyderabad, Telangana, India) |
28 | 1570958983 | Factors Affecting Citizen Engagement in the Kingdom of Bahrain Through Gamification | Civic engagement, as a fundamental pillar of participatory governance, has evolved with the integration of digital platforms, aiming to foster inclusive decision-making processes. However, sustaining meaningful engagement remains a challenge. Gamification, the integration of game elements into non-game contexts, offers a novel approach to address this challenge by enhancing user motivation and participation. This study investigated the influence of gamification on civic engagement in Bahrain. The study focused on independent variables such as Performance Expectation, Effort Expectation, Social Influence, Facilitating Condition, Gamification Perceived Ease of Use, and Gamification Perceived Usefulness, and their effects on the dependent variables of Behavioral Intention and Civic Engagement. The analysis revealed significant positive relationships between Performance Expectation, Effort Expectation, Social Influence, Facilitating Condition, Gamification Perceived Ease of Use, and Gamification Perceived Usefulness, with Behavioral Intention. This indicates that individuals who have higher expectations of performance, perceive greater effort expectations, feel more social influence, find gamification easy to use, and perceive it as useful are more likely to express intention for civic engagement. Furthermore, a significant positive relationship was found between Behavioral Intention and Civic Engagement, demonstrating that individuals with a stronger intention to engage in civic activities are more likely to actively participate in civic engagement. These findings collectively highlight the relevance of the proposed model's constructs in shaping civic engagement behaviors among individuals in Bahrain. The study underscored the potential of gamification to enhance citizen participation and involvement in civic activities by improving factors such as performance expectations, effort expectations, social influence, and the perceived ease of use and usefulness of gamification elements. | http://dx.doi.org/10.12785/ijcds/160160 | Citizen Engagement; Gamification; Performance Expectation; Effort Expectation; Social Influence; Facilitating Condition | Reem AlKaabi (University of Bahrain, Bahrain) |
29 | 1570961288 | Efficient Early Detection of Patient Diagnosis and Cardiovascular Disease using an IoT System with Machine Learning and Fuzzy Logic | Rising healthcare challenges, particularly undiagnosed heart disease due to subtle symptoms and limited access to diagnostics, necessitate innovative solutions. This study introduces an innovative Internet of Things (IoT)-based system for early detection, leveraging the strengths of both fuzzy logic and machine learning. By analyzing patient-specific data such as heart rate, oxygen saturation, galvanic skin response, and body temperature, our system utilizes fuzzy logic to evaluate potential disease symptoms, enabling self-diagnosis under medical supervision. This personalized approach enables individuals to monitor their health and seek prompt medical attention as needed. Additionally, we train multiple machine learning algorithms (Decision Tree, KNN, SVM, Random Forest, Logistic Regression) on the well-established Cleveland heart disease dataset. Among these, Random Forest achieved the highest accuracy (82.6%), precision (81.5%), recall (83.7%), and F1-Score (82.5%), showcasing its effectiveness in predicting cardiovascular disease. This unique blend of fuzzy logic for personalized symptom assessment and machine learning for CVD prediction presents a new method for early diagnosis. While promising, further validation through large-scale clinical trials is essential. Ultimately, this system underscores the significance of integrating AI with medical expertise for optimal patient care, providing a potential pathway to improved health outcomes and enhanced accessibility to early detection of cardiovascular disease. | http://dx.doi.org/10.12785/ijcds/160115 | Healthcare; IoT; Cloud Platform; Diagnosis System; Fuzzy Logic; Machine Learning | Rafly Arief Kanza (Politeknik Elektronika Negeri Surabaya, Indonesia); M. Udin Harun Al Rasyid (Politeknik Elektronika Negeri Surabaya (PENS), Indonesia); Sritrusta Sukaridhoto (Politeknik Elektronika Negeri Surabaya, Indonesia) |
30 | 1570962023 | Mangrove Tree Density Detector Using YOLO Based On Darknet Framework using RGB Drone Imagery | Mangrove preservation is crucial due to the ecological significance of mangroves. Monitoring the health of mangrove forests is essential for preservation strategy, yet it remains challenging and time-intensive, particularly in remote locations. This study aims to create a system to automatically assess mangrove density, providing essential data for informed preservation strategies, such as prioritizing reforestation in low-density areas. Drones with RGB cameras capture aerial imagery, enabling remote data collection. The system utilizes the YOLO neural network object detector to automatically detect objects, enabling quantity estimation. Experiments show that the YOLO object detector is able to detect mangrove trees accurately with 95% recall, 88.3% IoU, and 22ms processing time. The system uses the 'tiny' model variant to balance accuracy against computational resources, making it suitable for deployment on computers with limited resources. In comparison, the standard model improves recall by 4% and IoU by 2% but demands six times more processing time. The system then calculates the covered area using a camera transformation formula. Finally, it calculates the density as an indicator of mangrove forest health, synchronized with the GPS location. With the resulting data on mangrove density, evaluations of mangrove forest health become much easier, facilitating effective preservation actions, such as reforestation in areas with low mangrove density. | http://dx.doi.org/10.12785/ijcds/160149 | Remote Sensing; Mangrove Health; YOLO; Object Detection; Density Calculation | Ilyas Yudhistira Kurniawan (Electronic Engineering Polytechnic Institute of Surabaya, Indonesia); M. Udin Harun Al Rasyid (Politeknik Elektronika Negeri Surabaya (PENS), Indonesia); Sritrusta Sukaridhoto (Politeknik Elektronika Negeri Surabaya, Indonesia) |
31 | 1570965349 | An FPGA Implementation of Basic Video Processing and Timing Analysis for Real-Time Application | Since digital images from cameras or any image source can be quite large, it is common practice for researchers to divide these large images into smaller sub-images. This study proposes a subsystem module to read and display the region of interest (ROI) of real-time video signals for static camera applications, in preparation for background subtraction (BGS) algorithm operation. The proposed subsystem was developed using the Verilog hardware description language (HDL), synthesized, and implemented on the ZYBO Z7-10 platform. An ROI background image of 360×360 resolution was selected to test the operation of the module in real-time. The proposed subsystem, which implements the ROI reading algorithm, consists of five modules. Timing analysis was used to determine the real-time performance of the proposed subsystem. The subsystem works in multiple clock domain frequencies, 445.5MHz, 222.75MHz, 148.5MHz, and 74.25MHz, which are six, three, two, and one times the pixel clock frequency, respectively. These frequencies are chosen to perform five basic processing operations in real-time. The operation revealed that the latency of the proposed ROI reading subsystem was 13.468ns (one pixel period), which matched the requirements for real-time applications. | http://dx.doi.org/10.12785/ijcds/160131 | Background subtraction; Clock domain; Real-time; Region of Interest; Verilog HDL | Marwan Al-yoonus and Saad Kazzaz (University of Mosul, Iraq) |
32 | 1570966207 | An Instance Segmentation Method for Nesting Green Sea Turtle's Carapace using Mask R-CNN | This research presents a method to perform instance segmentation using Mask R-CNN on nesting Green Sea Turtle images. The goal is to achieve precise segmentation in order to produce a dataset fit for sea turtle classification tasks. The method alleviates the labour-intensive and tedious task of manual annotation by automatically extracting the carapace as the Region-of-Interest (RoI) and excluding the background elements. The task is non-trivial as the image dataset used in this study contains noise, blurry edges and low contrast between the target object and background. These image defects are due to several factors, including jittering footage caused by the cameraman, the nesting event occurring during a low-light environment, and the limitation of the Complementary Metal-Oxide-Semiconductor (CMOS) sensor used in the camera. The CMOS sensor produces higher levels of noise which can manifest as random variations in pixel brightness or colour, especially in low-light conditions. These factors contribute to the degradation of image quality that causes difficulties in accurately segmenting the RoI. To address these challenges, this research proposes using Contrast-Limited Adaptive Histogram Equalization (CLAHE) as the data pre-processing technique to enhance contrast, bringing out more details to the structure of the carapace against the background elements. Our research findings demonstrate the effectiveness of Mask R-CNN with the benefit of CLAHE as a data pre-processing step in accurately segmenting turtle carapaces. | http://dx.doi.org/10.12785/ijcds/160116 | Computer Vision; Instance Segmentation; Mask R-CNN; CLAHE; Deep Learning | Mohamad Syahiran Soria and Khalif Amir Zakry (University Malaysia Sarawak, Malaysia); Irwandi Hipiny and Hamimah Ujir (Universiti Malaysia Sarawak, Malaysia); Ruhana Hassan (University Malaysia Sarawak, Malaysia); Alphonsus Ligori Jerry (Sarawak Forestry Corporation, Malaysia) |
33 | 1570966693 | Improved Real-time 3D Reconstruction Method for Mixed Reality Telepresence | The advancement of current technologies has allowed long-distance human-to-human communication to come closer to lifelike encounters. Mixed Reality (MR) telepresence focuses on user involvement within the real and virtual world for telecommunication, is the current advancement in telepresence applications, and is widely recognized. Even so, developing a reliable and affordable three-dimensional (3D) reconstruction of humans in action in real-time for MR telepresence is challenging. Most existing methods require high computational processing and rely on a large dataset to produce a dynamic 3D reconstruction. Therefore, this research introduces an improved real-time three-dimensional reconstruction method that could potentially be utilized for MR telepresence. The implementation of double compression techniques on the 3D reconstruction data resulted in substantial enhancements to the real-time 3D reconstruction methodology, hence strengthening the overall framework of the MR telepresence system. An experiment was carried out to evaluate the performance of the improved real-time 3D reconstruction method for MR telepresence. We found that the amount of data transmitted in bytes per frame using the improved method was appropriate for the MR telepresence application. The findings indicate that real-time performance of 3D reconstruction was accomplished by reducing the data size by a factor of two, resulting in an optimal size that does not impose any limitations on the available bandwidth. | http://dx.doi.org/10.12785/ijcds/1601121 | Mixed Reality; Telepresence; 3D reconstruction; MR telepresence; real-time; compression | Ajune Wanis Ismail (Universiti Teknologi Malaysia (UTM), Malaysia); Shafina Abd Karim Ishigaki (University of Technology Malaysia, Malaysia); Fazliaty Edora Fadzli (Universiti Teknologi Malaysia, Malaysia) |
34 | 1570966743 | Sentiment Analysis from Texts Written in Standard Arabic and Moroccan Dialect based on Deep Learning Approaches | Sentiment analysis plays a crucial role in extracting subjective information from various sources using natural language processing techniques. It involves identifying opinions, attitudes, and emotions towards specific topics or documents. This study focuses on evaluating the performance of machine learning, deep learning, and transfer learning algorithms in accurately classifying positive and negative sentiments in Arabic comments. The study uses different machine learning and deep learning techniques, including AraBERT, a transfer learning technique based on the BERT architecture for Arabic language processing. AraBERT is pre-trained on a vast corpus of Arabic data, which allows it to capture complex Arabic-specific linguistic patterns. It is then fine-tuned on a smaller dataset of Arabic comments for sentiment analysis. The study outlines the important steps and processes involved in each approach, highlighting their strengths and comparing their performance. The utilization of deep learning and transfer learning techniques, such as AraBERT, has the potential to enhance sentiment analysis accuracy on Arabic comments. By comparing the performance of different methods, the study aims to identify the most effective approaches for sentiment analysis in Arabic text. The findings of this research have practical implications in improving sentiment analysis accuracy for Arabic language applications, particularly when working with limited labeled datasets. The results can be valuable in fields like market research, customer service, and social media analysis, providing insights into the attitudes, opinions, and emotions expressed by Arabic-speaking users. | http://dx.doi.org/10.12785/ijcds/160135 | ANLP; Machine learning; Deep learning; Transfer learning | Abdellah Ait elouli (IBN ZOHR University, unknown); El Mehdi Cherrat and Hassan Ouahi (Ibn Zohr University, Morocco); Abdellatif Bekkar (Hassan II University, Morocco) |
35 | 1570967618 | Design of an intelligent tutor system for the personalization of learning activities using case-based reasoning and multi-agent system | Artificial intelligence (AI) has largely supported the development of tutoring systems. Educational data mining is essential to provide information about the learning process and learner behavior in order to have a solid basis for effectiveness research on personalized systems. Integrating Intelligent Tutoring Systems (ITS) and Multi-Agent Systems (MAS) with Case-Based Reasoning (CBR) has significant advantages. ITS provide an interactive platform to collect and analyze learner data, generating specific profiles. MAS, on the other hand, enables collaboration between different agents, such as the profile agent, recommendation agent, assessment agent, and adaptation agent, to personalize learning activities according to the individual characteristics of each learner. The use of CBR enriches personalization by exploiting previous knowledge and experience; by using similar cases in a knowledge base, the system can offer recommendations and proven solutions for each learner, thus promoting a more relevant and effective learning experience. This paper explores the personalization of learning activities by combining ITS and MAS with CBR. Personalization of learning activities offers the possibility of adapting and personalizing educational content to the individual needs of each learner, thus improving learning engagement and effectiveness. | http://dx.doi.org/10.12785/ijcds/160136 | Personalization; Artificial intelligence; Intelligent tutoring systems; Multi-agent systems; Case-based reasoning; learning activities | Lamya Anoir (Higher Normal School, Abdelmalek Essaadi University Tetouan, Morocco); Ikram Chelliq (Higher Normal School Abdelmalek Essaadi University, Morocco); Maha Khaldi (Rabat Business School, International University, Morocco); Mohamed Khaldi (Higher Normal School Abdelmalek Essaadi University, Morocco) |
36 | 1570967689 | Classification of Tuberculosis Based on Chest X-Ray Images for Imbalance Data using SMOTE | This research delves into the issue of dataset imbalance in the classification of Chest X-Ray (CXR) images in TBX11K by applying the Random Forest (RF) and XGBoost (XGB) methods with and without the Synthetic Minority Over-sampling Technique (SMOTE). The objective of this study is to assess the impact of SMOTE on model performance in the classification of CXR TBX11K images. In this research, the SMOTE technique is applied to the RF and XGB classification models. The use of SMOTE aims to increase the number of minority class samples (TB positive) to mitigate the imbalance with the majority class samples (TB negative). Each model is evaluated using the same metrics for comparison: accuracy, precision, recall, and F1 score. After conducting the experiments, the research results indicate that the use of the SMOTE technique in the RF and XGB models is effective in addressing class imbalance in the dataset. The RF model without SMOTE achieves an accuracy of approximately 93.33%, while the RF model with SMOTE achieves an accuracy of 92.72%. The XGB model without SMOTE attains an accuracy of 94.11%, whereas the XGB model with SMOTE achieves an accuracy of 94.33%. Although there is a slight decrease in accuracy in models with SMOTE during testing, the balance between precision and recall remains high. Overall, the XGB model with SMOTE is the optimal model for identifying rarely occurring positive cases, while the RF model without SMOTE is the optimal model for situations where overall accuracy is most critical. | http://dx.doi.org/10.12785/ijcds/160171 | Tuberculosis; Random Forest; Chest X-Ray; machine learning; imbalance data; SMOTE | Muhammad Fadhlullah (Universitas Gadjah Mada Indonesia, Indonesia); Wahyono Wahyono (Universitas Gadjah Mada, Indonesia) |
37 | 1570969416 | Investigation of different generative adversarial networks techniques for image restoration | Generative Adversarial Networks (GANs) are artificial neural networks that pit two neural networks against one another in order to generate data that is not part of the training set. A GAN produces good outcomes when it is trained on image data that comes from the real world. A GAN is made up of a generator and a discriminator: the generator produces data from completely arbitrary parameters, while the discriminator evaluates the information and distinguishes erroneous information from true information. Several researchers have investigated various types of GANs, but a comprehensive analysis and comparison of different types of recent GANs has not been done in the literature. The article concludes with a discussion of the possible uses of GANs in a variety of settings, as well as how these applications constitute a fascinating new area of research and prospective expansion. | http://dx.doi.org/10.12785/ijcds/160150 | | Tammineni Shanmukha Prasanthi (Andhra University & Visakhapatnam, India); Swarajya Madhuri Rayavarapu and Gottapu Sasibhushana Rao (Andhra University, India); Rajkumar Goswami (Gayatri Vidhya Parishad College for Women, India) |
38 | 1570970114 | Design of Node level load balancing in hierarchical fog structure | Fog computing refers to the operations performed in a distributed network of nodes on the edge of the network to provide faster output generation for emergent requests. The fog layer brings computation closer to the devices, thereby reducing latency in the network. However, in recent years, fog computing has been subjected to several problems, including load balancing. Load balancing ensures appropriate allocation and distribution of resources and workload in a fog network. That said, the nature of nodes in a fog network can be heterogeneous. In such a situation, it is crucial to have a load balancing mechanism that routes requests to the appropriate node based on the type of the requests, the load on the nodes, and the total load on the system. One way to solve the issue would be to apply conventional load balancing algorithms, but traditional load balancing schemes do not apply here since they tend to work amongst homogeneous sets of nodes with similar resources. In the proposed research, three load balancing schemes, namely Highest Capacity Mode, Earliest Predicted Response Time, and Equal Capacity Mode, have been proposed. Finally, an optimal hybrid algorithm that taps the benefits of the three load balancing schemes has been proposed, achieving the best performance among all the schemes. | http://dx.doi.org/10.12785/ijcds/160145 | Fog Computing; Load Balancing; Node-RED; Hierarchical Fog Architecture | Jai Geetha (VTU, India); Jayalakshmi D S (M S Ramaiah Institute of Technology, India); Chandrika Prasad (M. S. R. I. T, India); Srinidhi N N and Naresh E (Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India) |
39 | 1570970785 | Rapid Navigation Optimization - based Deep Convolutional Neural Network for Covid-19 Detection using CT Scans | In December 2019, a highly infectious virus named 'Severe Acute Respiratory Syndrome Coronavirus 2' (SARS-CoV-2) sparked a global pandemic. Deep Neural Networks have been extensively used to develop intelligent systems for accurate and timely diagnosis of COVID-19 infection using chest Computerized Tomography. However, Deep Learning approaches require a large annotated dataset. The fundamental goal of this research is to develop a model that learns efficiently from a size-limited dataset. This study proposes a hybrid feature extraction approach that exploits the CT imaging characteristics of COVID-19 infection through hand-crafted texture features and complex features extracted by a pre-trained ResNet101 network. The 7-layered Deep Convolutional Neural Network used for classification has been optimized using a revolutionary rapid navigation optimization technique that improves the Moth-Flame Optimizer by integrating the concept of Mayfly velocity to update the position of the moth in the exploration space. When tested on an open-access dataset containing 349 COVID-19 positive CT images and 397 COVID-19 negative CT images, the accuracy, sensitivity, and specificity of the proposed rapid navigation optimization-based deep CNN classifier were 97.260%, 94.301%, and 99%, respectively. Our proposed method outperformed other published cutting-edge research works tested on the same public dataset. | http://dx.doi.org/10.12785/ijcds/160146 | COVID-19; chest CT; Haralick texture features; Local Directional Pattern; Gray level co-occurrence matrix; Moth-Flame Optimization | Priya S Sawant (Pune Institute of Computer Technology); R Sreemathy (Pune Institute of Computer Technology, University of Pune, India) |
40 | 1570971682 | An Approach for Aircraft Detection using VGG-19 and OCSVM | Aircraft detection is an essential and noteworthy area of object detection that has received significant interest from scholars, especially with the progress of deep learning techniques. Aircraft detection is now extensively employed in various civil and military domains. Automated aircraft detection systems play a crucial role in preventing crashes, controlling airspace, and improving aviation traffic and safety on a civil scale. In military operations, detection systems play a crucial role in quickly locating aircraft for surveillance purposes, enabling real-time decision-making for military strategies. This article proposes a system for accurately detecting airplanes, regardless of their type, model, size, or color variations. However, the diversity of aircraft images, including variations in size, illumination, resolution, and other visual factors, poses challenges to detection performance. As a result, an aircraft detection system must be designed to distinguish airplanes clearly regardless of the aircraft's position, rotation, or visibility. The methodology involves three significant steps: feature extraction, detection, and evaluation. Firstly, deep features are extracted using a pre-trained VGG19 model and the transfer learning principle. Subsequently, the extracted feature vectors are employed in a One-Class Support Vector Machine (OCSVM) for detection purposes. Finally, the results are assessed using evaluation criteria to ensure the effectiveness and accuracy of the proposed system. The experimental evaluations were conducted across three distinct datasets: Caltech-101, the Military dataset, and the MTARSI dataset. Furthermore, the study compares its experimental results with those of comparable publications released in the past three years. The findings illustrate the efficacy of the proposed approach, achieving F1-scores of 96% on the Caltech-101 dataset and 99% on both the Military and MTARSI datasets. | http://dx.doi.org/10.12785/ijcds/160109 | Aircraft Detection; Deep Learning; VGG; SVM | Zainab Ali Khalaf and Marwa Abdul-Majeed Hameed (University of Basrah, Iraq) |
41 | 1570971936 | A combined deep CNN-LSTM network for Sketch recognition | Automating freehand sketch recognition is a complex process because of the diverse and abstract characteristics of sketches. Recently, machine learning algorithms have attracted significant interest among many researchers. Nevertheless, most utilized models are either inadequate or overly complex, featuring processes that lack clarity and consistency, which impedes their ability to accurately depict real-world scenarios. In this study, we introduce an approach that applies deep learning methods, combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks, to enhance sketch recognition performance. In the initial phase of our approach, a CNN was employed to extract features that were subsequently forwarded to an LSTM network for classification. We evaluated the efficacy of our method using the QuickDraw dataset offered by Google, and the results demonstrated that our approach outperformed both CNN and LSTM alone, as well as other state-of-the-art methods. Our method attained an accuracy of 95%, with precision and recall reaching 95%, while also achieving an F1 score of 94%. | http://dx.doi.org/10.12785/ijcds/160147 | Sketch recognition; Convolution neural network; Recurrent neural network; LSTM; features | Lale EL Mouna (UCD, Morocco); Silkan Hassan (Université Chouaib DOUKKALI, Morocco); Cédric Stéphane Tékouabou Koumétio (Chouaib Doukkali University of El Jadida, Morocco); Youssef Hanyf (UIZ, Morocco); Mohamedou Cheikh tourad and Mohamedade Farouk Nanne (UNA, Mauritania) |
42 | 1570972347 | A Novel Framework for Mobile Forensics Investigation Process | Investigating digital evidence by gathering, examining, and maintaining evidence stored in smartphones has attracted tremendous attention and become a key part of digital forensics. The mobile forensics process aims to recover digital evidence from a mobile device in a way that preserves the evidence in a forensically sound condition. This evidence might be used to prove that a person is a cybercriminal or a cybercrime victim. To do this, the mobile forensics process lifecycle must establish clear guidelines for safely capturing, isolating, transporting, storing, and proving digital evidence originating from mobile devices. There are unique aspects of the mobile forensics procedure that must be considered, and it is imperative to adhere to proper techniques and norms in order for the examination of mobile devices to produce reliable results. In this paper, we develop a novel methodology for the mobile forensics process model lifecycle named the Mobile Forensics Investigation Process Framework (MFIPF), which encompasses all the necessary stages and data sources used to construct the crime case. The developed framework contributes to identifying common concepts of mobile forensics through the development of a mobile forensics model that simplifies the examination process and enables forensics teams to capture and reuse specialized forensic knowledge. Furthermore, the paper provides a list of the most commonly used forensics tools and where they can be used in our proposed mobile forensic process model. | http://dx.doi.org/10.12785/ijcds/160110 | Mobile Forensics; Digital Forensics; Forensic tools; Acquisition; iOS and Android Artifact; Extraction | Mohammed Moreb (Smart University College for Modern Education); Saeed Salah (Al-Quds University, Palestine); Belal Amro (Hebron University, Palestine) |
43 | 1570972819 | Optimizing Resource Allocation in IoT for Improved Inventory Management | Effective inventory management is crucial for businesses to minimize costs and maximize operational efficiency. With the proliferation of Internet of Things (IoT) devices, businesses have access to vast amounts of real-time data that can revolutionize their inventory management processes. This paper explores the optimization of resource allocation in IoT for improved inventory management and develops an inventory management system using IoT and a Wireless Sensor Network (WSN) to optimize resource allocation. The dataset taken into consideration is a primary dataset collected from different locations with the help of the WSN, comprising temperature, humidity, and stock mappings of the places where data is collected. Further, the data is preprocessed and then split into training and testing sets. Machine learning models, i.e., decision tree, random forest, regression model, and an ensemble model (a combination of decision tree, random forest, and regression model), are applied to classify and train on the data. Metrics such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Squared Error (MSE), and accuracy are taken into consideration to evaluate the performance of the model. The obtained experimental values of RMSE, MAE, and MSE are 0.25, 0.0625, and 0.625, respectively, and the overall accuracy of the proposed model is 93.75%. | http://dx.doi.org/10.12785/ijcds/160151 | Inventory Management; Internet of Things; Wireless Sensor Network; Resource Allocation Optimization; Machine Learning; Decision tree | Arti Kotru (LPU & MIET, India); Isha Batra (Lovely Professional University, India) |
44 | 1570974432 | Evaluation of Deep Learning Models for Detection of Indonesian Rupiah | This research addresses the critical task of Indonesian Rupiah banknote detection using object detection techniques. We evaluated three state-of-the-art object detection models, You Only Look Once v5 (YOLOv5), YOLOv7, and YOLOv8, to determine the most suitable model for this specialized application. Our diverse dataset of Indonesian Rupiah banknotes captures a wide range of real-world scenarios and challenges. We investigated the impact of data augmentation and conducted hyperparameter tuning. The results revealed that YOLOv8 emerged as the top-performing model, consistently delivering remarkable Mean Average Precision (mAP) scores of 0.995 for mAP@0.5 and 0.995 for mAP@0.5:0.95. It was effective with or without data augmentation and maintained high precision and recall across multiple classes. YOLOv5 also performed well, especially when augmented, demonstrating its adaptability to additional training data. Although YOLOv7 did not perform as well as YOLOv8 and YOLOv5, it still showed good results, especially when data augmentation was applied. Furthermore, our research demonstrated that data augmentation has a significant impact on model performance, with YOLOv5 being the most responsive to additional data. These findings offer potential support for the development of practical applications aimed at enhancing the independence and quality of life of visually impaired individuals, providing a more accessible and reliable solution for banknote recognition. | http://dx.doi.org/10.12785/ijcds/160125 | Deep Learning; Object Detection; Indonesian Rupiah; YOLOv5 Model; YOLOv6 Model; YOLOv7 Model | Charleen Charleen and I Gede Putra Kusuma Negara (Bina Nusantara University, Indonesia) |
45 | 1570975881 | Exploring the Landscape of Health Information Systems in the Philippines: A Methodical Analysis of Features and Challenges | A thorough analysis was conducted to evaluate Health Information Systems (HIS) in the Philippines utilizing the PRISMA approach. From an initial pool of 313 potential articles, 285 articles were excluded based on the exclusion criteria, resulting in a focused analysis of 28 articles. This analysis classifies the many HIS features while highlighting each one's distinct value within the Philippine healthcare system. These features encompass scheduling and communications, record-keeping and prescription, knowledge and information management, and marketplace and payment systems. Features common to most HIS are patient profiling, notification systems, membership verification, laboratory result generation, and electronic appointment and scheduling. In parallel, the study examined the many difficulties encountered in the adoption and application of HIS in the Philippines, tackling issues such as a lack of human resources, infrastructure-related challenges, and the impact of regional strategies and policies. Additionally, financial issues were found to be a major challenge hampering the successful development and maintenance of HIS within the hospital system. This methodical, Philippine-specific investigation provides insights into the dynamic environment of HIS, offering a basis for wise decision-making and strategic planning adapted to the distinct healthcare context of the Philippines. | http://dx.doi.org/10.12785/ijcds/160118 | Health Information Systems; HIS in the Philippines; EMRs in the Philippines; Challenges of HIS; Features of HIS | Mia Amor C Tinam-isan and January Naga (MSU-Iligan Institute of Technology, Philippines) |
46 | 1570975890 | LEACH Protocol with Angular Area Routing: Boosting Energy Efficiency and QoS in Wireless Sensor Networks | Wireless Sensor Networks (WSNs) stand as a vital component of contemporary wireless technology. The endurance of nodes within WSNs significantly influences system efficiency. This study delves into five energy management and node longevity strategies: Low Energy Adaptive Clustering Hierarchy (LEACH), LEACH-C, TS-I-LEACH, LEACH-Enh-DVHOP, and an original method proposed herein. Graphical analysis underscores the marked enhancement of the proposed method in sustaining active nodes over rounds, notably in initial phases compared to other methods. A pivotal innovation lies in employing angular area-based routing, augmenting resource allocation and energy efficiency. Moreover, the study refines a node's probability of serving as a Cluster Head (CH) through an updated threshold formula. | http://dx.doi.org/10.12785/ijcds/160188 | Wireless Sensor Network; Area-based Routing; Node Longevity; LEACH; Energy Management; Cluster Head | Nirwana Haidar Hari and Mokh Sholihul Hadi (Universitas Negeri Malang, Indonesia); Sujito Sujito (State University of Malang, Indonesia) |
47 | 1570976434 | A Systematic Review on IoT and Machine Learning Algorithms in E-Healthcare | In recent years, the Internet of Things (IoT) has been adopted in many applications since its usage is essential to daily life. It is also a developing technology in the healthcare system for providing effective emergency services to patients. In the current scenario, medical cases and diseases among people are growing enormously. Thus, it is becoming challenging to accommodate and provide healthcare services for more incoming patients in clinics and hospitals with limited space and medical resources. Hence, the integration of IoT and assistive technologies came into the healthcare sector for providing efficient healthcare services wirelessly as well as for continuous monitoring of patients. With the help of IoT and Machine Learning technologies, healthcare providers can keep a closer eye on their patients and maintain more proactive lines of communication with them. Data collected from IoT devices can be fed to Machine Learning technologies for predicting and diagnosing diseases. Due to the severity of diseases, the lack of early disease prediction methods, limited resources, and a shortage of specialized doctors, many lives are lost. Hence, to address these issues in the healthcare domain, many research works based on Machine Learning and IoT-based healthcare systems have been proposed. This work comprehensively reviews the research works related to IoT-based healthcare systems and machine learning. | http://dx.doi.org/10.12785/ijcds/160122 | IoT; Machine Learning; Disease Diagnosis; E-healthcare | Deepika Tenepalli (VIT University, India); Navamani T m (VIT Vellore, India) |
48 | 1570976915 | Applying Process Mining to Generate Business Process Models from Smart Environments | The management of business processes has gone through several changes. On the one hand, business intelligence (BI) is becoming more popular among businesses as a way to cut costs, boost service quality, and enhance decision-making. On the other hand, business process management is increasingly applied in smart environments. The data sources produced in such environments via sensors, actuators, and other devices are varied and unstructured, so applying process mining techniques requires transforming them into a structured format. Several works have been done in this direction, and their authors have contributed improvements, but no approach yet formalizes the transformation in a general way regardless of the type of sensor data element. Our approach is based on a model-driven architecture (MDA), which allows us to generate source-to-target data transformations. The main objective is to establish the MDA approach via transformation rules based on machine learning techniques. | | Process mining (PM); raw sensor log; event log; Model Driven Architecture (MDA) | Iman EL Kodssi (University Hassan II, Morocco); Hanae Sbai (FST Mohammedia, Hassan II University of Casablanca, Morocco); Mustapha Kabil (University Hassan II, FST Mohammedia, Morocco) |
49 | 1570978553 | Deep Feature segmentation model Driven by Hybrid Convolution Network for Hyper Spectral Image Classification | Hyperspectral image (HSI) classification can support different applications, such as agriculture, military, city planning, land utilization, and identifying distinct regions. It is treated as a crucial topic in the research community. Recent advances in convolutional neural networks (CNNs) have shown a unique capability for extracting meaningful features and performing classification. However, a CNN works with square images of fixed dimensions and cannot extract local information from images having distinct geometric variations with context and content relationships; hence there is scope for improvement in correctly identifying class boundaries. Encouraged by these facts, we propose an HSI feature segmentation model driven by a hybrid convolution network (GCNN-RESNET152) for HSI classification. First, a CNN pre-trained on ImageNet is used to obtain multilayer features. Second, the 3D discrete wavelet transform image is fed into a graph convolution network (GCN) model to gain patch-to-patch correlation feature maps. Third, the features are integrated using a three-weighted-coefficient concatenation method. Finally, a linear classifier is used to predict the semantic classes of HSI pixels. The proposed model is tested on four benchmark datasets: Houston University (HU), Indian Pines (IP), Kennedy Space Station (KSS), and Pavia University (PU). The results are compared with state-of-the-art algorithms and found to be superior in terms of overall, average, and kappa accuracy. The overall, average, and kappa accuracies achieved are 97.7%, 99.4%, and 95.6% for HU; 97.7%, 99.4%, and 95.6% for IP; 97.48%, 99.68%, and 96.43% for KSS; and 97.7%, 99.4%, and 95.6% for PU, respectively, which is 5 to 8% more than state-of-the-art methods. | http://dx.doi.org/10.12785/ijcds/160153 | Hybrid Convolution Network; Hyper-spectral image; classification; deep feature segmentation | Rahul Ghotekar (KIIT University, India); Kailash Shaw (Department of AIML, Symbiosis Institute of Technology, Pune Campus, Symbiosis International (Deemed University), Pune, I); Minakhi Rout (Kalinga Institute of Industrial Technology, Bhubaneswar, Odisha, India) |
50 | 1570978793 | Generative Adversarial Networks for Facial Expression Recognition in the Wild | Modelling and recognizing people's emotions from their faces are challenging computer vision problems. Normally, we approach these issues by identifying Action Units (AUs), which have many applications in Human Computer Interaction. Although Deep Learning approaches have demonstrated a high level of performance in recognizing AUs and emotions, they require large datasets of expert-labelled examples. In this article, we demonstrate that good deep features can be learnt in an unsupervised fashion using Deep Convolutional Generative Adversarial Networks, allowing a supervised classifier to be learned from a smaller labelled dataset. The two main aspects addressed in this paper are: i) the creation of facial expression images under different arbitrary poses (frontal, multi-view, and in the wild); and ii) the recognition of emotion categories and Action Units. By experimenting with this different approach and using more varied datasets for feature learning and classification, we demonstrate more compelling generalization and better results. Compared to existing state-of-the-art methods, the proposed model achieves good performance, particularly on the Radboud dataset with an overall accuracy of 98.57%. | http://dx.doi.org/10.12785/ijcds/160193 | Affective computing; fine-tuning; transfer learning; FACS; generalization; relabelling | Luma Akram Alharbawee (University of Mosul, Iraq); Nicolas Pugeault (University of Glasgow, United Kingdom (Great Britain)) |
51 | 1570979096 | Halal Supply Chain Risk using Unsupervised Learning Methods for Clustering Leather Industries | Cowhide plays a significant role in Indonesia's culinary and leather industries and caters to the preferences of a predominantly Muslim population that strongly emphasizes halal products. Regulatory authorities must comprehensively understand its characteristics to effectively provide halal assurance to the diverse entities within Indonesia's leather industry. This study employs unsupervised learning methods, specifically the K-Means and Hierarchical clustering algorithms, to analyze a dataset comprising 100 cowhide Small and Medium Enterprise (SME) industries located in Garut Regency, West Java Province, Indonesia. This dataset encompasses 62 features that enable the clustering of cowhide industries based on halal risk factors. Experimental results indicate that the optimal number of clusters is m=4. The K-Means algorithm outperforms the Hierarchical clustering algorithm with a higher average silhouette score of 0.59 compared to 0.31, indicating its superior performance. Furthermore, the K-Means algorithm demonstrates exceptional stability in clustering the data, making it a robust choice for this analysis. The clustering outcomes of the cowhide SME industry provide valuable insights into the industry's characteristics, facilitating the efficient implementation of halal assurance measures. These findings hold substantial implications for the halal certification and assistance processes within the leather industry in Indonesia. | http://dx.doi.org/10.12785/ijcds/160165 | K-means clustering; Hierarchical clustering; Leather; Halal; Industries; SMEs | Rahmad Kurniawan (Universitas Riau, Indonesia & Universiti Kebangsaan Malaysia, Malaysia); Fitra Lestari and Mawardi M Mawardi (Universitas Islam Negeri Sultan Syarif Kasim Riau, Indonesia); Tengku Nurainun (State Islamic University of Sultan Syarif Kasim Riau, Indonesia); Abu Bakar Abdul Hamid (Universiti Putra Malaysia, Malaysia); Tisha Melia (Universitas Riau, Indonesia) |
52 | 1570979161 | Enhancing Trust between Patient and Hospital using Blockchain based architecture with IoMT | This paper addresses a critical issue within the healthcare industry, wherein concerns about transparency and accountability have arisen due to certain instances of potential misconduct. In particular, there have been allegations in certain countries that some healthcare institutions prolong the use of intensive care for deceased patients to generate additional revenue, leading to a severe erosion of trust between the families of patients and these institutions. To mitigate these concerns, we show how the implementation of a blockchain-based system integrated with the Internet of Medical Things (IoMT) can be beneficial in solving such problems. This approach seeks to enhance trust and transparency by autonomously capturing patient data and ensuring its immutability through blockchain technology. Blockchain can be implemented for IoT in multiple ways depending on the use case. We provide an architecture that adopts a decentralized framework for combating this particular problem efficiently. The system provides a secure, tamper-proof, and accountable platform that can alleviate trust issues and uphold the highest ethical standards in healthcare. Using this system, patients' relatives have real-time access to patient data and can understand the condition of the patient. | http://dx.doi.org/10.12785/ijcds/160123 | Blockchain; architecture; IoMT; IoT; healthcare; patient monitoring | Deepa Pavithran and Charles Shibu (Abu Dhabi Polytechnic, United Arab Emirates); Sudheer Madathiparambil (Ahalia Hospital Mussafah, United Arab Emirates) |
53 | 1570979168 | Improving Sentiment Analysis in Digital Marketplaces through SVM Kernel Fine-Tuning | The rapid growth of online marketplaces, especially in the digital realm, drives the need for in-depth studies of marketing strategies through public opinion, especially via the Twitter platform. Developing an automated online-marketplace sentiment analysis method can accelerate market analysis. Using machine learning to analyze market sentiment can help efficiently evaluate online marketplaces in Indonesia through Twitter applications. Our research compares the Support Vector Machine (SVM) algorithm with linear and non-linear kernels. Understanding the advantages and disadvantages of SVM algorithms is very important in identifying the sentiment towards online marketplaces, especially in Indonesia. In the evaluation, the non-linear SVM kernel (polynomial) achieved the highest accuracy, with 85% of the data used for training. The model predicts negative and positive labels with a precision of 0.93, while the recall value is 0.96 for negative and 0.90 for positive labels. The balanced f1-score values are 0.94 for negative and 0.90 for positive labels, achieving the highest accuracy and effectiveness in capturing people's responses to online marketplaces while maintaining robust precision, recall, and f1-score metrics. | http://dx.doi.org/10.12785/ijcds/160113 | SVM; Machine Learning; Marketplace Online; Indonesia | Abdul Fadlil (Universitas Ahmad Dahlan, Indonesia); Imam Riadi (Ahmad Dahlan University & Yogyakarta, Indonesia); Fiki Andrianto (Universitas Ahmad Dahlan, Indonesia) |
54 | 1570979201 | Lib-Bot: A Smart Librarian-Chatbot Assistant | The library is normally described as a knowledge warehouse, and a wealth of long-past knowledge can be found in the library. It is safe to say that most researchers or university students who study research-related subjects will have the experience of visiting the library to find relevant literature or references that are helpful for their projects. However, the librarians may not be available at the counter in the library all the time to solve visitors' problems, especially during peak hours. This will inevitably influence visitors' satisfaction and experience. Besides, during the COVID-19 pandemic, it has always been advised to avoid as much physical contact as possible. Thus, in this project, an AI chatbot will be implemented and applied to a library mobile application to answer library-related questions from the user. The Bidirectional Encoder Representations from Transformers (BERT) algorithm will be used to classify the intent of user messages. It is also observed that many chatbot-related applications only support text input for chatting. Hence, a speech-to-text recognition feature will be implemented for the library chatbot assistant as well, to enable both text and voice input. Moreover, the chatbot assistant may not be able to answer all questions. Therefore, for those queries that cannot be solved by the chatbot, the system will store the unsolved query in the database, and the library admins can view it on the admin portal site. The library admins can filter those queries and upload new training data for the AI model so that the chatbot can cover a wider range of questions. | http://dx.doi.org/10.12785/ijcds/160101 | BERT; Library; Chatbot; Machine Learning; Intent Classification | Tong-Jun Ng and Kok-Why Ng (Multimedia University, Malaysia); Su-Cheng Haw (MMU, Malaysia) |
55 | 1570980185 | CFCM-SMOTE: A Robust Fetal Health Classification to Improve Precision Modelling in Multi-Class Scenarios | The advent of cardiotocography (CTG) has radically transformed prenatal care, facilitating in-depth evaluations of fetal health. Despite this, the reliability of CTG is frequently undermined by data-related issues such as outliers and class-imbalanced data. To address these challenges, our study introduces an innovative integrated methodology that combines cluster-based fuzzy C-means (CFCM) with the synthetic minority oversampling technique (SMOTE) to improve the precision of fetal health status classification in multi-class scenarios. We used a considerable dataset from the UCI Machine Learning Repository, employing CFCM to manage outliers and SMOTE to rectify class-imbalanced data. This approach significantly improved the performance of the classification algorithm, a fact corroborated by comprehensive experimental validation. We observed notable improvements in several evaluation metrics, including precision (PRC), sensitivity (SNS), specificity (SPC), F1 score (F1-S), and accuracy (ACC), surpassing the capabilities of prior methodologies. Specifically, the deployment of our algorithm amplified the precision (PRC: from 98.16% to 99.58%), sensitivity (SNS: from 95.82% to 100%), specificity (SPC: from 85.81% to 99.75%), F1 score (F1-S: from 96.98% to 99.79%), and accuracy (ACC: from 94.20% to 99.84%) of the Classification and Regression Tree (CART) algorithm for the 'normal' class, while also improving the precision and accuracy of the Random Forest (RF) algorithm from PRC: 94.77% to 95.89% and ACC: 90.60% to 97.45%. These results confirm the potential of CFCM-SMOTE as a robust model for fetal health diagnostics and as a basic strategy for the development of predictive analyses in prenatal healthcare. | http://dx.doi.org/10.12785/ijcds/160137 | Fetal health classification; Cardiotocography; Imbalanced class; CFCM-SMOTE | Ahmad Ilham, AI (Universitas Muhammadiyah Semarang, Indonesia); Asdani Kindarto (Semarang Government Municipality & Universitas Muhammadiyah Semarang, Indonesia); Akhmad Fathurohman (Universitas Muhammadiyah Semarang, Indonesia); Laelatul Khikmah (Institute of Statistics and Business Technology Muhammadiyah Semarang, Indonesia); Rima Dias Ramadhani, Safrie Abdunnasir Jawad, Dhewi April Liana and Aura Amylia. AR (Universitas Muhammadiyah Semarang, Indonesia); Ahmed Kareem Oleiwi (Zhengzhou University, Iraq); Astri Mutiar (Sekolah Tinggi Ilmu Keperawatan PPNI Jawa Barat, Bandung, Indonesia) |
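The SMOTE half of the CFCM-SMOTE pipeline boils down to interpolating new minority-class samples between existing ones. A minimal stdlib-only sketch of that core step (not the authors' full CFCM-SMOTE method) might look like:

```python
import math
import random

def smote(minority, n_new, k=3, rng=None):
    """Core SMOTE step: synthesize n_new minority samples by linear
    interpolation between a sample and one of its k nearest minority
    neighbours. `minority` is a list of equal-length feature vectors."""
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neigh = sorted((p for p in minority if p is not x),
                       key=lambda p: math.dist(x, p))[:k]
        nn = rng.choice(neigh)
        gap = rng.random()  # random point on the segment x -> nn
        synthetic.append([xi + gap * (ni - xi) for xi, ni in zip(x, nn)])
    return synthetic
```

The paper's contribution is the pairing: fuzzy C-means clustering cleans outliers first, so the interpolation above does not synthesize points toward noisy samples.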
56 | 1570980504 | K-means clustering -based Trust (KmeansT) evaluation mechanism for detecting Blackhole attacks in IoT environment | The Internet of Things (IoT) offers numerous applications, making our lives easier and more comfortable. However, its significant features raise various research challenges, among which security is a main concern, since sensitive information is handled in the IoT environment. The environment opens loopholes for various attacks that intentionally harm the network. The blackhole attack is one such attack, harming routing operations by dropping all incoming packets. To address this issue, a K-means clustering-based Trust (KmeansT) evaluation mechanism is proposed. Trust evaluation is performed using both direct observations and trust recommendations given by other nodes, after which the k-means clustering algorithm is applied to enhance the evaluation mechanism. The proposed model effectively identifies blackhole attacks, and mathematical models of the proposed work confirm the effectiveness of detection. Simulation results are analysed by comparing them with existing similar models in terms of various performance metrics. | http://dx.doi.org/10.12785/ijcds/160154 | Internet of Things; Security; Blackhole attack; Trust; K-means clustering | Shameer M (Karpagam Academy of Higher Education, India); Gnanaprasanambikai L (Karpagam Academy of Higher Education, India & Coimbatore, India) |
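The abstract's two-stage idea — combine direct observations with others' recommendations into a trust score, then cluster nodes to isolate the low-trust group — can be sketched as follows. The weighting `alpha` and the two-cluster decision rule are illustrative assumptions, not the authors' exact formulation:

```python
def trust_scores(direct, recommended, alpha=0.7):
    """Combine direct observations and neighbour recommendations into a
    single trust value per node (alpha is a hypothetical weight)."""
    return {n: alpha * direct[n] + (1 - alpha) * recommended[n]
            for n in direct}

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns (low_centroid, high_centroid)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        low = [v for v in values if abs(v - lo) <= abs(v - hi)]
        high = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(low) / len(low) if low else lo
        hi = sum(high) / len(high) if high else hi
    return lo, hi

def detect_blackholes(direct, recommended):
    """Flag nodes in the low-trust cluster as suspected blackhole nodes."""
    scores = trust_scores(direct, recommended)
    lo, hi = kmeans_1d(list(scores.values()))
    boundary = (lo + hi) / 2
    return {n for n, s in scores.items() if s < boundary}
```

Clustering rather than a fixed trust threshold lets the boundary adapt to each network's score distribution, which is the benefit the abstract attributes to the k-means stage.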
57 | 1570980519 | Use of AI Applications in order to Learn the Sentiment Polarity of Public Perceptions: A Case Study of the COVID-19 Vaccinations in the UAE | Artificial Intelligence (AI) has revolutionized predictive, forecasting, and classification capabilities, finding applications across diverse domains. This paper investigates AI's role in analyzing sentiment polarity within the public's perceptions of post-COVID-19 vaccination providers in the United Arab Emirates (UAE). AI's prevalence in medical research, particularly for low-computation tasks, underscores its significance. Amidst abundant medical resources, concerns about the safety and efficacy of various COVID-19 vaccines persist. Social media platforms serve as dynamic spaces where individuals share vaccination experiences, shaping public perceptions. Recognizing AI's pivotal role in influencing business perception, this study exploits AI to extract insights into current vaccine perceptions. This methodology employs data mining to analyze textual data, classifying and clustering social media posts into distinct groups. Emotional labeling discerns sentiments associated with each vaccine. The dataset includes tweets on Pfizer-BioNTech, Sinopharm, Sputnik V, and Oxford-AstraZeneca—chosen for their availability in the UAE, where around 70% of the population received vaccinations by June 2021. This analysis aims to understand public sentiment, identifying preferences and concerns. Findings offer valuable insights into how these vaccines are perceived in the UAE, contributing to the broader discourse on AI, public perception, and healthcare. The study informs decision-makers and health authorities in refining communication strategies and addressing public concerns. | http://dx.doi.org/10.12785/ijcds/160187 | Sentiment Analysis; LSTM; SMOTE; Public Perception; COVID-19; RNN | Abdulrahman Turki Radaideh and Fikri Dweiri (University of Sharjah, United Arab Emirates) |
58 | 1570980761 | Image Classification Based on Disaster type Using Deep Learning | People nowadays use social media platforms to capture and share real-time incidents in the form of images, videos and text. However, sharing too much information at once makes it harder for first responders to determine where exactly individuals are in need and whether they require immediate assistance. In the past, machine learning techniques were used to automatically identify and infer disaster response from images, as manually identifying disaster types is challenging. Therefore, in this paper, deep learning models are used to investigate how well they can classify images according to their disaster type by learning the features extracted from the input images on their own. In this study, two existing datasets organized by disaster type, namely the 'Comprehensive Disaster Dataset' (CDD) and the 'Natural Disaster Dataset' (NDD), were customized into a dataset entitled the 'Customized Disaster Dataset'. The Customized Disaster Dataset comprises a total of ten classes, three of which are non-damage images. Pre-trained models such as MobileNetV2, VGG16 and InceptionV3 were trained on the datasets to allow for comparison with existing studies. Along with that, a customized neural network model was created and trained on the datasets. Different scenarios were devised to assess the top three performing models. InceptionV3, the best model, had a classification accuracy of 96.86%. In this study, we have demonstrated the effectiveness of CNN models as a tool for automatic disaster type classification. | http://dx.doi.org/10.12785/ijcds/1601109 | deep learning; disaster; image classification | Sameerchand Pudaruth and Anisha Coopen (University of Mauritius, Mauritius) |
59 | 1570981307 | Review of Compensation and Dispersion Techniques for Fiber Optic Lightpath Networks | Fiber optic communication systems offer high-speed, long-distance connectivity and effective data transmission for modern-world applications. A primary challenge in optical communication is the detrimental impact of dispersion, which introduces signal distortion and lowers the quality of data transmission. This research provides a detailed analysis of dispersion compensation techniques and their necessity for maintaining the integrity of optical communication. The review explores the fundamentals of dispersion, such as chromatic, polarization-mode, and modal dispersion, and the factors that influence dispersion characteristics. Furthermore, it analyses passive and active compensation techniques and highlights their significance and limitations. Active dispersion compensation methods, such as dispersion compensating modules and digital signal processing, are investigated for dynamic optical networks, while passive techniques, such as fiber Bragg gratings and dispersion-compensating fibers, are examined in detail, listing their ability to mitigate dispersion effects. Finally, this comprehensive review provides key insights into developments and prospects in dispersion compensation techniques for enhancing the performance and reliability of optical system design. | http://dx.doi.org/10.12785/ijcds/160155 |  | Sudha Sakthivel (Malaysian Institute of Information Technology, Universiti Kuala Lumpur, India); Muhammad Mansoor Alam (Riphah International University, Islamabad, Pakistan & Universiti Kuala Lumpur, Kuala Lumpur, Malaysia); Aznida Abu Bakar Sajak (Universiti Kuala Lumpur, Malaysia & University of Liverpool, United Kingdom (Great Britain)); Mazliham Mohd Suud (Multimedia University, Malaysia); Mohammad Riyaz Belgaum (G Pullaiah College of Engineering and Technology, India & Multimedia University, Cyberjaya. Malaysia, Malaysia) |
60 | 1570981332 | Harnessing Deep Learning for Early Breast Cancer Diagnosis: A Review of Datasets, Methods, Challenges, and Future Directions | Breast cancer is the most common kind of cancer diagnosed worldwide and the leading cause of cancer-related deaths among women; it therefore presents a significant public health risk. Early identification and diagnosis of malignant breast tumors can significantly increase patient survival rates and facilitate effective treatment. Imaging is one of the key procedures in decision-making for diagnosing breast cancer. In particular, mammography is the most efficient imaging technique, highly recommended by radiologists for identifying many types of breast abnormalities. However, with the daily growth in mammography volume, it remains challenging for radiologists and doctors to give correct and consistent interpretations, which can lead to potential misinterpretations and unneeded biopsies. In this context, various researchers have investigated the use of mammography and Deep Learning (DL) approaches for accurate early breast cancer diagnosis. Utilizing these approaches in clinical settings can increase diagnosis accuracy, save time, lower the likelihood of mistakes and errors, increase patient satisfaction, and streamline radiologists' workloads. The basic ideas of healthy breast tissue, breast cancer, mammography, and deep learning are briefly presented in this review. The paper delves into the latest advances in systems that apply deep learning algorithms to breast cancer diagnosis using mammograms. Additionally, it provides a concise overview of publicly available mammogram datasets and explores the most widely used metrics for evaluating computer-aided breast cancer diagnosis systems. Finally, issues and potential research objectives in this developing field are outlined. This paper presents a comprehensive examination of the topic and intends to inspire and direct medical professionals, researchers, scientists, and other healthcare workers interested in creating cutting-edge applications for early breast cancer diagnosis using mammography image processing. | http://dx.doi.org/10.12785/ijcds/1601122 |  | Marwa Ben Ammar (University of Tunis El Manar & Higher Institute of Medical Technologies of Tunis, Tunisia); Faten Labbene Ayachi (SupCom, Tunisia); Anselmo Cardoso de Paiva (Federal University of Maranhão, Sao Luis, MA, BR, Brazil); Riadh Ksantini (University of Bahrain, Bahrain); Halima Mahjoubi (University of Tunis El Manar, Tunisia) |
61 | 1570981649 | Exploring Deepfake Detection: Techniques, Datasets, and Challenges | Deepfake detection is an active area of research due to the extensive use of deepfake media to spread false information, manipulate public opinion and cause harm to individuals. This paper presents a critical and systematic review of 84 articles on deepfake generation and detection. We review the current state-of-the-art techniques for deepfake detection by grouping them into four categories: deep learning-based, traditional machine learning-based, artifact analysis-based and biological signal-based methods, and we survey the datasets used for training and testing deepfake detection models. We also discuss the evaluation metrics used to measure the effectiveness of these methods and the challenges and future directions of deepfake detection research. Our findings suggest that deep learning models demonstrate superior accuracy compared to other methods, and artifact analysis-based methods show greater potential in precision, but there is still room for improvement in detecting more sophisticated and realistic deepfakes. | http://dx.doi.org/10.12785/ijcds/160156 |  | Sandhya Bansal (M. M. E. C, India); Preeti Rana (Computer Science and Engineering Maharishi Markandeswar Engineering College MMDU, India) |
62 | 1570983224 | Improved LMS Performance for System Identification Application | The adaptive filtering algorithm of Normalized Least Mean Square (NLMS) is known to be highly efficient, requiring fewer iterations than the reference Least Mean Square (LMS) method at the cost of increased computational complexity. The performance of the simple LMS method is highly dependent on the step size, which is assigned and fixed at the beginning of the iterations. Throughout the literature, only the range of the LMS step size that assures stability is usually suggested, while the selection of the most suitable value within this range is still not thoroughly studied. This work proposes using the step size value of the first NLMS iteration to adjust the LMS step size, which the results show to be highly effective in approaching NLMS behavior without increasing the computational burden. Specifying the LMS step size in this way can provide simplicity, accuracy and high convergence speed, not only for system identification but also for many other adaptive filtering applications. | http://dx.doi.org/10.12785/ijcds/160194 | Adaptive filters; Convergence; LMS; NLMS; Step size | Wasa Mamdoh Abdulatef (Ninevah University & Communication, Iraq); Zainab Rami Alomari and Mahmod Ahmed Mahmod (Ninevah University, Iraq) |
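The proposed step-size rule is concrete enough to sketch: compute the normalized step size that NLMS would apply on its first iteration and freeze it as the fixed LMS step size. The sketch below is one reading of that idea, with a hypothetical 3-tap unknown system; `mu_nlms` and `eps` are assumed values, not the paper's:

```python
import random

def identify_fir(x, d, n_taps, mu):
    """Plain LMS system identification: adapt w so that w . x(n) ~ d(n)."""
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]  # current input vector [x(n), x(n-1), ...]
        e = d[n] - sum(wi * xi for wi, xi in zip(w, xn))
        w = [wi + mu * e * xi for wi, xi in zip(w, xn)]
    return w

# Hypothetical 3-tap FIR "unknown system" and its noise-free output.
h = [0.5, -0.3, 0.2]
rng = random.Random(1)
x = [rng.uniform(-1, 1) for _ in range(2000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]

# The paper's rule as sketched here: take the first-iteration NLMS
# normalized step size, mu_nlms / (eps + ||x(0)||^2), and use it as the
# fixed LMS step size for the whole run.
mu_nlms, eps = 0.5, 1e-6
x0 = x[:len(h)]
mu_lms = mu_nlms / (eps + sum(v * v for v in x0))

w = identify_fir(x, d, len(h), mu_lms)  # w converges toward h
```

This keeps LMS's per-iteration cost (no norm computation inside the loop) while inheriting a data-scaled step size from NLMS, which is the trade-off the abstract highlights.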
63 | 1570988864 | Deep Learning Based Hyperspectral Image Classification: A Review For Future Enhancement | The use of Hyperspectral Images (HSI) has become prevalent in many sectors due to their ability to capture detailed spectral information (i.e., relationships between the collected spectral data and the object in the HSI data) that cannot be obtained through ordinary imaging. Traditional RGB image classification approaches are insufficient for hyperspectral image classification (HSIC) because they struggle to capture the subtle spectral information within hyperspectral data. In the past few years, Deep Learning (DL) based models have become very powerful and efficient non-linear feature extractors for a wide range of computer vision tasks. Furthermore, DL-based models are exempt from manual feature extraction. This stimulus prompted researchers to use DL-based models for the classification of hyperspectral images, which yielded impressive results. Deeper networks might encounter vanishing gradient problems, making optimization more difficult; to address this issue, regularisation and architectural improvements are being implemented. One of the key issues is that DL-based HSIC models require a large number of training samples, an important concern with hyperspectral data due to the scarcity of public HSI datasets. This article provides an overview of deep learning for hyperspectral image classification and assesses the most recent methods. Among all studied methods, SpectralNET offers significantly better performance, due to its utilization of the wavelet transformation. | http://dx.doi.org/10.12785/ijcds/160133 | Hyperspectral image; Hyperspectral image classification; Deep learning based hyperspectral image classification; Deep learning | Anish Sarkar and Utpal Nandi (Vidyasagar University, India); Nayan Kumar Sarkar (North Eastern Regional Institute of Science and Technology (NERIST), India); Chiranjit Changdar (BELDA COLLEGE, India); Bachchu Paul (Vidyasagar University, India) |
64 | 1570989134 | Ferritin Level Prediction in Patients with Chronic Kidney Disease Using Cluster Centers on Fuzzy Subtractive Clustering | It is critical to understand the iron reserves of hemodialysis patients with chronic kidney disease (CKD), in order to determine early on whether the patient has an iron deficiency or, conversely, an accumulation of serum iron. One of the elements that can be utilized to evaluate this is the ferritin level. Unfortunately, testing ferritin levels is still considered quite expensive. Therefore, this study predicts ferritin levels from simple and low-cost variables such as height, weight, blood pressure, duration of hemodialysis, history of comorbidities, and Hb levels before and after hemodialysis. A clustering algorithm is used for grouping since the sample data is very diverse. Using the concept of density, 50 patient conditions were adaptively categorized by fuzzy subtractive clustering (FSC), yielding eight clusters as the final result. The correlation coefficient shows that the strongest relationship with ferritin levels occurs in the blood pressure variable. The average similarity between the expected and actual levels of ferritin was 62.53%. The average similarity value obtained from tests on the nine testing data sets was 83.74%. This study is still limited to predicting based on ferritin levels. Further research will be conducted using other factors such as serum iron. | http://dx.doi.org/10.12785/ijcds/160132 | fuzzy clustering; prediction; density; chronic kidney disease | Linda Rosita and Sri Kusumadewi (Universitas Islam Indonesia, Indonesia); Tri Ratnaningsih and Nyoman Kertia (Universitas Gadjah Mada, Indonesia); Barkah Djaka Purwanto (Universitas Ahmad Dahlan, Indonesia); Elyza Gustri Wahyuni (Universitas Islam Indonesia, Indonesia & Fakultas Teknologi Industri, Indonesia) |
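The density-based centre selection of fuzzy subtractive clustering can be sketched as follows: every point's potential is the sum of Gaussian influences of all points, the highest-potential point becomes a centre, and potentials near it are then suppressed. The radius, squash factor, and stopping ratio below are illustrative defaults, not the values used in the study:

```python
import math

def subtractive_clustering(points, ra=0.5, squash=1.5, stop_ratio=0.15):
    """Select cluster centres by data density (subtractive clustering)."""
    rb = squash * ra  # suppression radius is wider than the influence radius
    d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # Initial potential of each point: sum of Gaussian influences.
    pot = [sum(math.exp(-4 * d2(p, q) / ra ** 2) for q in points)
           for p in points]
    centres = []
    first_peak = max(pot)
    while True:
        i = max(range(len(points)), key=lambda j: pot[j])
        if pot[i] < stop_ratio * first_peak:
            break  # remaining density too low to justify another centre
        centres.append(points[i])
        peak = pot[i]
        # Suppress potential around the newly chosen centre.
        pot = [p - peak * math.exp(-4 * d2(points[j], points[i]) / rb ** 2)
               for j, p in enumerate(pot)]
    return centres
```

Because the number of centres emerges from the data density rather than being fixed in advance, a run over diverse patient records can "adaptively" yield the eight clusters the study reports.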
65 | 1570989170 | A Deep Learning Approach for Enhancing Tuberculosis Classification Leveraging Optimized Sequential AlexNet (OSAN) | In this research paper, we present an innovative tuberculosis (TB) classification model built upon the well-established AlexNet architecture, with a primary emphasis on its outstanding performance in the realm of TB detection. Tuberculosis remains a formidable challenge to global healthcare systems, particularly in resource-limited settings. Timely and accurate diagnosis is of paramount importance for the effective management and containment of this disease. Our approach entails meticulous architectural refinements and rigorous training on a diverse dataset encompassing a wide spectrum of TB-related symptoms. This comprehensive training ensures the model's adaptability and resilience in addressing real-world diagnostic complexities. The central objective of our OSAN model is to categorize medical images into two crucial groups: "normal" and "TB-infected." The outcomes achieved are truly noteworthy, with a classification accuracy rate of 99.67%. This exceptional level of accuracy underscores the model's potential to bring about transformative changes in TB diagnostics. It holds the promise of early identification, facilitating prompt intervention, and ultimately leading to improved patient outcomes. Our research contributes to the overarching objective of enhancing patient care and supporting global health initiatives. By providing a reliable and accessible tool for TB diagnosis, our model has the potential to make a significant impact in the battle against this persistent global health menace. | http://dx.doi.org/10.12785/ijcds/1601112 | AlexNet; Chest X-rays; Convolutional Neural Networks; OSAN model; Tuberculosis | Shanmuga Sundari M, Vidyullatha Sukhavasi, Swapna D, Durga Kbks, Poonam Shaylesh Lunawat and Mayukha Mandya Ammangatambu (BVRIT HYDERABAD College of Engineering for Women, India) |
66 | 1570989203 | Exploring Research Challenges of Blockchain and Supporting Technology with Potential Solution in Healthcare | The healthcare industry is currently experiencing a digital revolution driven by developments in information technology. This transformation is geared toward improving health care, diagnosis, and continuous surveillance by utilising smart devices. Digitization improves the efficiency of computing, storing, and accessing medical records, resulting in enhanced patient treatment experiences. The convergence of artificial intelligence with blockchain has great potential to revolutionize healthcare by improving security, cross-platform communication, and decision-making processes. Nevertheless, the combination of these two advanced technologies poses numerous research challenges that require comprehensive investigation. The objective of the study is to conduct an early investigation into blockchain-based artificial intelligence approaches and then to recognize and address certain unresolved research challenges that must be tackled in the future. The research suggests that the problems of excessive energy usage and low transaction rates can be eliminated by improving consensus algorithms, on which the efficiency of both depends. The scalability issue may be resolved in the future using forks and sharding techniques, and as far as regulatory issues are concerned, governments must set up common policies to follow. | http://dx.doi.org/10.12785/ijcds/160138 | Blockchain; Artificial Intelligence; Health; Machine Learning; Deep Learning; Healthcare | Shilpi Garg (Chitkara University & Chitkara Institute of Engineering and Technology, India); Rajesh Kaushal and Naveen Kumar (Chitkara University, India); Ekkarat Boonchieng (Chiang Mai University, Thailand) |
67 | 1570989599 | The Optic Disc Detection and Segmentation in Retinal Fundus Images Utilizing You Only Look Once (YOLO) Method | Automated analysis of retinal images is an essential diagnostic method for the early detection of disorders affecting the eyes, such as glaucoma, diabetic macular edema (DME), and diabetic retinopathy. This paper presents a robust methodology for optic disc detection and segmentation using a deep learning-based technique, comparable to the initial stage of creating a computer-assisted diagnostic system for diabetic macular edema in retinal images. The suggested approach uses the YOLO (You Only Look Once) algorithm for object detection and merges bounding boxes that belong to the same category by comparing the Intersection over Union (IOU) value of each bounding box with the others: boxes whose IOU exceeds a threshold are considered the same target, and the bounding box with the highest reliability is maintained. Three publicly accessible retinal image databases are used to quantitatively assess the technique: Messidor-1, Messidor-2 and the IDRID database. The technique yields an optic disc identification success rate of 99.5%, a precision of 99.9%, and a recall of 100% on Messidor-1, and achieves accuracies of 99.1% and 98.7% on the full Messidor-2 and IDRID databases, respectively. For the identification and border extraction of the optic disc, this technique has demonstrated a notable improvement over previous approaches. | http://dx.doi.org/10.12785/ijcds/160139 | fundus pictures; you only look once; diabetic; optic discs; diabetic macular edema | Zahraa Jabbar Hussein (University of Kufa, Iraq & University of Babylon, Iraq); Enas Hamood (University of Babylon, Iraq) |
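The IOU-based merging step described in the abstract — compare each bounding box's IOU against the others, treat boxes above the threshold as the same target, and keep the most reliable one — can be sketched as:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_detections(boxes, scores, threshold=0.5):
    """Keep the most confident box among detections of the same target:
    a box whose IOU with an already-kept box exceeds the threshold is
    treated as the same optic-disc candidate and dropped."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) <= threshold for k in kept):
            kept.append(i)
    return [boxes[i] for i in kept]
```

This is the standard non-maximum suppression pattern; the threshold value here is illustrative, as the paper does not state the one it used.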
68 | 1570990847 | Deep Neural Networks for Classifying Nutrient Deficiencies in Rice Plants Using Leaf Images | Nutrients are vital in ensuring expected crop growth and yield quality. Accurate identification of nutrient deficiencies in plants is essential to provide appropriate supplements of fertilizers. Manual inspection of symptoms and identifying nutrient deficiencies is a tiresome task requiring higher expertise. This paper aims to design and develop a computationally efficient deep-learning model to classify plant nutrient deficiencies accurately. This paper presents an image-based deep-learning framework for nutrient deficiency identification. Three deep learning models, namely the Xception model, vision transformer, and multi-layer perceptron-based (MLP) mixer model, were trained to identify nitrogen (N), phosphorous (P), and potassium (K) deficiencies in rice plants from red-green-blue (RGB) images. The model performance is tested on nutrient deficiency symptoms in rice plants dataset available publicly on Kaggle. All three models achieved nutrient deficiency classification accuracy greater than 92%. The Xception model achieved the highest average accuracy of 95.14% at the cost of approximately 1.2 million total trainable parameters, much less than the vision transformer and MLP mixer model. The Xception model performs better as compared to the other two models in classifying nutrient deficiencies with the least number of total trainable parameters. In the future, these neural networks can be trained and extended to accurately detect and segment nutrient-deficient crop areas in large fields to supply precise fertilizer supplements. | http://dx.doi.org/10.12785/ijcds/160124 | Deep learning; Plant nutrient deficiency classification; Xception model; Vision transformer; MLP mixer model | Shrikrishna Kolhar (Symbiosis Institute of Technology, Symbiosis International (Deemed) University (SIU), Pune, India); Jayant Jagtap (NIMS School of Computing and Artificial Intelligence, India & NIMS University, India); Rajveer K Shastri (Vidya Pratishthan's College of Engineering, Baramati & University of Pune, India) |
69 | 1570991820 | Epilepsy Identification using Hybrid CoPrO-DCNN Classifier | The Electroencephalogram (EEG) is progressively developing as a remarkable record of neuronal activity. It comprises massive information that is used for identifying abnormality and dealing with intellectual disorders and irregularities. The present paper studies the EEGs of abnormal subjects, which are analyzed with respect to normal subjects. Numerous features such as mean, entropy, and wavelet bands are evaluated and compared. Building upon the adaptive hunting strategies observed in coyotes, this hybrid computational model is fused with deep learning architectures to enhance diagnostic accuracy. The methodology involves the creation of a unique computational algorithm inspired by coyote hunting behaviors, integrated with deep neural networks. This hybrid model is applied to analyze EEG data for brain disorder detection, leveraging both the biologically inspired algorithm and the data-driven capabilities of deep learning. Regarding the results, the proposed scheme exhibits promising diagnostic accuracy, achieving an accuracy rate of 98.65% for training (True Positive - TP) and 98.82% utilizing k-fold validation. These preliminary results demonstrate the potential effectiveness of the hybrid approach in accurately detecting brain disorders from EEG signals. However, it is important to note that these results are indicative of initial success and represent part of the comprehensive evaluation conducted in this study. | http://dx.doi.org/10.12785/ijcds/160157 | Epilepsy; classifier; Electroencephalogram; wavelet; deep learning | Ganesh Basawaraj Birajadar and Altaf Osman Mulani (SKN Sinhgad College of Engineering Pandharpur, India); Khalaf Ibrahim Osamah (Al-Nahrain University, Iraq); Nesren Farhah (Saudi Electronic University, Saudi Arabia); Pravin Gawande (Savitribai Phule University, India); Kishor Kinage (D J Sanghvi College of Engineering, India); Abdulsattar Abdullah Hamad (University of Samarra, Iraq) |
70 | 1570992362 | Retinopathy of Prematurity Disease Diagnosis Using Deep Learning | Retinopathy of Prematurity (ROP) is a disease affecting infants born preterm: at birth their retina is not fully developed, and in most cases the retinal veins do not develop to full term afterwards. Sometimes these veins stop growing and then suddenly start growing in the wrong direction, and this abnormality causes retinal traction, leading to blindness. Each country has its own screening guidelines for diagnosis. The disease can be categorized as severe or mild and has five stages. Stages one and two are not severe and can develop and heal unnoticed. Stage three should be diagnosed because it is reversible through treatment, but when the disease progresses to stage four, retinal traction occurs, causing blindness at stage five. The emergence of digital imaging support has led hospitals to capture retina images to determine the presence or absence of severe ROP. These images can be used to determine the presence of retinal detachment or a lack of vein growth. Diagnosis of the disease is expensive, few eye specialists are available in hospitals, and the process of capturing retina images by non-specialists and transmitting them to specialists for diagnosis poses many issues: different cameras produce images of different contrast, and transmission may reduce image quality depending on the channel. These challenges call for the development of systems that support both image quality assessment and assistive disease diagnosis. This paper proposes a deep learning model to assist ophthalmologists in determining the presence or absence of the disease and in diagnosing it at stage three. Data obtained from two databases, the Kaggle database and the HVDROPDB database, were used for model training, testing and validation, with the model achieving an accuracy of 92.8%, a sensitivity of 94.9%, and a precision of 97.3%. | http://dx.doi.org/10.12785/ijcds/160180 | Retina Image Analysis; Retinopathy Classification; Eye Disease Diagnosis; Image Quality; Deep learning | Elizabeth Ndunge Mutua and Bernard Shibwabo Kasamani (Strathmore University, Kenya); Christoph Reich (Hochschule Furtwangen University, Germany) |
71 | 1570994004 | Machine Learning Based Smartphone Screen Gesture Recognition Using Smartphone Embedded Accelerometer and Gyroscope | In recent times, smartphone usage has become increasingly popular for learning. Users exhibit multiple gesture interactions with smartphones while reading, which can provide valuable implicit feedback about the content consumed. Smartphones have many embedded sensors which capture a plethora of user interaction data. The on-device gyroscope and accelerometer can be enabled to capture the variations caused by gesture interactions like scrolling, pinch-to-zoom, tap, orientation change and screen capture. This research work trains machine learning classifier models on smartphone sensor readings to identify users' screen gesture interactions. Data for the classifier was collected from 47 users in total using an Android application. Aggregated time-domain feature extraction was computed on the preprocessed data, and four groups of data were used to train the models. Extensive experiments test the success of the proposed system using Random Forests, Support Vector Machine (SVM), Extreme Gradient Boost (XGB), ADA Boost, Naïve Bayes (NB) and K-Nearest Neighbour (KNN), with detailed analysis of the success rate and accuracy. The best identification accuracy of 97.58% is achieved by the Random Forest classifier, followed by Extreme Gradient Boost and K-Nearest Neighbour with accuracies of 95.97% and 93.55%, respectively. | http://dx.doi.org/10.12785/ijcds/160166 | Gesture recognition; Smartphone sensors; Mobile sensing; Screen gestures; Online learning; Implicit feedback | Priyanka Bhatele (Dr Vishwanath Karad MIT World Peace University, India); Mangesh V Bedekar (School of Computer Engineering and Technology & MIT World Peace University, India) |
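The aggregated time-domain feature extraction step can be sketched as below; the particular feature set (mean, standard deviation, min, max, RMS) is an assumption for illustration, since the abstract does not enumerate the features it computed:

```python
import math
import statistics

def window_features(samples):
    """Aggregated time-domain features for one axis of one sensor window."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return {
        "mean": statistics.fmean(samples),
        "std": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
        "rms": rms,
    }

def feature_vector(window):
    """Flatten per-axis features of a multi-axis (accel + gyro) window
    into one vector for a classical classifier such as Random Forest."""
    vec = []
    for axis_samples in window:  # e.g. [ax, ay, az, gx, gy, gz]
        f = window_features(axis_samples)
        vec.extend([f["mean"], f["std"], f["min"], f["max"], f["rms"]])
    return vec
```

Each gesture window (scroll, tap, pinch, etc.) becomes one fixed-length vector, which is what makes tree ensembles and KNN applicable to the raw sensor streams.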
72 | 1570994238 | Enhancing EHR Sharing through Interconnected Blockchains via Global Smart Contracts | Blockchain technology has ushered in transformative possibilities within the healthcare sector by creating a unified distributed network that streamlines the exchange of patient data among various stakeholders. However, the adoption of private or consortium-based blockchain models has raised concerns about the potential isolation and fragmentation of these networks. To address this challenge, blockchain interoperability has emerged as an escalating research area that offers a means for independent blockchains to collaborate across diverse platforms within a federated ecosystem. This study proposed a novel cross-chain communication protocol designed to integrate independent blockchains operating on different platforms. By leveraging a global smart contract triggering mechanism, this protocol establishes a standardized transaction conversion module to ensure transaction compatibility across various blockchain platforms within a federated network. The practical implementation of our cross-chain communication protocol was demonstrated through the exchange of electronic health records between the Hyperledger Fabric and Ethereum networks. Extensive experimentation was conducted to assess the performance metrics, revealing critical dependencies between the source and target blockchain networks in terms of the average elapsed time and query processing duration within the target network. The findings of this study underscore the considerable potential of blockchain interoperability within a federation, particularly when applied to the sharing of patient EHRs dispersed across multiple autonomous blockchains. | http://dx.doi.org/10.12785/ijcds/1601117 | Blockchains integration; cross-chain communication; electronic health record sharing; inter-blockchains communication; global smart contract; connected blockchains | Faiza Hashim (United Arab Emirates University, UAE, United Arab Emirates); Khaled Shuaib and Ezedin Barka (UAE University, United Arab Emirates); Farag Sallabi (United Arab Emirates University, United Arab Emirates) |
73 | 1570994361 | A Novel Blind Audio Source Separation Utilizing Adaptive Swarm Intelligence and Combined Negentropy-Cross Correlation Optimization | This paper presents a novel computational framework for blind audio source separation (BASS) that enhances existing Independent Component Analysis (ICA) with an adaptive swarm intelligence algorithm (ASIA). The proposed ASIA methodology addresses the challenge of optimal parameter determination in the stochastic optimization process of swarm intelligence for estimating the precise unmixing matrix. To ensure that the separated signals are as independent as possible in the BASS task, a complex and non-convex optimization problem is formulated in which the unmixing matrix is tuned to minimize mutual information and maximize the non-Gaussianity of the signals. To solve this optimization problem, the study introduces a weighted combination of negentropy and cross-correlation in the fitness function of the proposed ASIA. This unique approach ensures maximum statistical independence of the signals separated from the unknown mixtures. Analysis of the experimental outcomes demonstrates that the proposed framework exhibits superior blind separation of mixed audio signals, showing enhanced computational efficiency and de-mixing accuracy compared to conventional baseline approaches. In summary, this paper presents a unique approach to blind audio source separation in the over-determined scenario that combines adaptive PSO with ICA, with the main goal of finding an optimal de-mixing matrix that efficiently separates mixed signals. The presented approach incorporates an adaptive inertia weight and velocity clamping mechanism into traditional PSO, which effectively addresses the challenges associated with parameter determination in stochastic optimization techniques. | http://dx.doi.org/10.12785/ijcds/160192 | Audio Signal; Mixed Signal; Blind Source Separation; Swarm optimization; ICA | Pushpalatha G (Visveswaraya Technological University, India); B Sivakumar (Ambekar Institute of Technology, India)
74 | 1570995437 | Optimizing Multi-Level Crop Disease Identification using Advanced Neural Architecture Search in Deep Transfer Learning | Efficiently managing crop diseases holds immense potential for optimizing farming systems. A crucial aspect of this process is accurately identifying infection levels to enable targeted and effective disease treatment. Despite recent advancements, developing a reliable system for identifying and localizing crop diseases in complex, unstructured field environments remains challenging. Such a system requires extensive annotated data. This study comprehensively evaluates deep transfer learning techniques for identifying the degree of rust disease infection in Morocco's Vicia faba L. production systems. A vast dataset captured under natural lighting conditions and various crop growth stages was created to facilitate this research. Ten deep learning models were rigorously assessed through transfer learning, establishing a benchmark for this task. Deep transfer learning achieved high classification accuracy, with F1 scores consistently surpassing 90.0%. Training time for all models was reasonably short, under 2.5 hours. The NVIDIA Quadro P1000, known for its exceptional performance, was pivotal in achieving this outcome. The Neural Architecture Search-based model emerged as the top performer, achieving an impressive overall F1 score of 90.84%. Three models achieved F1 scores near or above 90.0%, highlighting the effectiveness of deep transfer learning for rust infection identification. This research illuminates the potential of deep transfer learning in detecting and diagnosing crop diseases, specifically rust infection in Vicia faba L. production systems. The findings contribute to developing robust disease management strategies, improving agricultural practices, and enhancing crop yield. | http://dx.doi.org/10.12785/ijcds/160195 | Deep Learning; Neural Architecture Search; Crop Diseases; CNN; Real-time Object Detection; Precision Agriculture | Hicham Slimani (Mohammed V University in Rabat, Morocco); Jamal El mhamdi (Ecole Normale Supérieure de l'Enseignement Technique, Morocco); Abdelilah Jilbab (Mohammed V University in Rabat, Morocco)
75 | 1570995558 | Optimizing Deep Learning Architecture for Scalable Abstractive Summarization of Extensive Text Corpus | Text processing plays a prominent role in dealing with the ever-growing volume of information available on the internet and digital platforms. Abstractive text summarization and categorization, in particular, aim to generate concise and coherent summaries by paraphrasing the source text while preserving its core meaning and context. This research work focuses on enhancing abstractive text summarization and categorization for large corpora through the application of a robust deep neural network architecture. With the increasing volume of information available, the need for efficient summarization techniques becomes critical. A pre-training strategy using diverse datasets is employed to improve the model's statistical performance and generalization capabilities. Furthermore, to address the challenge of information overload, an attention-based content selection mechanism is introduced, which highlights essential information from the source text to guide this process. The model's effectiveness is also extended to multi-document summarization, ensuring coherence across related documents. Performance is evaluated using various statistical metrics. To judge the novelty of the adopted strategy, benchmarking was carried out against several state-of-the-art existing frameworks. The obtained results demonstrate the significant potential of this approach in effectively summarizing large corpora and managing the overwhelming amount of textual data available. | http://dx.doi.org/10.12785/ijcds/160126 | Deep Learning; Corpus Processing; Computational Linguistics; Language Models; Text Classification | Krishna Dheeravath (Jawaharlal Nehru Technological University, Ananthapuramu, India); S Jessica Saritha (Assistant Professor, India)
76 | 1570995669 | Securing the Airwaves: A Survey on De-authentication Attacks and Mitigation Strategies | Wi-Fi networks, crucial for modern communication, are confronted with an escalating array of security challenges. Notably, de-authentication attacks emerge as formidable threats, involving the unauthorized expulsion of legitimate users from a Wi-Fi network. These attacks disrupt communication and may lead to unauthorized access by exploiting vulnerabilities in the communication protocols governing Wi-Fi networks. The susceptibility of these networks to malicious interference extends beyond disrupting communication; it poses a significant risk to the integrity of authentication methodologies, potentially facilitating unauthorized access. In the context of the Internet of Things (IoT), where seamless and secure connectivity is paramount, the implications of de-authentication attacks become particularly severe. This paper delves into a comprehensive analysis of de-authentication attacks, meticulously dissecting their various types and their resultant impact on wireless communication protocols, authentication methodologies, and the overall security posture of IoT devices. The exploration culminates in a forward-looking discussion of future trends and emerging threats within the Wi-Fi security landscape, aiming to provide valuable insights that guide ongoing efforts to fortify wireless networks against evolving cyber challenges. As technology advances, understanding and mitigating such security risks remains imperative for ensuring the robustness and reliability of Wi-Fi networks in the face of emerging cyber threats. | http://dx.doi.org/10.12785/ijcds/160196 | De-authentication; Mitigation Techniques; Wireless Networking; IoT Security; Network Security | Adwait Gaikwad and Balaji Patil (MIT-World Peace University, India)
77 | 1570995854 | Improving Sentiment Analysis using Negation Scope Detection and Negation Handling | Negation is one of the challenges in sentiment analysis and has an immense influence on how accurately text data can be classified. To find the accurate sentiments of users, this research identifies that the presence of polarity-shifting words and the removal of negative stopwords lead to flipped sentence polarity. To resolve these challenges, this research proposes a method for negation scope detection and handling in sentiment analysis. Negation cues (negative words) and non-cue words are classified using logistic regression. These cue and non-cue words, in addition to lexical and syntactic features, determine the negation scope (the part of the sentence affected by the cue) using a Machine Learning (ML) approach, namely Conditional Random Fields (CRF). Subsequently, in negation handling, the sentiment intensity of each token in a sentence is established, and affected tokens are processed to determine the final polarity. Sentiment analysis with negation handling and the calculated polarity yields accuracy increases of 3.61%, 2.64%, 2.7%, and 1.42% for Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and Naive Bayes (NB), respectively, on the Amazon food products dataset, and improvements of 7.64%, 5%, and 1.44% for Logistic Regression, Support Vector Machine, and Naive Bayes, respectively, on the electronics dataset. | http://dx.doi.org/10.12785/ijcds/160119 | Conditional Random Field; Decision Tree; Logistic Regression; Machine Learning; Naive Bayes; Support Vector Machine | Kartika Makkar, Pardeep Kumar and Monika Poriye (Kurukshetra University, India); Shalini Aggarwal (Shaheed Udham Singh Government College Matak Majri Indri, India)
78 | 1570996294 | Performance and Robustness Analysis of Advanced Machine Learning Models for Predicting the Required Irrigation Water Amount | The agricultural sector plays a pivotal role in ensuring global food security, particularly in light of significant population growth. The demand for food is increasing substantially, while crop production may not sufficiently meet these rising needs. Water scarcity is one of the main problems that poses a significant challenge to the agriculture sector, exacerbated by inefficiencies in traditional irrigation methods. Addressing this issue requires accurate prediction of the precise water requirements of plants. In this paper, we introduce various machine learning and deep learning models designed to assess the water needs of greenhouse plants using daily changes in air environment and soil data. Results indicate that the Multi-Layer Perceptron (MLP) model consistently outperformed the other models, demonstrating stability and efficacy across the various data optimization phases. Additionally, Machine Learning (ML) and Long Short-Term Memory (LSTM) models displayed commendable performance in different data optimization scenarios. Robustness is treated as a critical factor by analyzing the parameter sensitivity of each model; this analysis aids in understanding a model's robustness before deployment. The results reveal the superior robustness of ML models compared to Deep Learning (DL) models. This robustness stems from the limited number of parameters utilized in ML models, enhancing their reliability in comparison to the proposed DL models. | http://dx.doi.org/10.12785/ijcds/1601103 | Precision irrigation; Water amount prediction; Data-based optimization; Hyper-parameters tuning; DL time series; Sensitivity analysis | Hamed Laouz (University of Biskra, Algeria); Soheyb Ayad (University Med Khider of Biskra, Algeria); Sadek labib Terrissa (University of Biskra & LINFI Laboratory, Algeria); Aïcha-Nabila Benharkat (Université de Lyon, CNRS, INSA-Lyon, France); Samir Merdaci (University of El-Oued, Algeria)
79 | 1570996366 | Design of Time-Delay Convolutional Neural Networks (TDCNN) Model for Feature Extraction for Side-Channel Attacks | This work explores a novel method of Side-Channel Attack (SCA) profiling to address compatibility problems and bolster Deep Learning (DL) models. Convolutional Neural Networks (CNNs) are proposed in this research as a response to misalignment-based countermeasures. We discovered that CNNs enable end-to-end profiling attacks, where sensitive information can be extracted directly from raw data without any preprocessing. We are of the opinion that both dimensionality reduction approaches and realignment carry the danger of erasing valuable information from the data; indeed, to obtain a well-synchronized dataset, a realignment method modifies signals so that traces become somewhat comparable to one another. The term "Time-Delay Convolutional Neural Networks" (TDCNN) is more accurate than "Convolutional Neural Network" in this setting, though the latter remains acceptable: TDCNNs are convolution-based neural networks trained on one-dimensional spatial data, just like side-channel traces. However, given the surge in popularity of CNNs since 2012, when the CNN framework AlexNet won the ImageNet Large Scale Visual Recognition Challenge, a notable image recognition competition, the moniker TDCNN has been phased out of the DL literature. Currently, to build a TDCNN in the most widely used DL libraries, one employs characteristics of CNN design such as setting the number of input features to 1. | http://dx.doi.org/10.12785/ijcds/160127 | Deep Learning; Convolutional neural networks; Side Channel Analysis; Side Channel Attacks; Cryptography | Amjed Ahmed (Imam Kadhim (A) for Islamic Science University, Iraq); Mohammad Hasan, Shahrul Azman Noah and Azana Hafizah Mohd Aman (Universiti Kebangsaan Malaysia, Malaysia)
80 | 1570996508 | Microservices for Asset Tracking Based on Indoor Positioning System | Indoor positioning systems (IPS) are widely used for different use cases, most commonly asset tracking and indoor navigation. Asset tracking, for instance, can make industries more efficient in areas such as warehousing, stock recording, and guest tracking. Implementing asset tracking requires an IPS technique such as trilateration or fingerprinting. Accurate localization demands not only precision but also robust services that process the consumed data in near real time. Bluetooth low energy (BLE) is used to send the received signal strength indicator (RSSI) to a microservices-based server. To support this, a microservices architecture (MSA) is designed with the Service-Oriented Modeling and Architecture (SOMA) framework, which translates business goals into the necessary services. We implement and compare two MSA strategies, orchestration and choreography, on cloud computing with the Kubernetes platform. The strategies are compared to find the most resource-efficient one with the largest number of served requests: a larger number of served requests means more assets can be tracked in real time, and lower resource usage means the system is computationally inexpensive. The study finds that the choreography strategy is better for IPS, since it serves five times more requests with similar resource usage. | https://dx.doi.org/10.12785/ijcds/160162 | Indoor positioning system; Bluetooth low energy; Asset tracking; Microservices architecture; SOMA framework | Dondi Sasmita (Binus University, Indonesia); I Gede Putra Kusuma Negara (Bina Nusantara University, Indonesia)
81 | 1570996704 | Enhancing Image Clarity with the Combined Use of REDNet and Attention Channel Module | The objective of our research is to enhance the performance of image denoising, particularly in scenarios where data is scarce, such as the BSD68 dataset. The intricacy of model construction poses a barrier to attaining optimal outcomes with limited data. To tackle this difficulty, we provide a method that incorporates Channel Attention, Batch Normalisation, and Dropout techniques into the existing REDNet framework. Our investigation indicates enhancements in performance metrics, such as PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index), across various noise levels. At a noise level of 15, we obtained a PSNR of 34.9858 dB and an SSIM of 0.9371. At a noise level of 25, our tests yielded a PSNR of 31.7886 dB and an SSIM of 0.8876. At a noise level of 50, we achieved a PSNR of 27.9063 dB and an SSIM of 0.7754. The incorporation of Channel Attention, Batch Normalisation, and Dropout has been demonstrated to be crucial in enhancing the efficacy of image denoising: the Channel Attention module enables the model to selectively concentrate on crucial information within the image, while Batch Normalisation and Dropout provide stability and mitigate overfitting throughout the training process. Our research highlights the effectiveness of these three strategies and emphasises their integration as a novel way to address the constraints presented by data scarcity in image denoising tasks. This underscores the significant potential for creating dependable and effective image denoising methods when dealing with limited datasets. | http://dx.doi.org/10.12785/ijcds/160117 | Image Denoising; Deep Learning; Channel Attention Module; REDNet Model; Image Processing | Rico Halim and I Gede Putra Kusuma Negara (Bina Nusantara University, Indonesia)
82 | 1570996902 | Advancing Context-Aware Recommender Systems: A Deep Context-Based Factorization Machines Approach | Context-aware recommender systems (CARS) aim to offer personalized recommendations by incorporating user contextual information through analysis. By analyzing these contextual cues, CARS can better understand the preferences and needs of users in different situations, thereby improving the relevance and effectiveness of the recommendations they provide. However, integrating contextual information such as time and location into a recommendation system presents challenges due to the potential increase in sparsity and dimensionality. Recent studies have demonstrated that representing user context as a latent vector can effectively address these kinds of issues. In fact, models such as Factorization Machines (FMs) have been widely used due to their effectiveness and their ability to tackle sparsity and to reduce the feature space into a condensed latent space. In this article we introduce a context-aware recommender model called Deep Context-Based Factorization Machines (DeepCBFM). The DeepCBFM combines the power of deep learning with an extended version of Factorization Machines (FMs) to model non-linear feature interactions among user, item, and contextual dimensions. Moreover, it addresses certain limitations of FMs in order to improve the accuracy of recommendations. We implemented our method using two datasets that incorporate contextual information, each having distinct context dimensions. The experimental results indicate that the DeepCBFM model outperforms baseline models and validates its effectiveness. | http://dx.doi.org/10.12785/ijcds/160128 | Recommender systems; Context-Aware Recommender Systems; Factorization Machines; Context-Based Factorization Machines; Deep Learning; Deep Neural networks | Rabie Madani (Mohammed V University of RABAT Morocco, Morocco); Abderrahmane Ez-zahout (Mohammed V University of Rabat Morocco, Morocco); Fouzia Omary (Mohammed V University in Rabat, Faculty of Sciences, Morocco); Abdelhaq Chedmi (Mohammed V University Rabat, Morocco)
83 | 1570997042 | Analysis of Multi-Join Query Optimization Using ACO and Q-Learning | Query optimization, the process of producing an optimal execution plan for a query, is more challenging in distributed systems due to the huge search space of alternative plans incurred by distribution. In a continually updating environment, the query optimizer has to frequently adjust the optimal execution plan for a query. The number of join permutations grows exponentially as the number of related tables in the query grows, making the process of assessing the cost of candidate plans expensive. Optimizing the join operator in a relational database is the most difficult and complex task, and numerous strategies have been created to address these concerns. Improving the efficacy of query optimization, however, calls for a reinforcement learning model. The Ant Colony Optimization (ACO) algorithm and Q-Learning were proposed in the current research to address this issue and improve workload delay, optimization time, and cost. Q-Learning techniques are compared with Ant Colony Optimization and can be utilized to identify optimal queries with minimal workload delay and query cost. Compared to the Q-Learning algorithm, a non-dominated ACO algorithm can discover optimal queries and reduce query cost. | http://dx.doi.org/10.12785/ijcds/1601113 | Query optimization; Join query; Ant Colony Optimization; Reinforcement learning; query execution plan | Karthi Keyan and K Krishnaveni (Sri S. Ramasamy Naidu Memorial College, India); Dac-Nhuong Le (Haiphong University, Vietnam)
84 | 1570997350 | Disaster Event, Preparedness, and Response in Indonesian Coastal Areas: Data Mining of Official Statistics | Coastal areas are vulnerable to disasters such as tsunamis, floods, large waves, and hurricanes. Many studies on disasters in coastal areas were based on surveys for specific areas, but limited research explored the whole country. Applying data analytics for disaster management is critical to reducing the impact of disasters. This study aims to classify provinces based on disaster events and disaster preparedness and response capacity in coastal villages through cluster analysis, principal component analysis, and a combination of principal component analysis and cluster analysis. This secondary study applies data mining techniques to Indonesian official statistics. Data mining used the Python Scikit-learn and Tableau analytical software. The unit of analysis is all provinces of Indonesia as an archipelago country. The cluster analysis optimally produced two clusters with 6 (18%) and 27 (82%) provinces. The small cluster, named the high-intensity cluster, has a higher intensity of disaster events, preparedness, and response than the big one, named the low-intensity cluster. The big cluster has a higher percentage of coastal villages (25%) than the first (10%). The results of the principal component analysis were used to classify regions through geographic heat maps and scatter plots. Combining multiple principal component analysis and cluster analysis provides an alternative method to cluster analysis alone. The analysis produced three clusters with 6 (18%), 10 (30%), and 17 (52%) provinces. However, the cluster model from cluster analysis alone is better than the model from the combination of principal component analysis and cluster analysis. Therefore, cluster analysis and principal component analysis might be used independently, and both methods are complementary to exploring regional classification. The result of this study suggests an improvement in disaster preparedness and response for coastal villages, especially for provinces with a high percentage of coastal villages. | http://dx.doi.org/10.12785/ijcds/160120 | Seaside; Data analytics; Hazard; Rural; Tsunami; Sustainability | Gunawan Gunawan (University of Surabaya, Indonesia)
85 | 1570997356 | Retinal Image Quality Assessment Using Morphological Operations | Retinopathy of Prematurity (ROP) is a disease affecting newborn babies born preterm. The disease has five stages; stages IV and V are critical because if the disease is not diagnosed by stage III, when the vessels begin to grow abnormally, reversing it is not possible. Diagnosis and treatment are possible between stages I and III. In hospitals without eye specialists, a doctor can be instructed on how to capture a retina image, which is transmitted online to an ophthalmologist for disease diagnosis. Different devices produce images of varying qualities, and during transmission some image features can be lost. Some images are captured under poor lighting conditions, resulting in poor-quality images. This study proposes an algorithm which performs quality assessment of retina images before they are used to diagnose ROP stage II or III disease. The algorithm was developed and tested using Retinopathy of Prematurity data of 91 images available in the Kaggle database, with the objective of separating quality images from non-quality ones. The algorithm separated quality from non-quality retina images with 92.82% sensitivity, 96.98% specificity, and 97.31% accuracy. Performance evaluation was conducted by estimating the similarity measures of the Dice similarity coefficient (DSC) and Jaccard index (JI), producing agreeable indices of 94.81% DSC and 88.42% JI. | http://dx.doi.org/10.12785/ijcds/160181 | Algorithm; Retina image; Blood vessels; Retina vascular structure | Elizabeth Ndunge Mutua and Bernard Shibwabo Kasamani (Strathmore University, Kenya); Christoph Reich (Hochschule Furtwangen University, Germany)
86 | 1570997375 | Authentic Signature Verification Using Deep Learning Embedding With Triplet Loss Optimization And Machine Learning Classification | Many types of documents, whether financial, commercial, or judicial, require signatures to approve their authenticity. With the advancement of technology and the increasing number of documents, traditional signature verification methods face challenges in verifying signatures. The fields of machine learning and deep learning thus present promising solutions to address the limitations of traditional signature verification methods. In this paper, we combine CNN architectures, such as VGG16 and ResNet-50, for image embedding with triplet loss optimization, alongside machine learning classifiers such as Support Vector Machine, Artificial Neural Network, Random Forest, and XGBoost. In addition, hyperparameter tuning through grid search is implemented to improve the models' ability. The models are evaluated using the ICDAR-2011 dataset, written in the Latin (Dutch alphabet) script, and the BHSig260-Bengali dataset, written in the Bengali script. Both contain genuine and forged signatures, belonging to 64 individuals in the ICDAR dataset and 100 individuals in the Bengali dataset. In this research, the experiment is divided into two segments: without triplet loss optimization and with triplet loss optimization. Through the experiments, it is found that triplet loss optimization improves the VGG16 embedding model for the SVM classifier, increasing the AUC from 0.970 to 0.991. | http://dx.doi.org/10.12785/ijcds/160121 | Signature Verification; Signature Authentication; Image Embedding; Triplet Loss; Machine Learning Classifier | Andreas Christianto (Bina Nusantara University, Indonesia); Jovito Colin (Bina Nusantara, Indonesia); I Gede Putra Kusuma Negara (Bina Nusantara University, Indonesia)
87 | 1570997379 | YOLOv8 and Faster R-CNN Performance Evaluation with Super-resolution in License Plate Recognition | Automatic License Plate Recognition (ALPR) has become a widely used computer vision application applied in various fields. However, traditional ALPR methods face challenges as they require high-quality images from a fixed-angle camera to produce accurate results in recognizing License Plate (LP) characters. Weather conditions and unfavorable LP angles can lead to low-resolution images, causing inaccuracies in LP character recognition. LPR systems therefore need to adapt to a variety of captured image conditions, particularly low-resolution ones. To address this issue, past researchers have developed Super-resolution (SR) models capable of generating high-resolution images from low-resolution counterparts. In this study, we enhance the LPR system by incorporating SR, aiming to improve character recognition. The study comprises two phases: License Plate Detection (LPD) and License Plate Recognition (LPR). In the LPD phase, we utilize state-of-the-art object detection models, including Faster R-CNN using Detectron2 and YOLOv8. Our detection models perform well, especially YOLOv8, which achieves 93% accuracy in both the train and validation datasets, slightly decreasing to 90% on the test dataset. This outperforms Faster R-CNN, which achieves 71%, 71%, and 74%, respectively. In the recognition phase, we employ two approaches: Tesseract-OCR alone and Tesseract-OCR with SRGAN. The end-to-end pipeline achieves a Character Error Rate (CER) of 53.9% and a Levenshtein distance of 3.6% without SRGAN. When SRGAN is applied as a preprocessing step, the CER is reduced to 51.7% and the Levenshtein distance to 3.5%. This highlights the effectiveness of SRGAN in enhancing image quality and, consequently, improving the performance of OCR engines. The insights gained from this study can contribute to the development of robust license plate recognition systems for real-world deployment. | http://dx.doi.org/10.12785/ijcds/160129 | Super resolution; license plate detection; license plate recognition; YOLOv8; SRGAN | Diva Angelika Mulia, Sarah Safitri and I Gede Putra Kusuma Negara (Bina Nusantara University, Indonesia)
88 | 1570998035 | Navigating the Software Symphony: A Comprehensive Review of Factors and Strategies for Software Development in Startups | In the ever-evolving software industry, startups are catalysts of innovation, propelling transformative changes, and effective software development is pivotal for their success. The aim of this review is to reveal the success factors, challenges, and strategies underlying software development in startups. We conducted a systematic literature review by developing a classification schema, ranking the selected studies, and analyzing the factors, challenges, and strategies that enhance software development in startups. Only a few studies are dedicated to software development factors; from the primary studies, software startup success factors, strategies, and challenges are extracted, categorized, and analyzed. The review also examines the software project management process, underlining key factors such as Agile, Lean, customer centricity, and design thinking concepts in the software industry, information technology maturity models, and corporate culture. For startups to succeed in the shifting world of software development, it is advised that they find ways to overcome difficulties, identify success factors, investigate opportunities, and remain adaptable to new ideas. This study shows that a software startup's ability to embrace innovation, adaptability, and technology is vital for long-term growth and sustainable success. This comprehensive approach empowers startups to achieve excellence in the dynamic software industry landscape. | http://dx.doi.org/10.12785/ijcds/160130 | Software Startups; Software Sustainability; Software Project Management; Agile Lean Principles; Startup factors | Anitha Gracy J and Parthasarathy S (Thiagarajar College of Engineering, India); S. Sivagurunathan (Gandhigram Rural Institute-Deemed University, India)
89 | 1570998208 | An Overview of Using Mobile Sink Strategies to Provide Sustainable Energy in Wireless Sensor Networks | Wireless sensor networks (WSNs) can effectively address the sink-hole (hot-spot) problem of a static sink caused by multi-hop routing by collecting data with mobile sinks (MS). Designing the optimal path for a mobile sink, however, is a well-known NP-hard problem. The architecture's overall performance is determined by successful and efficient data transmission from the source nodes to the base station while reducing energy consumption and data loss. In WSNs, data collection is done via mobile sinks or static sinks (SS); however, MS-based data collection techniques are more effective than SS-based ones, even though MS-based methods have a number of shortcomings and restrictions of their own. In this work, we present a survey of path optimization techniques. We also give an overview of the different SS- and MS-based approaches for collecting data from a sensor network, the different kinds of MS-based data collection, and some of the difficulties they encounter. Lastly, we offer a level-based categorization of the various trajectory techniques employed to gather the data: at the first level, we divide the schemes into two categories, static and dynamic. | http://dx.doi.org/10.12785/ijcds/160158 | WSN; Path Planning; Mobile Sink; Energy efficiency | Hadeel Majid Lateef (Babylon University, Iraq); Ali Kadhum M. Al-Qurabat (University of Babylon & College of Science for Women, Iraq) |
90 | 1570998866 | Enhancing Robustness of Swarm Robotics Systems in a Perceptual Discrimination Task | The automation of tasks such as environmental monitoring, toxin detection, and mineral resource identification requires artificial agents with perceptual discrimination capabilities to identify the predominant features in environments much larger than their sensing range. The key challenge is developing collective decision-making methods that allow agents to predict a global perspective of the environment from local observations. Our research explores leveraging collective decision-making for binary perceptual discrimination tasks, using evolutionary computation techniques to synthesise an artificial neural network controller. We focus on strategies that generalise better to environments with patchy and clustered feature distributions. We investigate three communication strategies - close-neighbour, rand-neighbour, and far-neighbour - in which robots exchange opinions about the dominant colour of the environment based on the distance between sender and receiver robots. The results show that the rand-neighbour strategy significantly improves performance, particularly on unseen patchy patterns. An extensive analysis of the communication dynamics among the robots indicates that the effectiveness of the rand-neighbour strategy is attributable to its efficient circulation of opinions among both close and distant robots. Our findings support the hypothesis that primordial communication between one receiver robot and a randomly chosen emitter robot is sufficient to develop an effective collective decision-making strategy for swarms of robots engaged in perceptual discrimination tasks. | http://dx.doi.org/10.12785/ijcds/160189 | Evolutionary robotics; Swarm robotics; Collective decision-Making; Communication strategies | Rusul Ibrahim (University of Kerbala, Iraq); Muhanad Alkilabi (University of Kerbala, Belgium); Ali R. Hasoon (University of Kerbala, Iraq); Elio Tuci (University of Namur, Belgium) |
91 | 1570998948 | The Development of the Secure Quality Dataset (SQDS): Combining Security and Quality Measures Using Deep Machine Learning for Code Smell Detection | Code smells are an indication of deviation from design principles or implementation in the source code. Early detection of these code smells increases software quality through refactoring techniques that help developers maintain the software. Security is included as one of the requirements of software artifact quality in the ISO/IEC 25010 standard, and addressing security in the design phase is more efficient than after delivery of the software to the customer. This study aims to create a new dataset containing security metrics alongside quality metrics, helping software engineering researchers detect both a security illusion and the god class bad smell in a program at the same time. We take Fontana's god class dataset, which has 61 quality-metric features, and calculate the security metrics for its 74 software systems written in Java by programming a parser to analyze each system. We then apply five machine learning algorithms to the proposed dataset (SQDS) and compare the results using the accuracy metric. The experimental findings suggest that the proposed dataset demonstrates superior performance in identifying code smell security vulnerabilities and that augmenting the training data can improve prediction accuracy. Finally, we applied three deep learning models (RNN, LSTM, and GRU) to both the original Fontana god class dataset and our proposed SQDS dataset and compared the two. | http://dx.doi.org/10.12785/ijcds/160172 | Security Metrics; God Class bad smell; Quality metric; machine learning; Deep learning | Hiba Moneer Yahya, HM. (University of Mosul & Mathematics and Computer Science College, Iraq); Dujan Taha (College of Computer Science and Mathematics, Iraq) |
92 | 1570999085 | Artificial-Intelligence-Enhanced Beamforming for Power-Efficient User Targeting in 5G Networks | In the quest to optimize 5G networks, this study introduces an innovative Artificial Intelligence (A.I.)-based beamforming technique focused on power efficiency and signal integrity. By leveraging a machine learning algorithm, the base station (BS) conducts an omnidirectional scan to identify and direct beams towards the user equipment (UE) exhibiting the lowest possible power signature, optimizing the overall network's performance. Extensive simulations were conducted using a Uniform Linear Array (ULA) at 28 GHz with Quadrature Amplitude Modulation (QAM) to validate the process: the A.I. algorithm dynamically adjusted the beamforming weights, which were then applied to synthetic user signals to simulate real-world conditions. The results, validated through Bit Error Rate (BER), Throughput, Angle of Arrival (AOA), Direction of Arrival (DOA), and Array Response (AR) metrics, show that the A.I.-driven approach not only reduces power consumption but also maintains the user's signal fidelity with high precision. The A.I.'s decision-making process was thoroughly analyzed, showing its capability to fine-tune beam direction in the presence of noise and interference. The study concludes that A.I.-based steering towards the least power-intensive user not only functions adequately but also enhances overall network efficiency and reliability. | http://dx.doi.org/10.12785/ijcds/160179 | A.I; Beamforming; 5G; Throughput; BER | Yousif Maher Alhatim (University of Ninevah, Iraq); Ali Othman Al Janaby (Ninevah University, Iraq & Electronics Engineering, Iraq) |
93 | 1570999389 | Cascaded Fuzzy Analytics Based Model for Determining Rental Values of Residential Properties | The world's property marketplace continues to experience enormous growth in infrastructure geared towards enhancing the quality of neighborhoods, such as physical landscaping and aesthetics, which has pushed rental values above reasonable bounds. The practice of ascertaining the market value of properties relies on underlying key characteristics, especially in cities across the globe. Moreover, the rental value of a property varies from place to place on the basis of these characteristics (or factors). Studies are ongoing to determine the best factors needed to accurately arrive at appropriate market and rental values for properties. This study proposes a cutting-edge approach based on cascaded fuzzy logic controls to pair up distinct property characteristics identified by various professionals and in the literature. A housing dataset was collected and used to construct the membership functions and the inference engine and to validate the proposed property rental value model. The outcomes revealed that the cascaded fuzzy analytics model behaved inversely to the regression model: its minimal MSE (0.05628) supported good prediction of residential property values compared with the regression model (R = 0.7320), whose value must be close to 1 to be a good estimate. The proposed cascaded fuzzy analytics model (0.05628) was also an improvement over the regression model (0.09619) in terms of MSE and standard error of estimation. These results reveal the capability of the proposed model to determine residential property prices at a lower error rate than statistical inference approaches such as regression estimation models. | http://dx.doi.org/10.12785/ijcds/1601123 | Rental Values; Property; Fuzzy Analytics; Determinants; Fuzzy | Jasim Mohammed Dahr and Alaa Sahl Gaafar (Directorate of Education in Basrah, Iraq); Alaa Khalaf Hamoud (University of Basrah, Iraq) |
94 | 1570999811 | An Efficient Failure Predictive and Remediation System for Windows Infrastructure with Analysis of Log-Event Records | The demand for IT infrastructures has grown due to their importance in business and everyday life. Downtime due to the unavailability of any IT infrastructure component is undesirable. Ensuring IT infrastructure's continuous availability and stability is crucial for organizations to prevent downtime and its associated consequences. Thus, prompt failure detection, analysis of underlying causes, and corrective measures are vital. IT infrastructure logs register every detail of the executed operations and provide rich, multidimensional information about them. Therefore, the research field of IT infrastructure failure detection and prediction using log analysis techniques is gaining prominence. The proposed method uses a BERT pre-trained model-based semantic analysis framework and an attention-based OLSTM classification model. Furthermore, the remediation model delivers failure notifications to the system administrator on the dashboard and the registered email ID, along with potential solutions to address the issue and mitigate the failure of IT infrastructure components. The effectiveness of the developed prediction and remediation system was evaluated on a real-time Windows infrastructure by implementing a proof of concept. In this process, the trained model was utilized to analyse newly generated log entries and forecast potential failure situations. Consequently, a remediation strategy was applied in order to address the problem and prevent downtime effectively. The integration of automatic failure detection and prediction using IT infrastructure logs has the potential to become a routine practice in IT infrastructure monitoring. The suggested remediation approach shows promise in being widely adopted for timely failure mitigation, resulting in reduced downtime.
| http://dx.doi.org/10.12785/ijcds/1601127 | Log analysis; System log; IT Infrastructure; Deep Learning; BERT; POC | Deepali Arun Bhanage (Pimpri Chinchwad College of Engineering, India); Ambika Vishal Pawar (Persistent University Persistent Systems Pune India, India); Aparna Joshi (PCETs PCCOE Pune, India); Rajendra Pawar (MIT Art Design and Technology University Pune India, India) |
95 | 1570999843 | A Secure Self-Embedding Technique for Manipulation Detection and Correction of Medical Images | The protection of medical images transmitted through e-healthcare systems is critical. Medical image watermarking has emerged as a trustworthy way to authenticate medical information during transmission. This paper presents a secure self-embedding scheme for the detection and correction of tampering in medical images. The proposed scheme involves two decomposition and dimensionality reduction techniques: singular value decomposition and learned sparse decomposition. First, the color medical image is transformed into the YCrCb color space and the luminance plane is chosen. To create the watermark, the medical image is automatically classified into a region of interest (ROI) and a region of non-interest (RONI), and the ROI is encoded by sparse decomposition with a learned BPDN dictionary. The sparse watermark is then hidden in the singular values of the host part of the image. Quantitative and qualitative results show that the proposed method is robust against numerous aggressive and geometric distortions without compromising the quality of the original medical image. The proposed algorithm yields a high PSNR, larger than 45 dB, for all types of images, as well as high NC values under all types of attacks. The presented system performs better than existing state-of-the-art techniques and could be helpful for e-healthcare systems. | http://dx.doi.org/10.12785/ijcds/160140 | Color medical images; Automatic Segmentation; Self-Embedding; Manipulation Detection and Correction; Encryption | Afaf Tareef (Mutah University, Jordan) |
96 | 1571000494 | Overview of Medical Image Segmentation Techniques through Artificial Intelligence and Computer Vision | Medical image segmentation is a crucial task in computer vision, playing a pivotal role in applications such as diagnostics, treatment planning, and medical research. The present study explores a wide range of methodologies employed in the field of medical research to achieve image segmentation. These techniques range from traditional approaches based on thresholding, edge detection, region-based methods, and clustering to modern artificial intelligence methods, particularly deep learning. The strengths and limitations of each method are thoroughly examined. This paper focuses on analyzing various architectures used for medical image segmentation, specifically evaluating their performance. It aims to delve deeply into the different segmentation methods, offering a comparative perspective on their effectiveness. Furthermore, this document covers the most recent technological progress in segmentation, emphasizing major breakthroughs capable of transforming the precision and productivity of medical image analysis. Through an exhaustive compilation and detailed critique of the results obtained with a range of segmentation strategies, the study presents the outcomes of multiple approaches, accompanied by an in-depth analysis of the strengths and weaknesses inherent to the various techniques applied to medical image segmentation. This research enhances the comprehension of how these methods can be applied within the medical sector, especially in the area of computer vision. | http://dx.doi.org/10.12785/ijcds/160183 | Segmentation; Computer vision; Medical image; Machine learning; Computed Tomography; Deep learning | Sabbar Hanan (LAROSERIE, Morocco); Silkan Hassan (Université Chouaib DOUKKALI, Morocco); Khalid Abbad (University of sidi Mohamed ben Abdellah FEZ, Morocco); El Mehdi Bellfkih and Imrane Chems eddine idrissi (Hassan II University, Morocco) |
97 | 1571001004 | Predicting Microvascular Complications in Diabetic Mellitus Using Improved Enhanced Coati Optimizer | Objective: Diabetes complications are classified as macro- and microvascular diseases. Microvascular complications in type 2 diabetic patients commonly occur as diabetic retinopathy, diabetic neuropathy, and diabetic nephropathy; detecting these microvascular complications from clinical datasets is therefore very important. Method: In this paper, a machine learning model is proposed for predicting and detecting microvascular diseases in type 2 diabetic patients. The initial stage is data preprocessing. After preprocessing, feature selection is carried out using the Improved Enhanced Coati Optimization algorithm, and the optimal features it selects are applied to various classification algorithms. The reason for applying this feature selection algorithm to various models is to check its performance with traditional classifiers; hence, model performance is compared across the XGB, KNN, SVM, RF, AdaBoost, Tree, and ANN algorithms. Findings: For the classification of diabetic retinopathy, the selected features are age, sex, BMI, BP, FPS, family history, and medical adherence. Similarly, the features selected to classify diabetic nephropathy are sex, SP, FPS, family history, onset age, and HbA1C, while FPS is used to classify diabetic neuropathy. With the optimal features selected, the various ML classification algorithms are applied and the results are compared across XGB, KNN, SVM, RF, AdaBoost, Tree, and ANN. Measured in terms of training and testing accuracy, the Random Forest classifier with the AdaBoost estimator gives optimal results for type 2 diabetic patients: 99.9% and 94.78% for diabetic retinopathy, and 99.8% and 95.44% for diabetic nephropathy and diabetic neuropathy. Novelty: In the proposed methodology, the feature-selection fitness function is chosen based on the optimal accuracy obtained from the AdaBoost feature-selection estimator. In the Coati Optimizer, feature selection is carried out by selecting a fitness function that provides the minimum error. | http://dx.doi.org/10.12785/ijcds/1601110 | Enhanced Coati Optimizer; Feature Selection; Microvascular Complications; Machine learning classification algorithms; Bio-inspired algorithms | Mayuri Kulkarni (Research Scholar, Department of Computer Engineering, SSVPs's B.S. Deore College of Engineering, Dhule K.B.C. North); Shailesh Shivaji Deore (SSVPSs Bapusaheb Shivajirao Deore College of Engineering Dhule Maharashtra, India); Chin-Ling Chen (Chaoyang University of Technology, Taiwan); Khalaf Ibrahim Osamah (Al-Nahrain University, Iraq); Mueen Uddin (University of Doha for Science and Technology Qatar, Qatar); Abdulsattar Abdullah Hamad (University of Samarra, Iraq); Yong-Yuan Deng (Chaoyang University of Technology, Taiwan) |
98 | 1571001034 | Pneumonia Medical Image Classification Using Convolution Neural Network Model AlexNet & GoogleNet | Pneumonia is one of the deadliest diseases in the world. Diagnosis of pneumonia is done with the help of CT-scan image analysis of the chest, usually by a pulmonary specialist. The availability of pulmonary specialists is still limited, especially in underdeveloped, outermost, and frontier (3T) areas. In addition, manual analysis still faces the possibility of errors. The use of artificial intelligence technology is expected to overcome these problems. The purpose of this study is to obtain pneumonia classification results using the CNN algorithm with the AlexNet and GoogleNet models. The tool used in this research is Python. The image dataset, obtained from the Kaggle repository, comprises 5856 images. The stages of this research consist of data preparation, in which the data is preprocessed and split, followed by the CNN stage with the AlexNet and GoogleNet architectures. The training data is 90% of the data, or 5270 images, and the testing data is 10%, or 586 images. Model training is carried out for 20 iterations so that the model can recognize the images more accurately. After training, the model is tested on the test data, and the results are displayed in a confusion matrix. The accuracy values obtained with the AlexNet and GoogleNet architectures are then compared: AlexNet achieves 96%, while GoogleNet achieves 94%. From these results, it can be concluded that the AlexNet architecture has the highest accuracy at 96%. | http://dx.doi.org/10.12785/ijcds/1601124 | AlexNet; GoogleNet; Pneumonia; Classification | Rio Subandi (Universitas Ahmad Dahlan, Indonesia); Herman Herman (Universitas Ahmad Dahlan, Yogyakarta, Indonesia); Anton Yudhana (Universitas Ahmad Dahlan, Indonesia) |
99 | 1571001053 | Application of optimized Deep Learning mechanism for recognition and categorization of retinal diseases | Retinal disorders are among the most common eye problems, and their complications affect the eyes. In some cases, a retinal disease causes no symptoms or only mild vision impairment, yet it can ultimately lead to blindness, so early recognition of symptoms can help avoid vision loss. Routine screening is one method for early diagnosis of retinal disease. Another common way to identify retinal disease is to have an expert evaluate and rate eye photographs for the presence and severity of the illness. Unfortunately, in many parts of the world where retinal disease is common, the medical specialists capable of recognizing DR are scarce. Hence, a novel optimized African Buffalo based deep Convolutional Neural Network (AB-DCNN) deep learning model is introduced in this article, which can detect retinal disorders at an early stage from fundus retinal image datasets and classify their stages. The proposed mechanism can detect diseases such as Central Serous Retinopathy (CSR), Age-related Macular Degeneration (AMD), Diabetic Retinopathy (DR), and Macular Hole (MH) and classify their stages as severe, moderate, mild NPDR, PDR, and normal cases. Depending on the clinical importance, the impact of uncertainty on system performance and the relation between explainability and uncertainty are examined; the uncertainty evidence makes the system more reliable for use in clinical environments. The proposed methodology increases operational speed, lessens the computation time of the algorithm, reduces losses, and enhances classification accuracy. | http://dx.doi.org/10.12785/ijcds/160168 | Retinal disease; Deep learning; African Buffalo optimization; Classification | Dhafer Alhajim (University of Al-Qadisiyah, Iraq); Ahmed Mohammed Al-Shammari (University of Al-Qadisiyah, Computer Center, Al Diwaniyah, Iraq); Ahmed Kareem Oleiwi (Zhengzhou University, Iraq) |
100 | 1571001387 | 5G Mobile Communication Performance Improvement with Cooperative-NOMA Optimization | 5G mobile cellular networks are expanding rapidly, and with the quick development of many new services and mobile applications, heavy consumption of frequency and bandwidth resources is anticipated in upcoming cell-phone networks; as a result, networks suffer from low speed and high latency. Cooperative Non-Orthogonal Multiple Access (C-NOMA) is an approach that meets various needs: improved user fairness, high reliability, high spectral efficiency (SE), massive connectivity, higher data rates, high flexibility, low transmission latency, higher cell-edge throughput, and superior overall performance. This paper mainly focuses on power-domain NOMA (PD-NOMA), which employs successive interference cancellation (SIC) at the receiver and superposition coding (SC) at the transmitter. The paper also compares C-NOMA, NOMA, and OMA under different types of environmental fading, and shows how NOMA performance can be improved when combined with proven wireless communication network strategies, including cooperative (C-NOMA) communications, with the help of optimization. Simulation results obtained with MATLAB and NYUSIM demonstrate the enhancements of cooperative NOMA compared to non-cooperative NOMA and OMA. The results also demonstrate that the improvement remains valid as the signal bandwidth increases and as the near- and far-user distances vary. | http://dx.doi.org/10.12785/ijcds/1601119 | 5G; NOMA; Cooperative NOMA; Fading channel; mmWave; Optimization | Salar Ismael Ahmed (Erbil Polytechnic University, Iraq); Siddeeq Y. Ameen (Duhok Polytechnic University, Iraq) |
101 | 1571001695 | Quantifying Breast Cancer: Radiomics, Machine Learning, and Dimensionality Reduction for Enhanced Image-Based Diagnosis | Radiomics allows for measuring tumor heterogeneity, discovering prognostic biomarkers, and enabling early detection and diagnosis, and it can be combined with machine learning to improve clinical decision-making. Radiomics is essential for obtaining quantitative characteristics from medical images, such as those acquired from radiological scans like MRI, CT, or PET. These characteristics include many qualities such as shape, texture, intensity, and spatial relationships within the images. Radiomics is crucial for feature extraction, turning medical images into quantitative data that capture detailed aspects of tissue architecture and physiology. The identified traits could significantly transform clinical decision-making in oncology and other areas. This study aims to enhance existing breast cancer diagnostic techniques by utilizing radiomics to detect the disease at an early stage. We intend to enhance diagnostic accuracy by applying machine learning models and dimensionality reduction approaches to radiomics characteristics. We present a new technique that integrates dimensionality reduction with machine learning algorithms to examine radiomics characteristics collected from breast cancer images, improving early breast cancer detection. The proposed method is comprehensively evaluated, showing significant enhancements in diagnostic accuracy for early-stage breast cancer compared to conventional methods. The proposed model has an accuracy of 88.72% compared to recent works, as mentioned in Table 3. The results suggest that radiomics-based techniques could enhance breast cancer screening by identifying subtle imaging indicators. | http://dx.doi.org/10.12785/ijcds/1601114 | Radiomics Features; Breast Cancer Detection; Digital Image Processing; Machine Learning | Zulfikar Ali Ansari (Integral University, India); Manish Madhava Tripathi (Integral University, Lucknow, India); Rafeeq Ahmed (Government Engineering College West Champaran, India) |
102 | 1571001915 | A Hybrid ROI Extraction Approach for Mask & Unmask Facial Recognition System using Light-CNN | In recent years, deep learning-based algorithms have been extensively employed and tested in a variety of real-world applications, and their efficacy has been thoroughly examined in practical settings. In this paper, CNN-based deep learning approaches are utilized to recognize faces in real time and identify faces with and without masks. We employ pre-trained algorithms (YOLOv2 and SSD) to identify people wearing a face mask, which enables a machine to perform recognition tasks while evolving through a learning method. If there is more than one person in the scene, the one with the maximum score is selected for classification. Thus, a hybrid approach that combines the YOLOv2 and SSD algorithms working in parallel is developed for masked-face extraction. Likewise, the Viola-Jones algorithm is used to detect faces without masks and randomly select a single region of interest (ROI) to be stored for classification. All pre-processing algorithms work separately in parallel to crop the ROI and store images for the training and testing datasets. We then develop a CNN model with lightweight computational complexity to identify whether the selected person's face is wearing a mask or not. The dataset contains numerous variations in appearance and viewpoint to capture different scenarios with and without masked faces. On average, the proposed face mask detection architecture achieves a recall and F1 score of 98.3 and 98.31, respectively. Training performance, meanwhile, improves by 19.7% in training time and 95.9% in storage space (model size) compared to AlexNet. The presented framework is an efficient mask/no-mask face detector and can be employed as a robust medical assistant in the healthcare sector for automated tracking of whether a patient, visitor, or staff member is wearing a mask. | http://dx.doi.org/10.12785/ijcds/160190 | Hybrid ROI extraction; deep learning; CNN; face mask recognition; YOLOv2; SSD | Ahmed Ahmed and Faris S. Alghareb (Ninevah University, Iraq) |
103 | 1571002549 | Leaf Condition Analysis Using Convolutional Neural Network and Vision Transformer | Plants play an essential role in human survival, from being the primary source of oxygen to being a vital supply of dietary ingredients, and they maintain the ecosystem's general equilibrium, particularly in the food chain. Diseases cause plants to deteriorate in quality. Many botanists and domain experts research ways to prevent plants from getting infected and to preserve their quality using computer vision and image processing on leaf images. The quality of the image collection provides substantial value for the classification model in identifying leaf diseases. Nevertheless, leaf disease image datasets are very scarce, and since model performance is determined by the overall quality of the dataset, this can compromise the predictive models. Besides, existing leaf disease detection programs do not provide an optimized user experience: although users may receive a program with excellent interactive features, the backend algorithm is not optimized, which may discourage them from applying the program to solve plant disease problems. In this paper, contrast boosting, sharpening, and image segmentation are used to prepare an unprocessed leaf disease image dataset. Through the use of a hybrid deep learning model that combines a vision transformer and convolutional neural networks for classification, the algorithm is optimized. The model's performance is evaluated and compared with other methods to ensure quality and usage compatibility in the plantation domain, and the training and validation performance is represented in graphs for better visualization. | http://dx.doi.org/10.12785/ijcds/1601125 | Plant Disease Identification; Deep Learning; Vision Transformer; Convolutional Neural Network | Yong Wai Chun, Kok-Why Ng, Su Cheng Haw and Naveen Palanichamy (Multimedia University, Malaysia); Seng Beng Ng (Universiti Putra Malaysia (UPM), Malaysia) |
104 | 1571002733 | An Examination of the Security Architecture and Vulnerability Exploitation of the TurtleBot3 Robotic System | This paper conducts a comprehensive security analysis of the TurtleBot3, a widely utilized robot in education and light-duty industrial applications, recognized for its cost-effectiveness and flexibility. Given its connectivity, the TurtleBot3 is susceptible to cyber threats, a concern that this study addresses by identifying and exploiting its security vulnerabilities. Through an extensive examination, the research uncovers that weak authentication protocols and insufficient access controls can be exploited by attackers to gain unauthorized control over the robot. Such breaches enable malicious actors to alter the robot's operations, access confidential information, and initiate further attacks within its network. The findings of this study underscore the critical need for robust cybersecurity measures in robotics, highlighting the potential risks posed by these vulnerabilities. Moreover, the paper proposes a set of countermeasures and protective strategies designed to fortify the TurtleBot3 against cyber threats. These recommendations aim to enhance the robot's security framework, ensuring a safer use in various sectors. By addressing these cybersecurity challenges, the research emphasizes the significance of integrating security considerations in the development and deployment of robotic systems, offering valuable insights for developers, users, and policymakers involved in the field of robotics and automation. This research not only illuminates the vulnerabilities within the TurtleBot3 system but also paves the way for developing more secure and resilient robotic platforms in the future. 
| http://dx.doi.org/10.12785/ijcds/1601118 | Robotics; Robotic Security; Vulnerability Assessment; Penetration Testing; Security Assessment; TurtleBot3 | Yash Patel and Parag Rughani (National Forensic Sciences University, India); Tapas Kumar Maiti (Dhirubhai Ambani Institute of Information and Communication Technology, USA) |
105 | 1571002785 | QR Shield: A Dual Machine Learning Approach Towards Securing QR Codes | Quick Response (QR) codes, extensively employed due to their compatibility with smartphone technology and the technological advances of QR code scanners, have become a crucial aspect of modern life. With the ever-increasing adoption and utilization of QR codes in various real-life contexts, finding effective and efficient security mechanisms to maintain their integrity has become paramount. Despite their popularity, QR codes have been exploited as potential attack vectors through which attackers encode malicious URLs. Such attacks have become a critical concern, necessitating effective countermeasures to mitigate them. In response to this pressing issue, this research paper introduces QR Shield, a dual machine learning-based model designed to address the security vulnerabilities inherent in QR codes. Following extensive testing of multiple machine learning algorithms, two of the most promising algorithms have been integrated into this model, namely the Random Forest (RF) and XGBoost algorithms. The QR Shield employs these sophisticated machine learning algorithms to accurately identify and detect malicious URLs embedded within QR codes, utilizing a benchmark dataset of URLs. Through rigorous evaluation using four key metrics, the effectiveness of the QR Shield is demonstrated, with experimental results showcasing an impressive accuracy rate of 96.8%. Based on these outcomes, the QR Shield exhibits a high potential to detect malicious URLs embedded within QR codes, which confirms the ability to generalize the proposed QR Shield to various real-life domains and applications. 
Additionally, the present study contributes significantly to the broader field of QR code security by offering comprehensive insights into the efficacy of supervised machine learning models in enhancing QR code security and privacy, thus paving the way for future advancements in this critical area of research. | http://dx.doi.org/10.12785/ijcds/160164 | Cybersecurity; Supervised Learning; Machine Learning Models; QR Code Security; Malicious URL; Experimental Study | Hissah Almousa (Qassim University, Saudi Arabia); Arwa Almarzoqi (Qassim University & College of Computer, Saudi Arabia); Alaa Alassaf, Ghady Alrasheed and Suliman Alsuhibany (Qassim University, Saudi Arabia) |
106 | 1571004729 | An Adaptive Load-Balancing Approach for Sharded Blockchains | Blockchains were introduced as an innovative means of storing and processing transactions securely and in a decentralized manner. They function by recording transactions in a sequential chain of blocks, wherein each block incorporates the cryptographic hash of its preceding block. However, the emergence of blockchains as a way to organize and protect user data across the internet has come with some concerns, mainly how to deal with the issue of scalability while still maintaining the security standards as well as the decentralized nature inherent to blockchains. Many different implementations have been offered. This paper investigates the concept of sharding, which offers a promising solution by partitioning a blockchain into smaller clusters to optimize performance through efficient load balancing. First, we explore the existing literature on the subject and related algorithms. We then provide a detailed explanation of the functioning of the existing centralised and decentralised algorithms, as well as the one proposed in this work. Next, we describe the settings and conditions of the simulation environment, covering both data collection and preparation. Finally, we present the results obtained, a comparative analysis of the tested algorithms, and an overview of possible future endeavours in the advancement of load-balancing algorithms for sharded blockchains. | http://dx.doi.org/10.12785/ijcds/1601101 | Blockchain; Sharding; Consensus Protocol; Load Balancing | David Halim Daou and Ramzi A. Haraty (Lebanese American University, Lebanon) |
107 | 1571005250 | Baseline model for deep neural networks in resource-constrained environments: an empirical investigation | This paper presents an empirical study on advanced Deep Neural Network (DNN) models, with a focus on identifying potential baseline models for efficient deployment in resource-constrained environments (RCE). The systematic evaluation encompasses ten state-of-the-art pre-trained DNN models: ResNet50, InceptionResNetV2, InceptionV3, MobileNet, MobileNetV2, EfficientNetB0, EfficientNetB1, EfficientNetB2, DenseNet121, and Xception, within the context of an RCE setting. Evaluation criteria, such as parameters (indicating model complexity), storage space (reflecting storage requirements), CPU usage time (for real-time applications), and accuracy (reflecting prediction truth), are considered through systematic experimental procedures. The results highlight MobileNet's excellent trade-off between accuracy and resource requirements, especially in terms of CPU and storage consumption, in experimental scenarios where image predictions are performed on an RCE device. Consequently, MobileNet emerges as a suitable baseline model for future DNNs developed specifically for RCE image classification. The study's conclusions endorse MobileNet as a baseline model for transfer learning techniques (used in DNN design), providing valuable insights for optimizing DNN models in resource-constrained scenarios. This approach enhances the creation of efficiency-focused and lightweight DNN models, improving their application and efficacy in resource-constrained environments. Future research will leverage the identified MobileNet model as a foundation to create a new DNN model tailored for efficiency-driven image classification applications in RCE devices. 
| http://dx.doi.org/10.12785/ijcds/160184 | Baseline Model; Deep neural network; Image classification; Optimization model; RCE | Raafi Careem (Uva Wellassa University, Sri Lanka); Md Gapar and Dr Abdol Ali Khatibi (Management and Science University, Malaysia) |
108 | 1571006294 | Maternal Dyslipidemia During Pregnancy Correlates with Elevated Lipid Levels in One-Year-Old Infants | Developmental disorders like autism spectrum disorder (ASD) can result from differences in children's brains. Neurodevelopmental problems are exacerbated by a confluence of genetic, environmental, and prenatal risk factors associated with ASD. Repetitive behaviors and deficiencies in social communication are among the early indicators in children. Although gestational risk factors are not the cause of ASD, they can impact how affected children interact with others. On the other hand, addressing these risk factors might favorably influence the course of ASD, and pregnant women can take part in interventions. The development of autism later in life is linked to changes in lipid levels at birth. Dyslipidemia, characterized by abnormal cholesterol and triglyceride levels, is more common in individuals with ASD than in their healthy siblings or unrelated controls. However, the specific predictive value of blood lipid profiles and the key markers for dyslipidemia associated with ASD remain unclear. This paper explores the influence of infant lipid levels on the development of dyslipidemia associated with ASD, considering gestational risk factors for mothers. A machine learning model is constructed using combined parental and childhood lipid levels to predict ASD. The model is then validated using independent cohorts and tested against lipid profiles from infancy. Various statistical approaches designed for biomarker discovery in Electronic Health Records (EHR) data are applied to achieve these objectives. | http://dx.doi.org/10.12785/ijcds/1601104 | Maternal dyslipidemia; Gestational dyslipidemia; Lipid levels; Early childhood development; Infant health; Longitudinal study | Yugandhar Bokka (GIET University, India); R N V Jagan Mohan (Sagi Rama Krishnam Raju Engineering College, India); M Chandra Naik (GIET University, India) |
109 | 1571006938 | Outlier Handling in Clustering: A Comparative Experiment of K-means, Robust Trimmed K-means, and K-means Least Trimmed Squared | The presence of outliers in data often leads to unsatisfactory modeling outcomes, especially when employing clustering algorithms for population segmentation and behavioral analysis. While various outlier-resilient clustering algorithms like DBSCAN, LDOF, t-SNE, and others exist, one of the most renowned algorithms, k-Means, still faces challenges in effectively handling outliers. This paper proposes an optimization of the k-Means algorithm that is resilient to outliers by incorporating the Least Trimmed Squares technique as post-processing, referred to as k-Means LTS. The outlier trimming occurs after the grouping process, allowing trimming within each cluster. This algorithm is compared with ordinary k-Means and with Robust Trimmed k-Means, also known as RTKM, which likewise employs outlier trimming. The comparison of these three algorithms considers performance metrics, clustering results, and running time. The contribution of this research lies in the enhanced optimality of the k-Means LTS algorithm, which outperforms the other two algorithms across all comparison parameters. By utilizing this algorithm, the presence of outliers within each cluster can be more easily explained, and the running time is notably shorter compared to RTKM. As a result, the proposed k-Means LTS algorithm consistently proves to work better than ordinary k-Means and RTKM when implemented across ten datasets of varying types. | http://dx.doi.org/10.12785/ijcds/160175 | Clustering; Least Trimmed Squares; K-Means; Robust clustering; Noisy data; Outliers | Tricia Estella (Bina Nusantara University, Indonesia & BINUS Graduate Program, Indonesia); Nadzla Andrita Intan Ghayatrie (Bina Nusantara University & BINUS Graduate Program, Indonesia); Antoni Wibowo (Bina Nusantara University & Jakarta, Indonesia) |
110 | 1571007773 | Investigating the Relationship Between Personality Traits and Information Security Awareness | This study delves into the crucial intersection of personality traits and information security behaviors in an era of increasing technological reliance. Using a quantitative approach, we explore the correlation between the Big Five Personality Traits (BFI) and the Knowledge-Attitude-Behavior (KAB) components related to information security awareness. Our study, which involved 311 undergraduate students chosen through stratified random sampling, uses Spearman correlation analysis and logistic regression modeling to examine correlations between personality traits from the BFI and information security risk status. The findings reveal significant correlations, particularly highlighting the roles of neuroticism (33.33%), lack of direction (16.67%), extraversion (16.67%), and antagonism (16.67%) in increasing susceptibility to security risks. The logistic regression model demonstrates 85.7% accuracy, indicating its effectiveness in correlating personality traits with information security behaviors. The study underscores the importance of considering individual personality profiles in cybersecurity strategies. By understanding the interplay between personality traits and security behaviors, organizations can effectively develop targeted interventions to enhance information security awareness and resilience. These findings provide a nuanced understanding of the psychological factors shaping cybersecurity attitudes and behaviors. Also, these findings have significant implications for crafting targeted cybersecurity awareness programs, suggesting that integrating personality traits into these initiatives could promote cyber-secure behavior more effectively. This research adds valuable insights to information security, emphasizing the need for a more personalized approach to awareness strategies and future research to explore this relationship further. 
| http://dx.doi.org/10.12785/ijcds/160191 | BFI characteristics; Cybersecurity; Information Security; Personality Factor | January Naga, Mia Amor C Tinam-isan, Melody Mae Maluya, Kaye Antonnette Panal and Ma. Tanya Tupac (MSU-Iligan Institute of Technology, Philippines) |
111 | 1571008087 | HandloomGCN: Real-time handloom design generation using Generated Cellular Network | Handloom design creation, deeply rooted in cultural heritage, has traditionally relied on manual craftsmanship. Individual designers are conditioned by their own biases, and combining the aesthetics of non-handloom designs with handloom designs is a difficult task. This paper explores an innovative approach that fuses deep convolutional neural networks with cellular automata to generate handloom designs, automating and enhancing this intricate process. The output is further processed with a higher-resolution network. The fusion network works on higher levels of the feature pyramid, managing the image layout at the texture level. We implemented the approach with different weight ratios to generate the outcomes. The method also averts over-excitation artifacts and reduces implausible feature mixtures compared to previous approaches. It generates adaptable results with improved visual effects. Unlike existing methods, the combined system can match and fit local features with considerable variability while still yielding coherent results. The outcomes show the potential of this fusion in pushing the boundaries of design innovation in the field of handloom textiles. Qualitative and quantitative experiments demonstrate the superiority of the introduced method over existing approaches. The work establishes a comprehensive benchmark for comparison and yields a new publicly accessible "HandloomGCN" dataset of handloom clothes for this research field. | http://dx.doi.org/10.12785/ijcds/1601128 | Handloom Design; Deep convolutional Neural Network; Generated cellular Network; High-Resolution Network; Texture; Weight Ratio | Anindita Das (Assam Downtown University, India); Aniruddha Deka (Assam down town University, India) |
112 | 1571008589 | Using a Grey Wolf Optimization and Multilayer Perceptron Algorithms for an Anomaly-Based Intrusion Detection System | The swift development of information technology has led to an increase in the total number of electronic devices linked to the Internet, and with it an increase in network attacks. Accordingly, it is crucial to create a defense system capable of identifying novel attack types. An intrusion detection system (IDS) is the most effective such defense, monitoring and analyzing network packets to spot any unusual activity. However, network packets contain many useless and redundant features that hurt the IDS's performance and consume excessive resources. Choosing a suitable feature selection technique that determines the most relevant subset of features shortens computation time and reduces computational complexity. An enhanced anomaly IDS model based on a multi-objective grey wolf optimization technique is proposed in this paper. Using the grey wolf optimization technique, the best features from the dataset were identified to achieve a considerable improvement in classification accuracy. A multilayer perceptron (MLP) was then employed to assess how well the selected features predict attacks. Furthermore, to show the efficiency of the suggested approach, multiple attack scenarios were evaluated using 20% of the NSL-KDD dataset. The proposed approach achieves detection rates of 92.52%, 70.31%, 14.53%, and 2.87% for the DoS, Probe, R2L, and U2R categories, respectively, with classification accuracy reaching 85.43%. Our proposed model was evaluated against other current approaches and produced noteworthy results.
| http://dx.doi.org/10.12785/ijcds/1601115 | Intrusion Detection; Grey Wolf Optimizer; Multilayer Perceptron; Feature Selection; Classification | Wathiq Laftah Al-Yaseen (Kerbala Technical Institute, Al-Furat-Al-Awsat Technical University, Iraq); Qusay Abdullah Abed (Kerbala Technical Institute, Al-Furat Al-Awsat Technical University, Iraq) |
113 | 1571009195 | Online Signature Classification based on Dynamic Nature of Features Selection Framework | In the current digital age, online signature verification plays a key role in authentication and security standards across many industries, such as finance, law, and e-commerce. The World Bank's data show the global digital economy is growing fast, with nearly 60% of people worldwide using the internet. According to figures from the International Telecommunication Union, over 4.7 billion individuals have become internet users; with so many users online, security and trust in online transactions are important issues. Forensics and biometrics are emerging as key players in this area, and verifying signatures digitally is one important use. As prior studies have shown, machine learning can help make signature verification systems more accurate and reliable. Our study describes an online verification method using machine learning that is based on the dynamic features of a signature and compares the outcomes to methods already in use. The online signature verification has been validated using supervised learning (K-nearest neighbour, KNN). This research aimed to enhance authenticity and reduce the occurrence of false positives as its primary objectives. The outcomes show that this methodology achieves better authenticity than the current methods. The Signature Verification System (SVS) 2004 signature dataset is utilized in the tests. | http://dx.doi.org/10.12785/ijcds/160197 | Online Signature Verification; Signature features; KNN (k Nearest Neighbor); Machine Learning | Akhilesh Kumar Singh (Sharda University, India); Surabhi Kesarwani (Greater Noida Institute of Technology, India); Anu Shree (GLA University, India); Pawan Kumar Verma (Sharda University, India); Nitin Rakesh (Symbiosis Institute of Technology Nagpur, India & SIT Nagpur, India); Monali Gulhane (Symbiosis International Deemed University, India) |
114 | 1571010177 | High-Fidelity Machine Learning Techniques for Driver Drowsiness Detection | Every day, an alarming number of car crashes damage automobiles, injure passengers, and claim lives. Road crashes are rising fast across the globe and have drawn many road safety commissions and concerned individuals to discuss ways to drastically reduce this menacing situation. With the introduction of artificial intelligence and technological advancement, governments and state commissions have called on universities and research institutions to develop methods to curb the rise of automobile crashes. Causes of these crashes include drunk driving and drowsiness; the latter is the most prevalent, as it can affect anyone. Drowsiness detection can be categorized into three main techniques: behavioral-based, vehicular-based, and physiological-based. In this research, the behavioral-based approach was studied, with significant consideration given to the cost of implementation, execution time, and accuracy. Three machine learning (ML) classifiers were considered: Support Vector Machine (SVM), Naïve Bayes (NB), and Random Forest (RF). A dataset of 1448 images was used for training and testing these classifiers: 70% for training and 30% for testing. The Random Forest classifier gave the best accuracy (92.41%) compared to SVM (90.34%) and Naïve Bayes (69.43%). A deep neural network (VGG16) was also used to classify drowsiness and gave a high accuracy of 97.20%, outperforming the traditional machine learning models.
| http://dx.doi.org/10.12785/ijcds/1601106 | Drowsiness detection; machine learning; automobile crashes; CNN; SVM; NB | Yasser Ismail (Southern University, USA); Ebenezer Essel (LSU, USA); Mahmoud Darwich (University of Mount Union, USA); Fahmi Khalifa (Mansoura University, Egypt & Morgan State University, USA); Fred Lacy (Southern University, USA); Abeer Abdelhai Abdelhamid (Teaching Assistant, Egypt) |
115 | 1571011528 | Diabetic Patient Real-Time Monitoring System Using Machine Learning | Continuous monitoring is critical to improving the quality of life of people with diabetes. Leveraging technologies such as the Internet of Things (IoT), modern communication tools, and artificial intelligence (AI) can contribute to reducing healthcare costs. The integration of various communication systems enables the provision of personalized and remote healthcare services. The increasing volume of healthcare data poses challenges in storage and processing. To overcome this challenge, this paper suggests intelligent medical architectures for e-health applications. To provide cutting-edge medical services, 5G and 6G technologies are necessary, since they can satisfy critical needs including high bandwidth and energy efficiency. This work presents an intelligent machine learning (ML), ensemble learning-based real-time monitoring system for diabetes patients. Mobile phones, sensors, and other intelligent devices are used as building blocks to gather body measurements. Subsequently, the collected data undergoes a normalization procedure for preprocessing. Principal Component Analysis (PCA) is employed to extract features. Every feature in the dataset is then ranked using two feature selection (FS) techniques, namely information gain (IG) and chi-square (chi2), and the association between the features chosen by the FS methods is assessed using the Pearson correlation method. For diagnostic purposes, the intelligent system employs data classification through an ensemble learning approach utilizing XGBoost and Random Forest (RF) as base models. The final classification is determined by a hard voting mechanism in conjunction with particle swarm optimization (PASWOP).
Simulation results underscore the superiority of the suggested approach in terms of accuracy when compared to alternative techniques. | http://dx.doi.org/10.12785/ijcds/160182 | Internet of Things; Machine Learning; Principal Component Analysis; Particle Swarm Optimization | Tareq Emad Ali (University of Baghdad, Iraq); Faten Imad Ali (AL-Nahrain University, Iraq); Ameer Morad (Baghdad University, Iraq); Mohammed Abdala (AL-Nahrain University & College of Information Engineering, Iraq); Alwahab Dhulfiqar Zoltán (Eötvös Loránd University, Hungary) |
116 | 1571012495 | Blockchain-Based Student Grievances Redressal System, Performance Analysis and Proposing Artificial Intelligence-Based Model | Ensuring the integrity of data is essential to protect sensitive information from unauthorized alterations. Employing appropriate tools and methodologies is crucial in preventing data manipulation. This is especially vital for maintaining the integrity of various types of information, such as financial transactions, online assets, patient health records, insurance details, data from IoT sensors, supply chain information, and logistics data. Data integrity plays a pivotal role in maintaining the accuracy of student grievance information, as higher authorities may be tempted to manipulate it for institutional interests. Within the realm of higher education, it is imperative for universities to establish a secure environment where students feel comfortable expressing their grievances. Traditional methods for registering complaints were not secure and lacked privacy and transparency, thus contributing to heightened frustration and fear. This article suggests a viable solution to this issue by implementing a student complaints system based on blockchain technology. The proposed approach involves simulating the use of the Hyperledger Fabric framework, leveraging blockchain to ensure its resistance to tampering. This study also presents an approach for computing overall system throughput and latency using Hyperledger Caliper. The simulated results indicate minimal latency and high throughput even after injecting transactions at different TPS (transactions per second) rates. Finally, an artificial intelligence-based model is proposed for the smooth functioning of such systems.
| http://dx.doi.org/10.12785/ijcds/1601102 | Blockchain; Hyperledger Fabric; Hyperledger Caliper; Fabric Performance; Artificial Intelligence | Rajesh Kaushal and Naveen Kumar (Chitkara University, India); Francesco Flammini (SUPSI, Switzerland) |
117 | 1571012577 | Enhancing and Denoising Mammographic Images for Tumor Detection using Bivariate Shrinkage and Modified Morphological Transforms | Breast cancer stands as a prevalent concern for women worldwide. Mammography serves as the frontline defense for early detection, yet its low X-ray dosage often leads to suboptimal image quality. This study proposes a multi-step solution: (i) Image enhancement employs a two-step approach: denoising using bivariate shrinkage and a hybrid median filter based on the stationary wavelet transform (SWT) to avoid shift variance, combined with modified morphological operations that model the background as a vignette image with the weighting function 1/R². (ii) Segmentation utilizes the fast K-means algorithm with a straightforward technique to automatically determine the number of clusters, locating tumors within the segment containing the largest centroid. (iii) Classification employs an artificial neural network (ANN) model, based on statistical features extracted from SWT coefficients at different levels, to classify tumors reliably. Utilizing data from the Mammographic Image Analysis Society (MIAS) database, the proposed method was tested on Gaussian noisy images, demonstrating superior performance compared to existing algorithms in detecting lesions. The segmentation achieves high accuracy (98.15% on average) and a specificity of 99.56%. However, the ground truth occasionally extends beyond the tumor mass, resulting in a low sensitivity of 62.81%. Finally, classification using the ANN model gives an overall accuracy of 96%.
| http://dx.doi.org/10.12785/ijcds/160177 | Breast Cancer; Mammogram; Stationary Wavelet Transform; Bivariate Shrinkage; Morphological Transform; Segmentation | Yen Thi Hoang Hua (University of Science-VNU HCM, Vietnam); Nguyễn Hồng Giang (University of Science, Vietnam); Binh Bao Luong (Ho Chi Minh City University of Technology & Vietnam National University Ho Chi Minh City, Vietnam); Liet Van Dang (University of Science-VNU HCM, Vietnam) |
118 | 1571012591 | Survey on Recommender Systems for Market Analysis using Deep Learning | Deep learning and machine learning techniques in marketing analysis have gained tremendous popularity because of their "learning feature." These techniques are applied in various ways within business organizations, especially in marketing, to handle tasks such as prediction, feature extraction, natural language processing, and recommendation. In the domain of recommender systems, relationships between items create denser representations. For improved and successful recommendations, embeddings (continuous vector representations) are created to encode categorical variables. Business intelligence in marketing analysis is about understanding the structure and growth of a market to estimate beneficial policies for cost minimization and profit maximization based on customer data. Consumer behavioral data is scattered across different silos, which makes data processing and analysis difficult. This study aims to provide a comprehensive review of deep learning-based methodologies for the recommendation task, along with embedding techniques that create composite embeddings from domain-specific partial embeddings of customer data scattered across silos. The study explains graph convolutional networks and knowledge graphs for learning disentangled embeddings to improve recommendation. The study reviews deep learning-based methods, algorithms, and their applications, and offers new strategic perspectives in the area of recommender systems. The results and discussion section summarizes the trends of deep learning-based methods for recommender systems for market analysis and highlights open issues to improve recommendations. | http://dx.doi.org/10.12785/ijcds/1601120 | Data Mining; Deep Learning; Information systems; Recommender systems; Graph convolutional networks | Manjusha Manikrao Kulkarni and Savitha Hiremath (Dayananda Sagar University, India) |
119 | 1571013540 | Industrial IoT Sensor Data Federation in an Enterprise for an Early Failure Prediction | The recent availability of powerful Single Board Computer (SBC) devices has facilitated edge computing at a level that was previously hard to deploy. This shift closes a gap hitherto considered tough to address in industry: low-power computation at the edge. With preventive maintenance intervention as the key purpose, a simple and quick implementation of federation in the industry had to be addressed. Industries need such predictions, with data privacy and accuracy, to take care of chronic spare replacements before things fail. We were presented with an opportunity to suggest preventive maintenance procedures and make manufacturing decisions based on Industrial Internet of Things (IIoT) data from multiple sensors across the enterprise, gathered from multiple similar machines on different shop floors in an industrial setup across a varied geography. We introduce a unique method of federation based on HDF5 model file transfer. The checkpoint file is synchronized to the central server using timed file transfer scripts at the nodes, achieving simple federation. Preset cron jobs at the clients allow near-real-time federation as a quick solution using off-the-shelf hardware. This has significant impact on decentralized decision-making: failure patterns can be identified and, in general, an accurate model can be generated with limited resources. The uniqueness of this approach is that the training checkpoints are saved: the TensorFlow Keras ModelCheckpoint callback saves the model continuously during training and at the end of training, so training can resume after any interruption.
| http://dx.doi.org/10.12785/ijcds/1601100 | Synchronized models; Federated Learning; MQTT; Edge Computing; Single Board Computers; Industrial Internet of Things | Sachin Bhoite (MIT World Peace University, India); Chandrashekhar Himmatrao Patil (Vishwanath Karad MIT World Peace University, India); Harshali Patil (Sri Balaji University, Pune, India) |
120 | 1571014497 | Exploring Honeypot as a Deception and Trigger Mechanism for Real-Time Attack Detection in Software-Defined Networking | Cyberattacks are becoming more frequent and sophisticated, making their detection harder. Probe attacks in Software-Defined Networking (SDN), which represent the starting phase of other attacks, have not been given much attention by the research community. The attacker scans the network to get the necessary details about hosts and services running in the network in order to launch successful attacks exploiting vulnerabilities in the system. The issue with probe attacks is that they occur passively and the target system is not aware of them. One existing approach requires an additional mechanism that checks network traffic continuously by embedding switches with independent agents, which is against the OpenFlow standard; another uses statistics provided by OpenFlow switches to the controller, which overloads the controller with the extra task of continuously checking traffic statistics. In this work, a lightweight mechanism is proposed that detects probe attacks in real time using machine learning. A honeypot is integrated into the detection mechanism to detect passive probe attacks by luring attackers with fake services, and it serves as a trigger that activates the detection mechanism when necessary. The experimental results show that the proposed mechanism successfully detects probe attacks in real time, achieving 94.73% accuracy with minimal CPU load. | http://dx.doi.org/10.12785/ijcds/160169 | Intrusion Detection System (IDS); Software Defined Networking (SDN); Probe; Reconnaissance; Honeypot; Machine learning (ML) | Harman Yousif Khalid (University of Duhok, Iraq); Najla Badie Aldabagh (Mosul University, Iraq) |
121 | 1571014529 | IoT-based AI Methods for Indoor Air Quality Monitoring Systems: A Systematic Review | This systematic review delves into Indoor Air Quality (IAQ) monitoring systems that combine Artificial Intelligence (AI) and Internet of Things (IoT) technologies. Its overarching aim is to assess the efficacy of these systems in regulating IAQ within buildings, with a specific focus on mitigating pollutant levels and their harmful effects on occupants. The study undertakes a comprehensive review of existing literature and research efforts that rely on AI and IoT algorithms for parameter monitoring, data analysis, and device evaluation. It also delves into the complexities of system architecture, deployment techniques, and operational efficiency. Furthermore, the review draws on diverse information sources, including smart sensors and IoT devices deployed in the ambient environment, and elucidates the capability of these instruments to gather real-time data on variables such as volatile organic compounds, temperature fluctuations, and humidity levels. A vital aspect of this study is the examination of AI, Machine Learning (ML), and Deep Learning (DL) algorithms, showcasing their predictive capability within monitoring frameworks. The study also investigates the complementary relationship among these algorithms, expounding their role in enhancing system accuracy and optimizing energy consumption. Moreover, the research attempts to delineate personalized health recommendations tailored to individual occupants, derived from the wealth of data gathered by these systems. By integrating present-day technologies with empirical insights, this study aims to pave the way for better IAQ control strategies, fostering healthier and more sustainable living environments. 
| http://dx.doi.org/10.12785/ijcds/160159 | indoor air quality; IAQ; Air pollution; Sick Building | Hayder Qasim Flayyih (University of Dyala, Iraq); Jumana Waleed and Amer M. Ibrahim (University of Diyala, Iraq) |
122 | 1571014749 | Trajectory tracking of a robot arm using image sequences | In this research paper, we explore the use of a robotic arm in computer vision and robotics to generate trajectories. The combination of two methods, namely convolutional neural networks (CNN) and Spline3, is utilized to achieve accurate trajectory tracking. The CNN is trained to track a sequence of images captured by the robotic arm as it moves in a 2D plane. By analyzing the visual data, the CNN identifies and localizes the object accurately. Once the object is detected and located by the CNN, the next step is to generate a trajectory using the Spline3 method. This trajectory is characterized by minimized oscillations and irregularities, ensuring precise path replication. To evaluate the effectiveness of the proposed method, simulations are conducted using a two-degree-of-freedom model of the SCARA arm. These simulations demonstrate the correlation between accurate object localization by the CNN and trajectory tracking precision by the robotic arm. The evaluation metrics employed include mean Average Precision (mAP), recall, precision, cosine similarity, Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR). These metrics provide quantitative measures of the performance of both the CNN object detection and the Spline3 trajectory generation. The main aim of this study is to enable the use of this type of manipulator arm in the most complex areas, for example, to help surgeons carry out their surgical operations in an accurate and reliable manner. | http://dx.doi.org/10.12785/ijcds/160178 | Object-detection; CNN; Trajectory; Tracking; Spline3; SCARA | Aicha Belalia (University of Science and Technologies USTO & University of Chlef, Algeria); Samira Chouraqui (University of Sciences and Technology Oran USTO, Algeria); M'hammed Boussir (University of Science and Technology Oran USTO, Algeria) |
123 | 1571015334 | Financial Technology in Recovery: Behavioral Usage of Payment Systems by Indonesian MSMEs in the Post-Pandemic Era | The outbreak of the COVID-19 pandemic hastened the digital transformation of Indonesian MSMEs. The crisis necessitated an immediate transition to online platforms to sustain business activities, driving many MSMEs to overcome previous reluctance and embrace digital technologies. This research focuses on the determinants of intention and use behaviour of fintech payment systems among Indonesian MSMEs. The study highlights the importance and urgency of understanding the evolving dynamics of digital adoption among Indonesian MSMEs in the post-COVID era, using the UTAUT model to shed light on these specific factors. The approach used in this research is quantitative. The sampling technique was purposive sampling, with the selection criterion that MSMEs had engaged with fintech payment systems for at least one year. The sample size was calculated using the Slovin formula with a 5% margin of error, yielding 399 MSME respondents from Surabaya, Indonesia. Data were analysed with SEM-PLS using SmartPLS 4. The findings indicate that Performance Expectancy, Effort Expectancy, and Social Influence significantly influence Behavioural Intention. However, Facilitating Conditions do not have a notable impact on either Behavioural Intention or Use Behaviour. In addition, Behavioural Intention significantly influences the Use Behaviour of fintech payment systems in MSMEs. By applying the UTAUT model in a novel context, this study not only extends the model's relevance but also ascertains the influence of factors such as performance expectancy, effort expectancy, social influence, and facilitating conditions on the acceptance of FinTech payment solutions. 
Practically, this study offers governments valuable guidance on helping MSMEs formulate digital strategies and surmount obstacles in FinTech integration. Further exploration of these facets is required in future studies, and policy formulation should better align with the unique necessities of MSMEs during their digital transformation processes. | http://dx.doi.org/10.12785/ijcds/160198 | MSMEs; Post-Covid19; FinTech Payment System; Technology Acceptance; UTAUT | Ian Firstian Aldhi and Fendy Suhariadi (Universitas Airlangga, Indonesia); Elisabeth Supriharyanti (Universitas Katolik Widya Mandala Surabaya, Indonesia); Elvia Rahmawati (Universitas Narotama, Indonesia); Dwi Hardaningtyas (Wijaya Putra University, Indonesia) |
124 | 1571015371 | Ship Movement Analysis Based on Automatic Identification System (AIS) Data Using Convolutional Neural Network and Multiple Thread Processing | Automatic Identification System (AIS) data is one of the most common and widely used datasets in the maritime industry. This dataset is a useful source of information regarding maritime traffic for both individuals and businesses. The reliability of this data and its long-distance transmission over the sea are the primary motivating factors behind its utilization. A wide variety of research projects are currently being carried out on AIS data. Some of the applications being investigated include the detection of ship travel anomalies, the monitoring of ship security, the detection of ship collisions, and shipment trajectory tracking. A number of different machine learning and deep learning methods are also being utilized to perform the analysis of the data. Nevertheless, the vast majority of the studies done so far have been carried out without any analysis of the consequences of concurrent processing of AIS data. The purpose of this study is to investigate and evaluate the impact that different numbers of processing threads have on accuracy as well as processing time. For the analysis of ship movement classification, a deep learning CNN model will be utilized. This study will measure speed, accuracy, and CPU utilization while performing AIS data analysis. | http://dx.doi.org/10.12785/ijcds/1601107 | Automatic Identification System data; AIS data; Convolutional Neural Network; Multithread Processing; Parallel Processing | Yosia Adriel Kornelius (Bina Nusantara University, Indonesia); Maybin K. Muyeba (MIST Consulting Ltd, Manchester, United Kingdom); Harco Leslie Hendric Spits Warnars (Bina Nusantara, Indonesia) |
125 | 1571016641 | Advanced Hybrid Recommender Systems: Enhancing Precision and Addressing Cold-Start Challenges in E-Commerce and Content Streaming | This research critically examines the pertinence of advanced recommender systems in tandem with the burgeoning e-commerce and content streaming domains. Traditional recommendation systems falter in cold-start scenarios, where sparse user or item data leads to inaccurate suggestions. Moreover, they overlook diverse interaction and auxiliary information within user-item pairs. Addressing these challenges, the paper introduces a novel hybrid recommendation system amalgamating collaborative filtering, content-based filtering, and knowledge-based techniques. Leveraging user-item interaction data alongside item and user features, when available, enhances recommendation coverage and accuracy for new entities. Matrix factorization with side information integrates content features into collaborative filtering, enriching personalization via latent factors. Deep learning models with attention mechanisms exploit auxiliary information, refining recommendation quality dynamically. Real-time interaction and scenario data fuel a contextual bandit framework, continuously evolving user profiles via multi-armed bandit algorithms. Employing Approximate Nearest Neighbors techniques like Locality-Sensitive Hashing expedites user similarity identification, curtailing computational overhead. Finally, ensemble learning with model stacking integrates predictions from multiple recommendation models, mitigating biases and capturing diverse data patterns. The study's ramifications are extensive, notably boosting recommendation precision and recall, thereby augmenting user satisfaction and engagement significantly. By offering a holistic approach to the cold-start problem, encompassing diverse data sources and recommendation techniques, this research makes a substantial contribution to the field. 
| http://dx.doi.org/10.12785/ijcds/1601129 | Hybrid Recommendation; Matrix Factorization; Deep Learning; Attention Mechanisms | Indu Hariyale (YCCE Nagpur, India); Mukesh Raghuwanshi (Professor, Computer Engineering, Symbiosis Institute of Technology, Nagpur (SIT Nagpur), Symbiosis International (Deeme) |
126 | 1571016895 | Emotion Infused Rumour Detection Model using LSTM | Twitter is a highly favored platform for sharing brief messages, known as tweets, which are read and shared among users at a rapid pace. Hence, information disseminates quickly within the network's community of users. Twitter's unregulated environment provides a suitable platform for individuals to share and circulate unverified information, and this propagation of rumours can greatly affect society. Accurately detecting rumours in tweets on Twitter is therefore a crucial task. In this study, we propose an Emotion Infused Rumour Detection model based on LSTM that employs tweet text and twenty-one distinct linguistic, user, post, and network features to classify tweets as rumour or non-rumour. The performance of the proposed Emotion Infused Rumour Detection model using LSTM is compared to two different deep learning models. The findings of the experiment demonstrate the superiority of the deep learning-based model for identifying rumours. The proposed Emotion Infused Rumour Detection model, which uses an LSTM model, earned an F1-score of 0.91 in identifying rumour and non-rumour tweets, outperforming the state-of-the-art findings. The suggested approach can lessen the influence of rumours on society, prevent loss of life and money, and increase users' confidence in social media platforms. The proposed model has the ability to promptly and accurately recognize tweets containing rumours, aiding in the prevention of the spread of misinformation. | http://dx.doi.org/10.12785/ijcds/160186 | LSTM; Rumour; RNN; Rumour Detection; Twitter; Deep Learning | Osheen Sharma (Assistant Professor - Goswami Ganesh Dutta Sanatan Dharma College, Chandigarh & Chitkara University Institute of Engineering & Technology, Punjab, India); Monika Sethi (Chitkara University, Punjab, India); Sachin Ahuja (Chandigarh University, India) |
127 | 1571019485 | Enhanced Security Measures in Advanced Wireless Sensor Networks: Multi-Attack Prevention Strategies | Distributed denial of service (DDoS) attacks are coordinated attempts to make a system or service unavailable to its intended users by flooding it with traffic or otherwise overloading it with unnecessary requests. Attacks that originate at the application layer of the network are more challenging to detect because they masquerade as legitimate traffic. Networks are protected from DDoS attacks using a chaos-theory-based, six-step strategy that relies on the Cooperative-Based Fuzzy Artificial Immune System (Co-FAIS) to determine whether traffic is malicious. To address the issue of power security, the event acknowledgment method employs the Sequential Probability Ratio Test (SPRT) and the informational character of the data. The Destination Oriented Directed Acyclic Graph (DODAG), built on the Routing Protocol for Low-Power and Lossy Networks (RPL), ensures the seamless end-to-end flow of data in a geographically dispersed sensor network whose communication is disrupted by natural disasters. QoI-aware RPL can save power by gathering the same information with less data transfer. However, randomly constructed sensing frameworks are error-prone, and the reliability of reconstructed results is lower than it would be with a more methodical approach. Therefore, it is difficult to retain additional component information from the underlying signal while minimising reconstruction errors. To provide a more precise signal of interest while requiring less storage space, an effective compression and reconstruction process is required. This work employs a modified bat algorithm to improve the sensing matrix and thus get around this obstacle. 
| http://dx.doi.org/10.12785/ijcds/1601116 | QoI-aware RPL; DDoS attacks; Sequential Probability Ratio Test (SPRT); Destination Oriented Directed Acyclic Graph (DODAG); Cooperative-Based Fuzzy Artificial Immune System (Co-Fais) | Arvindhan M, B. Bharathi Kannan, Srinivasan Sriramulu and Sunil Kumar (Galgotias University, India); Sudeep Varshney (Sharda University, India) |
128 | 1571020541 | A Comprehensive Review of Course Recommendation Systems for MOOCs | In recent years, many students have accepted MOOCs as a means of education. Due to the enormous number of courses available through MOOCs, students need help identifying and selecting an appropriate course based on their profile and interests. To address this issue, MOOCs incorporate a course recommendation system that generates a list of courses based on the student's prerequisites. This literature review attempts to detect and assess trends, processes employed, and developments in MOOC course RS through an exhaustive analysis of academic literature published between January 1, 2016 and November 30, 2023. The study covers the various methodologies employed, the datasets used for evaluations, the performance measures used, and the many issues encountered by Recommendation Systems. Literature published in ScienceDirect, Wiley, Springer, ACM, and IEEE was chosen for review. After applying inclusion and exclusion criteria, 76 articles from the aforementioned databases, including journals, conferences, and book chapters, were selected. The investigation found that methods from Machine Learning and Deep Learning were widely deployed. Traditional approaches like content-based filtering, collaborative filtering, and hybrid filtering were frequently employed in conjunction with other algorithms for more accurate and precise suggestions. It also underlines the need to take data sparsity, the cold start problem, data overload, and user preferences into account when designing a course recommendation system. The literature study examines cutting-edge course Recommendation Systems in depth, covering recent developments, difficulties, and future work in this field. 
| http://dx.doi.org/10.12785/ijcds/1601126 | MOOCs; Course Recommendation Systems; Machine Learning; Deep Learning; Recommendation Systems (RS) | Mohd Mustafeez ul Haque (Maulana Azad National Urdu University, India); Kotaiah Bonthu (Central Tribal University of Andhra Pradesh, India); Jameel Ahamed (Maulana Azad National Urdu University, India) |
129 | 1571020760 | An Ensemble Neural Architecture for Lung Diseases Prediction Using Chest X-rays | Accurate diagnostic tools for disease control and treatment options are of immense importance, especially during pandemics such as the Coronavirus (COVID) outbreak that drew global attention in late 2019. Early detection and isolation are the cornerstones of effective prevention of virus spread. Artificial intelligence (AI)-based diagnostic tools for COVID detection have surged dramatically using various diagnostic imaging techniques, among which Chest X-ray (CXR) has been extensively investigated due to its fast acquisition coupled with its superior results. We propose a hybrid, automated, and efficient approach to detect COVID-19 at an early stage using CXRs. One of the main advantages of the proposed approach is the development of a learnable input scaling module, which accommodates CXRs of different sizes while keeping prominent CXR features and filtering out noise. Additionally, the suggested method ensembles several learning modules to extract more discriminative representations of texture and appearance cues in CXRs, thereby facilitating more accurate classification. In particular, we integrated two sets of features (texture descriptors and deeper features) representing a rich concentration of local and global information. In addition to learnable scaling and information-rich features, an ensemble classifier using various machine learning models is used for classification. Our classification module includes support vector machine, XGBoost, and extra trees modules. Extensive evaluation, supported by ablation and comparison studies, is conducted using two benchmark datasets to evaluate the model's performance in a cross-validation strategy. Across various metrics, the results document the robustness of our ensemble classification system, with accuracies of 98.20% and 97.85% for the two datasets, respectively. 
| http://dx.doi.org/10.12785/ijcds/160174 | Ensemble Classifier; Autoencoder; Artificial Intelligence; Feature Fusion | Abeer Abdelhai Abdelhamid (Teaching Assistant, Egypt); Oluwatunmise Akinniyi (Morgan State University, USA); Gehad A Saleh (Mansoura University, Egypt); Wael Deabes (Texas A&M University, USA); Fahmi Khalifa (Mansoura University, Egypt & Morgan State University, USA) |
129 papers.