List papers
Seq | # | Title | Abstract | Keywords | Authors with affiliation and country | DOI |
---|---|---|---|---|---|---|
1 | 1570983253 | Multi-Sensor Data Fusion using FPN-ResNET | Multi-sensor data fusion is ubiquitous; therefore, the associated research is significant. There are several instances in day-to-day activities where data fusion can be observed. The present-generation autonomous driving system requires a thorough understanding followed by a voluminous dataset for training the model. The experimental data of imagery and proximity sensors are significant for the model's performance. The projection of the camera to LiDAR proves ineffective as the semantic density of the camera is suppressed in the process. The present work attempts to enhance the conventional point-level fusion techniques by allocating prime importance to semantic density. This is facilitated by performance optimization, identifying the hindrances, and enhancing the view transformation through Bird's-eye-View pooling. Object tracking is facilitated through the Extended Kalman Filter (EKF) by fusing the LiDAR data with the camera detections. The detection precision is found to be 0.9684, and the detection recall is 0.9436, while the mAP is evaluated to be 74.3%. | sensor fusion; LiDAR; multi-sensor data; Feature Pyramid Net; Residual Network | Vinodh S (Visvesvaraya Technological University & R V College of Engineering, India); Ramakanth Kumar P (R. V. College of Engineering, India) | |
2 | 1570999111 | Detection of Soil Nitrogen Levels via Grayscale Conversion: A Full-Factorial Design of Experiment Approach | In smart agriculture, detecting the level of soil nitrogen is essential for soil fertility and productivity, which correlate with crop yields and fertilizer recommendations. With the advancement of technology, identification of such a level is easily obtained using devices that capture images, which are affected by two factors, i.e., the tilting angle of the test tube and the lighting condition. The device produces images containing three color values, namely red, green, and blue. Using grayscale conversion, these values are then converted into a single value for appropriate analysis. This study aims to determine the optimal combination of the factors to obtain the correct reading of nitrogen using a full-factorial design of experiments with four replications designed by Minitab®. The results are analyzed using Analysis of Variance and show that the determined factors, tilting angle and lighting condition, and their interaction are significant. The developed regression model explains 76.43% of the variability among factors. The optimal setting for tilting angle is 90°, while the lighting condition should be indoors. Design of Experiments is a valuable statistical tool that promotes optimization and efficient experimentation for scientists and engineers. | Automation; Design of Experiments; Full-Factorial; Grayscale conversion; NPK level; Optimization | John Joshua F. Montañez (Bicol State College of Applied Sciences and Technology, Philippines) | |
3 | 1571000399 | Incorporating Transfer Learning Strategy for improving Semantic Segmentation of Epizootic Ulcerative Syndrome Disease Using Deep Learning Model | Automated fish disease detection can eliminate the need for manual labor and provides earlier detection of fish diseases such as EUS (Epizootic Ulcerative Syndrome) before they spread further throughout the water. One of the problems faced when implementing a semantic segmentation fish disease detection system is the limited size of the semantic segmentation dataset. On the other hand, classification datasets for fish disease detection are more common and available in larger sizes, but they cannot be used in segmentation tasks directly since they lack the necessary labels for such tasks. In this paper, we propose a training strategy based on transfer learning to learn from both ImageNet and a classification dataset before being trained on the segmentation dataset. Specifically, we first train the ImageNet pre-trained VGG16 and ResNet50 on a classification task, then we transfer the weights into semantic segmentation architectures such as U-Net and SegNet, and finally train the segmentation network on a segmentation task. We introduce two different modified U-Net architectures to allow the respective pre-trained VGG16 and ResNet50 weights to be transferred into the architecture. We used a classification dataset containing 304 images of fish diseases for the classification task and a segmentation dataset containing 25 images of EUS-affected fishes for the segmentation task. The proposed training strategy is then compared with alternative training strategies such as training VGG16 and ResNet50 on ImageNet alone or the classification dataset alone. When applied to SegNet and U-Net, the proposed training strategy surpasses the respective architectures trained on ImageNet or the classification dataset alone. Between these two architectures with all compared training strategies, the U-Net+VGG16 architecture trained with our proposed training strategy achieves the best performance with validation and testing mIoU of 57.80% and 60.43%, respectively. The training code is available at https://github.com/RealOfficialTurf/FishDiseaseSegmentation. | Fish Disease Detection; Semantic Segmentation; Transfer Learning; U-Net Model; SegNet Model | Anbang Anbang and I Gede Putra Kusuma Negara (Bina Nusantara University, Indonesia) | |
4 | 1571012424 | Adopting Complex Networks to Detect Cheat Cases in Electronic exams | For electronic education considerations, it is sometimes crucial to rely on solutions even though these solutions have more negative than positive results. One of the most sensitive areas in remote studies is the morals and honesty of the students, particularly when they perform online tests or exams. This study suggests a monitoring system to prevent cheating in electronic exams based on the distributed geo-information of students' devices and the integration of complex network analysis. This investigation was conducted in a class with equal gender distribution: 34 females and 34 males attended the class. The number of e-learning and e-test sessions varied for every student. According to the study, some students attended only e-test sessions rather than e-learning sessions; in this instance, the students were removed in order to provide a typical distribution of honesty ratings. Following the computation of each student's honesty percentage, the results were distributed regularly according to the total number of students. The findings indicate that, when considering the differences in honesty scores for the two genders, distant E-tests perform better with female students than with male students. There are several possible explanations for this, one of which is the social structure of the students. In Middle Eastern cultures, it is common knowledge that men enjoy greater freedom and space than women. This had an impact on the ability of male students to congregate in one place, as this study demonstrated when IP-address physical locations were compared. It was discovered that many students had abnormalities with their Electronic-Study-Profile when taking the E-test, but that the same students also had similarities with the E-Test-Profile. Compared to the male students, female students also showed anomalies in their E-Test-Profiles. | Electronic learning; electronic exams; COVID-19; Networks | Mahmood Alfathe (Ninevah University, USA); Ali Othman Al Janaby (Ninevah University, Iraq & Electronics Engineering, Iraq); Azhar Abdulaziz (Ninevah University, Iraq); Manar Kashmoola (Mosul University, Iraq) | |
5 | 1571015450 | Deep Learning Algorithm using CSRNet and Unet for Enhanced Behavioral Crowd Counting in Video | In crowd analysis, video data poses challenges due to occlusion, crowd densities, and dynamic environmental conditions. To address these challenges and to enhance accuracy, we have proposed Behavioral Crowd Counting (BCC), which combines the Congested Scene Recognition Network (CSRNet) with Unet on video data. The CSRNet combines two networks, namely (1) a frontend for feature extraction and (2) a backend for the generation of a density map. It effectively tallies individuals within densely populated regions, offering a solution to the high-crowd-density constraint. The Unet builds the semantic map and refines the semantic and density maps of CSRNet. The Unet unravels complex patterns and connections among individuals in crowded settings, capturing spatial dependencies within densely populated scenes. It also offers the flexibility to incorporate attention maps as optional inputs to differentiate crowd regions from the background. We have also developed new video datasets, namely the Behavioral Video Dataset, from the fine-grained crowd-counting image dataset to evaluate the BCC model. The datasets include standing vs sitting, waiting vs non-waiting, towards vs away, and violent vs non-violent videos, offering insights into posture, activity, directional movement, and aggression in various environments. The empirical findings illustrate that our approach is more efficient than others in behavioral crowd counting within video datasets consisting of congested scenes, as indicated by the MSE, MAE, and CMAE metrics. | Congested Scene Recognition Network (CSRNet); Unet; Feature Extraction; Behaviour; Crowd Analysis | B Ganga (University of Visvesvaraya College of Engineering); Lata B T, K Rajshekar and KR Venugopal (UVCE, India) | |
6 | 1571016767 | Encryption Technique Using a Mixture of Hill Cipher and Modified DNA for Secure Data Transmission | The 21st century has seen an explosion of information due to the quick development of technology, making information a far more crucial strategic resource. In addition, there has been a development in hackers' ability to steal information with all their might and intelligence. Consequently, secretly transmitting information became the main concern of all agencies. Further, as classical cryptographic methods are now exposed to attacks, protecting data with a combination of steganography and cryptography techniques is becoming increasingly popular and widely adopted. Therefore, it has been determined that DNA use in cryptography could lead to new technological advancements by converting original text into an unintelligible format. In this paper, a new cryptographic technique that combines a Modified DNA sequence with the Hill cipher is proposed. The proposed technique includes four phases: In the first phase, the Hill cipher encrypts plain text into n-bit binary numbers. Second, XOR operations are performed on the result, and then a 32-bit key value is added to the XOR output. Third, Modified DNA cryptography is applied to generate ambiguity and steganography. The decryption process, which is the last phase, is applied on the receiver side to recover the original message. The proposed scheme provides higher data security when compared to several existing schemes. | Hill Cipher; Modified DNA; Cryptography; Steganography | Kameran Ali Ameen, Walled Khalid Abdulwahab and Yalmaz Najm Aldeen Taher (University of Kirkuk, Iraq) | |
7 | 1571019981 | Using DCT and Quadtree for Image Compression | Data file compression has long been used to reduce data file sizes, due to restrictions in memory capacity and slow network communication speeds. Compression is also applied to images and became a necessity with the introduction of the World Wide Web, since images are used extensively in websites. A Bitmap is a standard for image storage introduced by Microsoft for its Windows operating system. Bitmaps are very popular today, but they use a lot of memory space, creating the need to compress them. Various techniques are used for bitmap (BMP) image compression. Some are lossless, that is, the quality of the image is not modified, and others are lossy, in which a part of the image information is lost. Examples of lossless techniques include GIF and PNG, while JPG is an example of a lossy technique. JPG is composed of the Discrete Cosine Transform (DCT), Quantization, Huffman coding and Run-length encoding (RLE). Quadtrees are also used for lossless or lossy image compression. In this work, we propose an algorithm based on DCT/Quantization and Quadtrees used together in sequence for lossy image compression. Our proposed method is compared with other techniques, namely JPEG and Quadtree with different parameters. In the results, our proposed algorithm performed well compared to other quadtree methods. | Image Compression; Quadtree; DCT algorithm; Quantization; JPEG; Bitmap | Chadi Fouad Riman (American University of the Middle East, Kuwait); Pierre Abi Char (American University of the Middle East-kuwait, Kuwait) | |
8 | 1571020131 | Feature Engineering for Epileptic Seizure Classification Using SeqBoostNet | Epileptic seizure, a severe neurological condition, profoundly impacts patients' social lives, necessitating precise diagnosis for classification and prediction. This research addresses the critical gap in automated seizure detection for epilepsy patients, aiming to improve diagnostic accuracy and prediction capabilities through Artificial Intelligence-driven analysis of Electroencephalography (EEG) signals. The system employs an innovative combination of spectral and temporal features, combining Uniform Manifold Approximation and Projection (UMAP) with the Fast Fourier Transform (FFT), and a classification technique called Sequential Boosting Network (SeqBoostNet). SeqBoostNet is a groundbreaking stacked model that integrates machine learning (ML) and deep learning (DL) approaches, leveraging the strengths of both methodologies to swiftly differentiate seizure onsets, events, and healthy brain activity. The method's efficacy is validated on benchmark datasets such as BONN from the UCI repository and the real-time BEED data from the Bangalore EEG Epilepsy Dataset, achieving remarkable accuracy rates of 98.40% for the BONN dataset and 99.66% for the BEED dataset. The practical significance of this study lies in its potential to transform epilepsy care by providing a precise automated seizure detection system, ultimately enhancing diagnostic accuracy and patient outcomes. Furthermore, it underscores the importance of integrating advanced AI techniques with EEG analysis for more effective neurological diagnostics and treatment strategies. | Epileptic Seizure; UMAP; Machine Learning; Deep Learning; FFT; LSTM | Najmusseher and Nizar Banu P K (Christ University, India) | |
9 | 1571020612 | Optimized feature set in classification of plant leaves images using machine learning models | Saving the earth becomes the utmost priority and responsibility of any individual. Environmental and ecosystem health assessment studies require precision farming, enabling early identification of diseases and optimizing crop management. Automatic plant leaf detection will serve as one of the crucial contributions towards biodiversity research. The proposed work provides an optimized feature set in the classification of plant leaves using machine learning techniques. The work uses fourteen different plant leaves, namely, apple, blueberry, cherry, corn, cotton, grape, groundnut, peach, pepper, potato, raspberry, soybean, strawberry, and tomato. A total of 20,357 images are taken for training and testing purposes. Features include shape, texture, HSI and wavelets. Features are reduced using feature optimization techniques such as XGBoost, Pearson correlation, and chi-squared. In search of the best classifier, five classifiers, namely, random forest, k-nearest neighbor, support vector machine, naïve Bayes and decision tree, are varied with their hyperparameters. The SVM classifier gave the best results, achieving an accuracy of 99.59% with four-fold cross-validation. The novelty of the work lies in deploying features using the knowledge gained by farmers. Results reveal that the method outperforms state-of-the-art works and are encouraging. In this regard, the techniques used here enable us to target the leaves, detect the diseases, and further facilitate opting for preventive measures. | Ecosystem; Biodiversity; Classification; HSI; Wavelets | Nikhil Jeevan Inamdar, Prof (KLS Gogte Institute of Technology Belagavi, India); Manjunath Managuli (KLS GIT, Belagavi, Karnataka and Affiliated to VTU Belagavi, India) | |
10 | 1571021663 | Design and Implementation of an Advanced Digital Communication System Based on SDR | Simplicity, flexibility, and high scalability are mandatory for modern digital communication systems. This can be achieved using software-defined radio (SDR) technology, which depends on digital signal processing (DSP) software algorithms. This paper considers designing the modulation and demodulation parts of a single-carrier digital communication system based on a Microcontroller (MC), in which the signal is modulated digitally using a look-up table (LUT) module, while the receiver demodulates the signal using a digital signal processing algorithm that utilizes a single-carrier discrete Fourier transform (DFT). Both the receiver and the transmitter employ a Teensy 4.0 microcontroller, which can be programmed using the C++ language. The target data rate used as a test in this paper is 10 kilosymbols/sec (KS/s), and the system supports multiple modulation types. For the transmitter, modulation schemes such as BPSK, QPSK, 8QAM, 8PSK, and 16QAM are generated, while at the receiver the symbols' phases, rather than their amplitudes, are exploited to detect the signal; this method is suitable for any type of modulation scheme. In summary, this paper achieves the design of two new ideas: the first is modulating the signal using the MC, and the other is demodulating the signal using the MC. | Digital modulation; microcontroller; C++; MPSK; SDR; QAM | Sadeem Mohameed (University of Ninevah, Iraq); Mohamad A. Ahmed, Mahmod Ahmed Mahmod and Abdullah B. Al-Qazzaz (Ninevah University, Iraq) | |
11 | 1571022983 | A survey of Fingerprint Identification System Using Deep Learning | The integration of deep learning technologies, particularly Convolutional Neural Networks (CNNs), has profoundly transformed fingerprint identification, providing a more effective and accurate approach compared to traditional methods. These deep learning models, trained on extensive datasets of fingerprint images, excel in extracting intricate patterns and unique features essential for precise fingerprint matching, even amidst challenging conditions like varying image quality, orientation, or lighting. Notably, the adaptability of deep learning-based systems, continuously refining their accuracy and performance with additional data and fine-tuning, proves indispensable in dynamic environments with ongoing fingerprint data collection. Moreover, the convergence of deep learning with other biometric modalities, such as facial recognition or iris scanning, has led to the development of robust multimodal biometric systems, enhancing security through layered verification mechanisms. However, persisting challenges, such as the acquisition of large annotated datasets and the mitigation of bias in training data, underscore the importance of addressing these issues to further enhance the reliability and performance of deep learning-based fingerprint identification systems. This survey paper aims to comprehensively review and analyze the application of deep learning techniques in fingerprint identification systems, providing valuable insights into current advancements, challenges, and future directions in the field, thereby serving as a resource for researchers, practitioners, and enthusiasts seeking a nuanced understanding of this critical area in biometrics. | Fingerprint; Identification; Deep learning; CNN; Survey | Hussein Ghalib Muhammad (University of Basra, Iraq); Zainab Ali Khalaf (University of Basrah, Iraq) | |
12 | 1571023677 | Variance Adaptive Optimization for the Deep Learning Applications | Artificial intelligence encompasses deep learning, which learns by training a deep neural network. Optimization is an iterative process of improving the overall performance of a deep neural network by lowering the loss or error in the network. However, optimizing deep neural networks is a non-trivial and time-consuming task. Deep learning has been utilized in many applications ranging from object detection, computer vision, and image classification to natural language processing. Hence, carefully optimizing deep neural networks becomes an essential part of application development. In the literature, many optimization algorithms like stochastic gradient descent, adaptive moment estimation, adaptive gradients, root mean square propagation, etc., have been employed to optimize deep neural networks. However, optimal convergence and generalization on unseen data remain issues for most of the conventional approaches. In this paper, we have proposed a variance adaptive optimization (VAdam) technique based on the Adaptive moment estimation (ADAM) optimizer to enhance convergence and generalization during deep learning. We have utilized gradient variance as a useful insight to adaptively change the learning rate, resulting in improved convergence time and generalization accuracy. The experimentation performed on various datasets demonstrates the effectiveness of the proposed optimizer in terms of convergence and generalization compared to existing optimizers. | Deep Neural Networks; Deep Learning; Optimization; Variance; Convergence | Nagesh Jadhav and Rekha Sugandhi (MIT Art, Design and Technology University); Rajendra Pawar (MIT Art Design and Technology University Pune India, India); Swati Shirke (Pimpri Chinchwad University, India); Jagannath E. Nalavade (MIT Art, Design and Technology University) | |
13 | 1571024871 | RFM-T Model Clustering Analysis in Improving Customer Segmentation | In the dynamic landscape of business, understanding and identifying customers are paramount for effective marketing strategies. This study delves into the realm of customer segmentation, a crucial component of robust marketing strategies, particularly focusing on the widely adopted RFM (Recency, Frequency, and Monetary) model. Various new models of RFM have been explored, with a notable extension being the RFM-T model, introducing the "T" variable to represent Time. This study aims to compare the performance of the traditional RFM model with the innovative RFM-T model, assessing their efficacy in customer segmentation. Utilizing a dataset sourced from a US-based online retail platform, the study employs the K-Means algorithm for segmentation, a method commonly utilized for partitioning data points into distinct clusters. To ascertain the optimal number of clusters, the Elbow Curve approach is employed, offering insight into the granularity of segmentation. Subsequently, the Silhouette Score, a metric used to assess the cohesion and separation of clusters, is leveraged to evaluate the quality and effectiveness of both models. By conducting a comparative analysis of the traditional RFM model and its enhanced RFM-T counterpart, the study endeavors to shed light on their respective contributions to the refinement of customer profiling and segmentation strategies within the online retail industry. Through this exploration, businesses can glean valuable insights into the evolving landscape of customer segmentation, thereby enabling them to tailor their marketing efforts more precisely and effectively to meet the dynamic needs and preferences of their target audience. | RFM; RFM-T; Time; K-Means algorithm; Customer segmentation | Astrid Dewi Rana, Quezvanya Chloe Milano Hadisantoso and Abba Suganda Girsang (Bina Nusantara University, Indonesia) | |
14 | 1571024987 | Investigational Study for Overcoming Security Challenges in Implantable Medical Devices | Implantable Medical Devices (IMDs) have gained significant popularity due to their telemetry capabilities, making them a preferred choice for patients and medical professionals alike. However, like any networked device, IMDs are vulnerable to security breaches, which can pose serious risks to human life. Consequently, ensuring robust security measures for these devices is of utmost importance. While researchers have made efforts to address these vulnerabilities, many proposed solutions are impractical due to the inherent constraints associated with IMDs, particularly their limited battery life. This paper presents a comprehensive review of battery-efficient security solutions for IMDs by surveying extensive research literature in the field. By exploring innovative approaches that provide both strong security and optimized energy consumption, this study aims to strike a balance between safeguarding IMDs and prolonging their operational lifespan. The paper consolidates existing research, highlighting promising avenues for practical and effective security solutions in the face of evolving threats. Serving as a valuable reference for future research endeavors, this work emphasizes the criticality of continuous advancements in this field to ensure the well-being of patients who rely on these life-saving devices. Ultimately, it underscores the need to overcome the unique challenges posed by limited battery life in order to enhance the security of IMDs and mitigate potential risks to human health. | Implantable Medical Devices; Security; Privacy; battery-efficient | Muawya Al-Dalaien (Princess Sumaya University for Technology, Jordan); Hussein Al bazar (Arab Open University Saudi Arabia, Saudi Arabia); Hussein El-jaber (Arab Open University, Kingdom of Saudi Arabia, Malaysia) | |
15 | 1571025378 | AI-Based Disaster Classification using Cloud Computing | The combination of cloud computing and artificial intelligence (AI) offers a potent remedy for disaster management and response systems in this age of quickly advancing technology. Using text and image data gathered from social media sites, this project makes use of the collective intelligence present in the data. We carefully trained a bidirectional LSTM model for textual analysis and a Convolutional Neural Network (CNN) model for image classification using Kaggle datasets. Our system's fundamental component is an API that is installed on an Amazon Web Services (AWS) EC2 instance. To improve performance and stability, the API is strengthened with load balancing, auto-scaling features, and multi-AZ redundancy. The API easily integrates with the trained models to determine whether the content is relevant to a disaster scenario when it receives input data. When a positive classification is made from the processed text or image, an alert mechanism sends out an email notification with important information about the disaster that was discovered. The abundance of user-generated content available on social media sites like Facebook, Instagram, and Twitter presents a unique opportunity to improve the efficacy and efficiency of disaster relief operations. The main objective of this project is to use cutting-edge technologies to sort through massive amounts of social media data and derive useful insights in emergency situations. | Machine Learning; Deep Learning; Long Short-Term Memory; Elastic Compute Cloud; Artificial Intelligence | Rathna R (Vellore Institute of Technology, India); Aryan Purohit (Vellore Institute of Technology - Chennai Campus, India); Allen Stanley (Vellore Institute of Technology Chennai, India) | |
16 | 1571027526 | Advancing Text Classification: A Systematic Review of Few-Shot Learning Approaches | Few-shot learning, a specialized branch of machine learning, tackles the challenge of constructing accurate models with minimal labeled data. This is particularly pertinent in text classification, where annotated samples are often scarce, especially in niche domains or certain languages. Our survey offers an updated synthesis of the latest developments in few-shot learning for text classification, delving into core techniques such as metric-based, model-based, and optimization-based approaches, and their suitability for textual data. We pay special attention to transfer learning and pre-trained language models, which have demonstrated exceptional capabilities in comprehending and categorizing text with few examples. Additionally, our review extends to the exploration of few-shot learning in Arabic text classification, including both datasets and existing research efforts. We evaluated 32 studies that met our inclusion criteria, summarizing benchmarks and datasets, discussing few-shot learning's real-world impacts, and suggesting future research avenues. Our survey aims to provide a thorough groundwork for those at the nexus of few-shot learning and text classification, with an added focus on Arabic text, emphasizing the creation of versatile models that can effectively learn from limited data and sustain high performance, while also identifying key challenges in applying Few-Shot Learning (FSL), including data sparsity, domain specificity, and language constraints, necessitating innovative solutions for robust model adaptation and generalization across diverse textual domains. | Few-shot learning; Text classification; Transfer learning; Machine Learning; Pre-trained Language Models | Amani Aljehani and Syed Hamid Hasan (King Abdulaziz University, Saudi Arabia); Usman Ali Khan (King Abdulaziz University, Jeddah, Saudi Arabia) | |
17 | 1571028796 | Security of SDDN based on Adaptive Clustering Using Representatives (CURE) Algorithm | In the current landscape of data center networking, a software-defined data center network (SDDN) has emerged as a transformational solution to address the inherent complexities in network control. Nonetheless, even with so many advantages, there are critically important issues affecting its implementation, among which security, performance, reliability, and fault tolerance stand out. Security in particular is a vital concern, since SDDNs are exposed to many Distributed Denial of Service (DDoS) attacks. In this regard, a new machine-learning-based CURE algorithm framework has been proposed in this paper to address the security challenges. It uses an Adaptive CURE algorithm to minimize the effect of DDoS. The algorithm is designed with adaptive input, depending on the processing resources. The controller, acting as a central coordinator, captures suspicious traffic and, if a traffic anomaly is detected, forwards a copy of the suspicious traffic to the processing and analyzing unit. The adopted approach applies the Adaptive CURE algorithm to process the anomalous traffic through a comprehensive study of traffic patterns, distinguishing potential DDoS attacks with great accuracy. The algorithm's intelligence facilitates the identification of DDoS attacks. This allows the controller to update switches with suitable flow entries. Such response mechanisms further improve the security posture of SDDN networks, specifically providing a strong defense against DDoS attacks. The experiment results show that the proposed framework achieves an accuracy of up to 96.2% with various DDoS attacks. | Software-Defined Data Center Network; DDoS Attack; CURE algorithm; Datacenter | Mohammed Swefee and Alharith A. Abdullah (University of Babylon, Iraq) | |
18 | 1571029493 | Deep Learning in Plant Stress Phenomics Studies - A Review | Efficient crop management and treatment rely on early detection of plant stress. Imaging sensors provide a non-destructive and commonly used method for detecting stress in large farm fields. With machine learning and image processing, several automated plant stress detection methods have been developed. This technology can analyze large sets of plant images, identifying even the most subtle spectral and morphological characteristics that indicate stress. This can help categorize plants as either stressed or not, with significant implications for farmers and agriculture managers. Deep learning has shown great potential in vision tasks, making it an ideal candidate for plant stress detection. This comprehensive review paper explores the use of deep learning for detecting biotic and abiotic plant stress using various imaging techniques. A systematic bibliometric review of the Scopus database was conducted, using keywords to shortlist and identify significant contributions in the literature. The review also presents details of public and private datasets used in plant stress detection studies. The insights gained from this study will significantly contribute to developing more profound deep-learning applications in plant stress research, leading to more sustainable crop production systems. Additionally, this study will assist researchers and botanists in developing plant types resilient to various stresses. | Deep learning; Imaging techniques; Machine vision; Machine learning; Plant stress | Sanjyot Patil (Symbiosis Institute of Technology, India); Shrikrishna Kolhar (Symbiosis Institute of Technology, Symbiosis International (Deemed) University (SIU), Pune, India); Jayant Jagtap (NIMS Institute of Computing, Artificial Intelligence and Machine Learning, NIMS University Rajasthan, Jaipur, India) | |
19 | 1571031682 | A machine learning-based optimization algorithm for wearable wireless sensor networks | In an Internet of Things (IoT) setting, a Wireless Sensor Network (WSN) effectively collects and transmits data. Using the distributed characteristics of the network, machine learning techniques may reduce data transmission speeds. This paper offers a unique cluster-based data-gathering approach using the Machine Learning-based Optimization Algorithm for WSN (MLOA-WSN), designed in this article for assessing networks based on power, latency, height, and length. Using the cluster head, the data-gathering technique is put into action, with the data collected from comparable groups transmitted to the mobile sink, where machine learning methods are then applied for routing and data optimization. As a result of the time-distributed transmission period, each node across the cluster can begin sensing and sending data again to the cluster head. The cluster-head node performs data fusion, aggregation, and compression, and sends the generated statistics to the base station. Consequently, the suggested strategy yields promising outcomes as it considerably improves network performance and minimizes packet loss due to a reduced number of aggregating procedures. The existing method's findings for the MLOA-WSN system show a value of 2.43, a packet loss rate analysis of 7.6, and an average delay analysis of 224 for the optimizers. The method was evaluated under various settings, and the outcomes indicated that the suggested algorithm outperformed previous techniques in terms of decreased delay and solution precision. | Wireless Sensor Network; Data Transmission; Machine Learning; Internet of Things; Optimisation Algorithm; Cluster | Sudhakar Yadav N and Uma Maheswari V (Chaitanya Bharathi Institute of Technology, India); Rajanikanth Aluvalu (Symbiosis International University, India); Sai Prashanth Mallellu (Vardhaman College of Engineering, India); Vaibhav Saini (Verolt Technologies Pvt Ltd, India); MVV Prasad Kantipudi (Symbiosis International Deemed University, India) | |
20 | 1571032062 | Automatic Detection of Sewage Defects, Traffic Lights Malfunctioning, and Deformed Traffic Signs Using Deep Learning | Effective urban management relies on timely detection and resolution of infrastructural anomalies such as sewage defects, malfunctioning traffic lights, and deformed traffic signs. Traditional methods of inspection often prove inefficient and time-consuming. For automatic detection of these urban infrastructural issues, this paper presents a multi-task convolutional neural network architecture capable of simultaneously identifying sewage defects, malfunctioning traffic lights, and deformed traffic signs from street-level imagery. The model is trained on a diverse dataset comprising annotated images of urban scenes captured under various environmental conditions. We demonstrate the effectiveness of our approach through extensive experimentation and evaluation on real-world datasets. Results indicate that the model achieves high accuracy and robustness in detecting the specified anomalies, outperforming existing methods. Furthermore, we discuss the potential implications of our research for urban management, including improved efficiency in maintenance operations, enhanced safety for commuters, and cost savings for municipal authorities. About 2,438 images across 6 categories were collected and augmented twice. The first augmentation increased the data ninefold (X9) by generating data with Keras. The second augmentation was carried out on the training data only, threefold (X3), using the Roboflow tool, where we defined the angles of the shape and gave it a class name. An overall accuracy of 86% was achieved based on the F1-measure for all classes, while individual classes show different F1 values depending on the available training samples. Overall, this research contributes to the advancement of automated infrastructure inspection systems, facilitating smarter and more sustainable urban environments. | Convolutional Neural Network (CNN); Deep Learning; Sewage Defects; Traffic Lights Malfunctioning; Manhole Damage; YOLO v5 | Khalid Mohamed Nahar (Yarmouk University, Jordan); Firas Ibrahim Alzobi (The World Islamic Sciences & Education University, Jordan) | |
21 | 1571037323 | Enhancing Smartphone Motion Sensing with Embedded Deep Learning | Embedded systems and smartphones are vital in real-time applications, shaping our interaction with technology. Smartphones possess various sensors like accelerometers, gyroscopes, and magnetometers. Deep Learning (DL) models enhance the capabilities of these sensors, enabling them to perform real-time analysis and decision-making with accuracy and speed. This study demonstrates an intelligent system that detects smartphone movements using DL techniques such as convolutional neural networks (CNN) and stacked autoencoders (SAEs). The dataset has six smartphone movements, with 921 samples split into 695 for training and 226 for testing. The best training performance was achieved by Auto-Encoder 1 and Auto-Encoder 2. The SAEs had high classification accuracy (CA) and AUC values of 0.996 and 1.0, respectively. Similarly, CNN performed well with CA and AUC values of 0.991 and 0.998. These results show that CNN and SAEs are effective in identifying smartphone movements. The findings help improve smartphone apps and understand how well they can identify movement. The study indicates that CNN and SAE are effective in accurately identifying smartphone movements. Future research can improve motion detection by integrating more sensor data and advanced models. Using advanced deep learning architectures like RNNs or transformers can enhance the understanding and accuracy of predicting smartphone movements. | Auto-Encoder; CNN; Embedded Systems; Machine Learning; Smart Phone; Sensor Data | Terlapu Panduranga Vital (jntuK, India & Aditya Institute of Engineering Technology and Management, India); Jayaram D (Osmania University, India); Vijaya Bendalam (Jntugv, India & Aditya Institute Of Technology And Management, India); Ramesh Bandaru and Ramesh Yegireddi (Aditya Institute of Technology and Management, India); G Stalin Babu (GMR Institute of Technology, India); Vishnu Murthy Sivala and Ravikumar T (Aditya Institute of Technology and Management, India) | |
22 | 1571038958 | Shortest Path Optimization for Determining Nearest Full Node from a Light Node in Blockchain IoT Networks | In a blockchain IoT network, there exists a diversity of devices, including full nodes and light nodes, each with varying capacities and roles. Full nodes have the capability to store the entire ledger, whereas light nodes, constrained by limited memory capacity, cannot store it. However, light nodes can efficiently retrieve data from full nodes and actively participate in network transaction approvals, especially in critical applications such as the military and healthcare sectors. To enable light nodes to approve transactions by verifying blockchain ledgers, determining the nearest full node from a light node is imperative. While several algorithms exist for this purpose, the Routing Protocol for Low-Power and Lossy Networks (RPL) emerges as the optimal choice. In comparison to other algorithms like Dijkstra's Algorithm, the Floyd-Warshall Algorithm, Genetic Algorithms (GA), and Ant Colony Optimization (ACO), RPL stands out with distinct advantages. While Dijkstra's Algorithm and the Floyd-Warshall Algorithm excel in finding shortest paths, they may not be optimized for the unique constraints and dynamics of IoT networks. Genetic Algorithms (GA) offer heuristic solutions but may lack adaptability to real-time changes in network topology, while Ant Colony Optimization (ACO) may face scalability and resource constraints in IoT environments. Conversely, RPL is meticulously tailored for the low-power and lossy networks inherent to IoT settings. Its capability to form Directed Acyclic Graphs (DAGs) and dynamically adjust routes based on metrics like hop count and energy efficiency positions it as an ideal choice for determining the nearest distance between light nodes and full nodes in a blockchain IoT network. By capitalizing on its adaptability and efficiency, RPL surpasses other algorithms in enabling efficient data retrieval and facilitating network transaction approvals, thereby ensuring the seamless operation of blockchain IoT systems. | IoT networks; Blockchain IoT; Destination Advertisement Object (DAO) Messages; Directed Acyclic Graph (DAG) topology; DODAG Information Object (DIO) Messages | Vivek Anand M (Galgotias University, India & Kumaraguru College of Technology, India); Srinivasan S (Galgotias University, India) | |
23 | 1571087751 | A Multi-Radio Channel Hopping Rendezvous Scheme in Cognitive Radio Networks for Internet of Things | With the rapid expansion of the Internet of Things (IoT), the demand for wireless spectrum is increasing exponentially, both in licensed and unlicensed bands. The existing fixed spectrum assignment policy creates a bottleneck as the spectrum that is not in use remains unutilized or underutilized. To overcome this issue, cognitive radio technology has emerged as a promising solution to spectrum assignment issues. In a Cognitive Radio Network (CRN), unlicensed users or secondary users (SUs) must meet on an available channel to establish a communication link for necessary information exchange. This process is known as rendezvous. However, SUs are unaware of each other if no centralized controller is involved. Channel Hopping (CH) is a rendezvous technique without the involvement of any centralized controller. Most of the existing CH algorithms are based on single-radio SUs. As the cost of wireless transceivers is declining, multiple radios can be employed for rendezvous performance improvement. This paper proposes a multi-radio matrix-based CH algorithm that involves employing two radios with each SU instead of one. Compared with existing single-radio algorithms, the proposed CH algorithm performs better by lowering the upper bounds on time to rendezvous. Our paper presents a comprehensive analysis of the benefits of incorporating an additional radio, demonstrating how this innovation leads to more efficient and timely rendezvous, thereby enhancing the overall communication capabilities within CRNs. | Cognitive Radio; Rendezvous; Common Control Channel; Channel Hopping | Mohd Asifuddola (Aligarh Muslim University, India & Aligarh Muslim University, Aligarh, India); Mumtaz Ahmed (Jamia Millia Islamia, India); Liyaqat Nazir (National Institute of Technology, Srinagar, India); Mohd Ahsan Siddiqui (NITTTR, Chandigarh, India); Shakeel Ahmad Malik (Islamic University of Science and Technology Awantipora, Kashmir, India); Mohammad Ahsan Chishti (National Institute of Technology Srinagar, India) | |
23 papers.
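
For paper 2 (seq 1570999111), a minimal, generic sketch of the RGB-to-grayscale conversion step the abstract describes. The ITU-R BT.601 luma weights and the example image are assumptions for illustration; the paper does not state which weighting or data it uses.

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Collapse an (H, W, 3) RGB image into one grayscale value per pixel."""
    weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights (assumption)
    return rgb[..., :3] @ weights

# Hypothetical test-tube image; the mean grayscale value stands in for the "single value" reading.
image = np.random.randint(0, 256, size=(480, 640, 3)).astype(float)
print(rgb_to_gray(image).mean())
```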
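
For paper 6 (seq 1571016767), a classical 2x2 Hill-cipher round trip, shown only to illustrate the first phase named in that abstract; the paper's full scheme additionally applies XOR with a 32-bit key and modified DNA encoding, none of which is reproduced here. Key, plaintext, and padding character are illustrative choices.

```python
import numpy as np

KEY = np.array([[3, 3], [2, 5]])         # invertible mod 26 (det = 9, gcd(9, 26) = 1)
KEY_INV = np.array([[15, 17], [20, 9]])  # modular inverse: KEY @ KEY_INV = I (mod 26)

def hill(text: str, key: np.ndarray) -> str:
    """Apply a Hill-cipher transform (encrypt or decrypt) blockwise over A-Z."""
    nums = [ord(c) - ord('A') for c in text.upper() if c.isalpha()]
    if len(nums) % 2:
        nums.append(ord('X') - ord('A'))  # pad to an even block length
    blocks = np.array(nums).reshape(-1, 2)
    out = (blocks @ key) % 26
    return ''.join(chr(n + ord('A')) for n in out.flatten())

cipher = hill("HELLO", KEY)
print(cipher, hill(cipher, KEY_INV))     # round-trips back to the padded plaintext HELLOX
```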
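
For paper 7 (seq 1571019981), a minimal sketch of the 8x8 block DCT plus quantization stage that the proposed DCT/Quantization-and-Quadtree pipeline builds on. The quadtree stage is not shown, and the quantization table below is the standard JPEG luminance table, an assumption rather than the authors' parameters.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (quality 50).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=float)

def dct2(block):
    """2-D type-II DCT applied row-wise then column-wise."""
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def idct2(block):
    """2-D inverse DCT."""
    return idct(idct(block.T, norm='ortho').T, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level-shifted 8x8 pixel block
coeffs = np.round(dct2(block) / Q50)                           # quantization zeroes many coefficients
recon = np.clip(idct2(coeffs * Q50) + 128, 0, 255)             # lossy reconstruction of the block
print(int(np.count_nonzero(coeffs)), 'nonzero coefficients retained')
```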
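
For paper 13 (seq 1571024871), a generic RFM-scoring plus K-Means plus silhouette sketch with pandas/scikit-learn. Column names ('CustomerID', 'InvoiceNo', 'InvoiceDate', 'Amount') and the toy transactions are hypothetical, and the paper's RFM-T extension (the added Time variable) is not modeled here.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def rfm_segments(tx: pd.DataFrame, n_clusters: int = 3) -> pd.DataFrame:
    """Score Recency/Frequency/Monetary per customer and cluster with K-Means."""
    snapshot = tx['InvoiceDate'].max() + pd.Timedelta(days=1)
    rfm = tx.groupby('CustomerID').agg(
        Recency=('InvoiceDate', lambda d: (snapshot - d.max()).days),
        Frequency=('InvoiceNo', 'nunique'),
        Monetary=('Amount', 'sum'),
    )
    X = StandardScaler().fit_transform(rfm)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    print('silhouette:', silhouette_score(X, labels))  # cohesion/separation check per the abstract
    return rfm.assign(Cluster=labels)

# Hypothetical transaction log with assumed column names.
tx = pd.DataFrame({
    'CustomerID': [1, 1, 2, 2, 3, 4, 4, 5, 6, 7],
    'InvoiceNo': list('abcdefghij'),
    'InvoiceDate': pd.to_datetime([
        '2024-01-03', '2024-02-20', '2024-01-15', '2024-03-01', '2024-02-10',
        '2024-01-20', '2024-03-05', '2024-02-28', '2024-01-08', '2024-03-10']),
    'Amount': [120.0, 80.0, 40.0, 60.0, 300.0, 25.0, 35.0, 90.0, 15.0, 210.0],
})
print(rfm_segments(tx, n_clusters=3))
```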
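
For paper 22 (seq 1571038958), a plain Dijkstra baseline for the "nearest full node from a light node" question on a toy graph (networkx, hypothetical node names and link weights). The paper argues that RPL's DAG-based, metric-aware routing is the better fit for constrained IoT networks; none of RPL's DIO/DAO mechanics are modeled in this sketch.

```python
import networkx as nx

# Hypothetical mesh of a light node, relays, and full nodes with link-cost weights.
G = nx.Graph()
G.add_weighted_edges_from([
    ('light1', 'relay1', 2), ('relay1', 'full1', 3),
    ('light1', 'relay2', 1), ('relay2', 'full2', 5),
    ('relay1', 'relay2', 1),
])
full_nodes = ['full1', 'full2']

# Shortest-path cost from the light node to every reachable node, then pick the cheapest full node.
dist = nx.single_source_dijkstra_path_length(G, 'light1')
nearest = min(full_nodes, key=dist.get)
print(nearest, dist[nearest])
```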