E-book, English, 285 pages
Anuradha / Gigras / Khanna, Data Analytics for IoT Applications
1st edition, 2025
ISBN: 979-8-89881-270-6
Publisher: De Gruyter
Format: EPUB
Copy protection: 0 - No protection
Series: Digitization and Automation for the 21st Century
Data Analytics for IoT Applications explains how advanced computational methods support real-world IoT solutions. The book offers a clear and practical overview of how AI, machine learning, and big data analytics strengthen IoT systems across healthcare, agriculture, security, smart environments, and industrial safety, ranging from intrusion detection and neural network optimization to smart sensing, energy-efficient communication, and early warning systems. It also showcases applied use cases such as disease detection, facial recognition, livestock monitoring, robotics in forensics, fraud detection, and smart water management, blending theory with hands-on applications.

Key Features:
· Integrates AI, machine learning, and big data with IoT systems.
· Demonstrates practical applications through real-world case studies.
· Examines privacy, security, and ethical considerations in IoT networks.
· Explores emerging trends including nature-inspired algorithms, edge computing, and robotics.
Further Information & Material
Optimizing Feed-Forward Neural Networks with Hybrid Particle Swarm and Salp Swarm Algorithm
Tejna Khosla1,* and Om Prakash Verma2
Abstract
The extraction of optimal parameters for Artificial Neural Networks (ANNs) is a tedious task that requires continuous effort and often leads to high cost and low efficiency. To overcome these problems, we propose a hybrid of the Particle Swarm Optimization (PSO) and Salp Swarm Algorithm (SSA), namely PSO-SSA, which combines three strategies: a Weighted Position Strategy, Adaptive Velocity, and a Locality Pruning Strategy. To study the effectiveness of the proposed model, it is applied to seven classification problems ranging from low- to high-dimensional: 3-digit XOR, Iris, Balloon, Breast Cancer, Heart, MicroMass, and Health News on Twitter. An extensive comparative study shows that our model outperforms other state-of-the-art techniques. The performance metrics used are convergence rate, classification accuracy, general performance, and Mean Square Error (MSE).
* Corresponding author Tejna Khosla: Computer Science Engineering Department, Delhi Technological University, Bawana, Delhi, India; E-mail: patoditejna@gmail.com
INTRODUCTION
Artificial Neural Networks (ANNs), bio-inspired by the neurons in the human brain, are conventional computational intelligence tools. They can perform massively parallel computations; their application domains therefore range from classification and clustering to function approximation [1]. The classification problem involves two phases, training and testing. During ANN training, the data sets are randomly divided into training and test sets [2-4]. The training sets build the classification model, which is then applied to the test sets to predict the class label [5]. Training an ANN is an optimization problem in which we find the weights and biases that minimize the Mean Square Error (MSE) [6]. Since the search space of an ANN is complex, multimodal, and high-dimensional, training it requires an effective optimization technique.
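Framing training as optimization means every candidate parameter vector is scored by the MSE it produces. A minimal sketch in Python/NumPy of such a fitness function for a one-hidden-layer feed-forward network (the network shape, activation functions, and toy parity data below are illustrative assumptions, not the chapter's experimental setup):

```python
import numpy as np

def unpack(theta, n_in, n_hidden, n_out):
    """Split a flat parameter vector into the weight matrices and bias
    vectors of a one-hidden-layer feed-forward network."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden];                                i += n_hidden
    W2 = theta[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def mse(theta, X, y, n_hidden):
    """Fitness function: forward pass followed by Mean Square Error."""
    n_in, n_out = X.shape[1], y.shape[1]
    W1, b1, W2, b2 = unpack(theta, n_in, n_hidden, n_out)
    h = np.tanh(X @ W1 + b1)                    # hidden-layer activations
    y_hat = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output layer
    return float(np.mean((y - y_hat) ** 2))

# Toy example: 3-input parity (XOR-like) data, 4 hidden units
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(8, 3)).astype(float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)          # parity target in {0, 1}
dim = 3 * 4 + 4 + 4 * 1 + 1                     # total number of parameters
theta = rng.normal(size=dim)                    # one random candidate
print(mse(theta, X, y, n_hidden=4))             # scalar error for that candidate
```

Any optimizer, deterministic or stochastic, can then search over `theta` to drive this scalar down.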
According to the literature, training of an ANN can be of two types: deterministic and stochastic. Deterministic training uses mathematical optimization techniques to train the ANN; given the same data and initial conditions, it consistently achieves the same classification accuracy, e.g., gradient-based methods [7] and backpropagation [8]. These trainers demonstrate effective convergence; however, if the initial solution is near a local optimum, they get stuck and solution quality suffers. Stochastic techniques, on the other hand, use metaheuristics to improve solution quality by avoiding the local-optima trap; they are also known as metaheuristic optimization algorithms. They are, nevertheless, slower than deterministic methods. These techniques sample multiple regions of the search space and have a high capability of achieving both exploration and exploitation, e.g., Particle Swarm Optimization (PSO) [9], Firefly Algorithm (FA) [10], Chameleon Swarm Algorithm (CSA) [11], Red Fox Algorithm (RFA) [12], Horse Optimization Algorithm [13], Bat Algorithm [14], Differential Evolution (DE) [15], Bacterial Foraging Algorithm [16], and many more.
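As a concrete illustration of the stochastic family, the canonical PSO update of [9] can be sketched as follows; each particle blends its own memory with the swarm's shared best. The coefficients `w`, `c1`, `c2` and the sphere test function are common textbook defaults, not values used in this chapter:

```python
import numpy as np

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: each particle tracks its personal best, and the swarm
    shares a global best that biases every velocity update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val              # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# Minimize the 5-dimensional sphere function as a toy objective
best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=5)
```

Replacing the sphere objective with the MSE of a network turns this loop into a stochastic ANN trainer.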
Metaheuristic algorithms can successfully train ANNs due to the following characteristics: 1) they are global optimizers; 2) they can balance exploration and exploitation; 3) they can solve nonlinear and multimodal problems due to their gradient-free behavior; and 4) they avoid local minima to a large extent due to their stochastic behavior [17, 18]. Each metaheuristic optimization algorithm has its advantages and disadvantages for training ANNs. FA helps optimize the weights of an ANN, though its convergence time is relatively high [19]. CSA has been used for feature selection, an important step preceding classification with an ANN [20]; it offers good convergence speed and accuracy and performs parallel search effectively, but it sometimes produces random solutions and is complicated. PSO is simple to implement and computationally cheap, but has poor exploration ability [21]. The Artificial Bee Colony (ABC) and Bacterial Foraging algorithms show good global convergence compared with other optimization algorithms, but have a limited search space for initial solutions [22]. RFA and SSA are used to solve high-dimensional problems, but have weak search ability. We can conclude that no single optimization technique can solve all real-world problems (the No Free Lunch theorem) [23]. Many global optimizers can optimize neural networks by finding the weights and biases, though convergence is not fast. Hence, there is a need to find the best optimal parameter values and ensure faster convergence in addition to an exploration-exploitation balance. Some optimization algorithms from the literature get stuck in the local-optima trap, leading to premature convergence and higher computational cost. Also, the overfitting problem, the phenomenon in which an algorithm gives the best performance on training data but generalizes poorly, is rarely addressed [24].
Overfitting is among the biggest challenges in ANN optimization and hampers the quality of solutions. Metaheuristic algorithms can improve the performance of ANNs by addressing the overfitting problem.
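One common guard against overfitting, which a metaheuristic trainer can incorporate, is early stopping against a held-out validation error: stop when validation error has not improved for several consecutive checks and roll back to the best point. A minimal sketch (the patience mechanism and the sample error sequence are illustrative assumptions, not the chapter's method):

```python
def early_stopping(val_errors, patience=3):
    """Scan a sequence of validation errors; once no improvement has been
    seen for `patience` consecutive checks, stop and report the iteration
    and value of the best validation error observed so far."""
    best, best_i, waited = float("inf"), 0, 0
    for i, err in enumerate(val_errors):
        if err < best:
            best, best_i, waited = err, i, 0   # new best: reset patience
        else:
            waited += 1
            if waited >= patience:             # patience exhausted: stop
                break
    return best_i, best

# Validation error improves, then degrades; training stops before the
# late fluke at 0.2 is ever reached.
stop_at, best_err = early_stopping([0.9, 0.5, 0.3, 0.31, 0.33, 0.4, 0.2])
```

In a swarm trainer, `val_errors` would be the validation MSE of the global best particle, checked once per iteration.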
The literature contains numerous studies that apply metaheuristic optimization algorithms to classification with neural networks. Various optimization algorithms have been applied and tested on feed-forward neural networks to prove their efficiency and reduce MSE [25]. The Arithmetic Optimization Algorithm has been used to optimize the weights of an ANN [26]; the authors simulated the proposed algorithm by replacing each generation with candidates showing improved indicators, highlighting the use of nature-inspired algorithms in improving the accuracy and performance of ANNs. The multi-objective Grey Wolf Optimizer was used to train a network by selecting optimal values as the weights and biases between layers [27]; that work demonstrated the integration of optimization algorithms and ANNs for complex systems, highlighted the importance of optimization algorithms for achieving correct predictions, and showed, through a thorough analysis of results, good performance over conventional algorithms. Biological neurons were imitated by training artificial neural networks using Cuckoo Search with Levy-flight behavior [28]; the algorithm demonstrated good convergence compared to other state-of-the-art algorithms and successfully classified the patterns. A neural network model was optimized using a genetic algorithm [29] and used to predict air temperature accurately; the authors determined the optimal duration and resolution of the weather variable using the proposed model. The results were good, though the parameter settings were fixed and could not be changed.
In the same context, many variants of PSO have emerged over the years to optimize artificial neural networks, since PSO works with the current best and the global best solution, which yields quality solutions; it is relatively simple and efficiently solves real-world problems [30]. A hybrid of PSO and the Gravitational Search Algorithm (GSA) was proposed by Mirjalili et al. [31]: PSO compensated for the slow search speed of GSA, and the resulting hybrid trained the ANN model. The analysis showed that the PSO-GSA hybrid could train neural networks without getting trapped in local optima and with good convergence speed, again demonstrating the ability of nature-inspired algorithms to improve the efficiency of neural networks in terms of convergence and accuracy. Jianbo Yu et al. [32] proposed an improvement to PSO in which it was used to optimize both the weights of neural networks and their structure. However, PSO suffers mainly from a lack of momentum, which leads to stagnation of swarm movement in local-optima regions. The classification performance was improved in both structural and weight optimization using the modified PSO. However, this hinders its efficiency and leads...
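The Weighted Position, Adaptive Velocity, and Locality Pruning strategies of the proposed PSO-SSA are detailed later in the chapter. Purely as a hedged illustration of how a PSO/SSA hybrid can be structured (this population split and both update rules are assumptions for exposition, not the authors' method), one half of the population can follow PSO velocity updates while the other half forms a salp chain, each member averaging its position with its predecessor's:

```python
import numpy as np

def hybrid_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One iteration of an illustrative PSO/SSA hybrid: the first half of
    the population moves by PSO velocity updates toward personal and global
    bests; the second half applies the SSA follower rule, averaging each
    position with its predecessor's (the chain head follows the global best)."""
    rng = rng or np.random.default_rng()
    n, d = x.shape
    half = n // 2
    r1, r2 = rng.random((2, half, d))
    # PSO half: velocity pulled toward personal and global bests
    v[:half] = (w * v[:half]
                + c1 * r1 * (pbest[:half] - x[:half])
                + c2 * r2 * (gbest - x[:half]))
    x[:half] += v[:half]
    # SSA half: salp chain, head follows the global best
    x[half] = 0.5 * (x[half] + gbest)
    for i in range(half + 1, n):
        x[i] = 0.5 * (x[i] + x[i - 1])       # follower averages with predecessor
    return x, v
```

The PSO half preserves exploitation pressure toward known good regions, while the slower-moving salp chain retains diversity, one plausible way such hybrids seek an exploration-exploitation balance.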




