Abstract: In this paper, a multi-objective hybrid firefly and particle swarm optimization (MOHFPSO) algorithm was proposed for different multi-objective optimal power flow (MOOPF) problems. Optimal power flow (OPF) was formulated as a non-linear problem with various objectives and constraints. The Pareto optimal front was obtained using non-dominated sorting and crowding distance methods. Finally, an optimal compromise solution was selected from the Pareto optimal set by applying an ideal distance minimization method. The efficiency of the proposed MOHFPSO technique was tested on the standard IEEE 30-bus and IEEE 57-bus test systems with various conflicting objectives. Simulation results were also compared with non-dominated sorting based multi-objective particle swarm optimization (MOPSO) and with other optimization algorithms reported in the current literature. The achieved results revealed the potential of the proposed algorithm for MOOPF problems.

Keywords: optimal power flow; multi-objective optimization; non-dominated sorting; ideal distance minimization; total fuel cost minimization; voltage profile enhancement; real power loss minimization; hybrid firefly and particle swarm optimization
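The final selection step described in the abstract can be illustrated with a minimal sketch of ideal distance minimization: each (minimized) objective is normalized to [0, 1], and the Pareto point closest to the ideal point (the origin after normalization) is chosen as the compromise solution. This is a generic illustration under that assumption, not the authors' exact formulation; the function name `compromise_solution` and the sample objective values are hypothetical.

```python
import numpy as np

def compromise_solution(pareto_front):
    """Pick a compromise solution from a Pareto set by ideal distance
    minimization: min-max normalize each objective, then return the
    index of the point with the smallest Euclidean distance to the
    ideal point (all-zeros after normalization)."""
    f = np.asarray(pareto_front, dtype=float)   # shape (n_points, n_objectives)
    f_min = f.min(axis=0)                       # ideal value per objective
    f_max = f.max(axis=0)                       # nadir value per objective
    span = np.where(f_max > f_min, f_max - f_min, 1.0)  # avoid divide-by-zero
    norm = (f - f_min) / span
    dist = np.linalg.norm(norm, axis=1)         # distance to the ideal point
    return int(np.argmin(dist))

# Hypothetical Pareto set of (fuel cost, real power loss) pairs:
front = [[800.0, 5.2], [830.0, 4.1], [900.0, 3.0]]
best = compromise_solution(front)               # index of the compromise point
```

Here the middle point is selected: the two extreme points each sit at normalized distance 1 from the ideal point, while the intermediate trade-off lies closer.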
DL models have intensive memory and computational requirements due to their huge complexity and large numbers of parameters, and these requirements must be met before a well-trained model can be employed productively [193, 194]. Healthcare and environmental science are characterized as data-intensive fields, and these demands hinder the deployment of DL on machines with limited computational power, particularly in healthcare. The numerous methods of assessing human health, together with the heterogeneity of the data, have become far more complicated and vastly larger in size [195]; thus, the issue requires additional computation [196]. Hardware-based parallel processing solutions such as FPGAs and GPUs [197,198,199] have been developed to address the computational issues associated with DL. More recently, numerous techniques for compressing DL models, designed to reduce their computational cost from the starting point, have also been introduced. These techniques can be classified into four classes. In the first class, called parameter pruning, redundant parameters that have no significant impact on model performance are removed; the well-known deep compression method belongs to this class [200]. In the second class, called knowledge distillation, the distilled knowledge of a larger model is used to train a more compact model [201, 202]. In the third class, compact convolution filters are used to reduce the number of parameters [203]. In the final class, low-rank factorization is used to estimate which informative parameters should be preserved [204]. These four classes cover the most representative model compression techniques; a more comprehensive discussion of the topic is provided in [193].
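The first class above, parameter pruning, can be sketched in a few lines: weights with the smallest magnitudes are assumed redundant and zeroed out, which is the core step of the magnitude-based pruning used in the deep compression pipeline. This is a minimal NumPy illustration, not the deep compression implementation itself; the function name `magnitude_prune` and the sample weights are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of entries with the smallest
    absolute value (ties at the threshold may prune slightly more)."""
    w = np.asarray(weights, dtype=float)
    k = int(sparsity * w.size)          # number of weights to remove
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

layer = np.array([[0.8, -0.05, 0.3],
                  [-0.01, 1.2, -0.4]])
sparse_layer = magnitude_prune(layer, sparsity=0.5)  # half the entries zeroed
```

In practice the surviving non-zero weights are then fine-tuned, and the sparse matrix is stored in a compressed format, which is where the memory savings come from.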