A brief note on optimization
1. INTRODUCTION

Optimization, in simple terms, means minimizing the costs incurred and maximizing profits, such as resource utilization. Evolutionary algorithms (EAs) are population-based metaheuristic optimization algorithms (i.e., they optimize a problem by iteratively trying to improve candidate solutions with respect to a given quality measure) that often work well for approximating solutions to a wide range of problems, because they make no assumptions about the underlying fitness function. Many EAs are available, viz. the Genetic Algorithm (GA) [1], Artificial Immune Algorithm (AIA) [2], Ant Colony Optimization (ACO) [3], Particle Swarm Optimization (PSO) [4], Differential Evolution (DE) [5, 6], Harmony Search (HS) [7], Bacterial Foraging Optimization (BFO) [8], Shuffled Frog Leaping (SFL) [9], Artificial Bee Colony (ABC) [10, 11], Biogeography-Based Optimization (BBO) [12], the Gravitational Search Algorithm (GSA) [13], the Grenade Explosion Method (GEM) [14], etc.

To use any EA, a model of the decision problem must be constructed that specifies: 1) the decisions to be made, called decision variables; 2) the measure to be optimized, called the objective; and 3) any logical restrictions on potential solutions, called constraints. These three components are necessary for building any optimization model; the solver then finds values for the decision variables that satisfy the constraints while optimizing (maximizing or minimizing) the objective (a small illustrative sketch of such a model appears at the end of this section). The problem with all the above EAs, however, is that to obtain an optimal solution, besides the necessary components explained above, many algorithm-specific parameters must be handled appropriately. For example, in the case of GA, algorithm-specific parameters such as the crossover rate (or probability, Pc), mutation ... [middle of paper] ... the algorithm is identified and modified accordingly; using OpenMP, one can easily exploit the features of a multi-core processor and maximize the utilization of all cores in the system, which is desirable from an optimization point of view (i.e., maximizing the use of resources; a minimal sketch of such a parallel loop also appears at the end of this section). This paper contributes in this direction and undertakes a detailed study by investigating the effect of the number of cores, dimension size, population size, and problem complexity on the speedup of the TLBO (Teaching-Learning-Based Optimization) algorithm. In the remainder of this article, we give a brief review of the literature on TLBO and its applications. Subsequently, we discuss the possibilities of fine-tuning TLBO to make it suitable for a parallel implementation on a multi-core system. Next, we present results on a few test problems of different complexities and show appreciable speedups using the proposed algorithm...
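
To make the three building blocks of an optimization model concrete, the following small C++ sketch is a hypothetical example (not taken from the paper): it encodes decision variables, an objective, and a constraint, and lets a naive boundary search stand in for the solver.

// Hypothetical illustration of the three parts of an optimization model:
// decision variables x, an objective f(x), and a constraint on solutions.
// The problem (minimize x0^2 + x1^2 subject to x0 + x1 >= 1) is chosen only
// to make the structure concrete.
#include <cstdio>

struct Candidate { double x0, x1; };                 // decision variables

double objective(const Candidate& c) {               // measure to minimize
    return c.x0 * c.x0 + c.x1 * c.x1;
}

bool feasible(const Candidate& c) {                  // constraint on solutions
    return c.x0 + c.x1 >= 1.0;
}

int main() {
    Candidate best{1.0, 0.0};
    // A solver searches for decision-variable values that satisfy the
    // constraints while optimizing the objective; here a naive scan of
    // points on the constraint boundary plays that role.
    for (int i = 0; i <= 100; ++i) {
        double a = i * 0.01;
        Candidate c{a, 1.0 - a};
        if (feasible(c) && objective(c) < objective(best)) best = c;
    }
    std::printf("best: x0=%.2f x1=%.2f f=%.3f\n", best.x0, best.x1, objective(best));
    return 0;
}

A real solver replaces the naive scan with an EA or a mathematical programming method, but the three components of the model stay the same.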
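
As a rough illustration of the kind of OpenMP parallelization discussed above, the sketch below distributes the independent fitness evaluations of a learner population across the available cores and times the loop with omp_get_wtime. The Sphere objective, population size, and dimension are assumptions made only for illustration; this is not the paper's proposed parallel TLBO implementation.

// Minimal sketch: parallelizing the fitness evaluations of a TLBO-style
// population with OpenMP (compile with, e.g., g++ -fopenmp).
#include <cstdio>
#include <vector>
#include <omp.h>

// Example objective (Sphere function): f(x) = sum of x_i^2, to be minimized.
double sphere(const std::vector<double>& x) {
    double s = 0.0;
    for (double xi : x) s += xi * xi;
    return s;
}

int main() {
    const int population = 256;   // number of learners (assumed value)
    const int dimension  = 1000;  // decision variables per learner (assumed)
    std::vector<std::vector<double>> learners(
        population, std::vector<double>(dimension, 0.5));
    std::vector<double> fitness(population);

    double t0 = omp_get_wtime();
    // Each learner's fitness is independent of the others, so the loop
    // iterations can be distributed across all available cores.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < population; ++i) {
        fitness[i] = sphere(learners[i]);
    }
    double t1 = omp_get_wtime();

    std::printf("evaluated %d learners in %f s on up to %d threads\n",
                population, t1 - t0, omp_get_max_threads());
    return 0;
}

Because each learner's evaluation touches only its own data, the loop has no cross-iteration dependence, which is precisely the property that makes population-based algorithms such as TLBO amenable to multi-core speedup once the sequential portions of the algorithm are identified and restructured.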