Generally speaking, optimization is a field of applied mathematics that deals with methods for finding the extreme points, minima or maxima, of a function. Optimization in the field of numerical simulation integrates or adapts these methods in order to improve a technical design with respect to one or more target values as far as possible. In addition, proprietary algorithms not rooted in classical mathematics have been established over the years; these are tailored specifically to the finite element or finite volume method.
The description of an optimization task includes the definition of the permitted design or search space, that is, the space that contains the set of possible solutions. It is also necessary to define an objective function. In the case of multi-objective optimization, in other words when several target values are present, the individual target values are usually weighted against one another.
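As a minimal sketch, such a task description can be captured in a few lines: a bounded design space and a weighted sum of several target values. The two objectives below (mass and deflection of a rectangular cross-section) and all numbers are invented stand-ins for illustration, not taken from a specific application:

```python
# Hypothetical two-objective task: minimize mass and deflection of a beam
# cross-section, combined into one scalar objective via weighting.
def mass(x):
    # x = (width, height); cross-section area serves as a mass proxy
    return x[0] * x[1]

def deflection(x):
    # stiffer (larger) sections deflect less; simplified proxy formula
    return 1.0 / (x[0] * x[1] ** 3)

def weighted_objective(x, w=(0.5, 0.5)):
    # weighted sum turns the multi-objective task into a scalar one
    return w[0] * mass(x) + w[1] * deflection(x)

# Permitted design space: simple box bounds on width and height (in m)
bounds = [(0.01, 0.10), (0.01, 0.20)]
```

The choice of weights encodes the trade-off between the competing target values; changing them shifts which compromise the optimizer favors.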
From an economic point of view, and in contrast to pure mathematics, it is quite sufficient in applied technical optimization to come close to the optimum. Much more important are a sufficient improvement over the initial state and the likelihood that the virtually found solution can later be produced reliably.
Most algorithms start from a random initial situation. Depending on whether the procedure defines the following steps randomly or according to a deterministic scheme, a distinction is made between probabilistic and exact algorithms. If the search for a better solution takes place in the vicinity of the one already known, one speaks of a local optimization procedure; if it covers the entire design space, it is classified as a global procedure. Both strategies can also be combined. The optimization methods of numerical simulation additionally distinguish between parameter-free and parameterized algorithms, and differentiate whether the parameters refer to the geometry or to the mesh.
Of the classical methods, the gradient method and evolutionary algorithms are used most frequently. With the gradient method, the direction of the steepest gradient is determined at the current position; once known, the next design variant is selected along this direction. As the name suggests, evolutionary algorithms are inspired by biological evolution. They do not start from a single point but with a randomly selected initial population. For each individual in this population, fitness is determined in terms of the objective function. The best individuals are then selected, and a new generation is created by recombination and mutation. This replaces the old generation, and the cycle starts again. The selection procedure and the mutation introduce a random factor into the solution-finding process, which increases the probability of finding a global extreme value instead of a local one.
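The evolutionary cycle described above (evaluate fitness, select the best, recombine, mutate) can be sketched in a few lines. The sphere function, the population size, and all other parameter values below are illustrative assumptions, not a specific production algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # stand-in objective: sphere function, minimum at the origin
    return float(np.sum(x ** 2))

def evolve(pop_size=20, n_dim=2, n_gen=50, mut_sigma=0.1):
    # random initial population inside the design space [-5, 5]^n
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, n_dim))
    for _ in range(n_gen):
        # evaluate the fitness of every individual
        scores = np.array([fitness(x) for x in pop])
        # selection: keep the better half as parents
        parents = pop[np.argsort(scores)[: pop_size // 2]]
        # recombination: average randomly chosen parent pairs
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2.0
        # mutation: random perturbation supplies the stochastic element
        pop = children + rng.normal(0.0, mut_sigma, size=children.shape)
    # return the best individual of the final generation
    return pop[np.argmin([fitness(x) for x in pop])]

best = evolve()
```

The mutation step is what keeps the search from collapsing prematurely onto a local extreme value, at the price of a small residual scatter around the optimum.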
RSM: Response Surface Method
Depending on the number of parameters and the complexity of the underlying simulation, the individual solver runs can be very time-consuming. To mitigate this, the Response Surface Method, abbreviated RSM, is often used. With this method, the optimization is no longer performed directly on the actual system but with the help of an approximation function, which is derived from the corresponding system responses at a sufficient number of sampling points. As soon as the number of sampling points required to determine the response surface is smaller than the number of iterations needed to find the optimum, this procedure has a clear advantage in terms of computing time. However, the quality of the optimization depends strongly on the quality of the approximation, so the Response Surface Method can only be used with success if the optimization problem does not exhibit any major nonlinearities.
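As a one-dimensional sketch of the idea, an expensive simulation can be sampled at a few points, approximated by a second-order response surface, and then optimized on that cheap surrogate. The stand-in "simulation" and the sampling points below are invented for illustration:

```python
import numpy as np

def expensive_simulation(x):
    # stand-in for a costly solver run; true minimum lies at x = 1.3
    return (x - 1.3) ** 2 + 0.5

# sampling points spanning the permitted design space
x_samples = np.linspace(-2.0, 4.0, 7)
y_samples = expensive_simulation(x_samples)

# fit a second-order polynomial as the response surface
a, b, c = np.polyfit(x_samples, y_samples, deg=2)

# optimize on the surrogate: the vertex of the fitted parabola,
# found analytically instead of by repeated solver calls
x_opt = -b / (2.0 * a)
```

Because the stand-in function is itself quadratic, the surrogate here reproduces the optimum exactly; for a strongly nonlinear response, a low-order surface of this kind would miss it, which is precisely the limitation noted above.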