Multi-objective genetic optimizer
Most real-life design processes have a large number of variables, several constraints, and a few conflicting objectives. The 'old-style' approach is to cast everything into a single objective function by introducing weight factors for the different objectives and penalties for the constraints. This is problematic because the weight factors usually cannot be given a priori: often it is the tradeoff itself that one is interested in. Furthermore, such optimizations aim very narrowly for a single point in variable and objective space, whereas decision making requires the behavior of the optimized set of variables as a function of the different objectives. The new genetic multi-objective optimizer was developed to address these issues. The code works with a population of sample solutions that is ranked and genetically recombined to improve the population as a whole. During this process, valuable information is not lost but presented in a useful way: the output is the optimal tradeoff between all objectives, taking optional constraints into account, where each point along the tradeoff curve is fully optimized in all its variables.
Note: We use NSGA-II for ranking and Differential Evolution (DE) to create new candidate solutions. Load balancing is achieved by a flexible definition of a new 'generation', so that a few runs can take much longer than the typical time to create a new generation without significant loss of performance.
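The two building blocks mentioned above can be illustrated with a minimal sketch. GPT's actual implementation is not public; the functions below are hypothetical stand-ins showing the core ideas only: Pareto ranking by peeling off successive non-dominated fronts (the basis of NSGA-II), and the classic DE/rand/1 mutation used to propose new candidate solutions.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_rank(objs):
    """Assign each solution the index of its non-dominated front (0 = best)
    by repeatedly removing the current non-dominated set."""
    remaining = set(range(len(objs)))
    rank = [0] * len(objs)
    front = 0
    while remaining:
        current = {i for i in remaining
                   if not any(dominates(objs[j], objs[i])
                              for j in remaining if j != i)}
        for i in current:
            rank[i] = front
        remaining -= current
        front += 1
    return rank

def de_mutant(pop, i, F=0.8):
    """DE/rand/1 mutant vector: x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct random indices different from i."""
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [pop[r1][k] + F * (pop[r2][k] - pop[r3][k])
            for k in range(len(pop[i]))]
```

For example, with objectives `[(1, 5), (2, 2), (5, 1), (3, 3), (6, 6)]` the first three points form the optimal tradeoff front (rank 0), while `(3, 3)` and `(6, 6)` land on successively worse fronts.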
MPI Linux version
The stochastic nature of the genetic optimizer has significant advantages, such as very good global convergence properties and insensitivity to numerical noise. However, this comes at a price in terms of required CPU power. We created a load-balanced version of the multi-objective optimizer targeting MPI Linux clusters with 100 to 1000 cores. This is a perfect match for typical population sizes, and speedups in the same range can be achieved. In addition, the multi-dimensional parameter scanning utility (MR) has been rewritten for MPI clusters.
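The load-balancing idea, a flexible generation boundary so that one slow run never stalls all workers, can be sketched in a few lines. GPT's MPI implementation is not public; this hypothetical sketch uses Python threads in place of MPI ranks, with `evaluate` standing in for an expensive tracking run:

```python
import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(x):
    """Stand-in for an expensive tracking run (cost varies per candidate)."""
    return sum(v * v for v in x)

def async_optimize(pop, n_evals, workers=4):
    """Steady-state loop: the moment any evaluation finishes, that worker
    receives a new trial candidate.  The 'generation' boundary is flexible,
    so a few long runs do not idle the rest of the cluster."""
    best = None
    with ThreadPoolExecutor(max_workers=workers) as ex:
        # seed every worker with one candidate from the population
        pending = {ex.submit(evaluate, x): x for x in pop[:workers]}
        submitted = workers
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                x = pending.pop(fut)
                f = fut.result()
                if best is None or f < best[0]:
                    best = (f, x)
                if submitted < n_evals:
                    # hypothetical trial step: small random perturbation
                    trial = [v + random.uniform(-0.1, 0.1) for v in x]
                    pending[ex.submit(evaluate, trial)] = trial
                    submitted += 1
    return best
```

In a real MPI setting the same pattern applies, with the master rank handing out candidates and collecting results as they arrive rather than synchronizing all ranks at every generation.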
Update: The MPI Linux version can now also be used to track hundreds of millions of particles distributed over a large number of nodes.
Highlights of GPT version 3.1: The tracking engine and output generator of GPT version 3.1 are significantly improved compared to version 3.0:
Fast snapshots that produce time-domain output without slowing down the ODE solver
In previous versions, time-domain output had to be generated with the tout keyword. Internally, tout instructs the GPT ODE solver to decrease its stepsizes in such a way that the particle coordinates are calculated exactly at the specified times. The tout algorithm starts making small adjustments to the stepsizes ahead of the requested output time in order to land on it exactly. As a result, tout slows down the ODE solver. GPT version 3.1 has a new snapshot command that is completely decoupled from the stepping algorithm. The new snapshot keyword also writes the phase-space coordinates at the specified simulation times, but it does so using high-order interpolation instead of slowing down the ODE solver. For a series of time-domain outputs with dense spacing this results in a significant performance increase without any loss in accuracy.
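The interpolation idea is the standard dense-output technique for ODE solvers. GPT's exact interpolant is not documented here, so as an illustration the sketch below uses the cubic Hermite formula: given positions and velocities at both ends of one completed step, any intermediate time can be evaluated without shortening the step.

```python
def hermite(t, t0, x0, v0, t1, x1, v1):
    """Cubic Hermite interpolant through (t0, x0) and (t1, x1) with end
    slopes v0 and v1 -- the classic dense-output formula for one ODE step.
    Exact for any trajectory that is a cubic polynomial in time."""
    h = t1 - t0
    s = (t - t0) / h            # normalized position inside the step, 0..1
    h00 = (1 + 2 * s) * (1 - s) ** 2
    h10 = s * (1 - s) ** 2
    h01 = s ** 2 * (3 - 2 * s)
    h11 = s ** 2 * (s - 1)
    return h00 * x0 + h10 * h * v0 + h01 * x1 + h11 * h * v1
```

Because the interpolant is evaluated after the step is complete, a dense series of output times costs only a few arithmetic operations each, instead of forcing the solver to shorten its steps.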
The new snapshot command is particularly useful when the CPU time is dominated by the calculation of the electromagnetic fields rather than by the tracking itself, as is the case in a molecular dynamics simulation of a diffraction experiment.
Perfect screens without interpolation error
Using the same interpolation algorithm as described above, the screens in GPT version 3.1 are free of interpolation error: the interpolation error is always smaller than the tracking error. This allows the use of screen output in regions with very high field gradients. Without loss of accuracy it is now possible to position a large number of screens very densely together in a high-field region, even if particles pass several screens during one timestep.
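Screen output requires one extra step beyond interpolation: locating the moment within a solver step at which a particle crosses the screen plane. How GPT does this internally is not documented here; a simple, robust approach is bisection on the dense-output interpolant, sketched below with a hypothetical callable `z_of_t` representing the interpolated longitudinal position during one step.

```python
def crossing_time(z_of_t, t0, t1, z_screen, tol=1e-12):
    """Find the time inside one ODE step [t0, t1] at which the interpolated
    trajectory z(t) crosses the screen plane z = z_screen, by bisection.
    Assumes exactly one sign change of z(t) - z_screen within the step."""
    f0 = z_of_t(t0) - z_screen
    for _ in range(200):
        tm = 0.5 * (t0 + t1)
        fm = z_of_t(tm) - z_screen
        if abs(fm) < tol or (t1 - t0) < tol:
            return tm
        if (f0 < 0) == (fm < 0):
            t0, f0 = tm, fm      # crossing lies in the upper half
        else:
            t1 = tm              # crossing lies in the lower half
    return 0.5 * (t0 + t1)
```

Once the crossing time is known, all phase-space coordinates are evaluated from the same interpolant at that time, which is why many densely spaced screens inside one timestep add essentially no tracking cost.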
A selection of the common features of GPT is listed below. Please contact us whenever you have any questions about the capabilities of GPT related to your project.
1) The 2.8 release contains a simple O(N²) version; later releases contain a fast O(N log N) version.
2) Platform dependent; see the GPT versions page for details.