Highlight of GPT version 3.2: A new multi-objective genetic optimizer that can optionally be run on an MPI Linux cluster to greatly aid and accelerate the design process.

Multi-objective genetic optimizer
Most real-life design processes have a large number of variables, several constraints, and a few conflicting objectives. The 'old-style' way to handle such cases is to cast everything into a single objective function by introducing weight factors for the different objectives and penalties for the constraints. This is problematic because the weight factors usually cannot be given a priori: it is often the tradeoff itself that one is interested in. Furthermore, such optimizations aim very narrowly at a single point in variable and objective space, whereas decision making requires knowing how the optimized set of variables behaves as a function of the different objectives. The new genetic multi-objective optimizer was developed to address these issues. The code works with a population of sample solutions that is ranked and genetically recombined in order to improve the population as a whole. During this process, valuable information is not lost but presented in a useful way: the output is the optimal tradeoff between all objectives, taking optional constraints into account, where each point along the tradeoff front is fully optimized in all its variables.
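To illustrate what 'optimal tradeoff' means in practice: for pure minimization problems the result is the set of non-dominated (Pareto-optimal) solutions. The minimal Python sketch below is only a conceptual illustration, not GPT's implementation; the example objective values are invented.

```python
# Conceptual sketch of Pareto dominance for a minimization problem.
# This is NOT GPT's optimizer code, only an illustration of the idea.

def dominates(a, b):
    """True if objective vector a dominates b (all <= and at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated members of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Hypothetical example: two conflicting objectives, e.g. (emittance, bunch length).
population = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 4.0), (4.0, 4.5)]
print(pareto_front(population))   # -> [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0)]
```

Each point on such a front corresponds to one fully optimized set of input variables; moving along the front shows how much one objective must be sacrificed to improve another.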

Note: We use NSGA-II for ranking and Differential Evolution (DE) to create new candidate solutions. Load balancing is achieved by being flexible in the definition of a new 'generation', so that a few runs can take much longer than the typical time to create a new generation without significant loss of performance.
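As an illustration of how DE creates new candidate solutions, the sketch below shows the classic DE/rand/1/bin recombination step. The exact DE variant and control-parameter values used inside GPT are not specified here; F and CR are the generic DE differential weight and crossover probability.

```python
import random

def de_candidate(population, i, F=0.8, CR=0.9):
    """Create a DE/rand/1/bin trial vector for population member i.

    population: list of parameter vectors (lists of floats), at least 4 members.
    F: differential weight, CR: crossover probability (generic DE settings).
    """
    dim = len(population[i])
    # Pick three distinct members, all different from member i.
    a, b, c = random.sample([p for j, p in enumerate(population) if j != i], 3)
    # Mutation: add a scaled difference vector to a base vector.
    mutant = [a[k] + F * (b[k] - c[k]) for k in range(dim)]
    # Binomial crossover with the current member; force at least one mutant gene.
    jrand = random.randrange(dim)
    return [mutant[k] if (random.random() < CR or k == jrand) else population[i][k]
            for k in range(dim)]
```

In a multi-objective setting the trial vectors are typically pooled with their parents and re-ranked, for example with NSGA-II non-dominated sorting and crowding distance, so that only non-dominated and well-spread solutions survive to the next generation.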

MPI Linux version
The stochastic nature of the genetic optimizer has significant advantages, such as very good global convergence properties and insensitivity to numerical noise. However, this comes at a price in terms of required CPU power. We created a load-balanced version of the multi-objective optimizer targeting MPI Linux clusters with 100 - 1000 cores. This is a perfect match for typical population sizes, and speedups in the same range can be achieved. In addition, the multi-dimensional parameter scanning utility (MR) has been rewritten for MPI clusters.
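A load-balanced optimizer of this kind is naturally organized as a master-worker scheme: one rank hands out candidate solutions and collects objective values, while all other ranks run the actual simulations. The mpi4py sketch below illustrates the pattern only; evaluate() is a placeholder for a full simulation run and is not part of the GPT distribution.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def evaluate(candidate):
    """Placeholder for one full simulation run returning objective values."""
    return (sum(x * x for x in candidate), max(candidate))

if rank == 0:
    # Master: hand out candidates as soon as a worker becomes idle.
    # Assumes more candidates than workers (typical for populations of 100-1000).
    candidates = [[float(i), float(i) / 2.0] for i in range(1, 501)]
    status = MPI.Status()
    results = []
    next_job = 0
    pending = 0
    for worker in range(1, size):            # prime every worker with one job
        comm.send(candidates[next_job], dest=worker)
        next_job += 1
        pending += 1
    while pending > 0:
        res = comm.recv(source=MPI.ANY_SOURCE, status=status)
        results.append(res)
        pending -= 1
        if next_job < len(candidates):       # keep the now-idle worker busy
            comm.send(candidates[next_job], dest=status.Get_source())
            next_job += 1
            pending += 1
        else:
            comm.send(None, dest=status.Get_source())  # no more work: stop signal
    print(len(results), "evaluations collected")
else:
    # Worker: evaluate candidates until the stop signal arrives.
    while True:
        candidate = comm.recv(source=0)
        if candidate is None:
            break
        comm.send(evaluate(candidate), dest=0)
```

Because an idle worker immediately receives the next candidate, a few unusually slow simulations do not stall the whole population, which is exactly the flexible-generation behavior described above.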

Update: The MPI Linux version can now also be used to track hundreds of millions of particles distributed over a large number of nodes.


Highlights of GPT version 3.1: The tracking engine and output generator of GPT version 3.1 are significantly improved compared to version 3.0:

Fast snapshots that produce time-domain output without slowing down the ODE solver
In previous versions, time-domain output had to be generated with the tout keyword. Internally, tout instructs the GPT ODE solver to decrease its stepsizes in such a way that the particle coordinates are calculated exactly at the specified times. The tout algorithm starts to make small adjustments to the stepsizes ahead of the requested output time in order to end exactly at it. As a result, tout slows down the ODE solver. GPT version 3.1 has a new snapshot command that is completely decoupled from the stepping algorithm. The new snapshot keyword also writes the phase-space coordinates at the specified simulation times, but it does so using high-order interpolation instead of slowing down the ODE solver. For a series of time-domain outputs with dense spacing, this results in a significant performance increase without any loss of accuracy.
The new snapshot command is particularly useful when the CPU time is dominated by the calculation of the electromagnetic fields rather than by the tracking itself, as is the case in a molecular-dynamics simulation of a diffraction experiment.
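The principle of producing output at an arbitrary time inside an already accepted step, without changing the step size, can be illustrated with a cubic Hermite interpolant built from the coordinates and their time derivatives at the two step endpoints. This is only a simplified sketch; the interpolation order used by the actual snapshot implementation is not specified here.

```python
def hermite_interpolate(t, t0, t1, x0, x1, v0, v1):
    """Cubic Hermite interpolation of a coordinate x(t) inside one ODE step.

    (t0, x0, v0) and (t1, x1, v1) are the position and its time derivative
    at the start and end of an accepted step; t0 <= t <= t1.
    """
    h = t1 - t0
    s = (t - t0) / h                      # normalized time within the step
    h00 = 2 * s**3 - 3 * s**2 + 1         # Hermite basis functions
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * x0 + h10 * h * v0 + h01 * x1 + h11 * h * v1

# Hypothetical example: snapshot time 0.35 inside a step from t=0.3 to t=0.5.
x_snap = hermite_interpolate(0.35, 0.3, 0.5, 1.00, 1.21, 1.0, 1.1)
```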

Perfect screens without interpolation error
Using the same interpolation algorithm as described above, the screens in GPT version 3.1 do not contain interpolation errors. That is to say, the interpolation error is always smaller than the tracking error. This allows the use of screen output in regions with very high field gradients. Without loss of accuracy, it is now possible to position a large number of screens very close together in a high-field region, even if particles pass several screens during one timestep.
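A screen adds one more step to the same idea: detect that the interpolated longitudinal position crosses the screen plane during a step, solve for the crossing time on the interpolant, and evaluate all phase-space coordinates at that time. The sketch below uses simple bisection purely as an illustration; the root-finding method used in GPT itself is not specified here.

```python
def screen_crossing_time(z_screen, t0, t1, z_of_t, tol=1e-12):
    """Find the time at which the interpolated z(t) crosses z_screen.

    z_of_t: callable returning the interpolated longitudinal position,
    e.g. a Hermite interpolant like the one sketched above. Assumes the
    crossing was detected by a sign change of z(t) - z_screen over the step.
    """
    f0 = z_of_t(t0) - z_screen
    for _ in range(200):                       # plain bisection, illustration only
        tm = 0.5 * (t0 + t1)
        fm = z_of_t(tm) - z_screen
        if abs(fm) < tol or (t1 - t0) < tol:
            return tm
        if (f0 < 0) == (fm < 0):               # crossing lies in the upper half
            t0, f0 = tm, fm
        else:                                  # crossing lies in the lower half
            t1 = tm
    return 0.5 * (t0 + t1)

# Once the crossing time is known, every phase-space coordinate is
# interpolated at that time and written to the screen output.
```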


A selection of the common features of GPT is listed below. Please contact us whenever you have any questions about the capabilities of GPT related to your project.

Equations of motion
5th order embedded Runge-Kutta
Adaptive stepsize control
Best accuracy over 10^-10
Any number and any type of particles
Additional differential equations
Windows User Interface (GPTwin)
Fully integrated set-up editor
On-line help
Plots of raw GPT output
Plots of all data-analysis results
Plots of particle trajectories
Multiple synchronized windows
Wizards for custom elements
Output
At specified simulation times
At 3D planes (nondestructive screens)
Coordinates and electromagnetic fields
Trajectory output
Space-charge
3D particle-in-cell
3D point-to-point 1)
2D point-to-ray
2D point-to-circle
 
Data-analysis
Fully hierarchical
Standard macroscopic quantities
RMS, 90% and 100% Emittance
Courant-Snyder parameters
Histograms
Color-density plots
Support for files >>4 GB

Scanning and solving
All multi-dimensional
Parameter scans
Root-finder
Multi-objective optimizer
Load-balanced MPI support 2)

Collector design
Multiple-scattering
plate, pipe, cone, torus, sphere, iris
Current/Power density plots
Beam line components
barmagnet, bend, bz, bzsolenoid,
circlecharge, drift, ecyl, erect,
ezcell, linecharge, magline, magplate,
magpoint, multislit, platecharge,
pointcharge, quadrupole, rectcoil,
rmax, sextupole, solenoid, trwcell,
trwlinac, trwlinbm, undueqfo, unduplan,
xymax
Custom elements
Interfaces with other codes
2D and 3D electrostatic field-maps
2D and 3D magnetostatic field-maps
2D TM cavity field-map
Poisson/Superfish interface
Tabulated ASCII input and output
DXF output for 3D drawing software
SDDS conversion utility

1) The 2.8 release contains a simple O(N²) version; later releases contain a fast O(N log N) tree-based model.
2) Platform dependent, see the GPT versions page for details.