Project

High-precision cross section predictions for the Large Hadron Collider

At the most fundamental level, nature is described by the interactions of elementary particles. To further our understanding of the universe at this level, it is of vital importance that theory predictions are compared to measurements at ever-increasing precision. The production of colour-singlet states, such as Higgs bosons and electroweak bosons, is of particular interest. Not only do these particles play central roles in electroweak symmetry breaking and the fate of the universe, but the theoretical description of their production can be carried out at unusually high orders in perturbation theory, leading to high-precision predictions. The goal of this project is to perform cross section calculations that allow us to make predictions for such processes under realistic conditions measurable in experiments, at unprecedented next-to-next-to-next-to-leading order precision. These leading-edge computations are, however, extremely demanding and easily require several million CPU hours. Without the use of an HPC system, the predictions we aim to make are simply out of reach.

Project Details

Project term

September 30, 2021–September 30, 2022

Affiliations

University of Cambridge
RWTH Aachen University

Institute

Jülich Supercomputing Centre

Principal Investigator

Michał Czakon

Researchers

Terry Generet
René Poncelet

Methods

The type of calculation we want to perform cannot be done fully analytically. The only way to describe realistic final states with phase-space cuts similar to those used in experiments is to integrate over the phase space numerically. Due to the high dimensionality of the phase spaces – up to 16 dimensions – quadrature methods are not an option. Instead, we use the Monte Carlo numerical integration method. Even so, the cross section contains many singularities with a highly non-trivial structure. These singularities are handled according to the sector-improved residue subtraction scheme, as implemented in the C++ code Stripper.
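
As a rough illustration of why Monte Carlo sampling is the only viable option at such dimensionalities, the following minimal C++ sketch estimates a 16-dimensional integral by uniform sampling and quotes the statistical error. The integrand, sample size and seed are hypothetical stand-ins and have nothing to do with the actual integrands appearing in Stripper.

    // Illustrative sketch (not part of Stripper): plain Monte Carlo estimate of a
    // 16-dimensional integral, the dimensionality quoted above, where quadrature
    // grids are hopeless. The integrand below is a hypothetical stand-in.
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        constexpr int dim = 16;            // phase-space dimensionality
        constexpr long long n = 1'000'000; // number of sampled points (hypothetical)
        std::mt19937_64 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        double sum = 0.0, sum2 = 0.0;
        for (long long i = 0; i < n; ++i) {
            double x[dim];
            for (int d = 0; d < dim; ++d) x[d] = u(rng);
            // hypothetical smooth "integrand"; a real cross section would be the
            // squared matrix element times the phase-space weight, with cuts applied
            double f = 1.0;
            for (int d = 0; d < dim; ++d) f *= 2.0 * x[d];
            sum  += f;
            sum2 += f * f;
        }
        double mean = sum / n;
        double err  = std::sqrt((sum2 / n - mean * mean) / n);
        std::printf("I = %.6f +- %.6f (exact value: 1)\n", mean, err);
        return 0;
    }

The statistical error scales as 1/sqrt(n) independently of the dimension, which is what makes this approach viable where a quadrature grid with even ten nodes per dimension would already require 10^16 integrand evaluations.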

Results

Unfortunately, due to theoretical issues described in the next subsection, we have so far been unable to complete our calculation. These problems could not have been foreseen and took a considerable amount of time to solve, which is why this project has been extended. After solving the aforementioned issues, we were able to confirm results at lower perturbative orders, up to and including next-to-next-to-leading order. This includes results obtained by other groups as well as results obtained by ourselves using a well-established alternative technique, which, however, is not applicable at the perturbative order we ultimately wish to reach.

Discussion

Two main issues had to be solved during the original project period: optimisation and technical biases.
At the start of the project, it soon became clear that, to perform the calculation successfully with a reasonable amount of resources, the code needed to be optimised significantly.
The biggest gain in speed came from an improvement to the implementation of the sector-improved residue subtraction scheme. The subtraction scheme requires the phase space to be decomposed so as to isolate all singularities before regulating them. The previous implementation of this procedure sometimes created very strongly peaked integrands, which are extremely difficult to integrate numerically. Once this problem was identified, a suitable solution was devised and implemented, which sped up the code to the point where our intended calculation became feasible.
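
To illustrate the kind of problem a strongly peaked integrand poses for Monte Carlo integration, and how a change of variables can remove it, the following C++ sketch integrates 1/(x + eps) over [0, 1] first with flat sampling and then after a hypothetical logarithmic remapping that absorbs the peak into the Jacobian. The mapping and the parameters are purely illustrative; they are not the actual modification made to the subtraction scheme implementation.

    // Illustrative sketch, not the Stripper implementation: a strongly peaked
    // integrand f(x) = 1/(x + eps) on [0,1] integrated (i) with flat sampling and
    // (ii) after the hypothetical remapping x(t) = eps*(exp(t*log(1 + 1/eps)) - 1),
    // which absorbs the peak into the integration measure.
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        constexpr double eps = 1e-6;
        constexpr long long n = 1'000'000;
        const double L = std::log(1.0 + 1.0 / eps);   // exact value of the integral
        std::mt19937_64 rng(7);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        auto run = [&](auto integrand) {
            double s = 0.0, s2 = 0.0;
            for (long long i = 0; i < n; ++i) {
                double f = integrand(u(rng));
                s += f; s2 += f * f;
            }
            double mean = s / n;
            double err  = std::sqrt((s2 / n - mean * mean) / n);
            std::printf("estimate = %.4f +- %.4f (exact %.4f)\n", mean, err, L);
        };

        // (i) flat sampling: large variance, error driven by rare points near x = 0
        run([&](double x) { return 1.0 / (x + eps); });

        // (ii) remapped sampling: the Jacobian cancels the peak exactly
        run([&](double t) {
            double x   = eps * (std::exp(t * L) - 1.0);   // mapped point
            double jac = L * (x + eps);                    // dx/dt
            return jac / (x + eps);                        // = L, a constant
        });
        return 0;
    }

With flat sampling the error is dominated by rare points close to x = 0, while after the remapping the effective integrand is constant and the statistical error essentially vanishes.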
The second major issue was a systematic bias in the results caused by a necessary technical cut: due to the instability of floating-point calculations involving small differences of large numbers, a so-called technical cut needs to be introduced to prevent the program from sampling such problematic regions. If this cut is chosen too small, it does not sufficiently shield the calculation from numerical instabilities. If it is chosen too large, it can introduce a bias. For the intended calculation, it turned out that the previous implementation of this cut could not simultaneously stabilise the calculation and keep it unbiased. It took significant testing and analysis of dedicated runs to find a suitable replacement cut that performs as required. The new version completely protects the calculation from the relevant numerical instabilities, without any visible biases.

In the time between the previous status report, submitted as part of the project extension application, and the end of the original project, we were able to test the new technical cut with a greater number of events, reducing the Monte Carlo integration error and increasing our sensitivity to any remaining biases. Within the improved uncertainties, we were still unable to find any residual bias. In the same period, we also replaced our implementation of certain mathematical functions with high-precision approximations, which has sped up the evaluation of specific parts of the calculation by a factor of up to 200.
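
The following self-contained C++ sketch shows the numerical issue behind such a cut: for two nearly collinear massless momenta, the invariant mass computed naively as E^2 - |p|^2 is a small difference of large numbers and loses all relative precision in double arithmetic, so points below a chosen threshold are discarded. The momenta, the threshold and the naive evaluation are hypothetical examples chosen for illustration, not the cut implemented in Stripper.

    // Illustrative sketch of the idea behind a technical cut (hypothetical numbers
    // and threshold, not the cut used in Stripper): for nearly collinear massless
    // momenta the invariant s_ij = (p_i + p_j)^2 suffers from catastrophic
    // cancellation in double precision, so such points are rejected.
    #include <array>
    #include <cmath>
    #include <cstdio>

    using P4 = std::array<double, 4>;   // (E, px, py, pz)

    // Minkowski square computed naively: E^2 - |p|^2, prone to cancellation.
    double mass2(const P4& p) {
        return p[0]*p[0] - p[1]*p[1] - p[2]*p[2] - p[3]*p[3];
    }

    int main() {
        const double E = 500.0;      // half the hypothetical partonic energy
        const double delta = 1e-9;   // tiny opening angle -> collinear limit
        P4 pi{E, 0.0, 0.0, E};
        P4 pj{E, E*std::sin(delta), 0.0, E*std::cos(delta)};
        P4 pij{pi[0]+pj[0], pi[1]+pj[1], pi[2]+pj[2], pi[3]+pj[3]};

        const double shat  = 4.0 * E * E;
        const double sij   = mass2(pij);        // dominated by rounding noise
        const double exact = E * E * delta * delta; // small-angle value of 2E^2(1-cos(delta))

        std::printf("s_ij/s_hat  naive: %.3e   exact: %.3e\n", sij/shat, exact/shat);

        const double technical_cut = 1e-12;     // hypothetical threshold
        if (sij / shat < technical_cut)
            std::printf("point rejected by the technical cut\n");
        return 0;
    }

In this example the naively computed invariant comes out as essentially zero: the true value, of order 10^-19 relative to s_hat, is completely swamped by rounding, so any weight assigned to such a point would be numerical noise. Discarding such points is exactly what a technical cut is meant to achieve, and the difficulty described above lies in doing so without biasing the result.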

Additional Project Information

DFG classification: 309 Particles, Nuclei and Fields
Software: The C++ code Stripper
Cluster: CLAIX