# BayesFITS project description

## Introduction

Recent decades have witnessed an unprecedented surge of activity in addressing complex problems involving large amounts of information, both in science and beyond. This progress has become possible through a fortuitous combination of several factors. Firstly, there has been steady technological advancement towards increasingly powerful, affordable and widely available computers. Related to this has been the rapid expansion of the internet, which paved the way not only for a huge increase in communication but also for data transfer and accessibility. Finally, technological progress has made it possible for researchers to attack problems involving multi-dimensional parameter spaces and large volumes of data, in areas ranging from pure science (physics, astronomy, biology, etc.) to financial markets, defense and security, and safety assessment.

Particle physics has contributed to this progress in a number of ways, of which the best-known example is the ground-breaking work on developing the World Wide Web, which started over twenty years ago at CERN when particle physicists and computer experts attempted to resolve the problem of making large sets of experimental data accessible to groups located in physically different locations. Particle physicists were also at the front line of developing email.

The most recent, and still very much ongoing, development which originated to a large extent in particle physics is the computational GRID. When preparations for the Large Hadron Collider (LHC) at CERN started some two decades ago, particle physicists faced a new challenge: how to transfer, and subsequently process in a manageable way, the vastly increased amount of data that the LHC is expected to produce. This led to the development of distributed computing, which has opened up a host of new possibilities, and also challenges. It is evident that attempts at resolving challenges arising in basic research, including particle physics, have often led to unexpected spin-offs for other areas of research and, indeed, for the economy and society at large. There are also countless examples where algorithms and methods developed in basic scientific research found direct applications in other, more industry-related branches of research. Given the complexity and enormity of the LHC project, it is likely that, apart from expected discoveries in basic science, it will continue to serve, as it already has, as an effective testing ground for developing new ideas and approaches in technology and information processing, to which we want to contribute in this Project. As described below, we will develop a practical approach based on applying the methods of Bayesian statistics to analyzing, and efficiently extracting useful information from, the vast amounts of data expected from the LHC. Realizing this goal will lead to the creation of a library of sophisticated, high-efficiency and highly portable algorithms and other tools with a much wider range of applicability, in particular in risk assessment in realistic environments, which we will also address.
Not only should this help our physicists (with a substantial Polish investment and participation) to make more effective use of the results from the LHC during the expected two decades of its operation but, equally importantly, our results are likely to be useful to researchers from other areas of science and industry, both in Poland and internationally, and will allow broader society to profit from a more effective use of the Polish and European investment in the LHC project.

## The Bayesian approach

The Bayesian approach has in recent years become enormously popular in its applications to a whole spectrum of problems across a very wide range of fields: pure mathematics, astronomy, physics, artificial intelligence, pattern recognition, economics, finance, risk assessment, etc., where previously the usefulness of statistics often appeared remote. Bayesian inference is based on the intuitive assumption that probability is a measure of the degree of belief in a hypothesis. At the heart of the approach lies Bayes' theorem, which can be schematically expressed as P(theory|data) ∝ P(data|theory)P(theory), up to a normalization factor P(data), where "theory" represents some hypothesis about the problem at hand and "data" expresses the state of knowledge about the problem. Here P(theory) is the prior probability for the theory, which reflects our degree of belief before carrying out the measurement, and P(data|theory) is the probability of obtaining the data actually observed, given the theory, also called the likelihood function.
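The update rule above can be made concrete with a toy calculation. The sketch below uses purely hypothetical numbers (two candidate "theories" with equal priors and assumed likelihoods for some observed data) to show how the prior and the likelihood combine, and how dividing by P(data) normalizes the posteriors:

```python
# Toy illustration of Bayes' theorem with hypothetical numbers:
# two competing theories, equal priors, and an assumed likelihood
# P(data | theory) for the data actually observed under each.

priors = {"theory_A": 0.5, "theory_B": 0.5}       # P(theory)
likelihoods = {"theory_A": 0.8, "theory_B": 0.2}  # P(data | theory)

# Unnormalized posterior: P(data | theory) * P(theory)
unnormalized = {t: likelihoods[t] * priors[t] for t in priors}

# The normalization P(data) is the sum over all theories considered.
evidence = sum(unnormalized.values())
posterior = {t: u / evidence for t, u in unnormalized.items()}

print(posterior)  # theory_A ≈ 0.8, theory_B ≈ 0.2
```

With equal priors the posterior odds reduce to the likelihood ratio; unequal priors would shift the result accordingly.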

## Bayesian method in particle physics

Over the last few years the Bayesian inference approach has been successfully applied to particle physics theory, a development which the Project Leader (PL) has led and to which he has made a substantial contribution, as briefly described below. To provide some background: in some important respects the situation in theoretical particle physics is unsatisfactory. While it is widely believed that the so-called Standard Model of particle physics is only part of a more complete and fundamental theory, it is not clear which, if any, of the known candidate models for such "new physics" beyond the Standard Model, among which so-called supersymmetric (or SUSY) models are the most appealing, is the correct one. These models are typically expressed in terms of several parameters, whose ranges and priors are often poorly known. On the other hand, the choice of these parameters determines the values of the various testable predictions, or observables. In order to achieve a deeper understanding of a given model and to derive its global properties, performing a full numerical exploration of the whole multi-dimensional parameter space is essential. The fixed-grid scan techniques previously used turned out to be a bottleneck to progress, while the application of a random scan using Markov Chain Monte Carlo (MCMC) proved extremely efficient and thus provided a breakthrough.

Inherently related to performing a full scan of the whole multi-dimensional parameter space is the issue of a proper comparison of the model's predictions for various observables with experimental data from the LHC and elsewhere. Since both come with inherent uncertainties, one can only compare the two in a statistical way, by evaluating the likelihood function. Furthermore, as the observables are functions of the model's defining and other (nuisance) parameters, some values of the parameters lead to relatively better agreement with the data and some to relatively poorer agreement. This may also reveal tensions among observables, with implications for the assumed priors of the model's initial parameters. This clearly suggests that a proper way of comparing theoretical expectations with experimental information must be rooted in a statistical approach. A fruitful way of proceeding, which allowed us to address the conceptual and practical shortcomings of the previous fixed-grid approach, was to apply Bayesian inference linked with an MCMC-type scanning technique.
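The combination described here, a likelihood comparing a model's prediction with data, explored by an MCMC random scan rather than a fixed grid, can be sketched in miniature. The example below is a minimal Metropolis-Hastings walker over a single hypothetical model parameter `theta`, with an assumed Gaussian likelihood around a made-up "measured" observable and a flat prior on an assumed range; the observable, its value 1.5, its uncertainty 0.5, and the prior range are all illustrative stand-ins, not anything from the actual project:

```python
import math
import random

def log_posterior(theta):
    """Log of prior * likelihood for a toy one-parameter model.

    Hypothetical setup: flat prior on [-10, 10]; Gaussian likelihood
    comparing the model's predicted observable (here simply theta)
    with an assumed measurement 1.5 +/- 0.5.
    """
    if not -10.0 <= theta <= 10.0:
        return float("-inf")  # zero prior probability outside the range
    observed, sigma = 1.5, 0.5
    return -0.5 * ((theta - observed) / sigma) ** 2

def metropolis_scan(n_steps, step_size=0.5, seed=0):
    """Random scan via the Metropolis algorithm: propose a nearby point,
    accept it with probability min(1, posterior ratio), otherwise stay."""
    rng = random.Random(seed)
    theta = 0.0
    chain = []
    for _ in range(n_steps):
        proposal = theta + rng.gauss(0.0, step_size)
        if math.log(rng.random()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        chain.append(theta)
    return chain

chain = metropolis_scan(20_000)
mean = sum(chain) / len(chain)  # concentrates near the "measured" 1.5
```

Unlike a fixed grid, whose cost grows exponentially with the number of parameters, the chain spends its steps where the posterior is large, which is why this kind of scan scales to the multi-dimensional parameter spaces of SUSY-type models.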

## The program

It is in this spirit of linking front-line research and industry-leaning applications that we want to proceed in this Project. We want to set up a research team which will develop a methodology and a related range of computational tools that, while differing in specific applications, will share a common underlying platform of efficient MCMC-type scanning algorithms and the methodology of Bayesian statistics. We will develop procedures, algorithms and applications to address concrete problems in physics at the LHC and in astroparticle physics on the one hand, and on the other hand practical applications to issues related to current and upcoming challenges in Polish industry, namely those related to the safe operation of nuclear and chemical plants, where accidents carry potentially huge risks. However, the expertise and the library of practical computational tools that we will create will potentially have a much broader range of applicability.