NNPDF

From Wikipedia, the free encyclopedia
NNPDF
Developer: The NNPDF Collaboration
Stable release: 4.0
Type: Particle physics
Website: nnpdf.mi.infn.it

    NNPDF is the acronym used to identify the parton distribution functions from the NNPDF Collaboration.[citation needed] NNPDF parton densities are extracted from global fits to data, based on a combination of a Monte Carlo method for uncertainty estimation and the use of neural networks as basic interpolating functions.[1]

    Methodology

    The NNPDF Collaboration strategy is summarized in this diagram.

    The NNPDF approach can be divided into four main steps:

    • The generation of a large sample of Monte Carlo replicas of the original experimental data, such that central values, errors and correlations are reproduced with sufficient accuracy.
    • The training (minimization of the χ²) of a set of PDFs parametrized by neural networks on each of the above MC replicas of the data. PDFs are parametrized at the initial evolution scale Q₀² and then evolved to the experimental data scale Q² by means of the DGLAP equations. Since the PDF parametrization is redundant, the minimization strategy is based on genetic algorithms as well as gradient-descent minimizers.
    • The neural network training is stopped dynamically before entering the overlearning regime, so that the PDFs learn the physical law underlying the experimental data without also fitting statistical noise.
    • Once the training on the MC replicas is complete, a set of statistical estimators can be applied to the resulting PDFs in order to assess the statistical consistency of the results. For example, the stability with respect to the PDF parametrization can be verified explicitly.
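    The first step can be sketched as follows. This is a minimal NumPy illustration, not the actual NNPDF code, and the data values and covariance matrix are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy experimental data: central values and covariance matrix
# (illustrative numbers, not real measurements).
central = np.array([1.20, 0.85, 0.47])
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.008, 0.002],
                [0.001, 0.002, 0.012]])

# Generate N_rep Monte Carlo replicas of the data so that central
# values, errors and correlations are reproduced on average.
n_rep = 1000
replicas = rng.multivariate_normal(central, cov, size=n_rep)

# The replica sample reproduces the input statistics up to
# fluctuations that shrink as N_rep grows.
print(replicas.mean(axis=0))
print(np.cov(replicas, rowvar=False))
```

    In the real fit each replica is then treated as an independent pseudo-dataset on which one neural-network PDF set is trained.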

    The set of N_rep PDF sets (trained neural networks) provides a representation of the underlying PDF probability density, from which any statistical estimator can be computed.
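    Given such an ensemble of replicas, estimators follow by averaging over it. A schematic sketch, where the "replicas" are random stand-ins rather than real fitted PDFs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for N_rep fitted PDF replicas evaluated on a grid of x
# points (random numbers here; in a real fit each row would be one
# trained neural network).
n_rep, n_x = 100, 50
pdf_replicas = rng.normal(loc=0.5, scale=0.1, size=(n_rep, n_x))

# Central value and 1-sigma uncertainty at each x: the mean and
# standard deviation over the replica ensemble.
central = pdf_replicas.mean(axis=0)
error = pdf_replicas.std(axis=0, ddof=1)

# Any observable O[f] is estimated the same way: evaluate it on each
# replica, then take mean and standard deviation over the ensemble.
```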

    Example


    The gluon PDF at small x from the NNPDF1.0 analysis is available through the LHAPDF interface.
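    Querying an NNPDF set through the LHAPDF Python interface looks roughly like this. This is a sketch that assumes LHAPDF and the named grid file are installed locally:

```python
import lhapdf

# Load member 0 (the central member) of an NNPDF set; the set name
# must correspond to a grid present in the LHAPDF data path.
pdf = lhapdf.mkPDF("NNPDF40_nnlo_as_01180", 0)

# xg(x, Q): gluon (PDG ID 21) at small x and scale Q = 10 GeV.
xg = pdf.xfxQ(21, 1e-4, 10.0)
print(xg)
```

    For uncertainty estimates one loads all members of the set and applies the replica statistics described above.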

    Releases


    The NNPDF releases are summarised in the following table:

    PDF set  | DIS data | Drell-Yan data | Jet data | LHC data | Independent param. of s and s̄ | Heavy quark masses | NNLO
    NNPDF4.0 | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    NNPDF3.1 | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    NNPDF3.0 | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    NNPDF2.3 | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    NNPDF2.2 | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    NNPDF2.1 | Yes | Yes | Yes | No | Yes | Yes | Yes
    NNPDF2.0 | Yes | Yes | Yes | No | Yes | No | No
    NNPDF1.2 | Yes | No | No | No | Yes | No | No
    NNPDF1.0 | Yes | No | No | No | No | No | No

    All PDF sets are available through the LHAPDF interface and on the NNPDF website.

    References

    1. ^ Ball, Richard D.; et al. (The NNPDF Collaboration) (2009). "A determination of parton distributions with faithful uncertainty estimation". Nuclear Physics B. 809 (1–2): 1–63. arXiv:0808.1231.