Search for Majorana neutrinos exploiting millikelvin cryogenics with CUORE

Optimum trigger and analysis threshold

The continuous data stream of CUORE is triggered with the optimum trigger, an algorithm based on the optimum filter51 and characterized by a lower threshold than a more standard derivative trigger32. A lower threshold not only enables us to reconstruct the low-energy part of the spectrum, but also yields a higher efficiency in reconstructing events in coincidence between different calorimeters and, consequently, a more precise understanding of the corresponding background components52,53.

The optimum trigger transfer function of every event is matched to the ideal signal shape, obtained as the average of good-quality pulses, so that frequency components with low signal-to-noise ratio are suppressed. A trigger is fired if the filtered signal amplitude exceeds a fixed multiple of the noise root mean square (RMS), defined separately for each calorimeter and dataset. We evaluate the energy threshold by injecting fake pulses of varying amplitude, calculated by inverting the calibration function, into the data stream. We reconstruct the stabilized amplitude of the fake pulses, fit the ratio of correctly triggered events to generated events with an error function, and use the 90% quantile as a figure of merit for the optimum trigger threshold. This approach enables monitoring of the threshold during data collection, and is based on the assumption that the signal shape is not energy dependent, that is, that the average pulse obtained from high-energy γ events is also a good template for events of a few keV. The distribution of energy threshold at 90% trigger efficiency is shown in Extended Data Fig. 4.
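As an illustration of this procedure, the sketch below (a simplified stand-in for the CUORE software, with hypothetical input arrays) fits the triggered fraction of injected fake pulses with an error function and solves for the energy at which the fitted efficiency reaches 90%.

# Minimal sketch of the threshold evaluation described above (not CUORE production code).
# `injected_energy` holds the injected fake-pulse energies in keV and `triggered` is a
# boolean array flagging whether the optimum trigger fired on each pulse; both are
# hypothetical inputs.
import numpy as np
from scipy.optimize import curve_fit, brentq
from scipy.special import erf

def trigger_efficiency(energy_kev, mu, sigma):
    """Error-function model of the trigger efficiency turn-on curve."""
    return 0.5 * (1.0 + erf((energy_kev - mu) / (np.sqrt(2.0) * sigma)))

def threshold_at_90pct(injected_energy, triggered, bins):
    """Fit the per-bin triggered fraction and return the 90%-efficiency energy."""
    counts_all, edges = np.histogram(injected_energy, bins=bins)
    counts_trig, _ = np.histogram(injected_energy[triggered], bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts_all > 0
    fraction = counts_trig[mask] / counts_all[mask]
    (mu, sigma), _ = curve_fit(trigger_efficiency, centers[mask], fraction, p0=[10.0, 3.0])
    # Energy at which the fitted efficiency crosses 90% (assumes the turn-on lies in (0, 10*mu)).
    return brentq(lambda e: trigger_efficiency(e, mu, sigma) - 0.9, 0.0, 10.0 * mu)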

For this work we set a common analysis threshold of 40 keV, which results in >90% trigger efficiency for the majority (97%) of the calorimeters, while at the same time allowing the removal of multi-Compton events from the region of interest through the anti-coincidence cut.

Efficiencies

The total efficiency is the product of the reconstruction, anti-coincidence, pulse shape discrimination (PSD) and containment efficiencies.

The reconstruction efficiency is the probability that a signal event is triggered, has the energy properly reconstructed, and is not rejected by the basic quality cuts requiring a stable pre-trigger voltage and only a single pulse in the signal window. It is evaluated for each calorimeter using externally flagged heater events54, which are a good approximation of signal-like events.

The anti-coincidence efficiency is the probability that a true single-crystal event correctly passes our anti-coincidence cut, instead of being wrongly vetoed owing to an accidental coincidence with an unrelated event. It is extracted as the acceptance of fully absorbed γ events at 1,460 keV from the electron capture decays of 40K, which provide a reference sample of single-crystal events.

The PSD efficiency is obtained as the average acceptance of events in the 60Co, 40K and 208Tl γ peaks that already passed the base and anti-coincidence cuts. In principle, the PSD efficiency could differ from calorimeter to calorimeter, but given the limited statistics in physics data we evaluate it over all channels and over the full dataset. To account for possible variation between individual calorimeters, we compare the PSD efficiency obtained by directly summing their individual spectra with that extracted from an exposure-weighted sum of the calorimeters’ spectra. We find an average ±0.3% discrepancy between the two values and include it as a global systematic uncertainty in the 0νββ fit, assigning it a Gaussian prior rather than the uniform prior used in our previous result32, where the uncertainty arose from a discrepancy between two approaches that has since been resolved.
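The sketch below illustrates this type of acceptance calculation under simplifying assumptions (hypothetical per-calorimeter inputs, plain binomial uncertainties): the acceptance of a reference peak sample is evaluated both by summing all calorimeters directly and as an exposure-weighted average, and the difference between the two is taken as a measure of channel-to-channel variation.

# Minimal sketch, not the CUORE analysis code. `n_pass`, `n_total` and `exposure` are
# hypothetical per-calorimeter arrays: events passing the cut, events in the reference
# peak sample, and TeO2 exposure, respectively.
import numpy as np

def acceptance(n_pass, n_total):
    """Cut acceptance with a simple binomial uncertainty."""
    eff = n_pass / n_total
    return eff, np.sqrt(eff * (1.0 - eff) / n_total)

def compare_acceptances(n_pass, n_total, exposure):
    """Direct-sum acceptance versus exposure-weighted average of per-channel acceptances."""
    eff_direct, _ = acceptance(n_pass.sum(), n_total.sum())
    eff_weighted = np.average(n_pass / n_total, weights=exposure)
    return eff_direct, eff_weighted, eff_direct - eff_weighted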

Finally, the containment efficiency is evaluated through Geant4-based Monte Carlo simulations55 and accounts for the energy loss due to geometrical effects as well as bremsstrahlung.

Principal component analysis for PSD

In this analysis we use a new algorithm based on principal component analysis (PCA) for pulse shape discrimination. The method has been developed and documented for CUPID-Mo56, and has been adapted for use in CUORE57. This technique replaces the algorithm used in previous CUORE results, which was based on six pulse shape variables30. The PCA decomposition of signal-like events pulled from γ calibration peaks yields a leading component similar to an average pulse, which on its own captures >90% of the variance between pulses. We choose to treat the average pulse of each calorimeter in a dataset as if it were the leading PCA component, normalizing it like a PCA eigenvector. We can then project any event from the same channel onto this vector and attempt to reconstruct the 10-s waveform using only this leading component. For any waveform x and leading PCA component w with length n, we define the reconstruction error as:

$$\mathrm{RE}=\sqrt{\sum_{i=1}^{n}\left(\mathbf{x}_{i}-(\mathbf{x}\cdot \mathbf{w})\,\mathbf{w}_{i}\right)^{2}}. \qquad (1)$$

This reconstruction error metric measures how well an event waveform can be reconstructed using only the average pulse treated as a leading PCA component. Events that deviate from the typical expected shape of a signal waveform are poorly reconstructed and have a high reconstruction error. We normalize the reconstruction errors with a second-order polynomial function of energy on a calorimeter-dataset basis (see Extended Data Fig. 5), and place a cut on the normalized values by optimizing a figure of merit of signal efficiency over expected background in the region of interest around Qββ. Using this PCA-based method, we obtain an overall efficiency of (96.4 ± 0.2)%, compared with (94.0 ± 0.2)% for the pulse shape analysis used in our previous results, as well as a 50% reduction in the PSD systematic uncertainty, from 0.6% to 0.3%.
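A minimal sketch of equation (1) and of one plausible form of the energy-dependent normalization is given below; the input arrays (waveforms, energies, average pulse) are hypothetical, and the CUORE production implementation may differ in details such as the exact normalization.

# Minimal sketch of the PCA-based reconstruction error (equation (1)) and a
# polynomial normalization versus energy. `waveforms` is an (n_events, n_samples)
# array, `average_pulse` an (n_samples,) template and `energies` is in keV; all are
# hypothetical inputs.
import numpy as np

def reconstruction_error(waveforms, average_pulse):
    """Residual norm after projecting each waveform onto the unit-normalized
    average pulse, treated as the leading PCA component."""
    w = average_pulse / np.linalg.norm(average_pulse)  # normalize like a PCA eigenvector
    proj = waveforms @ w                               # (x . w) for each event
    residual = waveforms - np.outer(proj, w)           # x_i - (x . w) w_i
    return np.linalg.norm(residual, axis=1)

def normalized_re(re, energies):
    """Remove the average energy dependence with a second-order polynomial fit
    (one plausible normalization; the exact prescription is an assumption here)."""
    coeffs = np.polyfit(energies, re, deg=2)
    return re / np.polyval(coeffs, energies)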

Statistical analysis

The high-level statistical 0νββ decay analysis consists of an unbinned Bayesian fit to the combined data, implemented with the BAT software package58. The model parameters are the 0νββ decay rate (Γ0ν), a linearly sloped background, and the 60Co sum peak amplitude. Γ0ν and the 60Co rate are common to all datasets, with the 60Co rate scaled by a preset dataset-dependent factor to account for its expected decay over time. The base background rate, expressed in counts keV⁻¹ kg⁻¹ yr⁻¹, is dataset-dependent, whereas the linear slope of the background is shared among all datasets, because any structure in the shape of the background should not vary between datasets. Γ0ν, the 60Co rate, and the background rate parameters have uniform priors constrained to non-negative values, whereas the linear slope of the background has a uniform prior that allows both positive and negative values.
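The sketch below summarizes the ingredients of such an unbinned fit for a single dataset. It is not the BAT-based CUORE code: the Gaussian line shapes, parameter names and numerical values (fit window, peak positions) are illustrative assumptions.

# Minimal sketch of an extended unbinned likelihood with a 0vbb peak at Q_bb, the
# 60Co sum peak and a linearly sloped background; energies in keV, hypothetical values.
import numpy as np

Q_BB = 2527.5           # approximate Q-value of 130Te, for illustration
E_CO60 = 2505.7         # approximate 60Co sum-peak energy, for illustration
ROI = (2465.0, 2575.0)  # illustrative fit window

def gaussian(e, mean, sigma):
    return np.exp(-0.5 * ((e - mean) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def log_likelihood(energies, n_signal, n_co60, n_bkg, slope, sigma):
    """Extended unbinned log-likelihood for one dataset; `slope` must be small
    enough that the background density stays positive over the fit window."""
    lo, hi = ROI
    width = hi - lo
    center = 0.5 * (lo + hi)
    bkg_pdf = (1.0 + slope * (energies - center)) / width  # normalized over the window
    density = (n_signal * gaussian(energies, Q_BB, sigma)
               + n_co60 * gaussian(energies, E_CO60, sigma)
               + n_bkg * bkg_pdf)
    return np.sum(np.log(density)) - (n_signal + n_co60 + n_bkg)

In a multi-dataset fit of this kind, the dataset log-likelihoods are summed, with n_signal tied to the shared Γ0ν through dataset-specific exposures and efficiencies, and n_co60 scaled by the preset decay factors.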

In addition to these statistical parameters, we consider the systematic effects induced by the uncertainty on the energy bias and energy resolution59,60, the value of Qββ, the natural isotopic abundance of 130Te, and the reconstruction, anti-coincidence, PSD and containment efficiencies. We evaluate their separate effects on the 0νββ rate by adding nuisance parameters to the fit one at a time and studying the effect on both the posterior global mode \(\hat{\Gamma}_{0\nu}\) and the marginalized 90% CI limit on Γ0ν.

A list of the systematics and priors is reported in Extended Data Table 1. The efficiencies and the isotopic abundance are multiplicative terms on our expected signal, so the effect of each is reported as a relative effect on Γ0ν. By contrast, the uncertainties on Qββ, the energy bias, and the resolution scaling have a non-trivial relation to the signal rate; therefore, we report the absolute effect of each on Γ0ν. The dominant effect is due to the uncertainty on the energy bias and resolution scaling in physics data. We account for possible correlations between the nuisance parameters by including all of them in the fit simultaneously.
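Schematically, and under the standard approximation that the half-life far exceeds the live time, the expected number of 0νββ signal events can be written as

$$N_{0\nu}\simeq \Gamma_{0\nu}\,\varepsilon_{\mathrm{tot}}\,\frac{N_{\mathrm{A}}\,a_{\mathrm{I}}}{m_{\mathrm{TeO_{2}}}}\,M\Delta t,$$

where NA is Avogadro’s number, aI the 130Te isotopic abundance, mTeO2 the molar mass of TeO2, MΔt the TeO2 exposure and εtot the total efficiency. This makes explicit why the efficiencies and the isotopic abundance act as purely multiplicative factors on the expected signal, whereas Qββ, the energy bias and the resolution enter through the line shape and therefore affect Γ0ν less directly.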

We chose a uniform prior on our physical observable of interest, Γ0ν, which means we treat any number of signal events as equally likely. Other uninformative choices could also be considered appropriate. Because the result of any Bayesian analysis depends to some extent on the choice of priors, we checked how other reasonable priors affect our result57. We considered: a uniform prior on \(\sqrt{\Gamma_{0\nu}}\), equivalent to a uniform prior on mββ and also equivalent to using the Jeffreys prior; a scale-invariant uniform prior on logΓ0ν; and a uniform prior on 1/Γ0ν, equivalent to a uniform prior on \(T_{1/2}^{0\nu}\).
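For reference, a uniform prior on a transformed variable θ(Γ0ν) corresponds, by the change-of-variables rule, to a density in Γ0ν proportional to |dθ/dΓ0ν|:

$$p(\Gamma_{0\nu})\propto\left|\frac{\mathrm{d}\theta}{\mathrm{d}\Gamma_{0\nu}}\right|=\begin{cases}\tfrac{1}{2}\,\Gamma_{0\nu}^{-1/2}, & \theta=\sqrt{\Gamma_{0\nu}},\\ \Gamma_{0\nu}^{-1}, & \theta=\log\Gamma_{0\nu},\\ \Gamma_{0\nu}^{-2}, & \theta=1/\Gamma_{0\nu}.\end{cases}$$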

These priors are all undefined at Γ0ν = 0, so we impose a lower cut-off of Γ0ν > 10⁻²⁷ yr⁻¹, which for the given exposure corresponds to approximately one signal event. The uniform prior on \(\sqrt{\Gamma_{0\nu}}\) gives the smallest effect, strengthening the limit by 25%, whereas the flat prior on 1/Γ0ν has the largest effect, increasing the limit on \(T_{1/2}^{0\nu}\) by a factor of 4. All of these priors weight small values of Γ0ν more heavily; our choice of a flat prior on Γ0ν therefore provides the most conservative result.

We compute the 0νββ exclusion sensitivity by generating a set of 10⁴ toy experiments with the background-only model, that is, including only the 60Co and linear background components. The toys are split into 15 datasets with exposures and background rates obtained from the background-only fits to our actual data. We fit each toy with the signal-plus-background model and extract the distribution of 90% CI limits, shown in Extended Data Fig. 4.
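A heavily simplified, self-contained sketch of such a sensitivity study is shown below: toy spectra are drawn from a background-only model and each toy is fitted with signal plus background to collect 90% CI upper limits. The flat background, single dataset, fixed background yield and all numerical values are illustrative assumptions, and the limit is expressed in expected signal counts rather than Γ0ν.

# Minimal sketch of an exclusion-sensitivity study (not the CUORE fit).
import numpy as np

rng = np.random.default_rng(seed=1)
ROI = (2465.0, 2575.0)     # illustrative fit window (keV)
Q_BB, SIGMA = 2527.5, 3.0  # illustrative peak position and resolution (keV)

def gaussian(e, mean, sigma):
    return np.exp(-0.5 * ((e - mean) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def upper_limit_90(energies, n_bkg_true, grid=np.linspace(0.0, 30.0, 301)):
    """Flat-prior Bayesian 90% upper limit on the expected signal counts,
    with the background yield fixed to its true value for simplicity."""
    width = ROI[1] - ROI[0]
    log_post = np.array([np.sum(np.log(n_sig * gaussian(energies, Q_BB, SIGMA)
                                       + n_bkg_true / width)) - (n_sig + n_bkg_true)
                         for n_sig in grid])
    post = np.exp(log_post - log_post.max())
    cdf = np.cumsum(post) / np.sum(post)
    return grid[np.searchsorted(cdf, 0.90)]

# Draw background-only toys and take the median of the limit distribution.
n_bkg_true, n_toys = 30.0, 1000
limits = [upper_limit_90(rng.uniform(*ROI, size=rng.poisson(n_bkg_true)), n_bkg_true)
          for _ in range(n_toys)]
print("median 90% CI upper limit (signal counts):", np.median(limits))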

We perform the frequentist analysis using the Rolke method61, obtaining a lower limit on the process half-life of \(T_{1/2}^{0\nu} > 2.6\times 10^{25}\,\mathrm{yr}\) (90% CI). The profile likelihood function for Γ0ν is retrieved from the full Markov chain produced by the Bayesian analysis tool; the non-uniform priors on the systematic effects in the Bayesian fit are thus incorporated into the frequentist result as well. We extract a 90% confidence interval on Γ0ν by treating −2 times the logarithm of the profile likelihood ratio as approximately χ2-distributed with one degree of freedom. The lower limit on \(T_{1/2}^{0\nu}\) comes from the corresponding upper edge of the confidence interval on Γ0ν. Applying the same method to the toy experiments, we find a median exclusion sensitivity of \(T_{1/2}^{0\nu} > 2.9\times 10^{25}\,\mathrm{yr}\).
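A minimal sketch of the interval extraction (hypothetical inputs, not the CUORE implementation): given a grid of Γ0ν values and the corresponding profiled −2 log-likelihood, the 90% confidence interval is the region where the curve stays within the χ2(1 d.o.f.) quantile of its minimum, and the upper edge converts into a half-life limit through T1/2 = ln2/Γ0ν.

# Minimal sketch of a profile-likelihood confidence interval (hypothetical inputs).
import numpy as np
from scipy.stats import chi2

def profile_interval(gamma_grid, neg2_log_profile, cl=0.90):
    """Interval where -2*log(profile likelihood ratio) stays below the
    chi-square(1) quantile for the requested confidence level."""
    threshold = chi2.ppf(cl, df=1)               # about 2.71 for a 90% CL
    delta = neg2_log_profile - neg2_log_profile.min()
    inside = gamma_grid[delta <= threshold]
    gamma_low, gamma_up = inside.min(), inside.max()
    t_half_lower_limit = np.log(2.0) / gamma_up  # lower limit on the half-life
    return gamma_low, gamma_up, t_half_lower_limit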
