May 5, 2024
Anthropogenic fingerprints in daily precipitation revealed by deep learning – Nature

DD model for climate-change detection

The DD model, which refers to the CNN model for detecting climate-change signals embedded in daily precipitation anomalies, comprises an input layer, five convolution layers, two pooling layers, one fully connected layer and an output layer (Extended Data Fig. 2). The size of the convolution kernel, which extracts key features from the input to produce feature maps, is 3 × 3. Spatial pooling was performed after the first two convolution processes by using 2 × 2 max pooling with a stride of 2. L2 regularization was applied to minimize overfitting (refs. 24,51).

The DD model accepts gridded data of normalized daily precipitation anomalies as an input variable. These anomalies were determined by subtracting the daily climatology for 1980–2010 and then normalizing by dividing by the longitudinally averaged standard deviation of the daily precipitation anomalies at the corresponding latitude during the same period. The input variable has dimensions of 160 × 55 (2.5° × 2.5° resolution over 0°–400° E, 62.5° S–76.5° N). To properly consider the precipitation pattern around 360° (0°) E, the data were longitudinally extended by concatenating 0°–360° E and 360°–400° E. Through five convolutional and two max-pooling processes, the horizontal dimension of the feature map is reduced to 40 × 14. As the last convolutional layer uses 16 convolutional filters, the final feature map has 8,960 elements (that is, 40 × 14 × 16). Each element of the final feature map is connected to the first dense layer with 32 neurons and, finally, the first dense layer is connected to the second dense layer with a single neuron, which outputs a scalar value representing the AGMT anomaly of the corresponding year. The variability of the estimated annual mean AGMT anomaly was matched to the observed data to avoid the influence of systematic differences between the training and testing samples. Note that this post-processing did not affect the detection results, as both the test statistics (that is, the internal variability of the estimated AGMT) and the detection metric (that is, the AGMT on any specific day) were modified to the same degree.
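
For concreteness, the input preprocessing described above could be sketched in NumPy as follows; the array names, shapes and the handling of the normalization period are illustrative assumptions rather than the published implementation.

```python
import numpy as np

def preprocess_daily_precip(precip, clim_doy, doy_index):
    """Sketch of the normalized-anomaly input construction.

    precip    : (n_days, n_lat, n_lon) raw daily precipitation on a 2.5 deg grid
    clim_doy  : (365, n_lat, n_lon) daily climatology for 1980-2010
    doy_index : (n_days,) day-of-year index (0-364) for each sample
    """
    # 1. Daily anomalies: subtract the 1980-2010 daily climatology.
    anom = precip - clim_doy[doy_index]

    # 2. Normalize by the longitudinally averaged standard deviation at each
    #    latitude. Here the standard deviation is computed from the supplied
    #    anomalies; in the paper it is computed for the 1980-2010 period.
    std_by_lat = anom.std(axis=(0, 2), keepdims=True)   # shape (1, n_lat, 1)
    anom_norm = anom / std_by_lat

    # 3. Extend the map in longitude by appending the 0-40E columns after
    #    360E so that patterns around 0E/360E stay contiguous.
    n_wrap = int(40 / 2.5)                               # 16 extra columns
    return np.concatenate([anom_norm, anom_norm[:, :, :n_wrap]], axis=2)
```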

We generated five ensemble members with different random initial weights and defined the ensemble-averaged AGMT as the final forecast. The Xavier initialization technique was applied to initialize weights and biases (ref. 52). Tangent hyperbolic and sigmoid functions were used as the activation functions for the convolutional and fully connected layers, respectively. Adam optimization was applied as the gradient-descent method and the mean absolute error was applied as the loss function (ref. 53).
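
A minimal Keras sketch of the architecture and training configuration described above is given below. The filter counts of the first four convolutional layers, the L2 regularization strength and the linear activation of the output neuron are assumptions not stated in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_dd_model(l2=1e-4):
    """Illustrative sketch of the DD architecture (not the published code)."""
    inputs = tf.keras.Input(shape=(55, 160, 1))              # lat x lon map

    x = inputs
    n_filters = [32, 32, 32, 32, 16]                          # last layer: 16 filters
    for i, nf in enumerate(n_filters):
        x = layers.Conv2D(nf, (3, 3), padding="same", activation="tanh",
                          kernel_initializer="glorot_uniform",  # Xavier init
                          kernel_regularizer=regularizers.l2(l2))(x)
        if i < 2:                                             # pooling after the first
            x = layers.MaxPooling2D((2, 2), strides=2,        # two convolutions only
                                    padding="same")(x)

    x = layers.Flatten()(x)                                   # 14 x 40 x 16 = 8,960 features
    x = layers.Dense(32, activation="sigmoid",
                     kernel_regularizer=regularizers.l2(l2))(x)
    outputs = layers.Dense(1)(x)                              # scalar AGMT anomaly

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae")               # Adam + mean absolute error
    return model

# Five members with different random initial weights; the final estimate is
# their ensemble mean.
ensemble = [build_dd_model() for _ in range(5)]
```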

Natural variability estimation

The natural variability ranges of the estimated AGMT and EM days are measured using a bootstrap method. First, the AGMT and the fractional EM days (that is, the number of EM days/365) are calculated for each year using the daily precipitation output of the CESM2 historical ensemble simulations for 1850–1950. Then, the 97.5th percentile values of the 8,080 total cases (that is, 101 years × 80 ensemble members) are estimated, which correspond to the upper bound of the 95% confidence range of the natural variability. The resulting 97.5th percentile values are 0.42 °C and 10.9% for AGMT and fractional EM days, respectively.
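
The percentile calculation amounts to the following; the placeholder arrays stand in for the yearly AGMT estimates and fractional EM days derived from the CESM2 historical simulations.

```python
import numpy as np

# Placeholder member-by-year arrays (80 members x 101 years); the real values
# come from the CESM2 historical simulations for 1850-1950.
agmt = np.zeros((80, 101))       # estimated AGMT anomaly per year (deg C)
em_frac = np.zeros((80, 101))    # fractional EM days per year (%)

# Upper bound of the 95% natural-variability range: the 97.5th percentile
# over all 8,080 member-years.
agmt_upper = np.percentile(agmt.ravel(), 97.5)
em_upper = np.percentile(em_frac.ravel(), 97.5)
```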

The natural variability ranges of the linear trends in the EM days (Fig. 1c), the estimated AGMT (Fig. 2a,b) and the precipitation variability (Fig. 4a,c) are also defined using a bootstrap method. We first sample 20-year segments from the CESM2 LE simulations during 1850–1950, with a 10-year interval between the initial years of the segments. With nine segments per ensemble member, a total of 720 (9 × 80) 20-year segments are obtained. Similarly, a total of 960 samples of 41-year segments are constructed with a 5-year interval. Then, linear trends of the EM days, the estimated AGMT and the precipitation variability are calculated for each 20-year or 41-year segment. Finally, the 2.5th and 97.5th percentile values are defined as the 95% two-tailed confidence interval of the natural variability.
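
A sketch of this segment-based trend bootstrap is given below, assuming the annual values are arranged as a members × years array; the function interface is illustrative.

```python
import numpy as np

def trend_natural_range(annual_series, seg_len=20, step=10):
    """Return the 2.5th and 97.5th percentiles of segment linear trends.

    annual_series : (n_members, n_years) annual values (e.g. EM days) from the
                    1850-1950 CESM2 LE simulations.
    seg_len=20, step=10 gives nine segments per member for a 101-year series,
    that is, 720 trends across 80 members.
    """
    n_members, n_years = annual_series.shape
    trends = []
    for start in range(0, n_years - seg_len + 1, step):
        seg = annual_series[:, start:start + seg_len]
        t = np.arange(seg_len)
        for member in seg:
            trends.append(np.polyfit(t, member, 1)[0])   # linear trend of one segment
    return np.percentile(trends, [2.5, 97.5])            # 95% two-tailed range
```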

Occlusion sensitivity

Occlusion sensitivity is used to quantify the relative importance of each grid point when deriving an output variable (ref. 43). The occlusion sensitivity O(t, x, y) is a three-dimensional tensor incorporating time (t), longitude (x) and latitude (y) and is calculated using the following equation:

$$O\left(t,x,y\right)=\hat{y}-D\left[P\left(t,x,y\right)\ast Z\left(7,7\right)\right].$$

Here ∗ is the horizontal convolution operator, D and $\hat{y}$ denote the DD model and the estimated AGMT with the original input data P(t, x, y), respectively, and Z(7, 7) denotes an occluding mask of 7 × 7 grid points filled with zeros. A different patch size of 5 × 5 grid points is used for a sensitivity test. The occlusion sensitivity is plotted at the central grid point of the corresponding grid box. To maintain the original size of the input map, the edge of the map is filled with zeros (that is, zero padding).
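
One way to realize this occlusion calculation for a single daily map is sketched below in NumPy; the model interface (a callable returning the estimated AGMT for one map) and the per-grid-point loop are illustrative assumptions.

```python
import numpy as np

def occlusion_sensitivity(model, x, patch=7):
    """Occlusion map for one normalized precipitation anomaly field x (n_lat, n_lon)."""
    n_lat, n_lon = x.shape
    half = patch // 2
    y_hat = float(model(x[None, :, :, None]))             # output with the original input

    occ = np.zeros_like(x)
    padded = np.pad(x, half)                               # zero padding keeps the map size
    for i in range(n_lat):
        for j in range(n_lon):
            occluded = padded.copy()
            occluded[i:i + patch, j:j + patch] = 0.0       # zero-fill a patch centred at (i, j)
            trimmed = occluded[half:half + n_lat, half:half + n_lon]
            occ[i, j] = y_hat - float(model(trimmed[None, :, :, None]))
    return occ
```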

Ridge regression method

The ridge regression method is used to estimate the coefficients of multiple regression models in which the independent variables are highly correlated. The loss function of ridge regression, with samples indexed by i and regression coefficients indexed by j, is defined by the following equation (ref. 24):

$$\mathrm{Loss}=\sum_{i=1}^{M}\left(y_{i}-\sum_{j=0}^{P}w_{j}x_{i,j}\right)^{2}+\lambda\sum_{j=0}^{P}w_{j}^{2},$$

in which $x_{i,j}$ and $y_{i}$ denote the input and label data, respectively. $w_{j}$ indicates the regression coefficient and $\lambda\sum_{j=0}^{P}w_{j}^{2}$ is the regularization term based on the sum of the squared regression coefficients (that is, the L2 norm). M and P denote the number of samples and the number of regression coefficients, respectively. The regularization suppresses overfitting by preventing the regression coefficients from becoming excessively large. λ is a hyperparameter that determines the penalty intensity; it was set to 0.1 after several experiments to minimize the loss values for the validation dataset.
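
For reference, the coefficients minimizing this loss have a closed-form solution; the sketch below assumes the intercept column is included in the design matrix and is not necessarily how the regression was solved in the study.

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """Closed-form ridge regression matching the loss above.

    X   : (M, n_coef) matrix of input samples (intercept column included if desired)
    y   : (M,) labels
    lam : L2 penalty strength (lambda)
    """
    n_coef = X.shape[1]
    # w = (X^T X + lambda I)^{-1} X^T y minimizes the L2-regularized squared error.
    return np.linalg.solve(X.T @ X + lam * np.eye(n_coef), X.T @ y)
```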

Moisture budget equation for precipitation variability change

For timescales longer than a day, the zeroth-order balance in the moisture budget is found between precipitation (P) and vertical moisture advection (ref. 4):

$$P_{f}\approx -\left\langle \omega\,\partial_{p}q\right\rangle_{f},$$

(1)

in which ω and q are the vertical pressure velocity and specific humidity, respectively. $\left\langle \cdot \right\rangle =\frac{1}{g}\int_{P_{s}}^{P_{t}}\cdot \,\mathrm{d}p$ denotes the vertical integral throughout the troposphere. The subscript f denotes variations at a specific timescale derived from the time filtering. Zhang et al. (ref. 4) suggested that the column-integrated vertical moisture advection can be reasonably approximated as the advection of the low-tropospheric mean moisture ($\bar{q}_{l}$) by mid-tropospheric vertical velocity anomalies ($\omega_{m_{f}}$), hence:

$$P_{f}\approx -\frac{\omega_{m_{f}}\,\bar{q}_{l}}{g},$$

(2)

in which g is the gravitational acceleration.

Equation (2) can be modified to express the variability of precipitation and its change owing to global warming as follows:

$$\sigma\left[P_{f}\right]\approx \frac{\sigma\left[\left(\omega_{m}\right)_{f}\right]\,\bar{q}_{l}}{g};$$

(3)

$$\Delta\sigma\left[P_{f}\right]=\frac{1}{g}\left[\Delta\sigma\left[\left(\omega_{m}\right)_{f}\right]\,\bar{q}_{l0}+\sigma\left[\left(\omega_{m0}\right)_{f}\right]\,\Delta\bar{q}_{l}\right],$$

(4)

in which σ denotes standard deviation and Δ denotes the difference between the historical and future warming periods. The subscript 0 indicates the values from the historical period.

According to equation (4), well-known models for global precipitation change are similarly applicable to the high-frequency precipitation variability changes. The historical moisture climatology term (that is, $\bar{q}_{l0}$) on the right-hand side refers to the ‘wet-gets-more-variable’ paradigm and the historical precipitation variability term (that is, $\sigma[(\omega_{m0})_{f}]$) refers to the ‘variable-gets-more-variable’ paradigm. Given the strong coupling between the low-level moisture and the sea-surface temperature, the climatological moisture change term (that is, $\Delta\bar{q}_{l}$) presumably implies the ‘warmer-gets-more-variable’ paradigm.
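
The two right-hand-side terms of equation (4) could be evaluated from model output roughly as follows; the variable names, array shapes and the omission of the explicit time filtering are simplifying assumptions.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m s-2)

def variability_change_terms(omega_m_hist, omega_m_fut, q_l_hist, q_l_fut):
    """Decompose the precipitation-variability change following equation (4).

    omega_m_* : (n_time, n_lat, n_lon) filtered mid-tropospheric vertical
                velocity for the historical and future periods
    q_l_*     : (n_lat, n_lon) climatological low-tropospheric moisture
    """
    sigma_hist = omega_m_hist.std(axis=0)              # sigma[(omega_m0)_f]
    d_sigma = omega_m_fut.std(axis=0) - sigma_hist     # Delta sigma[(omega_m)_f]
    d_q = q_l_fut - q_l_hist                           # Delta q_l

    term_circulation = d_sigma * q_l_hist / g          # circulation-variability change x historical moisture
    term_moisture = sigma_hist * d_q / g               # historical circulation variability x moisture change
    return term_circulation, term_moisture
```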

Satellite and reanalysis dataset

We analysed 21 years (2001–2020) of daily mean satellite-observed precipitation data from IMERG version 6 (ref. 30) and GPCP version 3.2 (ref. 31). Daily gauge-satellite-reanalysis merged precipitation was obtained from MSWEP version 2.8 for the period from 1980 to 2020 (ref. 32). Daily reanalysis precipitation data from ERA5, spanning 1980–2020, were also used (ref. 33). Data were interpolated to a 2.5° × 2.5° horizontal grid and a domain over 0°–360° E and 61.25° S–76.25° N was used. Daily gauge-based precipitation at a horizontal resolution of 0.25° × 0.25° from the National Oceanic and Atmospheric Administration (NOAA) CPC, spanning 1960 to 2020 (ref. 48), was used for a domain over the continental USA (126.25°–67.25° W, 20°–49.5° N). The AGMT was obtained from HadCRUT5 data (ref. 54).
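
The regridding and domain selection could be done, for example, with xarray as sketched below; the file name, variable name and coordinate names are hypothetical, and each product would need its own reader and unit handling.

```python
import numpy as np
import xarray as xr

# Hypothetical daily precipitation file with coordinates named 'lat' and 'lon'.
ds = xr.open_dataset("daily_precip.nc")

# Interpolate to the common 2.5 x 2.5 degree grid over the analysis domain.
precip_regridded = ds["precip"].interp(
    lat=np.arange(-61.25, 76.26, 2.5),
    lon=np.arange(0.0, 360.0, 2.5),
)
```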

The PDF of the daily precipitation anomalies with respect to their percentiles is calculated for each decade. After sorting the daily precipitation anomalies over the selected regions for the whole period (that is, 1980–2020) by magnitude, the precipitation values at every tenth percentile are defined. The PDF of precipitation anomalies for each decade is calculated in the same way and then compared with the reference PDF estimated from the whole period at each percentile (Fig. 3).
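
A sketch of this percentile-based PDF comparison is given below, assuming a one-dimensional series of regional daily anomalies; the binning details are illustrative.

```python
import numpy as np

def decadal_pdf_change(daily_anom, years):
    """Compare decadal PDFs against percentile bins from the full 1980-2020 record.

    daily_anom : (n_days,) regional daily precipitation anomalies
    years      : (n_days,) calendar year of each sample
    """
    # Reference bin edges at every tenth percentile of the whole period.
    edges = np.percentile(daily_anom, np.arange(10, 100, 10))

    pdfs = {}
    for start in range(1980, 2020, 10):
        mask = (years >= start) & (years < start + 10)
        bin_idx = np.searchsorted(edges, daily_anom[mask])        # 0..len(edges)
        counts = np.bincount(bin_idx, minlength=len(edges) + 1)
        pdfs[start] = counts / counts.sum()                       # decadal PDF per percentile bin
    return edges, pdfs
```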

CESM2 LE simulations

To train the DD model, we used a climate model dataset from the CESM2 LE, which shows state-of-the-art skill in simulating the characteristics of daily precipitation at various timescales (ref. 6). All the ensemble members that provide daily precipitation output were used (that is, 80 ensemble members). With the aid of tens of realizations of the historical and global-warming-scenario simulations, the total number of samples used to train our DD model is larger than any other model simulation framework can provide, which is advantageous for training the deep-learning model. The simulations cover the period from 1850 to 2100; data from 1850 to 2014 were obtained from the historical simulations and the rest from the SSP3-7.0 scenario simulations. A domain over 0°–360° E and 61.25° S–76.25° N was used and the horizontal resolution was coarsened to 2.5° × 2.5°. The input data were prescribed in the form of normalized anomalies: the modelled daily climatology from 1980 to 2010 was subtracted from the raw precipitation fields and the result was then divided by the longitudinally averaged standard deviation at the corresponding latitude during the same period.

Because the total number of samples was 7,329,200 days (80 members × 251 years × 365 days), which exceeded the limit of our computing resources, we subsampled the training and validation datasets by randomly selecting one year from each decade. Thus, the total number of days of training data was reduced to 730,000. For the validation dataset, we randomly selected a different year from each decade and then randomly selected 73 days from each selected year. The total number of days of validation data used was 146,000.
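
The decade-wise subsampling could be implemented along the following lines; the random seed, the treatment of the final partial decade and the variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# For each decade of the 1850-2100 simulations, one randomly chosen year goes
# to training (all 365 days) and a different year to validation (73 random days).
years = np.arange(1850, 2101)
train_years, valid_years = [], []
for start in range(1850, 2101, 10):
    decade = years[(years >= start) & (years < start + 10)]
    if len(decade) < 2:                      # skip the single-year final "decade" (2100)
        continue
    pick = rng.choice(decade, size=2, replace=False)
    train_years.append(pick[0])
    valid_years.append(pick[1])
```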
