Abstract
Network reconstruction is the task of inferring the unseen interactions between elements of a system, based only on their behaviour or dynamics. This inverse problem is in general ill-posed and admits many solutions for the same observation. Nevertheless, the vast majority of statistical methods proposed for this task—formulated as the inference of a graphical generative model—can only produce a ‘point estimate’, i.e. a single network considered the most likely. In general, this gives only a limited characterization of the reconstruction, since uncertainties and competing answers cannot be conveyed, even if their probabilities are comparable, while being structurally different. In this work, we present an efficient Markov-chain Monte–Carlo algorithm for sampling from posterior distributions of reconstructed networks, which is able to reveal the full population of answers for a given reconstruction problem, weighted according to their plausibilities. Our algorithm is general, since it does not rely on specific properties of particular generative models, and is especially suited for the inference of large and sparse networks, since in this case an iteration can be performed in time O(N log² N) for a network of N nodes, instead of O(N²), as would be the case for a more naïve approach. We demonstrate the suitability of our method in providing uncertainties and consensus of solutions (which provably increases the reconstruction accuracy) in a variety of synthetic and empirical cases.
1 Introduction
Many complex systems are governed by interactions that cannot be easily observed directly. For example, while we can use testing to measure individual infections during the spread of an epidemic, measuring the direct transmission contacts that caused them is significantly harder [1,2]. Similarly, we can measure the abundance of different species in an ecosystem, or the level of gene expression in a cell, with relatively simple methodologies (e.g. via qPCR DNA amplification or DNA microarrays), but determining directly the interactions between any two species (e.g. mutualism or competition) [3,4] or any two genes [5,6] is significantly more cumbersome. Another prominent example is the human brain, which can have its behaviour harmlessly probed by an fMRI scan, but its direct neuronal structure cannot be measured non-invasively. In all these cases, network reconstruction needs to be performed based on the indirect information available, if we wish to understand how the system functions.
Several different methods have been proposed for the task of network reconstruction. A significant fraction of them are heuristic in nature and attempt to determine the existence of an edge from pairwise correlations of the activities of two nodes [7–12]. These methods are fundamentally limited in two important ways. Firstly, they conflate correlation with conditional dependence or causation, since two nodes may be strongly correlated even if they are not directly connected (e.g. if they share a neighbour in common). Secondly, with these methods, the existence of an edge is decoupled from any explicit modelling of the dynamics or behaviour of the system, which severely hinders the interpretability of the reconstruction—after all, how much would we have really uncovered about a network system if we do not know how an edge contributes to its function? [13]. Another prominent class of methods is based on the definition of explicit generative probabilistic models for the behaviour of a system, conditioned on a network of interactions operating as the parameters of this model [2,14–16]. In this case, the reconstruction amounts to the statistical inference of these parameters from data. 
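The first limitation can be illustrated with a minimal synthetic sketch (plain numpy, not tied to any specific method discussed here): a node x drives both y and z, which are not connected to each other, yet y and z remain strongly correlated unless we condition on x.

```python
import numpy as np

rng = np.random.default_rng(3)

# x is a common neighbour of y and z; y and z share no direct edge.
M = 100_000
x = rng.normal(size=M)
y = x + 0.5 * rng.normal(size=M)
z = x + 0.5 * rng.normal(size=M)

# Marginal correlation between y and z is large despite no direct link.
r_yz = np.corrcoef(y, z)[0, 1]

# Partial correlation controlling for x (regression residuals) vanishes.
ry = y - x * (y @ x) / (x @ x)
rz = z - x * (z @ x) / (x @ x)
r_yz_given_x = np.corrcoef(ry, rz)[0, 1]
```

Here the marginal correlation is close to 0.8, while the partial correlation is close to zero, showing why thresholding raw correlations conflates indirect and direct dependence.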
Within a Bayesian workflow [17], this inferential approach offers a series of advantages, including: (i) A more principled methodology, coupling tightly theory with data, and relying on explicit—and hence scrutinizable—modelling assumptions; (ii) non-parametric implementations [18] dispense with the need to make ad hoc choices, such as arbitrary thresholds, total number of inferred edges, etc.; (iii) the inherent connection with the minimum description length (MDL) principle [19,20] provides a robust framework for model selection [18], according to the combined quality of fit and parsimony of the models considered, such that different hypotheses can be directly compared; and finally, (iv) recent advances [18,21] allow for scalable, sub-quadratic reconstruction of large networks, making the overall approach practical.
However, despite these advantages, so far the literature on network reconstruction deals almost exclusively with point estimates, i.e. most of the methods proposed can only produce a single network, considered to be the most likely one,1 and do not allow for uncertainty quantification—arguably one of the most desirable and important features of an inferential analysis. In other words, these point estimates contain no information about possible alternatives, how different and plausible they are, and hence how confident we can be about the point estimate in the first place. Besides this limitation that point estimation imposes on interpretability, its accuracy is also in general inferior to estimates that attempt to summarize the consensus over many possible solutions, weighted according to their plausibility [27].
One important reason why point estimation is predominantly employed is its relative algorithmic efficiency, when compared with approaches based on posterior averages. This is the main issue we address in this work, where we develop a scalable algorithm for posterior sampling of reconstructed networks that performs substantially better for larger problem instances than the naïve baseline. More specifically, whereas a naïve implementation of a sampling scheme would take time O(N²) to reconstruct a sparse network of N nodes, our algorithm is capable of doing the same in time O(N log² N).
This paper is organized as follows. In §2, we describe our overall inferential framework and in §3 our posterior sampling approach. In §4, we compare the performance of posterior sampling with point estimates for synthetic examples. In §5, we do the same for empirical data, where we make also a comparison with correlation-based reconstructions. We finalize in §6 with a discussion.
2 Inferential framework
(a) Monte–Carlo sampling
3 Posterior sampling and the quadratic mixing problem
An appealing property of the MCMC approach is that it obviates the computation of the usually intractable normalization constant that completes the definition of the posterior distribution, since this quantity appears both in the numerator and denominator of equation (3.1), and thus does not affect the acceptance rate. Therefore, using this scheme, only the joint likelihood is needed to asymptotically sample from the posterior.
However, the efficacy of the overall approach hinges crucially on the choice of the proposal distributions, since not all valid choices will lead to the same mixing time, i.e. the number of steps needed to reach the stationary distribution from some initial state. An efficient proposal distribution will result in fast mixing, allowing sufficiently many independent samples from the target distribution to be obtained with relatively short MCMC runs.
Instead, an efficient proposal would choose entries according to their probability of leading to a successful move. A successful move proposal is one that combines two properties: (i) it gets accepted; and (ii) the new value of the entry is sufficiently different from the previous one—in particular, a zero entry becomes non-zero, and vice versa. This means that an efficient entry proposal needs to be able to estimate the typical edge set—in other words, we need to be able to estimate, beforehand, which entries have sufficiently high marginal posterior probabilities. If this succeeds, we would be able to update all typical edges in time proportional to their number, significantly reducing the mixing time when compared to the uniform entry proposal of equation (3.3). We describe our approach to achieve this in the following.
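As a rough illustration of how a non-uniform entry proposal plugs into a Metropolis–Hastings step, consider the following sketch; all names and data structures here are hypothetical placeholders, not the reference implementation, and the posterior is recomputed from scratch only for brevity (a real implementation would evaluate the change incrementally).

```python
import numpy as np

rng = np.random.default_rng(42)

def mh_sweep(W, entries, probs, log_posterior, propose_value):
    """One Metropolis-Hastings sweep over candidate matrix entries.

    `entries` is a list of (i, j) candidate positions and `probs` their
    proposal probabilities; both stand in for the typical-edge-set
    estimate described in the text.
    """
    for _ in range(len(entries)):
        k = rng.choice(len(entries), p=probs)
        i, j = entries[k]
        old = W[i][j]
        logp_old = log_posterior(W)
        W[i][j] = propose_value(old)
        logp_new = log_posterior(W)
        # a symmetric value proposal is assumed, so the Hastings ratio
        # reduces to the posterior ratio
        if np.log(rng.random()) >= logp_new - logp_old:
            W[i][j] = old  # reject: restore the previous value
    return W
```

The key point of the text is that the quality of `entries` and `probs` (which entries are even considered, and how often) dominates the mixing time, independently of the value proposal.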
(a) Estimating the typical edge set
- (i)
At each iteration, given an initial estimate of the weight matrix, we find the set containing the entries that most increase or least decrease the posterior when updated, with the size of this set being a parameter of the algorithm.
- (ii)
The entries of this set are updated in sequence so as to maximize the posterior, yielding a new estimate.
- (iii)
If the difference between successive estimates falls below some tolerance value, we return the current estimate; otherwise we continue from step (i).
The above algorithm does not guarantee that all members of the typical set are found. To increase our chances of finding the entire set, we initialize the MCMC with the MAP estimate, and after a sweep comprising consecutive proposals, we compute a new set according to the same procedure used in step (i) of the above algorithm, and add it to our typical set estimate. Note that since this changes the proposal probabilities, this procedure invalidates detailed balance, and therefore will not lead to a correct sampling of the target distribution. Because of this, we perform this update only for an initial number of sweeps, and afterwards we continue sampling with the final set kept fixed.
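Steps (i)–(iii) above can be sketched schematically as follows, using brute-force scoring of all entries purely for illustration (the scalable search of [21] avoids this enumeration; the function names and structure are illustrative only):

```python
import numpy as np

def estimate_typical_set(log_posterior, N, kappa, tol=1e-6, max_iter=100):
    """Greedy sketch of steps (i)-(iii): repeatedly find the kappa
    entries whose update most increases the posterior, apply the
    improving ones in sequence, and stop once no candidate improves
    by more than `tol`. Binary weights are used for simplicity."""
    W = np.zeros((N, N))
    logp = log_posterior(W)
    for _ in range(max_iter):
        # step (i): score every entry by the change from toggling it
        scores = {}
        for i in range(N):
            for j in range(i + 1, N):
                W[i, j] = W[j, i] = 1 - W[i, j]
                scores[(i, j)] = log_posterior(W) - logp
                W[i, j] = W[j, i] = 1 - W[i, j]
        best = sorted(scores, key=scores.get, reverse=True)[:kappa]
        # step (ii): update the selected entries in sequence
        for i, j in best:
            W[i, j] = W[j, i] = 1 - W[i, j]
            new_logp = log_posterior(W)
            if new_logp > logp:
                logp = new_logp
            else:
                W[i, j] = W[j, i] = 1 - W[i, j]  # revert non-improving move
        # step (iii): stop when no further improvement is possible
        if max(scores.values()) < tol:
            break
    return W
```

On a toy posterior that rewards matching a planted adjacency matrix, this loop recovers the planted edges exactly; the point of the text is that the same idea can be made subquadratic.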
Results of MCMC runs for the reconstruction of an Erdős–Rényi network, with weights sampled from a normal distribution serving as the couplings of a kinetic Ising model (see appendix E), based on parallel transitions from a random initial state. Panel (a) shows the cumulative recall of the typical set, i.e. the fraction of all entries with a posterior probability above a particular value that have been found, for several values of the search period. Panel (b) shows the Jaccard similarity between samples generated by the MCMC and the true value, with and without the estimation of the typical edge set, and for various search periods. Panel (c) shows the same kinds of MCMC runs, but with an initial state consisting of an empty network (the inset shows a zoom into the high-similarity region). Panel (d) shows the autocorrelation function for the similarity values of the runs in panel (b), discarding the initial transient before equilibration.
(i) Searching for ‘nearby’ edges
Illustration of the proposed ‘nearby’ updates according to equation (3.10). The black edges correspond to the non-zero entries of the weight matrix at some point of the algorithm, and the green edges are zero entries within a short distance of an existing edge, which would be proposed for an update. Edges between the different components will never be proposed, for any value of the search distance.
Panel (b) shows the autocorrelation time as a function of the number of nodes N, for a target distribution according to equation (3.11), with the planted network generated as described in the text, and considering different combinations of the move proposals, as indicated in the legend, both in the situation where the typical network is connected and where it is disconnected. The dashed line indicates a linear slope. Panel (a) shows an illustration of the connected and disconnected cases, with black edges representing those currently being sampled, and the dashed edges those that are not.
(ii) Edge weights, node values and community structure
In the previous sections, we have focused on the move proposals that involve the selection of entries in the matrix to be updated, but not on the proposals to update the actual value of the entry selected, since the former is the most crucial for the algorithmic performance. For the value updates, conventional choices can in principle be used, such as sampling from a normal distribution. In appendix B we describe an alternative approach based on bisection sampling that we found to be efficient, and also works well with regularization schemes that rely on discretization, such as the minimum description length (MDL) formulation of [18], which we summarize in appendix A.
One feature of the MDL regularization is that it includes the stochastic block model [30] as a prior, and therefore it performs community detection as part of the reconstruction, which has been shown previously to improve the overall accuracy [31].
Furthermore, most models also include an additional set of parameters on the nodes, which also need to be updated. We have not included these parameters in our discussion so far, since they can be handled completely separately, by selecting one of them at random and using the same kinds of value updates as used for the matrix entries. Differently from the edge weights, there is no inherent algorithmic challenge in sampling these node parameters, since their number scales only linearly with the number of nodes.
Finally, in appendix C we also describe an extension of the algorithm which allows for edge replacements and swaps, that can potentially move across likelihood barriers present when discretized regularization schemes are used.
We provide a reference C++ implementation of the algorithms described here, together with documentation, as part of the graph-tool Python library [32].
4 MAP versus MP estimation with synthetic dynamics
Reconstruction performance based on the dynamics generated by the kinetic Ising model (see appendix E) on two empirical networks, where the weights are sampled from a normal distribution with mean 1/⟨k⟩ and standard deviation 0.01, with ⟨k⟩ = 2E/N being the average degree. The left panels show the results for a network of American football teams [33] (with N = 115 and E = 613), and on the right for a network of friendship between high school students [34] (with N = 291 and E = 1136). The top panels show the similarity s(W, Ŵ) between the inferred and true networks, according to the MAP and MP estimators, as indicated in the legend, as a function of the length M of the dynamics. The bottom panels show the number of edges of the inferred networks in each case. The dashed horizontal lines indicate the true value.
Besides the increased accuracy, posterior estimation can provide uncertainty quantification. We focus on this aspect when analysing the reconstruction based on empirical dynamics, in the following.
5 Empirical dynamics
Reconstruction of a zero-added Ising model based on M = 619 votes of N = 623 deputies of the lower house of the Brazilian congress. (a) Marginal edge probabilities π indicated as edge thickness and the posterior mean Ŵ as edge colors. The node pie charts indicate the marginal group memberships, inferred according to the SBM incorporated in the reconstruction, as described in [18]. (b) MP estimate Ŵ according to equation (2.15). (c) MAP point estimate W* according to equation (2.10). (d) Distribution of marginal posterior probability values π_ij across all node pairs. (e) Posterior distribution of non-zero weight values W_ij across all node pairs. (f) Distribution of node biases θ_i across all nodes i. In (e) and (f) the vertical lines correspond to the distribution obtained with the MAP point estimate.
The reconstruction uncovers a network ensemble that is divided into 11 groups of nodes who tend to vote in similar ways. As shown in figure 7, the divisions coincide very well with known party affiliations. The existence of non-zero couplings between deputies has uncertainties that vary over the entire range, indicating a very heterogeneous mixture of certain and uncertain edges. The coupling strengths themselves are distributed around four typical values, whereas the node biases are centred closely around a typically small but positive value, indicating that deputies have only a very small tendency to vote ‘yes’ in the absence of any interaction with their neighbours. The increased accuracy that the marginal estimate provides is noticeable when compared to the MAP estimate of figure 5c, for which only eight groups can be identified, with three groups in the government coalition being merged together (corresponding to the four groups in the upper left of figure 5b). The tenuous intra-coalition organization is only visible when the more detailed analysis from posterior sampling is performed, and implies that the observed dynamics cannot be well captured by a single network—at least not with the dynamical model used. The similarity between both estimates shows that, while there is a substantial agreement between them, the disagreement is not negligible (unlike in the sufficient-data limit of figure 4), and indicates how posterior sampling can be important to uncover uncertainties in the analysis of empirical data.
Our approach allows us to query the individual marginal distributions for every node pair, giving a substantial amount of information about the reconstruction when compared to the MAP point estimate, as can be seen in figures 5e and 5f.
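Given a set of posterior samples produced by the MCMC, the marginal quantities of this kind can be estimated by simple averaging; a minimal post-processing sketch, assuming samples stored as dense numpy arrays purely for illustration:

```python
import numpy as np

def marginal_summaries(samples):
    """Compute marginal edge probabilities and mean weights from a list
    of posterior samples of the weight matrix W. The MP point estimate
    places an edge wherever the marginal probability exceeds 1/2.
    All names here are illustrative."""
    S = np.stack(samples)                       # shape: (samples, N, N)
    nonzero = (S != 0)
    pi = nonzero.mean(axis=0)                   # marginal probabilities
    # mean weight conditioned on the edge being present
    mean_w = S.sum(axis=0) / np.maximum(nonzero.sum(axis=0), 1)
    W_mp = np.where(pi > 0.5, mean_w, 0.0)      # maximum-marginal estimate
    return pi, mean_w, W_mp
```

Separating the marginal probability from the conditional mean weight is exactly what allows existence uncertainty and weight magnitude to be reported independently, as in the figures.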
Reconstruction of a multivariate Gaussian model based on M = 2516 log-returns of N = 6369 US stocks in the period from 2014 to 2024. (a) Marginal edge probabilities π indicated as edge thickness and the posterior mean Ŵ as edge colors. The node colors indicate the maximum marginal group memberships, inferred according to the SBM incorporated in the reconstruction, as described in [18]. (b) Distribution of marginal posterior probability values π_ij across all node pairs. (c) Posterior distribution of non-zero weight values W_ij across all node pairs. The vertical lines correspond to the distribution obtained with the MAP point estimate.
Correspondence between the inferred partition using the built-in SBM in our reconstruction (left) and available metadata on the nodes (right), for (a) the Brazilian congress, with the metadata being the party affiliation of the deputies, and (b) US stock prices, with the metadata being the industrial sector, in both cases as indicated in the legend.
(a) Comparison between posterior probabilities, weight magnitudes, and pairwise correlations
Nevertheless, we might posit that there are situations where these reconstruction approaches yield similar results. For example, for a sparse, homogeneous true network, with all edges having the exact same weight, and all nodes having the same degree—such that the observed correlation between all true neighbours is approximately the same—it could be that the small drop in correlation between first and second neighbours is sufficient to discriminate between true and false edges.
Scatter plot between mean posterior weights or posterior probabilities and a type of pairwise correlation, i.e. either the covariance, Pearson correlation or mutual information, for every node pair, for (a) the Brazilian congress data, and (b) the US stock prices data. The connected orange points correspond to binned averages.
Scatter plot of mean posterior weights versus posterior probabilities, for every node pair, for (a) the Brazilian congress data, and (b) the US stock prices data. The connected orange points correspond to binned averages for positive weights, and the blue points for negative weights.
Accuracy according to the fraction of highest-scoring node pairs included in the reconstruction, for the Brazilian congress data, for different kinds of ‘scores’ attributed to the node pairs. The left plot shows the Jaccard similarity, while the right shows the ‘true positive’ rate, taking the marginal probability as a reference.
Left: first 100 node pairs with the largest values of mutual information, Pearson correlation, covariance and marginal probability, for the Brazilian congress data. The layout of the nodes is the same as in figure 5. Right: marginal weight distribution of the 10 highest-ranking node pairs according to the same scores as in the left panel, as well as the posterior average weight. The upper right corners show the corresponding scores.
From these comparisons, we can conclude that posterior sampling not only provides valuable uncertainty quantification, but also a completely different, and more accurate, reconstruction result than comparatively crude, but often employed heuristics based on thresholding of correlations.
6 Conclusion
We have described an efficient method to sample from posterior distributions of networks that allows us to perform uncertainty quantification for the problem of network reconstruction, as well as to produce consensus estimates from marginal distributions.
Our method does not rely on specific properties of particular generative models used for reconstruction, nor on the prior distribution used for their parameters. We showed how our method can be used together with a sophisticated regularization scheme that uncovers the most appropriate number of edges and weight distribution in a manner consistent with the statistical evidence available in the data.
We have demonstrated on synthetic and empirical examples how posterior sampling can improve the accuracy of network reconstructions, and uncovers the entire range of possible reconstructions weighted according to their plausibility as an account of how the data has been generated.
A comparison with heuristics based on the thresholding of pairwise correlations revealed the relative advantage of performing an inferential reconstruction, since besides providing a generative model, uncertainty estimates, and significantly increased accuracy, it is capable of distinguishing between the probability of existence of an edge and its weight magnitude, which otherwise would be conflated.
Since our methodology is easily adaptable to other generative models, it remains to be explored how it can be employed with models more realistic than the relatively simple ones considered here, and how the underlying Bayesian framework can be leveraged to perform model selection, to investigate the fundamental limits of network reconstruction, and to obtain predictive statements about the unseen behaviour and the outcome of interventions in network systems, based solely on indirect non-network data.
Data accessibility
This article has no additional data.
Declaration of AI use
I have not used AI-assisted technologies in creating this article.
Authors' contributions
T.P.: conceptualization, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, supervision, validation, visualization, writing—original draft, writing—review and editing
Conflict of interest declaration
I declare I have no competing interests.
Funding
This work has been funded by the Vienna Science and Technology Fund (WWTF) and by the State of Lower Austria (grant no. 10.47379/ESS22032).
Appendix
Appendix A. MDL regularization and joint SBM inference
The proposals for the partitions are done according to the merge-split algorithm described in [39]. Although it is straightforward to introduce move proposals for the remaining hyperparameters as well, we found that the results are often indistinguishable from simply keeping them at fixed values, since they are not very sensitive.
For generative models which have additional node parameters, e.g. local fields of the Ising model (see appendix E), almost identical priors can be used for them, with the only exception being that zero values are allowed. See [18] for details.
Appendix B. Edge weight proposals via bisection and linear interpolation (BLI)
- (i)
We sample a new point uniformly at random from either the left or the right sub-interval of the current bracket, depending on which of the two is larger.
- (ii)
The new bracketing interval is updated to include the sampled point as its midpoint, with the old midpoint becoming one of the boundaries, if the target value at the sampled point is higher; otherwise the midpoint is preserved and the corresponding boundary is updated to the sampled point.
- (iii)
If the bracketing interval becomes smaller than a given tolerance, the search stops. Otherwise, we go back to step (i).
(a) Example target distribution and the proposal generated via the algorithm described in the main text. The circle markers and the vertical lines mark the random bisection points. (b) Average proposal distribution for increasing number of bisection steps, as shown in the legend. (c) Metropolis–Hastings (MH) acceptance rate as a function of the number of bisections.
For the specific generative models considered in the main text and in appendix E, their corresponding conditional likelihood is convex, which means that a deterministic bisection could be used instead. However, in the interest of generality, our algorithm does not rely on the convexity of the conditional likelihood, nor on other usually desirable properties such as it being differentiable or even continuous.
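The steps above can be sketched as follows for a one-dimensional target. This simplified version (uniform sampling in the larger sub-interval, re-bracketing around the higher-scoring point, a fixed tolerance) is an illustration under stated assumptions, not the reference BLI algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def bli_propose(log_f, a, b, tol=1e-3):
    """Stochastic bisection sketch of steps (i)-(iii) of appendix B:
    shrink a bracketing interval [a, b] around a high-probability
    region of log_f and return the final midpoint as the proposal."""
    m = (a + b) / 2
    while b - a > tol:
        # step (i): sample in the larger of [a, m] and [m, b]
        if m - a >= b - m:
            x = rng.uniform(a, m)
        else:
            x = rng.uniform(m, b)
        # step (ii): re-bracket around whichever point scores higher
        if log_f(x) > log_f(m):
            if x < m:
                b, m = m, x          # new bracket [a, old m], centred on x
            else:
                a, m = m, x
        else:
            if x < m:
                a = x                # midpoint kept; boundary moves to x
            else:
                b = x
    return m                         # step (iii): interval below tolerance
```

For a unimodal target, the bracket always contains the mode, so the returned midpoint lands within the tolerance of it; as the text notes, the scheme does not actually require unimodality, differentiability or continuity to remain a valid proposal.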
(a) Discrete values
When dealing with the discretized weight values considered in appendix A, special considerations are needed. Although we can easily adapt the above BLI sampling to values which are multiples of the quantization parameter, this may not yield proposals which are accepted, since most of the time the proposal will yield a new value that does not coincide with any existing category, increasing the number of discrete categories, which, per design, exerts a penalty on the likelihood. Because of this, we consider the following move types:
- (i)
New categories: BLI moves constrained to values which are multiples of the quantization parameter.
- (ii)
Old categories: BLI moves constrained to the existing categories.
- (iii)
Collective category moves: BLI moves of a single category to a new value which is a multiple of the quantization parameter, distinct from the other categories.
Furthermore, we also employ the merge-split of [39] for the distribution of the weight categories on the edges, since this can remove likelihood barriers that exist when moving one edge at a time. The only modification we use for that algorithm is that when weight categories are split and merged, the respective category values , both for old and new categories, are sampled according to the BLI algorithm described previously.
Appendix C. Updating multiple entries simultaneously: edge replacements and swaps
- (i)
A node is sampled uniformly at random.
- (ii)
A neighbour of the first node is sampled uniformly at random, with a proposal probability that accounts also for nodes with degree zero.
- (iii)
Another node is sampled according to the same probability.
- (iv)
If at least one of the nodes is repeated, the proposal is skipped.
- (v)
Otherwise, the values of the two corresponding entries of the weight matrix are swapped.
We do not analyse the effect of these move proposals in detail, but they are included in our reference implementation, and we have observed a positive effect on the mixing time for empirical networks.
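A swap proposal in the spirit of the steps above can be sketched as follows; degree-zero corrections and the exact Hastings factors are omitted, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def propose_swap(W):
    """Pick a node, one of its neighbours, and a further node, then
    swap the two corresponding entries of the symmetric matrix W.
    Returns True if a swap was performed, False if skipped."""
    N = len(W)
    u = int(rng.integers(N))
    neighbours = np.flatnonzero(W[u])
    if len(neighbours) == 0:
        return False
    v = int(rng.choice(neighbours))
    w = int(rng.integers(N))
    if len({u, v, w}) < 3:
        return False                # a node repeats: skip the proposal
    # swap entries (u, v) and (u, w), keeping W symmetric
    W[u, v], W[u, w] = W[u, w], W[u, v]
    W[v, u], W[w, u] = W[w, u], W[v, u]
    return True
```

Note that the swap conserves the number of non-zero entries, which is what allows it to move across the likelihood barriers created by discretized regularization.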
Appendix D. Parallelization
The algorithm of [21] used here to estimate the typical edge set can be performed in parallel, which often yields significant runtime improvements in multiprocessor environments. Unfortunately, MCMC algorithms in general, including the one we present to perform posterior sampling, are inherently serial, since we need to consider one move before the next one can be contemplated. Nevertheless, in our particular case partial parallelization can in fact be achieved by noting that if two edges are considered in sequence, and all their endpoint nodes are different, then their individual contributions to the model likelihood (i.e. excluding prior terms) are completely independent. Since this contribution is the most computationally demanding, taking time proportional to the total number of data samples, we can benefit from parallelization as follows:
- (i)
A set of candidate edge moves of a given size is proposed according to the current state.
- (ii)
For the subset of edges that form an independent set, i.e. those that do not share endpoints, their likelihood contributions are computed in parallel.
- (iii)
The MCMC proceeds through the proposed moves sequentially, using the pre-computed likelihood changes if they are available, otherwise they are computed as needed.
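The scheme above can be sketched in a few lines; the greedy independent-set selection and the thread pool are illustrative simplifications, not the reference C++ implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def greedy_independent_edges(proposals):
    """Select a subset of proposed edge moves sharing no endpoints, so
    that their likelihood contributions are mutually independent.
    `proposals` is a list of (i, j) node pairs."""
    used, independent = set(), []
    for i, j in proposals:
        if i not in used and j not in used:
            independent.append((i, j))
            used.update((i, j))
    return independent

def precompute_deltas(independent, delta_logL):
    """Evaluate the (hypothetical) per-edge likelihood changes in
    parallel; the serial MCMC then consumes them in order, computing
    any missing ones on demand."""
    with ThreadPoolExecutor() as pool:
        return dict(zip(independent, pool.map(lambda e: delta_logL(*e),
                                              independent)))
```

Edges excluded from the independent set simply fall back to the serial path, so correctness of the chain is unaffected by how aggressively the selection is done.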
Appendix E. Generative models
In our examples, we use three generative models: the equilibrium Ising model [16], the kinetic Ising model and a multivariate Gaussian.
In the case of the zero-valued Ising model, the normalization of equations (E 3) and (E 1) changes accordingly.
A notable exception is the literature on the reconstruction of uncertain or incomplete networks, i.e. when the data are a direct measurement of a network, but which has either been corrupted by measurement errors, or parts of it have not been measured at all. For this specific class of reconstruction problems, posterior sampling and uncertainty quantification is more commonplace [22–26]. However, despite both problems sharing the same overall conceptual framework, network reconstruction from dynamics or behaviour is algorithmically very different from the reconstruction of noisy or incomplete networks, and hence requires different computational techniques.
Retrieved from the API to https://finance.yahoo.com.