Role of adjacency-matrix degeneracy in maximum-entropy-weighted network models
Phys. Rev. E 92, 052816 – Published 30 November 2015
DOI: https://doi.org/10.1103/PhysRevE.92.052816
Abstract
Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer-valued adjacency matrices constructed from an aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three data sets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring network observables.
Article Text
Network science is a prime example of the multiple uses that many tools and methodologies extracted from traditional physics can have when applied to a variety of transdisciplinary problems. The advent of the so-called Big Data Era, driven by the explosion of ICT technologies, is providing researchers with unprecedentedly large data sets in a myriad of fields ranging from biology to urban studies, bibliometrics, chemical sciences, or even history, to cite just a few. The current challenge is to extract knowledge and useful information from such enormous data sets. A standard approach is based on representing them as a graph. Network representation of data is especially useful due to its relative simplicity in terms of computational effort, visualization, and analysis. However, it presents some serious limitations, which force us to look for innovative methodological tools.
One way to go beyond simple network representation is to generate appropriate null models. These must be flexible and reliable enough for comparison with our original data in the search for statistically relevant patterns. In general, this is not a simple task; data processing is tricky and subtle in many situations, and it may lead to wrong conclusions based on a poor understanding of the problem under study. A clever strategy to find efficient null models consists of generating randomized instances of a given network while keeping some quantities constant. This can be done by algorithmic randomization of graphs, but such a procedure can be costly in terms of computational time (especially for dense data sets) and programming difficulty. Most importantly, such “rewiring techniques” do not always generate an unbiased sampling of networks.
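Such an algorithmic randomization can be sketched with the classic double-edge swap, which shuffles connections while preserving every node's degree. This is an illustrative sketch of the general technique, not the paper's own procedure; the function name and edge representation are ours.

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Randomize an undirected simple graph by repeated double-edge swaps.

    Each accepted swap replaces edges (a, b) and (c, d) with (a, d) and
    (c, b), which preserves every node's degree. Swaps that would create
    self-loops or duplicate edges are rejected.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # would create a duplicate edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges
```

Note that generating a single randomized instance requires many swaps, and sampling uniformly from the constrained ensemble with such moves is exactly the subtlety the text warns about.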
A different approach to this problem has its roots in the analogy of networks with classical statistical mechanics systems, though it was originally proposed by sociologists and also by urban planners under the name of exponential random graphs. It is based on the idea of constructing an ensemble of networks with different probabilities of appearance, which on average fulfill the considered constraints. The advantage of these methods is that they account for the possibility of fluctuations (as usually happens in real data) and their derivation is completely analytical. Furthermore, such methods provide an easy way of rapidly simulating (and averaging over) network instances belonging to a given ensemble. So far, this kind of methodology has been successfully developed for different types of monolayered networks, directed and bipartite structures, stochastic block models, and some multiplex weighted networks.
Recently, there has been growing interest in the study of more complex mathematical structures to account for the inherent multilayered character of some network systems. This calls for the development of maximum-entropy ensembles with a multilayered perspective, which will help in the analysis of real-world data sets. This is the main goal of this work. In this paper, we complement previous work on maximum-entropy-weighted networks by considering systems of aggregated multiplexes, where we have information about the layered structure of the system and the nature of its events, but—as usually happens for real data—we only have access to its accumulated structure (the sum of weights connecting two nodes across all layers, for each pair of nodes). We show how the role of event and layer degeneracy induces important differences in the obtained statistics for each case, which recover the previously studied monolayered cases when the number of layers is set to unity. We further show that, despite the statistics being different, all the cases considered are examples of maximum likelihood networks of the dual problem, yet they yield different expectations for network quantities, highlighting the need to choose an appropriate null model for each case study based on the weighted character of the networks.
In Sec. we present the mathematical framework and calculations of degeneracy for maximum-entropy networks with arbitrary constraints. Section extends the calculation to obtain explicit statistics for a very general case of constraints in the different scenarios considered. Finally, Sec. presents an application of the model, for the particular case of fixed strengths, to the analysis of three real-world data sets. Extended mathematical calculations are provided in the Appendixes; details on the data sets used, the measured quantities, and the numerical methods are given in the Supplemental Material, which also includes a package for public use to apply the proposed models.
We consider a representation of a network of
To completely determine an ensemble, it is not enough to specify the quantities we wish to fix (the constraints given by the original data); we must also define the statistical nature of the events allocated to the occupation numbers
Once a grand canonical ensemble is fully constructed, the probability of obtaining a particular configuration of occupation numbers
The so-called grand-canonical partition function
Using Eq. we reach
Let us note that if the degeneracy term factorizes, i.e.,
where the statistics of
where
The degeneracy terms are in general subtle to compute and, to the best of our knowledge, are seldom considered in the literature. In order to calculate them, however, we need to make assumptions about the type of networks under study and their elements. In this work, we consider systems composed of events that are either distinguishable or indistinguishable. Additionally, we study the general representation of an overlay multiplex network, which is obtained by aggregating several layers of a system into a single (integer-weighted) adjacency matrix. Examples of such networks include aggregations of transportation layers, networks generated by accumulation of information over a certain time span such as origin-destination matrices, e-mail communications or human contacts, and even an aggregation of trading activities in different sectors such as the World Trade Network (WTN).
Thus the system under consideration is an aggregation of
Finally, the degeneracy term is the product of the multiplicity induced by the nature of the events times the nature of the layers (which, in the only possible real scenario, are always distinguishable):
The resulting degeneracy terms are listed in Table (see details in Appendix ), in which one can see that, in principle, the event degeneracy term does not factorize for the distinguishable cases due to the presence of the variable
To go further in our derivation, we now consider the case where the constraints are linear functions of the aggregated occupation numbers:
This case is very generic and includes networks with local constraints on nodes, community constraints, and generalized cost constraints such as distances. The case where the constraints depend on both the binary projection of the occupation numbers and their values
The individual partition functions can be summed:
In this case, we have redefined
And we recover well-known probability distributions: a Poisson distribution for the ME case (independent of the number of layers
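The three weight distributions named in the text and its appendixes (Poisson for ME, negative binomial for W, binomial for B) can be sampled directly once the corresponding per-pair means are known. The mean-based parametrization below is our illustrative choice, not the paper's Lagrange-multiplier form:

```python
import numpy as np

def sample_occupation(case, mean, M, size=10000, seed=42):
    """Sample aggregated occupation numbers for one node pair.

    case : 'ME' (Poisson), 'W' (negative binomial with M layers),
           'B' (binomial with at most M events per pair).
    mean : target ensemble average of the occupation number
           (illustrative parametrization).
    """
    rng = np.random.default_rng(seed)
    if case == 'ME':
        # Poisson: independent of the number of layers M.
        return rng.poisson(mean, size)
    if case == 'W':
        # Negative binomial with shape M; p chosen so the mean matches,
        # since numpy's mean is M * (1 - p) / p.
        p = M / (M + mean)
        return rng.negative_binomial(M, p, size)
    if case == 'B':
        # Binomial with n = M trials; requires mean <= M.
        return rng.binomial(M, mean / M, size)
    raise ValueError(case)
```

Comparing the sampled variances of the three cases at equal mean already exposes the differences in statistics the following paragraphs discuss.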
The resulting statistics show some important features: On the one hand, one sees that although the degeneracy term changes for ME networks for the case of either a monolayer or a multilayer, the form of the obtained statistics does not. This means that it is not possible to distinguish an ME monolayered network from an aggregation of multiple ME layers belonging to an ensemble with the same constraints. On the other hand, the situation for the other cases changes: For multiplexes the resulting occupation numbers will have statistics different from those in the monoplex case. This has the implication that one could in principle discern the aggregated nature of a network by inspecting the accumulated edge statistics
Another important implication of the obtained statistics is the very different interpretations encoded in the
The different obtained statistics are highly relevant, as their marked differences point out a (regularly overlooked) problem: Different maximum-entropy ensembles yield very different statistics for the same considered constraints, and hence each data set needs to be analyzed carefully, since the process behind the formation of each network dictates its degeneracy term. Furthermore, all the obtained statistics are derived from a maximum-entropy methodology, and hence the values
This means that if one wants to assess the effects a given constraint has on a network constructed from real data, one needs to choose very carefully the appropriate null model to compare the data to. It is also worth pointing out that most of these ensembles are not equivalent to a manual rewiring of the network (although one expects small differences; see Appendix). However, maximum-entropy models allow for an analytical treatment of the problem and simplify the generation of network samples when the considered constraints are increasingly complicated (both at the coding level and at the computational one). This has many implications, including the possibility of computing
The main difficulty with the soft-constrained maximum-entropy framework presented here for null-model generation is the problem of solving the saddle-point equations, , associated with each ensemble. With the exception of some particular cases, these equations do not have an analytical solution and must be solved numerically. In this case, the best approach is to maximize the associated log-likelihood of each model given a set of observations (constraints), yet the difficulty of the problem increases with the number of constraints, since each fixed quantity has an associated variable to be solved for. Considering the different statistics obtained in this paper, the most difficult case by far is the W one, since the condition that
To highlight the importance of considering an appropriate null model for the assessment of real data features, in this final section we consider the case of networks with a fixed strength sequence. Real networks usually display highly skewed node strength distributions, which have important effects on their observables. Hence, to correctly assess whether some observed feature in a data set can be explained solely by the strength distribution, it is crucial to choose an appropriate null model to compare the data to. This situation is especially important, for instance, with regard to community analysis through modularity maximization for weighted networks, because the modularity function to be optimized needs as input a prediction from a null model with fixed strengths
So the resulting saddle-point equations, , are
where
The ME case has an analytical solution, while the others must be solved computationally. The Supplemental Material provides extended details about the network quantities computed, the simulations, and the averaging and computational methods and algorithms used in this section, which are available in the freely provided, open-source ODME package.
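For reference, the ME analytical solution with fixed strength sequences is the familiar gravity-like form ⟨t_ij⟩ = s_i^out s_j^in / T, with T the total number of events. A minimal sketch (self-loops are kept here for simplicity, so the strength constraints are matched exactly):

```python
import numpy as np

def me_expected_weights(s_out, s_in):
    """Expected aggregated weights for the ME ensemble with fixed
    strengths: <t_ij> = s_i^out * s_j^in / T, where T is the total
    number of events (sum of out-strengths = sum of in-strengths)."""
    s_out = np.asarray(s_out, float)
    s_in = np.asarray(s_in, float)
    T = s_out.sum()
    return np.outer(s_out, s_in) / T
```

Each row then sums to the corresponding out-strength and each column to the in-strength, which is what makes this case solvable without any numerical iteration.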
As real-world data sets we use a snapshot of the WTN, the OD matrix generated by taxi trips in Manhattan in 2011, and the multiplex European Airline Transportation network. The WTN has been widely studied in the literature and has recently been represented as an aggregated system of
In all cases we consider directed networks, and throughout this paper we only show results in the outgoing direction, as the results in the incoming direction are qualitatively equal. Note that the aggregated B case cannot always be applied since the maximum number of events allocated per node pair cannot exceed the number of layers, and for the WTN data set this condition
To analyze the difference between models, we compute ensemble expectations for different edge- and node-related properties, suitably rescaled (fixing the original strength distribution of each data set), and then compare the obtained results with the features observed in the real data. The airline data set is very sparse and the differences between models are not large, which points out the need for adequate sampling for a successful analysis of weighted networks. Nevertheless, since it is by construction an aggregation of binary layers, the B case displays the closest resemblance to the data, both qualitatively and quantitatively (see the Supplemental Material).
The WTN and taxi data sets, in contrast, contain enough sampling for the wide differences between models to emerge. All cases have the same number of events
Node pair statistics. (a, b) Binary connection probability and (c, d) rescaled average edge weight as a function of the product of origin and destination node strength. Results averaged over
First-order node statistics. Rescaled (a, b) degree and (c, d) disparity for (a, c) the taxi and (b, d) the WTN data sets. The same conditions as for Fig. apply. Dashed lines represent log-binned standard deviation ranges for the real data. The
These anomalies for low- and high-strength nodes, respectively, in the W case produce strong asymmetries in the allocation of weights per node, which can be studied by measuring their disparity,
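The disparity referred to above is commonly defined as Y_i = Σ_j (w_ij / s_i)², which quantifies how evenly a node's strength is spread over its edges; a minimal sketch:

```python
import numpy as np

def disparity(W):
    """Node disparity Y_i = sum_j (w_ij / s_i)^2 for a weight matrix W.

    Y_i -> 1/k_i when a node's strength s_i is spread evenly over its
    k_i edges, and Y_i -> 1 when it is concentrated on a single edge.
    Nodes with zero strength get Y_i = 0 by convention here.
    """
    W = np.asarray(W, float)
    s = W.sum(axis=1)
    Y = np.zeros(len(W))
    nz = s > 0
    Y[nz] = ((W[nz] / s[nz, None]) ** 2).sum(axis=1)
    return Y
```

Applied to model-generated samples and to the real matrix, this single number per node makes the weight-allocation asymmetries of the W case directly visible.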
Second-order node statistics. Rescaled weighted average strength for (a) the taxi and (b) the WTN data sets. The same conditions as for Fig. apply. Dashed lines represent log-binned standard deviation ranges for the real data. A sharp increase is clearly shown for high-strength nodes in the W cases.
Obviously, none of the null models used reproduces the real data; the goal in model construction is rather to assess the structural impact that a given constraint (in this case, a strength distribution) has on the network observables. In this sense, we show that different models provide very different insights into such impacts. In particular, since the airline data set is by construction an aggregated B network and the taxi data set an ME one (people riding in taxis are clearly distinguishable), the fact that the B and ME cases, respectively, lie closer to the real data comes as no surprise. The WTN case, however, is unclear: The modeling of trade transactions does not have a clearly defined nature, but if one assumes the WTN to be a multilayered network, its aggregated analysis should be performed using either the W case (with
In this work we have shown the importance of considering the nature of the events one wishes to model when using an integer-valued network representation of a system. We have developed and solved a maximum-entropy framework for model generation applied to networks generated by aggregation of multiplexes. We have shown how different considerations about the nature of the events generating the elements of the multiplex give rise to distinctively different node pair statistics. For the case where one wants to fix properties expressed as linear functions of the individual weights (and, optionally, their binary projection) in the network, we elegantly recovered well-known statistics such as the Poisson, binomial, negative binomial, geometric, and Bernoulli distributions in each case.
We have, further, provided a practical example by focusing on the case of fixed strengths and applying the models to assess relevant features of three real-world data sets containing different types of weights, showing how the role of adjacency-matrix degeneracy plays a crucial role in model construction. To this end, we have considered the statistical nature of the obtained models as well as the weaknesses and strengths derived from their practical applications in real cases. Finally, we provide the open-source software package ODME , with which practitioners can apply the proposed models to other data sets.
The insights derived from this paper can open the door to the objective identification of truly multiplex structures (except in one case where it has been shown to be impossible) by inspection of the statistics of their edges, provided that several instances of a network belonging to the same ensemble are available. The take-home message of this work is that, in order to perform a meaningful analysis of a given network, a practitioner needs to be able to select an appropriate null model, which depends not only on the endogenous constraints one considers but also on the very nature of the process one is modeling. Our work provides researchers with a range of maximum-entropy (and maximum-likelihood) models to choose from, covering a wide spectrum of possibilities for the case of weighted networks. Each of these models is neither right nor wrong in the general case, despite yielding very different predictions for the same sets of constraints, but simply more or less appropriate, depending on the problem one wants to study.
We thank A. Allard, A. Fernández, F. Font-Clos, and R. Toral for useful comments and suggestions. This work was partially supported by the LASAGNE (Contract No. 318132) and MULTIPLEX (Contract No. 317532) EU projects, the Spanish MINECO (FIS2012-38266), and the Generalitat de Catalunya (2014SGR608). O.S. acknowledges financial support from Generalitat de Catalunya (FI program) and the Spanish MINECO (FPU program).
We develop here the general case where the constraints have the general form
where
which corresponds to the zero-inflated versions of the previous statistics recovered in the case
ME (zero-inflated Poisson):
W (zero-inflated negative binomial):
B (zero-inflated binomial):
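As an illustration of the zero-inflated family listed above, a zero-inflated Poisson (the ME entry) can be sampled as a mixture of a structural zero and an ordinary Poisson draw. The (pi0, lam) parametrization below is our illustrative choice, not the paper's multiplier-based form:

```python
import numpy as np

def zero_inflated_poisson(pi0, lam, size=100000, seed=1):
    """Zero-inflated Poisson sampler (one common parametrization).

    With probability pi0 a pair is structurally empty (t = 0);
    otherwise t is an ordinary Poisson draw of rate lam.
    The resulting mean is (1 - pi0) * lam.
    """
    rng = np.random.default_rng(seed)
    occupied = rng.random(size) >= pi0
    return np.where(occupied, rng.poisson(lam, size), 0)
```

The extra mass at zero relative to a plain Poisson of the same conditional rate is exactly what the additional binary constraint encodes.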
We can then compute the binary connection statistics:
Note how the binary projection in all cases corresponds to Bernoulli statistics. Regarding the occupation number statistics, one has explicitly
And we observe a clear, nontrivial relation between binary statistics and weights, which leads to important correlations between degrees and strengths in networks belonging to these ensembles; such correlations are also present in real data.
The probability distributions derived in this paper for networks belonging to the different described ensembles fulfill the maximum-likelihood principle for networks . Indeed, the constraint-point equations in can be understood as the equations resulting from maximizing the likelihood of the inverse problem of finding the set of values for the Lagrange multipliers
and maximizing this expression with respect to
which, at the critical points, lead to
If the maximum-entropy description provided here is to be successful, then the fluctuations of the obtained networks need to be bounded and have well-defined statistics. In particular, we require that the associated entropy of the distribution and the statistics of the constraints possess finite first and second moments and that their relative fluctuations around the average values be small in the limit of large sampling
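For instance, in the ME case the occupation numbers are independent Poisson variables, so the total number of events is itself Poisson with mean T and its relative fluctuation decays as 1/√T with the sampling size; a quick numerical check:

```python
import numpy as np

def sampled_relative_fluctuation(T, n_samples=200000, seed=7):
    """Empirical sigma/mean of the total event count when it is Poisson
    with mean T (the ME case); theory predicts 1/sqrt(T)."""
    rng = np.random.default_rng(seed)
    totals = rng.poisson(T, n_samples)
    return totals.std() / totals.mean()
```

Running this for growing T shows the relative fluctuation shrinking toward zero, which is the vanishing behavior invoked in the text.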
where
For the case where any binary constraint is additionally imposed (Appendix), the relative fluctuations of the binary structure dominate the statistics in the large-sampling limit and, despite being bounded, never vanish.
Concerning the associated Gibbs-Shannon entropy of the ensembles, since the occupation number statistics are independent, we have the random variable
For each state
For this calculation one only needs to take into account the distinguishable case (otherwise there is no degeneracy). This case, however, is controversial to analyze. The correct counting of configurations in a grand-canonical ensemble has been an issue spanning more than a century (see and references therein for details and extended discussion), ever since Gibbs used it to establish the relation in classical statistical mechanics between the canonical and grand-canonical ensembles of an ideal gas.
Grand-canonical ensembles of networks can be faced in many ways. The usual view is to imagine a collection of
The prior expression is, however, related to the probability of obtaining a given configuration of
For the case with linear constraints of the form of Eq. , the system partition function
If we add the strong condition that the ensemble average number of events has to be equal to
Wrapping together the previous expressions and considering that the number of copies is arbitrary, one can imagine the limit where it goes to
which leads to a factorizable partition function of the form of Eq. which does not depend on the number of copies of the system; neither does its associated probability,
We have thus reached the same independent Poisson probabilities as obtained by taking an effective factorizable degeneracy term,
These results are in accordance with previous works where the complete equivalence between canonical ensembles [ensembles with soft linear constraints as in Eq. but with
For nonlinear cases (such as the case with binary constraints or the B case with distinguishable events), the effective degeneracy term is an approximation, since the complete calculation using partial sums where
Supplemental Material
The Supplemental Material contains a PDF file with details about the data sets used, the numerical methods implemented in the free, open-source ODME package provided for applying the proposed models, the quantities measured in the data, and the accuracy of the equation solving.
References
- M. Newman, Networks: An Introduction (Oxford University Press, New York, 2010).
- M. Á. Serrano, M. Boguñá, and F. Sagués, Mol. Biosyst. 8, 843 (2012).
- O. Sagarra, M. Szell, P. Santi, A. Díaz-Guilera, and C. Ratti, PLoS ONE 10, e0134508 (2015).
- G. Menichetti, D. Remondini, and G. Bianconi, Phys. Rev. E 90, 062817 (2014).
- R. Guimera and L. A. N. Amaral, Nature 433, 895 (2005).
- M. Schich, C. Song, Y.-Y. Ahn, A. Mirsky, M. Martino, A.-L. Barabási, and D. Helbing, Science 345, 558 (2014).
- P. Colomer-de-Simón, M. Á. Serrano, M. G. Beiró, J. I. Alvarez-Hamelin, and M. Boguñá, Sci. Rep. 3, 2517 (2013).
- G. Menichetti, D. Remondini, P. Panzarasa, R. J. Mondragón, and G. Bianconi, PLoS One 9, e97857 (2014).
- M. Molloy and B. Reed, Random Struct. Alg. 6, 161 (1995).
- A. Coolen, A. De Martino, and A. Annibale, J. Stat. Phys. 136, 1035 (2009).
- J. Park and M. E. J. Newman, Phys. Rev. E 70, 066117 (2004).
- G. Bianconi, A. C. C. Coolen, and C. J. Perez Vicente, Phys. Rev. E 78, 016114 (2008).
- A. Annibale, A. Coolen, L. Fernandes, F. Fraternali, and J. Kleinjung, J. Phys. A. Math. Gen. 42, 485001 (2009).
- E. Roberts, T. Schlitt, and A. Coolen, J. Phys. A 44, 275002 (2011).
- E. Roberts, A. Annibale, and A. Coolen, J. Phys.: Conf. Ser. 410, 012097 (2013).
- A. G. Wilson, Transp. Research 1, 253 (1967).
- P. W. Holland and S. Leinhardt, JASA 76, 33 (1981).
- G. Bianconi, P. Pin, and M. Marsili, Proc. Natl. Acad. Sci. U.S.A. 106, 11433 (2009).
- D. Garlaschelli and M. Loffredo, Phys. Rev. Lett. 102, 038701 (2009).
- O. Sagarra, C. J. Pérez Vicente, and A. Díaz-Guilera, Phys. Rev. E 88, 062806 (2013).
- E. S. Roberts and A. C. C. Coolen, J. Phys. A 47, 435101 (2014).
- T. P. Peixoto, Phys. Rev. E 85, 056122 (2012).
- M. De Domenico, A. Solé-Ribalta, E. Cozzo, M. Kivelä, Y. Moreno, M. A. Porter, S. Gómez, and A. Arenas, Phys. Rev. X 3, 041022 (2013).
- E. Cozzo, G. F. de Arruda, F. A. Rodrigues, and Y. Moreno, arXiv:1504.05567.
- D. Garlaschelli and M. I. Loffredo, Phys. Rev. E 78, 015101 (2008).
- See Supplemental Material at http://link.aps.org/supplemental/10.1103/PhysRevE.92.052816 for details on the data sets used, measured network quantities, and numerical methods used.
- O. Sagarra, ODME: Origin-Destination Multi-Edge Network Package (2014); https://github.com/osagarra/ODME_lite.
- G. Bianconi, Europhys. Lett. 81, 28005 (2008).
- M. Boguñá and R. Pastor-Satorras, Phys. Rev. E 68, 036112 (2003).
- G. Caldarelli, A. Capocci, P. De Los Rios, and M. A. Muñoz, Phys. Rev. Lett. 89, 258702 (2002).
- A. Cardillo, J. Gómez-Gardenes, M. Zanin, M. Romance, D. Papo, F. del Pozo, and S. Boccaletti, Sci. Rep. 3, 1344 (2013).
- R. Guimerà, A. Arenas, A. Díaz-Guilera, and F. Giralt, Phys. Rev. E 66, 026704 (2002).
- L. Gauvin, A. Panisson, C. Cattuto, and A. Barrat, Sci. Rep. 3, 3099 (2013).
- T. Squartini, G. Fagiolo, and D. Garlaschelli, Phys. Rev. E 84, 046117 (2011).
- T. Squartini and D. Garlaschelli, New J. Phys. 13, 083001 (2011).
- A. Halu, S. Mukherjee, and G. Bianconi, Phys. Rev. E 89, 012806 (2014).
- O. Sagarra, F. Font-Clos, C. J. Pérez-Vicente, and A. Díaz-Guilera, Europhys. Lett. 107, 38002 (2014).
- N. Dianati, arXiv:1503.04085v3.
- M. Á. Serrano, M. Boguná, and A. Vespignani, Proc. Natl. Acad. Sci. U.S.A. 106, 6483 (2009).
- P. Santi, G. Resta, M. Szell, S. Sobolevsky, S. H. Strogatz, and C. Ratti, Proc. Natl. Acad. Sci. U.S.A. 111, 13290 (2014).
- R. Mastrandrea, T. Squartini, G. Fagiolo, and D. Garlaschelli, Phys. Rev. E 90, 062804 (2014).
- O. Sagarra, Master's thesis, Universitat Politècnica de Catalunya (2011).
- M. A. Serrano, M. Boguna, and R. Pastor-Satorras, Phys. Rev. E 74, 055101 (2006).
- K. Anand, D. Krioukov, and G. Bianconi, Phys. Rev. E 89, 062807 (2014).
- R. H. Swendsen, Am. J. Phys. 83, 545 (2015).
- R. K. Pathria, Statistical Mechanics, 2nd ed. (Butterworth-Heinemann, London, 1996).