GenPottsBPState#

class graph_tool.dynamics.GenPottsBPState(g, f, x=1, theta=0, m=None, em=None, vm=None, marginal_init=False, frozen=None, converge=True)[source]#

Bases: BPBaseState

Belief-propagation equations for a generalized Potts model.

Parameters:
gGraph

Graph to be used for the dynamics.

fndarray or list of list

\(q\times q\) 2D symmetric matrix with the interaction strengths between the \(q\) spin values.

xfloat or EdgePropertyMap (optional, default: 1)

Edge coupling weights. If an EdgePropertyMap is given, it needs to be of type double. If a scalar is given, it will determine the value for every edge.

thetafloat or iterable or VertexPropertyMap (optional, default: 0.)

Vertex fields. If a VertexPropertyMap is given, it needs to be of type vector<double>, containing \(q\) field values for every node. If it’s an iterable, it should contain \(q\) field values, which are the same for every node. If a scalar is given, it will determine the value for every field and vertex.

mEdgePropertyMap (optional, default: None)

If provided, it should be an EdgePropertyMap of type vector<double>, containing the edge messages.

emEdgePropertyMap (optional, default: None)

If provided, it should be an EdgePropertyMap of type vector<double>, containing the edge marginals.

vmVertexPropertyMap (optional, default: None)

If provided, it should be a VertexPropertyMap of type vector<double>, containing the node marginals.

marginal_initboolean (optional, default: False)

If True, the messages will be initialized from the node marginals.

frozenVertexPropertyMap (optional, default: None)

If provided, it should be a VertexPropertyMap of type bool, where a value True means that a vertex is not a variable, but a fixed field.

convergeboolean (optional, default: True)

If True, the function GenPottsBPState.converge() will be called just after construction.

Notes

This implements belief-propagation (BP) equations [mezard_information_2009] for a generalized Potts model given by

\[P(\boldsymbol s | \boldsymbol A, \boldsymbol x, \boldsymbol\theta) = \frac{\exp\left(\sum_{i<j}A_{ij}x_{ij}f_{s_i,s_j} + \sum_i\theta_{i,s_i}\right)} {Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta)}\]

where \(Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta)\) is the partition function.

The BP equations consist of the Bethe approximation

\[\log Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta) = \sum_i\log Z_i - \sum_{i<j}A_{ij}\log Z_{ij}\]

with \(Z_{ij}=Z_j/Z_{j\to i}=Z_i/Z_{i\to j}\), obtained from the message-passing equations

\[P_{i\to j}(s_i) = \frac{e^{\theta_{i,s_i}}}{Z_{i\to j}} \prod_{k\in \partial i\setminus j}\sum_{s_k=1}^{q}P_{k\to i}(s_k)e^{x_{ik}f_{s_i,s_k}},\]

where \(Z_{i\to j}\) is a normalization constant. From these equations, the marginal node probabilities are similarly obtained:

\[P_i(s_i) = \frac{e^{\theta_{i,s_i}}}{z_i} \prod_{j\in \partial i}\sum_{s_j=1}^{q}P_{j\to i}(s_j)e^{x_{ij}f_{s_i,s_j}},\]

with \(z_i\) being another normalization constant. Likewise, the edge marginals are

\[P_{ij}(s_i, s_j) = \frac{1}{z_{ij}}e^{x_{ij}f_{s_i,s_j}}P_{i\to j}(s_i)P_{j\to i}(s_j),\]

with \(z_{ij}\) being a normalization constant.
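The message-passing equations above can be sketched directly in numpy. The following is a minimal illustration only (not graph-tool's implementation): a hand-built 3-node path with \(q=2\), assumed couplings and fields, iterated to a fixed point and checked against brute-force enumeration, since on a tree the BP marginals are exact.

```python
import itertools
import numpy as np

# Illustration only; not graph-tool's implementation.
# Graph: 3-node path 0-1-2, q=2 spin states, assumed f, x, theta.
q, N = 2, 3
edges = [(0, 1), (1, 2)]
f = np.array([[1.0, -1.0], [-1.0, 1.0]])    # interaction matrix f_{s,s'}
x = 0.5                                     # uniform coupling x_{ij}
theta = np.zeros((N, q))                    # fields theta_{i,s}
theta[0, 0] = 0.7
theta[2, 1] = 0.3

nbrs = {i: [] for i in range(N)}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

# messages P_{i->j}(s_i), initialized uniformly, one per edge direction
msg = {(i, j): np.full(q, 1.0 / q) for i, j in edges}
msg.update({(j, i): np.full(q, 1.0 / q) for i, j in edges})

W = np.exp(x * f)                           # e^{x_{ij} f_{s_i,s_k}}
for _ in range(100):                        # sweep until fixed point
    for (i, j) in list(msg):
        m = np.exp(theta[i]).copy()
        for k in nbrs[i]:
            if k != j:
                m *= W @ msg[(k, i)]        # sum_{s_k} P_{k->i}(s_k) e^{x f}
        msg[(i, j)] = m / m.sum()           # normalize by Z_{i->j}

# node marginals P_i(s_i) from the converged messages
marg = np.zeros((N, q))
for i in range(N):
    m = np.exp(theta[i]).copy()
    for j in nbrs[i]:
        m *= W @ msg[(j, i)]
    marg[i] = m / m.sum()

# on a tree, BP marginals are exact: compare with brute-force enumeration
exact = np.zeros((N, q))
for s in itertools.product(range(q), repeat=N):
    w = np.exp(sum(x * f[s[i], s[j]] for i, j in edges)
               + sum(theta[i, s[i]] for i in range(N)))
    for i in range(N):
        exact[i, s[i]] += w
exact /= exact.sum(axis=1, keepdims=True)
assert np.allclose(marg, exact)
```

On loopy graphs the same update rule still applies, but the fixed point is only an approximation, which is what the Bethe free-energy expression above quantifies.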

References

[mezard_information_2009]

Marc Mézard, and Andrea Montanari, “Information, physics, and computation”, Oxford University Press, 2009. https://web.stanford.edu/~montanar/RESEARCH/book.html

Examples

>>> g = gt.GraphView(gt.collection.data["polblogs"].copy(), directed=False)
>>> gt.remove_parallel_edges(g)
>>> g = gt.extract_largest_component(g, prune=True)
>>> state = gt.GenPottsBPState(g, f=array([[ 1,  0, -1],
...                                        [ 0,  1, -1],
...                                        [-1, -1,  1.25]])/20)
>>> s = state.marginal_sample()
>>> gt.graph_draw(g, g.vp.pos, vertex_fill_color=s,
...               output="bp-potts.svg")
<...>

Marginal sample of a 3-state Potts model.#

Methods

bethe_log_prob(s)

Obtains the Bethe approximation of the log-probability of state s (a VertexPropertyMap), based on node and edge marginals, i.e.

converge([epsilon, max_niter, update_marginals])

Calls iterate() until delta falls below epsilon or the number of iterations exceeds max_niter.

copy(**kwargs)

Return a copy of the state.

energy()

Obtains the average energy from the current messages.

entropy()

Obtains the entropy from the current messages.

iterate([niter, parallel, update_marginals])

Updates messages synchronously (or asynchronously if parallel=False), niter number of times.

log_Z()

Obtains the log-partition function (a.k.a.

log_prob(s)

Obtains the log-probability of state s (a VertexPropertyMap), computed as the negative energy of s minus the logarithm of the partition function.

marginal_entropy()

Obtains the marginal entropy from the current messages.

marginal_log_prob(s)

Obtains the marginal log-probability of state s (a VertexPropertyMap).

marginal_mean()

Obtains the marginal mean from the current messages.

marginal_pair_log_prob(u, v, r, s)

Obtains the marginal joint log-probability of node pairs \(u\) and \(v\) with values \(s_u = r\) and \(s_v = s\).

marginal_sample([update_marginals, val_type])

Samples a state from the marginal distribution.

sample([vorder, parallel, max_niter, epsilon])

Sample from the joint distribution by repeatedly conditioning and marginalizing using the BP equations.

sample_energy(s)

Obtains the energy (Hamiltonian) of state s (a VertexPropertyMap).

update_marginals()

Update the node marginals from the current messages.

bethe_log_prob(s)[source]#

Obtains the Bethe approximation of the log-probability of state s (a VertexPropertyMap), based on node and edge marginals, i.e.

\[\ln P(\boldsymbol s | \boldsymbol A, \boldsymbol x) = \sum_i(1-k_i)\ln P_i(s_i) + \sum_{i<j}A_{ij}\ln P_{ij}(s_i,s_j)\]

where the node and edge marginals are obtained from the messages, and \(k_i\) is the degree of node \(i\).

Warning

The results of bethe_log_prob() and log_prob() will be the same only for trees. The value of bethe_log_prob(), even when approximate (i.e. for loopy graphs), is nevertheless more consistent with the BP ansatz.
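The tree identity behind this warning can be checked by brute force. The sketch below (an illustration only, independent of graph-tool) computes exact node and edge marginals on an assumed 3-node path by enumeration and verifies that the Bethe formula \(\sum_i(1-k_i)\ln P_i(s_i) + \sum_{i<j}A_{ij}\ln P_{ij}(s_i,s_j)\) reproduces the exact log-probability for every state.

```python
import itertools
import numpy as np

# Illustration only: Bethe log-probability is exact on a tree.
# Assumed 3-node path, q=2, couplings folded into f, assumed fields theta.
q, N = 2, 3
edges = [(0, 1), (1, 2)]
deg = np.zeros(N, int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

f = np.array([[0.8, -0.4], [-0.4, 0.8]])
theta = np.array([[0.5, 0.0], [0.0, 0.0], [0.0, 0.3]])

def weight(s):
    # unnormalized probability exp(sum_E f + sum_i theta)
    return np.exp(sum(f[s[i], s[j]] for i, j in edges)
                  + sum(theta[i, s[i]] for i in range(N)))

states = list(itertools.product(range(q), repeat=N))
Z = sum(weight(s) for s in states)

# exact node marginals P_i and edge marginals P_ij by enumeration
Pi = np.zeros((N, q))
Pij = {e: np.zeros((q, q)) for e in edges}
for s in states:
    p = weight(s) / Z
    for i in range(N):
        Pi[i, s[i]] += p
    for e in edges:
        Pij[e][s[e[0]], s[e[1]]] += p

# Bethe formula matches the exact log-probability for every state
for s in states:
    bethe = (sum((1 - deg[i]) * np.log(Pi[i, s[i]]) for i in range(N))
             + sum(np.log(Pij[e][s[e[0]], s[e[1]]]) for e in edges))
    assert np.isclose(bethe, np.log(weight(s) / Z))
```

On a loopy graph the same formula no longer telescopes into the exact joint, which is exactly the discrepancy between bethe_log_prob() and log_prob() described above.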

converge(epsilon=1e-08, max_niter=1000, update_marginals=True, **kwargs)#

Calls iterate() until delta falls below epsilon or the number of iterations exceeds max_niter.

If update_marginals=True, this function calls update_marginals() at the end.

The remaining keyword arguments are passed to iterate().

copy(**kwargs)#

Return a copy of the state. The keyword arguments override the constructor parameters.

energy()[source]#

Obtains the average energy from the current messages.

This is given by

\[U = -\sum_{i<j}A_{ij}\sum_{s_i,s_j}x_{ij}f_{s_i,s_j}P_{ij}(s_i,s_j) -\sum_i\sum_{s_i}P_i(s_i)\theta_{i,s_i}\]

with \(P_{ij}(s_i,s_j)\) and \(P_i(s_i)\) being the marginal edge and node distributions, respectively.

entropy()[source]#

Obtains the entropy from the current messages.

This is given by

\[S = -\sum_i(1-k_i)\sum_{s_i}P_i(s_i)\ln P_i(s_i) -\sum_{i<j}A_{ij}\sum_{s_i,s_j}P_{ij}(s_i,s_j)\ln P_{ij}(s_i,s_j)\]

with \(P_i(s_i)\) and \(P_{ij}(s_i,s_j)\) being the marginal node and edge distributions, respectively, and \(k_i\) is the degree of node i.
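The energy and entropy expressions above tie together through the free entropy: \(\log Z = S - U\), and on a tree the Bethe forms are exact. The following brute-force sketch (an illustration only, with an assumed 3-node path and parameters) computes exact marginals by enumeration, plugs them into both formulas, and checks the identity.

```python
import itertools
import numpy as np

# Illustration only: on a tree, Bethe U and S satisfy log Z = S - U.
q, N = 2, 3
edges = [(0, 1), (1, 2)]
deg = np.zeros(N, int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

x = 0.6
f = np.array([[1.0, -0.5], [-0.5, 1.0]])
theta = np.array([[0.4, 0.0], [0.0, 0.0], [0.0, 0.2]])

def H(s):
    # the exponent in P(s) = exp(H(s)) / Z
    return (sum(x * f[s[i], s[j]] for i, j in edges)
            + sum(theta[i, s[i]] for i in range(N)))

states = list(itertools.product(range(q), repeat=N))
logZ = np.log(sum(np.exp(H(s)) for s in states))

# exact node and edge marginals by enumeration
Pi = np.zeros((N, q))
Pij = {e: np.zeros((q, q)) for e in edges}
for s in states:
    p = np.exp(H(s) - logZ)
    for i in range(N):
        Pi[i, s[i]] += p
    for e in edges:
        Pij[e][s[e[0]], s[e[1]]] += p

# U = -sum_{i<j} A_ij sum_{s_i,s_j} x f P_ij - sum_i sum_{s_i} P_i theta
U = -sum((x * f * Pij[e]).sum() for e in edges) - (Pi * theta).sum()
# S = -sum_i (1-k_i) sum P_i ln P_i - sum_{i<j} sum P_ij ln P_ij
S = (-sum((1 - deg[i]) * (Pi[i] * np.log(Pi[i])).sum() for i in range(N))
     - sum((Pij[e] * np.log(Pij[e])).sum() for e in edges))
assert np.isclose(logZ, S - U)   # free entropy = entropy minus energy
```

This is the same bookkeeping performed by energy(), entropy() and log_Z(), with BP messages standing in for the enumerated marginals.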

iterate(niter=1, parallel=True, update_marginals=True)#

Updates messages synchronously (or asynchronously if parallel=False), niter number of times. This function returns the iteration delta of the last iteration.

If update_marginals=True, this function calls update_marginals() at the end.

Parallel implementation.

If enabled during compilation, this algorithm will run in parallel using OpenMP. See the parallel algorithms section for information about how to control several aspects of parallelization.

log_Z()#

Obtains the log-partition function (a.k.a. free entropy) from the current messages.

log_prob(s)#

Obtains the log-probability of state s (a VertexPropertyMap), computed as the negative energy of s minus the logarithm of the partition function.

If s is vector valued, it’s assumed to correspond to multiple states, and the total log-probability sum is returned.

marginal_entropy()[source]#

Obtains the marginal entropy from the current messages.

marginal_log_prob(s)#

Obtains the marginal log-probability of state s (a VertexPropertyMap).

If s is vector valued, it’s assumed to correspond to multiple states, and the total marginal log-probability sum is returned.

marginal_mean()[source]#

Obtains the marginal mean from the current messages.

marginal_pair_log_prob(u, v, r, s)[source]#

Obtains the marginal joint log-probability of node pairs \(u\) and \(v\) with values \(s_u = r\) and \(s_v = s\).

marginal_sample(update_marginals=True, val_type='int')#

Samples a state from the marginal distribution. This function returns a VertexPropertyMap of type given by val_type.

If update_marginals=True, this function calls update_marginals() before sampling.

sample(vorder=None, parallel=True, max_niter=1000, epsilon=1e-08)[source]#

Sample from the joint distribution by repeatedly conditioning and marginalizing using the BP equations.

The optional parameter vorder is a list of vertices giving the marginalization order.

The remaining parameters control the BP convergence; see GenPottsBPState.converge().

Warning

This algorithm has a quadratic complexity \(O(NE)\), where \(N\) and \(E\) are the number of nodes and edges, respectively.
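The condition-and-marginalize scheme is the probabilistic chain rule: each node is sampled from its marginal conditioned on the values already drawn, so the accumulated log-probability equals the joint log-probability. The sketch below illustrates this with brute-force conditionals instead of BP (illustration only, on an assumed 3-node path); in sample() the conditionals come from re-converging the BP equations after each node is fixed, which is what makes the cost \(O(NE)\).

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Illustration only: sequential conditioning, brute-force conditionals.
q, N = 2, 3
edges = [(0, 1), (1, 2)]
f = np.array([[0.7, -0.3], [-0.3, 0.7]])

def weight(s):
    return np.exp(sum(f[s[i], s[j]] for i, j in edges))

states = list(itertools.product(range(q), repeat=N))
Z = sum(weight(s) for s in states)

def cond_marginal(i, fixed):
    # P(s_i | fixed) by summing over the remaining free spins
    p = np.zeros(q)
    for s in states:
        if all(s[j] == v for j, v in fixed.items()):
            p[s[i]] += weight(s)
    return p / p.sum()

# sample node by node, conditioning on the values drawn so far
fixed, logp = {}, 0.0
for i in range(N):
    p = cond_marginal(i, fixed)
    fixed[i] = rng.choice(q, p=p)
    logp += np.log(p[fixed[i]])

# chain rule: accumulated log-probability equals the joint log-probability
s = tuple(fixed[i] for i in range(N))
assert np.isclose(logp, np.log(weight(s) / Z))
```

Passing vorder changes only the order in which nodes are conditioned, which does not affect the correctness of the chain rule but can affect BP convergence along the way.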

sample_energy(s)#

Obtains the energy (Hamiltonian) of state s (a VertexPropertyMap).

If s is vector valued, it’s assumed to correspond to multiple states, and the total energy sum is returned.

update_marginals()#

Update the node marginals from the current messages.