
Abstract

In this work we present a theoretical analysis of the convergence of the Wang-Landau algorithm [Phys. Rev. Lett. 86, 2050 (2001)], which was introduced years ago to calculate the density of states in statistical models. We study the dynamical behavior of the error in the calculation of the density of states. We conclude that the source of the saturation of the error is the decreasing variation of the refinement parameter. To overcome this limitation, we present an analytical treatment in which the refinement parameter is scaled down as a power law instead of exponentially. An extension of the analysis to the N-fold way variation of the method is also discussed.

Wang-Landau algorithm: A theoretical analysis of the saturation of the error

    R. E. Belardinellia); V. D. Pereyrab)
    Departamento de Física, Laboratorio de Ciencias de Superficie, Universidad Nacional de San Luis, CONICET, Chacabuco 917, 5700 San Luis, Argentina
a) Electronic mail: rbelar@unsl.edu.ar

b) Author to whom correspondence should be addressed. Electronic mail: vpereyra@unsl.edu.ar

 


Keywords
    • convergence
    • error analysis
    • Ising model
    • random processes
    • statistical analysis

      Monte Carlo methods are well suited for the simulation of large many-body problems, particularly for the study of phase transitions and critical phenomena.1,2

      Many conventional Monte Carlo algorithms such as Metropolis importance sampling,3 Swendsen-Wang cluster flipping,4 etc., generate a canonical distribution at a given temperature. Such distributions are so narrow that, with conventional Monte Carlo simulations, multiple runs are required to determine thermodynamic quantities over a significant range of temperatures.

      For a second-order phase transition in unfrustrated systems, the problem of critical slowing down was solved by a cluster update algorithm.4 For first-order phase transitions and in systems with many free energy minima such as frustrated magnets or spin glasses, a similar problem of long tunneling times between local minima arises.

      Several methods were developed to overcome this problem, including the multicanonical method,5–14 the related entropic sampling method,15 the broad histogram method,16–18 multibondic simulations,19 etc.

      The multicanonical ensemble method proposed by Berg and co-workers5–7 first estimates the density of states (DOS) g(E) (the number of all possible states or configurations for an energy level E of the system), then performs a random walk with a flat histogram in the desired region of phase space. This method has proven very efficient in studying first-order phase transitions, where simple canonical simulations have difficulty overcoming the tunneling barrier between coexisting phases at the transition temperature.5,8–14

      The entropic sampling method, which is basically equivalent to multicanonical ensemble sampling,15 is an iterative process used to calculate the microcanonical entropy, defined as $S(E) = \ln[g(E)]$. The method was applied to the two-dimensional (2D) ten-state (Q=10) Potts model and the three-dimensional (3D) Ising model.15

      The broad histogram method introduced by de Oliveira and co-workers16–18 calculates the density of states by estimating the probabilities of possible transitions between all possible states of a random walk in energy space. However, the method presents systematic errors even for simple models such as the Ising model at small system sizes.18 In fact, methods based on the accumulation of histogram entries have a scalability problem, suffering from systematic errors when systems are large.20

      Wang and Landau20 introduced a new and efficient algorithm using a random walk in energy space to obtain an estimation of the density of states for statistical models. The method has been successfully used in many problems of statistical physics, biophysics, and other fields.20–29 The method is based on independent random walks performed over adjacent, overlapping energy regions, which together provide the density of states. In that way, thermodynamic observables, including the free energy over a wide range of temperature, can be calculated in a single simulation.

      Since Wang and Landau introduced the multiple-range random walk algorithm to calculate the density of states, numerous improvements have been proposed,29–38 along with studies of the efficiency and convergence of the method.32,37,38 In particular, Zhou and Bhatt32 have presented a mathematical analysis of the Wang-Landau (WL) algorithm. They give a proof of the convergence, identify the sources of error for the WL algorithm, and propose strategies for improvement. The main conclusions of their work are (i) the density of states is encoded in the average histogram; (ii) the fluctuations of the histogram, proportional to $1/\sqrt{\ln f}$, cause statistical errors, which can be reduced by averaging over multiple estimates of g(E); (iii) the correlation between adjacent records in the histogram introduces a systematic error, which is reduced at small f.

      Earl and Deem in Ref. 38 derived a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. The authors concluded that (i) detailed balance need only be satisfied asymptotically and (ii) the calculated density of states approaches the real one with an error proportional to $1/t$.

      On the other hand, several limitations of the method still remain unsolved, for example, the behavior of the tunneling time, which bounds the performance of any flat-histogram algorithm, as discussed in Ref. 36, where it is shown to limit the convergence of the WL algorithm.

      Berg has proposed a multicanonical simulation scheme in which the first part is the WL algorithm and the second part is a standard Markov chain Monte Carlo simulation, for which convergence is proven.39

      Berg and Janke have introduced a cluster version of the Wang-Landau algorithm together with a subsequent multibondic simulation. This method improves the efficiency of the conventional WL or multicanonical approach by power laws in the lattice size for systems such as 2D and 3D Ising models.19

      Although several authors state that the WL algorithm converges,32,38 one of the limitations of the method, and of the subsequent improved algorithms based on it, is the saturation of the error.

      The saturation of the error, and therefore the nonconvergence of the WL method, was also suggested by Landau and co-workers in Refs. 37 and 40. In particular, the N-fold way variation introduced by Malakis et al.41 does not avoid the saturation of the error.

      In fact, in those methods in which the refinement parameter decreases faster than $1/t$ (with t the Monte Carlo time), the calculated density of states reaches a constant value at long times, and therefore the error saturates. To overcome this limitation, we recently introduced a modified algorithm in which the refinement parameter is scaled down as $1/t$ instead of exponentially.42 This new algorithm allows the density of states to be calculated faster and more accurately than with the original WL algorithm, because the calculated density of states approaches the exact value asymptotically.

      In this work we present a theoretical demonstration of the saturation of the error in the WL algorithm and deduce analytically the optimum choice of the refinement parameter F as $F(t) = 1/t$, without using the histogram flatness condition as was done in Ref. 42. The resulting algorithm is more efficient than the original WL method and other variations such as those based on the N-fold way algorithm.30,41 Since the exact density of states is, in general, rarely known, we implement all the calculations on a two-dimensional L×L Ising model, where g(E) is exactly known for sizes up to L=50.43

      The outline of the paper is as follows. In Sec. II, we introduce the time behavior of the Wang-Landau algorithm, demonstrating the origin of the saturation of the error. In Sec. III, we give the analytical basis of the $1/t$ algorithm introduced previously in Ref. 42. In Sec. IV, we discuss the dynamical behavior of the N-fold way version of the WL algorithm, introducing the corresponding $1/t$ modification of our algorithm within that scheme. Finally, in Sec. V, we discuss the results and give our conclusions.

       

      In the original Wang-Landau algorithm,20 an initial energy range of interest is identified, $E_{\min} \le E \le E_{\max}$, and a random walk is performed in this range. During the random walk, two histograms are updated: one for the density of states g(E), which represents the current or running estimate, and one for the number of visits to distinct energy states, H(E). Before the simulation begins, H(E) is set to zero and g(E) is set to unity, both uniformly. The random walk is performed by choosing an initial state i in the energy range $E_{\min} \le E \le E_{\max}$. Trial moves are then attempted, and moves are accepted according to the transition probability

      $p(E_i \rightarrow E_f) = \min\left[1, \frac{g(E_i)}{g(E_f)}\right]$,
      (1) 
      where $E_i$ ($E_f$) is the initial (final) energy and $p(E_i \rightarrow E_f)$ is the transition probability from the energy level $E_i$ to $E_f$. Whenever a trial move is accepted, the histogram entry corresponding to the new state n is incremented according to $H(E_n) = H(E_n) + 1$ and $g(E_n) = g(E_n) \times f$, where f is the convergence factor, generally initialized as $f_0 = e$. If a move is rejected, the new configuration is discarded and the histogram entry corresponding to the old configuration is incremented according to $H(E_o) = H(E_o) + 1$; at the same time the density of states is incremented according to $g(E_o) = g(E_o) \times f$. This process is repeated until the energy histogram becomes sufficiently flat. When that happens, the energy histogram H is reset to zero everywhere and the convergence factor is decreased, usually according to $f_{k+1} = \sqrt{f_k}$, where $f_k$ is the convergence factor corresponding to stage k. The process is continued until f becomes sufficiently close to 1 [say, $f < \exp(10^{-8})$].
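      The update rules above can be made concrete with a minimal Python sketch of the original WL loop for the 2D Ising model with periodic boundaries. This is an illustration only, not the authors' code: the function name, the chunk length between flatness checks, the safety cap on stages, and the loose stopping factor are our own choices, and the sketch works with $S(E) = \ln g(E)$ to keep numbers inside double precision.

```python
import math
import random

def wang_landau_2d_ising(L=4, flatness=0.8, f_final=1e-2, seed=0):
    """Illustrative sketch of the original WL algorithm (2D Ising,
    periodic boundaries).  Stores S(E) = ln g(E) to avoid overflow."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def total_energy():
        e = 0
        for i in range(L):
            for j in range(L):   # count each bond once (right and down)
                e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
        return e

    S, H = {}, {}            # running entropy ln g(E) and visit histogram
    F = 1.0                  # F = ln f; initial convergence factor f = e
    E = total_energy()
    for _stage in range(100):               # safety cap on refinement stages
        if F <= f_final:                    # f close enough to 1: stop
            break
        for _ in range(500 * L * L):        # a chunk of trial moves
            i, j = rng.randrange(L), rng.randrange(L)
            dE = 2 * spins[i][j] * (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                                    + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            En = E + dE
            # accept with probability min[1, g(E)/g(E_new)], Eq. (1)
            if math.log(rng.random() + 1e-300) < S.get(E, 0.0) - S.get(En, 0.0):
                spins[i][j] *= -1
                E = En
            S[E] = S.get(E, 0.0) + F        # g(E) <- g(E) * f
            H[E] = H.get(E, 0) + 1
        if min(H.values()) > flatness * (sum(H.values()) / len(H)):
            H = {}                          # histogram flat enough: reset it
            F /= 2.0                        # and take f <- sqrt(f)
    return S
```

      A production run would use a much smaller final factor and a per-energy-level flatness criterion; the sketch only shows the structure of the acceptance rule, the simultaneous update of H and g, and the histogram-triggered refinement of f.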

      To analyze the dynamical behavior of the WL algorithm, it is necessary to define several quantities. The histogram H(E,t) is defined analogously to the histogram H used in the WL algorithm, but is never reset during the whole simulation. Its mean value after time t is defined as

      $\overline{H}(t) = \frac{1}{N}\sum_{E} H(E,t)$,
      (2) 
      where N is the number of states of different energies and H(E,t) stands for the mean height of the histogram in E at time t (with $t = j/N$, where j is the number of trial moves attempted). The time t is related to the number of accessible energy states. For example, in a two-dimensional Ising model the time t is given by $t = j/(L^2 - 1)$, where L is the size of the system (if we consider only one energy window to calculate the DOS).

      Instead of the density of states, it is more convenient to define $S(E,t) = \ln(g(E,t))$, usually named the entropy. This definition is generally used in order to fit all possible values of g(E) into double precision numbers. Its mean value is defined as

      $\overline{S}(t) = \frac{1}{N}\sum_{E} S(E,t)$.
      (3) 

      The error ϵ(E,t) is defined as

      $\epsilon(E,t) = \left|1 - \frac{\ln[g_n(E,t)]}{\ln[g_{ex}(E)]}\right| = \left|\frac{S_{ex}(E) - S_n(E,t)}{S_{ex}(E)}\right|$
      (4) 
      and its mean value as
      $\overline{\epsilon}(t) = \frac{1}{N-1}\sum_{E} \epsilon(E,t)$.
      (5) 
      gn(E,t) is normalized with respect to the exact DOS at the ground state, that is,
      $g_n(E,t) = g(E,t)\,\frac{g_{ex}(E_G)}{g(E_G,t)}$,
      (6) 
      where $g(E,t)$, $g_{ex}(E_G)$, and $g(E_G,t)$ are the simulated value of the DOS, the exact value at the ground state, and the simulated value at the ground state, respectively. Therefore,
      $S_n(E,t) = \ln[g_n(E,t)] = S(E,t) - S(E_G,t) + S_{ex}(E_G)$.
      (7) 
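      As a minimal illustration of this normalization, the sketch below shifts a running entropy table so that it matches the exact value $\ln g_{ex}(E_G)$ at the ground state, as in Eq. (7). The function name is our own; $g_{ex}(E_G) = 2$ holds for the 2D Ising ferromagnet, whose ground state is doubly degenerate (all spins up or all down).

```python
import math

def normalized_entropy(S, E_ground, g_exact_ground=2.0):
    """Shift the running entropy S(E,t) so that, at the ground state,
    it equals the exact value ln g_ex(E_G) [Eq. (7)].  Relative
    differences S(E) - S(E') are left unchanged by the shift."""
    shift = math.log(g_exact_ground) - S[E_ground]
    return {E: s + shift for E, s in S.items()}
```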

      It is also convenient to define the function $F = \ln[f]$; then $F_{k+1} = F_k/m$ (with $m > 1$).

      Note that Fk is always positive and monotonically decreasing.

      It is necessary to emphasize that Eq. (6) breaks down for any classical continuous system, for which the entropy diverges both at very low and very high energies; therefore other normalization schemes must be introduced in order to avoid the divergence of the entropy, for instance, choosing another reference state with energy E′ for which $g_{ex}(E')$ is finite and nonzero.

      Then S(E,t) can be written as

      $S(E,t) = \sum_{i=1}^{t} [H(E,i) - H(E,i-1)]\,F_{i-1}$,
      (8) 
      where $F_0$ = const [usually, $F_0 = \log(e) \approx 0.4342944$].

      The flatness condition of the histogram H(E,t) and Eq. (1) guarantee that $F_{i+1}$ takes the value $F_i$ a finite number of times before eventually decreasing to $F_i/m$. Moreover, $H(E,i) - H(E,i-1)$ is finite. Since $\sum_{k=1}^{\infty} F_k$ is convergent, the series S(E,t) is also convergent for any value of $m > 1$.
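      The convergence of $\sum_k F_k$ is easy to check numerically: with $F_{k+1} = F_k/m$ the partial sums are bounded by the geometric limit $F_0\,m/(m-1)$, so any quantity built from finitely many copies of each $F_k$, such as S(E,t), stays bounded. The sketch below (function name our own) just accumulates the partial sums.

```python
def wl_entropy_budget(F0=1.0, m=2.0, terms=200):
    """Partial sum of sum_k F_k with F_{k+1} = F_k/m: a geometric
    series converging to F0*m/(m-1).  This bounds the total entropy
    the WL update can ever add to any S(E) once refinement starts."""
    total, F = 0.0, F0
    for _ in range(terms):
        total += F
        F /= m
    return total
```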

      Let us rewrite the error given in Eq. (4) as

      $\epsilon(E,t) = \left|\frac{\Delta S(E,t) - \Delta S_{ex}(E)}{S_{ex}(E)}\right|$,
      (9) 
      where $\Delta S(E,t) = S(E,t) - S(E_G,t)$ and $\Delta S_{ex}(E) = S_{ex}(E) - S_{ex}(E_G)$. Since S(E,t) and S(E_G,t) are convergent, $\Delta S(E,t) \rightarrow$ const when $t \rightarrow \infty$. However, due to the stochastic character of the Markov process, the two quantities are not statistically equal, $\Delta S(E,t) \neq \Delta S_{ex}(E)$; hence the mean value of the error goes to a constant limit as time increases to infinity.

      Let us show, as an example, the calculation of the error for the 2D Ising model at the boundaries of the energy range, $E = \pm 2L^2$. Note that the ground state energy is $E = -2L^2$ for this model. It is well known that $S_{ex}(2L^2) = S_{ex}(-2L^2) = \ln(2)$, hence

      $\epsilon(2L^2,t) = \left|\frac{S(2L^2,t) - S(-2L^2,t)}{S_{ex}(2L^2)}\right|$.
      (10) 

      Then, using Eq. (8) and the convergence argument, one can conclude that $S(2L^2,t) \rightarrow K(2L^2)$ and $S(-2L^2,t) \rightarrow K(-2L^2)$ when $t \rightarrow \infty$, where $K(2L^2)$ and $K(-2L^2)$ are constants. Due to the stochastic character of the process, these quantities are not statistically equal; therefore $\epsilon(2L^2,t)$ saturates, and the mean value of the error $\overline{\epsilon}$ saturates as well. Similar arguments can be used to demonstrate the saturation of the error for each energy level. In the present case of the 2D Ising model, the exact value of the density of states $g_{ex}(E)$ was obtained from the method developed in Ref. 43.

      In conclusion, the WL algorithm does not converge to the true value of the DOS because the series given in Eq. (8) is convergent.

       

      In order to avoid the saturation of the error, the series given in Eq. (8) must be made divergent, with $F_k$ monotonically decreasing and tending to zero as fast as possible. The last condition is not arbitrary, since it guarantees that $g(E,t) \rightarrow g_{ex}(E)$ rapidly, reducing the computational time.

      A family of series fulfilling these conditions is $\sum_{k=1}^{\infty} 1/k^p$ with $p \le 1$, which diverges; among them, the fastest-decreasing choice is $p = 1$ (the harmonic series).
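      This distinction between $p \le 1$ and $p > 1$ can be checked with the partial sums directly (illustrative sketch, function name our own): for $p = 2$ the sums level off near the finite limit $\pi^2/6$, while for $p = 1$ they keep growing like $\ln t$.

```python
def partial_sum(p, terms):
    """Partial sum of sum_{k=1}^{terms} 1/k**p."""
    return sum(1.0 / k ** p for k in range(1, terms + 1))
```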

      In order to make the WL algorithm more efficient, the refinement parameter must take, for long times, the following time dependence:

      $F(t) = \frac{1}{t}$,
      (11) 
      which is updated at each Monte Carlo step.

      Initially, $F_0 = 1$; then, at short times, and as was proposed previously,42 we choose $F(t) > 1/t$ (i.e., $F_{i+1} = F_i/2$, as in the original WL algorithm), in such a way that the algorithm visits all the energy configurations as fast as possible. As soon as $F_i \le 1/t$, that is, at time $t_c$, the algorithm changes and the refinement parameter takes the time dependence described above. Then, using Eq. (8), the entropy takes the form

      $S(E,t) = \sum_{k=1}^{t_c - 1} [H(E,k) - H(E,k-1)]\,F_k + \sum_{k=t_c}^{t} [H(E,k) - H(E,k-1)]\,\frac{1}{k}$,
      (12) 
      where, for $t < t_c$, $F_{k+1} = F_k/2$ or $F_{k+1} = F_k$.

      From the definition in Eq. (3), replacing the expression of S(E,t) given in Eq. (12), we obtain

      $\overline{S}(t) = \frac{1}{N}\sum_{E} \left\{ \sum_{k=1}^{t_c - 1} [H(E,k) - H(E,k-1)]\,F_k + \sum_{k=t_c}^{t} [H(E,k) - H(E,k-1)]\,\frac{1}{k} \right\}$.
      (13) 
      We can rearrange the last equation, summing first over the energy range and then considering that, on average, $\overline{H}(t) - \overline{H}(t-1) \approx 1$, to finally obtain
      $\overline{S}(t) \propto \ln(t)$.
      (14) 
      This quantity is needed to show the dynamics of the algorithm, not to obtain the observables, which are calculated using the expression of the entropy given in Eq. (12).
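      The crossover described above can be sketched as a schedule on its own, independently of any spin simulation. In the sketch below (names our own), the flatness checks of the early WL stage are modeled, purely for illustration, as halvings at fixed time intervals; once a halving would drop F below $1/t$, i.e. at the crossover time $t_c$, the schedule switches to $F(t) = 1/t$ exactly.

```python
def one_over_t_schedule(t_max, halve_every=5):
    """Sketch of the 1/t refinement schedule.  Early on F is halved
    (here after every `halve_every` time units, standing in for a
    histogram-flatness check); as soon as halving would undershoot
    1/t, the schedule switches to F(t) = 1/t for the rest of the run.
    Returns the history [(t, F), ...]."""
    F, crossed = 1.0, False
    next_halve = halve_every
    history = []
    t = 1
    while t <= t_max:
        if crossed:
            F = 1.0 / t                 # asymptotic regime: F = 1/t
        elif t >= next_halve:
            if F / 2.0 <= 1.0 / t:      # halving would undershoot 1/t
                crossed = True          # crossover time t_c reached
                F = 1.0 / t
            else:
                F /= 2.0                # WL-style refinement stage
            next_halve += halve_every
        history.append((t, F))
        t += 1
    return history
```

      Note that F is monotonically nonincreasing throughout, and after $t_c$ it follows $1/t$ exactly, so the series of increments becomes the divergent harmonic tail required above.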

      In Fig. 1(a), we show a comparison between the error $\epsilon(t)$ calculated using the WL method with different flatness conditions and the error obtained using our algorithm, for the Ising model on a 2D square lattice with $L = 8$ and $F_{k+1} = F_k/m$ (with $m = 2$). In the inset, the behavior of $\overline{S}(t)$ is shown in a semilog plot for our algorithm. As one can see, after the initial time $t_c$ the function $\overline{S}$ takes the form given in Eq. (14). In Fig. 1(b) we show the behavior of the error as a function of time, for different values of the parameter m and a flatness condition of 95%. The accuracy of the WL method decreases as m increases.

      FIG. 1.

      (a) Comparison between the error $\epsilon(t)$ calculated using the original WL method with different flatness conditions and the error obtained using our $1/t$ algorithm, for a two-dimensional Ising model with nearest-neighbor interactions on a square lattice with $L = 8$. In the inset, $\overline{S}(t)$ is shown vs time t, in a semilog plot, to emphasize its logarithmic behavior. (b) Behavior of the error as a function of time, for different values of the parameter m, for a flatness condition of 95%.

       

      Note that the error, in our algorithm, is proportional to $1/\sqrt{t}$ instead of $1/t$, as is predicted by Earl and Deem in Ref. 38.

      Note that instead of Eq. (11), we can propose a more general time dependence for the refinement parameter as

      $F(t) = \frac{c}{t^p}$.
      (15) 
      However, a simple observation of the behavior of F(t) as a function of p and c, and of the corresponding effect on the error $\epsilon(t)$, shows that the best choice to optimize the efficiency of the algorithm is $p = 1$ and $c = 1$. In fact, in Fig. 2(a) we show the function F(t) as a function of t for different values of the exponent p with $c = 1$. The effect on the error $\epsilon(t)$ is shown in Fig. 2(b). For $p \le 1$, the error goes as $\epsilon \propto \sqrt{F(t)}$ for long times (asymptotic regime); one also observes that the value which optimizes the error is $p = 1$. As discussed above, for $p > 1$ the error saturates. In Figs. 3(a) and 3(b) we show the function F(t) and the corresponding error $\epsilon(t)$ as a function of t, for $p = 1$ and three different values of c ($c = 0.1, 1, 10$). The error goes as $\epsilon(t) \propto \sqrt{c/t}$ for $c \ge 1$ and changes its time behavior for $c < 1$; in that case the error follows a power-law dependence with an exponent bigger than 0.5. However, in all cases the error does not saturate.

      FIG. 2.

      (a) Dynamical behavior of F(t) for different values of the exponent p and c=1. (b) Error ϵ(t) vs time t, for the same values of p and c=1 [see Eq. (15)].

       

      FIG. 3.

      (a) Dynamical behavior of F(t) for different values of c and p=1. (b) Error ϵ(t) vs time t, for the same values of c and p=1 [see Eq. (15)].

       

      In Fig. 4 we show the error ϵ(t) versus the refinement parameter F calculated using the original WL algorithm, for different flatness conditions.

      FIG. 4.

      Error ϵ(t) vs the refinement parameter, calculated using the original WL method, for different values of the flatness condition (from top to bottom, 70%, 80%, 90%, 95%, 97%, and 99%).

       

      Note that the behavior of the error follows the law $\sqrt{F}$ before the error saturates, confirming conclusion (ii) predicted in Ref. 32. On the other hand, no matter how strict the flatness condition is, the error saturates in all cases for smaller values of F, demonstrating that the calculated DOS does not converge to the exact value; this fact is in contradiction with the third conclusion predicted by Zhou and Bhatt in Ref. 32.

       

      In this section we discuss the dynamical behavior of the N-fold way version of the WL algorithm,30,41 in order to show that in this case the error also saturates.

      To overcome this problem we introduce a new version of our method, the N-fold way $1/t$ algorithm. As in the original version of the WL N-fold way algorithm,30 we assume a spin system which may be in state σ with energy $E \in I = [E_{\min}, E_{\max}]$, whereby I denotes the energy range for which we wish to estimate g(E). All spins are then partitioned into classes according to their energetic local environment, i.e., the energy difference $\Delta E_i$ a spin flip would cause. For the special case of the two-dimensional nearest-neighbor Ising model, each spin belongs to one of only $M = 10$ classes. The total probability P of any spin of class i being flipped is given by

      $P(\Delta E_i) = n(\sigma, \Delta E_i)\,p(E \rightarrow E + \Delta E_i), \quad i = 1, \ldots, M$,
      (16) 
      with $n(\sigma, \Delta E_i)$ being the number of spins of state σ which belong to class i, and $p(E \rightarrow E + \Delta E_i)$ being given by
      $p(E \rightarrow E + \Delta E_i) = \begin{cases} \min\left(1, \frac{g(E)}{g(E + \Delta E_i)}\right), & \text{if } E + \Delta E_i \in I \\ 0, & \text{if } E + \Delta E_i \notin I. \end{cases}$
      (17) 
      To determine the class from which to flip a spin, one calculates the numbers
      $Q_m = \sum_{i \le m} P(\Delta E_i), \quad m = 1, \ldots, M, \quad \text{and} \quad Q_0 = 0$,
      (18) 
      which are the integrated probabilities for a spin flip within the first m classes. Hence a class is selected by generating a random number RND such that $0 < \mathrm{RND} \le Q_M$, and taking class m if $Q_{m-1} < \mathrm{RND} \le Q_m$. The spin to be flipped is chosen from this class with equal probabilities. The numbers $n(\sigma, \Delta E_i)$ change upon flipping the spin: due to the flip, the spin and its interacting neighbors will change classes, and correspondingly the numbers $n(\sigma, \Delta E_i)$ will differ from their predecessors. Finally, one has to determine the average lifetime τ of the resulting state, i.e., one has to find out how many times the move just made would be rejected on average in the usual update scheme. The probability that the first random number would produce a flip is $\hat{P} = Q_M/L^2$. Therefore, the probability that exactly n random numbers are needed to produce a new configuration is
      $\overline{P}_n = \hat{P}(1 - \hat{P})^{n-1}$.
      (19) 
      Thus the average lifetime becomes
      $\tau = \sum_{n=1}^{\infty} n \overline{P}_n = \sum_{n=1}^{\infty} n \hat{P} (1 - \hat{P})^{n-1} = \frac{1}{\hat{P}} = \frac{L^2}{Q_M}$.
      (20) 
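      Equation (20) is just the mean of a geometric distribution and can be verified by direct simulation (illustrative sketch; the function name, parameters, and trial count are our own):

```python
import random

def average_lifetime_mc(P_hat, trials=200000, seed=3):
    """Monte Carlo check of Eq. (20): if each attempted spin flip is
    accepted with probability P_hat = Q_M / L^2, the number of attempts
    up to and including the first acceptance is geometrically
    distributed, with mean 1/P_hat = L^2/Q_M."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n = 1
        while rng.random() >= P_hat:    # rejected attempt
            n += 1
        total += n
    return total / trials
```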

      Based on these definitions, we can describe the N-fold way version of the $1/t$ algorithm as follows.

      • (i)  Choose an initial configuration and set $H(E) = 0$, $S(E) = 0$ for all E; set $t = 0$, $F_0 = \log(e) \approx 0.4342944$, and also fix $F_{final}$.
      • (ii)  Determine (update) the probabilities $p(E \rightarrow E + \Delta E_i)$ and the $Q_m$'s of the (initial) configuration using Eqs. (16)–(18).
      • (iii)  Determine the average lifetime τ of the (initial) configuration via Eq. (20).
      • (iv)  Increment the histogram, the density of states, and the time t, and update $f_i$:
        $H(E) \rightarrow H(E) + 1$,
        $S(E) \rightarrow S(E) + \Delta S(E)$,
        $t \rightarrow t + \tau/N$,
        $f_i \rightarrow f_{i+1}$,
        with
        $\Delta S(E) = \begin{cases} F_i \tau, & \text{if } F_i \tau \le F_0 \\ F_0, & \text{if } F_i \tau > F_0, \end{cases}$
        (21)
        and
        $F_{i+1} = \begin{cases} F_i, & \text{if } F_i \tau \le F_0 \\ \Delta S(E)/\tau, & \text{if } F_i \tau > F_0. \end{cases}$
        (22)
        In the case where $F_i/F_{i+1} > 2$, we set
        $F_{i+1} \rightarrow F_i/2$.
        (23)
      • (v)  After some fixed number of sweeps, check H(E); if $H(E) \neq 0\ \forall E$, then refine $F_i$ according to $F_{i+1} = F_i/2$ and reset H(E).
      • (vi)  If $F_{i+1} \le 1/t$, then set $F_{i+1} = F = 1/t$. From then on, H(E) and Eqs. (22) and (23) are not used in the rest of the simulation, while from Eq. (21) we only use $\Delta S(E) = F_i \tau$.
      • (vii)  If $F < F_{final}$, the simulation stops.
      • (viii)  Determine the $Q_m$'s [the $p(E \rightarrow E + \Delta E_i)$'s are not updated here].
      • (ix)  Choose and flip spin as described above.
      • (x)  Go to (ii).
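      Step (iv) can be sketched as a small helper implementing Eqs. (21)–(23) (the function name is our own; $F_0 = \log(e)$ as in step (i)):

```python
F0_DEFAULT = 0.4342944          # F_0 = log(e), as in step (i)

def nfold_update(F_i, tau, F0=F0_DEFAULT):
    """Sketch of step (iv).  The entropy increment Delta S = F_i * tau
    is capped at F_0 [Eq. (21)] so that a long lifetime tau cannot push
    g(E) so high that Q_M vanishes; when the cap is active, F shrinks
    as Delta S / tau [Eq. (22)], but never by more than a factor of 2
    per update [Eq. (23)].  Returns (Delta S, F_{i+1})."""
    if F_i * tau <= F0:
        dS = F_i * tau              # Eq. (21), uncapped branch
        F_next = F_i                # Eq. (22): F unchanged
    else:
        dS = F0                     # Eq. (21), capped branch
        F_next = dS / tau           # Eq. (22)
    if F_i / F_next > 2.0:          # Eq. (23): limit the reduction of F
        F_next = F_i / 2.0
    return dS, F_next
```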

      The initial value of the refinement parameter $F_0$ is of crucial importance at the beginning of the algorithm and can determine the subsequent dynamics. In fact, we observe that if $F_0$ is chosen too small, the algorithm wastes a long time visiting the most accessible states. On the contrary, if $F_0$ is too large, the algorithm wastes a long time visiting the least accessible states. The most efficient choice is thus $F_0 \approx 0.5$.

      As in the algorithm introduced in Ref. 30, $\Delta S(E)$ is kept below or equal to $F_0$, preventing $Q_M = 0$, which would terminate the iteration procedure described above. The time t is the accumulated average lifetime, and $1/t$ is used as the refinement parameter in our algorithm, similarly to the $1/t$ algorithm introduced previously in Ref. 42.

      In Fig. 5(b), we have plotted the dynamical behavior of the average error $\epsilon(t)$ as a function of the Monte Carlo time. Both the original and the N-fold way versions of the WL algorithm saturate, while our $1/t$ version and the corresponding N-fold way variation do not. A comparison of these curves demonstrates that our methods are in both cases more efficient than the original algorithms. In Fig. 5(a), we have plotted the corresponding refinement parameter for each algorithm. As one observes, $1/t$ behaves as a limiting curve: as soon as the refinement parameter for the WL algorithm, in both the original and the N-fold way versions, falls below this limiting curve, the corresponding errors saturate.

      FIG. 5.

      (a) Refinement parameter F(t) vs time and (b) dynamical behavior of the error for the four methods mentioned in the text.

       

      Clearly, the error saturates also in the N-fold way variation of the WL algorithm. The reason for this behavior was explained in Sec. II of the present paper. In fact, on average $F_i \tau_i > F_{i+1} \tau_{i+1}$, and since S(E,t) can be written as a series with kernel $F_i \tau_i$, the series representing this function is convergent. By the same argument, to avoid the saturation of the error we have to choose the refinement parameter as $F = 1/t$.

       

      In this work, we discussed one of the weaknesses of the well-known Wang-Landau algorithm,20 namely, the saturation of the error, or the nonconvergence of the calculated density of states to the exact value. We presented an analytical demonstration of the nonconvergence to the exact DOS in the original version of the WL algorithm. We have also shown that the saturation of the error appears not only in the original version of the WL algorithm but also in the N-fold way variation of that method.30 Alternatively, we deduced analytically the way to avoid the saturation of the error and gave an adequate form for the refinement parameter. This new algorithm, the so-called $1/t$ algorithm already introduced by us in Ref. 42, was then extended within the N-fold way scheme.

      The nonconvergence of the original WL algorithm and of previous versions, including the N-fold way method, may seem very difficult to believe. However, in view of our results we are able to discuss some of the statements raised by other authors.32,38 In particular, some of the conclusions presented by Zhou and Bhatt32 are not satisfied completely by the original WL algorithm. In fact, the second conclusion is true before the error saturates; after that, it is no longer valid. On the other hand, the correlation between adjacent records in the histogram introduces a systematic error which is not reduced by small F, as is demonstrated in the present paper; therefore the third conclusion is not valid.

      It is interesting to emphasize that Landau and co-workers in Ref. 40 suggest that $\ln(f_{final})$ cannot be chosen arbitrarily small, or the modified $\ln[g(E)]$ will not differ from the unmodified one within the number of digits of the double precision numbers used in the simulation. If this happens, the algorithm no longer converges to the true value, and the program may run forever. If $\ln(f_{final})$ is within the double precision range but too small, the calculation might take excessively long to finish.

      Summarizing, in this work we have presented an analytical proof of the origin of the error saturation in the WL algorithms and a method to avoid it. We have chosen, as a test laboratory, a discrete system, namely, the Ising model. However, the mathematical arguments about the source of the error in the WL algorithm seem to be more general and can be extended to all algorithms which use a refinement parameter that changes, according to the flatness condition of the energy histogram, with a law that decreases faster than $1/t$. In all these cases, saturation of the error in the calculation of the density of states is guaranteed, because the entropy S(E,t) can be expressed as a series in which F(t) is the kernel: in those algorithms where $F_k = F_{k-1}/m$ (with any value of $m > 1$), the resulting series converges to a finite value, and the error saturates.

      The simplest way to avoid the saturation of the error is to choose a refinement parameter that depends on time as $F(t) = t^{-p}$ with $p \le 1$ (the optimum choice is $p = 1$). In these cases the series becomes divergent, and the calculated density of states approaches the exact values asymptotically as $t^{-p/2}$. This choice results in a more accurate and more efficient algorithm than other methods.

      Finally, we have considered the extension of our algorithm to the N-fold way method. It was shown that the adequate refinement parameter is $F = 1/t$, where t is the accumulated average lifetime. This definition ensures that the DOS approaches the exact value asymptotically, more efficiently than with the original N-fold way WL algorithm, which saturates at long times.

       
      1. M. E. J. Newman and G. T. Barkema,   (Oxford University Press, Oxford, 1999).
      2. D. P. Landau and K. Binder,   (Cambridge University Press, Cambridge, 2000).
      3. N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller,   http://dx.doi.org/10.1063/1.1699114 21, 1087 (1953).
      4. R. H. Swendsen and J.-S. Wang,   http://dx.doi.org/10.1103/PhysRevLett.58.86 58, 86 (1987);
        U. Wolff,   http://dx.doi.org/10.1103/PhysRevLett.62.361 62, 361 (1989).
      5. B. A. Berg and T. Neuhaus,   http://dx.doi.org/10.1103/PhysRevLett.68.9 68, 9 (1992).
      6. B. A. Berg and T. Celik,   http://dx.doi.org/10.1103/PhysRevLett.69.2292 69, 2292 (1992).
      7. B. A. Berg and T. Neuhaus,   http://dx.doi.org/10.1016/0370-2693(91)91256-U 267, 249 (1991).
      8. W. Janke and S. Kappler,   http://dx.doi.org/10.1103/PhysRevLett.74.212 74, 212 (1995).
      9. W. Janke,   http://dx.doi.org/10.1142/S0129183192000762 3, 375 (1992).
      10. B. A. Berg, U. Hansmann, and T. Neuhaus,   http://dx.doi.org/10.1103/PhysRevB.47.497 47, 497 (1993).
      11. W. Janke,   http://dx.doi.org/10.1016/S0378-4371(98)00014-4 254, 164 (1998).
      12. B. A. Berg and W. Janke,   http://dx.doi.org/10.1103/PhysRevLett.80.4771 80, 4771 (1998).
      13. B. A. Berg,   63, 982 (1998).
      14. B. A. Berg, T. Celik, and U. Hansmann,   http://dx.doi.org/10.1209/0295-5075/22/1/012 22, 63 (1993).
      15. J. Lee,   http://dx.doi.org/10.1103/PhysRevLett.71.211 71, 211 (1993).
      16. P. M. C. Oliveira, T. J. P. Penna, and H. J. Herrmann,   http://dx.doi.org/10.1007/s100510050172 26, 677 (1996).
      17. P. M. C. Oliveira, T. J. P. Penna, and H. J. Herrmann,   http://dx.doi.org/10.1007/s100510050172 1, 205 (1998).
      18. P. M. C. Oliveira,   http://dx.doi.org/10.1007/s100510050532 6, 111 (1998).
      19. B. A. Berg and W. Janke,   http://dx.doi.org/10.1103/PhysRevLett.98.040602 98, 040602 (2007).
      20. F. Wang and D. P. Landau,   http://dx.doi.org/10.1103/PhysRevLett.86.2050 86, 2050 (2001);
        F. Wang and D. P. Landau,  http://dx.doi.org/10.1103/PhysRevE.64.056101 64, 056101 (2001);
        D. P. Landau and F. Wang,   http://dx.doi.org/10.1016/S0010-4655(02)00374-0 147, 674 (2002).
      21. N. Rathore and J. J. de Pablo,   http://dx.doi.org/10.1063/1.1463059 116, 7225 (2002);
        N. Rathore, T. A. Knotts, and J. J. de Pablo,   http://dx.doi.org/10.1063/1.1542598 118, 4285 (2001).
      22. Y. W. Li, T. Wüst, D. P. Landau, and H. Q. Lin,   http://dx.doi.org/10.1016/j.cpc.2007.06.001 177, 524 (2007).
      23. Q. Yan, R. Faller, and J. J. de Pablo,   http://dx.doi.org/10.1063/1.1463055 116, 8745 (2002).
      24. M. Chopra and J. J. de Pablo,   http://dx.doi.org/10.1063/1.2178319 124, 114102 (2006);
        E. B. Kim, R. Feller, Q. Yan, N. L. Abbott, and J. J. de Pablo,   http://dx.doi.org/10.1063/1.1508365 117, 7781 (2002);
        A. Ethan and J. J. de Pablo,   http://dx.doi.org/10.1063/1.1874792 122, 124109 (2005).
      25. M. S. Shell, P. G. Debenedetti, and A. Z. Panagiotopoulus,   http://dx.doi.org/10.1103/PhysRevE.66.056703 66, 056703 (2002).
      26. C. Yamaguchi and Y. Okabe,   http://dx.doi.org/10.1088/0305-4470/34/42/305 34, 8781 (2001).
      27. Y. Okabe, Y. Tomita, and C. Yamaguchi,   http://dx.doi.org/10.1016/S0010-4655(02)00435-6 146, 63 (2002).
      28. E. A. Mastny and J. J. de Pablo,   http://dx.doi.org/10.1063/1.1874792 122, 124109 (2005).
      29. P. Poulain, F. Calvo, R. Antoine, M. Broyer, and P. Dugourd,   http://dx.doi.org/10.1103/PhysRevE.73.056704 73, 056704 (2006).
      30. B. J. Schulz, K. Binder, and M. Müller,   http://dx.doi.org/10.1142/S0129183102003243 13, 477 (2002).
      31. D. Jayasri, V. S. S. Sastry, and K. P. N. Murthy,   http://dx.doi.org/10.1103/PhysRevE.72.036702 72, 036702 (2005).
      32. C. Zhou and R. N. Bhatt,   72, 025701(R) (2005).
      33. A. Tröster and C. Dellago,   http://dx.doi.org/10.1103/PhysRevE.71.066705 71, 066705 (2005).
      34. C. Zhou, T. C. Schulthess, S. Torbrügge, and D. P. Landau,   http://dx.doi.org/10.1103/PhysRevLett.96.120201 96, 120201 (2006).
      35. S. Trebst, D. A. Huse, and M. Troyer,   http://dx.doi.org/10.1103/PhysRevE.70.056701 70, 056701 (2004).
      36. P. Dayal, S. Trebst, S. Wessel, D. Würtz, M. Troyer, S. Sabhapandit, and S. N. Coppersmith,   http://dx.doi.org/10.1103/PhysRevLett.92.097201 92, 097201 (2004).
      37. H. K. Lee, Y. Okabe, and D. P. Landau,   http://dx.doi.org/10.1016/j.cpc.2006.02.009 175, 36 (2006).
      38. D. Earl and M. Deem,   http://dx.doi.org/10.1021/jp045508t 109, 6701 (2005).
      39. B. A. Berg,   http://dx.doi.org/10.1016/S0010-4655(03)00245-5 153, 397 (2003).
      40. D. P. Landau, S.-H. Tsai, and M. Exler,   http://dx.doi.org/10.1119/1.1707017 72, 1294 (2004).
      41. A. Malakis, S. S. Martinos, I. A. Hadjiagapiou, and A. S. Peratzakis,   http://dx.doi.org/10.1142/S0129183104006182 15, 729 (2004).
      42. R. E. Belardinelli and V. D. Pereyra,   http://dx.doi.org/10.1103/PhysRevE.75.046701 75, 046701 (2007).
      43. P. D. Beale,   http://dx.doi.org/10.1103/PhysRevLett.76.78 76, 78 (1996).
       