[SciPy-User] [SciPy-user] Maximum entropy distribution for Ising model - setup?

David Warde-Farley dwf at cs.toronto.edu
Thu Jan 7 14:35:48 EST 2010


On 7-Jan-10, at 4:19 AM, Jordi Molins Coronado wrote:

> However, I would like to restrict the number of possible h_i and J_ij,
> since having complete freedom could become an unwieldy problem. For
> example, I could restrict h_i = H and J_ij = J for all i,j = 1,...,N,
> i != j; or I could partition my nodes, with nodes 1 to M having
> h_i = H1 and J_ij = J1 (i,j = 1,...,M, i != j), nodes M+1 to N having
> h_i = H2 and J_ij = J2 (i,j = M+1,...,N, i != j), and J_ij = J3 for
> i = 1,...,M and j = M+1,...,N.
> If I understand correctly the discussion in the thread shown above,  
> a numerical solution for the inverse problem would be:
> hi_{new} = hi_{old} + K * (<si> - <si>_{emp})
> Jij_{new} = Jij_{old} + K' * (<si*sj> - <si*sj>_{emp})

That's correct; the way you'd usually calculate <si*sj> is by starting
from some state, running several iterations of Gibbs sampling to generate
a new state, measuring s_i * s_j in that state, then running a whole
bunch more steps and gathering another s_i * s_j, and so on, until you
have enough measurements for a decent Monte Carlo approximation. The
Gibbs iterations form a Markov chain whose equilibrium distribution is
P(s_1, s_2, ..., s_N), the distribution of interest; the problem is that
there's no good way to know when you've run sufficiently many Gibbs
steps for your sample to be a draw from the equilibrium distribution P.
However, one can often get away with running only a small fixed number
of steps. There is some analysis of the convergence properties of this
trick here:

	http://www.cs.toronto.edu/~hinton/absps/cdmiguel.pdf
	(refer to the sections on "Visible Boltzmann machines")
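As a concrete illustration of the sampling step above, here is a minimal
NumPy sketch (not from the original thread; the function name, argument
names, and defaults are my own) that estimates <s_i> and <s_i*s_j> for
an Ising model by single-site Gibbs sweeps with burn-in and thinning:

```python
import numpy as np

def gibbs_sample_ising(h, J, n_sweeps=1000, burn_in=100, thin=10, rng=None):
    """Estimate <s_i> and <s_i * s_j> by Gibbs sampling.

    Model: P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j),
    with spins s_i in {-1, +1}, h of shape (N,), and J an (N, N)
    symmetric coupling matrix with zero diagonal.
    """
    rng = rng if isinstance(rng, np.random.Generator) else np.random.default_rng(rng)
    N = len(h)
    s = rng.choice([-1.0, 1.0], size=N)   # random initial state
    mean_s = np.zeros(N)
    mean_ss = np.zeros((N, N))
    n_samples = 0
    for sweep in range(burn_in + n_sweeps):
        for i in range(N):
            # Local field on spin i (diagonal of J is zero, so J[i] @ s
            # only picks up the other spins).
            field = h[i] + J[i] @ s
            # P(s_i = +1 | rest) = exp(field) / (exp(field) + exp(-field))
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[i] = 1.0 if rng.random() < p_up else -1.0
        # Discard burn-in sweeps, then keep every `thin`-th state.
        if sweep >= burn_in and (sweep - burn_in) % thin == 0:
            mean_s += s
            mean_ss += np.outer(s, s)
            n_samples += 1
    return mean_s / n_samples, mean_ss / n_samples
```

There is no principled way to pick burn_in and thin here, which is
exactly the convergence issue discussed above; the contrastive-divergence
trick in the linked paper amounts to using very few sweeps.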

I've never really heard of a situation where you'd want to tie
parameters together like you're suggesting, but it's quite trivial to
implement. Say you wanted to constrain hi and hj to be the same: start
them off at the same initial value and, at every update, use the
following equation instead:

hi_{new} = hj_{new} = hi_{old} + K/2 * (<si> - <si>_{emp})
                               + K/2 * (<sj> - <sj>_{emp})

If you wanted Jij = Jkl, set them to the same initial value and use  
the update

Jij_{new} = Jkl_{new} = Jij_{old} + K'/2 * (<si*sj> - <si*sj>_{emp})
                                  + K'/2 * (<sk*sl> - <sk*sl>_{emp})

Similarly, if you wanted to tie a whole set of parameters together,
you'd average their individual updates and apply that average to all of
them at once.
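The averaged-update rule can be sketched in a few lines of NumPy. This
is a hypothetical helper of my own (not from the thread); it applies the
tied update to the h parameters, with the sign convention following the
equations above, and the J case is exactly analogous with flattened
pair indices:

```python
import numpy as np

def tied_h_update(h, groups, s_model, s_emp, K=0.1):
    """Update h with tied parameters.

    groups: list of index lists; all h_i within one group share a value,
    so each group gets the average of its members' individual updates.
    s_model, s_emp: model and empirical <s_i>, both of shape (N,).
    """
    h = np.asarray(h, dtype=float).copy()
    for idx in groups:
        idx = np.asarray(idx)
        # Average the per-parameter updates K * (<si> - <si>_emp)
        # over the group, then apply it to every member at once.
        h[idx] += K * np.mean(s_model[idx] - s_emp[idx])
    return h
```

With the two-spin group [[i, j]] this reduces to the
hi_{new} = hj_{new} update written above, since the mean of two
updates is exactly the two K/2 terms.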

David


