
On Tue, May 4, 2010 at 4:06 PM, gerardob <gberbeglia@gmail.com> wrote:
Hello, I have written some very simple code that computes the gradient of a general function by finite differences. Keeping the same idea, I would like to modify the code using numpy to make it faster. Any ideas? Thanks.
def grad_finite_dif(self, x, user_data=None):
    assert len(x) == self.number_variables
    points = []
    for j in range(self.number_variables):
        points.append(x.copy())
        points[len(points)-1][j] = points[len(points)-1][j] + 0.0000001
    delta_f = []
    counter = 0
    for j in range(self.number_variables):
        delta_f.append((self.eval(points[counter]) - self.eval(x)) / 0.0000001)
it looks like you are evaluating the same point, self.eval(x), several times
        counter = counter + 1
    return array(delta_f)
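As a sketch only (not from the thread): if self.eval could be made to accept a whole batch of points at once, which is an assumption since the snippet above evaluates one point per call, all perturbed points can be built with an identity matrix and the Python loop disappears entirely:

import numpy as np

def grad_finite_dif_vectorized(self, x, eps=1e-7):
    # Hypothetical vectorized variant: assumes self.eval can evaluate a
    # 2-D array of points (one point per row) in a single call.
    x = np.asarray(x, dtype=float)
    points = x + eps * np.eye(len(x))   # row j perturbs coordinate j by eps
    return (self.eval(points) - self.eval(x)) / eps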
That's what I used as a pattern for a gradient function:

#from scipy.optimize
def approx_fprime(xk, f, epsilon, *args):
    f0 = f(*((xk,)+args))
    grad = np.zeros((len(xk),), float)
    ei = np.zeros((len(xk),), float)
    for k in range(len(xk)):
        ei[k] = epsilon
        grad[k] = (f(*((xk+ei,)+args)) - f0)/epsilon
        ei[k] = 0.0
    return grad

Josef
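For what it's worth, a quick check of approx_fprime against a hand-made test function (the quadratic and point below are just for illustration, not from the thread); with epsilon around the square root of machine epsilon the forward difference matches the analytic gradient to several digits:

import numpy as np

def f(x):
    return np.sum(x**2)              # analytic gradient is 2*x

xk = np.array([1.0, -2.0, 3.0])
eps = np.sqrt(np.finfo(float).eps)   # ~1.5e-8, a common forward-difference step
print(approx_fprime(xk, f, eps))     # approximately [ 2., -4.,  6.]
print(2*xk)                          # analytic gradient for comparison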