Finding a fixed point on the simplex
I recently wrote some code <https://github.com/egtaonline/gameanalysis/blob/master/gameanalysis/fixedpoint.py> that uses simplicial subdivision to compute an approximate fixed point of a function on a simplex. It is guaranteed to return an approximate fixed point, but may take exponential time. It is particularly useful for functions that have fixed points but no basin of attraction, and so won't converge via iteration (see the code at the bottom, which looks for a fixed point in a degenerate case of rock paper scissors).

Does this warrant inclusion in scipy? Computing fixed points of continuous functions over compact sets seems useful to many communities, but I see a few issues with blindly putting this in scipy, given that a function for computing fixed points <https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fixed_point.html> already exists:

- The proposed function works in cases where the existing function fails, which is an argument for inclusion.
- The existing function is in optimize, but the proposed one is not an optimization algorithm; it's a search algorithm.
- The proposed function only works for functions on the simplex. Most functions on compact sets should admit homotopies that convert them into functions on the simplex, but the current fixed_point makes no such restriction.
- The proposed function has different parameters than the current one. A few functions in optimize take an `options` dictionary of parameters, but adopting that would change the signature of fixed_point.
- The algorithm is slightly more general than computing a fixed point of a simplex function: it actually finds fully labeled subsimplices for an arbitrary `proper` labeling function. From my very limited understanding, there are other labeling functions which produce approximate solutions to other problems. However, given this more general function, its proper module seems less certain.
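To make the "fully labeled subsimplices" idea concrete, here is a minimal hypothetical sketch (not the gameanalysis implementation, which works in arbitrary dimension) on the 1-simplex {(x, 1 - x)}: label a point 0 if the map does not decrease its first coordinate, and 1 otherwise. An interval whose endpoints carry both labels brackets an approximate fixed point, and bisection shrinks it. The names and the example map are invented for illustration, and the sketch assumes the two simplex corners carry complementary labels (i.e., neither corner is itself exactly fixed).

```python
import numpy as np

def label(x, f):
    # Sperner-style proper labeling on the 1-simplex {(x, 1 - x)}:
    # label 0 if f does not decrease the first coordinate, else 1.
    return 0 if f(x)[0] >= x else 1

def fully_labeled_search(f, tol=1e-8):
    # Shrink a fully labeled interval (endpoints carry both labels)
    # by bisection; any point inside it approximates a fixed point.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if label(mid, f) == 0:
            lo = mid  # keep mid as the label-0 endpoint
        else:
            hi = mid  # keep mid as the label-1 endpoint
    return 0.5 * (lo + hi)

# Example map on the 1-simplex with fixed point at x = 0.5.
f = lambda x: np.array([(x + 1) / 3, 1 - (x + 1) / 3])
print(fully_labeled_search(f))  # ~0.5
```

In higher dimensions the same guarantee comes from Sperner's lemma on a triangulation rather than from interval bisection, which is where the exponential worst case enters.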
In addition, the code as it stands is pure Python, but it is relatively computationally intensive, and so could potentially be sped up by switching to a C or Cython implementation. Conditioned on thinking that this would make a valid PR, is it worth looking into implementing it in a lower-level language, despite the fact that it makes a large number of calls to a supplied Python function?

Thanks,
Erik

Example
~~~~~~~

import numpy as np
from scipy import optimize
from gameanalysis import fixedpoint

A = np.array([[[1, 0, 0], [2, 0, 0], [2, 0, 0]],
              [[0, 1, 0], [1, 0, 0], [1, 1, 0]],
              [[0, 0, 1], [1, 0, 1], [1, 0, 0]],
              [[0, 2, 0], [0, 1, 0], [0, 2, 0]],
              [[0, 1, 1], [0, 0, 1], [0, 1, 0]],
              [[0, 0, 2], [0, 0, 2], [0, 0, 1]]])
B = np.array([[ 0,  0,  0],
              [-2,  1,  0],
              [ 1,  0, -3],
              [ 0,  0,  0],
              [ 0, -3,  1],
              [ 0,  0,  0]])
FP = np.array([0.3129473, 0.40598569, 0.28106701])


def fixed_point_func(x):
    x = np.maximum(x, 0)
    x /= x.sum()
    y = np.sum(np.prod(x ** A, -1) * B, 0)
    z = np.maximum(0, y - y.dot(x)) + x
    z /= z.sum()
    return z


def main():
    close = dict(rtol=1e-3, atol=1e-3)
    x0 = np.ones(3) / 3

    # Verify fixed point
    assert np.allclose(FP, fixed_point_func(FP), **close)

    # Use guaranteed method
    fp = fixedpoint.fixed_point(fixed_point_func, x0, tol=1e-8)
    assert np.allclose(FP, fp, **close)

    # Expect error with scipy method
    try:
        optimize.fixed_point(fixed_point_func, x0)
        assert False, "shouldn't get this far"
    except RuntimeError:
        pass  # Expected

    # Expect error with scipy method
    try:
        optimize.fixed_point(fixed_point_func, x0, method='iteration')
        assert False, "shouldn't get this far"
    except RuntimeError:
        pass  # Expected


if __name__ == '__main__':
    main()
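For contrast with the degenerate example above, the existing scipy.optimize.fixed_point does fine when the map is contractive near its fixed point, so iteration converges. A quick sanity check using only scipy's documented API (cos on the reals, whose fixed point is the Dottie number, roughly 0.739085):

```python
import numpy as np
from scipy import optimize

# cos is contractive near its fixed point, so scipy's accelerated
# iteration (Steffensen's method, the 'del2' default) converges.
fp = optimize.fixed_point(np.cos, 1.0)
print(fp)  # ~0.739085
```

The distinction the proposal rests on is exactly this: the scipy routine needs iteration to converge, while the simplicial-subdivision search does not.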
participants (1)
- Erik Brinkman