Hi Vicky,

The random walker algorithm has a fairly complex API because it can use several methods (selected with the 'mode' keyword argument) to solve the linear system at the heart of the algorithm:

- 'bf' (brute force): an LU decomposition of the matrix is computed. This is very costly in memory and is only recommended for small images.
- 'cg' (conjugate gradient): this mode needs much less memory; I suggest you use it to test whether the algorithm gives good results for your problem. However, it is quite slow.
- 'cg_mg' (conjugate gradient with multigrid acceleration): this mode is much faster, but you'll need to install the pyamg module to use it.

I use the 'cg_mg' mode on tomography data (arrays of 500*500*20 float32 numbers, typically).

Hope this helps,
Emmanuelle

On Mon, Dec 31, 2012 at 11:00:42AM -0800, vicky Liau wrote:
Hi everyone,

I use scikit-image to implement image segmentation for vegetation classification. The target imagery is about 1 GB, and I upgraded my laptop memory to 12 GB (Windows 7 64-bit, CPU i7-2860QM) to read in the imagery. However, when I tried to run random walker segmentation, IPython showed memory errors. I also shrank the imagery to 188 MB and ran the same algorithm; IPython still shows the following error:

    RuntimeError                              Traceback (most recent call last)
    C:\Dropbox\open_codes\RS_codes\<ipython-input-80-737c60591471> in <module>()
    ----> 1 labels = random_walker(newarray3, markers, beta=10, mode='bf')

    C:\Python27\lib\site-packages\skimage\segmentation\random_walker_segmentation.pyc in random_walker(data, labels, beta, mode, tol, copy, multichannel, return_full_prob, depth)
        396     if mode == 'bf':
        397         X = _solve_bf(lap_sparse, B,
    --> 398                       return_full_prob=return_full_prob)
        399     # Clean up results
        400     if return_full_prob:

    C:\Python27\lib\site-packages\skimage\segmentation\random_walker_segmentation.pyc in _solve_bf(lap_sparse, B, return_full_prob)
        418     """
        419     lap_sparse = lap_sparse.tocsc()
    --> 420     solver = sparse.linalg.factorized(lap_sparse.astype(np.double))
        421     X = np.array([solver(np.array((-B[i]).todense()).ravel())
        422                   for i in range(len(B))])

    C:\Python27\lib\site-packages\scipy\sparse\linalg\dsolve\linsolve.pyc in factorized(A)
        274
        275         # Make LU decomposition.
    --> 276         umf.numeric( A )
        277
        278         def solve( b ):

    C:\Python27\lib\site-packages\scipy\sparse\linalg\dsolve\umfpack\umfpack.pyc in numeric(self, mtx)
        401
        402         if self._symbolic is None:
    --> 403             self.symbolic( mtx )
        404
        405         indx = self._getIndx( mtx )

    C:\Python27\lib\site-packages\scipy\sparse\linalg\dsolve\umfpack\umfpack.pyc in symbolic(self, mtx)
        384         if status != UMFPACK_OK:
        385             raise RuntimeError('%s failed with %s' % (self.funs.symbolic,
    --> 386                                                       umfStatus[status]))
        387
        388         self.mtx = mtx

    RuntimeError: <function umfpack_di_symbolic at 0x000000015E72F9E8> failed with UMFPACK_ERROR_out_of_memory

What can I do about this? I just want to test whether this algorithm works for my idea. Thanks!
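A minimal sketch of the fix Emmanuelle suggests, i.e. passing mode='cg' instead of the memory-hungry 'bf' mode. The synthetic image, marker positions, and beta value here are illustrative stand-ins, not the original data:

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic test image: a bright square on a noisy background,
# standing in for the (much larger) vegetation imagery.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.1, size=(128, 128))
data[30:60, 30:60] += 1.0

# Markers: 0 = unlabeled, 1 = background seed, 2 = foreground seed.
markers = np.zeros(data.shape, dtype=np.uint8)
markers[5, 5] = 1
markers[45, 45] = 2

# 'cg' needs far less memory than 'bf'; once pyamg is installed,
# mode='cg_mg' gives the same result much faster.
labels = random_walker(data, markers, beta=10, mode='cg')
```

The returned array has the same shape as the input, with every pixel assigned one of the marker labels (here 1 or 2).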
participants (1): Emmanuelle Gouillart