On 06/06/2013 05:41 PM, Steve Spicklemire wrote:
On Jun 6, 2013, at 2:38 AM, Robert Cimrman cimr...@ntc.zcu.cz wrote:
On 06/06/2013 01:09 AM, steve wrote:
Is anybody pursuing parallelization at this point? Any hints about how that might be done? I'd be interested in contributing. I've got some experience with pypar (openmpi python wrapper), but not sure how sfepy would need to be modified to make that work.
We agreed with Ankit (the GSoC student) that both devising and implementing the parallelization would be too much for the three or so months GSoC supports, and too risky, so I think nobody is pursuing that right now.
I have only some vague ideas, for example that it should be done at a high level, so that the FE assembling code does not see it. So any help, mostly theoretical at this stage, would be appreciated!
So... from my brief perusal, it seems the easiest approach might be something like the one taken in FiPy. It looks like they switch among PySparse, SciPy and Trilinos depending on the situation and on whether they want to use MPI. I have no idea how hard that would be to actually implement. ;-) To what degree does the FE assembling code depend explicitly on SciPy?
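For what it's worth, the backend switching could look something like the sketch below. This is purely illustrative (the function name `select_backend` and the module mapping are my own, not FiPy's actual internals): try the preferred packages in order and fall back to SciPy.

```python
import importlib

def select_backend(preferred=("pysparse", "trilinos", "scipy")):
    # Hypothetical FiPy-style backend selection: return the first backend
    # whose Python package can actually be imported on this machine.
    modules = {"pysparse": "pysparse",
               "trilinos": "PyTrilinos",
               "scipy": "scipy.sparse"}
    for name in preferred:
        try:
            importlib.import_module(modules[name])
            return name
        except ImportError:
            continue
    raise RuntimeError("no sparse linear algebra backend found")

backend = select_backend()
```

On a machine with only SciPy installed, `select_backend()` would return `"scipy"`.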
Well, we can "sort of use" a parallel linear solver right now (see PETScParallelKrylovSolver), but that cannot be efficient unless the matrix is also assembled in parallel (all terms evaluated in their respective parallel subdomains). "Sort of use" means that the matrix is assembled in a single process, and then a parallel solution is launched. That can get us something like a 2.5x speed-up on 4 cores, so it's not that bad, but it cannot get us further...
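To make the "sort of use" split concrete, here is a toy sketch (my own example, not sfepy code) of the two stages: a serial element-by-element assembly of a 1D FE Laplacian, followed by the linear solve. With PETScParallelKrylovSolver, only the second stage would run in parallel; the assembly loop stays sequential, which is why the overall speed-up is bounded.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def assemble_1d_laplacian(n_el):
    # Stage 1: serial FE assembly -- the part that does NOT run in parallel
    # in the "sort of use" setup.
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness matrix
    rows, cols, vals = [], [], []
    for e in range(n_el):  # loop over elements
        dofs = (e, e + 1)
        for i in range(2):
            for j in range(2):
                rows.append(dofs[i])
                cols.append(dofs[j])
                vals.append(ke[i, j])
    # Duplicate (row, col) entries are summed by the COO -> CSR conversion.
    return sp.coo_matrix((vals, (rows, cols)),
                         shape=(n_el + 1, n_el + 1)).tolil()

A = assemble_1d_laplacian(8)
# Apply Dirichlet conditions u(0) = 0, u(1) = 1 by overwriting rows.
A[0, :] = 0.0; A[0, 0] = 1.0
A[-1, :] = 0.0; A[-1, -1] = 1.0
b = np.zeros(9); b[-1] = 1.0

# Stage 2: the linear solve -- the only part handed to the parallel
# Krylov solver in the current setup (spsolve stands in for it here).
x = spsolve(A.tocsr(), b)
```

The solution of this toy problem is the linear ramp from 0 to 1 over the nine nodes.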
Does FiPy have "parallel assembling"? I know it uses finite volumes, but that is essentially the same as finite elements in this respect: one has a grid/mesh that needs to be distributed among processors.
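The mesh-distribution step could be sketched as follows. This is a naive block partition of the cell indices (a stand-in for a real graph partitioner such as METIS), just to illustrate each process getting its own subdomain of elements to assemble:

```python
import numpy as np

def partition_cells(n_cells, n_ranks):
    # Naive block partition: split the cell index range into n_ranks
    # contiguous chunks, one per MPI rank. A production code would use a
    # graph partitioner (e.g. METIS) to minimize subdomain interfaces.
    return np.array_split(np.arange(n_cells), n_ranks)

parts = partition_cells(10, 3)
# Each rank would then assemble only the rows belonging to its cells.
```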
This is from the FiPy "solvers" page: http://www.ctcms.nist.gov/fipy/documentation/SOLVERS.html
FiPy requires either PySparse, SciPy or Trilinos to be installed in order to solve linear systems. From our experiences, FiPy runs most efficiently in serial when PySparse is the linear solver. Trilinos is the most complete of the three solvers due to its numerous preconditioning and solver capabilities, and it also allows FiPy to run in parallel.