On Jun 6, 2013, at 2:38 AM, Robert Cimrman cimr...@ntc.zcu.cz wrote:
On 06/06/2013 01:09 AM, steve wrote:
Is anybody pursuing parallelization at this point? Any hints about how that might be done? I'd be interested in contributing. I've got some experience with pypar (an openmpi Python wrapper), but I'm not sure how sfepy would need to be modified to make that work.
We agreed with Ankit (the GSoC student) that both devising and implementing the parallelization would be too much for the three or so months GSoC supports, and too risky, so I think nobody is pursuing that right now.
I have only some vague ideas, like that it should be done at a high level, so that the FE assembling code does not see it. So any help, mostly theoretical in the current state, would be appreciated!
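To make the "high level" idea concrete: FE assembly is a sum of independent per-element contributions, so one could partition the elements, run the unchanged serial assembly on each partition, and combine the results. The sketch below (plain scipy, hypothetical helper names, not sfepy code) demonstrates this for a 1D linear-element Laplacian; in an actual parallel run, each partition would be assembled on a separate MPI rank:

```python
import scipy.sparse as sp

def assemble_elements(elems, n_dofs):
    """Assemble 1D linear-FE Laplacian contributions for a subset of elements.

    For a unit-length element, the local stiffness matrix is
    [[1, -1], [-1, 1]]; element e couples global DOFs e and e + 1.
    """
    rows, cols, vals = [], [], []
    for e in elems:
        for i, gi in enumerate((e, e + 1)):
            for j, gj in enumerate((e, e + 1)):
                rows.append(gi)
                cols.append(gj)
                vals.append(1.0 if i == j else -1.0)
    return sp.coo_matrix((vals, (rows, cols)), shape=(n_dofs, n_dofs))

n_el, n_dofs = 8, 9

# Partition the elements into two "subdomains"; the assembly routine is
# reused as-is on each partition, which is the point: it never sees the
# parallel decomposition.
A0 = assemble_elements(range(0, 4), n_dofs)
A1 = assemble_elements(range(4, 8), n_dofs)
A_combined = (A0 + A1).tocsr()

# Serial reference over all elements for comparison.
A_serial = assemble_elements(range(n_el), n_dofs).tocsr()
```

Here `A_combined` equals `A_serial`, since summing the partial COO matrices reproduces the full assembly; the hard parts in practice would be mesh partitioning, ghost DOFs at subdomain interfaces, and the distributed solve.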
So.. from my brief perusal it seems like the easiest approach might be something like that taken in FiPy. It looks like they switch between pysparse, scipy and trilinos depending on the situation and whether they want to use MPI. I have no idea how hard that would be to actually implement. ;-) To what degree does the FE assembling code depend explicitly on scipy?
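One way such backend switching is commonly done (a sketch only, loosely modeled on the FiPy behaviour described below; the function name and the preference order are my own, and "PyTrilinos" is the import name I believe Trilinos' Python bindings use) is to probe for each solver package at import time and fall back gracefully:

```python
import importlib

def pick_solver_backend(preferred=("pysparse", "trilinos", "scipy")):
    """Return the first available sparse solver backend, in preference order."""
    module_names = {
        "pysparse": "pysparse",
        "trilinos": "PyTrilinos",
        "scipy": "scipy.sparse.linalg",
    }
    for name in preferred:
        try:
            importlib.import_module(module_names[name])
            return name
        except ImportError:
            continue
    raise RuntimeError("no sparse linear solver backend available")
```

With such a scheme, the assembly code only ever produces a matrix; the choice of pysparse for serial speed versus Trilinos for MPI runs is made once, behind this kind of dispatch.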
this is from the FiPy "solvers" page: http://www.ctcms.nist.gov/fipy/documentation/SOLVERS.html
FiPy requires either PySparse, SciPy or Trilinos to be installed in order to solve linear systems. From our experiences, FiPy runs most efficiently in serial when PySparse is the linear solver. Trilinos is the most complete of the three solvers due to its numerous preconditioning and solver capabilities, and it also allows FiPy to run in parallel.