sfepy_00: petsc options: []
sfepy_00: dimensions: [ 1. 1. 1.]
sfepy_00: shape: [11 11 11]
sfepy_00: centre: [ 0. 0. 0.]
sfepy_00: generating 1331 vertices...
sfepy_00: ...done
sfepy_00: generating 1000 cells...
sfepy_00: ...done
sfepy_00: field u order: 1
sfepy_00: field p order: 1
sfepy_00: rank 0 of 1
sfepy_00: reading mesh [user] (output-parallel/para.h5)...
sfepy_00: ...done in 0.00 s
sfepy_00: partitioning mesh into 1 subdomains...
sfepy_00: cell counts: [1000]
sfepy_00: ...done in 0.0
sfepy_00: distributing fields...
sfepy_00: numbers of cells in tasks (without overlaps): [1000]
sfepy_00: numbers of cells in tasks (without overlaps): [1000]
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR:       INSTEAD the line number of the start of the function
[0]PETSC ERROR:       is given.
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
[unset]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0