surface_flux over whole surface doesn't sum to zero
Hi Robert,
I'm new to sfepy and having a good experience so far, but I've been having some issues understanding the d_surface_flux function. A couple of questions that might help:
- In the poisson_neumann example, if I add a post_process function which evaluates various surface fluxes, the results are not what I would expect. Here is my post_process function:
def post_process(out, pb, state, extend=False):
    from sfepy.base.base import Struct

    flx = pb.evaluate('d_surface_flux.2.Gamma(coef.sigma, t)',
                      mode='el_avg')
    print "Flux, entire surface:", np.sum(flx), flx.max(), flx.min()

    snkf = pb.evaluate('d_surface_flux.2.Gamma_Left(coef.sigma, t)',
                       mode='eval')
    print "Sink flux: ", snkf

    srcf = pb.evaluate('d_surface_flux.2.Gamma_Right(coef.sigma, t)',
                       mode='eval')
    print "Source flux: ", srcf

    blkf = pb.evaluate('d_surface_flux.2.Gamma_N(coef.sigma, t)',
                       mode='eval')
    print "Block flux: ", blkf

    return out
I would expect "Flux, entire surface" to be approximately zero, the sink and source fluxes to be approximately equal with opposite signs, and the block flux to be zero. Since Gamma_N is supposed to have a zero Neumann condition on it, my understanding is that flux should only be present over Gamma_Left and Gamma_Right, and that those fluxes should be equal but of opposite sign. What am I missing here?
- If I compute surface_flux using mode='el_avg' and then sum the results, why is that number not equal to surface_flux computed with mode='eval'? What's the difference?
Thanks for your help,
Geoff
Hi Geoff,
On 08/01/2014 01:19 AM, Geoff Wright wrote:
Hi Robert,
I'm new to sfepy and having a good experience so far. But I've been having some issues understanding the d_surface_flux function.. a couple of questions that might help:
- In the poisson_neumann example, if I add a post_process function which evaluates various surface fluxes the results are not what I would expect. Here is my post_process function:
def post_process(out, pb, state, extend=False):
    from sfepy.base.base import Struct

    flx = pb.evaluate('d_surface_flux.2.Gamma(coef.sigma, t)',
                      mode='el_avg')
    print "Flux, entire surface:", np.sum(flx), flx.max(), flx.min()
You should use mode='el' here, see [1] (the message and below). Then the sum equals:
    aa = pb.evaluate('d_surface_flux.2.Gamma(coef.sigma, t)',
                     mode='eval')
    print "total flux eval: ", aa
[1] https://groups.google.com/d/msg/sfepy-devel/xfzjsLSRHvM/3W8Uacy5I9QJ
    snkf = pb.evaluate('d_surface_flux.2.Gamma_Left(coef.sigma, t)',
                       mode='eval')
    print "Sink flux: ", snkf

    srcf = pb.evaluate('d_surface_flux.2.Gamma_Right(coef.sigma, t)',
                       mode='eval')
    print "Source flux: ", srcf

    blkf = pb.evaluate('d_surface_flux.2.Gamma_N(coef.sigma, t)',
                       mode='eval')
    print "Block flux: ", blkf
return out
I would expect that "Flux, entire surface" would be approximately zero, and the sink flux and source flux should be approximately equal but opposite sign, and block flux to be zero. Since Gamma_N is supposed to have a zero neumann condition on it my understanding is that flux would only be present over Gamma_Left and Gamma_Right and that they would be equal but opposite sign. What am I missing here?
When I use the 'el' mode instead of 'el_avg', and remove the dw_surface_integrate term from the equation, I get:
Flux, entire surface: 1.23780929311e-07 0.00362888105245 -0.00337552105702
Sink flux: 0.0971430725276
Source flux: -0.0971353016007
Block flux: -7.64714595901e-06
total flux eval: 1.23780929311e-07
which is what you expected. It seems that you left the flux source term dw_surface_integrate in the equation.
- If I compute surface_flux using mode='el_avg' and then sum the results, why is that number not equal to surface_flux computed with mode='eval'? Whats the difference?
See above :) The 'el_avg' mode divides the resulting integrated value in an element by the volume of that element, which is not what you want to do with a flux - you need just the integral over each element, which is what the 'el' mode does.
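In other words, roughly this relation should hold (a small sketch, using the same term call as above):

    # per-element surface integrals of the flux over Gamma
    flx_el = pb.evaluate('d_surface_flux.2.Gamma(coef.sigma, t)', mode='el')
    # a single scalar: the total flux over Gamma
    total = pb.evaluate('d_surface_flux.2.Gamma(coef.sigma, t)', mode='eval')
    # summing the 'el' values recovers the 'eval' value (up to round-off)
    assert abs(np.sum(flx_el) - total) < 1e-12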
r.
Hi Robert,
Thanks for the fast response! The poisson_neumann example is making sense to me now, but when I take a step closer to my real problem, I'm still having some issues with the flux. I attached the mesh, the problem definition and a screenshot of the result.
The intention was to have a sphere inside another sphere, with the source defined as a region on the inner sphere and the sink defined as a region on the outer sphere. All other surfaces are supposed to have zero flux. However, if you look at the flux results, computed similarly to before, I get the following:
Source flux: 0.978
Sink flux: -1.592
Block flux: -1.592
Entire surf: -1.608
Whereas I expected the source and sink fluxes to be equal and opposite, and the block and entire surface fluxes to be zero.
Any thoughts will be much appreciated. Thanks!
Geoff
On 08/01/2014 03:45 PM, Geoff Wright wrote:
Hi Robert,
Thanks for the fast response! The poisson_neuman example is making sense to me now, but when I take a step closer to my real problem, I'm still having some issues with the flux. I attached the mesh, problem definition and a screenshot of the result.
The intention was to have a sphere inside another sphere, with the source defined as a region on the inner sphere, and the sink defined as a region on the outer sphere. All other surfaces are supposed to have zero flux. However, if you look at the flux results, computed similar to before, I get the following:
Source flux: 0.978 Sink flux: -1.592 Block flux: -1.592 Entire Surf: -1.608
I am getting:
Sink flux: -0.994294409461
Source flux: 0.977718087637
Block flux: -1.59176490161
Entire surf: -1.60834122343
Whereas I expected source and sink to be equal and opposite, block and entire surf to be zero.
so Source and Sink are pretty close and opposite, but the other fluxes are not zero. What happens if you refine the mesh so that the element size is uniform?
Btw. for field approx. order 2 (integral order 8):
Sink flux: -1.42052213723
Source flux: 1.60881848323
Block flux: 0.0581843449633
Entire surf: 0.24648069096
and approx. order 3:
Sink flux: -1.68822440062
Source flux: 1.86564560107
Block flux: -0.143661701834
Entire surf: 0.0337594986197
-> there is something fishy.
I tried to check the regions and they seem ok - make sure that there are no holes between sink + source and block - maybe check the surface areas using the d_surface term, for example as sketched below.
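A minimal sketch of such a check in a post_process() (region and variable names follow the poisson_neumann example - adapt them to your file; I assume the unknown can be passed as the term's parameter):

    for name in ['Gamma_Left', 'Gamma_Right', 'Gamma_N', 'Gamma']:
        area = pb.evaluate('d_surface.2.%s(t)' % name, mode='eval')
        print name, 'surface area:', area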
Then I would check the topological consistency of the mesh etc., but I assume you made it with gmsh, right? So it should be ok.
That's all I can think of now. Let me know how it goes.
r.
Hi Robert,
Yes, I generated the domain using gmsh. I have tried a uniform cell size and the numbers are different but still off. I checked the surface areas of the inner and outer spheres; they come out ~3% below the theoretical sphere surface area, but I'm assuming this is due to the mesh approximation. As far as I can tell the meshes and regions are okay.
Please let me know if you have any ideas.. I attached the .geo file that I used to generate the meshes, in case it is useful.
Thanks,
Geoff
On 08/01/2014 05:26 PM, Geoff Wright wrote:
Hi Robert,
Yes, I generated the domain using gmsh. I have tried uniform cell size and the numbers are different but still off. I checked the surface areas of the inner and outer spheres, they come out ~3% below the theoretical sphere surface area but i'm assuming this is due to the mesh approximation. As far as I can tell the meshes and regions are okay.
Please let me know if you have any ideas.. I attached the .geo file that I used to generate the meshes, in case it is useful.
Maybe try a cube_inside_cube mesh aligned with the axes, so that the normals, surface areas, etc. can be checked easily and meshing artefacts are minimized. Do you have a setup where an analytical solution is known?
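For instance, for a unit cube with fixed potentials on two opposite faces, zero flux elsewhere and a constant conductivity, the expected flux through either Dirichlet face is easy to write down (a sketch with assumed values, not your actual setup):

    # 1D-like reference solution for a unit cube:
    # flux density j = sigma * (V1 - V0) / L, total face flux = j * A
    sigma = 1.0        # conductivity
    V0, V1 = 0.0, 1.0  # potentials on the two Dirichlet faces
    L = 1.0            # distance between those faces
    A = 1.0            # face area
    print "expected face flux:", sigma * (V1 - V0) / L * A  # -> 1.0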
r.
Hi Robert,
I didn't try the cube idea yet, but I tried simplifying to a single sphere and adding a source on one side and a sink on the other. The flux is still not making sense. See the scripts attached. The flux output is:
Source flux: 2.50129488048
Sink flux: -1.46351297349
Block flux: -0.929787940721
Entire surf: -1.4574793787
I also noticed that if I run it several times, sometimes the source or sink flux shows up as 'nan', which is worrying. Is it possible that the surface_flux calculation is not reliable in the case of a curved surface such as a sphere? The general distribution looks correct.
Geoff
On 08/01/2014 09:25 PM, Geoff Wright wrote:
Hi Robert,
I didn't try the cube idea yet, but I tried simplifying to a single sphere and adding a source one one side and a sink on the other. Flux still not making sense. See scripts attached. Flux output is:
Source flux: 2.50129488048 Sink flux: -1.46351297349 Block flux: -0.929787940721 Entire surf: -1.4574793787
Also noticed that if I run it several times, sometimes the source or sink flux will show up as 'nan', which is worrying. Is it possible that the surface_flux calculation is not reliable in the case of a curved surface such as a sphere? The general distribution looks correct..
This is strange, nans should not be there. I did not get them in my runs.
The surface flux term is sort of dangerous by definition, as you need to compute a gradient on a surface - that requires more regularity in general than the weak solution has. But it behaved ok in my examples, including the poisson_neumann.py, that has a curved boundary as well. It's true that the nonzero flux regions are flat there, and the gradient is constant in planes perpendicular to the long axis - this may have a role too.
Try to edit the site_cfg.py:
debug_flags = '-DDEBUG_FMF'
and also remove optimization in compile_flags. Then rebuild the sources.
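A sketch of what that could look like in site_cfg.py (the exact default compile_flags on your system may differ):

    # site_cfg.py
    debug_flags = '-DDEBUG_FMF'  # enable the extra checks in the C data structures
    compile_flags = '-g -O0'     # no optimization, keep debug symbols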
Another thing that is strange is that I get different flux values from you (but equally wrong):
Source flux: 2.09499463797
Sink flux: -1.44302331431
Block flux: -0.68702181322
Entire surf: -0.0350504895656
What is your platform and sfepy version? Also, does the linear system solution converge well?
So: there might be a bug. Trying that on a rectangular shape might shed some light on the problem.
r.
Hi Robert,
I will try this tomorrow and let you know. My platform is 64bit Ubuntu, I cloned the repo on 7/22 so I'm at revision 433a3bc2a6a84d93145b04ee7d3debb493de0e50.
The linear system converges nicely and the voltage values look nice and smooth, so I don't think there is anything grossly wrong. I think there's some issue with how the flux is being integrated that is not exposed by the flat cylinder example.
I tried a similar test with a simple cube on Friday, and the results were correct when I had the source and sink on opposing faces of the cube. However, once I included more of the orthogonal faces in the source, I saw the issue again. I'll post the mesh and the python script for this tomorrow.
One more thing I wanted to try is to look at the flux values across the whole surface under some of these scenarios. I understand that I can pull out the surface values using mode='el' and save them in the vtk output file. But I'm not sure how to map them back to the appropriate vertex positions in order to visualize the flux. Can you suggest the best way of doing this? I'll post a screenshot once I have it.
Geoff
On 08/03/2014 05:44 PM, Geoff Wright wrote:
Hi Robert,
I will try this tomorrow and let you know. My platform is 64bit Ubuntu, I cloned the repo on 7/22 so I'm at revision 433a3bc2a6a84d93145b04ee7d3debb493de0e50.
The linear system converges nicely and the voltage values look nice and smooth so I don't think there is anything grossly wrong. I think theres some issue with how flux is being integrated that is not exposed with the flat cylinder example.
I tried a similar test with a simple cube on Friday, and the results were correct when I had the source and sink on opposing faces of the cube. However, once I included more of the orthogonal faces in the source, I saw the issue again. I'll post the mesh and the python script for this tomorrow.
One more thing I wanted to try is to look at the flux values across the whole surface under some of these scenarios. I understand that I can pull out the surface values using mode='el', and same them in the vtk output file. But I'm not sure how to map them back to the appropriate vertex positions in order to visualize the flux. Can you suggest the best way of doing this? I'll post a screenshot once I have it.
Something like this?
from sfepy.discrete.fem.utils import extend_cell_data
aa = pb.evaluate('d_surface_flux.2.Gamma_Left(coef.sigma, v)', mode='el')
flux = extend_cell_data(aa, pb.domain, 'Gamma_Left', is_surface=True)
out['flux'] = Struct(name='flux field', mode='cell', data=flux, dofs=None)
It averages the surface data from the faces into the cells of those faces (so it is wrong here for corner cells: the fluxes should be summed, not averaged; it is ok for non-corner faces, i.e. cells that have a single surface face). The cell data can then be saved normally.
Otherwise the regions can be accessed:
    pb.domain.regions
    reg = pb.domain.regions['Gamma_Left']
    print reg
    reg.vertices
    ...
The vertices are indices into the mesh vertices, so to get their coordinates, just use pb.domain.cmesh.coors[reg.vertices] etc.
Is that what you asked for? :)
r.
On 08/04/2014 02:31 PM, Robert Cimrman wrote:
On 08/03/2014 05:44 PM, Geoff Wright wrote:
Hi Robert,
I will try this tomorrow and let you know. My platform is 64bit Ubuntu, I cloned the repo on 7/22 so I'm at revision 433a3bc2a6a84d93145b04ee7d3debb493de0e50.
The linear system converges nicely and the voltage values look nice and smooth so I don't think there is anything grossly wrong. I think theres some issue with how flux is being integrated that is not exposed with the flat cylinder example.
I tried a similar test with a simple cube on Friday, and the results were correct when I had the source and sink on opposing faces of the cube. However, once I included more of the orthogonal faces in the source, I saw the issue again. I'll post the mesh and the python script for this tomorrow.
One more thing I wanted to try is to look at the flux values across the whole surface under some of these scenarios. I understand that I can pull out the surface values using mode='el', and same them in the vtk output file. But I'm not sure how to map them back to the appropriate vertex positions in order to visualize the flux. Can you suggest the best way of doing this? I'll post a screenshot once I have it.
Something like this?
    from sfepy.discrete.fem.utils import extend_cell_data
    aa = pb.evaluate('d_surface_flux.2.Gamma_Left(coef.sigma, v)', mode='el')
    flux = extend_cell_data(aa, pb.domain, 'Gamma_Left', is_surface=True)
FYI: Try using
flux = extend_cell_data(aa, pb.domain, 'Gamma_Left', val=0.0, is_surface=True)
to fill the other cells by zeros instead of min. value of the flux.
out['flux'] = Struct(name='flux field', mode='cell', data=flux, dofs=None)
Hi Robert,
When I try the extend_cell_data on the surface of a cube, I get an error: "ValueError: The weights and list don't have the same length." I attached the file. It uses a mesh that is provided with sfepy - can you take a look?
This is another example where the flux doesn't make sense. If I use only vertices on one face as the source then the flux values are good. But if you run the file as-is, the source extends a bit into the adjacent faces, and the flux results are incorrect. The electric field values look okay, and the voltage field looks fine. My results are as follows:
Source flux: 1.09348063065
Sink flux: -0.90821822135
Block flux: -0.040501111711
Entire surf: 0.298397270469 0.0181174920822 -0.0124582155367
Source surf area: 3.05555373528
Sink surf area: 1.0
I'm pretty sure there is an issue with the surface_flux function. Is this something you will be able to look at? I'm trying to decide whether to continue investing in this or switch to another tool/software.
Thanks,
Geoff
On 08/04/2014 05:21 PM, Geoff Wright wrote:
Hi Robert,
When I try the extend_cell_data on the surface of a cube, I get an error: "ValueError: The weights and list don't have the same length." I attached the file. It uses a mesh that is provided with sfepy.. can you take a look?
This is another example where the flux doesn't make sense. If I use only vertices on one face as the source then the flux values are good. But if you run the file as-is, the source extends a bit into the adjacent faces, and the flux results are incorrect. The electric field values look okay, and the voltage field looks fine. My results are as follows:
Source flux: 1.09348063065 Sink flux: -0.90821822135 Block flux: -0.040501111711 Entire surf: 0.298397270469 0.0181174920822 -0.0124582155367 Source surf area: 3.05555373528 Sink surf area: 1.0
I'm pretty sure there is an issue with the surface_flux function. Is this something you will be able to look at? I'm trying to decide whether to continue investing in this or switch to another tool/software.
I am actually up to something, stay tuned. (And the surface_flux function is innocent - it comes down to region definitions and gradient computation accuracy).
r.
Okay sounds good. Keep me posted!
Thank you,
Geoff
Hi Geoff,
the explanation, AFAIK, is below. You will need the latest git sources to try it (see the attachment), as several things were fixed or updated.
On 08/04/2014 09:11 PM, Geoff Wright wrote:
Okay sounds good. Keep me posted!
Thank you,
Geoff
On Monday, August 4, 2014 2:21:20 PM UTC-4, Robert Cimrman wrote:
On 08/04/2014 05:21 PM, Geoff Wright wrote:
Hi Robert,
When I try the extend_cell_data on the surface of a cube, I get an error: "ValueError: The weights and list don't have the same length." I attached the file. It uses a mesh that is provided with sfepy.. can you take a look?
I have fixed extend_cell_data() to work, see e46188f541f9bbe.
This is another example where the flux doesn't make sense. If I use only vertices on one face as the source then the flux values are good. But if you run the file as-is, the source extends a bit into the adjacent faces, and the flux results are incorrect. The electric field values look okay, and the voltage field looks fine. My results are as follows:
Source flux: 1.09348063065 Sink flux: -0.90821822135 Block flux: -0.040501111711 Entire surf: 0.298397270469 0.0181174920822 -0.0124582155367 Source surf area: 3.05555373528 Sink surf area: 1.0
In [1]: 1.09348063065 - 0.90821822135 - 0.040501111711
Out[1]: 0.14476129758900003
which is not 0.298397270469 - this is because the Gamma_Left + Gamma_Right + Blocked < EntireSurf - some faces are missing there.
To fix that, use the 'f' operators instead of 'v':
'Gamma_Left' : ('r.EntireSurf *f vertices in (x < -0.3)', 'facet'),
'Gamma_Right' : ('r.EntireSurf *f vertices in (x > 0.4999)', 'facet'),
'Blocked' : ('r.EntireSurf -f (r.Gamma_Left +f r.Gamma_Right)', 'facet'),
This requires the ce7a6a73166e5d572 commit - there was a bug (or missing feature) that made the 'r.EntireSurf *f vertices in (x < -0.3)' kind of selection fail.
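To double-check that the three facet regions now cover the whole surface, something like this quick count should work (a sketch; I assume the current git version, where regions have a facets attribute):

    regs = pb.domain.regions
    n_parts = sum(len(regs[name].facets)
                  for name in ['Gamma_Left', 'Gamma_Right', 'Blocked'])
    print len(regs['EntireSurf'].facets), '==', n_parts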
I'm pretty sure there is an issue with the surface_flux function. Is this something you will be able to look at? I'm trying to decide whether to continue investing in this or switch to another tool/software.
I am actually up to something, stay tuned. (And the surface_flux function is innocent - it comes down to region definitions and gradient computation accuracy).
After the above changes, the sum of the partial fluxes equals the entire surface flux. However, the FEM is not locally conservative, so there is an inherent error in the flux computation. I modified the file to also save the partial fluxes in the surface region parts - you may notice a relatively high error (nonzero values) in the Blocked region near the Dirichlet boundary, where the gradients are high. To minimize that, you can:
- use a finer mesh (at least near the Dirichlet boundary)
- use a higher order approximation
Both will soon make the direct solver unusable. Luckily, the Laplacian can be easily resolved by conjugate gradients, see the commented out solver_0 (PETSc is required for that); a sketch of such a configuration is below.
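Roughly along these lines (a sketch only - the option values are my assumptions, not the exact commented-out block):

    solver_0 = {
        'name' : 'ls',
        'kind' : 'ls.petsc',
        'method' : 'cg',    # conjugate gradients
        'precond' : 'icc',  # incomplete Cholesky preconditioning
        'eps_r' : 1e-12,    # relative tolerance
        'i_max' : 5000,     # maximum number of iterations
    }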
The attached file uses approx_order 3 and gives:
Sum of partial fluxes == total: 0.0378874815651 == 0.0378874815651 2.42833097736e-18
Source flux: 1.05847824223 1.05847824223
Sink flux: -0.884664882961 -0.884664882961
Block flux: -0.135925877701 -0.135925877701
Entire surf: 0.0378874815651 0.0378874815651 0.0175191418749 -0.0162855737778
Source surf area: 1.5549799671
Sink surf area: 1.0
Block surf area: 3.4450200329
Entire surf area: 6.0 == 6.0
which is better than it was. Still, the mesh is too coarse.
If the above is not sufficient, another method, that is conservative, would be needed, for example the discontinuous Galerkin (DG) method. It is not implemented in SfePy yet, so I have talked a bit with some colleagues about a possible implementation. You may need to look for a DG or finite volume solver in the meantime.
Bottom line: the special case that "works" has the gradients aligned in the x-axis direction (it is essentially a 1D problem), so the error in surface gradient computation is not present.
That's all, let me know if it makes sense.
r.
Hi Robert,
Thanks for looking into it. Your response makes sense; I do see higher flux near the Dirichlet boundary. I've been experimenting with denser meshes but haven't managed to get the flux error down to a satisfactory level yet.
I noticed that gmsh has the ability to generate a 2nd order mesh, which I thought was interesting as it might reduce the gradient errors. However, I haven't had any success importing this mesh into sfepy. Do you know if it is possible? I've been generating it in gmsh by clicking on "Set order 2", then exporting it from gmsh in .mesh format, but when I load it sfepy spits out errors about 'skipping unknown entity'. I'm attaching the 2nd order mesh to this post.
Note: this geo file is a bit different to what we were looking at earlier. I'm now defining the sink and source to both be well inside the overall domain, the source is a sphere and the sink is a cylinder. Both are much smaller than the outer volume. I've found this reduces the flux error to around 20%.
Thanks for your help,
Geoff
On 08/05/2014 10:18 PM, Geoff Wright wrote:
Hi Robert,
Thanks for looking into it. Your response makes sense, I do see higher flux near the dirchlet boundary. I've been experimenting with denser meshes but haven't managed to get the flux error down to a satisfactory level yet.
I noticed that gmsh has the ability to generate a 2nd order mesh, which I thought was interesting as it might reduce the gradient errors. However I haven't had any success importing this mesh into sfepy. Do you know if it is possible? I've been generating it in gmsh by clicking on "Set order 2", then export it from gmsh as .mesh format, but then when I load it sfepy spits out errors about 'skipping unknown entity'. I'm attaching the 2nd order mesh to this post.
The mesh order 2 means that 10-node tetrahedra are used. Those are not currently supported. However, using the order 2 mesh is essentially the same as using the order 1 mesh with field approximation order 2 - the only difference is that the sphere surface in the former case is more "spherical". SfePy generates the extra nodes for order 2 automatically, but they lie on the flat tetrahedral sides, as the code does not know that the geometry is actually a sphere. It should be the same inside the sphere (IMHO).
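So with the order 1 mesh, the quadratic approximation is just selected in the field definition; a sketch (the field and region names are assumptions - adapt them to your file):

    fields = {
        'voltage' : ('real', 1, 'Omega', 2),  # the last entry is approx_order
    }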
Note: this geo file is a bit different to what we were looking at earlier. I'm now defining the sink and source to both be well inside the overall domain, the source is a sphere and the sink is a cylinder. Both are much smaller than the outer volume. I've found this reduces the flux error to around 20%.
Could you also send the region definitions, so that I could run a simulation with the mesh?
r.
Hi Robert,
I'm attaching the files. Also attaching a screenshot from paraview of the voltage solution.
Two questions:
- I know you were suggesting I use a higher order for the voltage field, but I haven't found that it makes much difference. For example, with approx_order==1 the results are:
Source flux: 5.70041929669
Sink flux: -3.1885130653
Block flux: -0.222003267731
Entire surf: 2.28990296366
and with approx_order==2 the results are:
Source flux: 6.47939779365
Sink flux: -3.90908956187
Block flux: 0.0399662212412
Entire surf: 2.61027445302
It's true that the amount of leakage in the block region has decreased, but the proportion of current that is observed at the sink relative to the source is 56% in the first case and 60% in the second case... not much difference.
- I've been thinking more about your explanation for the flux error and I don't get it. If the solver is properly enforcing the Neumann boundary condition of zero flux on the surface, then it should not reach a solution with a residual approximately equal to zero. I.e. the nonzero flux on the surface should contribute to the residual and the system would not converge. However, when I look at the residual I see something like 1e-10. Can you explain this? I'm wondering whether we just need another term in the equations that explicitly enforces the Neumann boundary condition in the "block" region. What do you think?
Thank you,
Geoff
Hi Geoff,
On 08/06/2014 09:58 PM, Geoff Wright wrote:
Hi Robert,
I'm attaching the files. Also attaching a screenshot from paraview of the voltage solution.
Two questions:
- I know you were suggesting I use a higher order for the voltage field. But I haven't found that it makes much difference. For example, with approx_order==1 results are: Source flux: 5.70041929669 Sink flux: -3.1885130653 Block flux: -0.222003267731 Entire surf: 2.28990296366
My results with approx_order==1:
Source flux: 0.113939919911
Sink flux: -0.094333017518
Block flux: 1.73879971396e-05
Entire surf: 0.0196242903901
Much better, right? :) That's because I used the same resistivity in the Laplace equation and in the flux evaluation. You were using the getResistivity() output in the equation and
m = Material('coef', sigma=nm.eye(3))
in flux evaluation. So use
out['val'] = mult
out['sigma'] = mult * np.eye(3)
in getResistivity() and
snkf = pb.evaluate('d_surface_flux.i2.Gamma_Left(p.sigma, v)', mode='eval')
instead of
snkf = pb.evaluate('d_surface_flux.i1.Gamma_Left(coef.sigma, v)', mode='eval', coef=m)
etc. in post_process().
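For reference, a minimal sketch of a getResistivity() that returns both keys (the constant multiplier and the array shapes here are my assumptions, not your actual file):

    import numpy as np

    def getResistivity(ts, coors, mode=None, **kwargs):
        if mode != 'qp':
            return
        nqp = coors.shape[0]
        mult = np.ones((nqp, 1, 1), dtype=np.float64)  # placeholder coefficient
        out = {
            'val' : mult,
            # the same coefficient expanded to 3x3, as used by d_surface_flux
            'sigma' : mult * np.eye(3)[None, ...],
        }
        return out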
and with approx_order==2, results are: Source flux: 6.47939779365 Sink flux: -3.90908956187 Block flux: 0.0399662212412 Entire surf: 2.61027445302
I will try that tomorrow on a bigger machine...
Its true that the amount of leakage in the block region has decreased, but the proportion of current that is observed at the Sink relative to the source is 56% in the first case and 60% in the second case... not much difference.
- I've been thinking more about your explanation for the flux error and I don't get it. If the solver is properly enforcing the Neuman boundary condition of zero flux on the surface then it should not reach a solution with residual approx equal to zero. I.e. the non zero flux on the surface should contribute to the residual and the system would not converge. However when I look at the residual I see something like e-10. Can you explain this? I'm wondering whether we just need another term in the equations that explicitly enforces the Neuman boundary condition in the "block" region. What do you think?
The Neumann boundary condition can be enforced by adding a term to the equation. When it is zero, the term is zero, so it disappears from the equation. Maybe you can try adding a dw_surface_flux(...) = 0 equation via a Lagrange multiplier, but I do not see how it could be better.
Everything is in the weak sense with respect to the test function space. And the gradient computation on surface is a sort of extrapolation from the inside of the boundary element - it is influenced by the inside of the domain (the normal component of the gradient) -> the smaller the boundary elements the better. That is all explanation I can come up with now.
r.
Hi Robert,
Thanks for the response - and good catch! The first order results are a lot better now with the bugfix in the material, but I'm still losing about 20% of the flux using first order. I tried running it with approx_order=2 and using the petsc solver, but it always converges to this trivial/invalid solution:
Source flux: 4.69775273941
Sink flux: -5.7976979887e-25
Block flux: 8.34388481281e-06
Entire surf: 4.6977610833
Sum of partial fluxes == total: 4.6977610833 == 4.6977610833
I'm hopeful that if we could get the second order field working then the errors would be small enough to be tolerable. Any ideas how to make it converge to the right solution? I attached the definitions file and two screenshots showing the 1st order and the invalid 2nd order results. It's the same mesh as in my previous post.
Thank you,
Geoff
On 08/07/2014 01:48 AM, Geoff Wright wrote:
Hi Robert,
Thanks for the response -- and good catch! The first order results are a lot better now with the bugfix in the material. But still losing about 20% of flux using first order. I tried running it with approx_order=2, and using the petsc solver, but it always converges to this trivial/invalid solution:
Source flux: 4.69775273941 Sink flux: -5.7976979887e-25 Block flux: 8.34388481281e-06 Entire surf: 4.6977610833 Sum of partial fluxes == total: 4.6977610833 == 4.6977610833
I could get the petsc solvers converge either. But I had some success with pyamg and finer geometry (with smaller domain and coarser outer sphere- probably not needed):
    lc = 0.015;
    ...

    // Define the outer sphere
    lc = 10.0;
    R = 20;

    solver_0 = {
        'name' : 'ls',
        'kind' : 'ls.pyamg',
        'method' : 'smoothed_aggregation_solver',
        'accel' : 'gmres',
        'eps_r' : 1e-18,  # rtol - not met anyway
    }
results for approx_order = 1 in:
(two years old pyamg)
Source flux: 0.129920699311
Sink flux: -0.120377402218
Block flux: -0.0479851466075
Entire surf: -0.0384418495146

(current pyamg from github)
Source flux: 0.12990612732
Sink flux: -0.120381187623
Block flux: -0.0480000264776
Entire surf: -0.0384750867815
although it reduces the residual by only two orders of magnitude. I guess it's enough.
I'm hopeful that if we could get the second order field working then the errors would be small enough to be tolerable. Any ideas on how to make it converge to the right solution? I attached the definitions file and two screenshots showing the 1st order and invalid 2nd order results. It's the same mesh as my previous post.
Finding an iterative solver + preconditioner that works for a problem is often difficult. Even pyamg does not converge for the second order field. Also note that I had to use gmres as an accelerator, because cg failed after a while saying that the matrix is indefinite - there is some instability, as this should not happen for a Laplacian. Maybe some scaling of the matrix is needed. I am not an expert in this.
There are multigrid solvers in PETSc too but I never tried them - not sure if the sfepy interface is flexible enough to use them.
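If you want to experiment, a hypothetical configuration might look like this (assuming ls.petsc passes 'method' and 'precond' through to PETSc's KSP/PC types - I have not verified that the algebraic multigrid preconditioner can be selected this way):

solver_0 = {
    'name' : 'ls',
    'kind' : 'ls.petsc',
    'method' : 'cg',    # or 'gmres', as above
    'precond' : 'gamg', # PETSc algebraic multigrid
    'eps_r' : 1e-12,
    'i_max' : 1000,
}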
r.
On 08/07/2014 01:59 PM, Robert Cimrman wrote:
On 08/07/2014 01:48 AM, Geoff Wright wrote:
Hi Robert,
Thanks for the response -- and good catch! The first order results are a lot better now with the bugfix in the material. But still losing about 20% of flux using first order. I tried running it with approx_order=2, and using the petsc solver, but it always converges to this trivial/invalid solution:
Source flux: 4.69775273941 Sink flux: -5.7976979887e-25 Block flux: 8.34388481281e-06 Entire surf: 4.6977610833 Sum of partial fluxes == total: 4.6977610833 == 4.6977610833
I could get the petsc solvers converge either. But I had some success with
...could not...
Hi Robert,
I'm getting pretty good results now, even on my final model which is a bit more complex than what I've been showing you. In addition to the points you mentioned, which were very helpful, I also found that running the 'optimize 3D' option in gmsh does give a significant improvement in reducing the magnitude of this error. My hypothesis is that before running the mesh optimizer there may be a lot of very small cells which might have wild gradients. The 3D optimizer, I assume, does some sort of regularization, removing the small cells. After combining all these effects the loss in flux at the sink is approx 1%, which is acceptable for my purposes.
Overall factors which helped:
- increasing mesh density around the areas of interest
- choosing the right solver
- using gmsh 3D optimizer to eliminate small cells
- fixing various bugs which Robert found (thanks!)
G
Hi Geoff,
On 08/07/2014 10:05 PM, Geoff Wright wrote:
Hi Robert,
I'm getting pretty good results now, even on my final model which is a bit more complex than what I've been showing you. In addition to the points you mentioned, which were very helpful, I also found that running the 'optimize 3D' option in gmsh does give a significant improvement in reducing the magnitude of this error. My hypothesis is that before running the mesh optimizer there may be a lot of very small cells which might have wild gradients. The 3D optimizer, I assume, does some sort of regularization, removing the small cells. After combining all these effects the loss in flux at the sink is approx 1%, which is acceptable for my purposes.
Glad to hear that! If it's not secret, I would like to see some nice figures :)
Overall factors which helped:
- increasing mesh density around the areas of interest
- choosing the right solver
You are using pyamg, right?
- using gmsh 3D optimizer to eliminate small cells
Does this have to be run from the GUI, or is there a command-line switch as well? I tried:
gmsh -3 sphere_disc_inside_sphere.geo -format mesh -optimize
r.
- fixing various bugs which Robert found (thanks!)
Hi Robert,
On Friday, August 8, 2014 4:33:03 AM UTC-4, Robert Cimrman wrote:
Hi Geoff,
On 08/07/2014 10:05 PM, Geoff Wright wrote:
Hi Robert,
I'm getting pretty good results now, even on my final model which is a bit more complex than what I've been showing you. In addition to the points you mentioned, which were very helpful, I also found that running the 'optimize 3D' option in gmsh does give a significant improvement in reducing the magnitude of this error. My hypothesis is that before running the mesh optimizer there may be a lot of very small cells which might have wild gradients. The 3D optimizer, I assume, does some sort of regularization, removing the small cells. After combining all these effects the loss in flux at the sink is approx 1%, which is acceptable for my purposes.
Glad to hear that! If it's not secret, I would like to see some nice figures :)
Unfortunately the full model is proprietary right now. Once it becomes public I will try and remember to come back here and add the figures!
Overall factors which helped:
- increasing mesh density around the areas of interest
- choosing the right solver
You are using pyamg, right?
Yes, I'm using pyamg as you described earlier in this thread
- using gmsh 3D optimizer to eliminate small cells
Does this have to be run from the GUI, or is there a command-line switch as well? I tried:
gmsh -3 sphere_disc_inside_sphere.geo -format mesh -optimize
Yes, I think that is equivalent to what I did in the GUI. There's also the optimize_netgen option, which I haven't experimented with.
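If it helps anyone later, the Netgen optimizer should be reachable from the command line in the same way (assuming gmsh was built with Netgen support - untested):

gmsh -3 sphere_disc_inside_sphere.geo -format mesh -optimize_netgen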
r.
- fixing various bugs which Robert found (thanks!)
On 08/08/2014 04:31 PM, Geoff Wright wrote:
Hi Robert,
On Friday, August 8, 2014 4:33:03 AM UTC-4, Robert Cimrman wrote:
Hi Geoff,
On 08/07/2014 10:05 PM, Geoff Wright wrote:
Hi Robert,
I'm getting pretty good results now, even on my final model which is a bit more complex than what I've been showing you. In addition to the points you mentioned, which were very helpful, I also found that running the 'optimize 3D' option in gmsh does give a significant improvement in reducing the magnitude of this error. My hypothesis is that before running the mesh optimizer there may be a lot of very small cells which might have wild gradients. The 3D optimizer, I assume, does some sort of regularization, removing the small cells. After combining all these effects the loss in flux at the sink is approx 1%, which is acceptable for my purposes.
Glad to hear that! If it's not secret, I would like to see some nice figures :)
Unfortunately the full model is proprietary right now. Once it becomes public I will try and remember to come back here and add the figures!
No problem, just curious :)
Overall factors which helped:
- increasing mesh density around the areas of interest
- choosing the right solver
You are using pyamg, right?
Yes, I'm using pyamg as you described earlier in this thread
FYI: With the optimized mesh the pyamg solver converges even for approx_order = 2 (while for 3 it does not). You can safely reduce the integral order to be 2 * approx_order, to speed up assembling.
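As a sketch of what I mean (the field, variable and material names are just placeholders consistent with the snippets above; here the diffusion term takes the tensor coefficient directly):

approx_order = 2

fields = {
    'voltage' : ('real', 1, 'Omega', approx_order),
}

equations = {
    # quadrature order 4 == 2 * approx_order is enough here
    'Laplace' : 'dw_diffusion.4.Omega(p.sigma, s, v) = 0',
}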
- using gmsh 3D optimizer to eliminate small cells
Does this have to be run from the GUI, or is there a command-line switch as well? I tried:
gmsh -3 sphere_disc_inside_sphere.geo -format mesh -optimize
Yes, I think that is equivalent to what I did in the GUI. There's also the optimize_netgen option, which I haven't experimented with.
ok, thanks!
r.
participants (2):
- Geoff Wright
- Robert Cimrman