Hi Robert,
Thanks for the response -- and good catch! The first order results are a lot better now with the bugfix in the material. But I'm still losing about 20% of the flux with first order. I tried running it with approx_order=2 and the PETSc solver, but it always converges to this trivial/invalid solution:
Source flux: 4.69775273941
Sink flux: -5.7976979887e-25
Block flux: 8.34388481281e-06
Entire surf: 4.6977610833
Sum of partial fluxes == total: 4.6977610833 == 4.6977610833
I'm hopeful that if we can get the second-order field working, the errors will be small enough to be tolerable. Any ideas how to make it converge to the right solution? I attached the definitions file and two screenshots showing the first-order and the invalid second-order results. It's the same mesh as in my previous post.
Thank you,
Geoff
On Wednesday, August 6, 2014 6:55:03 PM UTC-4, Robert Cimrman wrote:
Hi Geoff,
On 08/06/2014 09:58 PM, Geoff Wright wrote:
Hi Robert,
I'm attaching the files. Also attaching a screenshot from paraview of the voltage solution.
Two questions:
- I know you were suggesting I use a higher order for the voltage field, but I haven't found that it makes much difference. For example, with approx_order==1 the results are:
Source flux: 5.70041929669
Sink flux: -3.1885130653
Block flux: -0.222003267731
Entire surf: 2.28990296366
My results with approx_order==1:
Source flux: 0.113939919911
Sink flux: -0.094333017518
Block flux: 1.73879971396e-05
Entire surf: 0.0196242903901
Much better, right? :) That's because I used the same resistivity in the Laplace equation and in the flux evaluation. You were using the getResistivity() output in the equation and
m = Material('coef', sigma=nm.eye(3))
in flux evaluation. So use
out['sigma'] = mult * nm.eye(3)
in getResistivity() and
snkf = pb.evaluate('d_surface_flux.i2.Gamma_Left(p.sigma, v)', mode='eval')
instead of
snkf = pb.evaluate('d_surface_flux.i1.Gamma_Left(coef.sigma, v)', mode='eval', coef=m)
etc in post_process(),
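To make the two pieces consistent, the material function and the flux evaluation could look like the sketch below. This is a fragment of the problem definitions file, not runnable on its own; the signature follows the usual SfePy material-function convention (mode=='qp' returning per-quadrature-point arrays), and `mult` is assumed to be the scalar conductivity defined elsewhere in your file:

```python
import numpy as nm

def getResistivity(ts, coors, mode=None, **kwargs):
    # Material function: return the same sigma tensor that the
    # flux evaluation uses. For mode == 'qp', SfePy expects an
    # array of shape (n_qp_total, 3, 3).
    if mode == 'qp':
        nqp = coors.shape[0]
        sigma = mult * nm.tile(nm.eye(3), (nqp, 1, 1))
        return {'sigma': sigma}

# ... and in post_process(), evaluate the flux with the *same*
# material that appears in the equation:
# snkf = pb.evaluate('d_surface_flux.i2.Gamma_Left(p.sigma, v)',
#                    mode='eval')
```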
and with approx_order==2, the results are:
Source flux: 6.47939779365
Sink flux: -3.90908956187
Block flux: 0.0399662212412
Entire surf: 2.61027445302
I will try that tomorrow on a bigger machine...
It's true that the amount of leakage in the block region has decreased, but the proportion of current observed at the sink relative to the source is 56% in the first case and 60% in the second case... not much difference.
- I've been thinking more about your explanation of the flux error and I don't get it. If the solver properly enforced the Neumann boundary condition of zero flux on the surface, then it should not reach a solution with a residual of approximately zero; i.e., the nonzero flux on the surface should contribute to the residual and the system would not converge. However, when I look at the residual I see something like 1e-10. Can you explain this? I'm wondering whether we just need another term in the equations that explicitly enforces the Neumann boundary condition in the "block" region. What do you think?
The Neumann boundary condition can be enforced by adding a term to the equation. When it is zero, the term is zero, so it disappears from the equation. Maybe you can try adding a dw_surface_flux(...) = 0 equation via a Lagrange multiplier, but I do not see how it could be better.
Everything holds in the weak sense with respect to the test function space. And the gradient computation on the surface is a sort of extrapolation from the inside of the boundary element: the normal component of the gradient is influenced by the interior of the domain, so the smaller the boundary elements, the better. That is all the explanation I can come up with now.
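To illustrate that last point with a toy case (plain NumPy, not SfePy; the model problem and function names are my own illustration): a linear FEM solve of -u'' = 1 on (0, 1) with u(0) = u(1) = 0 has a tiny residual, yet the boundary flux recovered from the gradient in the first element is only first-order accurate, and the error shrinks as the boundary elements shrink:

```python
import numpy as np

def solve_poisson_1d(n):
    """Linear FEM for -u'' = 1 on (0, 1) with u(0) = u(1) = 0."""
    h = 1.0 / n
    m = n - 1  # number of interior nodes
    # Standard tridiagonal stiffness matrix (2, -1) / h.
    K = (2.0 * np.eye(m)
         - np.eye(m, k=1) - np.eye(m, k=-1)) / h
    f = h * np.ones(m)  # load vector for f(x) = 1
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, f)
    return u, h

def boundary_flux(u, h):
    # Gradient in the first element, used as the boundary value --
    # an extrapolation from the inside of the domain.
    return (u[1] - u[0]) / h

exact = 0.5  # u'(0) for the exact solution u = x(1 - x)/2
errors = []
for n in (8, 16, 32):
    u, h = solve_poisson_1d(n)
    errors.append(abs(boundary_flux(u, h) - exact))
# The error halves with each mesh refinement: O(h) convergence
# of the recovered boundary flux, even though the discrete
# residual of each solve is at machine-precision level.
```

The weak solve is "converged" in every run; only the postprocessed surface gradient carries the O(h) error, which matches the observation that refining the boundary elements reduces the flux discrepancy.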
r.