Hi Robert,

I'm attaching the files, along with a screenshot from ParaView of the voltage solution.

Two questions:
1. I know you suggested I use a higher order for the voltage field, but I haven't found that it makes much difference.  For example, with approx_order==1 the results are:
Source flux:  5.70041929669
Sink flux:  -3.1885130653
Block flux:  -0.222003267731
Entire surf:  2.28990296366

and with approx_order==2, results are:
Source flux:  6.47939779365
Sink flux:  -3.90908956187
Block flux:  0.0399662212412
Entire surf:  2.61027445302

It's true that the amount of leakage in the block region has decreased, but the proportion of current observed at the sink relative to the source is 56% in the first case and 60% in the second... not much difference.
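
For reference, the only change between the two runs is the approximation order in the field definition; a minimal sketch (assuming the field is named 'voltage' on the region 'Omega'; the names are illustrative):

    fields = {
        # ('real', n_components, region, approx_order)
        'voltage' : ('real', 1, 'Omega', 2),  # 1 in the first run, 2 in the second
    }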


2. I've been thinking more about your explanation for the flux error and I don't get it.  If the solver is properly enforcing the Neumann boundary condition of zero flux on the surface, then it should not reach a solution with a residual of approximately zero.  That is, the nonzero flux on the surface should contribute to the residual and the system should not converge.  However, when I look at the residual I see something like 1e-10.  Can you explain this?  I'm wondering whether we just need another term in the equations that explicitly enforces the Neumann boundary condition in the "block" region.  What do you think?
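
To spell out my reasoning, the weak form I have in mind (a sketch, with conductivity sigma, potential phi and test function v) is

    \int_\Omega \sigma \nabla\phi \cdot \nabla v \, d\Omega
        = \int_\Gamma \sigma (\nabla\phi \cdot n) \, v \, d\Gamma,

and I would have expected the nonzero flux over the "block" part of the surface to show up on the right-hand side and therefore in the residual.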

Thank you,

Geoff


On Wednesday, August 6, 2014 4:53:44 AM UTC-4, Robert Cimrman wrote:
On 08/05/2014 10:18 PM, Geoff Wright wrote:
> Hi Robert,
>
> Thanks for looking into it.  Your response makes sense, I do see higher
> flux near the Dirichlet boundary.  I've been experimenting with denser
> meshes but haven't managed to get the flux error down to a satisfactory
> level yet.
>
> I noticed that gmsh has the ability to generate a 2nd order mesh, which I
> thought was interesting as it might reduce the gradient errors.  However, I
> haven't had any success importing this mesh into SfePy.  Do you know if it
> is possible?  I've been generating it in gmsh by clicking on "Set order 2",
> then exporting it from gmsh in .mesh format, but when I load it SfePy
> spits out errors about 'skipping unknown entity'.  I'm attaching the 2nd
> order mesh to this post.

Mesh order 2 means that 10-node tetrahedra are used; those are not currently
supported. However, using the order 2 mesh is essentially the same as using the
order 1 mesh with field approximation order 2 - the only difference is that the
sphere surface in the former case is more "spherical". SfePy generates the
extra nodes for order 2 automatically, but they lie on the flat tetrahedral
faces, as the code does not know that the geometry is actually a sphere. It
should be the same inside the sphere (IMHO).
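
A minimal sketch of that combination in the problem description file (assuming
the first-order mesh is saved as 'sphere_order1.mesh' and the field is called
'voltage'; the names are illustrative):

    filename_mesh = 'sphere_order1.mesh'  # 4-node tetrahedra from gmsh

    fields = {
        # the last entry is the field approximation order
        'voltage' : ('real', 1, 'Omega', 2),
    }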

> Note: this geo file is a bit different from what we were looking at earlier.
> I'm now defining the sink and source to both be well inside the overall
> domain; the source is a sphere and the sink is a cylinder.  Both are much
> smaller than the outer volume.  I've found this reduces the flux error to
> around 20%.

Could you also send the region definitions, so that I could run a simulation
with the mesh?

r.

> Thanks for your help,
>
> Geoff
>
>
>
> On Monday, August 4, 2014 5:30:35 PM UTC-4, Robert Cimrman wrote:
>>
>> Hi Geoff,
>>
>> the explanation, AFAIK, is below. You will need the latest git sources to try
>> it (see the attachment), as several things were fixed or updated.
>>
>> On 08/04/2014 09:11 PM, Geoff Wright wrote:
>>> Okay sounds good.  Keep me posted!
>>>
>>> Thank you,
>>>
>>> Geoff
>>>
>>>
>>> On Monday, August 4, 2014 2:21:20 PM UTC-4, Robert Cimrman wrote:
>>>>
>>>> On 08/04/2014 05:21 PM, Geoff Wright wrote:
>>>>> Hi Robert,
>>>>>
>>>>> When I try extend_cell_data on the surface of a cube, I get an error:
>>>>> "ValueError: The weights and list don't have the same length."  I attached
>>>>> the file.  It uses a mesh that is provided with sfepy... can you take a
>>>>> look?
>>
>> I have fixed extend_cell_data() to work, see e46188f541f9bbe.
>>
>>>>> This is another example where the flux doesn't make sense.  If I use only
>>>>> vertices on one face as the source then the flux values are good.  But if
>>>>> you run the file as-is, the source extends a bit into the adjacent faces,
>>>>> and the flux results are incorrect.  The electric field values look okay,
>>>>> and the voltage field looks fine.  My results are as follows:
>>>>>
>>>>> Source flux:  1.09348063065
>>>>> Sink flux:  -0.90821822135
>>>>> Block flux:  -0.040501111711
>>>>> Entire surf:  0.298397270469 0.0181174920822 -0.0124582155367
>>>>> Source surf area: 3.05555373528
>>>>> Sink surf area: 1.0
>>
>> In [1]: 1.09348063065-0.90821822135-0.040501111711
>> Out[1]: 0.14476129758900003
>>
>> which is not 0.298397270469 - this is because Gamma_Left + Gamma_Right +
>> Blocked < EntireSurf - some faces are missing there.
>>
>> To fix that, use the 'f' operators instead of 'v':
>>
>>       'Gamma_Left' : ('r.EntireSurf *f vertices in (x < -0.3)', 'facet'),
>>       'Gamma_Right' : ('r.EntireSurf *f vertices in (x > 0.4999)', 'facet'),
>>       'Blocked' : ('r.EntireSurf -f (r.Gamma_Left +f r.Gamma_Right)', 'facet'),
>>
>> This requires the ce7a6a73166e5d572 commit - there was a bug (or missing
>> feature) that made the 'r.EntireSurf *f vertices in (x < -0.3)' kind of
>> selection fail.
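>>
>> With all three parts covering the whole surface, each partial flux can then
>> be checked against the total, for example (a sketch, assuming a material 'm'
>> holding the conductivity in 'm.sigma' and the potential variable 'phi'; the
>> names are illustrative):
>>
>>      flux = pb.evaluate('d_surface_flux.2.Gamma_Left(m.sigma, phi)')
>>
>> and similarly for Gamma_Right, Blocked and EntireSurf.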
>>
>>>>>
>>>>>
>>>>> I'm pretty sure there is an issue with the surface_flux function.  Is this
>>>>> something you will be able to look at?  I'm trying to decide whether to
>>>>> continue investing in this or switch to another tool/software.
>>>>
>>>> I am actually up to something, stay tuned. (And the surface_flux function is
>>>> innocent - it comes down to region definitions and gradient computation
>>>> accuracy).
>>
>> After the above changes, the sum of partial fluxes equals the entire surface
>> flux. However, the FEM is not locally conservative, so there is an inherent
>> error in the flux computation. I modified the file to also save the partial
>> fluxes in the surface region parts - you may notice a relatively high error
>> (nonzero values) in the Blocked region near the Dirichlet boundary, where
>> the gradients are high. To minimize that, you can:
>> - use a finer mesh (at least near the Dirichlet boundary)
>> - use a higher order approximation
>> Both will soon make the direct solver unusable. Luckily, the Laplacian can
>> easily be solved by conjugate gradients; see the commented-out solver_0
>> (PETSc is required for that).
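>>
>> A minimal sketch of such a solver setup (assuming petsc4py is installed; the
>> option values below are illustrative, not a definitive configuration):
>>
>>      solvers = {
>>          'ls' : ('ls.petsc', {
>>              'method' : 'cg',    # conjugate gradients
>>              'precond' : 'icc',  # incomplete Cholesky preconditioner
>>              'eps_r' : 1e-12,    # relative tolerance
>>              'i_max' : 1000,     # maximum number of iterations
>>          }),
>>          'newton' : ('nls.newton', {'i_max' : 1, 'eps_a' : 1e-10}),
>>      }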
>>
>> The attached file uses approx_order 3 and gives:
>>
>> Sum of partial fluxes == total: 0.0378874815651 == 0.0378874815651 2.42833097736e-18
>> Source flux:  1.05847824223 1.05847824223
>> Sink flux:  -0.884664882961 -0.884664882961
>> Block flux:  -0.135925877701 -0.135925877701
>> Entire surf:  0.0378874815651 0.0378874815651 0.0175191418749 -0.0162855737778
>> Source surf area: 1.5549799671
>> Sink surf area: 1.0
>> Block surf area: 3.4450200329
>> Entire surf area: 6.0 == 6.0
>>
>> which is better than it was. Still, the mesh is too coarse.
>>
>> If the above is not sufficient, another method that is conservative would be
>> needed, for example the discontinuous Galerkin (DG) method. It is not
>> implemented in SfePy yet, so I have talked a bit with some colleagues about a
>> possible implementation. You may need to look for a DG or finite volume solver
>> in the meantime.
>>
>> Bottom line: the special case that "works" has the gradients aligned in the
>> x-axis direction (it is essentially a 1D problem), so the error in surface
>> gradient computation is not present.
>>
>> That's all, let me know if it makes sense.
>>
>> r.
>>
>>
>