Hi,
I'm using SfePy to solve a simple Laplace problem in a unit square with Dirichlet boundary conditions (well, what I'm actually doing is a bit more involved, but this is just a minimal example). I'm having some problems with the fact that probing with a PointsProbe occasionally returns nan instead of a proper value. I have attached a mesh file which I created by using gmsh with the attached .geo file and then manually converting it to the desired format for SfePy (that is, I removed the edges and the z-coordinates of the nodes). The following code, with the attached mesh file, should be able to reproduce this issue:
import numpy as np
from sfepy.discrete import (FieldVariable, Integral, Equation, Equations, Problem)
from sfepy.discrete.fem import (Mesh, FEDomain, Field)
from sfepy.terms import Term
from sfepy.discrete.conditions import Conditions, EssentialBC
from sfepy.postprocess.viewer import Viewer

mesh = Mesh.from_file('rectangle2D.mesh')
domain = FEDomain('domain', mesh)
omega = domain.create_region('Omega', 'all')
# Groups 1 and 3 correspond to the bottom and top of the square, respectively
gamma0 = domain.create_region('Gamma 0', 'vertices of group 1', 'facet')
gamma1 = domain.create_region('Gamma 1', 'vertices of group 3', 'facet')
field = Field.from_args('fu', np.float64, 'scalar', omega, approx_order=2)
u = FieldVariable('u', 'unknown', field)
v = FieldVariable('v', 'test', field, primary_var_name='u')
integral = Integral('i', order=2)
term = Term.new('dw_laplace(v, u)', integral, omega, v=v, u=u)
eq = Equation('Laplace equation', term)
eqs = Equations([eq])
ebc0 = EssentialBC('Dirichlet 0', gamma0, {'u.all': 0.0})
ebc1 = EssentialBC('Dirichlet 1', gamma1, {'u.all': 1.0})
pb = Problem('Laplace problem', equations=eqs)
pb.time_update(ebcs=Conditions([ebc0, ebc1]))
vec = pb.solve()
pb.save_state('rectangle2D.vtk', vec)
view = Viewer('rectangle2D.vtk')
view()

from sfepy.discrete.probes import PointsProbe
probe = PointsProbe(np.random.rand(100000, 2))
pars, res = probe.probe(pb.get_variables()['u'])
print(np.any(np.isnan(res)))
print(probe.points[np.nonzero(np.isnan(res))[0]])
The code above solves the Laplace problem and creates a PointsProbe with 100000 random points. It then prints True if any of the probed values is nan, and prints the coordinates of the corresponding points. When I run this code, it usually turns up around 5-10 points which produce a nan value. This is an example of the output on my system:
sfepy: reading mesh [line2, tri3, quad4, tetra4, hexa8] (rectangle2D.mesh)...
sfepy: ...done in 0.03 s
sfepy: updating variables...
sfepy: ...done
sfepy: setting up dof connectivities...
sfepy: ...done in 0.00 s
sfepy: matrix shape: (13343, 13343)
sfepy: assembling matrix graph...
sfepy: ...done in 0.01 s
sfepy: matrix structural nonzeros: 151301 (8.50e-04% fill)
sfepy: updating materials...
sfepy: ...done in 0.00 s
sfepy: nls: iter: 0, residual: 1.819854e+01 (rel: 1.000000e+00)
sfepy: rezidual: 0.01 [s]
sfepy: solve: 0.07 [s]
sfepy: matrix: 0.01 [s]
sfepy: nls: iter: 1, residual: 4.729175e-14 (rel: 2.598656e-15)
sfepy: point scalars u at [-0.5 -0.5  0. ]
sfepy: range: 0.00e+00 1.00e+00 l2 norm range: 0.00e+00 1.00e+00
sfepy: iconn: 0.016510 s
sfepy: kdtree: 0.000420 s
sfepy: evaluating in 100000 points...
sfepy: reference field: 3436 vertices
sfepy: kdtree query: 0.291867 s
sfepy: ref. coordinates: 0.466726 s
sfepy: interpolation: 0.135751 s
sfepy: ...done
True
[[ 0.48132644  0.32787828]
 [ 0.1449193   0.69671521]
 [ 0.48135807  0.32673518]
 [ 0.77758596  0.35547628]
 [ 0.64769394  0.79028833]
 [ 0.66355676  0.78836725]
 [ 0.4799345   0.32667195]
 [ 0.48147468  0.3281688 ]]
After some further examination, I've found that the problem occurs in some small triangular regions of the domain. To illustrate this problem, I made a (rather ugly, but possibly helpful) scatter plot of the values in a grid around the first point of the list above, using this (extremely messy) code:
temp = probe.points[np.nonzero(np.isnan(res))[0][0]]
X, Y = np.mgrid[temp[0]-0.003:temp[0]+0.003:40j, temp[1]-0.003:temp[1]+0.003:40j]
gridprobe = PointsProbe(np.array(np.array([X.flatten(), Y.flatten()]).transpose(), order='C'))
plot_data = gridprobe.probe(pb.get_variables()['u'])[1].flatten()

from pylab import *
figure()
scatter(X.flatten()[~np.isnan(plot_data)], Y.flatten()[~np.isnan(plot_data)])
closest_points = vec.variables['u'].field.get_coor(np.argsort(np.linalg.norm(vec.variables['u'].field.get_coor()
    - temp, 2, axis=1))[:4])
scatter(closest_points[:,0], closest_points[:,1], color='r')
scatter([temp[0]], [temp[1]], color='g')
show()
The resulting plot is attached. The triangular area inside the blue grid is where probing results in nans. The green point ('temp' above) is the first nan point which was found earlier (i.e. the point (0.48132644 0.32787828)), and the red points are the four mesh nodes closest to the green point. I included the mesh points because my first assumption was that the nan area would correspond to a mesh triangle, but that does not seem to be the case.
Now the obvious question at this point is: am I doing something wrong, or is this a bug?
Regards, Jonatan
P.S. I was originally using evaluate_at() instead of a PointsProbe, and that function returns 0.0 instead of nan. I'm not entirely sure if this is intended, but at least I found it extremely confusing.
Hi Jonatan,
it is definitely a bug. As a matter of coincidence, I stumbled upon it two days ago when creating the new example [1], but did not get to investigating it properly (I just worked around the bug poorly by setting the close_limit parameter of the probes to a higher value, see the example). So thank you for the demonstration scripts! More on the bug below.
[1] http://sfepy.org/doc-devel/examples/diffusion/time_poisson_interactive.html
On 03/19/2015 06:13 PM, Jonatan Lehtonen wrote:
<snip>
I have added this to your plotting script:
from sfepy.postprocess.plot_cmesh import plot_wireframe
ax = gca()
plot_wireframe(ax, domain.cmesh)
ax.axis('equal')
<snip>
The figure resulting from the above code, zoomed to the region of interest, is attached - it also displays a wire-frame of the mesh.
<snip>
The bug is caused by a wrong assumption I made in the past. When evaluating a field in general points, the algorithm, in essence, is as follows:

1. find the mesh vertex closest to the point (using a kdtree)
2. for each cell sharing that vertex, invert the reference-physical element mapping to the point - if the reference coordinates are within [-close_limit, 1 + close_limit], the point is in that cell -> evaluate the FE basis there, and interpolate the field to get the value in the point - otherwise continue with the next cell
3. if the above fails, the value is undetermined (-> nan in the probe)
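A minimal sketch of the step-2 test for a linear triangle, in plain numpy (my own illustration with hypothetical helper names, not SfePy's actual cython code):

```python
import numpy as np

def ref_coords_triangle(point, verts):
    # Invert the affine reference -> physical map x = v0 + r*(v1-v0) + s*(v2-v0)
    # of a linear triangle with vertices verts (3 x 2 array).
    v0, v1, v2 = np.asarray(verts, dtype=float)
    A = np.column_stack([v1 - v0, v2 - v0])
    r, s = np.linalg.solve(A, np.asarray(point, dtype=float) - v0)
    return r, s

def point_in_cell(point, verts, close_limit=0.0):
    # The point is attributed to the cell if all "barycentric-like"
    # coordinates stay within [-close_limit, 1 + close_limit].
    r, s = ref_coords_triangle(point, verts)
    coords = np.array([r, s, 1.0 - r - s])
    return bool(np.all(coords >= -close_limit) and np.all(coords <= 1.0 + close_limit))
```

A point just outside an edge (one coordinate slightly negative) is rejected with close_limit=0.0 but accepted with a larger close_limit - which is exactly why raising close_limit hides the bug.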
As you can see in the attached figure, the points in the white triangle are closest to a mesh vertex that is shared by four cells (elements), and the cell the point is actually in is not among them. The points close to the cell edge are blue, because they fall within the close_limit range. Try gridprobe.set_options(close_limit=0.0).
<snip>
The probes use evaluate_at() internally, but they set the values at not-found points explicitly to nan, to indicate probing outside of the domain, which can legitimately happen. evaluate_at() itself, on the other hand, was meant to be used inside the domain and should not fail - bad guess :) You are right that it is confusing; evaluate_at() should return nan as well.
Until it is fixed properly, which might take some time as the bug is in low-level cython code, you can work around it by setting close_limit to a higher value. (Also, it is just the time of grant project proposals and tons of bureaucracy :].)
Could you create a new issue for this, with a link to this discussion?
Thank you for your debugging effort! r.
Hi Robert,
Thank you for the fast response! I have opened a new issue at https://github.com/sfepy/sfepy/issues/285.
I don't have a very good intuition about how close_limit works. What do you think would be a "safe" value to ensure that this bug does not occur? Does that depend on the element shape? Also, are there some drawbacks to using a higher value?
Regards, Jonatan
On 03/20/2015 10:36 AM, Jonatan Lehtonen wrote:
<snip>
Thanks!
<snip>
The close_limit is relative to the element size, so, roughly, 0.5 corresponds to half the element size etc. That means that the value attributed to a point can come from a point as far as half an element size away. The drawbacks are mainly a loss of precision (the finer the mesh, the smaller) and irregularity of the sampled values. Imagine sampling a smooth function along a line - if the actual sampling points are shifted a bit randomly (i.e. not exactly on the line), the sampled function will not be smooth but will jump slightly (*). Anyway, for getting an overview of a solution it might suffice. Try setting it to the smallest value that does not give nans.
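The jumpiness is easy to see with plain numpy - a small standalone illustration of my own (not related to SfePy's code): sample a smooth function along the line y = 0, but shift each evaluation point randomly in y, as if the value came from a point up to half an element size away.

```python
import numpy as np

# A smooth function of (x, y); sampling it exactly along y = 0 is smooth.
def f(x, y):
    return np.sin(4.0 * x) + y

x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
# Random offsets standing in for the "up to close_limit * element size" shift.
jitter = rng.uniform(-0.05, 0.05, size=x.shape)

exact = f(x, np.zeros_like(x))    # values on the line
shifted = f(x, jitter)            # values attributed from nearby points

# Neighbouring exact samples differ only slightly; the shifted ones jump
# by up to the jitter amplitude on top of that.
max_jump_exact = np.abs(np.diff(exact)).max()
max_jump_shifted = np.abs(np.diff(shifted)).max()
```

The shifted samples show jumps an order of magnitude larger than the exact ones, even though the underlying function is perfectly smooth.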
r.

(*) This happens in the time_poisson_interactive.py example, especially with the diffusion velocity.
Hi Robert,
Do you have any timeline for when this bug might be fixed? I don't want to rush you, I understand that it might be a pain to fix and you did mention that you're busy. It would just be really helpful for me to know if I can expect a fix in, say, the next month or so.
I do have a workaround for the bug (essentially using increasingly larger values of close_limit until no results are NaN or a pre-defined maximum value of close_limit is reached). So, I'm not in trouble if this doesn't get fixed. =)
Regards, Jonatan
Hi Jonatan,
On 04/15/2015 12:53 PM, Jonatan Lehtonen wrote:
<snip>
I am about to merge in some pretty big changes of the internals (I will post a summary here), and then I may have some time to address this. So, if things go well, it might be fixed within a month.
r.
On 04/15/2015 01:02 PM, Robert Cimrman wrote:
<snip>
I have the new code almost ready. To test it properly, a little help would be appreciated - could you prepare some small meshes in gmsh with really ugly elements (large size variations, elements with very acute angles, etc.)?
It produces no nans with your original script...
r.
Hi Robert,
Sounds great!
Unfortunately I don't really know how to manually create arbitrary meshes in gmsh (the meshes it generates by default are "too good" for this purpose). However, here's at least one mesh that I think should be fairly horrible (and is aptly named). There are some points in there for which there are almost ten elements "between" the point and its closest mesh node. Naturally, you can make the mesh worse by an arbitrary factor by changing nodes 4-13 (increase y-coordinate or push x-coordinates closer to each other) or by adding more triangles in the same style.
I tested this mesh with the latest code from Github (which, I assume, doesn't contain the changes you're talking about). I used the same solution script as in the OP (just replaced "vertices of group 3" with "vertices of group 2"), and picked points randomly from the quadrilateral with corners at (0, 0), (0, 1), (-0.1, 4), (-1, 4). Using PointsProbe with close_limit=0.0, out of the 92475 random points as many as 55155 returned NaN.
I hope this helps. =)
-Jonatan
On 04/28/2015 07:52 PM, Jonatan Lehtonen wrote:
<snip>
Yes, it's horrible. :)
Short story: The new code fails as well; I know why (non-convex domain) and do not know a workaround (a quick internet search did not help either).
Long story: Imagine a mesh with a hole (e.g. a long rectangular hole). The mesh has nice small elements on one side of the hole, and very big elements on the other side, much bigger than the hole's side-to-side dimension. If a target (probe) point is in a big element, and the closest mesh point is in a small element on the other side of the hole, I have no idea how to reach it from there. The current algorithm follows the elements in the direction from the closest point to the target, but it cannot cross the hole. It would work for a convex-hull mesh of the original mesh, though.
So, for the moment, I will merge the current "solution", as it is about 2-3x faster than the old one and works well for reasonable meshes (much better than the old one).
We may also add a slow "brute force" fall back algorithm (search all elements in spheres of increasing radius centered in the target point - scipy's kdtree will help here), that will be optionally tried for points not found by the fast one.
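Such a fall back might look like this toy pure-numpy sketch of mine (in SfePy the candidate lookup would use scipy's kdtree ball queries, and the hypothetical contains callback would be the real inverse-mapping test):

```python
import numpy as np

def find_cell_brute_force(point, centroids, contains, r0, growth=2.0, max_tries=8):
    # Test cells whose centroids fall into spheres of increasing radius
    # around the target point; contains(ic, point) is the exact (inverse
    # mapping) membership test for cell ic.
    point = np.asarray(point, dtype=float)
    dist = np.linalg.norm(centroids - point, axis=1)
    checked = np.zeros(len(centroids), dtype=bool)
    r = r0
    for _ in range(max_tries):
        for ic in np.where((dist <= r) & ~checked)[0]:
            if contains(ic, point):
                return int(ic)
        checked[dist <= r] = True
        r *= growth
    return -1  # not found -> nan in the probe

# Toy demonstration: four unit-square "cells" on a 2x2 grid.
cents = np.array([[0.5, 0.5], [1.5, 0.5], [0.5, 1.5], [1.5, 1.5]])

def in_square(ic, p):
    lo = cents[ic] - 0.5
    return bool(np.all(p >= lo) and np.all(p <= lo + 1.0))
```

Because it only grows the search radius around the target point, this search cannot be fooled by holes or non-convexity - it is just slow, hence "optionally tried for points not found by the fast one".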
If you are interested, try searching for articles or for how libraries like VTK do it (there are probes in paraview, and VTK probes are now in sfepy as well, thanks to Vladimir, see [1], but work only for approximation order 1 (or 0)).
r.
<snip>
All I've been able to find is the algorithm described in the accepted answer of this StackExchange question: http://scicomp.stackexchange.com/questions/2624/finding-which-triangles-points-are-in. The upside is that it should never fail for awful.mesh. The downsides are that it may fail to converge in non-convex meshes (since it cannot jump over gaps), that it might only work for triangular/tetrahedral meshes, and that it runs in O(N^(1/d)). The runtime would probably be significantly improved by first finding a nearby mesh node with the kdtree, though.
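That walking algorithm is short to sketch for triangles (a toy pure-numpy version, names are mine): step across the edge opposite the most negative barycentric coordinate until the point is inside, and return -1 when the walk leaves the mesh.

```python
import numpy as np

def barycentric(p, tri, pts):
    # Barycentric coordinates of p in the triangle with vertex indices tri.
    a, b, c = pts[list(tri)]
    T = np.column_stack([b - a, c - a])
    l1, l2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l1 - l2, l1, l2])

def walk_to_triangle(p, start, tris, pts, neighbours, max_steps=1000):
    # neighbours[it][k] is the triangle across the edge opposite vertex k
    # of triangle it, or -1 on the boundary.
    p = np.asarray(p, dtype=float)
    it = start
    for _ in range(max_steps):
        lam = barycentric(p, tris[it], pts)
        k = int(np.argmin(lam))
        if lam[k] >= -1e-12:
            return it            # all coordinates non-negative: found
        it = neighbours[it][k]   # cross the offending edge
        if it == -1:
            return -1            # walked out of the mesh (non-convex case)
    return -1

# Toy mesh: the unit square split into two triangles.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tris = [(0, 1, 2), (0, 2, 3)]
neighbours = [(-1, 1, -1), (-1, -1, 0)]
```

Returning -1 at a boundary edge is precisely the non-convex failure mode: the walk stops at a gap it cannot jump over.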
My intuition says that it's always possible to find a counter-example for any non-brute-force algorithm though, so it may not be worth spending too much time on it. E.g. while the above algorithm would work for awful.mesh, it would fail the moment you add one more triangle with corners at (0, 1), (-0.1, 4) and, say, (0.5, 0.6). Then it wouldn't converge for a point at (0.45, 0.45) with first-order elements.
Naturally, one option would be to add an extra hidden mesh which, when combined with the original mesh, yields the convex hull of the object. That way you could probably guarantee convergence, but it would involve a considerable computational cost.
The best option is probably to go with what you've got and add the fall back brute force, as you described.
-Jonatan
On 04/29/2015 12:31 PM, Jonatan Lehtonen wrote:
<snip>
There is also a link to an interesting PDF in Edit2 of that question... Anyway, the current code should also deal with hexahedra that have non-planar faces (non-convex!) as long as the centroids stay inside.
<snip>
Yes, I have thought about this possibility, but decided not to go that way now :).
<snip>
Or using quad-/oct-trees with the brute force (but kdtree is available and fast).
Try the new code [1], please, and let me know if it works for your (non-awful) meshes. I have also fixed the nan vs. zero problem.
The brute force will have to wait.
r.

[1] https://github.com/rc/sfepy/tree/improve-find-ref-coors
I tried your solution with the mesh I've been working on, using roughly 5 million measurement points inside the mesh. With the old code (the latest major release, I think), probing took 46 seconds and resulted in 898 NaNs. With the new code, it took 20 seconds and resulted in no NaNs. So it certainly looks like your code works. =)
While working with the new code, I noticed that apparently get_evaluate_cache has been removed from FEDomain. Currently in my code, after solving the Laplace problems with code very similar to the OP, I execute the following line of code:
Probe.cache = pb.domain.get_evaluate_cache(cache=None, share_geometry=True)
So what I'm wondering is, what should I replace this code with? Essentially what I want to do is ensure that every probe I create has the evaluate cache corresponding to that mesh (so that if I were to run the code again with a different mesh, every subsequent probe would automatically use the evaluate cache of the new mesh).
-Jonatan
On 04/30/2015 01:29 PM, Jonatan Lehtonen wrote:
<snip>
Good, thanks for trying. I will merge it into master soon.
<snip>
It was not removed, but moved to Field, because fields can be defined on a subdomain. Use field.get_evaluate_cache(cache=None, share_geometry=True).
r.