Hi Stephen,
In my dataset I've found that I need a multiplier of at least 52 for it to work in all cases. I did some statistics: the mean minimum multiplier required is 18 with a standard deviation of 9, which puts 52 almost 4 sigma out, so needing a value that far out isn't that rare. I don't know how globally applicable these statistics are; this is just one dataset.
What I'm wondering is: what should we do about this? Make the multiplier large enough to cover an N-sigma likelihood? Or wrap the call to construct_boundary_relationships in countour_finder.py in a while loop with a try/except that increases the multiplier on each failure?
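For the second option, I'm picturing roughly the sketch below (the keyword argument, the exception type, and the step/cap values are just placeholders; only the function and file names are real):

from countour_finder import construct_boundary_relationships

def construct_with_retry(grid, multiplier=18, cap=100):
    # Retry with a larger multiplier until the call succeeds or we hit the cap.
    while True:
        try:
            return construct_boundary_relationships(grid, multiplier=multiplier)
        except Exception:  # ideally catch the specific failure we actually see
            if multiplier >= cap:
                raise
            multiplier += 9  # step by roughly one standard deviation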
So why does it need to be so high? If you look at the sets of loops, it should be fully determined:
for i in range(nx):
    for j in range(ny):
        for offset_i in range(-1, 2):
            if i == 0 and offset_i == -1:
                continue  # skip without incrementing
            if i == nx - 1 and offset_i == 1:
                continue  # skip without incrementing
            for offset_j in range(-1, 2):
                if something:
                    ti += 1
                if something_else:
                    ti += 1
and then this gets repeated with nx&nz and ny&nz.
Maybe I missed the multiplication by two for the two possible increments (3 x 3 offset combinations times 2 increments would give 18 per cell), so maybe it should be:
s = (ny*nx + nx*nz + ny*nz - 2) * 18
Either way, I think statistics here might be misguided. It's a fully determined loop; I just did the math wrong.
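If we want to take the hand-counting out of it entirely, a quick brute-force count would give the exact worst case to check any closed form against. This assumes both increments fire on every inner pass, which is the most the loop can ever do; max_increments and the example sizes are just for illustration:

def max_increments(n_a, n_b):
    # Worst case for one pair of dimensions, assuming both conditions
    # fire on every inner iteration.
    count = 0
    for i in range(n_a):
        for j in range(n_b):
            for offset_i in range(-1, 2):
                if i == 0 and offset_i == -1:
                    continue
                if i == n_a - 1 and offset_i == 1:
                    continue
                for offset_j in range(-1, 2):
                    count += 2
    return count

def worst_case(nx, ny, nz):
    # One block per face pair, matching the three copies of the loop.
    return max_increments(nx, ny) + max_increments(nx, nz) + max_increments(ny, nz)

print(worst_case(4, 5, 6))  # compare against whatever closed form we settle on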
-Matt