Ryan Krauss wrote:
So, I think I am making some progress on trying to understand LinearElasticTerm.__call__, but I have some questions.
First of all, as far as the big picture goes, it seems like the method (and especially the underlying C function dw_lin_elastic_iso) must implement something like Hooke's law in 3D. But if the __call__ method takes the current state of everything and finds all the element displacements for the current time step, it almost seems like it is doing the entire finite element method. Is either (or both) of these a good way to think about what the __call__ method does?
__call__() constructs element contributions to the global "stiffness" matrix. (The quotes are there because it is a stiffness matrix only in mechanics; in general it would be called a Jacobian or tangent matrix.)
Yes, dw_lin_elastic_iso() implements Hooke's law, in 2D and 3D. The job of __call__() is to pass it the correct arguments, following from the current state of everything, as you say.
What are the input and output of __call__()? It seems like the input is everything that is known at the current time step. The output contains three things. The primary one is out, which for my problem is a 4D array with shape (177, 1, 12, 1); 177 is the number of elements, so it seems this array contains 12 parameters per element. Are there 12 d.o.f. for each element? If so, what are they? I guess it makes sense that an element with 4 nodes could have 12 d.o.f., but that would be a redundant way to think about things (obviously the nodes are shared by several elements), so it seems like it must be 12 element strains. The other two outputs seem straightforward: chunk is a list of element numbers (since I have 177 elements and a chunk size of 100000, it is simply all my elements, i.e. range(177)), and status is always 0; I assume this is a flag for when something goes wrong.
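For what it's worth, the 12 is consistent with 4-node tetrahedral elements in 3D: 4 nodes times 3 displacement components per node, i.e. the length of one element's contribution vector. A tiny numpy sketch of that bookkeeping (the sizes here just mirror the numbers quoted above; the variable names are made up, not SfePy's):

```python
import numpy as np

# Illustrative sizes matching the discussion: 177 elements,
# 4 nodes per element, 3 displacement components per node (3D).
n_el, n_ep, dim = 177, 4, 3

# Each element contributes a vector of n_ep * dim = 12 entries
# (or, when the tangent matrix is requested, a 12 x 12 block).
out = np.zeros((n_el, 1, n_ep * dim, 1))
print(out.shape)  # (177, 1, 12, 1)
```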
Let's say we have a term implementing some form a(v, u), with v the test function and u the unknown.
All terms have the same call signature:

    def __call__( self, diff_var = None, chunk_size = None, **kwargs )

- if diff_var is None, the term is evaluated for the given actual state (a(v, u) ~ A * u),
- if diff_var == 'u', it returns the gradient of a(v, u) w.r.t. 'u', i.e. A,
- chunk_size controls the (max.) number of elements processed in one call,
- **kwargs contains everything else (the state, etc.); the arguments of the term are accessed via self.get_args( **kwargs ).
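The diff_var dispatch can be illustrated with a toy stand-in (this is a hypothetical sketch, not SfePy's actual implementation; fake_call, A and b are made-up names):

```python
import numpy as np

def fake_call(state, A, b, diff_var=None, chunk_size=None):
    """Toy model of a term's __call__ dispatch.

    diff_var is None  -> evaluate the term for the actual state (~ A * u),
    diff_var == 'u'   -> return the gradient w.r.t. 'u', i.e. A itself.
    """
    if diff_var is None:
        return A @ state + b          # residual-like evaluation
    elif diff_var == 'u':
        return A                      # tangent ("stiffness") matrix
    raise ValueError('unknown diff_var: %s' % diff_var)

A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([1.0, -1.0])
u = np.array([1.0, 1.0])
print(fake_call(u, A, b))                # residual: [3. 2.]
print(fake_call(u, A, b, diff_var='u'))  # the matrix A
```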
In its current form, my input file (see attached) has only two time steps (t=0 and t=1). The only difference between the two is that at t=0 the traction load is 0 and at t=1 the traction load is 0.1.
It seems like the traction load must be part of the fargs that are passed to dw_lin_elastic_iso in the self.function call, but I don't see it. Maybe this is just a problem with what gets printed using Pdb, but is the traction in fargs somewhere?
No. The traction load is applied by having dw_surface_ltr.isurf.Top( traction.val, v ) in the equation. Use pdb in termsSurface.py. It is completely independent of dw_lin_elastic_iso.
One term = one integral in the equation.
With two time steps, why is __call__ called 4 times? The first two calls result in out containing all 0's. The third and fourth calls produce non-zero out. The out of the third and fourth calls appear to be different.
Let me explain what happens in one time step, if 'problem' is 'nonlinear' in solver_1:
1. The state vector is initialized with the solution of the previous time step.
2. Dirichlet boundary conditions of the current time step are applied to the state vector.
3. The Newton iteration loop is entered, solving the general f( u ) = 0:
   - (*) f( u ), i.e. the residual, is evaluated - the 1st __call__
   - if it is zero, we are done, return u
   - (o) if it is nonzero, df/du is evaluated - the 2nd __call__, with diff_var = 'u'
   - du is computed by solving the linear system
   - the solution is updated: u = u + du
   - go to (*) until convergence or the max. iteration count is reached.
In your case, f( u ) = a( v, u ) + b( v ), with 'a' the linear elasticity operator and 'b' the surface load (tractions).
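The loop above can be sketched in a few lines of numpy, with A standing in for the assembled elasticity operator 'a' and b for the assembled surface load 'b' (the matrix and vector values here are made up for illustration):

```python
import numpy as np

def newton(f, dfdu, u, tol=1e-10, max_it=10):
    """Minimal Newton loop for f(u) = 0, mirroring the scheme above."""
    for _ in range(max_it):
        r = f(u)                         # (*) evaluate the residual (1st __call__)
        if np.linalg.norm(r) < tol:
            return u                     # zero residual: done
        J = dfdu(u)                      # (o) evaluate df/du (2nd __call__)
        u = u + np.linalg.solve(J, -r)   # du from the linear system; update
    return u

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
u = newton(lambda u: A @ u + b, lambda u: A, np.zeros(2))
print(np.allclose(A @ u + b, 0.0))  # True: linear, so one iteration suffices
```

Counting the calls in this linear case gives exactly the pattern described below: one residual evaluation, one Jacobian evaluation, and a final residual evaluation that confirms convergence.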
So for linear problems we have in general three __call__ calls per time step, as the Newton method should converge in one iteration. If it does not converge, you see immediately that your term is wrong.
In your case you see four calls: one for step 0 (in step 0, u = 0 is already the solution, hence there is nothing to solve) and three for step 1.
This is a very general scheme for solving any PDEs, possibly nonlinear, and you should stick with it until you are sure that your matrix (A) and your residual (A*u) evaluations are correct and consistent with each other. Then, for linear problems, you can use 'problem' : 'linear' in solver_1, which pre-assembles the tangent matrix prior to the time stepping, so that the step denoted by (o) in the Newton loop becomes a no-operation and __call__ is called only once per time step.
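The 'linear' variant then amounts to something like the following sketch: assemble once, then do a single solve per time step (the matrix and load values are made up; the two loads just echo the traction of 0 at t=0 and 0.1 at t=1 from your input file):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])       # tangent matrix, assembled once
loads = [np.zeros(2), np.array([0.0, 0.1])]  # traction 0 at t=0, 0.1 at t=1

for step, b in enumerate(loads):
    u = np.linalg.solve(A, b)                # one linear solve per time step
    print(step, u)
```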
I think those are my main questions for now.
We are writing a manual here :)