So, I dug into sfepy/sfepy/terms/termsLinElasticity.py(61)__call__() a bit to try and understand how linear elasticity is being handled:
    def __call__( self, diff_var = None, chunk_size = None, **kwargs ):
        Pdb().set_trace()
        material, virtual, state = self.get_args( **kwargs )
        ap, vg = virtual.get_approximation( self.get_current_group(), 'Volume' )
        shape, mode = self.get_shape( diff_var, chunk_size, ap )
        fargs = self.build_c_fun_args( material, state, ap, vg )
        for out, chunk in self.char_fun( chunk_size, shape ):
            status = self.function( out, *fargs + (chunk, mode) )
            yield out, chunk, status
I think I understand the first line: material, virtual, state = self.get_args( **kwargs ). It seems to extract the material, including its Lame parameters and all that, along with the state. I am not entirely sure what state means:
ipdb> print state
Variable:u
  kind: unknown
  _order: 0
  name: u
  family: field
  dual_var_name: v
  dpn: 3
  dof_conns: Struct:dof_conns
  indx: slice(0, 1062, None)
  dof_name: u
  dtype: <type 'numpy.float64'>
  field: Field:3_displacement
  step: None
  eq_map: Struct
  flags: set([0, 10])
  i_dof_max: 1062
  key: variable_1
  current_ap: None
  n_dof: 1062
  dofs: ['u.0', 'u.1', 'u.2']
  data: deque([array([ 0., 0., 0., ..., 0., 0., 0.])])
  history: None
Some of those terms make sense. But what are deque and flags? What is the indx used for? Is that to get this node out of the mesh?
But it seems like the real action happens when self.function is called. self.function refers to sfepy/sfepy/terms/extmods/terms.py(166)dw_lin_elastic_iso() which just calls a C function. I am scared that implementing a nonlinear material model will require C programming. I haven't done much C lately.
Ryan
Ryan Krauss wrote:
So, I dug into sfepy/sfepy/terms/termsLinElasticity.py(61)__call__() a bit to try and understand how linear elasticity is being handled:
Let me go line-by-line:
    def __call__( self, diff_var = None, chunk_size = None, **kwargs ):
        Pdb().set_trace()
        material, virtual, state = self.get_args( **kwargs )
here we extract the arguments passed to the term.
    ap, vg = virtual.get_approximation( self.get_current_group(), 'Volume' )
'ap' stores various FE-related data corresponding to the given variable, here 'virtual'. state.get_approximation(...) would return the same, as both the test (virtual) and unknown (state) variables use the same approximation here.
'vg' is a VolumeGeometry instance; it holds the derivatives of the shape (base) functions transformed from the reference element to the real ones. It also holds the jacobians and weights required for numerical quadrature over the domain of the term.
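The role of the jacobians and weights can be illustrated with a tiny standalone quadrature computation (plain numpy; the element, the map, and the integrand are made up for illustration, this is not the sfepy API):

```python
import numpy as np

# Toy 1D quadrature: integrate f(x) = x over the "real" element [2, 5]
# using reference-element points/weights and a jacobian, i.e. the kind
# of data a VolumeGeometry instance carries for each element.
qp = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule on [-1, 1]
weights = np.array([1.0, 1.0])

a, b = 2.0, 5.0                          # the real element
jacobian = (b - a) / 2.0                 # constant for an affine map
x = 0.5 * (a + b) + 0.5 * (b - a) * qp   # mapped quadrature points

integral = np.sum(weights * x * jacobian)
print(integral)   # close to 10.5, the exact value of int_2^5 x dx
```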
    shape, mode = self.get_shape( diff_var, chunk_size, ap )
'shape' is the shape of the element contributions (residual if mode == 0, matrix if mode == 1). It always has the structure (n_el, n_qp, n_row, n_col), where
    n_el - the number of elements to assemble
    n_qp - the number of quadrature points in each element (= 1 here, as the C function already integrates the data over elements)
    n_row, n_col - the actual dimensions
Residual mode assembles D * u, matrix mode just D.
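The two modes can be mimicked with plain numpy arrays (all names here are illustrative, not sfepy code):

```python
import numpy as np

# Illustrative only (not sfepy code): the two output shapes of a term
# for n_el = 3 elements with 12 dofs each and n_qp = 1.
n_el, n_ep = 3, 12
rng = np.random.default_rng(0)
D = rng.standard_normal((n_el, n_ep, n_ep))   # per-element matrices
u = rng.standard_normal((n_el, n_ep, 1))      # per-element dof vectors

out_residual = (D @ u)[:, None, :, :]   # residual mode: D * u
out_matrix = D[:, None, :, :]           # matrix mode: D itself

print(out_residual.shape)   # (3, 1, 12, 1)
print(out_matrix.shape)     # (3, 1, 12, 12)
```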
    fargs = self.build_c_fun_args( material, state, ap, vg )
this just builds the arguments passed to the C function. Subclasses may use a different function; they just override build_c_fun_args().
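The override pattern might be sketched like this (hypothetical classes, not actual sfepy code):

```python
# Hypothetical classes (not sfepy code) sketching the pattern: __call__
# is inherited unchanged, subclasses only override build_c_fun_args().
class BaseTerm:
    def build_c_fun_args(self, material, state):
        return (material, state)

    def function(self, *fargs):
        return sum(fargs)

    def __call__(self, material, state):
        fargs = self.build_c_fun_args(material, state)
        return self.function(*fargs)

class ScaledTerm(BaseTerm):
    # Only the argument building differs.
    def build_c_fun_args(self, material, state):
        return (2 * material, state)

print(BaseTerm()(3, 4))    # 7
print(ScaledTerm()(3, 4))  # 10
```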
    for out, chunk in self.char_fun( chunk_size, shape ):
        status = self.function( out, *fargs + (chunk, mode) )
        yield out, chunk, status
all the actual computation is done here. The function is implemented in C for speed - see sfepy/terms/extmods/termsLinearElasticity.c, function dw_lin_elastic_iso().
C is not required, though - it is an implementation detail. You can perfectly well prototype your code in Python.
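For instance, a pure-Python stand-in for a term function could follow the same convention as the C functions - fill a preallocated out array in place and return a status code. This is a sketch with made-up names and a toy constitutive relation, not sfepy's implementation:

```python
import numpy as np

# A pure-Python stand-in for a term function, following the same calling
# convention as the C functions: fill the preallocated 'out' array in
# place for the elements in 'chunk' and return a status code. The names
# and the toy constitutive relation are invented for illustration.
def dw_toy_term(out, mat, strain, chunk, mode):
    for ii, iel in enumerate(chunk):
        if mode == 0:            # residual-like mode: material * strain
            out[ii] = mat * strain[iel]
        else:                    # matrix-like mode: just the material
            out[ii] = mat
    return 0                     # 0 = success, as with the C functions

strain = np.array([1.0, 2.0, 3.0])
out = np.zeros(3)
status = dw_toy_term(out, 2.0, strain, [0, 1, 2], mode=0)
print(out, status)   # [2. 4. 6.] 0
```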
I think I understand the first line: material, virtual, state = self.get_args( **kwargs ). It seems to extract the material, including its Lame parameters and all that, along with the state. I am not entirely sure what state means:
ipdb> print state
Variable:u
  kind: unknown
  _order: 0
  name: u
  family: field
  dual_var_name: v
  dpn: 3
  dof_conns: Struct:dof_conns
  indx: slice(0, 1062, None)
  dof_name: u
  dtype: <type 'numpy.float64'>
  field: Field:3_displacement
  step: None
  eq_map: Struct
  flags: set([0, 10])
  i_dof_max: 1062
  key: variable_1
  current_ap: None
  n_dof: 1062
  dofs: ['u.0', 'u.1', 'u.2']
  data: deque([array([ 0., 0., 0., ..., 0., 0., 0.])])
  history: None
Some of those terms make sense. But what are deque and flags? What is the indx used for? Is that to get this node out of the mesh?
indx - if the problem is described by several variables, the indx of each variable points into the global state vector, to the data corresponding to that variable.
data - in time-dependent problems we usually need to remember some previous time steps; they are stored in a deque, which is just a list-like container that you can efficiently prepend to and rotate.
You do not need to care much about this, just use u = state() to get to the actual displacements.
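A minimal standalone illustration of the deque-based history (the maxlen and the states here are made up):

```python
from collections import deque

# Standalone illustration of deque-based time-step history: the newest
# state is prepended, and the oldest falls off the end once maxlen is
# reached.
history = deque(maxlen=3)
for step_state in ([0.0], [1.0], [2.0], [3.0]):
    history.appendleft(step_state)

print(list(history))   # [[3.0], [2.0], [1.0]] - step 0 was dropped
history.rotate(1)      # rotation is cheap, too
print(list(history))   # [[1.0], [3.0], [2.0]]
```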
But it seems like the real action happens when self.function is called. self.function refers to sfepy/sfepy/terms/extmods/terms.py(166)dw_lin_elastic_iso() which just calls a C function. I am scared that implementing a nonlinear material model will require C programming. I haven't done much C lately.
As I said, you can start with just Python; speed can be obtained later.
r.
Thanks for the explanations. Prototyping in Python makes a lot of sense and is far less intimidating.
Ryan
On Fri, Aug 1, 2008 at 4:50 AM, Robert Cimrman cimr...@ntc.zcu.cz wrote:
Ryan Krauss wrote:
Thanks for the explanations. Prototyping in Python makes a lot of sense and is far less intimidating .
I can imagine it. The C part developed from what I had scavenged from my previous FE programs.
Just one more note: Term subclasses need just two methods: __init__() and __call__(). The remaining methods you may see are auxiliary, usually serving to avoid typing when further subclassing.
r.
Cool. Good to know.
On Fri, Aug 1, 2008 at 6:02 AM, Robert Cimrman cimr...@ntc.zcu.cz wrote:
So, I think I am making some progress on trying to understand LinearElasticTerm.__call__, but I have some questions.
First of all, as far as the big picture, it seems like the method (and especially the underlying C function dw_lin_elastic_iso) must implement something like Hooke's law in 3D. But if the __call__ method takes the current state of everything and finds all the element displacements for the current time step, it almost seems like it is doing the entire finite element method. Is either or both of these a good way to think about what the __call__ method does?
Next question:
What are the input and output of __call__? It seems like the input is everything that is known at the current time step. The output contains three things. The primary one is out, which for my problem is a 4D array with shape (177, 1, 12, 1). 177 is the number of elements, so it seems like this is an array that contains 12 parameters per element. Are there 12 d.o.f. for each element? If so, what are they? I guess it makes sense that an element with 4 nodes could have 12 d.o.f., but that would be a redundant way to think about things (obviously the nodes are shared by several elements). So it seems like it must be 12 element strains. The other two outputs seem straightforward: chunk is a list of element #'s (since I have 177 elements and a chunk size of 100000, it is simply all my elements - i.e. range(177)) and status is always 0. I assume this is a flag for when something goes wrong.
Additional questions:
In its current form, my input file (see attached) has only two time steps (t=0 and t=1). The only difference between the two is that at t=0 the traction load is 0 and at t=1 the traction load is 0.1.
It seems like the traction load must be part of the fargs that are passed to dw_lin_elastic_iso in the self.function call, but I don't see it. Maybe this is just a problem with what gets printed using Pdb, but is the traction in fargs somewhere?
With two time steps, why is __call__ called four times? The first two calls result in out containing all 0's; the third and fourth calls produce non-zero out. The out arrays of the third and fourth calls also appear to differ from each other.
I think those are my main questions for now.
Thanks,
Ryan
On Fri, Aug 1, 2008 at 6:07 AM, Ryan Krauss ryan...@gmail.com wrote:
Ryan Krauss wrote:
So, I think I am making some progress on trying to understand LinearElasticTerm.__call__, but I have some questions.
First of all, as far as the big picture, it seems like the method (and especially the underlying C function dw_lin_elastic_iso) must implement something like Hooke's law in 3D. But if the __call__ method takes the current state of everything and finds all the element displacements for the current time step, it almost seems like it is doing the entire finite element method. Is either or both of these a good way to think about what the __call__ method does?
__call__() constructs element contributions to the global "stiffness" matrix. (I used quotes because it is a stiffness matrix only in mechanics; in general, it could be called a Jacobian or tangent matrix.)
Yes, dw_lin_elastic_iso() implements Hooke's law, in 2D and 3D. The job of __call__() is to pass it the correct arguments, following from the current state of everything, as you say.
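For reference, the isotropic Hooke's law itself is short when written with the Lamé parameters; this is a sketch of the constitutive relation only, not the sfepy C implementation:

```python
import numpy as np

# Isotropic Hooke's law with the Lame parameters (a sketch of the
# constitutive relation, not the sfepy C code):
#     sigma = lam * tr(eps) * I + 2 * mu * eps
def hooke_iso(eps, lam, mu):
    dim = eps.shape[0]
    return lam * np.trace(eps) * np.eye(dim) + 2.0 * mu * eps

eps = np.diag([1e-3, 0.0, 0.0])          # a uniaxial strain state
sigma = hooke_iso(eps, lam=1.0, mu=1.0)
print(sigma.diagonal())   # [0.003 0.001 0.001]
```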
Next question:
What are the input and output of __call__? It seems like the input is everything that is known at the current time step. The output contains three things. The primary one is out, which for my problem is a 4D array with shape (177, 1, 12, 1). 177 is the number of elements, so it seems like this is an array that contains 12 parameters per element. Are there 12 d.o.f. for each element? If so, what are they? I guess it makes sense that an element with 4 nodes could have 12 d.o.f., but that would be a redundant way to think about things (obviously the nodes are shared by several elements). So it seems like it must be 12 element strains. The other two outputs seem straightforward: chunk is a list of element #'s (since I have 177 elements and a chunk size of 100000, it is simply all my elements - i.e. range(177)) and status is always 0. I assume this is a flag for when something goes wrong.
Let's say we have a term implementing some form a(v, u), with v the test function and u the unknown.
All terms have the same call signature: def __call__( self, diff_var = None, chunk_size = None, **kwargs )
if diff_var is None, it evaluates the term for the given actual state (a(v,u) ~ A * u)
if diff_var == 'u', it returns a gradient of the form a(v, u) w.r.t. 'u', i.e. A
chunk_size controls the (max.) number of elements processed in one call.
kwargs contains everything else (the state, etc.); the arguments of the term are accessed via self.get_args( **kwargs ).
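The diff_var convention can be mimicked in a few lines (toy code, not the sfepy implementation):

```python
import numpy as np

# Toy version (not the sfepy implementation) of the diff_var convention
# for a form a(v, u) ~ v.T A u: diff_var=None evaluates the residual
# contribution A @ u, diff_var='u' returns the gradient, i.e. A itself.
def eval_term(A, u, diff_var=None):
    if diff_var is None:
        return A @ u
    elif diff_var == 'u':
        return A
    raise ValueError(diff_var)

A = np.array([[2.0, 0.0], [0.0, 3.0]])
u = np.array([1.0, 1.0])
print(eval_term(A, u))         # [2. 3.]
print(eval_term(A, u, 'u'))    # the matrix A
```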
Additional questions:
In its current form, my input file (see attached) has only 2 times steps (t=0 and t=1). The only difference between the two is that at t=0 the traction load is 0 and at t=1 the traction load is 0.1.
I see.
It seems like the traction load must be part of the fargs that are passed to dw_lin_elastic_iso in the self.function call, but I don't see it. Maybe this is just a problem with what gets printed using Pdb, but is the traction in fargs somewhere?
No. The traction load is applied by having dw_surface_ltr.isurf.Top( traction.val, v ) in the equation. Use pdb in termsSurface.py. It is completely independent of dw_lin_elastic_iso.
One term = one integral in the equation.
With two time steps, why is __call__ called four times? The first two calls result in out containing all 0's; the third and fourth calls produce non-zero out. The out arrays of the third and fourth calls also appear to differ from each other.
Let me explain what happens in one time step, if 'problem' is 'nonlinear' in solver_1:
In your case, f( u ) = a( v, u ) + b( v ), with 'a' the linear elasticity operator and 'b' the surface load (tractions).
So for linear problems we have in general three __call__ calls per time step, as the Newton method should converge in one iteration. If it does not converge, you see immediately that your term is wrong.
In your case you see four calls: one for step 0 (in step 0, u = 0 is already the solution, hence there is nothing to solve) and three for step 1.
This is a very general scheme for solving any PDEs, possibly nonlinear, and you should stick with it until you are sure that your matrix (A) and your residual (A*u) evaluations are OK and consistent with each other. Then, for linear problems, you can use 'problem' : 'linear' in solver_1, which pre-assembles the tangent matrix prior to the time-stepping, so that the matrix-assembly step of the Newton loop is a no-operation and __call__ is called only once per time step.
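A rough standalone sketch of such a Newton loop (illustrative only; in sfepy the residual and matrix evaluations go through the term machinery described above):

```python
import numpy as np

# Each iteration evaluates the residual, assembles the tangent matrix,
# and solves for the update. For the linear problem f(u) = A u - b it
# converges in a single iteration, which gives the residual/matrix/
# residual pattern of three evaluations per time step.
def newton(residual, tangent, u, tol=1e-10, max_it=10):
    for it in range(max_it):
        r = residual(u)                  # residual evaluation
        if np.linalg.norm(r) < tol:
            break                        # converged
        A = tangent(u)                   # tangent matrix evaluation
        u = u - np.linalg.solve(A, r)    # Newton update
    return u, it

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
u, it = newton(lambda u: A @ u - b, lambda u: A, np.zeros(2))
print(u, it)   # solves A u = b; convergence detected on iteration 1
```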
I think those are my main questions for now.
We are writing a manual here :)
r.