Re-assemble only certain terms of an equation

Hi Robert,
I am currently facing a new difficulty using Sfepy for solving nonlinear PDEs. I have coded my own Newton iterative method that simply consists in linearizing the PDE F(u) = 0 around the previous iterate u_k. I have two questions:
- At each iteration, I create a Problem instance for the linearized PDE and solve it the usual way in Sfepy. However, only the "linearized-nonlinear" terms need to be recomputed from one iteration to the next. For the sake of computational efficiency, I would rather not reconstruct all matrices / rhs vectors, but only the ones that need to be updated. How would you achieve that? In other words, how can you store some assembled terms while re-assembling some others?
- For some reason, I need to evaluate another function, say G (a residual), at each iteration in the weak form. Again, some terms in G are constant while others are iteration-dependent (through their material). How would you efficiently evaluate G(u_k) at each iteration? Currently, the only way I found is to do
dummy_ts = TimeStepper(0, 100) # fictitious time-stepper
eqs = Equations([my_equation]) # equation associated with function G
eqs.init_state()
ebcs = Conditions(my_ebcs)
epbcs = Conditions(my_epbcs)
eqs.time_update(dummy_ts, ebcs=ebcs, epbcs=epbcs)
res_vec = eqs.eval_residuals(sol) # sol is the solution at current iteration
which I guess is not Sfepy's philosophy… I hope my questions are clear enough. Thank you very much in advance for your help.
Best regards,
Hugo L.

Hi Hugo,
On 27/04/2022 09:29, hugo.levy@gmail.com wrote:
Hi Robert,
I am currently facing a new difficulty using Sfepy for solving nonlinear PDEs. I have coded my own Newton iterative method that simply consists in linearizing the PDE F(u) = 0 around the previous iterate u_k. I have two questions:
- At each iteration, I create a Problem instance for the linearized PDE and solve it the usual way in Sfepy. However, only the "linearized-nonlinear" terms need to be recomputed from one iteration to the next. For the sake of computational efficiency, I would rather not reconstruct all matrices / rhs vectors, but only the ones that need to be updated. How would you achieve that? In other words, how can you store some assembled terms while re-assembling some others?
There is no built-in support for this, but because you implement your own solver, it should be possible to do without too much hassle. I suggest splitting your equations into linear and nonlinear parts and evaluating those separately. There are several ways to do that, e.g. (pb is a Problem instance):
eqs = { 'K' : '...', }
...
pb.time_update()
pb.update_materials()
mtx = pb.mtx_a.copy()
mtx_k = eqs['K'].evaluate(mode='weak', dw_mode='matrix', asm_obj=mtx)
etc. See also assemble_matrices() in [1].
Another way would be to have two problem instances for the linear and nonlinear parts and evaluate those.
[1] https://sfepy.org/doc-devel/examples/linear_elasticity-dispersion_analysis.h...
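To make the splitting more concrete, here is a minimal, untested sketch of how it could be used inside your own Newton loop (the equation keys 'lin' and 'nl' and the stopping test are just placeholders I made up; the evaluate() calls are the same as above):

# Assumed: pb is a fully set-up Problem whose equations were given as a dict
# with two named keys, e.g. {'lin' : '<constant terms> = 0', 'nl' : '<linearized terms> = 0'}.
pb.time_update()
pb.update_materials()
eqs = pb.equations

# The constant part is assembled only once.
mtx_lin = eqs['lin'].evaluate(mode='weak', dw_mode='matrix', asm_obj=pb.mtx_a.copy())

while True: # your own Newton loop
    pb.update_materials() # refresh the iteration-dependent materials
    # Re-assemble only the linearized nonlinear part.
    mtx_nl = eqs['nl'].evaluate(mode='weak', dw_mode='matrix', asm_obj=pb.mtx_a.copy())
    mtx = mtx_lin + mtx_nl # scipy sparse matrices add entry-wise
    # ... solve with mtx, update the iterate, test convergence ...
    break # placeholder for your stopping test

The same pattern should work for the rhs with dw_mode='vector', if I am not mistaken.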
- For some reason, I need to evaluate another function, say G (a residual), at each iteration in the weak form. Again, some terms in G are constant while others are iteration-dependent (through their material). How would you efficiently evaluate G(u_k) at each iteration? Currently, the only way I found is to do
dummy_ts = TimeStepper(0, 100) # fictitious time-stepper
eqs = Equations([my_equation]) # equation associated with function G
eqs.init_state()
ebcs = Conditions(my_ebcs)
epbcs = Conditions(my_epbcs)
eqs.time_update(dummy_ts, ebcs=ebcs, epbcs=epbcs)
res_vec = eqs.eval_residuals(sol) # sol is the solution at current iteration
which I guess is not Sfepy's philosophy…
It looks OK. Note that you probably do not need to use the dummy_ts. Maybe you could use Problem.evaluate(), or the Problem.create_evaluable() / eval_equations() combo?
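For illustration, a rough (untested) sketch of the create_evaluable() / eval_equations() combo - the expression string expr_G, the variable name 'u' and the use of set_data() to inject the current iterate are assumptions on my side:

from sfepy.discrete.evaluate import eval_equations

# Built once, outside the Newton loop.
g_eqs, g_vars = pb.create_evaluable(expr_G, mode='weak')

# Inside the Newton loop, with sol the current (full) DOF vector:
g_vars['u'].set_data(sol) # assumed way of setting the unknown variable state
res_vec = eval_equations(g_eqs, g_vars, mode='weak', dw_mode='vector')

If the boundary conditions matter for G, create_evaluable() also accepts ebcs/epbcs arguments, if I recall its signature correctly.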
I hope my questions are clear enough. Thank you very much in advance for your help.
As usual, send a self-contained small-size example if the above does not help or you get stuck.
Cheers, r.

Hi Robert,
Thank you for your swift reply, very useful as always!
I have dug into the code (Problem.solve, the Newton class from solvers/nls.py) in order to see what's happening with the residual vector and the tangent matrix. In particular, since I only solve linear systems at each iteration of my own Newton algorithm, I could bypass the whole Sfepy Newton call and keep only the line
vec_dx = lin_solver(vec_r, x0=vec_x, eps_a=eps_a, eps_r=eps_r, mtx=mtx_a, status=ls_status)
where I would have pre-computed vec_r and mtx_a the way you proposed.
There are a few catches, though (at least for me):
- I am using LCBCs. As you pointed out in [1], LCBCs are not compatible with setting 'active_only' to False at the creation of the Problem instance. This means the matrices & vectors I assemble "by hand" will be reduced in size (because of the constrained DOFs), and so will be my solution vector. I will then recover the full solution with
variables.set_state(vec_dx, pb.active_only)
- LCBCs modify the residual vector as well as the tangent matrix (class Evaluator, methods eval_residual and eval_tangent_matrix), which would not be accounted for if I do
mtx = pb.mtx_a.copy()
mtx_k = eqs['K'].evaluate(mode='weak', dw_mode='matrix', asm_obj=mtx)
So I believe creating two Problem instances (each with the same BCs) as you suggested is better as I wouldn't have to worry about it.
- When pb.is_linear() is True, there is a presolve step which computes an LU matrix factorization, from what I understand. Does this mean I have to call ls.presolve(mtx) before calling the lin_solver() function? Just like in [2], for instance.
So far, the code could look something like this
...
pb_constant = Problem('pb_constant', equations=eqs_constant, active_only=True)
pb_constant.set_bcs(ebcs=my_ebcs, epbcs=my_epbcs, lcbcs=my_lcbcs)

# Compute matrix & vector of linear terms
pb_constant.set_solver(nls)
tss = pb_constant.get_solver()
pb_constant.equations.set_data(None, ignore_unknown=True)
variables = pb_constant.get_initial_state(vec=None)
pb_constant.time_update(tss.ts)
variables.apply_ebc(force_values=None)
pb_constant.update_materials()
ev_constant = pb_constant.get_evaluator()
mtx_constant = ev_constant.eval_tangent_matrix(variables(), is_full=True)
rhs_constant = ev_constant.eval_residual(variables(), is_full=True)

pb_update = Problem('pb_update', equations=eqs_update, active_only=True)
pb_update.set_bcs(ebcs=my_ebcs, epbcs=my_epbcs, lcbcs=my_lcbcs)

while 1: # my Newton iteration loop
    # Compute matrix & vector of nonlinear terms
    pb_update.set_solver(nls)
    tss = pb_update.get_solver()
    pb_update.equations.set_data(None, ignore_unknown=True)
    variables = pb_update.get_initial_state(vec=None)
    pb_update.time_update(tss.ts)
    variables.apply_ebc(force_values=None)
    pb_update.update_materials()
    ev_update = pb_update.get_evaluator()
    mtx_update = ev_update.eval_tangent_matrix(variables(), is_full=True)
    rhs_update = ev_update.eval_residual(variables(), is_full=True)

    # Construct the full matrix / rhs
    mtx = mtx_update + mtx_constant
    rhs = rhs_update + rhs_constant

    # Solve
    ls = nls.lin_solver
    ls.presolve(mtx)
    vec_x = ls(rhs, x0=None, eps_a=eps_a, eps_r=eps_r, mtx=mtx, status={})
    sol = variables.set_state(vec_x, pb_update.active_only) # This might not be the correct syntax with the last release!

    if stop_condition: break
...
...
I do not quite get what the is_full=True option does exactly. Should it be set to True when active_only is True?
I would be very glad if you could give me some quick feedback on the code / remarks. If you think this looks fine (i.e. the computational cost is actually reduced by proceeding in this manner), I will implement this solution for both of my initial questions.
Thanks again,
Hugo
[1] https://github.com/sfepy/sfepy/issues/785
[2] https://sfepy.org/doc-devel/examples/diffusion-time_poisson_interactive.html

Hi Hugo,
On 29/04/2022 15:57, Hugo Lévy wrote:
Hi Robert,
Thank you for your swift reply, very useful as always!
I have dug into the code (Problem.solve, the Newton class from solvers/nls.py) in order to see what's happening with the residual vector and the tangent matrix. In particular, since I only solve linear systems at each iteration of my own Newton algorithm, I could bypass the whole Sfepy Newton call and keep only the line
vec_dx = lin_solver(vec_r, x0=vec_x, eps_a=eps_a, eps_r=eps_r, mtx=mtx_a, status=ls_status)
where I would have pre-computed vec_r and mtx_a the way you proposed.
Yes.
There are a few catches, though (at least for me):
- I am using LCBCs. As you pointed out in [1], LCBCs are not compatible with setting 'active_only' to False at the creation of the Problem instance. This means the matrices & vectors I assemble "by hand" will be reduced in size (because of the constrained DOFs), and so will be my solution vector. I will then recover the full solution with
variables.set_state(vec_dx, pb.active_only)
In general, I am not too happy about LCBCs, as they work only for the nodal (Lagrange) basis. Remind me, what kind of constraints do you use? Enforcing those by some penalty or Lagrange multipliers might be a more future-proof way to go.
- LCBCs modify the residual vector as well as the tangent matrix (class Evaluator, methods eval_residual and eval_tangent_matrix), which would not be accounted for if I do
mtx = pb.mtx_a.copy()
mtx_k = eqs['K'].evaluate(mode='weak', dw_mode='matrix', asm_obj=mtx)
So I believe creating two Problem instances (each with the same BCs) as you suggested is better as I wouldn't have to worry about it.
- When pb.is_linear() is True, there is a presolve step which computes an LU matrix factorization, from what I understand. Does this mean I have to call ls.presolve(mtx) before calling the lin_solver() function? Just like in [2], for instance.
Yes, but it is not your case - you solve a different linear system in each iteration, right?
So far, the code could look something like this
...
pb_constant = Problem('pb_constant', equations=eqs_constant, active_only=True)
pb_constant.set_bcs(ebcs=my_ebcs, epbcs=my_epbcs, lcbcs=my_lcbcs)

# Compute matrix & vector of linear terms
pb_constant.set_solver(nls)
tss = pb_constant.get_solver()
pb_constant.equations.set_data(None, ignore_unknown=True)
variables = pb_constant.get_initial_state(vec=None)
pb_constant.time_update(tss.ts)
variables.apply_ebc(force_values=None)
pb_constant.update_materials()
ev_constant = pb_constant.get_evaluator()
mtx_constant = ev_constant.eval_tangent_matrix(variables(), is_full=True)
rhs_constant = ev_constant.eval_residual(variables(), is_full=True)

pb_update = Problem('pb_update', equations=eqs_update, active_only=True)
pb_update.set_bcs(ebcs=my_ebcs, epbcs=my_epbcs, lcbcs=my_lcbcs)

while 1: # my Newton iteration loop
    # Compute matrix & vector of nonlinear terms
    pb_update.set_solver(nls)
    tss = pb_update.get_solver()
    pb_update.equations.set_data(None, ignore_unknown=True)
    variables = pb_update.get_initial_state(vec=None)
    pb_update.time_update(tss.ts)
    variables.apply_ebc(force_values=None)
    pb_update.update_materials()
    ev_update = pb_update.get_evaluator()
    mtx_update = ev_update.eval_tangent_matrix(variables(), is_full=True)
    rhs_update = ev_update.eval_residual(variables(), is_full=True)

    # Construct the full matrix / rhs
    mtx = mtx_update + mtx_constant
    rhs = rhs_update + rhs_constant

    # Solve
    ls = nls.lin_solver
    ls.presolve(mtx)
    vec_x = ls(rhs, x0=None, eps_a=eps_a, eps_r=eps_r, mtx=mtx, status={})
    sol = variables.set_state(vec_x, pb_update.active_only) # This might not be the correct syntax with the last release!

    if stop_condition: break
...
...
I do not quite get what the is_full=True option does exactly. Should it be set to True when active_only is True?
The is_full just says whether the passed-in vector is full or reduced - in the latter case the evaluator function calls make_full_vec(). So yes, it interplays with active_only - when active_only is True, is_full in the Newton solver is False, as it works with the reduced linear system.
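In other words, a tiny (untested) sketch reusing the names from your code above - how exactly the LCBC DOFs are expanded is something to double-check:

# With active_only=True, the linear solver sees the reduced system:
vec_x = ls(rhs, mtx=mtx, status={}) # reduced solution vector
vec_full = variables.make_full_vec(vec_x) # expand back to the full DOF vector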
I would be very glad if you could give me some quick feedback on the code / remarks. If you think this looks fine (i.e. the computational cost is actually reduced by proceeding in this manner), I will implement this solution for both of my initial questions.
It looks OK at a glance.
r.
Thanks again,
Hugo
[1] https://github.com/sfepy/sfepy/issues/785
[2] https://sfepy.org/doc-devel/examples/diffusion-time_poisson_interactive.html

Hi Robert, Thank you for taking the time to review these bits of code.
In general, I am not too happy about LCBCs, as they work only for the nodal (Lagrange) basis. Remind me, what kind of constraints do you use? Enforcing those by some penalty or Lagrange multipliers might be a more future-proof way to go.
Basically I have two domains with a different PDE defined on each, and I want the solution to match at an interface shared by the two domains (a bit like [1]). To do so, I use something like
def zero_shift(ts, coors, region):
    return np.zeros_like(coors[:, 0], dtype=np.float64)

match_coors = Function('match_coors', per.match_coors)
shift_fun = Function('shift_fun', zero_shift)
lcbcs = Conditions([LinearCombinationBC('lc12', [gamma1, gamma2],
                                        {'u2.all' : 'u1.all'}, match_coors,
                                        'shifted_periodic',
                                        arguments=(shift_fun,))])
I tend to find it quite convenient as Sfepy automatically builds the full stiffness matrix, with DOFs from both domains. Could this be easily achieved with Lagrange multipliers?
Yes, but it is not your case - you solve a different linear system in each iteration, right?
Yes indeed, so no need to pre-compute the factorization beforehand.
I have an additional question. Say I have assembled both the matrix and the rhs (in the reduced form, just as detailed in the previous messages) and I want to modify the essential boundary conditions of my problem. According to [2], the stiffness matrix stays the same while the rhs vector is modified. Is there a way to update the rhs vector without having to make a full ev.eval_residual() call?
Either way, you have answered my initial questions and my implementation seems to be working.
Cheers!
Hugo
[1] https://github.com/sfepy/sfepy/issues/689
[2] https://sfepy.org/doc-devel/ebcs_implementation.html?

Hi Hugo,
On 11/05/2022 09:48, Hugo Lévy wrote:
Hi Robert, Thank you for taking the time to review these bits of code.
In general, I am not too happy about LCBCs, as they work only for the nodal (Lagrange) basis. Remind me, what kind of constraints do you use? Enforcing those by some penalty or Lagrange multipliers might be a more future-proof way to go.
Basically I have two domains with a different PDE defined on each, and I want the solution to match at an interface shared by the two domains (a bit like [1]). To do so, I use something like
def zero_shift(ts, coors, region):
    return np.zeros_like(coors[:, 0], dtype=np.float64)

match_coors = Function('match_coors', per.match_coors)
shift_fun = Function('shift_fun', zero_shift)
lcbcs = Conditions([LinearCombinationBC('lc12', [gamma1, gamma2],
                                        {'u2.all' : 'u1.all'}, match_coors,
                                        'shifted_periodic',
                                        arguments=(shift_fun,))])
I tend to find it quite convenient as Sfepy automatically builds the full stiffness matrix, with DOFs from both domains. Could this be easily achieved with Lagrange multipliers?
Compare the examples [1,2]. It is a different problem, but it illustrates the approach. The drawback of the multipliers is the additional field. A penalty could be used to avoid this, but then you have to choose the penalty parameter, see [3].
[1] https://sfepy.org/doc-devel/examples/multi_physics-biot_npbc.html
[2] https://sfepy.org/doc-devel/examples/multi_physics-biot_npbc_lagrange.html
[3] https://sfepy.org/doc-devel/examples/navier_stokes-stokes_slip_bc.html
Yes, but it is not your case - you solve a different linear system in each iteration, right?
Yes indeed, so no need to pre-compute the factorization beforehand.
I have an additional question. Say I have assembled both the matrix and the rhs (in the reduced form, just as detailed in the previous messages) and I want to modify the essential boundary conditions of my problem. According to [2], the stiffness matrix stays the same while the rhs vector is modified. Is there a way to update the rhs vector without having to make a full ev.eval_residual() call?
Not really. You could work around it, but I am not sure it is worth the hassle:
- You could evaluate only the terms affected by the change of the conditions.
- You could evaluate only in the cells neighboring the boundary where the conditions are applied, and update the residual at the appropriate places.
Either way, you have answered my initial questions and my implementation seems to be working.
Hth, r.
Cheers!
Hugo
[1] https://github.com/sfepy/sfepy/issues/689
[2] https://sfepy.org/doc-devel/ebcs_implementation.html?