Hi,
I am back. The conference was very nice, and several things are clear to me now:
- sphinx was a good move (thanks, Logan!)
- cython is the way to go for speed
- writing a GUI with traits and mayavi should not be that difficult for a dedicated person (one guy did a very cool GUI for some brain imaging in six months as his master's thesis at EPFL, without prior knowledge of Python...)
Check out the conference site [0], all the slides should appear there sooner or later.
Also, a very interesting collection of tutorials was assembled, see [1] (or [2], which adds my scipy.sparse tutorial, to be merged into [1]).
r.
[0] http://www.euroscipy.org/conference/euroscipy2010
[1] http://github.com/emmanuelle/Euroscipy-intro-tutorials
[2] http://github.com/rc/Euroscipy-intro-tutorials
Hi Robert,
Welcome back!
On Mon, Jul 12, 2010 at 9:48 AM, Robert Cimrman <cimr...@ntc.zcu.cz> wrote:
- sphinx was a good move (thanks, Logan!)
No problem! Sphinx is proving to be useful to me as well, so I'm glad I learned it.
Check out the conference site [0], all the slides should appear there sooner or later.
Also, a very interesting collection of tutorials was assembled, see [1], (or [2] with my scipy.sparse tutorial, to be merged to [1]).
Interesting, will be checking these out, thanks for the links!
Logan
On Thu, Jul 15, 2010 at 4:36 PM, Logan Sorenson <logan.s...@gmail.com> wrote:
- sphinx was a good move (thanks, Logan!)
No problem! Sphinx is proving to be useful to me as well, so I'm glad I learned it.
Oh, definitely. I even wrote a book in sphinx:
http://ondrejcertik.blogspot.com/2010/07/theoretical-physics-reference-book....
- cython is the way to go for speed
That's clear to me as well.
One thing that is not so clear to me, though, is how to efficiently handle C arrays in Cython; I'll ask on the cython list. Essentially, one can use "double *", which is fast, but then it's a bit tedious to convert to/from numpy, and one also has to handle memory allocation.
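To illustrate the conversion tedium from plain Python (a hypothetical sketch, not code from sfepy or the cython list; all names here are illustrative), ctypes can show the same round trip between a raw "double *" buffer and a numpy array that Cython code has to manage by hand:

```python
# Hypothetical sketch: wrapping a C "double *" buffer as a NumPy array
# via ctypes, to illustrate the to/from-numpy conversion and the manual
# memory handling mentioned above.
import ctypes
import numpy as np

n = 5
# Allocate a raw C array of doubles (what "double *" would point to).
buf = (ctypes.c_double * n)()
for i in range(n):
    buf[i] = float(i)

# Zero-copy view of the C buffer as a NumPy array.
arr = np.ctypeslib.as_array(buf)
print(arr)  # [0. 1. 2. 3. 4.]

# Going the other way: get a "double *" from an existing NumPy array.
a = np.linspace(0.0, 1.0, n)
ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
print(ptr[n - 1])  # 1.0
```

In Cython proper, typed buffers or pointers replace this machinery, but the ownership and allocation questions stay the same.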
What I want to say is that one can just use C++ (say, for finite elements) and it's quite fast. It requires quite a bit of Cython knowledge to get code that is equally fast.
Ondrej
On 08/05/10 07:54, Ondrej Certik wrote:
Oh, definitely. I even wrote a book in sphinx:
http://ondrejcertik.blogspot.com/2010/07/theoretical-physics-reference-book....
Yes, I have seen your blog post - it's pretty cool what can be achieved nowadays thanks to the internet.
One thing that is not so clear to me, though, is how to efficiently handle C arrays in Cython; I'll ask on the cython list. Essentially, one can use "double *", which is fast, but then it's a bit tedious to convert to/from numpy, and one also has to handle memory allocation.
The cython support for numpy arrays is not enough?
What I want to say is that one can just use C++ (say, for finite elements) and it's quite fast. It requires quite a bit of Cython knowledge to get code that is equally fast.
We will have to rewrite all the C anyway sooner or later, as the low-level assembling will have to change to allow for adaptivity (or using hermes), so I am considering which language/tools we should use. I am thinking about using just Cython - first in the pure Python mode to get things done, and then gradually optimizing the bottlenecks for speed.
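A minimal sketch of that strategy (all names invented here, not sfepy's actual API): write the low-level loop in plain Python/numpy first, so it runs as-is, and the same function can later be compiled with Cython once profiling shows it to be a bottleneck.

```python
# Hypothetical sketch of the "pure Python first, optimize later" plan:
# a low-level assembly loop written in plain Python + numpy. The inner
# loops are exactly what Cython would later compile away.
import numpy as np

def assemble_mass_1d(nodes):
    """Assemble a 1D P1 mass matrix on a sorted array of node coordinates."""
    n = len(nodes)
    M = np.zeros((n, n))
    for e in range(n - 1):           # loop over elements (intervals)
        h = nodes[e + 1] - nodes[e]  # element length
        # Local 2x2 mass matrix for linear elements: (h/6) * [[2,1],[1,2]].
        Mloc = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        for i in range(2):
            for j in range(2):
                M[e + i, e + j] += Mloc[i, j]
    return M

M = assemble_mass_1d(np.linspace(0.0, 1.0, 5))
# Row sums of a mass matrix integrate the basis functions, so the total
# sum equals the domain length.
print(M.sum())  # ~1.0
```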
r.
On Thu, Aug 5, 2010 at 2:17 AM, Robert Cimrman <cimr...@ntc.zcu.cz> wrote:
The cython support for numpy arrays is not enough?
It's not. See my "can cython replace C++ for scientific programs" thread in cython-users, where I put real life examples.
We will have to rewrite all the C anyway sooner or later, as the low-level assembling will have to change to allow for adaptivity (or using hermes), so I am considering which language/tools we should use. I am thinking about using just Cython - first in the pure Python mode to get things done, and then gradually optimizing the bottlenecks for speed.
I would do the same. I'll help you with optimizing it.
Try to make it so that you can do "hp" assembly (different "p" at different elements). That should not be difficult. Then you can automatically use spectral elements, for example, and so on. Well, there will be an issue with hanging nodes --- will you support such refinement in sfepy?
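As a rough illustration of the bookkeeping this needs (hypothetical names, not sfepy or hermes code): store a polynomial order per element and derive the quadrature rule per element from it.

```python
# Hypothetical "hp" bookkeeping sketch: a per-element polynomial order,
# with the quadrature rule chosen per element accordingly.
import numpy as np

elements = [(0.0, 0.5), (0.5, 1.0)]   # two 1D elements
orders = [1, 3]                        # a different "p" on each element

def integrate(f):
    """Integrate f over all elements with per-element quadrature order."""
    total = 0.0
    for (a, b), p in zip(elements, orders):
        # Gauss-Legendre with p+1 points is exact up to degree 2p+1.
        x, w = np.polynomial.legendre.leggauss(p + 1)
        # Map the reference interval [-1, 1] to [a, b].
        xm = 0.5 * (b - a) * x + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.dot(w, f(xm))
    return total

print(integrate(lambda x: x**2))  # 1/3 up to rounding
```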
I was thinking about this a lot lately, since I learned a lot about hp-FEM this summer. I think that one can achieve reasonable results by doing some simplifications; for example, for the Schroedinger equation I don't need curvilinear elements (i.e., I have constant Jacobians) -> a huge simplification/speedup in assembly. I will eventually need some refinement (hanging nodes?), but if the solution is piecewise smooth, then spectral elements seem to be very efficient. All you need is a good mesh to start from, and then just increase "p" uniformly. It easily beats hp-FEM unless you do hp-FEM very right; then hp-FEM wins, but it's a lot of work. Also, hp-FEM doesn't need any knowledge about the problem/mesh.
So my experience so far is that it's hard to beat hp-FEM in general, but in many practical applications it's possible to get close if you use some information about the problem.
So if sfepy allowed spectral elements as I wrote above, and were simpler (faster) than h3d is currently, that'd be cool. In h3d, we'll have to optimize it for constant Jacobians and so on; it will still be some work.
Ondrej
On Thu, 5 Aug 2010, Ondrej Certik wrote:
The cython support for numpy arrays is not enough?
It's not. See my "can cython replace C++ for scientific programs" thread in cython-users, where I put real life examples.
Very interesting thread, thanks for the link (and for raising the issue). But the conclusions seem rather optimistic to me.
I would do the same. I'll help you with optimizing it.
That would be great (see below...)
Try to make it so that you can do "hp" assembly (different "p" at different elements). That should not be difficult. Then you can automatically use spectral elements, for example, and so on. Well, there will be an issue with hanging nodes --- will you support such refinement in sfepy?
Well, it took the hpfem group several years to figure out adaptivity with hanging nodes correctly, so my initial aims are lower. I would like support for non-nodal elements (hierarchical bases), so first I have to get rid of all the places where I directly use the DOF coefficients in place of function values. I would also like to support different (automatic) quadrature orders in each element, different polynomial orders (at a level where I need not care about hanging nodes - I would leave those to hermes), and so on.
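A toy illustration of the non-nodal point (an invented 1D basis, not sfepy code): with a hierarchical basis, the DOF coefficients are no longer point values, so a field must always be evaluated as a sum over basis functions.

```python
# Hypothetical sketch: a hierarchical 1D basis on [-1, 1] with two
# vertex functions plus a bubble. The bubble coefficient is NOT the
# value of u anywhere -- evaluation must go through the basis.
import numpy as np

def hier_basis(x):
    """Hierarchical basis values at x: two vertex functions + a bubble."""
    return np.array([
        0.5 * (1.0 - x),   # vertex function, equals 1 at x = -1
        0.5 * (1.0 + x),   # vertex function, equals 1 at x = +1
        x**2 - 1.0,        # quadratic bubble, vanishes at both vertices
    ])

coefs = np.array([2.0, 4.0, 0.5])  # DOF coefficients

# u(0) = 2*0.5 + 4*0.5 + 0.5*(-1) -- not coefs[2], not a nodal value.
u_mid = coefs @ hier_basis(0.0)
print(u_mid)  # 2.5
```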
It's a rather complex topic, and some parts of the big picture are still rather foggy for me. Frankly, I am not sure when and how exactly this will happen.
Yes, uniform p refinement is the first thing to do... We would need the basis functions done in Cython.
So if sfepy allowed spectral elements as I wrote above, and were simple (faster) than h3d currently, that'd be cool. In h3d, we'll have to optimize it for constant jacobians and so on, it will still be some work.
I am not sure we can beat h3d in speed, but even just having it working is valuable. Nor does sfepy make use of constant Jacobians on simplex elements...
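For reference, the constant-Jacobian property on affine simplices that makes this optimization possible (a generic sketch, not h3d or sfepy code): on a straight-sided simplex the mapping Jacobian is one matrix per element, computed once instead of per quadrature point.

```python
# Hypothetical sketch: constant Jacobian of the affine map from the
# reference triangle to a straight-sided triangle. One J per element.
import numpy as np

tri = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])  # vertex coords

# Affine map x = v0 + J @ xi, with J built from the edge vectors;
# J is constant over the element.
J = np.column_stack([tri[1] - tri[0], tri[2] - tri[0]])
detJ = np.linalg.det(J)

area = 0.5 * abs(detJ)  # |det J| / 2 is the triangle area
print(area)  # 1.0
```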
r.
On Fri, Aug 6, 2010 at 3:30 AM, Robert Cimrman <cimr...@ntc.zcu.cz> wrote:
It's a rather complex topic, and some parts of the big picture are still rather foggy for me. Frankly, I am not sure when and how exactly this will happen.
Remember, the most difficult part is to do the first step.
I am not sure we can beat h3d in speed, but even just having it working is valuable. Nor does sfepy make use of constant Jacobians on simplex elements...
I mean on quads, as that's what I need for the Schroedinger equation.
Ondrej
On 08/06/10 18:30, Ondrej Certik wrote:
Remember, the most difficult part is to do the first step.
True. We have to revisit the sfepy build system first - it uses numpy distutils or makefiles.
I mean on quads, as that's what I need for the Schroedinger equation.
OK. Still, we do not have any special optimizations...
r.
participants (3)
- Logan Sorenson
- Ondrej Certik
- Robert Cimrman