Segfault with large cdef'd list
I'm getting a weird segfault from a tiny function (SSCCE) using Cython with Python 2.7. I'm seeing something similar with Cython and Python 3.5, though I did not create an SSCCE for 3.5. This same code used to work with slightly older versions of Cython and Python, and a slightly older version of Linux Mint.

The code is at http://stromberg.dnsalias.org/svn/why-is-python-slow/trunk (more specifically at http://stromberg.dnsalias.org/svn/why-is-python-slow/trunk/tst.pyx ).

In short, cdef'ing an array of doubles with about a million elements, and using only the 0th element once, segfaults - but cdef'ing a slightly smaller array does not segfault under otherwise identical conditions.

Any suggestions? Does Cython have a limit on the max size of a stack frame?

Thanks! I quite like Cython.
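For reference, a minimal sketch of the kind of function being described; the function name and exact array size here are assumptions, not the actual contents of the tst.pyx linked above:

    # Hypothetical reconstruction of the failing pattern. One million
    # doubles is ~8 MB, which is right at the common default stack limit
    # on Linux (ulimit -s = 8192 KB), so merely entering the function
    # can overflow the stack and segfault before any element is touched.
    def use_big_array():
        cdef double arr[1000000]  # allocated on the C stack
        arr[0] = 1.0
        return arr[0]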
Cython itself doesn't impose any limits, but it does inherit whatever limit exists in the C compiler and runtime. The variance may be due to whatever else happens to be placed on the stack.

On Sat, Jan 6, 2018 at 10:57 PM, Dan Stromberg <drsalists@gmail.com> wrote:
> In short, cdef'ing an array of doubles with about a million elements, and using only the 0th element once, segfaults - but cdef'ing a slightly smaller array does not segfault under otherwise identical conditions.
> Any suggestions? Does Cython have a limit on the max size of a stack frame?
Robert Bradshaw wrote on 07.01.2018 at 09:48:
> Cython itself doesn't impose any limits, but it does inherit whatever limit exists in the C compiler and runtime. The variance may be due to whatever else happens to be placed on the stack.
Let me add that I wouldn't consider it a good idea to allocate large chunks of memory on the stack. If it's meant to hold substantial amounts of data (which also suggests that there is a substantial amount of processing and/or copying involved), it's probably also worth a [PyMem_]malloc() call. Heap allocation allows you to respond to allocation failures with a MemoryError rather than a crash, as you get now. How much stack space you have left is user-controlled through call depth and recursion, which makes large stack allocations a somewhat easy target for overflows.

Stefan
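A minimal sketch of the heap-allocating alternative suggested here, using PyMem_Malloc (the function name and size are illustrative, not taken from tst.pyx):

    from cpython.mem cimport PyMem_Malloc, PyMem_Free

    def use_big_array():
        # Allocate the ~8 MB buffer on the heap. PyMem_Malloc returns
        # NULL on failure, which we turn into a MemoryError instead of
        # a segfault.
        cdef double *arr = <double *> PyMem_Malloc(1000000 * sizeof(double))
        if arr == NULL:
            raise MemoryError()
        try:
            arr[0] = 1.0
            return arr[0]
        finally:
            PyMem_Free(arr)

The try/finally guarantees the buffer is released even if the processing in between raises.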
On Sun, Jan 7, 2018 at 2:18 AM, Stefan Behnel <stefan_ml@behnel.de> wrote:
> Heap allocation allows you to respond to allocation failures with a MemoryError rather than a crash, as you get now.
Thanks - it's working now with malloc() and free(). Code at: http://stromberg.dnsalias.org/svn/why-is-python-slow/trunk/cython3_types_t.p... It turns out the 2.x and 3.x versions are identical. :)
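The libc variant described here would look roughly like this (a sketch under the same assumptions as above, not the contents of the linked file):

    from libc.stdlib cimport malloc, free

    def use_big_array():
        cdef double *arr = <double *> malloc(1000000 * sizeof(double))
        if arr == NULL:  # plain malloc() also signals failure with NULL
            raise MemoryError()
        try:
            arr[0] = 1.0
            return arr[0]
        finally:
            free(arr)

The main practical difference is that PyMem_Malloc goes through CPython's allocator (so tools like tracemalloc can track it, and it must be called with the GIL held), while libc's malloc()/free() is independent of the interpreter; either one avoids the stack-size limit.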
participants (3):
- Dan Stromberg
- Robert Bradshaw
- Stefan Behnel