[Python-bugs-list] [ python-Bugs-556025 ] list(xrange(1e9)) --> seg fault

noreply@sourceforge.net
Wed, 07 Aug 2002 13:11:08 -0700


Bugs item #556025 was opened at 2002-05-14 12:41
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=556025&group_id=5470

Category: Python Interpreter Core
Group: None
Status: Open
Resolution: Accepted
Priority: 5
Submitted By: Raymond Hettinger (rhettinger)
>Assigned to: Jason Tishler (jlt63)
Summary: list(xrange(1e9))  -->  seg fault

Initial Comment:
From c.lang.py:

'''
 Python 2.2.1 (#2, Apr 21 2002, 22:22:55) 
[GCC 2.96 20000731 (Mandrake Linux 8.1 2.96-0.62mdk)] 
on linux2
Type "help", "copyright", "credits" or "license" for 
more information.
>>> list(xrange(1e9))
Segmentation fault
'''

I've reproduced the same fault on Windows ME using 
Py2.2.0 and Py2.3a.

----------------------------------------------------------------------

>Comment By: Neal Norwitz (nnorwitz)
Date: 2002-08-07 16:11

Message:
Logged In: YES 
user_id=33168

Actually, I think Jason may be more appropriate, since this
is a cygwin problem.  Jason, can you test/replicate this?

----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2002-08-07 14:20

Message:
Logged In: YES 
user_id=33168

Hmmm, Tim, can you reproduce this?  I luckily don't have a
Windows box. :-)

----------------------------------------------------------------------

Comment By: Steve Holden (holdenweb)
Date: 2002-08-07 13:28

Message:
Logged In: YES 
user_id=88157

I hope re-opening this is the right thing to do (I'm new here).

Current CVS fails under Win2000+Cygwin with a 
segmentation fault in the updated test_b1.py. Easily 
reproduced:

$ ./python.exe
Python 2.3a0 (#1, Aug  7 2002, 12:18:38)
[GCC 2.95.3-5 (cygwin special)] on cygwin
Type "help", "copyright", "credits" or "license" for more 
information.
>>> import sys
>>> list(xrange(sys.maxint/4))
Segmentation fault (core dumped)

This does seem to be size-related, as:

>>> s = sys.maxint/8
>>> list(xrange(s))
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
MemoryError

which is as expected in test_b1.py.
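
A minimal C sketch of the arithmetic behind that difference
(figures assumed for a 32-bit build; not taken from the thread):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures: sys.maxint == 0x7FFFFFFF and 4-byte
           list items. */

        /* maxint/4 == 0x1FFFFFFF items; roundupsize pushes that to
           at least 0x20000000, and 0x20000000 * 4 == 0x80000000
           bytes, one past the largest positive signed 32-bit value,
           so a signed byte count wraps negative: hence the crash. */
        printf("maxint/4 -> %#lx bytes\n", 0x20000000UL * 4);

        /* maxint/8 == 0x0FFFFFFF items, roughly 0x40000000 bytes
           (1 GB): representable, so the allocation merely fails and
           Python raises a clean MemoryError. */
        printf("maxint/8 -> %#lx bytes\n", 0x10000000UL * 4);
        return 0;
    }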


----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2002-05-22 19:20

Message:
Logged In: YES 
user_id=33168

Checked in as:
  listobject.c 2.106
  test_b1.py 1.46

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger)
Date: 2002-05-21 08:59

Message:
Logged In: YES 
user_id=80475

Good plan!  Thx for squashing this bug. 

----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2002-05-21 08:52

Message:
Logged In: YES 
user_id=33168

How about using sys.maxint / 4?  Does that make more sense
than 1e9?  The assumption it rests on, that the data and
address sizes are the same, is a little better.  I can add a
comment to this effect.

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger)
Date: 2002-05-20 23:37

Message:
Logged In: YES 
user_id=80475

When you load the fix, please commit the regression test 
patch also.  Thx,  Raymond

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2002-05-20 22:51

Message:
Logged In: YES 
user_id=31435

Yup, lookin' better.  Python does assume 8-bit bytes in 
several places, and also 2's-complement integers.  Since 
size_t is guaranteed (by C) to be an unsigned type, the 
largest value of type size_t is more easily expressed as

(~(size_t)0)

The C part of the patch looks fine then.  The test is a 
little dubious:  who says the machine can't create a 
billion-integer list?  The idea that 1e9 necessarily 
overflows in this context is a 32-bit address-space 
assumption.  But I'm willing to delay fixing that until a 
machine with a usable larger address space appears <wink>.

So marked Accepted and assigned to you for checkin.  Thanks!
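
For concreteness, a minimal guard built on that expression (a
sketch under my own naming, not the patch itself):

    #include <stddef.h>
    #include <stdio.h>

    /* n items of the given type fit in a size_t byte count exactly
       when n <= (largest size_t) / sizeof(type); since size_t is
       unsigned, the largest value is (~(size_t)0), as above. */
    #define FITS_IN_SIZE_T(n, type) \
        ((size_t)(n) <= (~(size_t)0) / sizeof(type))

    int main(void)
    {
        /* 0x40000000 ints overflow a 32-bit size_t (prints 0) but
           fit easily with a 64-bit size_t (prints 1). */
        printf("%d\n", FITS_IN_SIZE_T(0x40000000UL, int));
        return 0;
    }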

----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2002-05-20 11:43

Message:
Logged In: YES 
user_id=33168

Ok, there were other problems, too:
  * Need to divide by the size of the type, 
    not >> 4 which was completely broken.
  * There was a missing PyErr_NoMemory().

I uploaded a new patch.

I'm not sure the size_t fix is correct.
I hope we can at least assume 8-bit machines :-)

if (_new_size <= ((1 << (sizeof(size_t)*8 - 1)) / sizeof(type)))
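
One caveat about that expression (my observation, not from the
thread): with 32-bit int and 32-bit size_t, 1 << (sizeof(size_t)*8 - 1)
is 1 << 31, which overflows signed int, and even where it is
well-defined it yields only half the true size_t maximum.  A small
sketch of the difference:

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        /* (size_t)1 << (bits - 1) is 2^(bits-1), half of the true
           maximum; the unsigned cast also avoids the signed overflow
           of a bare 1 << 31 on 32-bit platforms. */
        size_t half = (size_t)1 << (sizeof(size_t) * 8 - 1);
        printf("2^(bits-1):      %#zx\n", half);
        printf("true max size_t: %#zx\n", ~(size_t)0);
        return 0;
    }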



----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2002-05-20 00:48

Message:
Logged In: YES 
user_id=31435

The code patch has some problems:  you can't assume any 
relation between a size_t and an unsigned long; C simply 
doesn't define how big size_t is, and relative sizes do 
vary on 64-bit platforms.  However that gets fixed, if you 
decide it's "too big", var should be set to NULL (not 0 -- 
this is a "Guido thing" <wink>), and no exception should be 
set.  It's the caller's responsibility to check var for 
NULL after the macro is invoked, and set an appropriate 
exception.  listobject.c sometimes doesn't check the result 
for NULL, but that should only be when it knows it's 
*shrinking* a memory area, so that realloc can't fail.
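
A hedged sketch of that convention (names and details mine, not
the real PyMem_RESIZE): the macro NULLs out var on failure and
sets no exception, leaving the caller to raise MemoryError:

    #include <stdio.h>
    #include <stdlib.h>

    /* On an oversized or failed request, set var to NULL and set no
       exception; the caller checks for NULL.  Note that, as with the
       real macro, the old block is lost when that happens. */
    #define SAFE_RESIZE(var, type, n)                                \
        do {                                                         \
            if ((size_t)(n) <= (~(size_t)0) / sizeof(type))          \
                (var) = (type *)realloc((var),                       \
                                        (size_t)(n) * sizeof(type)); \
            else                                                     \
                (var) = NULL;                                        \
        } while (0)

    int main(void)
    {
        int *items = NULL;

        SAFE_RESIZE(items, int, 1000);          /* modest request */
        printf("small: %s\n", items ? "ok" : "failed");

        SAFE_RESIZE(items, int, ~(size_t)0);    /* would overflow */
        if (items == NULL)
            printf("huge request refused; caller raises the error\n");
        return 0;
    }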

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger)
Date: 2002-05-17 20:12

Message:
Logged In: YES 
user_id=80475

Added a patch to add this bug to the testbank.

Shallow code review:  Patch compiles okay (applied to 
Py2.3a0). Fixes the original problem.  Passes the smell 
test.  Macro style good (only the "var" operand is used 
more than once; no side-effects except setting "var" to 
zero upon a resize error).  Passes the standard regression 
tests.  Passes regression testing versus my personal (real 
code) testbank.

Will give it a deeper look this weekend.

One other thought: should the ValueError be replaced with a 
MemoryError to make the message consistent with PyList_New?



----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2002-05-17 17:03

Message:
Logged In: YES 
user_id=33168

Ok, this time I have a patch.  The patch only fixes listobject.c.

I looked over the other uses of PyMem_R{ESIZE,ALLOC}() and
they don't appear to be nearly as problematic as list.  For
example, if the grammar has 1e9 nodes, there are going to be
other problems well before then (i.e., memory will probably be
gone).

----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2002-05-17 16:25

Message:
Logged In: YES 
user_id=33168

Oops, sorry about that last comment.  That was something I
was playing with.  The CVS version is fine for [x]range(float).

----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2002-05-17 16:22

Message:
Logged In: YES 
user_id=33168

Note, this problem is even more generic in CVS version:

>>> range(1.1)
Segmentation fault (core dumped)
>>> xrange(1.1)
Segmentation fault (core dumped)

[x]range(float) works fine in 2.2.1.

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger)
Date: 2002-05-14 20:44

Message:
Logged In: YES 
user_id=80475

Would there be some merit to converting PyMem_RESIZE to a 
function with an overflow check?  

I would guess that the time spent on a realloc dwarfs the 
overhead of a short wrapper function.

OTOH, I'll bet this core dump only occurs in toy examples 
and never in real code.



----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2002-05-14 18:15

Message:
Logged In: YES 
user_id=31435

Heh.  This is an instance of a general flaw:  the 
PyMem_RESIZE macro doesn't check for int overflow in its

    (n) * sizeof(type)

subexpression.  The basic deal is that 1000000000 fits in an
int, but 4 times that (silently) overflows.  In more detail,
for this specific test case, listobject.c's roundupsize rounds
1e9 up to 0x40000000, which silently wraps around to 0 (!)
when PyMem_RESIZE multiplies it by 4.

Hard to know how to fix this in general; PyMem_RESIZE 
callers don't generally worry about overflow now.
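
The wraparound is easy to reproduce in isolation (a standalone
sketch assuming 32-bit unsigned arithmetic and 4-byte ints, not
the actual listobject.c code):

    #include <stdio.h>

    int main(void)
    {
        /* 1e9 items round up to 0x40000000; with 32-bit unsigned
           arithmetic, 0x40000000 * 4 == 0x100000000, which wraps to
           0, so the allocator is asked for a zero-byte block. */
        unsigned int n = 0x40000000;
        unsigned int nbytes = n * (unsigned int)sizeof(int);
        printf("n = %#x, n * sizeof(int) = %#x\n", n, nbytes);
        return 0;
    }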

----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=556025&group_id=5470