[ python-Bugs-1045509 ] Problems with os.system and ulimit -f

SourceForge.net noreply at sourceforge.net
Mon Oct 3 08:35:37 CEST 2005


Bugs item #1045509, was opened at 2004-10-12 08:34
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1045509&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.4
Status: Open
>Resolution: Invalid
Priority: 5
Submitted By: Tomasz Kowaltowski (kowaltowski)
>Assigned to: Neal Norwitz (nnorwitz)
Summary: Problems with os.system and ulimit -f

Initial Comment:
Python version (running under Fedora Core 2 Linux):
   Python 2.3.3 (#1, May  7 2004, 10:31:40)
   [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2
---------------------------------------------------------------------------
I found a problem while executing the 'ulimit -f' bash
command within the 'os.system' function. According to
the documentation this function should behave exactly
like the C standard library function 'system'. However,
it does not, as illustrated by the minimal Python and C
examples testsystem.py and testsystem.c (see the
attached zipped file).

In these examples 'largefile' is a compiled C program
that writes an endless stream to stdout (also attached).
The C program testsystem.c works as expected and prints
the following output:

   command: ulimit -f 10; largefile > xxx;
   result = 153

The Python program testsystem.py **does not stop**; if
interrupted by Ctrl-C it prints:

  command: ulimit -f 10; largefile > xxx;
  result = 0

In both cases the output file 'xxx' has 10240 bytes,
i.e., 10 blocks, as limited by 'ulimit'.
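
The attached scripts are not reproduced in this message; a minimal
sketch of the Python side, assuming the attached 'largefile' binary is
on the PATH, would look roughly like this (hypothetical reconstruction,
not the actual attachment):

    # Rough stand-in for the attached testsystem.py.
    import os

    command = "ulimit -f 10; largefile > xxx;"
    print "command:", command
    result = os.system(command)   # under the reported Python versions this never returns
    print "result =", result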

It is interesting, though, that the command 'ulimit -t 1'
(CPU time) produces correct results under both the Python
and C versions, i.e., it interrupts the execution and prints:

  command: ulimit -t 1; largefile > xxx;
  result = 137

----------------------------------------------------------------------

>Comment By: Neal Norwitz (nnorwitz)
Date: 2005-10-02 23:35

Message:
Logged In: YES 
user_id=33168

It's set in Python/pythonrun.c. The reason is given below.  I'm
not sure this can be changed.  Any suggestions?  I'd be happy to
update the documentation, but have no idea where.

revision 2.160
date: 2002/04/23 20:31:01;  author: jhylton;  state: Exp;  lines: +3 -0
Ignore SIGXFSZ.

The SIGXFSZ signal is sent when the maximum file size limit is
exceeded (RLIMIT_FSIZE).  Apparently, it is also sent when the 2GB
file limit is reached on platforms without large file support.

The default action for SIGXFSZ is to terminate the process and dump
core.  When it is ignored, the system call that caused the limit to
be exceeded returns an error and sets errno to EFBIG.  Python always
checks errno on I/O syscalls, so there is nothing to do with the
signal.
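
The disposition is easy to confirm from a stock interpreter (a quick
sketch, assuming a POSIX platform where signal.SIGXFSZ is defined):

    import signal

    # Expected to print signal.SIG_IGN, reflecting the handler
    # installed at startup in Python/pythonrun.c.
    print signal.getsignal(signal.SIGXFSZ)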


----------------------------------------------------------------------

Comment By: Gustavo Sverzut Barbieri (gsbarbieri)
Date: 2004-11-26 11:38

Message:
Logged In: YES 
user_id=511989

The problem is that Python ignores the SIGXFSZ (File size
limit exceeded) signal.

import signal
signal.signal(signal.SIGXFSZ, signal.SIG_DFL)

solves your problem.
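
For example, restoring the default disposition before the os.system()
call lets the child shell inherit it, so the pipeline is killed by
SIGXFSZ instead of looping forever (a sketch using the same command as
the report; 'largefile' is the attached program):

    import os
    import signal

    # Undo the SIG_IGN the interpreter installs at startup; the shell
    # started by os.system() then runs with the default disposition.
    signal.signal(signal.SIGXFSZ, signal.SIG_DFL)

    result = os.system("ulimit -f 10; largefile > xxx;")
    print "result =", result   # non-zero wait status once largefile is killed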


To any Python developer: why does Python ignore this signal?

----------------------------------------------------------------------

Comment By: Tomasz Kowaltowski (kowaltowski)
Date: 2004-10-22 04:52

Message:
Logged In: YES 
user_id=185428

I also tested the new version "Python 2.4b1" -- the problem
still occurs :-(.

----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1045509&group_id=5470


More information about the Python-bugs-list mailing list