can you please test the following piece of code with your compilers
and check if they're issuing any warnings? Thanks!
#include <stdio.h>

int main(int argc, char *argv[])
{
    char c = 1;
    if (!(c & ~255))
        printf("Thanks!\n");
    return 0;
}
*** S t a c k l e s s S p r i n t ***
Dear Stackless Friends and Python developers,
we are planning a sprint on developing and
learning Stackless Python.
It will be located in Berlin, happening probably
somewhere in March 2004.
Duration is not settled, maybe a little more than a weekend,
maybe more; this depends on what you like.
Well, there is a lot possible.
Current topics which come into mind are
* scheduling object
* brain storming!
* Zope Wiki and Documentation
* making channels really stackless
* find the Bajo bug if I still can't?
* demo applications
* more regression tests
* real zope apps?
* minimalist Stackless Python with no hardware dependency
* assembly-free Stackless with setjmp/longjmp
* spreading the internals between developers
but this is completely open for discussion.
Well, it will be a bit simpler than the pypy-sprints,
which I think are very difficult, but Stackless
has its built-in difficulties by nature.
This is by far, now and forever, the best possible way
to learn about Stackless Python and to become a core
developer. But no guarantee possible :-)
Please contact me.
cheers - chris
Christian Tismer :^) <mailto:firstname.lastname@example.org>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/
14109 Berlin : PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34 home +49 30 802 86 56 mobile +49 173 24 18 776
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/
Hey, hey! After looking at this on and off for months, I finally
found a safe way to dramatically improve the speed of building lists:
We've always known that every list.append() resulted in a realloc()
call after computing an adaptive, oversized block length.
The obvious solution was to save the block size and avoid calls to
realloc() when the block was already large enough to hold a new
element. Unfortunately, the list structure was wide open to
manipulation by every piece of code written since Python's infancy.
Almost no assumptions could be made about other code directly
hacking the list structure elements.
The safe solution turned out to be saving a copy of the ob_item pointer
whenever a block of known size was allocated. That copy is then
used as a guard for the size check. That way, if the pointer is swapped
out by another routine (not a myth, it happens), we assume our saved
block size is invalid and just do a normal realloc().
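A minimal sketch of the idea in C. The names here (toylist, guard, allocated) are illustrative, not the actual field names in the patch:

```c
#include <stdlib.h>

/* Toy list: ob_item holds the elements; "allocated" and "guard" are
   hypothetical names for the two new fields described above. */
typedef struct {
    int ob_size;        /* number of elements in use */
    int allocated;      /* capacity of the current block */
    void **ob_item;     /* element array */
    void **guard;       /* copy of ob_item taken when the block was sized */
} toylist;

static int
toylist_append(toylist *self, void *item)
{
    /* Fast path: nobody swapped ob_item out from under us, and the
       block still has room, so skip realloc() entirely. */
    if (self->ob_item == self->guard && self->ob_size < self->allocated) {
        self->ob_item[self->ob_size++] = item;
        return 0;
    }
    /* Slow path: grow by an adaptive, oversized amount, then re-arm
       the guard with the fresh pointer. */
    {
        int new_alloc = self->ob_size + (self->ob_size >> 3) +
                        (self->ob_size < 9 ? 3 : 6) + 1;
        void **items = realloc(self->ob_item, new_alloc * sizeof(void *));
        if (items == NULL)
            return -1;
        self->ob_item = items;
        self->guard = items;
        self->allocated = new_alloc;
        self->ob_item[self->ob_size++] = item;
        return 0;
    }
}
```

If some other routine replaces ob_item, the pointer no longer matches the guard, the saved capacity is treated as invalid, and the next append falls back to a normal realloc().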
The results are wonderful:
* list(it) speeds up by at least 40% when "it" is an iterable
  not defining __len__ (there is no change for iterables that
  do define __len__ because they pre-size in a single step).
* list comprehensions show a similar speed-up.
* list.append(x) is about twice as fast.
AFAICT, this is a straight win with no trade-offs along the way.
The only offsetting cost is adding two fields to the list
structure (Tim said that was bad, but I don't remember why).
Please take a look and see if there are any holes in the theory.
While the patch passes the test suite, it is possible that some
extension module reallocs() the list to a smaller size (leaving
the pointer unchanged) and then changes its mind and starts growing
the list again. Given the magnitude of the speed-up, I think that
risk is warranted. The offending code would have to be bypassing
the C API for lists and tinkering with the internals in an atypical
way (swapping one pointer for another and/or altering ob_size is
typical; resizing the existing pointer downward is not).
P.S. SF is down for maintenance. Will post the patch there tomorrow.
While the change is simple, the patch is long because all the NRESIZE
macros had to be replaced by a function call.
In 2.3, if you have a thread running when the interpreter tears itself
down, you get an exception: the thread appears to catch an exception
from trying to access something that has already been torn down, and
then raises its own exception because globals have already disappeared
by the time it tries to access currentThread, which is a global
(all of this is in my last comment in the bug report).
My question is whether this is worth fixing. Since it only occurs if
you don't shut down your threads, it seems like it is not really a bug
but just unexpected behavior from not cleaning up after yourself. But,
as I said in my bug report post, I think it is fixable. So is it even
worth fixing in some non-pretty way?
Edward Jones writes:
> I would like to propose a class ExceptionList which is an exception
> containing a list. It can be used in two ways. Suppose
> MyError = ExceptionList( (ValueError, TypeError) )
> "except MyError:"
> handles MyError, ValueError, and TypeError or any exception derived from
> them. This generalizes the current "except (ValueError, TypeError):".
I don't believe that this *generalizes* using a tuple in an except clause,
I believe it *exactly replicates* that functionality. In other words, I
don't see how
MyError = ExceptionList( (ValueError, TypeError) )
is any better than this (which works today):
MyError = (ValueError, TypeError)
So I'd be a strong -1 on providing a new way of doing something which
is already easy to do (hey... it's LESS typing with the current syntax).
Edward Jones continues:
> "raise MyError"
> raises an exception that can be handled by any except clause that
> handles MyError, ValueError, or TypeError.
Well, that's easy to do in current Python also:
>>> class CompoundError(TypeError, ValueError):
...     pass
...
>>> try:
...     raise CompoundError
... except ValueError:
...     print "it's a value error"
...
it's a value error
>>> try:
...     raise CompoundError
... except TypeError:
...     print "it's a type error"
...
it's a type error
So again, there's no need for your proposal.
But even if these things WEREN'T already built into Python, I'd STILL
be opposed to adding new syntax to allow them. Catching more than one
exception in the same except clause (the first use) is fairly handy,
but throwing an exception which masquerades as several different
exception types is rather peculiar... I can't think of a realistic way
in which I would want to use this.
-- Michael Chermside
Here is a weak proposal for a generalized type of exception. Since I
know nothing about either language design or implementation, my hope is
to trigger some thinking by the experts.
I would like to propose a class ExceptionList which is an exception
containing a list. It can be used in two ways. Suppose
MyError = ExceptionList( (ValueError, TypeError) )
Then "except MyError:"
handles MyError, ValueError, and TypeError, or any exception derived from
them. This generalizes the current "except (ValueError, TypeError):".
And "raise MyError"
raises an exception that can be handled by any except clause that
handles MyError, ValueError, or TypeError. Here is an example of this:
Suppose I invent the exception LengthError which is raised when a
sequence (including sys.argv) is the wrong length. It is also raised if
a function is passed the wrong number of arguments. I have a lot of code
which raises and/or handles ValueError or TypeError in these situations.
MyError can handle exceptions raised by the legacy code or vice versa.
Perhaps MyError could be a class with base classes ValueError and TypeError.
(For those of you not on python-checkins...)
I've removed a number of obsolete features as specified by PEP 11. In
particular, this morning I removed support for --without-universal-newlines
and transitional pthread variants dating from the 1997 time frame. SunOS 4
support is also gone. For full details see Misc/NEWS. I've been testing on
Mac OS X, but am also testing on Solaris 8 and Mandrake 8.1.
If you haven't cvs up'd and rebuilt recently (don't forget to make clean and
execute ./configure on Unix-y platforms), please do so and run make test.
Dan reported a problem with the benchmark on Mac OS X:
> > Hrm. On my OS X laptop:
> > lir:~/Desktop/parrotbench dan$ python -O
> > Python 2.3 (#1, Sep 13 2003, 00:49:11)
> > [GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
> > Type "help", "copyright", "credits" or "license" for more information.
> > >>> import b
> > --> iteration 0
> > --> b0
> > Traceback (most recent call last):
> > File "<stdin>", line 1, in ?
> > File "b.py", line 12, in ?
> > b0.main()
> > File "b0.py", line 893, in main
> > checkoutput(4201300315)
> > File "b0.py", line 763, in checkoutput
> > check(strhash(outputtext), n)
> > File "b0.py", line 3, in check
> > raise AssertionError("%.30r != %.30r" % (a, b))
> > AssertionError: 503203581L != 4201300315L
> > >>>
After some off-line email exchanges I think I have a fix for this
behavior, which must have to do with the varying length of the
addresses shown in the default repr(), e.g. "<Foo object at 0xffff>".
Version 1.0.1 of the benchmark is at the same place as before:
(You can tell whether you have the fixed version by looking at the first
line of README.txt; if it says "Parrot benchmark 1.0.1", you do.)
I haven't heard back from Dan, but I assume that the fix works.
Happy New Year everyone!
--Guido van Rossum (home page: http://www.python.org/~guido/)
Why was Python syntax designed so that
except (TypeError, ValueError):
is OK but
except [TypeError, ValueError]:
is forbidden? Should this be changed? Is immutability needed here? Where
in Python is immutability really needed, or where does it really improve
efficiency?
I am writing a Python procedural language implementation for PostgreSQL (embedding) (use CVS -tOPYMEM; http://gborg.postgresql.org/project/postgrespy/), and I need to free all memory allocated by Python when a Postgres ERROR occurs, as there is a potential of longjmp'ing out of the interpreter, which would leak memory all over the floor. In fact, in most places within the plpy library, memory allocated by Python has a chance of leaking. (Yes, there is a way to work around this, but it is ugly, IMO. See the small paragraph at the end of this letter on how this is normally handled by PLs.)
Ideally, Python would allocate memory within a Postgres MemoryContext, so that when an ERROR occurs, Python's memory can be freed in the same way as other Postgres allocations. This can be achieved by making Python's low-level object allocators and deallocators use three mutable, global function pointers.
In my test implementation I added these FPs to Objects/obmalloc.c
void *(*PyMemInternal_Reallocate)(void *, size_t) = realloc;
void *(*PyMemInternal_Allocate)(size_t) = malloc;
void (*PyMemInternal_Free)(void *) = free;
And, of course, then replace direct calls to malloc, realloc, and free in obmalloc.c and pymem.h with their global FP counterparts, and extern the declarations in pymem.h as well (not necessary, but it seemed appropriate).
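For illustration, here is a self-contained sketch of the pattern. The hook names mirror the ones above; the counting allocator standing in for a Postgres MemoryContext is hypothetical:

```c
#include <stdlib.h>

/* The three mutable, global function pointers; in the patch these
   live in obmalloc.c and default to the C library allocators. */
void *(*PyMemInternal_Allocate)(size_t) = malloc;
void *(*PyMemInternal_Reallocate)(void *, size_t) = realloc;
void (*PyMemInternal_Free)(void *) = free;

/* A counting allocator an embedder might install, standing in for
   allocation inside a Postgres MemoryContext. */
static size_t live_blocks = 0;

static void *tracked_alloc(size_t n) { live_blocks++; return malloc(n); }
static void *tracked_realloc(void *p, size_t n)
{
    if (p == NULL)          /* realloc(NULL, n) behaves like malloc */
        live_blocks++;
    return realloc(p, n);
}
static void tracked_free(void *p) { if (p != NULL) live_blocks--; free(p); }

/* Redirect the interpreter's internal allocations to the host's,
   so the host can account for (and later reclaim) every block. */
static void install_hooks(void)
{
    PyMemInternal_Allocate = tracked_alloc;
    PyMemInternal_Reallocate = tracked_realloc;
    PyMemInternal_Free = tracked_free;
}
```

An embedder would call install_hooks() before Py_Initialize, so that every internal allocation flows through the host's allocator from the start.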
Overloading the base type's tp_alloc and tp_free does not seem to be a complete option, as many builtin types specify tp_alloc and tp_free as good ol' PyObject_Malloc/Del/etc. (at least stringobject.c, IIRC), or with GC_* functions, and that does not cover direct calls to PyObject_*alloc|free anyway.
I hear that there are some linker hacks that may be able to emulate this, but portability is very desirable as there are some PostgreSQL developers working on native Windows support.
The main problem that I can see with this request is that my use may be a special case, which few embedders would ever need to use.
Another possible solution is a function in obmalloc.c that iterates through the arenas and frees them. This seems like it would be more likely to be accepted, but the former solution is more desirable for my usage.
Of course, just freeing up all the memory (either resetting the memory context that Python memory is allocated under, or freeing up the arenas) leaves the Python library in an unusable state, even if done after Py_Finalize (yes, I've tested this; dangling globals and states remain, especially in obmalloc.c, IIUC). To do this without restarting the process or re-dlinking (my chosen solution for my application) would require Py_Finalize to completely reset libpython; that is, to do complete finalization, resetting libpython to its pre-Py_Initialize state. This seems a rather large request, and probably beyond anything that anyone is willing to do for the rarely used result(?). I'm not too eager to jump on it, if it's even acceptable, but if nobody has a problem with me tackling it, I may look into it.
I plan to add support for reloading DLLs in Postgres to make this work properly for my app (closing and reopening the lib should reset the globals, no? I haven't tested this yet, but I'm fairly confident it is at least a reasonable assumption). I think reload-on-ERROR would be a useful feature for lib authors, so I plan to submit a proposal to pgsql-hackers soon, depending on what I am able to work out here.
Normally, PLs use sigsetjmp/siglongjmp fun to clean up this kind of memory, but it is a serious pain for me in plpy. Trapping these potential jumps must be done within every function that makes Python memory allocations and then calls something that may ERROR out (there are lots of them, especially in my Postgres interface module, the "if" CVS repository).
You can get my patch against 2.3.3, which implements those global FPs, at rhid.com/pymfp.patch; it is pretty trivial. It doesn't touch thread*.c or strdup.c, as I didn't think it really applied to them, but perhaps they should be updated as well.
Comments, Criticisms, Flames?
James William Pye