I want to be able to create a version of python24.lib that is a static library,
suitable for creating a python.exe or other .exe using python's api.
So I did as the earlier poster suggested, using 2.4.1 sources. I modified the
PCBuild/pythoncore and python .vcproj files as follows:
General / Configuration Type / Static library (was dynamic in pythoncore)
C/C++ / Code Generation / Runtime Library: /MT (was /MD for the multithreaded DLL runtime)
C/C++ / Precompiled Headers / Not Using Precompiled Headers (based on some MSDN hints)
Librarian / Output File: .\python24.lib
Preprocessor: added Py_NO_ENABLE_SHARED; removed USE_DL_IMPORT
I built pythoncore and python. The resulting python.exe worked fine, but did
indeed fail when I tried to dynamically load anything (Dialog said: the
application terminated abnormally)
Now I am not very clueful about the dllimport/dllexport business. But it seems
that I should be able to link MY program against a .lib somehow (a real lib),
and let the .EXE export the symbols somehow.
My first guess is to try /MD, using Py_NO_ENABLE_SHARED when building
python24.lib but Py_ENABLE_SHARED when compiling python.c. I'll
try that later, but does anyone have more insight into the right way to do this?
PEP 255 ("Simple Generators") closes with:
> Q. Then why not allow an expression on "return" too?
> A. Perhaps we will someday. In Icon, "return expr" means both "I'm
> done", and "but I have one final useful value to return too, and
> this is it". At the start, and in the absence of compelling uses
> for "return expr", it's simply cleaner to use "yield" exclusively
> for delivering values.
Now that Python 2.5 has gained enhanced generators (multitudes rejoice!), I think
there is a compelling use for valued return statements in cooperative
multitasking code of the kind:
Data = yield Client.read()
MoreData = yield Client.read()
Result = yield foo()
For generators written in this style, "yield" means "suspend execution of the
current call until the requested result/resource can be provided", and
"return" regains its full conventional meaning of "terminate the current call
with a given result".
The simplest / most straightforward implementation would be for "return Foo"
to translate to "raise StopIteration, Foo". This is consistent with a plain "return"
translating to "raise StopIteration", and does not break any existing code.
(Another way to think about this change is that if a plain StopIteration means
"the iterator terminated", then a valued StopIteration, by extension, means
"the iterator terminated with the given value".)
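As it happens, this translation is exactly the semantics Python 3 later adopted (via PEP 380): "return expr" in a generator raises StopIteration carrying the value. A sketch, runnable on Python 3:

```python
def countdown():
    yield 3
    yield 2
    yield 1
    return "lift-off"       # becomes StopIteration("lift-off")

# A for loop swallows StopIteration, so the return value is invisible here:
for value in countdown():
    print(value)            # prints: 3, 2, 1

# Driving the generator by hand exposes the "return value":
gen = countdown()
try:
    while True:
        next(gen)
except StopIteration as stop:
    print(stop.value)       # prints: lift-off
```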
Motivation by real-world example:
One system that could benefit from this change is Christopher Armstrong's
defgen.py for Twisted, which he recently reincarnated (as newdefgen.py) to
use enhanced generators. The resulting code is much cleaner than before, and
closer to the conventional synchronous style of writing, the saga of which is
summarized here:
However, because enhanced generators have no way to differentiate their
intermediate results from their "real" result, the current solution is a
somewhat confusing compromise: the last value yielded by the generator
implicitly becomes the result returned by the call. Thus, to return
something, in general, requires the idiom "yield Foo; return". If valued
returns are allowed, this would become "return Foo" (and the code implementing
defgen itself would probably end up simpler, as well).
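A minimal trampoline in that spirit might look like the sketch below. All names (run, read, the convention of yielding zero-argument callables) are made up for illustration and are not defgen's actual API; the point is that a valued StopIteration naturally becomes the task's final result.

```python
# A tiny trampoline: tasks yield zero-argument callables ("requests"),
# the trampoline runs each request and sends the result back in; a valued
# return (StopIteration) becomes the task's final result.
def run(task):
    value = None
    while True:
        try:
            request = task.send(value)   # resume the task
        except StopIteration as stop:
            return getattr(stop, "value", None)
        value = request()                # satisfy the request synchronously

def read():
    return "data"

def main():
    data = yield read                    # "Data = yield Client.read()" style
    result = yield (lambda: data.upper())
    return result                        # the valued return under discussion

print(run(main()))                       # prints: DATA
```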
At 05:15 PM 10/3/2005 -0400, Jason Orendorff wrote:
>Phillip J. Eby writes:
> > You didn't offer any reasons why this would be useful and/or good.
>It makes it dramatically easier to write Python classes that correctly
>support 'with'. I don't see any simple way to do this under PEP 343;
>the only sane thing to do is write a separate @contextmanager
>generator, as all of the examples do.
Wha? For locks (the example you originally gave), this is trivial.
> # decimal.py
> class Context:
>     def __enter__(self):
>         ...
>     def __exit__(self, t, v, tb):
>         ...
> DefaultContext = Context(...)
>Kindly implement __enter__() and __exit__(). Make sure your
>implementation is thread-safe (not easy, even though
>decimal.getcontext/.setcontext are thread-safe!). Also make sure it
>supports nested 'with DefaultContext:' blocks (I don't mean lexically
>nested, of course; I mean nested at runtime.)
>The answer requires thread-local storage and a separate stack of saved
>context objects per thread. It seems a little ridiculous to me.
Okay, it was completely non-obvious from your post that this was the
problem you're trying to solve.
> class Context:
>     def __with__(self):
>         old = decimal.getcontext()
>         ...
This could also be done with a Context.replace() @contextmanager method.
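For comparison, here is the save/restore pattern under discussion written with the @contextmanager decorator that PEP 343 did adopt. This is a sketch (use_context is a made-up name); today's stdlib packages the same idea as decimal.localcontext().

```python
import decimal
from contextlib import contextmanager

@contextmanager
def use_context(ctx):
    old = decimal.getcontext()     # save the calling thread's context
    decimal.setcontext(ctx)
    try:
        yield ctx
    finally:
        decimal.setcontext(old)    # restore it even if the body raises

with use_context(decimal.Context(prec=3)):
    print(decimal.Decimal(1) / decimal.Decimal(7))   # prints: 0.143
print(decimal.getcontext().prec)                     # back to the default
```

Because decimal contexts are per-thread to begin with, the thread-safety worry above is handled by getcontext()/setcontext() themselves.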
On the whole, I'm torn. I definitely like the additional flexibility this
gives. On the other hand, it seems to me that __with__ and the additional
C baggage violates the "if the implementation is hard to explain"
rule. Also, people have already put a lot of effort into implementation
and documentation patches based on an accepted PEP. That's not enough to
override "the right thing to do", especially if it comes with a volunteer
willing to update the work, but in this case the amount of additional
goodness seems small, and it's not immediately apparent that you're
volunteering to help change this even if Guido blessed it.
I looked whether I could make the test suite pass again
when compiled with --disable-unicode.
One problem is that no Unicode escapes can be used since compiling
the file raises ValueErrors for them. Such strings would have to
be produced using unichr().
Is this the right way? Or is disabling Unicode not supported any more?
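The workaround described, sketched with Python 3's chr for runnability (Python 2's unichr played the same role): the test string is built at run time, so no \u escape needs to appear in the module being compiled.

```python
# Build the non-ASCII test string with chr() instead of a \u escape in
# the source (Python 2's unichr is the analogous spelling).
euro = chr(0x20AC)        # EURO SIGN
print(euro == "\u20ac")   # prints: True (the escape here is only to verify)
```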
I'm -1 on PEP 343. It seems ...complex. And even with all the
complexity, I *still* won't be able to type
with self.lock: ...
which I submit is perfectly reasonable, clean, and clear. Instead I
have to type
with locking(self.lock): ...
where locking() is apparently either a new builtin, a standard library
function, or some 6-line contextmanager I have to write myself.
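As it turned out, lock objects did later gain __enter__/__exit__ support, so the wished-for spelling works directly on modern Python. A small sketch:

```python
import threading

class Counter:
    def __init__(self):
        self.lock = threading.Lock()
        self.n = 0

    def bump(self):
        with self.lock:    # lock objects are their own context managers
            self.n += 1

c = Counter()
c.bump()
print(c.n)                 # prints: 1
```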
So I have two suggestions.
1. I didn't find any suggestion of a __with__() method in the
archives, so I feel I should suggest it. It would work just like
__iter__(): __with__() always returns a new context manager object. Just as with
iterators, a context manager object has "cm.__with__() is cm".
The 'with' statement would call __with__(), of course.
Optionally, the type constructor could magically apply @contextmanager
to __with__() if it's a generator, which is the usual case. It looks
like it already does similar magic with __new__(). Perhaps this is
too cute though.
2. More radical: Let's get rid of __enter__() and __exit__(). The
only example in PEP 343 that uses them is Example 4, which exists only
to show that "there's more than one way to do it". It all seems fishy
to me. Why not get rid of them and use only __with__()? In this
scenario, Python would expect __with__() to return a coroutine (not to
say "iterator") that yields exactly once.
Then the "@contextmanager" decorator wouldn't be needed on __with__(),
and neither would any type constructor magic.
The only drawback I see is that context manager methods implemented in
C will work differently from those implemented in Python. Since C
doesn't have coroutines, I imagine there would have to be enter() and
exit() slots. Maybe this is a major design concern; I don't know.
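Suggestion 2 can be sketched by driving a one-yield generator by hand, the way the proposed 'with' would have. The __with__ name and the run_with driver below are hypothetical; this protocol was never adopted.

```python
# Sketch of the hypothetical __with__ protocol: a generator that yields
# exactly once, driven manually the way 'with' would have driven it.
log = []

class Resource:
    def __with__(self):
        log.append("enter")
        try:
            yield self          # the single yield marks the managed block
        finally:
            log.append("exit")

def run_with(obj, body):
    gen = obj.__with__()
    value = next(gen)           # run up to the yield (the "enter" half)
    try:
        body(value)
    finally:
        try:
            next(gen)           # resume past the yield (the "exit" half)
        except StopIteration:
            pass

run_with(Resource(), lambda r: log.append("body"))
print(log)                      # prints: ['enter', 'body', 'exit']
```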
My apologies if this is redundant or unwelcome at this date.
The new ndarray object of scipy core (successor to Numeric Python) is a
C extension type that has a getitem defined in both the as_mapping and
the as_sequence structure.
The as_sequence getitem is there just so PySequence_GetItem will work correctly.
As exposed to Python, the ndarray object has a .__getitem__ wrapper method.
Why does this wrapper call the sequence getitem instead of the mapping getitem?
Is there any way to get at a mapping-style __getitem__ method from Python?
This looks like a bug to me (which is why I'm posting here...).
Thanks for any help or insight.
On 9/29/05, Robey Pointer <robey at lag.net> wrote:
> Yesterday I ran into a bug in the C API docs. The top of this page:
> This type represents a 16-bit unsigned storage type which is
> used by Python internally as basis for holding Unicode ordinals. On
> platforms where wchar_t is available and also has 16-bits, Py_UNICODE
> is a typedef alias for wchar_t to enhance native platform
> compatibility. On all other platforms, Py_UNICODE is a typedef alias
> for unsigned short.
Steven Bethard wrote:
> I believe this is the same issue that was brought up in May. My
> impression was that people could not agree on a documentation patch.
>  http://www.python.org/dev/summary/2005-05-01_2005-05-15.html
I thought the problem was disagreement over how the
system *should* pick an underlying type to alias.
Given the current policy, are there objections to a patch
that at least steers people away from assuming they can
use the underlying type directly?
Python uses this type to store Unicode ordinals. It is
typically a typedef alias, but the underlying type -- and
the size of that type -- varies across different systems.
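One run-time corollary of that wording: code should ask the build rather than assume a size. A small sketch using sys.maxunicode, which historically distinguished "narrow" (0xFFFF) from "wide" (0x10FFFF) builds:

```python
import sys

# sys.maxunicode reveals the ordinal range of the running build:
# 0xFFFF on old "narrow" builds, 0x10FFFF on "wide" (and all modern) builds.
print(hex(sys.maxunicode))
```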
I have various reports that the Python 2.4 installer does
not work if you are trying to install in a non-standard location
as a non-privileged user, e.g. #1298962, #1234328.
Despite many attempts, I haven't been able to reproduce any
such problem, and the submitters weren't really able to experiment.
So, if anybody is able to reproduce any of these reports, and give
me instructions on how to reproduce it myself, that would be
very much appreciated.
More than a year and a half ago, I posted a big patch to IDLE which
adds support for completion and much better calltips, along with some
other improvements. Since then, I had some mail conversations with
Kurt B. Kaiser, who is responsible for IDLE, which resulted in
nothing. My last mail, from Jul 10, saying (with more details) "I made
the minor changes you asked for, let's get it in, it's not very
complicated" was unanswered.
This is just an example of the fact that IDLE development has been
virtually nonexistent in the last months, because most patches were
left open. I and my colleagues use IDLE intensively - that is, a heavily
patched IDLE. It includes my patch and many other improvements made by me
and my colleagues.
The improved IDLE is MUCH better than the standard IDLE, especially
for interactive work. Since we would like to share our work with the
rest of the world, if nothing is changed we would start a new IDLE
fork soon, perhaps at python-hosting.com.
I really don't like that - maintaining a fork requires a lot of extra
work, and it is certain that many more people will enjoy our work if
it is integrated in the standard Python distribution. But sending patches
and watching them stay open despite a continuous nagging is worse.
Please, either convince KBK to invest more time in IDLE development,
or find someone else who would take care of it. If you like, I would
happily help in the development.
I hope I am not sounding offensive. It's actually quite simple: if the
excellent development environment IDLE can't develop inside standard
Python, it should be developed outside it. As I said, I prefer the former.
Have a good week,
Bruce Eckel wrote:
> 3) Tasks are cheap enough that I can make
> thousands of them, ...
> 4) Tasks are "self-guarding," so they prevent
> other tasks from interfering with them. The
> only way tasks can communicate with each
> other is through some kind of formal
> mechanism (something queue-ish,
> I'd imagine).
I think these two are the hardest to reconcile.
Shane Hathaway's suggestion starts from the process model, but
a new process isn't cheap.
process' interpreter alive and feeding it more
requests through a queue just hides the
problem; you can't have more non-sequential
tasks than processors without restarting the
whole contention issue. Even using sequential
tasks (similar to "import dummy_thread") lets
task1 mess up the builtins (or other library
modules) for future task2. The more guards
you add, the heavier each task gets.
At the other end are generators; I think what
generators are missing is:
(A) You can't easily send them a message.
This can be solved by wrapping them in an
object, or (probably) by waiting until 2.5.
(B) The programmer has to supply a scheduler.
This could be solved by a standard library module.
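Such a library scheduler could be as small as a round-robin loop over a deque. A sketch (all names made up):

```python
from collections import deque

# Round-robin scheduler for generator-based tasks: each task runs until
# its next yield, then goes to the back of the ready queue.
def run_all(tasks):
    order = []                      # record who ran, for demonstration
    ready = deque(tasks)            # (name, generator) pairs
    while ready:
        name, task = ready.popleft()
        try:
            next(task)              # let the task run to its next yield
            order.append(name)
            ready.append((name, task))   # reschedule it
        except StopIteration:
            pass                    # task finished; drop it
    return order

def worker(n):
    for _ in range(n):
        yield                       # cooperative "give up the CPU"

print(run_all([("a", worker(2)), ("b", worker(1))]))   # prints: ['a', 'b', 'a']
```

Note that this scheduler is exactly as non-preemptive as point (C) warns: a worker that never yields would run forever.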
(C) That scheduler is non-preemptive. A single
greedy generator can starve all the others.
You can reduce the problem by scheduling
generators on more than one thread.
To really solve it would require language support.
That *might* be almost as simple as a new object
type that automatically yielded control every so often.
(D) Generators can interfere with each other unless
the programmer is careful to treat all externally visible
objects as immutable.
Again, it would require language support. I also have
a vague feeling that fixing (C) or (D) might make
things worse elsewhere.
(E) Generators are not reentrant.
I know you said that some restrictions are reasonable,
and this might fall into that category ... but I think this
becomes a problem as soon as Tasks can handle
more than one type of message, or can send delayed
replies to specific correspondents. (To finish your
request, I need some more information...)