On 09/05/2007, at 06:07, Giovanni Bajo wrote:
> On 07/05/2007 7.36, Josiah Carlson wrote:
>> By going multi-process rather than multi-threaded, one generally
>> removes shared memory from the equation. Note that this has the
>> same effect as using queues with threads, which is generally seen
>> as the only way of making threads "easy". If one *needs* shared
>> memory, we can create an mmap-based shared memory subsystem with
>> fine-grained object locking, or emulate it via a server process as
>> the processing package has done.
>> Seriously, give the processing package a try. It's much faster
>> than one would expect.
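
(A quick aside for anyone who hasn't tried it: below is a minimal
sketch of the queue-based style Josiah describes. I'm assuming the
processing package is installed and that its Process/Queue API mirrors
the threading module, as its docs advertise; treat the names as
illustrative rather than authoritative.)

    from processing import Process, Queue

    def worker(inbox, outbox):
        # Runs in a child process: pull work items off one queue and
        # push results onto another -- no shared memory involved.
        for item in iter(inbox.get, None):   # None is our stop sentinel
            outbox.put(item * item)

    if __name__ == '__main__':
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        for i in range(5):
            inbox.put(i)
        inbox.put(None)                      # tell the worker to finish
        print([outbox.get() for i in range(5)])
        p.join()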
> I'm fully +1 with you on everything.
> And part of the fact that we have to advocate this is that Python
> has always had pretty good threading libraries, but no equivalent
> for processes. Actually, Python does have problems spawning
> processes: the whole popen/popen2/subprocess mess isn't even fully
> solved yet.
> One thing to be said, though, is that using multiple processes
> causes some headaches with frozen distributions (PyInstaller,
> py2exe, etc.), like those usually found on Windows, specifically
> because Windows does not have fork(). The processing module, for
> instance, doesn't take this problem into account at all, making it
> worthless for many of my real-world use cases.
> Giovanni Bajo
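
For what it's worth, my understanding (from the package's docs, not
from testing a frozen build) is that without fork() the processing
package has to start each child by launching a fresh interpreter and
re-importing the parent's __main__ module. That is why the usual guard
is mandatory on Windows, and presumably why frozen executables, which
have no script file for the child to re-import, need special support
the package doesn't give them:

    from processing import Process

    def work():
        print('hello from the child')

    # On Windows there is no fork(), so the child re-imports this
    # module. Without the guard below, the re-import would spawn
    # another child, which would spawn another, and so on.
    if __name__ == '__main__':
        p = Process(target=work)
        p.start()
        p.join()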
Another problem is that although people like the idea of processing
more, the only concurrency modules in the Python stdlib are thread and
threading. So why not include processing in the stdlib, and have both
thread and threading give pointers to it? This would probably diminish
the problem with newcomers to the language who hate the GIL because
they don't know better (and also because threading is supported out of
the box in Python). Yes, I know there is fork, but it is still not as
usable and pythonic as processing; the sketch below shows why.
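
Roughly, the bare-fork equivalent of the Process example above looks
like this: Unix-only, with all the bookkeeping done by hand (a sketch
only, error handling omitted):

    import os

    pid = os.fork()               # Unix-only; os.fork() does not
                                  # exist on Windows
    if pid == 0:
        # Child: do the work, then exit without running the parent's
        # cleanup handlers.
        print('hello from the child')
        os._exit(0)
    else:
        # Parent: must remember to reap the child itself.
        os.waitpid(pid, 0)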