Python's "only one way to do it" philosophy isn't good?
Thu Jun 28 10:01:20 CEST 2007
Douglas Alan <doug at alum.mit.edu> writes:
> > Before the with statement, you could do the same thing but you
> > needed nested try/finally blocks
> No, you didn't -- you could just encapsulate the resource acquisition
> into an object and allow the destructor to deallocate the resource.
But without the try/finally blocks, if there is an unhandled
exception, it passes a traceback object to higher levels of the
program, and the traceback contains a pointer to the resource, so you
can't be sure the resource will ever be freed. That was part of the
motivation for the with statement.
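To make that concrete, here's a minimal sketch (the `Resource` class is hypothetical) contrasting the old try/finally spelling with the `with` statement. Both guarantee the release runs even when the body raises, instead of waiting for refcounting or GC to get around to a destructor:

```python
class Resource:
    """Hypothetical resource that must be released deterministically."""
    def __init__(self):
        self.open = True

    def close(self):
        self.open = False

    # Context-manager protocol: `with` calls __exit__ even if the body
    # raises, so cleanup doesn't depend on the object's refcount
    # dropping to zero (which a live traceback can prevent).
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # don't suppress exceptions

# Pre-`with` spelling: explicit try/finally.
r = Resource()
try:
    pass  # ... use r ...
finally:
    r.close()

# `with` spelling: same guarantee, less boilerplate.
with Resource() as r2:
    pass  # ... use r2 ...
```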
> And how's that? I should think that modern architectures would have
> an efficient way of adding and subtracting from an int atomically.
I'm not sure. In STM implementations it's usually done with a
compare-and-swap instruction (CMPXCHG on the x86) so you read the old
integer, increment a local copy, and CMPXCHG the copy into the object,
checking the swapped-out value to make sure that nobody else changed
the object between the copy and the swap (rollback and try again if
someone has). It might be interesting to wrap Python refcounts that
way, but really, Python should move to a compacting GC of some kind,
so the heap doesn't get all fragmented. Cache misses are a lot more
expensive now than they were in the era when CPython was first designed.
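The read/increment/CAS/retry loop described above can be modeled in pure Python. This is only an illustration: real hardware does the compare-and-swap in a single CMPXCHG instruction, and here a lock stands in for that atomicity. The `AtomicInt` class is my own toy, not anything in the standard library:

```python
import threading

class AtomicInt:
    """Toy model of a CMPXCHG-style atomic counter (lock simulates
    the hardware atomicity of the compare-and-swap itself)."""
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        # Returns the value actually seen; the swap happened iff
        # the returned value equals `expected`.
        with self._lock:
            seen = self.value
            if seen == expected:
                self.value = new
            return seen

    def increment(self):
        # Optimistic retry loop: read, bump a local copy, try to
        # swap it in, and retry if another thread got there first.
        while True:
            old = self.value
            if self.compare_and_swap(old, old + 1) == old:
                return old + 1

counter = AtomicInt()
threads = [threading.Thread(
               target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The retry loop means no increment is ever lost.
```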
> If they don't, I have a hard time seeing how *any* multi-threaded
> applications are going to be able to make good use of multiple processors.
They carefully manage the number of mutable objects shared between
threads, is how. A concept that doesn't mix well with CPython's use of
reference counting.
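One standard way to "carefully manage" shared mutable state is to share nothing mutable at all and pass messages instead. A small sketch with the stdlib `queue.Queue`: workers only receive tasks and send back immutable results, and the one mutable accumulator is owned by a single thread:

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:          # sentinel: shut this worker down
            break
        results.put(n * n)     # only immutable ints cross threads

workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in workers:
    tasks.put(None)
for t in workers:
    t.join()

totals = []                    # mutated only by the main thread
while not results.empty():
    totals.append(results.get())
```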
> Yes, there is. [Lisp] it's a very flexible language that can adapt
> to the needs of projects that need to push the boundaries of what
> computer programmers typically do.
Really, if they used better languages they'd be able to operate within
boundaries instead of pushing them.