The usual way of resolving configuration is command line -> environment ->
default. Currently argparse supports only command line -> default. I'd like
to suggest an optional "env" keyword to add_argument that will also resolve
from the environment (and also an optional env dictionary to the
ArgumentParser __init__ method [or to parse_args], which will default to
os.environ).
parser = ArgumentParser()
parser.add_argument('--spam', env='SPAM', default=7)
args = parser.parse_args()
./spam.py -> 7
./spam.py --spam=12 -> 12
SPAM=9 ./spam.py -> 9
SPAM=9 ./spam.py --spam=12 -> 12
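To make the semantics concrete, here is a rough subclass sketch of how the fallback could be layered on top of today's ArgumentParser. EnvArgumentParser and the promote-to-default trick are just my illustration, not a proposed API:

```python
import os
from argparse import ArgumentParser

class EnvArgumentParser(ArgumentParser):
    """Sketch: ArgumentParser that accepts env= on add_argument."""

    def __init__(self, *args, env=None, **kwargs):
        # env maps variable names to values; defaults to os.environ
        self._env = os.environ if env is None else env
        self._env_actions = []
        super().__init__(*args, **kwargs)

    def add_argument(self, *args, env=None, **kwargs):
        action = super().add_argument(*args, **kwargs)
        if env is not None:
            self._env_actions.append((env, action))
        return action

    def parse_args(self, args=None, namespace=None):
        # resolution order command line -> environment -> default:
        # promote any environment value to the parser default, so an
        # explicit command-line option still overrides it
        for var, action in self._env_actions:
            if var in self._env:
                value = self._env[var]
                if action.type is not None:
                    value = action.type(value)
                self.set_defaults(**{action.dest: value})
        return super().parse_args(args, namespace)

parser = EnvArgumentParser(env={'SPAM': '9'})
parser.add_argument('--spam', env='SPAM', default=7, type=int)
print(parser.parse_args([]).spam)  # -> 9
```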
What do you think?
Just an FYI that there are under 3 days left for mentoring organizations to
apply to Google Summer of Code; the student application deadline comes
later, in May.
If you run a project that is interested in applying under the Python
umbrella organization, contact Terri Oda at terri(a)zone12.com
Since the previous discussions raised lots of bikeshedding in
maintenance-pain directions about pickle/marshal in source code, I'd like
to reboot the discussion in a more narrow scope.

The pattern is that the stdlib has lazily computed private globals in
various modules, and I would like to propose an alternative way of writing
them. Instead of code such as
    _hostprog = None

    def splithost(url):
        """splithost('//host[:port]/path') --> 'host[:port]', '/path'."""
        global _hostprog
        if _hostprog is None:
            _hostprog = re.compile('^//([^/?]*)(.*)$')
        ...
i would prefer to see code like
from functools import lazy_global
As far as I can tell, the implementation for simple cases of expensive
objects will not need smart proxying, just some __getattr__ hook. An
untested example implementation could thus look like the following (note
that in thread races it could end up creating the object twice; a lock may
be necessary):
    class lazy_global:
        def __init__(self, func):
            self.__func = func

        def __getattr__(self, name):
            try:
                # fetched via __dict__ to avoid recursing into __getattr__
                obj = self.__dict__['_computed']
            except KeyError:
                obj = self.__dict__['_computed'] = self.__func()
                # replace myself in the module scope to get rid of
                # indirection; this stays in the except block because if
                # someone already imported us, we still need to serve
                # lookups through the proxy
                self.__func.__globals__[self.__func.__name__] = obj
            return getattr(obj, name)
I just posted an answer on quora.com about OOP (http://qr.ae/TM1Vb) and
wanted to engage the Python community on the subject.
Alan Kay's idea of message-passing in Smalltalk is interesting, and, as the
questioner says, it never took off. My answer was that Alan Kay's
abstraction of "everything is an object" fails because you can't have
message-passing, an I/O task, working in the same space as your objects --
they are two very different functionalities, and the distinction has to be
preserved **for the programmer**.
This functional separation made me think that Python could benefit
from a syntactical, language-given separation between Classes and the
messages between them, to encourage loosely-coupled, modular OOP.
Something that OOP has always promised but never delivered.
I think we should co-opt C++'s poorly used >> and << I/O operators (for
files) and re-purpose them for objects/classes. One could then have, within
interpreter space, the ability to pass a message into an object:
>>> 42 >> MyObject #sends 42 as a message into MyObject
The object definition would then have a special method __in__ to receive
data, and a special way of outputting data that can be caught.
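For what it's worth, today's operator machinery can already half-emulate this shape; a toy sketch where Python's existing __rrshift__ hook plays the role of the proposed __in__ method:

```python
class Receiver:
    """Toy object that accepts messages sent with the >> operator."""

    def __init__(self):
        self.received = []

    def __rrshift__(self, message):
        # For `42 >> obj`, int.__rshift__ returns NotImplemented for a
        # non-integer right operand, so Python falls back to this hook,
        # which acts like the proposed __in__ method
        self.received.append(message)
        return self  # return the receiver so sends can be chained

obj = Receiver()
42 >> obj
"hello" >> obj
# obj.received is now [42, "hello"]
```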
I'm hoping the community can comment on the matter....
On Tue, Mar 19, 2013 at 1:09 PM, Terry Reedy <tjreedy(a)udel.edu> wrote:
> On 3/18/2013 11:31 PM, Andrew Barnert wrote:
>> The idea that message passing is fundamentally different from method
>> calling also turned out to be one of those strange ideas, since it
>> only took a couple years to prove that they are theoretically
>> completely isomorphic—and,
> Since the isomorphism is so obvious, I somehow missed that Kay actually
> thought that they were different. I suppose one could have different (but
> isomorphic) mental image models.
Yes, that's the point I'm making, and it's significant because programmers
can't see each other's mental models.
On Mon, Mar 18, 2013 at 2:51 PM, Andrew Barnert <abarnert(a)yahoo.com> wrote:
> Have you even looked at a message-passing language?
> A Smalltalk "message" is a selector and a sequence of arguments. That's what you send around. Newer dynamic-typed message-passing OO and actor languages are basically the same as Smalltalk.
Yes, but you have to understand that Alan Kay came at this with strange
ideas of some future computer-human symbiosis. So his language design, and
other similar attempts (like PHP), are rather skewed by that premise.
And also, despite name-dropping, I'm not trying to create anything
like that idea of message-passing. I'm talking about something very
simple, a basic and universal way for objects to communicate.
>> With function or method syntax, you're telling the computer to
>> "execute something", but that is not the right concepts for OOP. You
>> want the objects to interact with each other and in a high-level
>> language, the syntax should assist with that.
> And you have to tell the objects _how_ to interact with each other.
This is a different paradigm than what I'm talking about. In the OOP of my
world, objects already embody the intelligence of how they are going to
interact with the outside world, because I put it there.
> Even with reasonably intelligent animals, you don't just tell two animals to interact, except in the rare case where you don't care whether they become friends or dinner.
Your model of computer programming is very alien to me, so I don't think it
will be productive to try to convince you of what I'm suggesting, but feel
free to continue...
I'm one of the authors of monocle (https://github.com/saucelabs/monocle),
an async Python programming framework with a blocking look-alike syntax
based on generators. We've been using monocle at my startup Sauce Labs, and
I used its predecessor on a small team at BitTorrent. I think it's met with
some amazing success vs. similar projects using threads. In about five
years of working with teams with some members who haven't always understood
monocle very well, we've never written a deadlock, for example.
I've been keeping up with PEPs and skimming python-ideas discussions about
concurrency-related features in Python, and feeling unsure about how best
to jump into the conversation. Today it occurred to me that sooner is
better and I should just get all my various opinions out there in case
they're useful.
First of all, I want to reiterate: while I have mainly criticism to offer,
it is only intended to be helpful. I don't have a lot of time to devote to
this myself, so I'm trying to do what I can to be useful to those who do.
I think I'll take this one PEP at a time, and start with the easiest one.
PEP 3148 - Futures
First, I like the idea of this PEP. I think it'll improve interoperability
between async frameworks in Python.
I believe the name "future" is a poor choice. After many years of
explaining this concept to programmers, I've found that its various
abstract-conceptual names do a lot to confuse people. Twisted's "deferred"
is probably the worst offender here, but both "future" and "promise" also
make this very simple construct sound complicated, and it's a crime.
There are two ways I've found to explain this that seem to keep people from
getting confused.
The first is, call it a "callback registry". Many programmers are familiar
with the idea of passing callbacks into a function. You just explain that
here, instead of taking the callbacks as parameters, we return a callback
registry object and you can add your callbacks there. (If the operation is
already done they'll just get called when you add them.)
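PEP 3148's Future fits this description directly: add_done_callback is the "add your callbacks here" method, and a callback added after completion runs immediately. For example:

```python
from concurrent.futures import ThreadPoolExecutor

def compute():
    return 6 * 7

results = []
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(compute)  # the "callback registry"
    # register interest instead of passing a callback parameter
    future.add_done_callback(lambda f: results.append(f.result()))
# results == [42] once the pool has shut down
```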
The second is a variant on that: say that this function, instead of taking
a callback as a parameter, returns its callback. That sounds useless, but
fortunately the callback it returns is a callable object, and it also has
an "add" method that lets you add functions that it will call when it is
called back. So it's a callback that you can manipulate to do what you
want. (This is what we do in monocle.)
These approaches, by focusing on the simple and familiar idea of callbacks,
do a lot to eliminate confusing ideas about communicating with the future
of the process.
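A minimal sketch of that second framing (names and structure are my own illustration here, not monocle's actual API):

```python
class Callback:
    """A callable handed back to the caller: the operation calls it when
    done, and users add() functions to run at that point."""

    def __init__(self):
        self._listeners = []
        self._fired = False
        self._result = None

    def add(self, func):
        if self._fired:
            func(self._result)  # operation already finished: run now
        else:
            self._listeners.append(func)

    def __call__(self, result=None):
        self._fired = True
        self._result = result
        for func in self._listeners:
            func(result)

def start_read():
    cb = Callback()
    # real code would kick off the I/O here and arrange for cb(data)
    # to be invoked on completion
    return cb

cb = start_read()
seen = []
cb.add(seen.append)
cb("some data")  # simulate the I/O completing
# seen is now ["some data"]
```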
Finally, I'm not sure I get the point of the Executor class in PEP 3148. It
seems to me that any real implementation of that class needs a call_later
method for scheduling operations, and a way of connecting up to event-based
IO APIs. I don't really understand what Executor would be good for.
Thanks for reading.
Sauce Labs Cofounder and VP Product
> Ian Cordasco wrote:
>> On Sun, Mar 17, 2013 at 11:53 PM, Mark Janssen
>> <dreamingforward(a)gmail.com> wrote:
>>> I just posted an answer on quora.com about OOP (http://qr.ae/TM1Vb)
>>> and wanted to engage the python community on the subject.
> My answer to that question would be that it *did*
> catch on, it's just that we changed the terminology.
> Instead of message passing, we talk about calling methods.
Yes, but this is where it breaks the OOP abstraction by 90 degrees. By
using function calls, you're telling the machine to do something. But when
you want to pass something to an object, there should be a natural way to
do this for every object. By using methods, you pollute the concept space
with all sorts of semi-random (i.e. personal) names, like append, add,
enqueue, etc.
This proposal would not only make a consistent syntax across all objects,
but train the programmer to *think* modularly, in the sense of having a
community of re-usable objects -- i.e. "What should I do if another object
passes me something?" No one thinks this now, because the programmer
expects new developers to learn *their* interface!