On Nov 13, 2013, at 20:17, Haoyi Li <haoyi.sg@gmail.com> wrote:

>> But to handle a vararg function, you'd need a separate syntax for partializing vs. calling.

> I personally like the _ notation used by Scala; with macros you could easily write something like:
>
> f[spam(_, 5)]
> f[spam(_, n=_)]
>
> Which desugars into
>
> lambda x: spam(x, 5)
> lambda x, y: spam(x, n=y)

Actually, you can get about 80% of the way without macros. I've got an expression template library (that I never finished) that lets you write things like this:

    _2 ** (1 - _1)

Where _1 and _2 are clever objects that turn this into an equivalent of:

    lambda x, y: y ** (1 - x)

(Although it's actually a chain of calls, one for each operator.)
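
For the curious, here's a stripped-down sketch of how such placeholder objects can be built (illustrative only; the names and details are my guesses, not the actual library). Each placeholder carries a function from the positional-argument tuple to a value, and each overloaded operator combines two such functions:

```python
class Expr:
    def __init__(self, fn):
        self.fn = fn  # maps the args tuple to a value

    @staticmethod
    def _lift(x):
        # Wrap constants so every operand can be evaluated uniformly.
        return x if isinstance(x, Expr) else Expr(lambda args, v=x: v)

    def __pow__(self, other):
        other = Expr._lift(other)
        return Expr(lambda args: self.fn(args) ** other.fn(args))

    def __rsub__(self, other):
        # Handles constant - placeholder, e.g. (1 - _1).
        other = Expr._lift(other)
        return Expr(lambda args: other.fn(args) - self.fn(args))

    def __call__(self, *args):
        # In this toy version, calling the finished expression evaluates it.
        return self.fn(args)

_1 = Expr(lambda args: args[0])
_2 = Expr(lambda args: args[1])

f = _2 ** (1 - _1)  # behaves like lambda x, y: y ** (1 - x)
```

Note that having __call__ evaluate is itself a cheat: a real library has to treat calls as expression-building too, which is exactly where the complications below come from.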

The big missing part is handling function calls. It's easy when the function is one of your magic lambda args: _1(0) becomes lambda f: f(0). But when the function is a normal function, there's no __rcall__ to override, so you have to write it like this:

    _(f)(_1)

And you need similar tricks whenever you have an expression where neither argument is magic but you want the value delayed. For example, to get lambda: a+b you need to write _(a) + b.
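
Here's a hedged sketch of that _ wrapper (again my own toy, not the real library): it lifts a plain value or function into the delayed world, so that calls and + on it build expressions instead of executing. Evaluation is spelled .run() here to keep __call__ free for building call expressions:

```python
import math

class Delayed:
    def __init__(self, fn):
        self.fn = fn  # maps the args tuple to a value

    @staticmethod
    def _lift(x):
        return x if isinstance(x, Delayed) else Delayed(lambda args, v=x: v)

    def __call__(self, *ops):
        # Calling builds a delayed call expression rather than executing.
        ops = [Delayed._lift(o) for o in ops]
        return Delayed(lambda args: self.fn(args)(*(o.fn(args) for o in ops)))

    def __add__(self, other):
        other = Delayed._lift(other)
        return Delayed(lambda args: self.fn(args) + other.fn(args))

    def run(self, *args):
        # Evaluate the finished expression against actual arguments.
        return self.fn(args)

def _(value):
    # Lift an ordinary value (or function) into the delayed world;
    # its current value is captured now.
    return Delayed(lambda args: value)

_1 = Delayed(lambda args: args[0])

g = _(math.floor)(_1)  # behaves like lambda x: math.floor(x)
a, b = 10, 32
h = _(a) + b           # behaves like lambda: a + b (values captured now)
```

So g.run(2.7) gives 2, and h.run() gives 42.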

And there's also the fact that not every operator in Python is overloadable (and, or, and not, for example).

Anyway, I like the explicitly-numbered arguments better than the single _, because it allows you to use the same argument multiple times in the expression. (Plus, I already used plain _ as the wrapper to turn a normal value into a magic delayed value.) But I'll bet you could design it so _ works like in Scala, but _1 through _9 work my way (much as format strings can take both {} and {0}).

As for why they're 1-based instead of 0-based, I don't remember; I suspect the only explanation is that I'm an idiot.

Anyway, the big problem with the _ delay function is that sometimes you want a value closure, and sometimes a name. The only way I could think of to handle both is to spell the latter as _('x'), which is ugly, seems to encourage dynamic strings as arguments (they work, but it's as bad an idea as using globals()['x'] or eval('x') in normal code), and, worst of all, looks exactly like gettext, which is a great way to confuse people.
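
To make the value-vs-name distinction concrete, here's a toy version of the string-based spelling (my sketch, using sys._getframe as the "bit of frame hacking"): it defers the lookup to call time instead of capturing a value at build time.

```python
import sys

def _name(name):
    # Resolve `name` in the caller's scope when the thunk runs,
    # not when it's built.
    frame = sys._getframe(1)
    def thunk():
        if name in frame.f_locals:
            return frame.f_locals[name]
        return frame.f_globals[name]
    return thunk

x = 1
t = _name('x')
x = 2  # t() now sees 2, where a value closure would have captured 1
```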

I later realized that with a bit of frame hacking, I could write a separate function _n(x). Tearing everything apart to make that work is where I ran out of steam and ended up with an incomplete library.

> If you were willing to special-case further, you could simply have
>
> spam(_, 5)
> spam(_, n=_)
>
> stand for the partial application. Granted, _ is already used for other things (e.g. i18n), but that's a solvable problem (make i18n use __, or let partial application use $).




On Wed, Nov 13, 2013 at 7:13 PM, Andrew Barnert <abarnert@yahoo.com> wrote:
On Nov 13, 2013, at 12:44, אלעזר <elazarg@gmail.com> wrote:

> If it was part of a bigger feature, like
> ML's curried functions syntax, it would have been great - things like:
>
>    perr = print sys.stderr
>    perr "Bad command or file name"

I know this is getting way off topic, but the real problem with doing curried/auto-partial functions in Python isn't the parens, it's the variable arguments. An auto-partial function has to accumulate arguments if it doesn't get enough, and execute when it does. (Currying gives you that for free, because it means you only get one argument at a time, but you can do auto-partials without currying.)

With print, how do you know when it has "enough" arguments?

You can write a decorator that only works on functions with fixed parameter counts pretty easily:

    @autopartial
    def spam(word, n):
        for _ in range(n):
            print(word)

    eggs = spam('eggs')
    eggs(3)
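
For reference, here's one way to write that decorator (a sketch: the signature-counting is my assumption, keyword corner cases are glossed over, and it returns a list instead of printing so the result is visible):

```python
import functools
import inspect

def autopartial(func):
    # Count parameters that still need a value.  functools.partial
    # objects report a reduced signature, so the recursion works.
    required = sum(
        1 for p in inspect.signature(func).parameters.values()
        if p.default is p.empty
    )

    def wrapper(*args, **kwargs):
        if len(args) + len(kwargs) >= required:
            return func(*args, **kwargs)
        # Not enough arguments yet: bind what we have and wait for more.
        return autopartial(functools.partial(func, *args, **kwargs))
    return wrapper

@autopartial
def spam(word, n):
    return [word] * n

eggs = spam('eggs')  # too few arguments, so this is a partial
```

Then eggs(3) has enough and executes. A *args function defeats this immediately: there's no meaningful value for required, which is the point above.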

But to handle a vararg function, you'd need a separate syntax for partializing vs. calling.

Using a different operator like [] or % or << seems attractive at first, but it can't handle keywords.

You could add a method, so spam._(n=5) returns partial(spam, n=5), but ._ is hideous, and anything meaningful like bind or partial is no longer a shortcut.

You could use a special argument value, and ... looks perfect, especially as args[-1]: spam('eggs', ...). Until you consider keyword args, which come after args[-1]. So the best you can do is args[0]: spam(..., 'eggs', n=3). That isn't terrible, but I'm not sure it's nice enough to be worth the cost of people not understanding what it's doing.
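
As a userspace approximation of that (a decorator rather than the language feature being mused about; the name dotpartial is made up):

```python
import functools

def dotpartial(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # A leading ... means "bind the rest and give me a partial".
        if args and args[0] is Ellipsis:
            return functools.partial(func, *args[1:], **kwargs)
        return func(*args, **kwargs)
    return wrapper

@dotpartial
def spam(word, n):
    return [word] * n

threes = spam(..., n=3)  # like partial(spam, n=3)
```

threes('eggs') then behaves like spam('eggs', n=3).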



_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas