Python-ideas search results for query "matrix function composition"
python-ideas@python.org - 67 messages
Re: [Python-ideas] Fwd: [RFC] draft PEP: Dedicated infix operators for matrix multiplication and matrix power
by Nick Coghlan
On 18 March 2014 20:47, Robert Kern <robert.kern(a)gmail.com> wrote:
> On 2014-03-18 08:02, Nick Coghlan wrote:
>> > operator.matmul and PyObject_MatrixMultiply are obvious enough, but
>> > I'm afraid I'm not too clear on the tradeoffs about adding a C level
>> > type slot, or even entirely sure what the alternative is. (I guess I
>> > just assumed that all special methods used C level type slots and
>> > there was nothing to think about.) Do you (or anyone) have any
>> > thoughts?
>>
>> I suspect you're going to want one, as without it, the implementation
>> method ends up in the class dict instead (the context management protocol
>> works that way).
>>
>> I suspect the design we will want is a new struct for Py_Matrix slots
>> (akin to those for numbers, etc). The alternative would be to just add
>> more "Number" slots, but that isn't really accurate.
So, here's the change to PyHeapTypeObject that makes the most sense
to me (assuming both "@" and "@@" are added - drop the new "power"
methods if "@@" is dropped from the PEP):
- add "PyMatrixMethods as_matrix;" as a new field in PyHeapTypeObject
- define PyMatrixMethods as:
typedef struct {
    binaryfunc mt_multiply;
    binaryfunc mt_power;
    binaryfunc mt_inplace_multiply;
    binaryfunc mt_inplace_power;
} PyMatrixMethods;
This approach increases the size of all type objects by one pointer.
The other way to do it would be to just add four new slots to PyNumberMethods:
    binaryfunc nb_matrix_multiply;
    binaryfunc nb_matrix_power;
    binaryfunc nb_inplace_matrix_multiply;
    binaryfunc nb_inplace_matrix_power;
This approach increases the size of all type objects that define one
or more of the numeric functions by four pointers, and doesn't really
make sense at a conceptual level. The latter is the main reason I
prefer the separate PyMatrixMethods struct.
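At the Python level, the practical effect Nick is describing can be observed directly: operator dispatch looks special methods up on the type, never on the instance. A minimal sketch of that behaviour (not from the thread; requires Python 3.5+ for the @ operator):

```python
class A:
    pass

a = A()
# Assigning a special method on the *instance* is ignored by the operator:
a.__matmul__ = lambda other: "instance"
# Assigning it on the *type* works, even after instances exist:
A.__matmul__ = lambda self, other: "type"

print(a @ a)  # -> "type": @ dispatches through type(a), not a.__dict__
```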
Other "should probably be listed in the PEP for completeness" change
is that this will need new opcodes and AST nodes. Reviewing the
current opcode list and node names, I would suggest:
BINARY_MATRIX_MULTIPLY
BINARY_MATRIX_POWER
INPLACE_MATRIX_MULTIPLY
INPLACE_MATRIX_POWER
MatMult | MatPow
> Would it be more palatable if the name were something like __altmul__ or
> __auxmul__ rather than __matmul__? Really, it's just a second
> multiplication-like operator. The leading use case for a second
> multiplication-like operator happens to be matrix multiplication, but I
> strongly suspect it will get used for other mathematical things like
> symbolic function composition or operator application (as in "linear
> operator", not +-*/) and maybe some secondary multiplication types in the
> weirder groups and fields (you can bet I will resurrect my Clifford algebra
> module to use this operator for one of the several types of multiplication
> they support). Granted, there is still some awkwardness in that *none* of
> the builtin number types will support it.
I think "matmul" is fine. That makes the primary intended use case
clear, without preventing its use for other purposes (like vector dot
products or more exotic things). The magic method names are "add",
"mul", "div", "mod", etc, even though we occasionally use them for
other purposes (e.g. concatenation, sequence repetition, path joining,
interpolation).
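As an illustration of the "other purposes" Nick mentions, a hypothetical toy vector type could reuse __matmul__ for the dot product (this class is invented here, not proposed in the thread):

```python
class Vec:
    """Toy vector that reuses __matmul__ for the dot product."""
    def __init__(self, *xs):
        self.xs = xs

    def __matmul__(self, other):
        if not isinstance(other, Vec):
            return NotImplemented
        # Dot product: sum of pairwise products.
        return sum(a * b for a, b in zip(self.xs, other.xs))

print(Vec(1, 2, 3) @ Vec(4, 5, 6))  # 1*4 + 2*5 + 3*6 -> 32
```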
Cheers,
Nick.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
10 years, 2 months
Re: [Python-ideas] Function composition (was no subject)
by Ivan Levkivskyi
On 10 May 2015 at 02:05, Andrew Barnert <abarnert(a)yahoo.com> wrote:
> On May 9, 2015, at 16:28, Ivan Levkivskyi <levkivskyi(a)gmail.com> wrote:
>
> I was thinking about the recent ideas discussed here. I also went back to
> the origins of my initial idea. The point is that it came from Numpy: I use
> Numpy arrays every day, and typically I do exactly something like
> root(mean(square(data))).
>
> Now I am thinking: what actually is a matrix? It is something that takes a
> vector and returns a vector. But elementwise functions actually do the same
> thing. It does not really matter what we do with a vector: transform it by
> a product of matrices or by a composition of functions. In other words, I
> agree with Andrew that "elementwise" is a good match with compose, and what
> we really need is to "pipe" things that take a vector (or just an iterable)
> and return a vector (iterable).
>
> So probably a good place (in a potential future) for compose would be not
> functools but itertools. But indeed a good place to test this would be
> Numpy.
>
>
> Itertools is an interesting idea.
>
> Anyway, assuming NumPy isn't going to add this in the near future (has
> anyone even brought it up on the NumPy list, or only here?), it wouldn't be
> that hard to write a (maybe inefficient but working) @composable wrapper
> and wrap all the relevant callables from NumPy or from itertools, upload it
> to PyPI, and let people start coming up with good examples. If it's later
> worth direct support in NumPy and/or Python (for simplicity or
> performance), the module will still be useful for backward compatibility.
>
>
This is a good step-by-step approach. This is what I would try.
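A minimal sketch of what such a wrapper might look like (the `composable` class is hypothetical, not any published package; real implementations would also need to handle methods and efficiency):

```python
import math

class composable:
    """Hypothetical wrapper: f @ g builds the composition f(g(...)),
    and calling the wrapper just calls the underlying function."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

    def __matmul__(self, other):
        # Unwrap other composables so chains stay flat.
        g = other.func if isinstance(other, composable) else other
        return composable(lambda *a, **kw: self.func(g(*a, **kw)))

root = composable(math.sqrt)
mean = composable(lambda xs: sum(xs) / len(xs))
square = composable(lambda xs: [x * x for x in xs])

rms = root @ mean @ square   # parses as (root @ mean) @ square, same result
print(rms([3, 4]))           # sqrt((9 + 16) / 2)
```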
> An additional comment: it is indeed good to have both @ and | for compose
> and rcompose.
> Side note, one can actually overload __rmatmul__ on arrays as well so that
> you can write
>
> root @ mean @ square @ data
>
>
> But this doesn't need to overload it on arrays, only on the ufuncs, right?
>
> Unless you're suggesting that one of these operations could be a matrix as
> easily as a function, and NumPy users often won't have to care which it is?
>
>
Exactly, this is what I want. Note that with such an approach you have no
parentheses at all.
>
> Moreover, one can overload __or__ on arrays, so that one can write
>
> data | square | mean | root
>
> even with ordinary functions (not Numpy's ufuncs or composable).
>
>
> That's an interesting point. But I think this will be a bit confusing,
> because now it _does_ matter whether square is a matrix or a
> function--you'll get elementwise bitwise or instead of application. (And
> really, this is the whole reason for @ in the first place--we needed an
> operator that never means elementwise.)
>
> Also, this doesn't let you actually compose functions--if you want square
> | mean | root to be a function, square has to have a __or__ operator.
>
>
This is true. The | is more limited because of its current semantics. The
fact that the | operator already has a widely used meaning is also why I
would choose @ if I had to pick only one of @ and |.
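For completeness, the data-first pipeline can be sketched with a hypothetical wrapper type; a dedicated type is needed precisely because, as Andrew notes, | on real NumPy arrays already means elementwise bitwise or (the `Piped` class below is invented for illustration):

```python
import math

class Piped:
    """Hypothetical data wrapper: value | f applies f and keeps piping."""
    def __init__(self, value):
        self.value = value

    def __or__(self, func):
        # Apply the function and rewrap, so pipelines chain left to right.
        return Piped(func(self.value))

square = lambda xs: [x * x for x in xs]
mean = lambda xs: sum(xs) / len(xs)

result = Piped([1, 2, 3, 4]) | square | mean | math.sqrt
print(result.value)  # sqrt((1 + 4 + 9 + 16) / 4)
```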
> These examples are actually "flat is better than nested" in the extreme
> form.
>
> Anyway, they (Numpy) are going to implement the @ operator for arrays; maybe
> it would be a good idea to check whether the operand to the left of the
> array is not an array but a callable, and if so apply it elementwise.
>
> Concerning multi-argument functions, I don't like the $ symbol, I don't know
> why. It seems really unintuitive that it should mean partial application.
> One can autocurry composable functions and apply the same rules that Numpy
> uses for ufuncs.
> More precisely, if I write
>
> add(data1, data2)
>
> with arrays it applies add pairwise. But if I write
>
> add(data1, 42)
>
> it is also fine, it simply adds 42 to every element. With autocurrying one
> could write
>
> root @ mean @ add(data) @ square @ data2
>
> or
>
> root @ mean @ square @ add(42) @ data
>
> However, as I see it now, it is not very readable, so maybe the best
> choice is to reserve @ and | for "piping" iterables through transformers
> that take one argument. In other words, it should be left to the user to
> make add(42) of an appropriate type. It is the same logic as for
> decorators: if I write
>
> @modify(arg)
> def func(x):
>     return None
>
> I must care that modify(arg) evaluates to something that takes one
> callable and returns a callable.
>
>
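The autocurrying-plus-broadcasting behaviour Ivan describes can be sketched on plain lists (the `curried` class is invented here; real NumPy ufuncs do the broadcasting part, but not the partial-application part):

```python
class curried:
    """Hypothetical autocurrying wrapper for a two-argument elementwise op,
    mimicking NumPy-style broadcasting on plain lists."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args):
        if len(args) == 1:
            # Partial application: add(42) waits for the data.
            return lambda data: self(data, args[0])
        a, b = args
        if isinstance(a, list) and isinstance(b, list):
            return [self.func(x, y) for x, y in zip(a, b)]   # pairwise
        if isinstance(a, list):
            return [self.func(x, b) for x in a]              # broadcast scalar
        return self.func(a, b)

add = curried(lambda x, y: x + y)
print(add([1, 2], [10, 20]))  # pairwise -> [11, 22]
print(add([1, 2], 42))        # broadcast -> [43, 44]
print(add(42)([1, 2]))        # partial, then apply -> [43, 44]
```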
> On May 9, 2015, at 01:36, Stephen J. Turnbull <stephen(a)xemacs.org> wrote:
>> >
>> > Andrew Barnert writes:
>> >>> On May 8, 2015, at 19:58, Stephen J. Turnbull <stephen(a)xemacs.org>
>> wrote:
>> >>>
>> >>> Koos Zevenhoven writes:
>> >>>
>> >>>> As a random example, (root @ mean @ square)(x) would produce the
>> right
>> >>>> order for rms when using [2].
>> >>>
>> >>> Hardly interesting. :-) The result is an exception, as root and
>> square
>> >>> are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
>> >>
>> >> Unless you're using an elementwise square and an array-to-scalar
>> >> mean, like the ones in NumPy,
>> >
>> > Erm, why would square be elementwise and root not? I would suppose
>> > that everything is element-wise in Numpy (not a user yet).
>>
>> Most functions in NumPy are elementwise when applied to arrays, but can
>> also be applied to scalars. So, square is elementwise because it's called
>> on an array, root is scalar because it's called on a scalar. (In fact, root
>> could also be elementwise--aggregating functions like mean can be applied
>> across just one axis of a 2D or higher array, reducing it by one dimension,
>> if you want.)
>>
>> Before you try it, this sounds like a complicated nightmare that can't
>> possibly work in practice. But play with it for just a few minutes and it's
>> completely natural. (Except for a few cases where you want some array-wide
>> but not element-wise operation, most famously matrix multiplication, which
>> is why we now have the @ operator to play with.)
>>
>> >> in which case it works perfectly well...
>> >
>> > But that's an aspect of my point (evidently, obscure). Conceptually,
>> > as taught in junior high school or so, root and square are scalar-to-
>> > scalar. If you are working in a context such as Numpy where it makes
>> > sense to assume they are element-wise and thus composable, the context
>> > should provide the compose operator(s).
>>
>> I was actually thinking on these lines: what if @ didn't work on
>> types.FunctionType, but did work on numpy.ufunc (the name for the
>> "universal function" type that knows how to broadcast across arrays but
>> also work on scalars)? That's something NumPy could implement without any
>> help from the core language. (Methods are a minor problem here, but it's
>> obvious how to solve them, so I won't get into it.) And if it turned out to
>> be useful all over the place in NumPy, that might turn up some great uses
>> for the idiomatic non-NumPy Python, or it might show that, like elementwise
>> addition, it's really more a part of NumPy than of Python.
>>
>> But of course that's more of a proposal for NumPy than for Python.
>>
>> > Without that context, Koos's
>> > example looks like a TypeError.
>>
>> >> But Koos's example, even if it was possibly inadvertent, shows that
>> >> I may be wrong about that. Maybe compose together with element-wise
>> >> operators actually _is_ sufficient for something beyond toy
>> >> examples.
>> >
>> > Of course it is!<wink /> I didn't really think there was any doubt
>> > about that.
>>
>> I think there was, and still is. People keep coming up with abstract toy
>> examples, but as soon as someone tries to give a good real example, it only
>> makes sense with NumPy (Koos's) or with some syntax that Python doesn't
>> have (yours), because to write them with actual Python functions would
>> actually be ugly and verbose (my version of yours).
>>
>> I don't think that's a coincidence. You didn't write "map square" because
>> you don't know how to think in Python, but because using compose profitably
>> inherently implies not thinking in Python. (Except, maybe, in the case of
>> NumPy... which is a different idiom.) Maybe someone has a bunch of obvious
>> good use cases for compose that don't also require other functions,
>> operators, or syntax we don't have, but so far, nobody's mentioned one.
>>
>> ------------------------------
>>
>> On 5/9/2015 6:19 AM, Andrew Barnert via Python-ideas wrote:
>>
>> > I think there was, and still is. People keep coming up with abstract
>> toy examples, but as soon as someone tries to give a good real example, it
>> only makes sense with NumPy (Koos's) or with some syntax that Python
>> doesn't have (yours), because to write them with actual Python functions
>> would actually be ugly and verbose (my version of yours).
>> >
>> > I don't think that's a coincidence. You didn't write "map square"
>> because you don't know how to think in Python, but because using compose
>> profitably inherently implies not thinking in Python. (Except, maybe, in
>> the case of NumPy... which is a different idiom.) Maybe someone has a bunch
>> of obvious good use cases for compose that don't also require other
>> functions, operators, or syntax we don't have, but so far, nobody's
>> mentioned one.
>>
>> I agree that @ is most likely to be useful in numpy's restricted context.
>>
>> A composition operator is usually defined by application: f@g(x) is
>> defined as f(g(x)). (I'm sure there are also axiomatic treatments.) It
>> is an optional syntactic abbreviation. It is most useful in a context
>> where there is one set of data objects, such as the real numbers, or one
>> set plus arrays (vectors) defined on that set; where all functions are
>> univariate (or possibly multivariate, but those can be transformed to
>> univariate on vectors); *and* where parameter names are dummies like
>> 'x', 'y', 'z', or '_'.
>>
>> The last point is important. Abbreviating h(x) = f(g(x)) with h = f @ g
>> does not lose any information as 'x' is basically a placeholder (so get
>> rid of it). But parameter names are important in most practical
>> contexts, both for understanding a composition and for using it.
>>
>> def npv(transfers, discount):
>>     '''Return the net present value of discounted transfers.
>>
>>     transfers: finite iterable of amounts at constant intervals
>>     discount: fraction per interval
>>     '''
>>     divisor = 1 + discount
>>     return sum(transfer / divisor**time
>>                for time, transfer in enumerate(transfers))
>>
>> Even if one could replace the def statement with
>> npv = <some combination of @, sum, map, add, div, power, enumerate, ...>
>> with parameter names omitted, it would be harder to understand. Using
>> it would require the ability to infer argument types and order from the
>> composed expression.
>>
>> I intentionally added a statement to calculate the common subexpression
>> prior to the return. I believe it would have to be put back into the
>> return expression before converting.
>>
>> --
>> Terry Jan Reedy
>>
>>
>>
>> ------------------------------
>>
>> On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
>> >> >I suppose you could write (root @ mean @ (map square)) (xs),
>>
>> > Actually, you can't. You could write (root @ mean @ partial(map,
>> > square))(xs), but that's pretty clearly less readable than
>> > root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's
>> > been my main argument: Without a full suite of higher-level operators
>> > and related syntax, compose alone doesn't do you any good except for toy
>> > examples.
>>
>> How about an operator for partial?
>>
>> root @ mean @ map $ square(xs)
>>
>>
>> Actually I'd rather reuse the binary operators. (I'd be happy if they
>> were
>> just methods on bytes objects BTW.)
>>
>> compose(root, mean, map(square, xs))
>>
>> root ^ mean ^ map & square (xs)
>>
>> root ^ mean ^ map & square ^ xs ()
>>
>> Read this as...
>>
>> compose root, of mean, of map with square, of xs
>>
>> Or...
>>
>> apply(map(square, xs), mean, root)
>>
>> map & square | mean | root (xs)
>>
>> xs | map & square | mean | root ()
>>
>>
>> Read this as...
>>
>> apply xs, to map with square, to mean, to root
>>
>>
>> These are kind of cool, but does it make python code easier to read? That
>> seems like it may be subjective depending on the amount of programming
>> experience someone has.
>>
>> Cheers,
>> Ron
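Ron's spellings can actually be made to parse today with a hypothetical wrapper class, since Python's precedence (& binds tighter than ^) happens to match his intended reading. A sketch (the names `fn` and `fmap` are invented here; `map` is renamed to avoid shadowing the builtin):

```python
import math

class fn:
    """Hypothetical wrapper: f ^ g composes (f after g),
    f & x partially applies x as the first argument."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args):
        return self.func(*args)

    def __xor__(self, other):                 # f ^ g  ->  f(g(...))
        g = other.func if isinstance(other, fn) else other
        return fn(lambda *a: self.func(g(*a)))

    def __and__(self, arg):                   # f & x  ->  partial(f, x)
        return fn(lambda *a: self.func(arg, *a))

root = fn(math.sqrt)
mean = fn(lambda xs: sum(xs) / len(xs))
square = lambda x: x * x
fmap = fn(lambda f, xs: list(map(f, xs)))

rms = root ^ mean ^ fmap & square  # parses as (root ^ mean) ^ (fmap & square)
print(rms([3, 4]))                 # sqrt((9 + 16) / 2)
```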
>>
>>
>>
>> ------------------------------
>>
>> Hi,
>> I had to answer some of these questions when I wrote Lawvere:
>> https://pypi.python.org/pypi/lawvere
>>
>> First, there are two kinds of composition, pipe and circle, so I think a
>> single operator like @ is a bit restrictive.
>> I like "->" and "<-".
>>
>> Then, for function names and function-to-string conversion I had to
>> introduce function signatures (a tuple).
>> It provides a good tool for decomposition, introspection and comparison,
>> in keeping with the mathematical definition.
>>
>> Finally, for me composition makes sense when you have typed functions;
>> otherwise it can easily become a mess, and this ties composition to
>> multiple dispatch.
>>
>> I really hope composition will be introduced in Python, but I can't see
>> how it could be done without rethinking a good part of function definition.
>>
>>
>>
>> 2015-05-09 17:38 GMT+02:00 Ron Adam <ron3200(a)gmail.com>:
>>
>> >
>> >
>> > On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
>> >
>> >> >I suppose you could write (root @ mean @ (map square)) (xs),
>> >>>
>> >>
>> > Actually, you can't. You could write (root @ mean @ partial(map,
>> >> square))(xs), but that's pretty clearly less readable than
>> >> root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's
>> >> been my main argument: Without a full suite of higher-level operators
>> >> and related syntax, compose alone doesn't do you any good except for
>> toy
>> >> examples.
>> >>
>> >
>> > How about an operator for partial?
>> >
>> > root @ mean @ map $ square(xs)
>> >
>> >
>> > Actually I'd rather reuse the binary operators. (I'd be happy if they
>> > were just methods on bytes objects BTW.)
>> >
>> > compose(root, mean, map(square, xs))
>> >
>> > root ^ mean ^ map & square (xs)
>> >
>> > root ^ mean ^ map & square ^ xs ()
>> >
>> > Read this as...
>> >
>> > compose root, of mean, of map with square, of xs
>> >
>> > Or...
>> >
>> > apply(map(square, xs), mean, root)
>> >
>> > map & square | mean | root (xs)
>> >
>> > xs | map & square | mean | root ()
>> >
>> >
>> > Read this as...
>> >
>> > apply xs, to map with square, to mean, to root
>> >
>> >
>> > These are kind of cool, but does it make python code easier to read?
>> That
>> > seems like it may be subjective depending on the amount of programming
>> > experience someone has.
>> >
>> > Cheers,
>> > Ron
>> >
>> >
>>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas(a)python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
9 years
Re: [Python-ideas] (no subject)
by Andrew Barnert
On May 6, 2015, at 18:13, Stephen J. Turnbull <stephen(a)xemacs.org> wrote:
>
> Ivan Levkivskyi writes:
>
>> Ok, I will try inspecting all existing approaches to find the one
>> that seems more "right" to me :)
>
> If you do inspect all the approaches you can find, I hope you'll keep
> notes and publish them, perhaps as a blog article.
>
>> In any case that approach could be updated by incorporating matrix
>> @ as a dedicated operator for compositions.
>
> I think rather than "dedicated" you mean "suggested". One of Andrew's
> main points is that you're unlikely to find more than a small minority
> agreeing on the "right" approach, no matter which one you choose.
Whatever wording you use, I do think it's likely that at least some of the existing libraries would become much more readable just by using @ in place of what they currently use. Even better, it may also turn out that the @ notation just "feels right" with one solution to the argument problem and wrong with another, narrowing down the possibility space.
So, I think it's definitely worth pushing the experiments if someone has the time and inclination, so I'm glad Ivan has volunteered.
>> At least, it seems that Erik from astropy likes this idea and it is
>> quite natural for people with "scientific" background.
I forgot to say before, but: it's great to have input from people coming from the MATLAB-y scientific/numeric world like him (I think) rather than just the Haskell/ML-y mathematical/CS world like you (Stephen, I think), as we usually get in these discussions. If there's one option that's universally obviously right to everyone in the first group, maybe everyone in the second group can shut up and deal with it. If not (which I think is likely, but I'll keep an open mind), well, at least we've got broader viewpoints and more data for Ivan's summary.
> Sure, but as he also points out, when you know that you're going to be
> composing only functions of one argument, the Unix pipe symbol is also
> quite natural (as is Haskell's operator-less notation). While one of
> my hobbies is category theory (basically, the mathematical theory of
> composable maps for those not familiar with the term), I find the Unix
> pipeline somehow easier to think about than abstract composition,
> although I believe they're equivalent (at least as composition is
> modeled by category theory).
I think you're right that they're equivalent in theory.
But I feel like they're also equivalent in usability and readability (as in: for a third of simple cases they're both fine, for a third compose looks better, and for a third rcompose does), but I definitely can't argue for that.
What always throws me is that most languages that offer both choose different precedence (and sometimes associativity, too) for them. The consequence seems to be that when I just use compose and rcompose operators without thinking about it, I always get them right, but as soon as I ask myself "which one is like shell pipes?" or "why did I put parens here?" I get confused and have to go take a break before I can write any more code. Haskell's operatorless notation is nice because it prevents me from noticing what I'm doing and asking myself those questions. :)
9 years
Re: [Python-ideas] Function composition (was no subject)
by Ivan Levkivskyi
I was thinking about the recent ideas discussed here. I also went back to
the origins of my initial idea. The point is that it came from Numpy: I use
Numpy arrays every day, and typically I do exactly something like
root(mean(square(data))).
Now I am thinking: what actually is a matrix? It is something that takes a
vector and returns a vector. But elementwise functions actually do the same
thing. It does not really matter what we do with a vector: transform it by a
product of matrices or by a composition of functions. In other words, I
agree with Andrew that "elementwise" is a good match with compose, and what
we really need is to "pipe" things that take a vector (or just an iterable)
and return a vector (iterable).
So probably a good place (in a potential future) for compose would be not
functools but itertools. But indeed a good place to test this would be
Numpy.
An additional comment: it is indeed good to have both @ and | for compose
and rcompose.
Side note, one can actually overload __rmatmul__ on arrays as well so that
you can write
root @ mean @ square @ data
Moreover, one can overload __or__ on arrays, so that one can write
data | square | mean | root
even with ordinary functions (not Numpy's ufuncs or composable). These
examples are actually "flat is better than nested" in the extreme form.
Anyway, they (Numpy) are going to implement the @ operator for arrays; maybe
it would be a good idea to check whether the operand to the left of the
array is not an array but a callable, and if so apply it elementwise.
Concerning multi-argument functions, I don't like the $ symbol, I don't know
why. It seems really unintuitive that it should mean partial application.
One can autocurry composable functions and apply the same rules that Numpy
uses for ufuncs.
More precisely, if I write
add(data1, data2)
with arrays it applies add pairwise. But if I write
add(data1, 42)
it is also fine, it simply adds 42 to every element. With autocurrying one
could write
root @ mean @ add(data) @ square @ data2
or
root @ mean @ square @ add(42) @ data
However, as I see it now, it is not very readable, so maybe the best
choice is to reserve @ and | for "piping" iterables through transformers
that take one argument. In other words, it should be left to the user to
make add(42) of an appropriate type. It is the same logic as for
decorators: if I write
@modify(arg)
def func(x):
    return None
I must care that modify(arg) evaluates to something that takes one callable
and returns a callable.
On May 9, 2015, at 01:36, Stephen J. Turnbull <stephen(a)xemacs.org> wrote:
> >
> > Andrew Barnert writes:
> >>> On May 8, 2015, at 19:58, Stephen J. Turnbull <stephen(a)xemacs.org>
> wrote:
> >>>
> >>> Koos Zevenhoven writes:
> >>>
> >>>> As a random example, (root @ mean @ square)(x) would produce the right
> >>>> order for rms when using [2].
> >>>
> >>> Hardly interesting. :-) The result is an exception, as root and square
> >>> are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
> >>
> >> Unless you're using an elementwise square and an array-to-scalar
> >> mean, like the ones in NumPy,
> >
> > Erm, why would square be elementwise and root not? I would suppose
> > that everything is element-wise in Numpy (not a user yet).
>
> Most functions in NumPy are elementwise when applied to arrays, but can
> also be applied to scalars. So, square is elementwise because it's called
> on an array, root is scalar because it's called on a scalar. (In fact, root
> could also be elementwise--aggregating functions like mean can be applied
> across just one axis of a 2D or higher array, reducing it by one dimension,
> if you want.)
>
> Before you try it, this sounds like a complicated nightmare that can't
> possibly work in practice. But play with it for just a few minutes and it's
> completely natural. (Except for a few cases where you want some array-wide
> but not element-wise operation, most famously matrix multiplication, which
> is why we now have the @ operator to play with.)
>
> >> in which case it works perfectly well...
> >
> > But that's an aspect of my point (evidently, obscure). Conceptually,
> > as taught in junior high school or so, root and square are scalar-to-
> > scalar. If you are working in a context such as Numpy where it makes
> > sense to assume they are element-wise and thus composable, the context
> > should provide the compose operator(s).
>
> I was actually thinking on these lines: what if @ didn't work on
> types.FunctionType, but did work on numpy.ufunc (the name for the
> "universal function" type that knows how to broadcast across arrays but
> also work on scalars)? That's something NumPy could implement without any
> help from the core language. (Methods are a minor problem here, but it's
> obvious how to solve them, so I won't get into it.) And if it turned out to
> be useful all over the place in NumPy, that might turn up some great uses
> for the idiomatic non-NumPy Python, or it might show that, like elementwise
> addition, it's really more a part of NumPy than of Python.
>
> But of course that's more of a proposal for NumPy than for Python.
>
> > Without that context, Koos's
> > example looks like a TypeError.
>
> >> But Koos's example, even if it was possibly inadvertent, shows that
> >> I may be wrong about that. Maybe compose together with element-wise
> >> operators actually _is_ sufficient for something beyond toy
> >> examples.
> >
> > Of course it is!<wink /> I didn't really think there was any doubt
> > about that.
>
> I think there was, and still is. People keep coming up with abstract toy
> examples, but as soon as someone tries to give a good real example, it only
> makes sense with NumPy (Koos's) or with some syntax that Python doesn't
> have (yours), because to write them with actual Python functions would
> actually be ugly and verbose (my version of yours).
>
> I don't think that's a coincidence. You didn't write "map square" because
> you don't know how to think in Python, but because using compose profitably
> inherently implies not thinking in Python. (Except, maybe, in the case of
> NumPy... which is a different idiom.) Maybe someone has a bunch of obvious
> good use cases for compose that don't also require other functions,
> operators, or syntax we don't have, but so far, nobody's mentioned one.
>
> ------------------------------
>
> On 5/9/2015 6:19 AM, Andrew Barnert via Python-ideas wrote:
>
> > I think there was, and still is. People keep coming up with abstract toy
> examples, but as soon as someone tries to give a good real example, it only
> makes sense with NumPy (Koos's) or with some syntax that Python doesn't
> have (yours), because to write them with actual Python functions would
> actually be ugly and verbose (my version of yours).
> >
> > I don't think that's a coincidence. You didn't write "map square"
> because you don't know how to think in Python, but because using compose
> profitably inherently implies not thinking in Python. (Except, maybe, in
> the case of NumPy... which is a different idiom.) Maybe someone has a bunch
> of obvious good use cases for compose that don't also require other
> functions, operators, or syntax we don't have, but so far, nobody's
> mentioned one.
>
> I agree that @ is most likely to be useful in numpy's restricted context.
>
> A composition operator is usually defined by application: f@g(x) is
> defined as f(g(x)). (I'm sure there are also axiomatic treatments.) It
> is an optional syntactic abbreviation. It is most useful in a context
> where there is one set of data objects, such as the real numbers, or one
> set plus arrays (vectors) defined on that set; where all functions are
> univariate (or possibly multivariate, but those can be transformed to
> univariate on vectors); *and* where parameter names are dummies like
> 'x', 'y', 'z', or '_'.
>
> The last point is important. Abbreviating h(x) = f(g(x)) with h = f @ g
> does not lose any information as 'x' is basically a placeholder (so get
> rid of it). But parameter names are important in most practical
> contexts, both for understanding a composition and for using it.
>
> def npv(transfers, discount):
>     '''Return the net present value of discounted transfers.
>
>     transfers: finite iterable of amounts at constant intervals
>     discount: fraction per interval
>     '''
>     divisor = 1 + discount
>     return sum(transfer / divisor**time
>                for time, transfer in enumerate(transfers))
>
> Even if one could replace the def statement with
> npv = <some combination of @, sum, map, add, div, power, enumerate, ...>
> with parameter names omitted, it would be harder to understand. Using
> it would require the ability to infer argument types and order from the
> composed expression.
>
> I intentionally added a statement to calculate the common subexpression
> prior to the return. I believe it would have to be put back into the
> return expression before converting.
>
> --
> Terry Jan Reedy
>
>
>
> ------------------------------
>
> On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
> >> >I suppose you could write (root @ mean @ (map square)) (xs),
>
> > Actually, you can't. You could write (root @ mean @ partial(map,
> > square))(xs), but that's pretty clearly less readable than
> > root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's
> > been my main argument: Without a full suite of higher-level operators
> > and related syntax, compose alone doesn't do you any good except for toy
> > examples.
>
> How about an operator for partial?
>
> root @ mean @ map $ square(xs)
>
>
> Actually I'd rather reuse the binary operators. (I'd be happy if they were
> just methods on bytes objects BTW.)
>
> compose(root, mean, map(square, xs))
>
> root ^ mean ^ map & square (xs)
>
> root ^ mean ^ map & square ^ xs ()
>
> Read this as...
>
> compose root, of mean, of map with square, of xs
>
> Or...
>
> apply(map(square, xs), mean, root)
>
> map & square | mean | root (xs)
>
> xs | map & square | mean | root ()
>
>
> Read this as...
>
> apply xs, to map with square, to mean, to root
>
>
> These are kind of cool, but does it make python code easier to read? That
> seems like it may be subjective depending on the amount of programming
> experience someone has.
>
> Cheers,
> Ron
>
>
>
> ------------------------------
>
> Hi,
> I had to answer some of these questions when I wrote Lawvere:
> https://pypi.python.org/pypi/lawvere
>
> First, there are two kinds of composition: pipe and circle, so I think a
> single operator like @ is a bit restrictive.
> I like "->" and "<-".
>
> Then, for function naming and string representation I had to introduce a
> function signature (a tuple).
> It provides a good tool for decomposition, introspection and comparison
> with respect to the mathematical definition.
>
> Finally, for me composition makes sense when you have typed functions;
> otherwise it can easily become a mess, and this makes composition tied to
> multiple dispatch.
>
> I really hope composition will be introduced in Python, but I can't see
> how it could be done without rethinking a good part of function definition.
>
>
>
> 2015-05-09 17:38 GMT+02:00 Ron Adam <ron3200(a)gmail.com>:
>
> >
> > [...]
> >
>
9 years
Re: Cartesian Product on `__mul__`
by Thomas Jollans
On 25/07/2019 19.41, Andrew Barnert via Python-ideas wrote:
> On Jul 25, 2019, at 09:46, Batuhan Taskaya <isidentical(a)gmail.com> wrote:
>> I think it looks very fine when you type {1, 2, 3} * {"a", "b", "c"} and get set(itertools.product({1, 2, 3}, {"a", "b", "c"})). So I am proposing a set multiplication implementation as Cartesian product.
> I think it might make more sense to reopen the discussion of using @ for cartesian product for all containers, for a few reasons.
>
> * The * operator means repeat on lists, tuples, and strings. There are already some operator differences between sets and other containers, but it’s generally the types supporting different operators (like sets having | and &, and not having +) rather than giving different meanings to the same operators. Plus, tuples are frequently used for small things that are conceptually sets or set-like; this is even enshrined in APIs like str.endswith and isinstance.
>
> * (Ordered) Cartesian product for lists, tuples, iterators, etc. makes just as much sense even when they’re not being used as pseudo-sets, ordered sets, or multisets, and might even be more common in code in practice (which is presumably why there’s a function for that in the stdlib). So, why only allow it for sets?
>
> * Cartesian product feels more like matrix multiplication than like scalar or elementwise multiplication or repetition, doesn’t it?
Adding @ to sets for Cartesian products *might* be reasonable, but
giving @ a meaning other than matrix multiplication for ordered
collections (lists, tuples) sounds like a terrible idea that will only
cause confusion.
>
> * Adding @ to the builtin containers was already raised during the @ discussion and tabled for the future. I believe this is because the NumPy community wanted the proposal to be as minimal as possible so they could be sure of getting it, and get it ASAP, not because anyone had or anticipated objections beyond “is it common enough of a need to be worth it?”
>
> I don’t think this discussion would need to include other ideas deferred at the same time, like @ for composition on functions. (Although @ on iterators might be worth bringing up, if only to reject it. We don’t allow + between arbitrary iterables defaulting to chain but meaning type(lhs)(chain(…)) on some types, so why do the equivalent with @?)
> _______________________________________________
> Python-ideas mailing list -- python-ideas(a)python.org
> To unsubscribe send an email to python-ideas-leave(a)python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/LR5PQ…
> Code of Conduct: http://python.org/psf/codeofconduct/
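[Editor's note: for concreteness, here is a minimal sketch of what the proposal amounts to; this is not from the thread, and the class name is made up.]

```python
from itertools import product

class CartSet(set):
    """Illustrative set subclass where @ yields the Cartesian product."""
    def __matmul__(self, other):
        return CartSet(product(self, other))

pairs = CartSet({1, 2, 3}) @ CartSet({"a", "b"})
# pairs is the set of all (number, letter) tuples
```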
4 years, 9 months
Re: [Python-ideas] [RFC] draft PEP: Dedicated infix operators for matrix multiplication and matrix power
by David Mertz
I think the fundamental issue with the degree sign is quite simply that it
is *not-ASCII*. If we are willing to expand the syntax of Python to
include other characters, then we should just use Unicode Character 'DOT
OPERATOR' (U+22C5), which is actually the *exact* correct thing, not just
"something that looks a bit similar."
If we are worried about "stuff that's easy to enter on the keyboard",
there's no reason AZERTY is necessarily more relevant than Dubeolsik or
JCUKEN. And if we aim for "something that looks similar" we probably have
lots of options on some keyboard layout in the world.
On Fri, Mar 14, 2014 at 7:54 AM, Joseph Martinot-Lagarde <
joseph.martinot-lagarde(a)m4x.org> wrote:
> Robert Kern <robert.kern@...> writes:
>
> >
> > On 2014-03-14 13:20, M.-A. Lemburg wrote:
> > > On 14.03.2014 12:25, Robert Kern wrote:
> > >> On 2014-03-14 10:16, M.-A. Lemburg wrote:
> > >>
> > >>> I have some questions:
> > >>>
> > >>> 1. Since in math, the operator is usually spelt "·" (the center dot,
> > >>> or "." but that's already reserved for methods and attributes in
> > >>> Python), why not try to use that instead of "@" (which in Python
> > >>> already identifies decorators)?
> > >>
> > >> I think the current feeling of the Python core team is against
> > >> including non-ASCII characters in the language's keywords or
> > >> operators. Even if that were not so, I would still recommend against
> > >> it because it would be quite difficult to type. I don't know off-hand
> > >> the key combination to do it on my native system, and it would change
> > >> from system to system.
> > >
> > > That's a fair argument. How about using the degree symbol instead: "°"?
> > >
> > > (A ° B).T == B.T ° A.T
> >
> > Your point is taken, though. I do find these smaller symbols more
> > readable and similar to standard mathematical notation than an @ sign,
> > which is as big or bigger than most uppercase characters. Unfortunately,
> > ASCII leaves us few single-character options.
> >
>
> Putting aside the ASCII problem, ° is easily written using an AZERTY
> keyboard. It is smaller and less convoluted than @ and looks like the
> mathematical notation for function composition, which is similar to matrix
> multiplication.
> Still, not ascii and not displayed on every keyboard...
>
>
--
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons. Intellectual property is
to the 21st century what the slave trade was to the 16th.
10 years, 2 months
Re: [Python-ideas] (no subject)
by Ivan Levkivskyi
On May 7, 2015 9:46 AM, "Andrew Barnert" <abarnert(a)yahoo.com> wrote:
>
> On May 6, 2015, at 18:13, Stephen J. Turnbull <stephen(a)xemacs.org> wrote:
> >
> > Ivan Levkivskyi writes:
> >
> >> Ok, I will try inspecting all existing approaches to find the one
> >> that seems more "right" to me :)
> >
> > If you do inspect all the approaches you can find, I hope you'll keep
> > notes and publish them, perhaps as a blog article.
> >
> >> In any case that approach could be updated by incorporating matrix
> >> @ as a dedicated operator for compositions.
> >
> > I think rather than "dedicated" you mean "suggested". One of Andrew's
> > main points is that you're unlikely to find more than a small minority
> > agreeing on the "right" approach, no matter which one you choose.
>
> Whatever wording you use, I do think it's likely that at least some of
> the existing libraries would become much more readable just by using @
> in place of what they currently use. Even better, it may also turn out
> that the @ notation just "feels right" with one solution to the argument
> problem and wrong with another, narrowing down the possibility space.
>
> So, I think it's definitely worth pushing the experiments if someone has
> the time and inclination, so I'm glad Ivan has volunteered.
>
Thank you for encouraging me. It will definitely be an interesting
experience to do this.
> >> At least, it seems that Erik from astropy likes this idea and it is
> >> quite natural for people with "scientific" background.
>
> I forgot to say before, but: it's great to have input from people coming
> from the MATLAB-y scientific/numeric world like him (I think) rather than
> just the Haskell/ML-y mathematical/CS world like you (Stephen, I think),
> as we usually get in these discussions. If there's one option that's
> universally obviously right to everyone in the first group, maybe
> everyone in the second group can shut up and deal with it. If not (which
> I think is likely, but I'll keep an open mind), well, at least we've got
> broader viewpoints and more data for Ivan's summary.
>
> > Sure, but as he also points out, when you know that you're going to be
> > composing only functions of one argument, the Unix pipe symbol is also
> > quite natural (as is Haskell's operator-less notation). While one of
> > my hobbies is category theory (basically, the mathematical theory of
> > composable maps for those not familiar with the term), I find the Unix
> > pipeline somehow easier to think about than abstract composition,
> > although I believe they're equivalent (at least as composition is
> > modeled by category theory).
>
> I think you're right that they're equivalent in theory.
>
> But I feel like they're also equivalent in usability and readability (as
> in for 1/3 of simple cases they're both fine, for 1/3 compose looks
> better, for 1/3 rcompose), but I definitely can't argue for that.
>
> What always throws me is that most languages that offer both choose
> different precedence (and sometimes associativity, too) for them. The
> consequence seems to be that when I just use compose and rcompose
> operators without thinking about it, I always get them right, but as soon
> as I ask myself "which one is like shell pipes?" or "why did I put parens
> here?" I get confused and have to go take a break before I can write any
> more code. Haskell's operatorless notation is nice because it prevents me
> from noticing what I'm doing and asking myself those questions. :)
9 years
Re: [Python-ideas] Function composition (was no subject)
by Andrew Barnert
On May 9, 2015, at 01:36, Stephen J. Turnbull <stephen(a)xemacs.org> wrote:
>
> Andrew Barnert writes:
>>> On May 8, 2015, at 19:58, Stephen J. Turnbull <stephen(a)xemacs.org> wrote:
>>>
>>> Koos Zevenhoven writes:
>>>
>>>> As a random example, (root @ mean @ square)(x) would produce the right
>>>> order for rms when using [2].
>>>
>>> Hardly interesting. :-) The result is an exception, as root and square
>>> are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
>>
>> Unless you're using an elementwise square and an array-to-scalar
>> mean, like the ones in NumPy,
>
> Erm, why would square be elementwise and root not? I would suppose
> that everything is element-wise in Numpy (not a user yet).
Most functions in NumPy are elementwise when applied to arrays, but can also be applied to scalars. So, square is elementwise because it's called on an array, root is scalar because it's called on a scalar. (In fact, root could also be elementwise--aggregating functions like mean can be applied across just one axis of a 2D or higher array, reducing it by one dimension, if you want.)
Before you try it, this sounds like a complicated nightmare that can't possibly work in practice. But play with it for just a few minutes and it's completely natural. (Except for a few cases where you want some array-wide but not element-wise operation, most famously matrix multiplication, which is why we now have the @ operator to play with.)
>> in which case it works perfectly well...
>
> But that's an aspect of my point (evidently, obscure). Conceptually,
> as taught in junior high school or so, root and square are scalar-to-
> scalar. If you are working in a context such as Numpy where it makes
> sense to assume they are element-wise and thus composable, the context
> should provide the compose operator(s).
I was actually thinking on these lines: what if @ didn't work on types.FunctionType, but did work on numpy.ufunc (the name for the "universal function" type that knows how to broadcast across arrays but also work on scalars)? That's something NumPy could implement without any help from the core language. (Methods are a minor problem here, but it's obvious how to solve them, so I won't get into it.) And if it turned out to be useful all over the place in NumPy, that might turn up some great uses for the idiomatic non-NumPy Python, or it might show that, like elementwise addition, it's really more a part of NumPy than of Python.
But of course that's more of a proposal for NumPy than for Python.
> Without that context, Koos's
> example looks like a TypeError.
>> But Koos's example, even if it was possibly inadvertent, shows that
>> I may be wrong about that. Maybe compose together with element-wise
>> operators actually _is_ sufficient for something beyond toy
>> examples.
>
> Of course it is!<wink /> I didn't really think there was any doubt
> about that.
I think there was, and still is. People keep coming up with abstract toy examples, but as soon as someone tries to give a good real example, it only makes sense with NumPy (Koos's) or with some syntax that Python doesn't have (yours), because to write them with actual Python functions would actually be ugly and verbose (my version of yours).
I don't think that's a coincidence. You didn't write "map square" because you don't know how to think in Python, but because using compose profitably inherently implies not thinking in Python. (Except, maybe, in the case of NumPy... which is a different idiom.) Maybe someone has a bunch of obvious good use cases for compose that don't also require other functions, operators, or syntax we don't have, but so far, nobody's mentioned one.
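[Editor's note: a pure-Python version of the wrapper idea discussed above is easy to sketch. This is not NumPy's API; all names below are illustrative.]

```python
import math

class Composable:
    """Sketch of a wrapper that makes @ mean function composition."""
    def __init__(self, func):
        self.func = func
    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)
    def __matmul__(self, other):
        # (f @ g)(x) == f(g(x)); works whether other is wrapped or not,
        # as long as it is callable
        return Composable(lambda *a, **k: self.func(other(*a, **k)))

root = Composable(math.sqrt)
mean = Composable(lambda xs: sum(xs) / len(xs))
square = Composable(lambda xs: [x * x for x in xs])

rms = root @ mean @ square  # left-associative, but composes correctly
```

Note that because each `@` returns another `Composable`, the chain can be built and called later, which is the "compose now, apply later" usage the thread keeps circling around.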
9 years
Re: [Python-ideas] Function composition (was no subject)
by Andrew Barnert
On May 10, 2015, at 00:13, Ivan Levkivskyi <levkivskyi(a)gmail.com> wrote:
>
>> On 10 May 2015 at 02:05, Andrew Barnert <abarnert(a)yahoo.com> wrote:
>>> On May 9, 2015, at 16:28, Ivan Levkivskyi <levkivskyi(a)gmail.com> wrote:
>>>
>>> I was thinking about recent ideas discussed here. I also returned back to origins of my initial idea. The point is that it came from Numpy, I use Numpy arrays everyday, and typically I do exactly something like root(mean(square(data))).
>>>
>>> Now I am thinking: what is a matrix, actually? It is something that takes a vector and returns a vector. But on the other hand, elementwise functions do exactly the same. It does not really matter what we do with a vector: transform it by a product of matrices or by a composition of functions. In other words, I agree with Andrew that "elementwise" is a good match with compose, and what we really need is to "pipe" things that take a vector (or just an iterable) and return a vector (iterable).
>>>
>>> So that probably a good place (in a potential future) for compose would be not functools but itertools. But indeed a good place to test this would be Numpy.
>>
>> Itertools is an interesting idea.
>>
>> Anyway, assuming NumPy isn't going to add this in the near future (has anyone even brought it up on the NumPy list, or only here?), it wouldn't be that hard to write a (maybe inefficient but working) @composable wrapper and wrap all the relevant callables from NumPy or from itertools, upload it to PyPI, and let people start coming up with good examples. If it's later worth direct support in NumPy and/or Python (for simplicity or performance), the module will still be useful for backward compatibility.
>
> This is a good step-by-step approach. This is what I would try.
>
>>> An additional comment: it is indeed good to have both @ and | for compose and rcompose.
>>> Side note, one can actually overload __rmatmul__ on arrays as well so that you can write
>>>
>>> root @ mean @ square @ data
>>
>> But this doesn't need to overload it on arrays, only on the ufuncs, right?
>>
>> Unless you're suggesting that one of these operations could be a matrix as easily as a function, and NumPy users often won't have to care which it is?
>
> Exactly, this is what I want. Note that in such approach you have no parentheses at all.
It's worth working up some practical examples here.
Annoyingly, I actually had a perfect example a few years ago, but I can't find it. I'm sure you can imagine what it was. We had built-in vector transforms implemented as functions, a way for a user to input new transforms as matrices, and a way for the user to chain built-in and user-defined transforms. Under the covers, we had to wrap each user transform in a function just so they'd all be callables, which led to a couple of annoying debugging sessions and probably a performance hit. If we could compose them interchangeably, that might have avoided those problems. But if I can't find the code, it's hard to say for sure, so now I'm offering the same vague, untestable use cases that I was complaining about. :)
>
>>>
>>> Moreover, one can overload __or__ on arrays, so that one can write
>>>
>>> data | square | mean | root
>>>
>>> even with ordinary functions (not Numpy's ufuncs or composable) .
>>
>> That's an interesting point. But I think this will be a bit confusing, because now it _does_ matter whether square is a matrix or a function--you'll get elementwise bitwise or instead of application. (And really, this is the whole reason for @ in the first place--we needed an operator that never means elementwise.)
>>
>> Also, this doesn't let you actually compose functions--if you want square | mean | root to be a function, square has to have a __or__ operator.
>
> This is true. The | is more limited because of its current semantics. The fact that the | operator already has a widely used semantics is also why I would choose @ if I needed to choose only one: @ or |.
>
>>> These examples are actually "flat is better than nested" in the extreme form.
>>>
>>> Anyway, they (Numpy) are going to implement the @ operator for arrays; maybe it would be a good idea to check that if something on the left of me (an array) is not an array but a callable, then apply it elementwise.
>>>
>>> Concerning the multi-argument functions, I don't like $ symbol, don't know why. It seems really unintuitive why it means partial application.
>>> One can autocurry composable functions and apply same rules that Numpy uses for ufuncs.
>>> More precisely, if I write
>>>
>>> add(data1, data2)
>>>
>>> with arrays it applies add pairwise. But if I write
>>>
>>> add(data1, 42)
>>>
>>> it is also fine, it simply adds 42 to every element. With autocurrying one could write
>>>
>>> root @ mean @ add(data) @ square @ data2
>>>
>>> or
>>>
>>> root @ mean @ square @ add(42) @ data
>>>
>>> However, as I see it now, it is not very readable, so maybe the best choice is to reserve @ and | for "piping" iterables through transformers that take one argument. In other words, it should be left to the user to make add(42) of an appropriate type. It is the same logic as for decorators; if I write
>>>
>>> @modify(arg)
>>> def func(x):
>>> return None
>>>
>>> I must care that modify(arg) evaluates to something that takes one callable and returns a callable.
>>>
>>>
>>>> [...]
>
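[Editor's note: the `data | square | mean | root` form from this exchange is the easiest to try today, since it only needs `__or__` on a data wrapper. A minimal sketch, not from the thread; `Piped` is a made-up name.]

```python
import math

class Piped:
    """Wraps a value so `|` pipes it through plain functions, left to right."""
    def __init__(self, value):
        self.value = value
    def __or__(self, func):
        return Piped(func(self.value))

mean = lambda xs: sum(xs) / len(xs)
square = lambda xs: [x * x for x in xs]

result = Piped([3, 4]) | square | mean | math.sqrt
# result.value is the root-mean-square of [3, 4]
```

Because `|` is left-associative, the chain reads in application order with no parentheses, which is the "flat is better than nested" effect described above. The caveat from the thread still applies: on NumPy arrays `|` already means elementwise bitwise or, so this trick only works on a dedicated wrapper type.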
9 years
Re: len(path) == len(str(path))
by Ram Rachum
What's wrong with using @? If I understand correctly, it's used for matrix
multiplication, which is far enough from function composition to avoid
confusion. And it's slightly similar visually to a circle.
On Sun, May 24, 2020 at 4:25 PM Dan Sommers <
2QdxY4RzWzUUiLuE(a)potatochowder.com> wrote:
>
> On Sunday, May 24, 2020, at 08:07 -0400, Steven D'Aprano wrote:
>
> > On Sun, May 24, 2020 at 02:27:00PM +0300, Ram Rachum wrote:
> >
> >> Today I wrote a script and did this:
> >>
> >> sorted(paths, key=lambda path: len(str(path)), reverse=True)
> >>
> >> But it would have been nicer if I could do this:
> >>
> >> sorted(paths, key=len, reverse=True)
> >
> > It would have been even nicer if we could compose functions:
> >
> > sorted(paths, key=len∘str, reverse=True)
> >
> > *semi-wink*
>
> It started with y = len(str(f(g(h(x))))), which is ugly.  Some people
> like pipes, and wrote object-like functions that could be composed with
> the "|" (pipe) character:
>
> y = x | h | g | f | str | len
>
> (Or something like that. Maybe there was a ">" at the beginning.) Then
> a Clojure fan I know showed me a function called "thread":
>
> f = thread([len, str, f, g, h])
> y = f(x)
>
> An untested Python implementation:
>
>     def thread(fs):  # or maybe you like def thread(*fs) instead
>         # list() so that inner() can be called more than once;
>         # reversed() alone returns a one-shot iterator
>         fs = list(reversed(fs))  # or not, depending on how fs was constructed
>         def inner(x):
>             for f in fs:
>                 x = f(x)
>             return x
>         return inner
>
> sorted(paths, key=thread([len, str]), reverse=True)
>
> On the one hand, it's not quite as concise as composing the functions
> directly. On the other hand, ∘ ruffles a lot of ASCII feathers (but I'm
> sure Steven knows that).
3 years, 12 months