>> Functions could reliably support both vector and scalar context, by
>> checking that a type contract exists.
>
> That sounds like type checking, which is not Pythonic. Duck-typing, on the
> other hand, *is*.

While you'll hear no argument from me on duck typing, in this instance it
collides head-on with having clear names for elementwise versions of member
methods. I personally would feel better erring on the side of clearer names
at the cost of some kind of *very cursory* examination of what you are being
passed. Exactly what form that examination might take is an open question
that deserves further thought.
> def capitalize_me(some_obj):
>     some_obj = some_obj.capitalize()
>
> --> capitalize_me("a simple string")
> "A simple string"
>
> --> capitalize_me(["a", "list", "of", "strings"])
> ["A", "List", "Of", "Strings"]

Though I don't want to comment on the definition of capitalize_me, in terms
of function behavior across the string and the list, that is pretty much
what I am talking about.
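As an aside, the quoted definition is not quite runnable as shown (it assigns to a local and never returns, and lists have no .capitalize method). A minimal sketch that would actually produce the behavior shown, assuming strings count as scalars and any other iterable counts as a collection, might look like this:

```python
from collections.abc import Iterable

def capitalize_me(some_obj):
    # "Very cursory" examination: strings are scalars, other
    # iterables get the elementwise treatment.
    if isinstance(some_obj, str):
        return some_obj.capitalize()
    if isinstance(some_obj, Iterable):
        return [item.capitalize() for item in some_obj]
    raise TypeError("expected a string or an iterable of strings")

print(capitalize_me("a simple string"))
print(capitalize_me(["a", "list", "of", "strings"]))
```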
>> Because of the name clash between parent and elementwise member
>> methods when dealing with collections of collections (and some other
>> classes), if the type contract provided "broadcast" versions of child
>> methods, they would have to be provided under an alias, perhaps
>> X -> elementwise_X. I do not think people should be able to be
>> oblivious to whether they are calling collection methods or member
>> methods; that encourages sloppy programming. I like the broadcast
>> feature because it makes it easier to write code in a clear
>> left-to-right narrative style, and that seems very pythonic to me
>> compared with an "inside-out" style.
>
> Duck-typing is neither oblivious, nor sloppy -- at least, not inherently.
> Is my sample code above not what you had in mind?

I don't think duck typing is sloppy, with the caveat that the thing you
expect to behave like a duck is some kind of animal, reasonably crafted
animatronic robot, or operated puppet :). I feel like when you move from a
single object to a collection, that is basically like having a second
function which has been integrated with the first for the purposes of API
simplicity. The result is an outer function whose job is to select the
correct inner function and execute it with the given input.
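That "outer function selects the correct inner function" idea can be sketched with functools.singledispatch; the name `shout` here is purely illustrative:

```python
from functools import singledispatch

# The outer function: dispatch picks an inner implementation
# based on the argument's type.
@singledispatch
def shout(obj):
    # Scalar case: delegate straight to the member method.
    return obj.upper()

@shout.register(list)
def _(obj):
    # Collection case: broadcast over the elements (recursively,
    # so lists of lists also work).
    return [shout(item) for item in obj]

print(shout("quiet"))        # scalar call
print(shout(["a", "b"]))     # elementwise call
```

The dispatch step is exactly the "very cursory examination" from earlier in the thread, just packaged so that callers never see it.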
---
On Mon, Dec 19, 2011 at 3:37 PM, Joshua Landau wrote:
> ...or we could just extend PEP 225 (deferred) with "~." so we have
> "['a', 'b', 'c']~.upper()" [syntax debatable]. You don't get the
> type-checking but that seemed more of a problem to me, as I'm not a
> real duck.
That does run into some of the same problems w.r.t. duck typing, and
solutions to problems with overloaded operators are hard to google for. If
the method calls were aliased, searches would be easier and less
experienced users wouldn't have to worry about wrestling with operator
overloading. That being said, I think elementwise operators are a great
idea in a numeric/scientific context.

There are of course both pros and cons to having type declarations. I know
that people do not like to be limited by a lack of foresight in their
predecessors; I have run into unreasonable interface/type specifications
and annoyances with private/protected variables in Java, and I would never
want to see that infiltrate Python. I think what I am trying to sell here
is more akin to metadata than static types. People should be able to keep
doing what they are already doing, but have a reasonable mechanism to
provide additional object metadata in a standard way that is easily
accessible by downstream consumers. That gives you a lot of the upsides of
type declarations while staying Pythonic.

Maybe a better option than having a "type contract" base for homogeneous
collections is to give objects a metadata namespace with a loose schema,
and declare it there? As long as you could provide the metadata at object
creation time you would still get all the same benefits, and it would
avoid potentially overworking the class/mixin concept.
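One rough sketch of that metadata-namespace idea, assuming an invented `meta` attribute and an `elementwise` flag (none of this is an existing API, just an illustration of declaring loose metadata at creation time):

```python
from types import SimpleNamespace

class TaggedList(list):
    # Hypothetical: any keyword arguments at creation time become
    # loose, schema-free metadata on the object.
    def __init__(self, iterable=(), **meta):
        super().__init__(iterable)
        self.meta = SimpleNamespace(**meta)

names = TaggedList(["ada", "grace"], elementwise=True, item_type=str)

# A downstream consumer makes its "very cursory" examination here,
# reading metadata instead of checking types.
if getattr(names.meta, "elementwise", False):
    print([n.capitalize() for n in names])
```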
> Your example:
>     my_string.split("\n").capitalize().join_items("\n")
> to
>     "\n".join(my_string.split("\n")~.capitalize())
> And personally I like "string.join(lst)" over "lst.join_with(string)"...
Of course, everyone has their own preferences; having a very small set of
internally consistent ways to do something, rather than just one way, is
good for this reason.

Nathan