On Fri, Feb 1, 2019 at 4:51 AM Chris Barker firstname.lastname@example.org wrote:
I know that when I'm used to working with numpy and then need to do some string processing or some such, I find myself missing this "vectorization" -- if I want to do the same operation on a whole bunch of strings, why do I need to write a loop, comprehension, or map? That is:
[s.lower() for s in a_list_of_strings]
(NOTE: I prefer comprehension syntax to map, but map would work fine here, too)
It strikes me that that is the direction some folks want to go.
If so, then I think the way to do it is not to add a bunch of stuff to Python's str or sequence types, but rather to make a new library that provides quick and easy manipulation of sequences of strings -- kind of a "stringpy", analogous to numpy.
At the core of numpy is the ndarray: a "multidimensional, homogeneous array of fixed-size items".
A strarray could be simpler -- I don't see any reason for more than one dimension, nor for more than one datatype. But it could be a "vector" of strings, guaranteed to be all strings, that provides operations acting on the entire collection in one fell swoop.
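To make the idea concrete, here's a minimal sketch of what such a strarray might look like -- the class name and methods are purely illustrative, not an existing library; it just broadcasts a couple of str methods over all elements:

```python
class StrArray:
    """Hypothetical 1-D homogeneous array of strings (illustrative only).

    Every element is coerced to str on construction, and string
    methods are applied to the whole collection at once.
    """

    def __init__(self, strings):
        # Guarantee homogeneity: everything stored is a str.
        self._data = [str(s) for s in strings]

    def lower(self):
        # Vectorized str.lower: returns a new StrArray.
        return StrArray(s.lower() for s in self._data)

    def startswith(self, prefix):
        # Vectorized predicate: returns a plain list of bools,
        # analogous to numpy's boolean arrays.
        return [s.startswith(prefix) for s in self._data]

    def __iter__(self):
        return iter(self._data)

    def __repr__(self):
        return f"StrArray({self._data!r})"
```

With that, `StrArray(a_list_of_strings).lower()` replaces the comprehension above, and the boolean results from `startswith` could feed a numpy-style mask/selection step.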
Here's a simpler and more general approach: a "vector" type. Any time you attempt to look up any attribute, it returns a vector of that attribute for each of its elements. When you call a vector, it calls each element (with the same args) and returns a vector of the results. So the vector would, in effect, have a .lower() method that returns .lower() of all its elements.
(David, your mail came in as I was typing mine, so it looks fairly similar, except that this proposed vector type wouldn't require you to put ".str" in the middle of it, so it would work with any type.)
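The vector type described above can be sketched in a few lines -- again just an illustration, not a worked-out design. Attribute lookup is intercepted with `__getattr__` (which only fires when normal lookup fails) and broadcast over the elements; calling the vector broadcasts the call:

```python
class Vector:
    """Hypothetical generic broadcasting wrapper (illustrative only).

    Attribute access returns a Vector of that attribute from each
    element; calling a Vector calls each element with the same args.
    """

    def __init__(self, elements):
        self._elements = list(elements)

    def __getattr__(self, name):
        # Fetch the attribute (e.g. a bound method) from every element.
        return Vector(getattr(el, name) for el in self._elements)

    def __call__(self, *args, **kwargs):
        # Call every element with the same arguments.
        return Vector(el(*args, **kwargs) for el in self._elements)

    def __iter__(self):
        return iter(self._elements)

    def __repr__(self):
        return f"Vector({self._elements!r})"


words = Vector(["Apple", "Banana", "Cherry"])
# words.lower is a Vector of bound methods; calling it broadcasts
# the call, so words.lower() yields the lowercased elements.
print(list(words.lower()))
```

Because nothing here is string-specific, the same wrapper would broadcast `.bit_length()` over ints or `.resolve()` over paths -- which is the point of not putting ".str" in the middle.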