[Numpy-discussion] proposal: smaller representation of string arrays

Charles R Harris charlesr.harris at gmail.com
Tue Apr 25 21:27:57 EDT 2017


On Tue, Apr 25, 2017 at 5:50 PM, Robert Kern <robert.kern at gmail.com> wrote:

> On Tue, Apr 25, 2017 at 3:47 PM, Chris Barker - NOAA Federal <
> chris.barker at noaa.gov> wrote:
>
> >> Presumably you're getting byte strings (with unknown encoding).
> >
> > No -- this is for creating and using mostly ascii string data with
> > python and numpy.
> >
> > Unknown encoding bytes belong in byte arrays -- they are not text.
>
> You are welcome to try to convince Thomas of that. That is the status quo
> for him, but he is finding that difficult to work with.
>
> > I DO recommend Latin-1 as a default encoding ONLY for "mostly ascii,
> > with a few extra characters" data. With all the sloppiness over the
> > years, there are way too many files like that.
>
> The sloppiness you mention is precisely the "unknown encoding"
> problem. Your previous advocacy has also touched on using latin-1 to
> decode existing files with unknown encodings. If you want to advocate for
> using latin-1 only for the creation of new data, maybe stop talking about
> existing files? :-)
>
> > Note: the primary use-case I have in mind is working with ascii text in
> > numpy arrays efficiently -- folks have called for that. All I'm saying
> > is use Latin-1 instead of ascii -- that buys you some useful extra
> > characters.
>
> For that use case, the alternative in play isn't ASCII, it's UTF-8, which
> buys you a whole bunch of useful extra characters. ;-)
>
> There are several use cases being brought forth here. Some involve file
> reading, some involve file writing, and some involve in-memory
> manipulation. Whatever change we make is going to impinge somehow on all of
> the use cases. If all we do is add a latin-1 dtype for people to use to
> create new in-memory data, then someone is going to use it to read existing
> data in unknown or ambiguous encodings.
>
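
To put concrete numbers on the latin-1 vs. UTF-8 tradeoff above, a quick
illustration (plain Python, nothing numpy-specific; the sample strings
are made up):

    # latin-1: always exactly 1 byte per character, but it only covers
    # code points U+0000..U+00FF.
    # UTF-8: 1-4 bytes per character, but it covers all of Unicode.
    for s in ["ascii", "café", "Ωμέγα"]:
        try:
            n_latin1 = len(s.encode("latin-1"))
        except UnicodeEncodeError:
            n_latin1 = None  # not representable in latin-1
        print(repr(s), n_latin1, len(s.encode("utf-8")))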


The maximum length of a UTF-8 character is 4 bytes, so we could use that
to size arrays by character length. The advantage over UTF-32 is that it is
easily compressible, probably by a factor of 4 in many cases. That doesn't
solve the in-memory problem, but it does have some advantages on disk as
well as making for easy display. We could do the compression ourselves
after encoding, by truncating to the longest encoded element.
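
A rough sketch of that truncation idea, abusing the existing 'S' (bytes)
dtype for lack of a real UTF-8 dtype -- not a proposed API, just the
arithmetic:

    import numpy as np

    strings = ["hello", "naïve", "日本語"]
    nchars = max(len(s) for s in strings)  # length in *characters*

    # Worst case: 4 bytes per UTF-8 character.
    encoded = [s.encode("utf-8") for s in strings]
    arr = np.array(encoded, dtype="S%d" % (4 * nchars))

    # "Compress by truncation": shrink the itemsize to the longest
    # encoded element actually present.
    used = max(len(e) for e in encoded)
    arr = arr.astype("S%d" % used)

    # Round-trip back to str for display.
    print([b.decode("utf-8") for b in arr])

For mostly-ascii data the astype step recovers most of that factor of 4.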

Note that for terminal display we will want something supported by the
system, which is another problem altogether. Let me break the problem down
into four categories:


   1. Storage -- hdf5, .npy, fits, etc.
   2. Display -- ?
   3. Modification -- editing
   4. Parsing -- fits, etc.

There is probably no one solution that is optimal for all of those.

Chuck