<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#330033">
On 8/23/2011 5:46 PM, Terry Reedy wrote:
<blockquote cite="mid:j31hl7$dp5$1@dough.gmane.org" type="cite">On
8/23/2011 6:20 AM, "Martin v. Löwis" wrote:
<br>
<blockquote type="cite">Am 23.08.2011 11:46, schrieb Xavier Morel:
<br>
<blockquote type="cite">Mostly ascii is pretty common for
western-european languages (French, for
<br>
instance, is probably 90 to 95% ascii). It's also a risk in
english, when
<br>
the writer "correctly" spells foreign words (résumé and the
like).
<br>
</blockquote>
<br>
I know - I still question whether it is "extremely common" (so much as to justify a special case). I.e. on what application, with what dataset, would you gain what speedup, at the expense of what amount of extra lines, and potential slow-down for other datasets?
<br>
</blockquote>
[snip]
<br>
<blockquote type="cite">In the PEP 393 approach, if the string has
a two-byte representation,
<br>
each character needs to widened to two bytes, and likewise for
four
<br>
bytes. So three separate copies of the unrolled loop would be
needed,
<br>
one for each target size.
<br>
</blockquote>
<br>
I fully support the declared purpose of the PEP, which I understand to be to have a full, correct Unicode implementation on all new Python releases without paying unnecessary space (and consequent time) penalties. I think the erroneous length, iteration, indexing, and slicing for strings with non-BMP chars in narrow builds need to be fixed for future versions. I think we should at least consider alternatives to the PEP 393 solution of doubling or quadrupling the space when it is needed for even one char.
<br>
<br>
In utf16.py, attached to <a class="moz-txt-link-freetext" href="http://bugs.python.org/issue12729">http://bugs.python.org/issue12729</a>
<br>
I propose for consideration a prototype of a different solution to the 'mostly BMP chars, few non-BMP chars' case. Rather than expand every character from 2 bytes to 4, attach an array cpdex of character (i.e. code point, not code unit) indexes. Then for indexing and slicing, the correction is simple, simpler than I first expected:
<br>
code-unit-index = char-index + bisect.bisect_left(cpdex, char-index)
<br>
where code-unit-index is the adjusted index into the full underlying double-byte array. This adds a time penalty of log2(len(cpdex)), but avoids most of the space penalty and the consequent time penalty of moving more bytes around and increasing cache misses.
<br>
<br>
I believe the same idea would work for UTF-8 and the mostly-ASCII case. The main difference is that non-ASCII chars have various byte sizes, rather than the 1 extra double-byte of non-BMP chars in UCS-2 builds. So the offset correction would not simply be the bisect_left return, but would require another lookup:
<br>
byte-index = char-index + offsets[bisect.bisect_left(cpdex, char-index)]
<br>
<br>
If possible, I would have the with-index-array versions be separate subtypes, as in utf16.py. I believe either index-array implementation might benefit from a subtype for single multi-unit chars, as a single non-ASCII or non-BMP char does not need an auxiliary [0] array and a senseless lookup therein, but does need its length fixed at 1 instead of the number of base array units.
<br>
<br>
</blockquote>
So am I correctly reading between the lines when, after reading this thread so far and the complete issue discussion so far, I see a PEP 393 revision or replacement that has the following characteristics:<br>
<br>
1) Narrow builds are dropped. The concept of PEP 393 eliminates the need for narrow builds, as the internal string data structures adjust to the actuality of the data. If you want a narrow build, just don't use code points above 65535.<br>
<br>
2) There are more, or different, internal kinds of strings, which affect the processing patterns. Here is an enumeration of the ones I can think of, as complete as possible, with recognition that benchmarking and clever algorithms may eliminate the need for some of them (a sketch of choosing among these kinds follows the list).<br>
<br>
a) all ASCII<br>
b) latin-1 (8-bit codepoints, the first 256 Unicode codepoints). This kind may not be able to support a "mostly" variation, and may be no more efficient than case c). But it might also be popular in parts of Europe :) And appropriate benchmarks may discover whether or not it has worth.<br>
c) mostly ASCII (UTF-8) with clever indexing/caching to be efficient<br>
d) UTF-8 with clever indexing/caching to be efficient<br>
e) 16-bit codepoints<br>
f) UTF-16 with clever indexing/caching to be efficient<br>
g) 32-bit codepoints<br>
h) UTF-32<br>
<br>
When instantiating a str, a new parameter or subtype would restrict the implementation to using only a), b), d), f), and h) when fully conformant Unicode behavior is desired. No lone surrogates, no out-of-range code points, no illegal code points. A default str would prefer a), b), c), e), and g) for efficiency and flexibility.<br>
<br>
When manipulations outside of Unicode are necessary [Windows seems to use e) for example, suffering from the same sorts of backward compatibility problems as Python, in some ways], the default str type would permit them, using e) and g) kinds of representations. Although the surrogate escape codec only uses lone low (trailing) surrogates in the range U+DC80-U+DCFF, which on their own would never match up into valid pairs, note that a conversion from 16-bit codepoints to other formats may produce matches between the results of the surrogate escape codec and other unchecked data introduced by the user/program.<br>
<br>
A method should be provided to validate and promote a string from the default, unchecked str type to the subtype or variation that enforces Unicode, if it qualifies; if it doesn't qualify, the method would raise an exception. (This could generally be done in place if the value is bound to only a single variable, but would generate a copy and rebind the variable to the promoted copy if it is multiply referenced?)<br>
<br>
Another parameter or subtype of the conformant str would add grapheme support, which has a different set of rules for the clever indexing/caching, but could be applied to any of a)†, b)†, d), f), or h).<br>
<br>
† It is unnecessary to apply clever indexing/caching to a) and b) kinds of string internals, because there is a one-to-one mapping between bytes, codepoints, and graphemes in these ranges. So plain array indexing can be used in the implementation of these kinds.<br>
</body>
</html>