<div dir="ltr"><div><div><div>One common issue in computational geometry is the need to operate rapidly on arrays with "heterogeneous shapes."<br><br></div>So, an array that has rows with different numbers of columns -- shape (1,3) for the first polygon and shape (1, 12) for the second polygon and so on.<br><br></div>This seems like a particularly nasty scenario when the loss of "homogeneity" in shape precludes traditional vectorization -- I think numpy effectively converts these to dtype=object, etc. I don't<br></div>think is necessarily a BLAS issue since wrapping comp. geo. libraries does happen in a subset of cases to handle this, but if there's overlap in utility you could pass it along I suppose.<br><div><div><br><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 9 January 2018 at 04:40, Ilhan Polat <span dir="ltr"><<a href="mailto:ilhanpolat@gmail.com" target="_blank">ilhanpolat@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto">I couldn't find an item to place this but I think ilaenv and also calling the function twice (one with lwork=-1 and reading the optimal block size and the call the function again properly with lwork=<result>) in LAPACK needs to be gotten rid of.<div dir="auto"><br></div><div dir="auto">That's a major annoyance during the wrapping of LAPACK routines for SciPy. </div><div dir="auto"><br></div><div dir="auto">I don't know if this is realistic but the values ilaenv needed can be computed once (or again if hardware is changed) at the install and can be read off by the routines.</div><div dir="auto"><br></div><div dir="auto"><br></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Jan 9, 2018 09:25, "Nathaniel Smith" <<a href="mailto:njs@pobox.com" target="_blank">njs@pobox.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi all,<br>

On 9 January 2018 at 04:40, Ilhan Polat <ilhanpolat@gmail.com> wrote:

I couldn't find an item to place this under, but I think ilaenv -- and
also the pattern of calling a function twice (once with lwork=-1 to
read the optimal block size, then again properly with lwork=<result>)
-- needs to be gotten rid of in LAPACK.

That's a major annoyance during the wrapping of LAPACK routines for
SciPy.

I don't know if this is realistic, but the values ilaenv needs could
be computed once at install time (or again if the hardware changes)
and read off by the routines.
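
To make the annoyance concrete, here is the two-call convention as it
looks from SciPy today (a sketch using the `dgeqrf` wrapper; exact
signatures vary a little between SciPy versions)::

    import numpy as np
    from scipy.linalg import lapack

    a = np.asfortranarray(np.random.rand(5, 3))

    # Call 1: workspace query. lwork=-1 makes dgeqrf skip the
    # factorization and report the optimal workspace size in work[0].
    qr, tau, work, info = lapack.dgeqrf(a, lwork=-1)
    lwork = int(work[0])

    # Call 2: the actual factorization, with the optimal workspace.
    qr, tau, work, info = lapack.dgeqrf(a, lwork=lwork)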

On Jan 9, 2018 09:25, "Nathaniel Smith" <njs@pobox.com> wrote:

Hi all,

As mentioned earlier [1][2], there's work underway to revise and
update the BLAS standard -- e.g. we might get support for strided
arrays and lose xerbla! There's a draft at [3]. They're interested in
feedback from users, so I've written up a first draft of comments
about what we would like as NumPy/SciPy developers. This is very much
a first attempt -- I know there are lots of people on these lists who
are more expert on BLAS than me :-). Please let me know what you
think.

-n

[1] https://mail.python.org/pipermail/numpy-discussion/2017-November/077420.html
[2] https://mail.python.org/pipermail/scipy-dev/2017-November/022267.html
[3] https://docs.google.com/document/d/1DY4ImZT1coqri2382GusXgBTTTVdBDvtD5I14QHp9OE/edit

-----
# Comments from NumPy / SciPy developers on "A Proposal for a Next-Generation BLAS"

These are comments on [A Proposal for a Next-Generation
BLAS](https://docs.google.com/document/d/1DY4ImZT1coqri2382GusXgBTTTVdBDvtD5I14QHp9OE/edit#)
(version as of 2017-12-13), from the perspective of the developers of
the NumPy and SciPy libraries. We hope this feedback is useful, and
welcome further discussion.

## Who are we?

NumPy and SciPy are the two foundational libraries of the Python
numerical ecosystem, and one of their duties is to wrap BLAS and
expose it for the use of other Python libraries. (NumPy primarily
provides a GEMM wrapper, while SciPy exposes more specialized
operations.) It's unclear exactly how many users we have, but we
certainly ship many millions of copies of BLAS every month, and
provide one of the most popular numerical toolkits for both novice
and expert users.

Looking at the original BLAS and LAPACK interfaces, it often seems
that their imagined user is something like a classic supercomputer
consumer, who writes code directly in Fortran or C against the BLAS
API, and where the person writing the code and the person running it
are the same. NumPy/SciPy come from a very different perspective: our
users generally know nothing about the details of the underlying
BLAS; they just want to describe their problem in some high-level way
and have the library make it happen as efficiently as possible -- and
the library itself is often integrated into some larger system (e.g.
a real-time analytics platform embedded in a web server).

When it comes to our BLAS usage, we mostly use only a small subset of
the routines. However, as "consumer software" used by a wide variety
of users with differing degrees of technical expertise, we're
expected to Just Work on a wide variety of systems, and with as many
different vendor BLAS libraries as possible. On the other hand, the
fact that we're working with Python means we don't tend to worry
about small inefficiencies that will be lost in the noise in any
case, and are willing to sacrifice some performance to get more
reliable operation across our diverse userbase.

## Comments on specific aspects of the proposal

### Data Layout

We are **strongly in favor** of the proposal to support arbitrary
strided data layouts. Ideally, this would support strides *specified
in bytes* (allowing for unaligned data layouts), and allow for truly
arbitrary strides, including *zero or negative* values. However, we
think it's fine if some of the weirder cases suffer a performance
penalty.

Rationale: NumPy -- and thus, most of the scientific Python ecosystem
-- only has one way of representing an array: the `numpy.ndarray`
type, which is an arbitrary dimensional tensor with arbitrary
strides. It is common to encounter matrices with non-trivial strides.
For example::

    # Make a 3-dimensional tensor, 10 x 9 x 8
    t = np.zeros((10, 9, 8))
    # Considering this as a stack of eight 10x9 matrices, extract the first:
    mat = t[:, :, 0]

Now `mat` has non-trivial strides on both axes. (If running this in a
Python interpreter, you can see this by looking at the value of
`mat.strides`.) Another case where interesting strides arise is when
performing
["broadcasting"](https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html),
which is the name for NumPy's rules for stretching arrays to make
their shapes match. For example, in an expression like::

    np.array([1, 2, 3]) + 1

the scalar `1` is "broadcast" to create a vector `[1, 1, 1]`. This is
accomplished without allocating memory, by creating a vector with
length = 3 and stride = 0 -- so all the elements share a single
location in memory. Similarly, by using negative strides we can
reverse an array without allocating memory::

    a = np.array([1, 2, 3])
    a_flipped = a[::-1]

Now `a_flipped` has the value `[3, 2, 1]`, while sharing storage with
the array `a = [1, 2, 3]`. Misaligned data is also possible (e.g. an
array of 8-byte doubles with a 9-byte stride), though it arises more
rarely. (An example of when it might occur is an on-disk data format
that alternates between storing a double value and a single byte
value, and which is then memory-mapped.)
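
For instance, a minimal sketch of how such a layout can arise from a
packed record dtype::

    import numpy as np

    # A packed record: an 8-byte double followed by a 1-byte flag
    # (itemsize 9, no padding).
    rec = np.dtype([('value', np.float64), ('flag', np.uint8)])
    data = np.zeros(10, dtype=rec)
    vals = data['value']
    print(vals.strides)   # (9,) -- doubles on a 9-byte stride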

While this array representation is very flexible and useful, it makes
interfacing with BLAS a challenge: how do you perform a GEMM
operation on two arrays with arbitrary strides? Currently, NumPy
attempts to detect a number of special cases: if the strides in both
arrays imply a column-major layout, then call BLAS directly; if one
of them has strides corresponding to a row-major layout, then set the
corresponding `transA`/`transB` argument; etc. -- and if all else
fails, either copy the data into a contiguous buffer, or else fall
back on a naive triple-nested-loop GEMM implementation. (There's also
a check where, if we can determine from the arrays' data pointers and
strides that they're actually transposes of each other, we instead
dispatch to SYRK.)
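
A runnable Python-level sketch of that layout dispatch (illustrative
only -- NumPy's real logic lives in C and covers more cases,
including the SYRK check)::

    import numpy as np
    from scipy.linalg.blas import dgemm

    def matmul_sketch(a, b):
        def as_blas(x):
            if x.flags.f_contiguous:
                return x, 0      # column-major: pass straight through
            if x.flags.c_contiguous:
                return x.T, 1    # row-major: view as transpose, set trans flag
            return np.asfortranarray(x), 0   # all else fails: copy
        fa, ta = as_blas(a)
        fb, tb = as_blas(b)
        return dgemm(1.0, fa, fb, trans_a=ta, trans_b=tb)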

So for us, native stride support in the BLAS has two major
advantages:

- It allows a wider variety of operations to be transparently handled
  using high-speed kernels.

- It reduces the complexity of the NumPy/BLAS interface code.

Any kind of stride support produces both of these advantages to some
extent. The ideal would be if BLAS supported strides specified in
bytes (not elements) and allowed truly arbitrary values, because in
that case we could simply pass our arrays directly through to the
library without having to do any fiddly checking at all. Note that
for these purposes, it's OK if "weird" arrays pay a speed penalty: if
the BLAS didn't support them, we'd just have to fall back on some
even worse strategy for handling them ourselves. And this approach
would mean that if some BLAS vendors do at some point decide to
optimize these cases, then our users will immediately see the
advantages.

### Error-reporting

We are **strongly in favor** of the draft's proposal to replace
XERBLA with proper error codes. XERBLA is fine for one-off programs
in Fortran, but awful for wrapper libraries like NumPy/SciPy.
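
For comparison, LAPACK already returns `info` codes, and our wrappers
turn them into ordinary Python exceptions -- a sketch of the pattern,
using SciPy's `dgetrf` wrapper::

    import numpy as np
    from scipy.linalg import lapack

    a = np.asfortranarray(np.random.rand(4, 4))
    lu, piv, info = lapack.dgetrf(a)
    if info < 0:
        # with xerbla, this condition would print to stderr (and
        # possibly abort) inside Fortran, out of the wrapper's control
        raise ValueError('illegal value in argument %d of dgetrf' % -info)

An error code lets the wrapper decide how to report the problem; a
hard-wired XERBLA does not.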

### Handling of exceptional values

A recurring issue in low-level linear-algebra libraries is that they
may crash, freeze, or produce unexpected results when receiving
non-finite inputs. Of course, as a wrapper library, NumPy/SciPy can't
control what kind of strange data our users might pass in, and we
have to produce sensible results regardless, so we sometimes
accumulate hacks like having to validate all incoming data for
exceptional values.
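
A minimal sketch of the kind of guard we end up writing (cf. the
`check_finite` flag on `scipy.linalg` routines)::

    import numpy as np

    def guarded_solve(a, b, check_finite=True):
        # asarray_chkfinite raises ValueError if the input contains
        # NaN or Inf, protecting the LAPACK call below
        chk = np.asarray_chkfinite if check_finite else np.asarray
        return np.linalg.solve(chk(a), chk(b))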

We support the proposal's recommendation of "NaN interpretation
(4)".

We also note that, contrary to the note under "Interpretation (1)",
in the R statistical environment NaN does *not* represent missing
data. R maintains a clear distinction between "NA" (missing) values
and "NaN" (invalid) values. This is important for various statistical
procedures such as imputation, where you might want to estimate the
true value of an unmeasured variable, but you don't want to estimate
the true value of 0/0. For efficiency, R does perform some
calculations on NA values by storing them as NaNs with a specific
payload; in the R context, therefore, it's particularly valuable if
operations **correctly propagate NaN payloads**. NumPy does not yet
provide first-class missing value support, but we plan to add it
within the next 1-2 years, and when we do it seems likely that we'll
have similar needs to R.
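
To make "NaN payload" concrete, a small sketch of encoding and
reading one (R is commonly described as encoding NA_real_ as a quiet
NaN with payload 1954)::

    import struct

    QNAN = 0x7ff8000000000000          # quiet NaN, payload 0
    PAYLOAD_MASK = 0x0007ffffffffffff  # low 51 mantissa bits

    def nan_with_payload(p):
        return struct.unpack('<d', struct.pack('<Q', QNAN | p))[0]

    def payload(x):
        return struct.unpack('<Q', struct.pack('<d', x))[0] & PAYLOAD_MASK

    na = nan_with_payload(1954)
    print(payload(na))         # 1954
    print(payload(na + 1.0))   # whether the payload survives arithmetic
                               # depends on the hardware and the library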

### BLAS G2

NumPy has a rich type system, including 16-, 32-, and 64-bit floats
(plus extended precision on some platforms) and a similar set of
complex types, and is further extensible with new types. We have a
similarly rich dispatch system for operations, which attempts to find
the best-matching optimized loop for an arbitrary set of input types.
We're thus quite interested in the proposed mixed-type operations,
and if implemented we'll probably start transparently taking
advantage of them (i.e., in cases where we're currently upcasting to
perform a requested operation, we may switch to using the native
precision operation without requiring any changes in user code).
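
For example, today a mixed-precision matrix product upcasts::

    import numpy as np

    a = np.ones((2, 2), dtype=np.float32)
    b = np.ones((2, 2), dtype=np.float64)
    print((a @ b).dtype)   # float64: the float32 operand is upcast and
                           # a double-precision GEMM is called; a
                           # mixed-type BLAS could skip the upcast copy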

Two comments, though:

1. The various tricks to make names less redundant (e.g., specifying
just one type if they're all the same) are convenient for those
calling these routines by hand, but rather inconvenient for those of
us who will be generating a table mapping different type combinations
to different functions (see the sketch after this list). When
designing a compression scheme for names, please remember that the
scheme will have to be unambiguously implemented in code by multiple
projects. Another option to consider: ask vendors to implement the
full "uncompressed" names, and then provide a shim library mapping
short "convenience" names to their full forms.

2. Regarding the suggestion that the naming scheme will allow for a
wide variety of differently typed routines, but that only some subset
will be implemented by any particular vendor: we are worried about
this subset business. For a library like NumPy that wants to wrap
"all" the provided routines, while supporting "all" the different
BLAS libraries... how will this work? **How do we know which routines
any particular library provides?** What if new routines are added to
the list later?
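
The sketch referred to above -- the kind of table a wrapper ends up
maintaining, with entirely hypothetical routine names::

    # All routine names below are made up for illustration.
    _gemm_by_types = {
        ('float32', 'float32', 'float32'): 'blas_sgemm',
        ('float64', 'float64', 'float64'): 'blas_dgemm',
        ('float32', 'float64', 'float64'): 'blas_gemm_s_d_d',  # mixed
    }

    def find_gemm(a_dtype, b_dtype, out_dtype):
        # A compressed naming scheme means re-implementing its
        # expansion rules, identically, in every wrapping project --
        # and a subset scheme means some keys may simply be missing.
        return _gemm_by_types[(a_dtype, b_dtype, out_dtype)]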

### Reproducible BLAS

This is interesting, but we don't have any particularly useful
comments.
### Batch BLAS<br>
<br>
NumPy/SciPy actually provide batched operations in many cases,<br>
currently implemented using a `for` loop around individual calls to<br>
BLAS/LAPACK. However, our interface is relatively restricted compared<br>
to some of the proposals considered here: generally all we need is the<br>
ability to perform an operation on two "stacks" of size-homogenous<br>
matrices, with the offset between matrices determined by an arbitrary<br>
(potentially zero) stride.<br>
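
For example, `np.matmul` broadcasts over stacks of matrices::

    import numpy as np

    a = np.random.rand(100, 3, 3)   # a stack of one hundred 3x3 matrices
    b = np.random.rand(100, 3, 3)
    c = np.matmul(a, b)             # 100 independent small GEMMs

    # Broadcasting gives the zero-stride-between-matrices case: one
    # right-hand matrix shared across the whole stack, without copying.
    shared = np.random.rand(3, 3)
    c2 = a @ shared                 # also shape (100, 3, 3)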

### Fixed-point BLAS

This is interesting, but we don't have any particularly useful
comments.

--
Nathaniel J. Smith -- https://vorpus.org