In regard to feature request https://github.com/numpy/numpy/issues/16469:
it was suggested to send this to the mailing list. I think I can make a strong
case for why supporting this naming convention would make sense, namely that
it would follow other frameworks that often work alongside numpy, such as
tensorflow. For backward compatibility, it can simply be an alias for
np.concatenate.
I often convert portions of code from tf to np; it is as simple as changing
the base module, e.g. tf.expand_dims -> np.expand_dims. I do this either
while debugging (e.g. converting tf to np, without eager execution, to debug
a portion of the code) or during prototyping, e.g. developing in numpy and
then converting to tf.
I have found myself on more than one occasion getting errors because of this
particular function, np.concatenate: it is unnecessarily long, and I imagine
other people run into the same problem. Pandas uses concat (torch, at the
other extreme, uses simply cat, which I don't think is as descriptive).
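The backward-compatible alias could be as small as a one-line assignment. Here is a minimal sketch; note that "concat" is the proposed name, emulated locally, not (at the time of this post) part of NumPy's public API:

```python
import numpy as np

# Minimal sketch of the proposal: np.concat would simply alias np.concatenate.
# We emulate the alias locally; "concat" is the proposed name, not an
# existing NumPy attribute at the time of this post.
concat = np.concatenate

result = concat([np.array([1, 2]), np.array([3, 4])])
print(result)  # [1 2 3 4]

# Because it is the same function object, the behavior, signature, and
# documentation are identical to np.concatenate.
assert concat is np.concatenate
```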
Hi all,
I'd like to share this announcement blog post about the creation of a
consortium for array and dataframe API standardization here:
https://data-apis.org/blog/announcing_the_consortium/. It's still in the
beginning stages, but starting to take shape. We have participation from
one or more maintainers of most array and tensor libraries - NumPy,
TensorFlow, PyTorch, MXNet, Dask, JAX, Xarray. Stephan Hoyer, Travis
Oliphant and I have been providing input from a NumPy perspective.
The effort is very much related to some of the interoperability work we've
been doing in NumPy (e.g. it could provide an answer to what's described in
https://numpy.org/neps/nep-0037-array-module.html#requesting-restricted-sub…
).
At this point we're looking for feedback from maintainers at a high level
(see the blog post for details).
Also important: the python-record-api tooling and the data in its repo have very
granular API usage data, of the kind we could really use when making
decisions that impact backwards compatibility.
Cheers,
Ralf
Hi all,
There will be a NumPy Community meeting Wednesday September 16th at 1pm
Pacific Time (20:00 UTC). Everyone is invited and encouraged to
join in and edit the work-in-progress meeting topics and notes at:
https://hackmd.io/76o-IxCjQX2mOXO_wwkcpg?both
Best wishes,
Sebastian
I propose adding support for datetime64/timedelta64 in linspace and solicit
feedback on the feature. As is, linspace raises UFuncTypeError when
parameters start and stop are datetime64/timedelta64. The complementary
function arange supports these types. Work was started on this feature in PR
14700 <https://github.com/numpy/numpy/pull/14700> but has stalled. I would
like to complete it, but there are some issues worth getting feedback on.
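To illustrate the asymmetry described above, arange already accepts these types today; a minimal sketch:

```python
import numpy as np

# arange already supports datetime64: this yields four consecutive days
# at day resolution, while the equivalent linspace call raises.
days = np.arange(np.datetime64("2020-01-01"), np.datetime64("2020-01-05"))
print(days)        # ['2020-01-01' '2020-01-02' '2020-01-03' '2020-01-04']
print(days.dtype)  # datetime64[D]
```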
1. Supporting datetime64/timedelta64 will require a special case code
path within linspace. The code path is selected based on the start
parameter data type.
2. The output dtype has to be explicitly set.
3. The step size resolution is determined by the coarser of the two
resolutions given by start and dtype.
Issue 3 may lead to an unexpected result for an end-user. For example,
>>> import numpy as np
>>> np.linspace(np.timedelta64(0, "s"), np.timedelta64(1, "s"), 4,
...             dtype="timedelta64[ms]")
array([   0,    0,    0, 1000], dtype='timedelta64[ms]')
The existing solution in PR 14700 does not override the end-user's start
and dtype resolution. In this case, the end-user would have to set both
start and dtype to "ms" resolution to get the expected result.
>>> np.linspace(np.timedelta64(0, "ms"), np.timedelta64(1, "s"), 4,
...             dtype="timedelta64[ms]")
array([   0,  333,  666, 1000], dtype='timedelta64[ms]')
In PR 14700, there is some discussion of "NaT" handling. In my
implementation, "NaT" works the same as "NaN" and I am not aware of any
corner cases.
I am pleased to announce the release of SfePy 2020.3.
Description
-----------
SfePy (simple finite elements in Python) is software for solving systems of
coupled partial differential equations by the finite element method. It is
distributed under the new BSD license.
Home page: https://sfepy.org
Mailing list: https://mail.python.org/mm3/mailman3/lists/sfepy.python.org/
Git (source) repository, issue tracker: https://github.com/sfepy/sfepy
Highlights of this release
--------------------------
- new script for visualizations based on pyvista
- generalized Yeoh hyperelastic term + example
For full release notes see [1].
Cheers,
Robert Cimrman
[1] http://docs.sfepy.org/doc/release_notes.html#id1
---
Contributors to this release in alphabetical order:
Robert Cimrman
Jan Heczko
Vladimir Lukes
Dear all,
I would like to know your opinion on how to address a specific need of
the new PEP 637:
https://github.com/python/peps/blob/master/pep-0637.txt
This PEP would make syntax like the following valid:
obj[2, x=23]
obj[2, 4, x=23]
which would resolve to calls of the form
type(obj).__getitem__(obj, 2, x=23)
type(obj).__getitem__(obj, (2, 4), x=23)
and similar for set and del.
After discussion, we currently have one open point we are unsure how
to address: what to pass when no positional index is given, i.e.:
obj[x=23]
We are unsure whether we should resolve this call with None or with the
empty tuple as the positional index:
type(obj).__getitem__(obj, None, x=23)
type(obj).__getitem__(obj, (), x=23)
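To make the two candidate resolutions concrete, here is a small sketch using a hypothetical Grid class; since the bracket syntax is not valid Python yet, the calls invoke __getitem__ directly in the resolved form the PEP describes:

```python
# Hypothetical class illustrating PEP 637-style keyword indices.
# obj[x=23] is not valid syntax yet, so we spell out the resolved calls.
class Grid:
    def __getitem__(self, index, *, x=None):
        return (index, x)

g = Grid()

# What obj[2, x=23] would resolve to:
print(type(g).__getitem__(g, 2, x=23))     # (2, 23)

# The open question for obj[x=23]: a positional index of None, or of ()?
print(type(g).__getitem__(g, None, x=23))  # (None, 23)
print(type(g).__getitem__(g, (), x=23))    # ((), 23)
```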
You can see a detailed discussion in the PEP at L913
https://github.com/python/peps/blob/1fb19ac3a57c9793669ea9532fb840861d4d756…
One of the commenters on python-ideas reported that using None might
introduce an issue in numpy, as None is used to create new axes, hence
the proposal for rejection of None as a solution.
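For context on the numpy concern: in indexing, None is an alias for np.newaxis, so a None positional index already carries a meaning. A minimal illustration:

```python
import numpy as np

# None in an index inserts a new axis; np.newaxis is literally None.
a = np.arange(6).reshape(2, 3)
print(a[None].shape)  # (1, 2, 3): a leading axis was added
assert np.newaxis is None

# So resolving obj[x=23] to __getitem__(obj, None, x=23) would be
# indistinguishable from an explicit new-axis request for arrays.
```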
However, we are unsure how strongly this would affect numpy and
similar packages, as well as what developers would find more natural to
receive in that case. We would like to hear your opinion on the topic.
Thank you for your help.
--
Kind regards,
Stefano Borini