This surprised me: where did the \r go? ast.literal_eval() has the same problem.
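A minimal reproduction of the behaviour in question might look like this (my own sketch; it assumes the surprise comes from compile() applying universal-newline translation to the source text, including inside string literals):

```python
import ast

src = '"""one\r\ntwo"""'              # a raw CR LF inside a triple-quoted literal
print(repr(eval(src)))                # 'one\ntwo'  -- the \r has been normalized away
print(repr(ast.literal_eval(src)))    # same result: the parser sees already-normalized source
```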
Is this a bug/worth fixing?
On Sat, 15 Jun 2013 00:44:11 +0200 (CEST)
victor.stinner <python-checkins(a)python.org> wrote:
> changeset: 84127:6661a8154eb3
> user: Victor Stinner <victor.stinner(a)gmail.com>
> date: Sat Jun 15 00:37:46 2013 +0200
> Issue #3329: Add new APIs to customize memory allocators
> * Add a new PyMemAllocators structure
> * New functions:
> - PyMem_RawMalloc(), PyMem_RawRealloc(), PyMem_RawFree(): GIL-free memory
> allocator functions
> - PyMem_GetRawAllocators(), PyMem_SetRawAllocators()
> - PyMem_GetAllocators(), PyMem_SetAllocators()
> - PyMem_SetupDebugHooks()
> - _PyObject_GetArenaAllocators(), _PyObject_SetArenaAllocators()
My two cents, but I would prefer if this whole changeset was reverted.
I think it adds too much complexity to the memory allocation APIs
for a pretty specialized benefit. IMHO, we should be able to get by with
fewer allocation APIs (why the new _Raw APIs?) and fewer hook-setting functions.
Issue #18224 (http://bugs.python.org/issue18224) highlights a problem on
Windows with the pydoc script provided with venvs created by pyvenv. On
POSIX, the script is named pydoc and causes no problems; on Windows, it is
called pydoc.py, and this causes problems because it shadows the stdlib pydoc
module. There seem to be a few ways of addressing this:
1. Remove the pydoc script altogether from created venvs, on Windows but
also on POSIX (for consistency).
2. Rename the pydoc script on both Windows and POSIX (e.g. to pydocs.py on
Windows and pydocs on POSIX).
3. Rename the pydoc.py script to pydoc-script.py and introduce a simple .exe
launcher pydoc.exe adjacent to it (which is how setuptools and distlib
handle installed scripts).
The first two approaches are backwards-incompatible, while the third is less
likely to lead to breakage, but involves adding a Windows script launcher to
Python. While this is a bigger change, I think any built-in Python installer
functionality should include such a launcher (as setuptools and distlib do).
Still, that's probably a discussion for another day.
Does anyone have any comments? Approach #2 seems the most appropriate. I
assume it would be reasonable to implement this in both 3.3 and 3.4, as it's
not a change in core Python APIs.
In the absence of adverse feedback here, I propose to implement approach #2
on both 3.3 and 3.4.
Hi. This is the last place where I want to ask a question. I have searched
for lots of tutorials and documentation on the web, but didn't find a
decent one on developing extensions for Python 3 using a custom compiler
(mingw32, nvcc). Please help me.
PS: Please don't just point me to the Python documentation. It is not good
for beginners; it doesn't elaborate on the calls and their implementation.
Aditya Avinash Atluri
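A minimal sketch of the usual starting point for building an extension with a non-default compiler; the module name "demo" and the source file "demo.c" are placeholders, and distutils is what shipped with Python at the time:

```python
# setup.py -- minimal sketch; "demo" and "demo.c" are placeholder names
from distutils.core import setup, Extension

setup(
    name="demo",
    version="0.1",
    ext_modules=[Extension("demo", sources=["demo.c"])],
)
```

The compiler is then selected with the standard distutils switch, e.g. `python setup.py build_ext --compiler=mingw32`. nvcc is not one of distutils' built-in compiler classes, so CUDA sources generally need a custom build_ext step or an external build system.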
Now we have (at least) the following libraries backported from 3.2+ to
older versions of Python by members of the core team:
There are also unofficial backports like billiard (multiprocessing).
I would be happy if all those were more discoverable by the community
at large. Having a single namespace for backports would be great but
my spidey sense forebodes large flocks of bike sheds flying that way.
Can we put links to those backports in the docs of older versions of
Python? Most users would be better off using the updated packages
while still deploying on an older release of Python.
IRC: ambv on #python-dev
As much as I love Python, the following drives me crazy, and I wish
that some future version would come up with a more consistent approach to
this. And please don't reply with "Too bad if you don't know what type your
data are..." - if I want to implement some generic functionality, I want to
avoid constraints on the user where not absolutely necessary, and I believe
this approach is truly Pythonic.
OK - here comes the problem set. These are in fact several related issues.
Clearly, a solution exists for each of them, but you will have to admit
that they are all very different in style and therefore unnecessarily
complicate matters. If you want to write code which works under many
circumstances, it will become quite convoluted.
1. Testing for None:
From what I read elsewhere, the preferred solution is `if x is None` (or
`if x is not None` in the opposite case). This is fine and it works for
scalars, lists, sets, numpy ndarrays,...
2. Testing for empty lists or empty ndarrays:
In principle, `len(x) == 0` will do the trick. **BUT** there are several pitfalls:
- `len(scalar)` raises a TypeError, so you will have to use try and
except or find some other way of testing for a scalar value
- `len(numpy.array(0))` (i.e. a scalar coded as numpy array) also raises
a TypeError ("unsized object")
- `len([[]])` (a list containing only an empty list) returns a length of 1,
which is somehow understandable,
but - I would argue - perhaps not what one might expect initially
Alternatively, numpy arrays have a size attribute, and
`numpy.array(0).size`, `numpy.array(8.).size`, and
`numpy.array([8.]).size` all return what you would expect. And even
`numpy.array([]).size` gives you 0. Now, if I could convert everything to
a numpy array, this might work. But have you ever tried to assign a list of
mixed data types to a numpy array? `numpy.array(["a",1,[2,3],(888,9)])`
will fail, even though the list inside is perfectly fine as a list.
3. Testing for scalar:
Let's suppose we knew the number of non-empty elements, and this is 1.
Then there are occasions when you want to know whether you have in fact `6`
or `[6]` as an answer (or maybe even `[[6]]`). Obviously, this question is
also relevant for numpy arrays. For the latter, a combination of size and
ndim can help. For other objects, I would be tempted to use something like
`isiterable()`; however, this function doesn't exist, and there are
numerous discussions of how one could or should find out if an object is
iterable - none of them truly intuitive. (And is it true that **every**
iterable object is a descendant of collections.Iterable?)
4. Finding the number of elements in an object:
From the discussion above, it is already clear that `len(x)` is not very
robust for doing this. Just to mention another complication: `len("abcd")`
returns 4, even though this is only one string. Of course this is correct,
but it's a nuisance if you need to find the number of elements of a list
of strings and if it can happen that you have a scalar string instead of a
1-element list. And, believe me, such situations do occur!
5. Forcing a scalar to become a 1-element list:
Unfortunately, `list(77)` throws an error, because 77 is not iterable.
`numpy.array(77)` works, but - as we saw above - there will be no len
defined for it. Simply writing `[x]` is dangerous, because if x is a list
already, it will create a nested list, which you generally don't want. Also,
`numpy.array([x])` would create a 2D array if x is already a 1D array or a
list. Often, it would be quite useful to know for sure that a function
result is provided as a list, regardless of how many elements it contains
(because then you can iterate over `res` without risking an exception). Does
anyone have a good suggestion for this one? A rough sketch of what I end up
doing follows after this list.
6. Detecting None values in a list:
This is just for completeness. I have seen solutions using `all` which
solve this problem (see [question #1270920]). I haven't dug into them
extensively, but I fear that these will also suffer from the
above-mentioned issues if you don't know for sure if you are starting from
a list, a numpy array, or a scalar.
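For the record, here is the sort of normalization helper I end up writing to paper over items 2 to 6; the names (`is_iterable`, `as_list`, `n_elements`) and the exact behaviour are my own choices, not anything the stdlib or numpy provides:

```python
import collections.abc
import numpy

def is_iterable(x):
    """One possible 'isiterable': strings and bytes count as scalars here.
    Note: an object that only defines __getitem__ is iterable via iter()
    but is not an instance of collections.abc.Iterable, so this is not perfect."""
    return isinstance(x, collections.abc.Iterable) and not isinstance(x, (str, bytes))

def as_list(x):
    """Force anything into a list: None -> [], scalar -> [x], iterable -> list of its elements."""
    if x is None:
        return []
    if isinstance(x, numpy.ndarray):
        return numpy.atleast_1d(x).tolist()   # also handles 0-d arrays like numpy.array(0)
    if is_iterable(x):
        return list(x)
    return [x]

def n_elements(x):
    """Number of elements, counting a string or a number as a single element."""
    return len(as_list(x))

print(as_list(None), as_list(6), as_list("abcd"), as_list([6]))         # [] [6] ['abcd'] [6]
print(n_elements(numpy.array(0)), n_elements(numpy.array([1.0, 2.0])))  # 1 2
```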
Enough complaining. Here comes my prayer to the python gods: **Please**
- add a good `isiterable` function
- add a `size` attribute to all objects (I wouldn't mind if this is None
in case you don't really know how to define the size of something, but it
would be good to have it, so that `anything.size` would never throw an error)
- add an `isscalar` function which would at least try to test if something
is a scalar (meaning a single entity). Note that this might give different
results compared to `isiterable`, because one would consider a scalar
string as a scalar even though it is iterable. And if `isscalar` were to
throw an exception in cases where it doesn't know what to do, fine - that
can easily be caught.
- enable the `len()` function for scalar variables such as integers or
floats. I would tend to think that 1 is a natural answer to what the length
of a number is.
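For what it's worth, numpy already approximates part of this wish list; a few illustrative calls (results as I understand numpy's semantics, worth double-checking on your version):

```python
import numpy

# numpy.size() is close to the wished-for universal `size`:
print(numpy.size(5), numpy.size("abcd"), numpy.size([1, 2, 3]), numpy.size([]))   # 1 1 3 0

# numpy.isscalar() is close to the wished-for `isscalar`
# (note: it treats strings as scalars, but 0-d arrays and None as non-scalars):
print(numpy.isscalar(5), numpy.isscalar("abcd"), numpy.isscalar([1]), numpy.isscalar(None))
# True True False False
```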
Guido and I are accepting PEP 442 (Safe object finalization) on the
condition that finalizers are only ever called once globally.
Congratulations to Antoine on writing yet another PEP that deeply
touches the core language in a way that everyone can agree is an
improvement. I look forward to reviewing the code.
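For anyone who has not read the PEP, a small sketch of what the "called once" condition means in practice (this assumes the CPython 3.4 semantics that PEP 442 specifies; on earlier versions the finalizer of a resurrected object could run again):

```python
class Resurrector:
    def __del__(self):
        print("finalizer ran")
        global survivor
        survivor = self        # resurrect the object from inside its own finalizer

survivor = None
Resurrector()                  # prints "finalizer ran" once and resurrects the object
survivor = None                # the object is now really freed, but __del__ does NOT run again
```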
I would like to remove the "GIL must be held" restriction from
PyMem_Malloc(). In my opinion, the restriction was motivated by a bug in
Python, a bug fixed by issue #3329. Let me explain why.
The PyMem_Malloc() function is a thin wrapper around malloc(). It returns
NULL if the size is larger than PY_SSIZE_T_MAX and has well-defined
behaviour for PyMem_Malloc(0) (it does not return NULL). So it is surprising
to read in Include/pymem.h:
"The GIL must be held when using these APIs."
The reason is more surprising: in debug mode, PyMem_Malloc() is no
longer a thin wrapper around malloc(); it internally calls
PyObject_Malloc(), the "Python allocator" (called pymalloc). (Many
other checks are done in debug mode, but that is unrelated to my point.)
The problem is that PyObject_Malloc() is not thread-safe: the GIL must
be held when calling it. Some history:
fb45791150d1 (Mar 23 2002) "gives Python a debug-mode pymalloc"
f294fdd18b5b (Mar 28 2002) removes the "check API family"
e16dbf875303 (Apr 22 2002) indirectly redirects PyMem_Malloc() to
PyObject_Malloc() in debug mode
b6aff7a59803 (Sep 28 2009) reintroduces API checks
So the GIL issue is almost as old as the debug mode for Python memory allocators.
My patch attached to http://bugs.python.org/issue3329 changes the
design of the debug memory allocators: they are now wrappers (hooks) around
the underlying memory allocator (PyMem: malloc, PyObject: pymalloc),
instead of always redirecting to pymalloc (e.g. PyObject_Malloc).
Using my patch, PyMem_Malloc() now always calls malloc(), even in
debug mode. Removing the "GIL must be held" restriction is now safe.
Do you agree?
Could it cause a backward compatibility issue? PyMem_Malloc() and
PyMem_MALLOC() call malloc(), unless the Python source code was
manually modified. Does that use case concern many developers?
Removing the GIL restriction would help to replace direct calls to
malloc() with PyMem_Malloc(). Using PyMem_SetAllocators(), an
application would be able to replace memory allocators, and these
allocators would be used "everywhere".
=> see http://bugs.python.org/issue18203
Ethan, did you forget to run ``hg add`` before committing? If not, then why
the heck did we argue over enums for so long if this was all it took to
make everyone happy? =)
On Fri, Jun 14, 2013 at 3:31 AM, ethan.furman <python-checkins(a)python.org> wrote:
> changeset: 84117:fae92309c3be
> parent: 84115:af27c661d4fb
> user: Ethan Furman <ethan(a)stoneleaf.us>
> date: Fri Jun 14 00:30:27 2013 -0700
> Closes issue 17947. Adds PEP-0435 (Enum, IntEnum) to the stdlib.
> Doc/library/datatypes.rst | 1 +
> 1 files changed, 1 insertions(+), 0 deletions(-)
> diff --git a/Doc/library/datatypes.rst b/Doc/library/datatypes.rst
> --- a/Doc/library/datatypes.rst
> +++ b/Doc/library/datatypes.rst
> @@ -30,3 +30,4 @@
> + enum.rst
> Repository URL: http://hg.python.org/cpython