I apologise in advance. This is slightly off topic. This will be my only post on the subject.
PyCXX (http://sourceforge.net/projects/cxx/) accomplishes roughly the same as Boost::Python (a C++ wrapper for CPython), only without requiring Boost.
The original code has become tangled and bloated.
Over the winter I've rewritten it using C++11, whose new features have allowed for more direct, compact, and readable code.
I've called my rewrite PiCxx and put it up here: https://github.com/p-i-/PiCxx
PiCxx currently supports only Python 3, as that is all I need for my own use; it wouldn't be too much work to add 2.x support. Also, I've only tested it on my OSX system (although it is designed to be cross-platform).
Improvements, suggestions, bug fixes, etc are all welcome.
Well, that's all. I originally subscribed to this list because I thought I might need some help navigating some of the dark corners of the CPython API, but thanks to a handful of Samurai on SO and IRC I seem to have scraped through.
I will unsubscribe in due course; it's been charming to spend a while in the belly of the snake, and humbling to witness how hard you guys work.
Happy spring everyone,
We are having random, rare, nonreproducible segfaults/hangs with python2 on
Ubuntu 14.04 in EC2. I've managed to attach GDB to some hung ones, and there appears to be clear memory corruption in the 'interned' hash table,
causing lookdict_string() to spin forever because all remaining slots have
a garbage 'key' pointer. This happens just loading the 'site' module
dependencies, like 're' or 'codecs', before any of our code even gets run.
So we then tried running it under valgrind, and we got a lot of nasty
errors. Even after reading Misc/README.valgrind, which explains that
*uninitialized* reads are OK, I still don't see how reading from *freed*
memory would ever be safe, or why the suppression file thinks that's OK:
$ valgrind ./pymd79/bin/python -c ""
==14651== Memcheck, a memory error detector
==14651== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==14651== Using Valgrind-3.9.0 and LibVEX; rerun with -h for copyright info
==14651== Command: ./pymd79/bin/python -c
==14651== Invalid read of size 4
==14651== at 0x461E40: Py_ADDRESS_IN_RANGE (obmalloc.c:1911)
==14651== by 0x461EA3: PyObject_Free (obmalloc.c:994)
==14651== by 0x4789AB: tupledealloc (tupleobject.c:235)
==14651== by 0x5225BA: code_dealloc (codeobject.c:309)
==14651== by 0x4CFFC3: load_source_module (import.c:1100)
==14651== by 0x4D0E16: import_submodule (import.c:2700)
==14651== by 0x4D1E19: PyImport_ImportModuleLevel (import.c:2515)
==14651== by 0x4AE49A: builtin___import__ (bltinmodule.c:49)
==14651== by 0x422C89: PyObject_Call (abstract.c:2529)
==14651== by 0x4B12E5: PyEval_EvalFrameEx (ceval.c:3902)
==14651== by 0x4B6A47: PyEval_EvalCodeEx (ceval.c:3265)
==14651== by 0x4B6B71: PyEval_EvalCode (ceval.c:667)
==14651== Address 0x5bcd020 is 2,256 bytes inside a block of size 2,801
==14651== at 0x4C28577: free (in
==14651== by 0x4DB2B0: PyMarshal_ReadLastObjectFromFile (marshal.c:1145)
==14651== by 0x4CFE71: load_source_module (import.c:801)
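For what it's worth, the read in Py_ADDRESS_IN_RANGE is deliberate: pymalloc probes the putative pool header of an address to decide whether it owns the block, and that probe can land in uninitialized or already-freed memory by design, which is why the shipped suppression file whitelists it. A sketch of running with those suppressions (paths assume a CPython source checkout next to the build; `./pymd79/bin/python` is the binary from the report above):

```shell
# Run under memcheck with the suppressions that ship in the CPython tree.
valgrind --tool=memcheck \
         --suppressions=Misc/valgrind-python.supp \
         ./pymd79/bin/python -c ""

# For reports free of pymalloc noise entirely, README.valgrind also
# suggests rebuilding without pymalloc:
#   ./configure --without-pymalloc && make
```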
On 22.03.15 17:33, paul.moore wrote:
> changeset: 95126:0b2993742650
> user: Paul Moore <p.f.moore(a)gmail.com>
> date: Sun Mar 22 15:32:36 2015 +0000
> #23657 Don't explicitly do an isinstance check for str in zipapp
> As a result, explicitly support pathlib.Path objects as arguments.
> Also added tests for the CLI interface.
Congratulations on your first commit, Paul!
Does it make sense for a Linux distribution (Arch, to be specific) to
provide its default Python package compiled with valgrind support? I thought
this flag was just about silencing false positives generated by valgrind
(in other words, a workaround for "bugs" in other software), and useful
only when developing Python itself or C extensions.
The same distribution also compiles Python as a shared library by default,
and this has a quite noticeable impact on performance on x86-64
(surprisingly, to me) for CPU-bound processing; in a few test cases I
measured valgrind+shared Python running at 66% of the speed of a plain
./configure && make Python on my system. Are these settings reasonable for
general users? If they are good defaults, why aren't they the defaults upstream?
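For reference, these are the two build knobs in question, assuming the distribution uses CPython's standard configure options (note that --with-valgrind only makes pymalloc stand down when actually running under valgrind; the shared-library build is the likelier source of the slowdown):

```shell
# Valgrind support: obmalloc checks RUNNING_ON_VALGRIND at startup and
# falls back to plain malloc when a valgrind run is detected.
./configure --with-valgrind

# Shared libpython: position-independent code, which costs some speed on
# x86-64 for CPU-bound workloads.
./configure --enable-shared

make
```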
Something that hit me today, which might become a more common issue
when the Windows installers move towards installing to the user
directory, is that there appear to be some bugs in the handling of
non-ASCII characters in the interpreter's install path.
Two that I spotted are a failure of the "script wrappers" installed by
pip to work with a non-ASCII interpreter path (reported to distlib)
and a possible issue with the py.exe launcher when a script has
non-ASCII in the shebang line (not reported yet because I'm not clear
on what's going on).
I've only seen Windows-specific issues - I don't know how common
non-ASCII paths for the python interpreter are on Unix or OSX, or
whether the more or less universal use of UTF-8 on Unix makes such
issues less common. But if anyone has an environment that makes
testing on non-ASCII install paths easy, it might be worth doing some
checks just so we can catch any major ones before 3.5 is released.
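If it helps anyone get started, here's a minimal portable smoke test (a sketch, not a reproduction of the reported bugs; the directory name is an arbitrary non-ASCII example): it creates a venv under a non-ASCII path and checks that the interpreter inside it starts.

```python
import os
import subprocess
import tempfile
import venv

# Create a venv under a directory whose name contains non-ASCII
# characters, then check that the interpreter inside it runs.
with tempfile.TemporaryDirectory() as tmp:
    env_dir = os.path.join(tmp, 'pythön-tëst')
    venv.create(env_dir, with_pip=False)
    bindir = 'Scripts' if os.name == 'nt' else 'bin'
    exe = os.path.join(env_dir, bindir, 'python')
    result = subprocess.run([exe, '-c', 'print("ok")'],
                            capture_output=True, text=True)
    print(result.stdout.strip())
```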
On which note, I'm assuming neither of the issues I've found are major
blockers. "pip.exe doesn't work if Python is installed in a directory
with non-ASCII characters in the name" can be worked around by using
python -m pip, and the launcher issue by using a generic shebang like
Thanks for the feedback, and apologies for my late reply. I have to
say, I'm not entirely sold on the argument for raising an exception on
overflow.
First, I'll note what happens if we overflow an IPv4Address:
>>> ipaddress.IPv4Address('255.255.255.255') + 1
Traceback (most recent call last):
ipaddress.AddressValueError: 4294967296 (>= 2**32) is not permitted
as an IPv4 address
Now, I used "IPv4Interface() + 1" to mean "Give me the IP next to the
current one in the current subnet", knowing from the context that the
address would be valid and available.
>>> host = ipaddress.IPv4Interface('10.0.0.2/24')
>>> peer = host + 1
In this context, I would welcome an exception, as it would certainly be
an error if I overflowed the subnet.
However, there are also situations in which overflowing would be valid
and expected, e.g. as a way to skip to the "same" IP in the next subnet:
>>> ip = ipaddress.IPv4Interface('10.0.0.42/24')
>>> ip + ip.network.num_addresses
It's not even a hypothetical example; I've been working on a distributed
embedded system where all the hosts have two (redundant) addresses
differing only by their subnet; this could be a convenient way to calculate
one address from the other.
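For what it's worth, both use cases can be written today against the .ip attribute (plain IPv4Address arithmetic), which sidesteps the question of what IPv4Interface + int should return; a sketch:

```python
import ipaddress

# "Give me the address next to this one" via the .ip attribute.
host = ipaddress.IPv4Interface('10.0.0.2/24')
peer = host.ip + 1                       # IPv4Address('10.0.0.3')

# "Same" host in the next subnet: add the subnet size.
ip = ipaddress.IPv4Interface('10.0.0.42/24')
twin = ip.ip + ip.network.num_addresses  # IPv4Address('10.0.1.42')

# Overflowing a plain address raises, as shown above:
try:
    ipaddress.IPv4Address('255.255.255.255') + 1
    overflowed = False
except ipaddress.AddressValueError:
    overflowed = True
```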
There's an additional issue with raising an exception, and that is that
it still won't catch overflow errors in my example use case:
>>> host = ipaddress.IPv4Interface('10.0.0.254/24')
>>> peer = host + 1
This doesn't overflow and does not trigger an exception, but the
resulting peer address is still invalid (it's the subnet broadcast
address, not a host address). As such, the exception isn't even useful
as an error detection tool. (I'll not suggest raising an exception when
hitting the broadcast or network address; that way lies madness.)
As for consistency with IPv4Address, I can argue either way:
"Overflowing an IPv4Interface raises AddressValueError just like with an IPv4Address."
"An IPv4Interface behaves exactly like an IPv4Address, except that it
also has an associated subnet mask." (This is essentially how the type
is currently documented).
Hi, I'm an art and CG student learning Python, and today's exercise was
about positions in a tiled room. Having to check whether a position was
inside the room, given that in a 1x1 room 0.0 was considered in and 1.0 was
considered out, got me thinking about 0-based indexing of iterables.
I read some articles and discussions, some pros and cons of each of 0-base
and 1-base, concerns about slicing, etc. But ultimately the question that
got stuck in me and that I didn't find an answer to was:
Why can't both 0-base and 1-base indexing exist in the same language, and
why can't slicing be customized?
If I'm indexing the ruler marks, intervals, boundaries, dots, it makes sense
to start at 0; rul=[0,1,2,3,4,5,6] would index every mark on my ruler so
that accordingly rul[0]=0 and rul[5]=5.
If I'm indexing the blue circles, natural-number quantities, objects,
spans, it makes sense to start at 1; cir=[1,2,3,4,5] so that cir[1]=1 and
cir[5]=5.
Now, a lot of the discussion was to do with slicing coupled with the
indexing and I don't totally understand why.
a ≤ x < b is not so intuitive when dealing with objects ("I want balls 1 up
to the one before 3"), so on one side you put the finger on what you
want and on the other on what you don't want. But this method does have
the neat property of producing neighbour selections that border perfectly,
as in [:a][a:b][b:c]. In inverse order (-1), though, the results can be
unexpected, as it returns values off by one from its counterpart:
L=[0,1,2,3,4,5] so that L[1:3]=[1,2] and L[3:1:-1]=[3,2]. So it's
consistent with the rule a ≤ x < b, grabbing the lower-limit item, but it
can feel strange by not producing the same selection in inverse order.
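For reference, Python's actual behaviour on these examples can be checked directly; a minimal sketch:

```python
# Python's half-open rule (a <= x < b), and the reversed-slice asymmetry,
# on a concrete list:
L = [0, 1, 2, 3, 4, 5]

assert L[1:3] == [1, 2]      # stop index excluded
assert L[3:1:-1] == [3, 2]   # reversed: yields 3 and 2, not 1

# The "perfectly bordering" property of half-open slices:
assert L[:2] + L[2:4] + L[4:] == L
```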
a ≤ x ≤ b is a natural way to select objects ("I want balls 1 up to 3"),
so you're putting the finger on the things you want. If you inverse the
order (-1) it's still very easy to grasp what you are picking, because
whatever you select is included: L=[0,1,2,3,4,5] so that
L[1:3]=[1,2,3] and L[3:1:-1]=[3,2,1]. Problems seem to arise, though, when
trying to do neighbour selections, where one would have to do
[:a][a+1:b][b+1:c] to have the borders align perfectly. Is that so terrible?
Even though one could see a ≤ x < b as better suited to 0-base, and a ≤ x ≤
b as better suited to 1-base, the way I see it, index base and slicing
rules could be somewhat independent. One would index and slice the way
that fits the rationale of the problem or the data, because even slicing
a 1-base indexed array with a ≤ x < b would still produce an expected
outcome, as in cir=[1,2,3,4,5] so that cir[1:3]=[1,2] or cir[:3]=[1,2].
The same goes for applying a ≤ x ≤ b to a 0-base indexed array, as in
rul=[0,1,2,3,4,5] so that rul[:2]=[0,1,2] or rul[0:2]=[0,1,2].
Given that Python is an example of a human-friendly language emphasizing
readability, wouldn't having both 0- and 1-base indexing and customizable
slicing methods improve the thought process when writing and reading code,
by fitting them better to specific contexts or data?
Is there some language that provides both, or does each language pick only one?
Alpiarça dos Santos, Animator / 3D Modeler / Illustrator
How to document functions with optional positional parameters?
For example binascii.crc32(). It has two positional parameters: one
mandatory, and one optional with default value 0. With Argument Clinic its
signature is crc32(data, crc=0, /). In the documentation it is written as
crc32(data[, crc]) (with a note that the default value of the second
parameter is 0). Neither is valid Python syntax. Can the documentation be
changed to crc32(data, crc=0)?
There is precedent for such changes:
https://bugs.python.org/issue21488 (changed encode(obj,
encoding='ascii', errors='strict') to encode(obj, [encoding[, errors]]))
https://bugs.python.org/issue22832 (changed ioctl(fd, op[, arg[,
mutate_flag]]) to ioctl(fd, request, arg=0, mutate_flag=True))
https://bugs.python.org/issue22341 (discussed changing crc32(data[,
crc]) to crc32(data, crc=0))
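Whatever spelling the docs settle on, the behaviour of the default itself is easy to verify; a quick sketch relying only on documented binascii semantics:

```python
import binascii

# Passing the documented default (crc=0) explicitly matches omitting it:
data = b'hello world'
assert binascii.crc32(data) == binascii.crc32(data, 0)

# CRC-32 of empty input with the default initial value is 0:
assert binascii.crc32(b'') == 0
```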