On 15-04-15, Akira Li <4kir4.1i(a)gmail.com> wrote:
> Isaac Schwabacher <ischwabacher(a)wisc.edu> writes:
> > On 15-04-15, Akira Li <4kir4.1i(a)gmail.com> wrote:
> >> Isaac Schwabacher <ischwabacher(a)wisc.edu> writes:
> >> > ...
> >> >
> >> > I know that you can do datetime.now(tz), and you can do datetime(2013,
> >> > 11, 3, 1, 30, tzinfo=zoneinfo('America/Chicago')), but not being able
> >> > to add a time zone to an existing naive datetime is painful (and
> >> > strptime doesn't even let you pass in a time zone).
> >> `.now(tz)` is correct. `datetime(..., tzinfo=tz)` is wrong: if tz is a
> >> pytz timezone then you may get a wrong tzinfo (LMT), you should use
> >> `tz.localize(naive_dt, is_dst=False|True|None)` instead.
> > The whole point of this thread is to finalize PEP 431, which fixes the
> > problem for which `localize()` and `normalize()` are workarounds. When
> > this is done, `datetime(..., tzinfo=tz)` will be correct.
> > ijs
> The input time is ambiguous. Even if we assume PEP 431 is implemented in
> some form, your code is still missing isdst parameter (or the
> analog). PEP 431 won't fix it; it can't resolve the ambiguity by
> itself. Notice the is_dst parameter in the `tz.localize()` call (current
...yeah, I forgot to throw that in there. It was supposed to be there all along. Nothing to see here, move along.
> .now(tz) works even during end-of-DST transitions (current API) when the
> local time is ambiguous.
I know that. That's what I was complaining about: I was trying to explain that astimezone() would be inadequate even after the PEP was implemented, because it couldn't turn naive datetimes into aware ones, and people kept giving examples that started with aware datetimes generated by now(tz), which sidestepped the point I was trying to make. But it looks like astimezone() is going to grow an is_dst parameter, and everything will be OK.
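(A side note for anyone reading along later: what eventually landed is PEP 495's `fold` attribute rather than PEP 431's is_dst. Assuming a 3.9+ stdlib with zoneinfo, the ambiguous wall time from this thread can be pinned down like this:)

```python
# Sketch assuming Python 3.9+ (zoneinfo) and PEP 495, which superseded
# PEP 431: the repeated 2013-11-03 01:30 wall time in America/Chicago
# is disambiguated by `fold` instead of an is_dst flag.
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Chicago")
first = datetime(2013, 11, 3, 1, 30, tzinfo=tz, fold=0)   # first occurrence, CDT
second = datetime(2013, 11, 3, 1, 30, tzinfo=tz, fold=1)  # second occurrence, CST
print(first.utcoffset())   # -1 day, 19:00:00  (UTC-5)
print(second.utcoffset())  # -1 day, 18:00:00  (UTC-6)
```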
Recently, I stumbled upon the new 3.5 release and, in doing so, upon PEP 0492.
After careful consideration, and after reading many blog posts by various
coders, I would first like to thank Yury Selivanov and everybody else
who brought PEP 0492 to its final state. I then considered using it
within our projects; however, I still find some hazy items in PEP 0492. So I
would like to contribute my thoughts on this, in order to either improve
my understanding or even improve Python's async capability.
In order to do this, I need a clarification regarding the rationale
behind the async keyword. The PEP rationalizes its introduction with:
"If useful() [...] would become a regular python function, [...]
important() would be broken."
What bothers me is, why should important() be broken in that case?
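My reading of that passage, sketched as runnable code (the function names follow the PEP's example; the asyncio.run() driver is my own scaffolding): if useful() is refactored into a regular function, its return value is no longer awaitable, so the `await` inside important() raises.

```python
import asyncio

async def useful():
    return 42

async def important():
    return await useful()        # fine: useful() returns a coroutine

def useful_refactored():         # "would become a regular python function"
    return 42

async def important_broken():
    return await useful_refactored()  # 42 is not awaitable

print(asyncio.run(important()))  # 42
try:
    asyncio.run(important_broken())
except TypeError as exc:
    print("broken:", exc)
```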
Sven R. Kunze
I expect this has been asked before, but I can't find out much about it...
I'm trying to embed Python as a scripting language and I need to capture
the output of PyRun_String(), PyEval_EvalCode(), or whatever as a char *
(or wchar_t * or whatever) rather than have it go to stdout.
Python 3.3.2 under plain C, not C++
And, while I'm interrupting everyone's afternoon, another question: if
I pass Py_single_input to PyRun_String() or
Py_CompileString()/PyEval_EvalCode(), it accepts statements like "a=10"
and can then properly do stuff like "print(a)". If I use Py_eval_input
instead, I get error messages. In both cases, I'm using the same
global_dict and local_dict, if that makes any difference. What am I
doing wrong?
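The likely explanation, sketched with the Python-level equivalents (the C start tokens map onto compile() modes: Py_single_input ~ 'single', Py_file_input ~ 'exec', Py_eval_input ~ 'eval'): 'eval' accepts only expressions, and "a=10" is a statement, hence the errors.

```python
# 'single' (like Py_single_input) accepts statements...
ns = {}
exec(compile("a=10", "<embedded>", "single"), ns)
print(ns["a"])  # 10

# ...while 'eval' (like Py_eval_input) accepts only expressions.
try:
    compile("a=10", "<embedded>", "eval")
except SyntaxError as exc:
    print("eval rejects statements:", exc.msg)
```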
Firstly, a big sorry to all those whose unittest issues I haven't commented on.
Turns out I simply don't get mail from the issue tracker. :(
Who should I speak to to try and debug this?
In the interim, if you want me to look at an issue please ping me on
IRC (lifeless) or mail me directly.
Robert Collins <rbtcollins(a)hp.com>
HP Converged Cloud
On 28.06.15 08:03, raymond.hettinger wrote:
> changeset: 96697:637e197be547
> user: Raymond Hettinger <python(a)rcn.com>
> date: Sat Jun 27 22:03:35 2015 -0700
> Minor refactoring. Move reference count logic into function that adds entry.
This does not look correct: `key` should be increfed before calling
PyObject_RichCompareBool(), for the same reason as startkey.
A type slot for tp_as_async has already been added (which is good!) but we
do not currently seem to have protocol functions for awaitable types.
I would expect to find an Awaitable Protocol listed under Abstract Objects
Layer, with functions like PyAwait_Check, PyAwaitIter_Check, and
Specifically, it's currently difficult to test whether an object is awaitable
or an awaitable iterable, or to use such objects from the C API, without
relying on method testing/calling mechanisms.
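For comparison, the Python level already has such a check; a C-level PyAwait_Check (the name above is the poster's proposal, not an existing API) would presumably mirror inspect.isawaitable:

```python
import inspect
from collections.abc import Awaitable

async def coro():
    return 1

obj = coro()
print(inspect.isawaitable(obj))    # True
print(isinstance(obj, Awaitable))  # True
print(inspect.isawaitable(42))     # False
obj.close()  # silence the "coroutine was never awaited" warning
```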
As you may know, Steve Dower put significant effort into rewriting the
project files used by the Windows build as part of moving to VC14 as
the official compiler for Python 3.5. Compared to the project files
for 3.4 (and older), the new project files are smaller, cleaner,
simpler, more easily extensible, and in some cases quite a bit more
I'd like to backport those new project files to 2.7, and Intel is
willing to fund that work as part of making Python ICC compilable on
all platforms they support since it makes building Python 2.7 with ICC
much easier. I have no intention of changing the version of MSVC used
for official 2.7 builds, it *will* remain at MSVC 9.0 (VS 2008) as
determined the last time we had a thread about it. VS 2010 and newer
can access older compilers (back to 2008) as a 'PlatformToolset' if
they're installed, so all we have to do is set the PlatformToolset in
the projects to 'v90'. Backporting the projects would make it easier
to build 2.7 with a newer compiler, but that's already possible if
somebody wants to put enough work into it, the default will be the old
compiler, and we can emit big scary warnings if somebody does use a
compiler other than v90.
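The toolset pinning described above would amount to something like this in each backported .vcxproj (a sketch; real project files typically set this per-configuration):

```xml
<!-- Sketch: build with the VS2008 compiler from VS2010+ project files. -->
<PropertyGroup>
  <PlatformToolset>v90</PlatformToolset>
</PropertyGroup>
```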
With the stipulation that the officially supported compiler won't
change, I want to make sure there's no major opposition to replacing
the old project files in PCbuild. The old files would move to
PC\VS9.0, so they'll still be available and usable if necessary.
Using the backported project files to build 2.7 would require two
versions of Visual Studio to be installed; VS2010 (or newer) would be
required in addition to VS2008. All Windows core developers should
already have VS2010 for Python 3.4 (and/or VS2015 for 3.5) and I
expect that anyone else who cares enough to still have VS2008 probably
has (or can easily get) one of the free editions of VS 2010 or newer,
so I don't consider this to be a major issue.
The backported files could be added alongside the old files in
PCbuild, in a better-named 'NewPCbuild' directory, or in a
subdirectory of PC. I would rather replace the old project files in
PCbuild, though; I'd like for the backported files to be the
recommended way to build, complete with support from PCbuild/build.bat
which would make the new project files the default for the buildbots.
I have a work-in-progress branch with the backported files in PCbuild,
which you can find at
There are still a couple of bugs to work out (and a couple of unrelated changes
to PC/pyconfig.h), but almost everything works as expected.
Looking forward to hearing others' opinions,
When I call fork() inside a daemon thread, the main thread in the child
process has the "daemon" property set to True. This is very confusing,
since the program keeps running while the only thread is a daemon.
According to the docs, if all remaining threads are daemon threads, the
program should exit. Here is an example:
import os, threading

def parent(): ...  # body elided in the original post

def in_daemon_thread():
    if os.fork() == 0:  # child: forked from this daemon thread
        assert not threading.current_thread().daemon  # This shouldn't fail, but does
        t = threading.Thread(target=parent)  # second example: another thread in the child
        os._exit(0)

threading.Thread(target=in_daemon_thread, daemon=True).start()
Is it a bug in the CPython implementation?
Also, let's assume the second example: I have another non-daemon thread
in the child process and want to detect this situation. Does anybody
know a way to find such fake daemon threads that are really main
threads?
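One way to probe for this (assuming Python 3.4+, where threading.main_thread() exists): the main-thread object is identifiable through the API rather than through its daemon flag, so the inconsistency can at least be spotted.

```python
import threading

main = threading.main_thread()
# In a child forked from a daemon thread, `main.daemon` reportedly reads
# True even though `main` is the only surviving thread, which is exactly
# the confusing case described above.
print(main is threading.current_thread())  # True when run from the main thread
print(main.daemon)                         # False in a normal process
```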