This is my first time on Python-dev, so I apologize for my newbie-ness.
I have been doing some performance experiments with memcmp, and I was
surprised at how slow memcmp was as used by Python. I did a long
analysis and came up with some very simple results.
Before I put in a tracker bug report, I wanted to present my findings
and make sure they were repeatable to others (isn't that the nature
of science? ;) as well as offer discussion.
The analysis is a pdf and is here:
The testcases are a tarball here:
I have three basic recommendations in the study: I am
curious what other people think.
> I have been doing some performance experiments with memcmp, and I was
> surprised at how slow memcmp was as used by Python. I did a long
> analysis and came up with some very simple results.
Paul Svensson suggested I post as much as I can as text, as people would be more likely to read it.
So, here's the basic ideas:
(1) memcmp is surprisingly slow on some Intel gcc platforms (Linux)
On several Linux, Intel platforms, memcmp was 2-3x slower than
a simple, portable C function (with some optimizations)
(2) The problem: if you compile a C program with gcc with any optimization on,
gcc replaces every memcmp call with an inline assembly stub, rep cmpsb,
instead of an actual call to memcmp.
(3) rep cmpsb seems like it would be faster, but it really isn't:
this completely bypasses the memcmp.S, memcmp_sse3.S
and memcmp_sse4.S in glibc which are typically faster.
(4) The basic conclusion is that the Python baseline on
Intel gcc platforms should probably be compiled with -fno-builtin-memcmp
so we "avoid" gcc's memcmp optimization.
The numbers are all in the paper; I will try to generate a text form
of all the tables so it's easier to read. This is my first foray into the
Python dev arena, so I went a little overboard with my paper. ;)
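For anyone who wants a quick repeatability check from the Python side, a tiny microbenchmark along these lines exercises the same code path, since CPython's comparison of equal-length strings bottoms out in memcmp (the buffer size and repeat count here are my own choices, not from the paper):

```python
import timeit

# Two distinct but equal 1 MB strings: an equality check has to scan the
# whole buffer, so the timing is dominated by memcmp.
setup = "a = 'x' * (1 << 20); b = 'x' * (1 << 20)"
per_run = timeit.timeit("a == b", setup=setup, number=100) / 100
print("avg time to compare 1 MB equal strings: %.6f s" % per_run)
```

Comparing the numbers from a CPython built with and without -fno-builtin-memcmp should show the effect described above.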
> Before I put in a tracker bug report, I wanted to present my findings
> and make sure they were repeatable to others (isn't that the nature
> of science? ;) as well as offer discussion.
> The analysis is a pdf and is here:
> The testcases are a tarball here:
> I have three basic recommendations in the study: I am
> curious what other people think.
So I'm trying to generate dynamic choices for a Django form. Here I'm using the formset concept (code below).
Suppose I have a list called criteria_list = ['education', 'know how', 'managerial', 'interpersonal', ]
Now I need to generate choices as follows:
list1 = [('education', 1), ('education', 2), ('education', 3), ('education', 4), ('know how', 1), ('know how', 2), ('know how', 3), ('know how', 4)]
list2 = [('education', 1), ('education', 2), ('education', 3), ('education', 4), ('managerial', 1), ('managerial', 2), ('managerial', 3), ('managerial', 4)]
list3 = [('education', 1), ('education', 2), ('education', 3), ('education', 4), ('interpersonal', 1), ('interpersonal', 2), ('interpersonal', 3), ('interpersonal', 4)]
list4 = [('know how', 1), ('know how', 2), ('know how', 3), ('know how', 4), ('managerial', 1), ('managerial', 2), ('managerial', 3), ('managerial', 4)]
list5 = [('know how', 1), ('know how', 2), ('know how', 3), ('know how', 4), ('interpersonal', 1), ('interpersonal', 2), ('interpersonal', 3), ('interpersonal', 4)]
list6 = [('managerial', 1), ('managerial', 2), ('managerial', 3), ('managerial', 4), ('interpersonal', 1), ('interpersonal', 2), ('interpersonal', 3), ('interpersonal', 4)]
How can I achieve this in Python?
Each of the lists above becomes the choices for one form.
Suppose I have a formset of 6 forms. How can I assign the dynamically generated lists above to the choice field of each form?
I tried the following code, but no luck:
evaluation_formset = formset_factory(EvaluationForm, formset=BaseEvaluationFormSet, extra=6)
formset = evaluation_formset(request.POST)
## validation and save
formset = evaluation_formset()
value = forms.ChoiceField(widget=forms.RadioSelect(renderer=HorizontalRadioRenderer))

def __init__(self, *args, **kwargs):
    super(BaseEvaluationFormSet, self).__init__(*args, **kwargs)
    for form_index, form in enumerate(self.forms):
        form.fields["value"].choices = self.choice_method(form_index)

def choice_method(self, form_index):
    list = []
    item_list = []
    criteria_list = []
    criteria_length = len(sub_criterias) - 1
    for criteria_index in range(criteria_length):
        counter = 1
        if criteria_index == form_index:
            for j in range(criteria_length - counter):
                x = 1
                for i in range(6):
                    item_list.append((sub_criterias[criteria_index + 1], sub_criterias[criteria_index + 1]))
                    list = criteria_list + item_list
                counter = counter + 1
                if x != criteria_length:
                    x = x + 1
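For what it's worth, the six lists above are exactly the 2-element combinations of criteria_list, each criterion expanded to ratings 1-4, so itertools can build them directly. A sketch (choices_for_form is a hypothetical helper name, not Django API, and it assumes the combinations ordering shown above is what's wanted):

```python
from itertools import combinations

criteria_list = ['education', 'know how', 'managerial', 'interpersonal']

def choices_for_form(form_index, criteria=criteria_list, ratings=4):
    # Form N gets the Nth pair of criteria; combinations() yields the
    # pairs in the same order as list1..list6 above.
    pair = list(combinations(criteria, 2))[form_index]
    # Expand each criterion of the pair to ratings 1..ratings.
    return [(criterion, rating)
            for criterion in pair
            for rating in range(1, ratings + 1)]
```

In the formset's __init__ you could then do form.fields["value"].choices = choices_for_form(form_index) for each enumerated form, instead of the hand-rolled counter loops.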
Does everybody feel comfortable with the 'stage' and 'resolution' fields in the tracker?
I understand that 'stage' defines workflow and 'resolution' is a status
indicator, but the question is: do we really need to keep them separate?
For example, right now when a ticket's 'status' is closed (yes,
there is also a 'status' field), we mark 'stage' as
'committed/rejected'. I see 'stage' as a workflow state, and the
'committed/rejected' value is confusing, because further steps
actually depend on whether the state is 'committed' or 'rejected'.
stage: patch review -> committed/rejected
When I see that a patch was rejected, I need to analyse why and propose a
better one. To analyse that, I need to look at the 'resolution' field:
out of date
works for me
The resolution will likely be 'fixed', which doesn't give any info
about whether the patch was actually committed or not. You need to know
that there is a 'rejected' status, so if the status is not 'rejected',
then the patch was likely committed. Note that resolution is also a
state, but for a closed issue.
Let me remind the values for the state of opened issue (recorded in a
There is a clear duplication in stage:'committed/rejected',
resolution:'fixed,rejected' and status:'closed'. Now `status` can be
For me, the only things in `status` that matter are open and closed.
Everything else is a more descriptive 'state' of the issue. So I'd merge
all our descriptive fields into a single 'state' field that accepts
the following values depending on the master 'status':
out of date
works for me
Renamed 'test needed' -> 'needs test'. For a workflow, states like
'later', 'postponed' and 'remind' are too vague, so I removed them.
These are better suited to user tags (custom keywords) like 'easy' etc.
Implementing this change will
1. define clear workflow to pave the road for automation and future
enhancements (commit/review queue, etc.)
2. de-clutter tracker UI to free space for more useful components
3. reduce categorization overhead
Do we have buildbots with the rpm programs installed? There is a patch
I want to commit to fix a bug in distutils’ bdist_rpm; it was tested by
the patch author, but I cannot verify it on my machine, so I would feel
safer if our buildbot fleet covered that.
> diff --git a/Misc/NEWS b/Misc/NEWS
> --- a/Misc/NEWS
> +++ b/Misc/NEWS
> @@ -54,6 +54,9 @@
> the following case: sys.stdin.read() stopped with CTRL+d (end of file),
> raw_input() interrupted by CTRL+c.
> +- Issue #10860: httplib now correctly handles an empty port after port
> + delimiter in URLs.
> - dict_proxy objects now display their contents rather than just the class
Looks like your entry went into the Interpreter Core section instead of
BTW, I don’t understand “3.x version will come as a separate patch” in
your commit message; isn’t that the case for all patches? They’re
changesets with no relationship at all from Mercurial’s viewpoint, and
often their contents are also different.
Can we turn off, by default, the timeit module's disabling of the cyclic GC?
Posts on pypy-dev or in pypy bugs often use the timeit module in ways
where this changes the performance significantly. A good example is
JSON benchmarks: you would rather not disable the cyclic GC when running
a web app, so benchmarking JSON encoding/decoding with the cyclic GC
disabled does not make sense.
What do you think?
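For reference, the workaround today is the one the timeit docs mention: put gc.enable() in the setup string, which runs after timeit's own gc.disable(), so the timed code runs with the collector on, as it would in a real application. A minimal sketch (the statement being timed is just an arbitrary example of mine):

```python
import timeit

# timeit disables the cyclic GC around the timed statement by default;
# gc.enable() in the setup string runs after that disable, turning the
# collector back on for the measurement itself.
t = timeit.timeit(
    "json.dumps([{'n': i} for i in range(100)])",
    setup="import gc, json; gc.enable()",
    number=1000,
)
print("with GC enabled: %.4f s" % t)
```

Comparing this against the same call without gc.enable() shows how much the default affects GC-sensitive workloads like JSON encoding.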
The title of the "Global Module Index" for 3.2 documentation is "Python
See the report below (attached screenshot removed).
All the best,
-------- Original Message --------
Subject: Issue with the link to python modules documentation
Date: Sun, 16 Oct 2011 22:44:52 +0200
From: Carl Chenet <chaica(a)ohmytux.com>
Browsing http://www.python.org/doc/ , I click on the link to the Python 3.x
Module Index, which points to http://docs.python.org/3.2/modindex.html, and I'm
redirected to http://docs.python.org/py3k/modindex.html, the Python
module list documentation, but for version 3.1.3.
I'm using Chromium 12. I tried several times and cleared the cache
before retrying, but the issue remains.
I'm attaching a screenshot showing the final page with the URL
http://docs.python.org/py3k/modindex.html, which should be the Python
module list for the current 3.x version, which I guess is 3.2.
I see that the Packaging documentation is now more complete (at least
at docs.python.org) - I don't know if it's deemed fully complete yet,
but I scanned the documentation and "Installing Python Projects" looks
pretty much converted (and very good!!), but "Distributing Python
Projects" still has quite a lot of distutils-related text in, and I
need to read more deeply to understand if that's because it remains
unchanged, or if it is still to be updated.
But one thing struck me: the "Installing Python Projects" document
talks about source distributions, but not much about binary distributions.
On Windows, binary distributions are significantly more important than
on Unix, because not all users have easy access to a compiler, and
more importantly, C library dependencies can be difficult to build,
hard to set up, and generally a pain to deal with. The traditional
solution was always bdist_wininst installers, and with the advent of
setuptools binary eggs started to become common. I've noticed that
since pip became more widely used, with its focus on source builds,
binary eggs seemed to fade away somewhat. I don't know what format
The problem when Python 3.3 comes out is that bdist_wininst/bdist_msi
installers do not interact well with pysetup. And if native virtual
environment support becomes part of Python 3.3, they won't work well
there either (they don't deal well with today's virtualenv, for that
matter). So there will be a need for a pysetup-friendly binary format.
I assume that the egg format will fill this role - or is that not the
case? What is the current thinking on binary distribution formats for
The main reason I am asking is that I would like to write an article
(or maybe a series of articles) for Python Insider, introducing the
new packaging facilities from the point of view of an end user with
straightforward needs (whether a package user just looking to manage a
set of installed packages, or a module author who just wants to
publish his code in a form that satisfies as many people as possible).
What I'd hope to do is, as well as showing people all the nice things
they can expect to see in Python 3.3, to also start package authors
thinking about what they need to do to support their users under the
new system. If we get the message out early, and make people aware of
the benefits of the new end user tools, then I'm hoping more authors
will see the advantage of switching to the new format rather than just
sticking with bdist_xxx because "it's always worked".
I suspect I should (re-)join the distutils SIG and take this
discussion there. But honestly, I'm not sure I have the time - the
traffic was always fairly high, and the number of relevant posts for a
casual observer was quite low. So even if that's the right place to
go, some pointers to some "high spots" to get me up to speed on the
current state of affairs would help.