Introduce `start=1` argument to `math.factorial`

I suggest introducing a `start=1` argument to `math.factorial`, so the result would be (the C-optimized version of) `product(range(start, x+1), start=1)`. This'll be useful for combinatorial calculations.
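A minimal pure-Python sketch of the proposed semantics (the signature is hypothetical, and `math.prod` — which only arrived later, in Python 3.8 — stands in for the C-optimized `product` the proposal alludes to):

```python
import math

def factorial(x, start=1):
    """Hypothetical sketch of the proposed API: the product of the
    integers from start through x inclusive. With the default
    start=1 this is exactly math.factorial(x)."""
    return math.prod(range(start, x + 1))

# Matches the existing function for the default start:
assert factorial(5) == math.factorial(5) == 120
# With start=3 it computes 3 * 4 * 5:
assert factorial(5, start=3) == 60
```

Note that when `start > x` the range is empty and the result is the empty product, 1, which is the convention that makes the combinatorial identities work out.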

-1 on overloading math.factorial to compute something that isn't a factorial, but a falling factorial. Such a new function would be easy to add, though, if deemed useful:

    math.falling_factorial(x, n) = product(range(x - n + 1, x + 1))

and the similar function:

    math.rising_factorial(x, n) = product(range(x, x + n))

On Wed, Sep 17, 2014 at 7:02 PM, Ram Rachum <ram.rachum@gmail.com> wrote:
> I suggest introducing a `start=1` argument to `math.factorial`, so the result would be (the C-optimized version of) `product(range(start, x+1), start=1)`. This'll be useful for combinatorial calculations.
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
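Clint's two helpers transcribe directly into working code — a sketch using `math.prod` (Python 3.8+) as the `product` in his definitions:

```python
import math

def falling_factorial(x, n):
    # x * (x - 1) * ... * (x - n + 1), per the definition above
    return math.prod(range(x - n + 1, x + 1))

def rising_factorial(x, n):
    # x * (x + 1) * ... * (x + n - 1)
    return math.prod(range(x, x + n))

assert falling_factorial(5, 2) == 5 * 4 == 20
assert rising_factorial(5, 2) == 5 * 6 == 30
# The falling factorial of x taken x at a time is the plain factorial:
assert falling_factorial(5, 5) == math.factorial(5)
```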

On Wed, Sep 17, 2014 at 04:02:44PM -0700, Ram Rachum wrote:
> I suggest introducing a `start=1` argument to `math.factorial`, so the result would be (the C-optimized version of) `product(range(start, x+1), start=1)`. This'll be useful for combinatorial calculations.
Then it wouldn't be the factorial function any more. There are lots of functions which could be useful for combinatorial calculations, including !n and n!!. Do you think this particular one would be of broad enough interest that it deserves to be in the standard library? Do you know of any other programming languages which offer this "partial factorial" function in their standard library?
-- Steven
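For comparison, the two functions Steven names are themselves one-liners or short loops, using their standard definitions (nothing here is proposed in the thread): n!! is the product of every other integer counting down from n, and !n counts derangements.

```python
import math

def double_factorial(n):
    # n!! = n * (n - 2) * (n - 4) * ... down to 1 or 2
    return math.prod(range(n, 0, -2))

def subfactorial(n):
    # !n, the number of derangements of n items, via the
    # recurrence !n = n * !(n-1) + (-1)**n, with !0 == 1
    result = 1
    for k in range(1, n + 1):
        result = k * result + (-1) ** k
    return result

assert double_factorial(7) == 7 * 5 * 3 * 1 == 105
assert double_factorial(8) == 8 * 6 * 4 * 2 == 384
assert subfactorial(4) == 9  # 9 derangements of 4 items
```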

On 18 September 2014 14:13, Steven D'Aprano <steve@pearwood.info> wrote:
> On Wed, Sep 17, 2014 at 04:02:44PM -0700, Ram Rachum wrote:
>> I suggest introducing a `start=1` argument to `math.factorial`, so the result would be (the C-optimized version of) `product(range(start, x+1), start=1)`. This'll be useful for combinatorial calculations.
> Then it wouldn't be the factorial function any more.
> There are lots of functions which could be useful for combinatorial calculations, including !n and n!!. Do you think this particular one would be of broad enough interest that it deserves to be in the standard library?
> Do you know of any other programming languages which offer this "partial factorial" function in their standard library?
It's also worth noting that "pip install mpmath" will provide rising and falling factorials (http://mpmath.org/doc/current/functions/gamma.html#rising-and-falling-factor...) and a whole lot more. There's no need to add such complexity to the standard library.

However, now that CPython ships with pip by default, we may want to consider providing more explicit pointers to such "If you want more advanced functionality than the standard library provides" libraries. Yes, that may be contentious in the near term as folks argue over which "stdlib++" modules to recommend, but in some cases there are clear "next step beyond the standard library" category winners that are worth introducing to newcomers, rather than making them do their own research.

Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On 9/18/2014 2:15 AM, Nick Coghlan wrote:
> On 18 September 2014 14:13, Steven D'Aprano <steve@pearwood.info> wrote:
>> On Wed, Sep 17, 2014 at 04:02:44PM -0700, Ram Rachum wrote:
>>> I suggest introducing a `start=1` argument to `math.factorial`, so the result would be (the C-optimized version of) `product(range(start, x+1), start=1)`. This'll be useful for combinatorial calculations.
>> Then it wouldn't be the factorial function any more.
>> There are lots of functions which could be useful for combinatorial calculations, including !n and n!!. Do you think this particular one would be of broad enough interest that it deserves to be in the standard library?
>> Do you know of any other programming languages which offer this "partial factorial" function in their standard library?
> It's also worth noting that "pip install mpmath" will provide rising and falling factorials (http://mpmath.org/doc/current/functions/gamma.html#rising-and-falling-factor...) and a whole lot more. There's no need to add such complexity to the standard library.
> However, now that CPython ships with pip by default, we may want to consider providing more explicit pointers to such "If you want more advanced functionality than the standard library provides" libraries.
Having used pip install a few times, I have begun to regard pip-installable packages as almost being extensions of the stdlib. I think the main remaining problem is equally easy access to documentation as to the code. It would be nice, for instance, if /Doc, like /Lib, had a site-packages subdirectory with an index.html updated by pip with a link to either a package.html put in the directory or an external file, such as one at readthedocs.org. If there were something like this, I would add an item to Idle's help menu.
> Yes, that may be contentious in the near term as folks argue over which "stdlib++" modules to recommend, but in some cases there are clear "next step beyond the standard library" category winners that are worth introducing to newcomers, rather than making them do their own research.
Choosing 1 *or more* packages to list should not be more contentious than choosing just 1 package to add to the stdlib.
-- Terry Jan Reedy

On Thu, Sep 18, 2014 at 10:14:18AM -0400, Terry Reedy wrote:
> On 9/18/2014 2:15 AM, Nick Coghlan wrote:
>> However, now that CPython ships with pip by default, we may want to consider providing more explicit pointers to such "If you want more advanced functionality than the standard library provides" libraries.
> Having used pip install a few times, I have begun to regard pip-installable packages as almost being extensions of the stdlib.
Sounds great, but let's not get carried away. Remember that many people, for reasons of company policy, cannot easily, or at all, install unapproved software. Whether for good or bad reasons, they're still stuck with what is in the std lib and nothing else.
-- Steven

On Thu, Sep 18, 2014 at 4:45 PM, Steven D'Aprano <steve@pearwood.info> wrote:
> On Thu, Sep 18, 2014 at 10:14:18AM -0400, Terry Reedy wrote:
>> On 9/18/2014 2:15 AM, Nick Coghlan wrote:
>>> However, now that CPython ships with pip by default, we may want to consider providing more explicit pointers to such "If you want more advanced functionality than the standard library provides" libraries.
>> Having used pip install a few times, I have begun to regard pip-installable packages as almost being extensions of the stdlib.
> Sounds great, but let's not get carried away. Remember that many people, for reasons of company policy, cannot easily, or at all, install unapproved software. Whether for good or bad reasons, they're still stuck with what is in the std lib and nothing else.
Not just company policy -- it can be licensing issues. Or just general trust/paranoia -- installing packages from PyPI just because they look useful is not the most secure thing to do.

Another reason is sustainability -- I trust Python won't go unmaintained in a few years, and the few necessary breaking API changes will be well thought out and properly announced. For a PyPI project, there are no expectations. Even if it is well run (which would presumably be a requirement to land in a "stdlib++" list), you need to gauge an extra project's health, and keep up with an extra release note stream. I believe that's what Nick meant by "[doing] research".

Listing "stdlib++" projects would mean vouching for them, even if only implicitly. Indeed, let's not get too carried away.

On 18 September 2014 16:23, Petr Viktorin <encukou@gmail.com> wrote:
> Listing "stdlib++" projects would mean vouching for them, even if only implicitly. Indeed, let's not get too carried away.
Nevertheless, there is community knowledge "out there" on what constitutes best-of-breed packages. For example "everyone knows" that requests is the thing to use if you want to issue web requests. And equally, requests is "clearly" well-maintained and something you can rely on. Collecting that knowledge together somewhere so that people for whom the above is *not* self-evident could easily find it would be a worthwhile exercise.
Paul.

On Thu, 18 Sep 2014 16:37:23 +0100 Paul Moore <p.f.moore@gmail.com> wrote:
> On 18 September 2014 16:23, Petr Viktorin <encukou@gmail.com> wrote:
>> Listing "stdlib++" projects would mean vouching for them, even if only implicitly. Indeed, let's not get too carried away.
> Nevertheless, there is community knowledge "out there" on what constitutes best-of-breed packages. For example "everyone knows" that requests is the thing to use if you want to issue web requests.
Is it? That sounds like a caricatural statement. If I'm using Tornado, Twisted or asyncio, then requests is certainly not "the thing to use" to issue Web requests. And there are many cases where urlopen() is good enough, as well. Not to mention other contenders such as pycurl.
> Collecting that knowledge together somewhere so that people for whom the above is *not* self-evident could easily find it would be a worthwhile exercise.
If it's community knowledge, then surely that job can be done by the community. I don't think Python's official documentation is the right place to reify that knowledge.
Regards,
Antoine.

On Sep 18, 2014, at 8:37, Paul Moore <p.f.moore@gmail.com> wrote:
> On 18 September 2014 16:23, Petr Viktorin <encukou@gmail.com> wrote:
>> Listing "stdlib++" projects would mean vouching for them, even if only implicitly. Indeed, let's not get too carried away.
> Nevertheless, there is community knowledge "out there" on what constitutes best-of-breed packages. For example "everyone knows" that requests is the thing to use if you want to issue web requests.
Except in those cases that requests actually makes harder, like trying to send files over SOAP+MIME. Or when you can most easily explain what you want in terms of libcurl code or a curl command line. But as Terry pointed out earlier in the thread, one advantage of the "stdlib++" idea is that you don't have to pick one, you can pick one or more. If the urllib.request docs said that requests makes easy cases, and even many pretty complex ones, easy; urllib3 provides as much flexibility as possible in a stdlib-like interface for those rare cases that requests can't make easy; pycurl makes it easier to translate web requests from C programs or shell scripts; etc., then there's no problem.
> And equally, requests is "clearly" well-maintained and something you can rely on.
If Kenneth got hit by a bus, requests would be in more trouble than something in the stdlib would if Guido, or the module's maintainer, did. The risk isn't _high_--it's certainly never deterred me from using it, or even convincing managers in corporate settings that we should use it--but that doesn't mean it's not _higher than the stdlib_.
> Collecting that knowledge together somewhere so that people for whom the above is *not* self-evident could easily find it would be a worthwhile exercise.
Agreed.

On Sep 17, 2014, at 23:15, Nick Coghlan <ncoghlan@gmail.com> wrote:
> However, now that CPython ships with pip by default, we may want to consider providing more explicit pointers to such "If you want more advanced functionality than the standard library provides" libraries.
I love this idea, but there's one big potential problem, and one smaller one.

Many of the most popular and useful packages require C extensions. In itself, that doesn't have to be a problem; if you provide wheels for the official 3.4+ Win32, Win64, and Mac64 CPython builds, it can still be as simple as `pip install spam` for most users, including the ones with the least ability to figure it out for themselves. But what about packages that require third-party C libraries? Does lxml have to have wheels that statically link libxml2, or that download the DLLs at install time for Windows users, or some other such solution before it can be recommended? Many of the most popular packages fall into similar situations, but lxml may be the most obvious because many of its users don't think of it as a wrapper around libxml2, they just think of it as a better ElementTree (or even a thing that magically makes BeautifulSoup work better).

Also, is it acceptable to recommend packages whose C extension modules don't work, or don't work well, with PyPy?
> Yes, that may be contentious in the near term as folks argue over which "stdlib++" modules to recommend, but in some cases there are clear "next step beyond the standard library" category winners that are worth introducing to newcomers, rather than making them do their own research.
There are plenty of clear winners that are worth introducing to newcomers, but aren't the next step beyond a particular module. In fact, I think that's the case for _most_ of them. Other than pytz, dateutil, requests, urllib3, pycurl, and maybe more-itertools or blist and a couple of math libs, the main things people are going to want to find (not counting frameworks like Django or Scrapy or PySide) are things like NumPy, Pandas, Pillow, PyYAML, lxml, BeautifulSoup, PyWin32, PyObjC, pyparsing, paramiko, … and where do any of those go? Does this mean we have to add pages in the docs for things the stdlib doesn't do, just to provide external references? Or turn the chapter-header blurbs into real pages? Or reorganize the docs more dramatically? Or just leave out some of the most prominent and useful libraries on PyPI just because they don't fit anywhere, while mentioning others?

On Thu, Sep 18, 2014 at 11:10 AM, Andrew Barnert <abarnert@yahoo.com.dmarc.invalid> wrote:
> On Sep 18, 2014, at 8:37, Paul Moore <p.f.moore@gmail.com> wrote:
>> On 18 September 2014 16:23, Petr Viktorin <encukou@gmail.com> wrote:
>>> Listing "stdlib++" projects would mean vouching for them, even if only implicitly. Indeed, let's not get too carried away.
>> Nevertheless, there is community knowledge "out there" on what constitutes best-of-breed packages. For example "everyone knows" that requests is the thing to use if you want to issue web requests.
> Except in those cases that requests actually makes harder, like trying to send files over SOAP+MIME. Or when you can most easily explain what you want in terms of libcurl code or a curl command line.
> But as Terry pointed out earlier in the thread, one advantage of the "stdlib++" idea is that you don't have to pick one, you can pick one or more. If the urllib.request docs said that requests makes easy cases, and even many pretty complex ones, easy; urllib3 provides as much flexibility as possible in a stdlib-like interface for those rare cases that requests can't make easy; pycurl makes it easier to translate web requests from C programs or shell scripts; etc., then there's no problem.
>> And equally, requests is "clearly" well-maintained and something you can rely on.
> If Kenneth got hit by a bus, requests would be in more trouble than something in the stdlib would if Guido, or the module's maintainer, did. The risk isn't _high_--it's certainly never deterred me from using it, or even convincing managers in corporate settings that we should use it--but that doesn't mean it's not _higher than the stdlib_.
Seeing as Kenneth has two core-developers working on it, one of whom works on it as part of their contributions to OpenStack, I think your estimation of the bus number is too low. Granted either Cory or I need the ability to push to PyPI, but I think Richard and Donald know Cory and I well enough to trust us to maintain the package as we currently do.

On 09/18/2014 07:01 PM, Andrew Barnert wrote:
>> Yes, that may be contentious in the near term as folks argue over which "stdlib++" modules to recommend, but in some cases there are clear "next step beyond the standard library" category winners that are worth introducing to newcomers, rather than making them do their own research.
> There are plenty of clear winners that are worth introducing to newcomers, but aren't the next step beyond a particular module. In fact, I think that's the case for _most_ of them. Other than pytz, dateutil, requests, urllib3, pycurl, and maybe more-itertools or blist and a couple of math libs, the main things people are going to want to find (not counting frameworks like Django or Scrapy or PySide) are things like NumPy, Pandas, Pillow, PyYAML, lxml, BeautifulSoup, PyWin32, PyObjC, pyparsing, paramiko, … and where do any of those go?
> Does this mean we have to add pages in the docs for things the stdlib doesn't do, just to provide external references? Or turn the chapter-header blurbs into real pages? Or reorganize the docs more dramatically? Or just leave out some of the most prominent and useful libraries on PyPI just because they don't fit anywhere, while mentioning others?
I don't think the docs should generically recommend external packages, except for cases like "if you need this functionality that exists only in 3.5 and higher, use backports.foobar from PyPI". Sure, you basically can't go wrong with recommending the "big players" like Numpy, but usually they are well known anyway. Any smaller package could quickly become obsolete, and we're not exactly quick with updating outdated docs (that do not deal with a specific API item) anyway -- see e.g. the HOWTO documents.

I think that a list of "stdlib++" should be maintained by the greater community; after all, it is about stuff *not* prepared by the CPython team. It may be that a better categorization of PyPI is all we need (i.e. replace the Trove classifiers with something more prominent and more straightforward).

cheers,
Georg

On 9/18/2014 11:55 AM, Antoine Pitrou wrote:
> On Thu, 18 Sep 2014 16:37:23 +0100 Paul Moore <p.f.moore@gmail.com> wrote:
>> On 18 September 2014 16:23, Petr Viktorin <encukou@gmail.com> wrote:
>>> Listing "stdlib++" projects would mean vouching for them, even if only implicitly. Indeed, let's not get too carried away.
>> Nevertheless, there is community knowledge "out there" on what constitutes best-of-breed packages. For example "everyone knows" that requests is the thing to use if you want to issue web requests.
> Is it? That sounds like a caricatural statement. If I'm using Tornado, Twisted or asyncio, then requests is certainly not "the thing to use" to issue Web requests. And there are many cases where urlopen() is good enough, as well. Not to mention other contenders such as pycurl.
>> Collecting that knowledge together somewhere so that people for whom the above is *not* self-evident could easily find it would be a worthwhile exercise.
> If it's community knowledge, then surely that job can be done by the community. I don't think Python's official documentation is the right place to reify that knowledge.
In some cases, perhaps most, the official docs could simply point to community-maintained wiki pages. Web requests seems to be a topic with multiple alternatives, and to me a good candidate for a wiki page.
-- Terry Jan Reedy
participants (11)
- Andrew Barnert
- Antoine Pitrou
- Clint Hepner
- Georg Brandl
- Ian Cordasco
- Nick Coghlan
- Paul Moore
- Petr Viktorin
- Ram Rachum
- Steven D'Aprano
- Terry Reedy