I would like to know if NumPy accepts the addition of new distributions since the implementation of the Generator interface. If so, what are the criteria for a particular distribution to be accepted? The reason I'm asking is that I would like to propose adding the Polya-gamma distribution to NumPy, for the following reasons:
1) Polya-gamma random variables are commonly used as auxiliary variables during data augmentation in Bayesian sampling algorithms, which have widespread usage in statistics and, more recently, machine learning.
2) Since this distribution is mostly useful for random sampling, it seems appropriate to have it in NumPy and not in projects like SciPy.
3) The only Python/C++ implementation of the sampler available is licensed under GPLv3, which I believe limits copying into packages that choose to use a different license.
4) Numpy's random API makes adding the distribution painless.
I have done preliminary work on this by implementing the distribution sampler as described in [3]; see: https://github.com/numpy/numpy/compare/master...zoj613:polyagamma .
There is a more efficient sampling algorithm described in a later paper [4], but I chose not to start with that one until I know it is worth investing time in.
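For readers unfamiliar with the distribution, the idea behind such a sampler can be sketched in pure Python-NumPy using the weighted sum-of-gammas series representation of PG(b, z) from Polson et al. (2013). This is only an illustrative truncated-series version (the function name, truncation level, and structure are my own choices, not code from the linked branch, and truncation introduces a small bias):

```python
import numpy as np

def sample_pg_truncated(b, z, size=None, trunc=200, rng=None):
    # Truncated-series sampler for the Polya-Gamma distribution PG(b, z):
    #   PG(b, z) = (1 / (2*pi^2)) * sum_k g_k / ((k - 1/2)^2 + z^2 / (4*pi^2))
    # with g_k ~ Gamma(b, 1), k = 1, 2, ...  Truncating at `trunc` terms
    # gives an approximate (slightly downward-biased) sample.
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, trunc + 1)
    denom = (k - 0.5) ** 2 + z ** 2 / (4.0 * np.pi ** 2)
    shape = (trunc,) if size is None else (size, trunc)
    g = rng.standard_gamma(b, size=shape)       # the Gamma(b, 1) terms
    return (g / denom).sum(axis=-1) / (2.0 * np.pi ** 2)
```

Since E[PG(b, 0)] = b/4, the sample mean for b=1, z=0 should sit near 0.25, which is a quick sanity check on the series.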
I would appreciate your thoughts on this proposal.

Refs:
[1] https://github.com/scipy/scipy/issues/11009
[2] https://github.com/slinderman/pypolyagamma
[3] https://arxiv.org/pdf/1205.0310v1.pdf
[4] https://arxiv.org/pdf/1405.0506.pdf
Disclaimer - University of Cape Town This email is subject to UCT policies and email disclaimer published on our website at http://www.uct.ac.za/main/email-disclaimer or obtainable from +27 21 650 9111. If this email is not related to the business of UCT, it is sent by the sender in an individual capacity. Please report security incidents or abuse via https://csirt.uct.ac.za/page/report-an-incident.php.
Here is a long overdue update of the draft NEP about backwards
compatibility and deprecation policy:
- This is NEP 23:
- Link to the previous mailing list discussion:
It would be nice to get this NEP to Accepted status. Main changes are:
- Removed all examples that people objected to
- Removed all content regarding versioning
- Restructured sections, and added "Strategies related to deprecations"
(using suggestions by @njsmith and @shoyer).
- Added concrete examples of deprecations, and a more thorough description
of how to go about adding warnings incl. Sphinx directives, using
``stacklevel``, etc.
As always, feedback here or on the PR is very welcome!
In this NEP we describe NumPy's approach to backwards compatibility,
its deprecation and removal policy, and the trade-offs and decision
processes for individual cases where breaking backwards compatibility
is considered.
Motivation and Scope
NumPy has a very large user base. Those users rely on NumPy being stable,
and on the code they write that uses NumPy functionality to keep working.
NumPy is also actively maintained and improved -- and sometimes improvements
require, or are made much easier by, breaking backwards compatibility.
Finally, there are trade-offs in stability for existing users vs. avoiding
errors or having a better user experience for new users. These competing
needs often give rise to long debates and to delays in accepting or rejecting
contributions. This NEP tries to address that by providing a policy as well
as examples and rationales for when it is or isn't a good idea to break
backwards compatibility.
In scope for this NEP are:
- Principles of NumPy's approach to backwards compatibility.
- How to deprecate functionality, and when to remove already deprecated
functionality.
- Decision making process for deprecations and removals.
Out of scope are:
- Making concrete decisions about deprecations of particular functionality.
- NumPy's versioning scheme.
When considering proposed changes that are backwards incompatible, the
main principles the NumPy developers use when making a decision are:
1. Changes need to benefit users more than they harm them.
2. NumPy is widely used so breaking changes should by default be assumed to
be fairly harmful.
3. Decisions should be based on data and actual effects on users and downstream
packages rather than, e.g., appealing to the docs or to stylistic
reasons.
4. Silently getting a wrong answer is much worse than getting a loud error.
When assessing the costs of proposed changes, keep in mind that most users do
not read the mailing list, do not look at deprecation warnings, and sometimes
wait more than one or two years before upgrading from their old version. And
that NumPy has millions of users, so "no one will do or use this" is very
likely incorrect.
Benefits include improved functionality, usability and performance, as well as
lower maintenance cost and improved future extensibility.
Fixes for clear bugs are exempt from this backwards compatibility policy.
However, in case of serious impact on users (e.g. a downstream library would
no longer build, or would start giving incorrect results), even bug fixes may
need to be delayed for one or more releases.
Strategies related to deprecations
Getting hard data on the impact of a deprecation is often difficult. Strategies
that can be used to assess such impact include:
- Use a code search engine [1]_ or static [2]_ or dynamic [3]_ code
analysis tools to determine where and how the functionality is used.
- Testing prominent downstream libraries against a development build of NumPy
containing the proposed change to get real-world data on its impact.
- Making a change in master and reverting it, if needed, before a release. We
do encourage other packages to test against NumPy's master branch, so this
often turns up issues quickly.
If the impact is unclear or significant, it is often good to consider
alternatives to deprecations. For example, discouraging use in documentation
only, or moving the documentation for the functionality to a less prominent
place or even removing it completely. Commenting on open issues related to it
that they are low-priority or labeling them as "wontfix" will also be a signal
to users, and reduce the maintenance effort needing to be spent.
Implementing deprecations and removals
Deprecation warnings are necessary in all cases where functionality
will eventually be removed. If there is no intent to remove functionality,
then it should not be deprecated either. A "please don't use this" in the
documentation or other type of warning should be used instead.
- shall include the version number of the release in which the functionality
was deprecated.
- shall include information on alternatives to the deprecated functionality,
or a reason for the deprecation if no clear alternative is available.
- shall use ``VisibleDeprecationWarning`` rather than ``DeprecationWarning``
for cases of relevance to end users. For cases only relevant to
downstream libraries, a regular ``DeprecationWarning`` is fine.
*Rationale: regular deprecation warnings are invisible by default; library
authors should be aware how deprecations work and test for them, but we cannot
expect this from all users.*
- shall be listed in the release notes of the release where the deprecation
happened.
- shall set a ``stacklevel``, so the warning appears to come from the correct
place.
- shall be mentioned in the documentation for the functionality. A
``.. deprecated::`` directive can be used for this.
Examples of good deprecation warnings:
.. code-block:: python
    warnings.warn('np.asscalar(a) is deprecated since NumPy 1.16.0, use '
                  'a.item() instead', DeprecationWarning, stacklevel=3)

    warnings.warn("Importing from numpy.testing.utils is deprecated "
                  "since 1.15.0, import from numpy.testing instead.",
                  DeprecationWarning, stacklevel=2)

    # A change in NumPy 1.14.0 for Python 3 loadtxt/genfromtxt, slightly
    # tweaked in this NEP (original didn't have version number).
    warnings.warn(
        "Reading unicode strings without specifying the encoding "
        "argument is deprecated since NumPy 1.14.0. Set the encoding, "
        "use None for the system default.",
        np.VisibleDeprecationWarning, stacklevel=2)
Removal of deprecated functionality:
- shall be done after at least 2 releases (assuming the current 6-monthly
release cycle; if that changes, there shall be at least 1 year between
deprecation and removal).
- shall be listed in the release notes of the release where the removal
happened.
- can be done in any minor (but not bugfix) release.
For backwards incompatible changes that aren't "deprecate and remove" but for
which code will start behaving differently, a ``FutureWarning`` should be
used. Release notes, mentioning version number and using ``stacklevel`` shall
be done in the same way as for deprecation warnings. A ``.. versionchanged::``
directive can be used in the documentation to indicate when the behavior
changed:
.. code-block:: python
    def argsort(self, axis=np._NoValue, ...):
        """
        Parameters
        ----------
        axis : int, optional
            Axis along which to sort. If None, the default, the flattened
            array is used.

            .. versionchanged:: 1.13.0
                Previously, the default was documented to be -1, but that
                was in error. At some future date, the default will change
                to -1, as originally intended.
                Until then, the axis should be given explicitly when
                ``arr.ndim > 1``, to avoid a FutureWarning.
        """
        ...
        warnings.warn(
            "In the future the default for argsort will be axis=-1, not the "
            "current None, to match its documentation and np.argsort. "
            "Explicitly pass -1 or None to silence this warning.",
            MaskedArrayFutureWarning, stacklevel=3)
In concrete cases where this policy needs to be applied, decisions are made
according to the `NumPy governance model
<https://numpy.org/devdocs/dev/governance/index.html>`_.

All deprecations must be proposed on the mailing list, in order to give
everyone with an interest in NumPy development a chance to comment. Removal of
deprecated functionality does not need discussion on the mailing list.
Functionality with more strict deprecation policies
- ``numpy.random`` has its own backwards compatibility policy,
see `NEP 19 <http://www.numpy.org/neps/nep-0019-rng-policy.html>`_.
- The file format for ``.npy`` and ``.npz`` files must not be changed in a
backwards incompatible way.
We now discuss a few concrete examples from NumPy's history to illustrate
typical issues and trade-offs.
**Changing the behavior of a function**
``np.histogram`` is probably the most infamous example.
First, a new keyword ``new=False`` was introduced, which was then switched
over to ``None`` one release later, and finally the keyword was removed again.
Also, it has a ``normed`` keyword that had behavior that could be considered
either suboptimal or broken (depending on one's opinion on the statistics).
A new keyword ``density`` was introduced to replace it; ``normed`` started
emitting a ``DeprecationWarning`` only in v1.15.0. Evolution of ``histogram``::
    def histogram(a, bins=10, range=None, normed=False):  # v1.0.0

    def histogram(a, bins=10, range=None, normed=False, weights=None,
                  new=False):  # v1.1.0

    def histogram(a, bins=10, range=None, normed=False, weights=None,
                  new=None):  # v1.2.0

    def histogram(a, bins=10, range=None, normed=False, weights=None):  # v1.5.0

    def histogram(a, bins=10, range=None, normed=False, weights=None,
                  density=None):  # v1.6.0

    def histogram(a, bins=10, range=None, normed=None, weights=None,
                  density=None):  # v1.15.0
    # v1.15.0 was the first release where `normed` started emitting
    # DeprecationWarnings
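A small sketch of what the replacement keyword does, using only the modern API (the deprecated ``normed`` keyword is intentionally not used; the values and bin edges below are arbitrary illustration): with unequal bin widths, ``density=True`` produces a histogram that integrates to 1 over the range.

```python
import numpy as np

a = np.array([0.5, 1.5, 1.7, 4.0])
bins = [0, 1, 2, 5]                    # note the unequal bin widths
counts, edges = np.histogram(a, bins=bins)
dens, _ = np.histogram(a, bins=bins, density=True)
widths = np.diff(edges)

# density normalizes by total count *and* bin width, so the histogram
# integrates to 1 regardless of how the bins are spaced:
assert np.isclose((dens * widths).sum(), 1.0)
```

It was exactly this kind of unequal-bin normalization where the old ``normed`` behavior was considered suboptimal or broken.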
The ``new`` keyword was planned from the start to be temporary. Such a plan
forces users to change their code more than once, which is almost never the
right thing to do. Instead, a better approach here would have been to
deprecate ``histogram`` and introduce a new function ``hist`` in its place.
**Disallowing indexing with floats**
Indexing an array with floats is asking for something ambiguous, and can be a
sign of a bug in user code. After some discussion, it was deemed a good idea
to deprecate indexing with floats. This was first tried for the v1.8.0
release, however in pre-release testing it became clear that this would break
many libraries that depend on NumPy. Therefore it was reverted before the
release, to give those libraries time to fix their code first. It was finally
introduced for v1.11.0 and turned into a hard error for v1.12.0.
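A quick way to see the resulting behavior on a recent NumPy (the exact error message may differ between versions):

```python
import numpy as np

a = np.arange(10)
assert a[3] == 3          # integer indexing is unambiguous

# Indexing with a float was deprecated in v1.11.0 and has been a
# hard error (IndexError) since v1.12.0:
try:
    a[3.0]
    rejected = False
except IndexError:
    rejected = True
```

The loud error is deliberate: silently truncating ``3.0`` (or worse, ``3.5``) to an integer index would be the "silently getting a wrong answer" case this NEP warns about.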
This change was disruptive, however it did catch real bugs in, e.g., SciPy and
scikit-learn. Overall the change was worth the cost, and introducing it in
master first to allow testing, then removing it again before a release, is a
useful strategy.

Similar deprecations that also look like good examples of "deprecate and
remove":
- removing deprecated boolean indexing (in 2016, see `gh-8312
<https://github.com/numpy/numpy/pull/8312>`__)
- deprecating truth testing on empty arrays (in 2017, see `gh-9718
<https://github.com/numpy/numpy/pull/9718>`__)
**Removing the financial functions**
The financial functions (e.g. ``np.pmt``) had short non-descriptive names,
were present in the main NumPy namespace, and didn't really fit well within
NumPy's scope. They were added in 2008 after a discussion on the mailing list
where opinion was divided (but a majority in favor).
The financial functions didn't cause a lot of overhead, however there were
still multiple issues and PRs a year for them which cost maintainer time to
deal with. And they cluttered up the ``numpy`` namespace. Discussion on
removing them happened in 2013 (gh-2880, rejected) and then again in 2019
(:ref:`NEP32`, accepted without significant complaints).
Given that they were clearly outside of NumPy's scope, moving them to a
separate ``numpy-financial`` package and removing them from NumPy after a
deprecation period made sense.
**Being more aggressive with deprecations.**
The goal of being more aggressive is to allow NumPy to move forward faster.
This would avoid others inventing their own solutions (often in multiple
places), as well as be a benefit to users without a legacy code base. We
reject this alternative because of the place NumPy has in the scientific
Python ecosystem - being fairly conservative is required in order to not
increase the extra maintenance for downstream libraries and end users to an
unacceptable level.
- Mailing list discussion on the first version of this NEP in 2018
References and Footnotes
- Issue requesting semantic versioning
.. [1] https://searchcode.com/
.. [2] https://github.com/Quansight-Labs/python-api-inspect
.. [3] https://github.com/data-apis/python-record-api
I recently took a bit of time to study the comment "The ecological impact of high-performance computing in astrophysics" published in Nature Astronomy (Zwart, 2020, https://www.nature.com/articles/s41550-020-1208-y, https://arxiv.org/pdf/2009.11295.pdf), where it is stated that "Best however, for the environment is to abandon Python for a more environmentally friendly (compiled) programming language.".
I wrote a simple Python-Numpy implementation of the problem used for this study (https://www.nbabel.org) and, accelerated by Transonic-Pythran, it's very efficient. Here are some numbers (elapsed times in s, smaller is better):
| # particles | Py | C++ | Fortran | Julia |
| 1024 | 29 | 55 | 41 | 45 |
| 2048 | 123 | 231 | 166 | 173 |
The code and a modified figure are here: https://github.com/paugier/nbabel (There is no check on the results for https://www.nbabel.org, so one still has to be very careful.)
I think that the Numpy community should spend a bit of energy to show what can be done with the existing tools to get very high performance (and low CO2 production) with Python. This work could be the basis of a serious reply to the comment by Zwart (2020).
Unfortunately the Python solution in https://www.nbabel.org is very bad in terms of performance (and therefore CO2 production). It is also true for most of the Python solutions for the Computer Language Benchmarks Game in https://benchmarksgame-team.pages.debian.net/benchmarksgame/ (codes here https://salsa.debian.org/benchmarksgame-team/benchmarksgame#what-else).
We could try to fix this so that people see that in many cases, it is not necessary to "abandon Python for a more environmentally friendly (compiled) programming language". One of the longest and hardest tasks would be to implement the different cases of the Computer Language Benchmarks Game in standard and modern Python-Numpy. Then, optimizing and accelerating such code should be doable and we should be able to get very good performance at least for some cases. Good news for this project: (i) the first point can be done by anyone with good knowledge in Python-Numpy (many potential workers), (ii) for some cases, there are already good Python implementations, and (iii) the work can easily be parallelized.
It is not a criticism, but the (beautiful and very nice) new Numpy website https://numpy.org/ is not very convincing in terms of performance. It's written "Performant The core of NumPy is well-optimized C code. Enjoy the flexibility of Python with the speed of compiled code." It's true that the core of Numpy is well-optimized C code but to seriously compete with C++, Fortran or Julia in terms of numerical performance, one needs to use other tools to move the compiled-interpreted boundary outside the hot loops. So it could be reasonable to mention such tools (in particular Numba, Pythran, Cython and Transonic).
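As a toy illustration of the kind of rewrite involved (this is not the actual nbabel benchmark code, just a sketch of the pattern), here is the O(n²) pairwise-gravity kernel written both as plain Python loops and as a vectorized NumPy expression; tools like Pythran, Numba or Transonic can then compile either form to move the compiled-interpreted boundary outside the hot loop:

```python
import numpy as np

def accelerations_loops(pos, mass):
    # O(n^2) pairwise gravitational accelerations, plain Python loops.
    n = pos.shape[0]
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += mass[j] * d / np.linalg.norm(d) ** 3
    return acc

def accelerations_numpy(pos, mass):
    # Same computation, vectorized: broadcasting builds all pairwise vectors.
    d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # (n, n, 3)
    r = np.linalg.norm(d, axis=-1)                      # (n, n) distances
    np.fill_diagonal(r, 1.0)                            # avoid division by 0
    w = mass[np.newaxis, :] / r ** 3
    np.fill_diagonal(w, 0.0)                            # no self-interaction
    return (w[:, :, np.newaxis] * d).sum(axis=1)
```

Both functions compute the same accelerations; the vectorized one trades the interpreter-bound inner loop for array operations, at the cost of O(n²) temporary memory.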
Is there already something planned to answer Zwart (2020)?
Any opinions or suggestions on this potential project?
PS: Of course, alternative Python interpreters (PyPy, GraalPython, Pyjion, Pyston, etc.) could also be used, especially if HPy (https://github.com/hpyproject/hpy) is successful (C core of Numpy written in HPy, Cython able to produce HPy code, etc.). However, I tend to be a bit skeptical about the ability of such technologies to reach very high performance for low-level Numpy code (performance that can be reached by replacing whole Python functions with optimized compiled code). Of course, I hope I'm wrong! IMHO, it does not remove the need for a successful HPy!
Pierre Augier - CR CNRS http://www.legi.grenoble-inp.fr
LEGI (UMR 5519) Laboratoire des Ecoulements Geophysiques et Industriels
BP53, 38041 Grenoble Cedex, France tel:+184.108.40.206.86.16
I just opened https://github.com/numpy/numpy/pull/18084, "NumPy Sponsorship
Guidelines". Below are the most important parts for review (for Related
Work, References, etc. see the PR). Please bring up broader points here,
and small/textual feedback on the PR.
This NEP provides guidelines on how the NumPy project will acknowledge
financial and in-kind support.
Motivation and Scope
In the past few years the NumPy project has gotten significant financial
support, as well as dedicated work time for maintainers to work on NumPy.
There is a need to acknowledge that support - funders and organizations expect
or require it, it's helpful when looking for new funding, and it's the right
thing to do.
Furthermore, having a clear policy for how NumPy acknowledges support is
helpful when searching for new support.
This NEP is aimed at both the NumPy community - who can use it when looking
for support and acknowledging existing support - and at past, current and
prospective sponsors, who often want or need to know what they get in return
for their support (other than a healthier NumPy).
The scope of this proposal includes:
- direct financial support, employers providing paid time for NumPy maintainers
and regular contributors, and in-kind support such as free hardware or
services.
- where and how NumPy acknowledges support (e.g., logo placement on the
numpy.org website).
- the amount and duration of support which leads to acknowledgement.
- who in the NumPy project is responsible for sponsorship related topics, and
how to contact them.
How NumPy will acknowledge support
There will be two different ways to acknowledge financial and in-kind support:
one to recognize significant active support, and another one to recognize
support received in the past and smaller amounts of support.
Entities who fall under "significant active supporter" we'll call Sponsors.
The minimum levels of support given to NumPy to be considered a Sponsor are:
- $30,000/yr for unrestricted financial contributions
- $60,000/yr for financial contributions for a particular purpose
- $100,000/yr for in-kind contributions
The rationale for the above levels is that unrestricted financial
contributions are typically the most valuable for the project, and the hardest
to obtain. The opposite is true for in-kind contributions. The dollar values
of the levels also reflect that NumPy's needs have grown to the point where we
need at least a few paid developers in order to effectively support our user
base and continue to move the project forward. Financial support at or above
these levels is needed to be able to make a significant difference.
Sponsors will get acknowledged through:
- a small logo displayed on the front page of the NumPy website
- prominent logo placement on https://numpy.org/about/
- logos displayed in talks about NumPy by maintainers
- announcements of the sponsorship on the NumPy mailing list and the
numpy.org website
In addition to Sponsors, we already have the concept of Institutional Partner
(defined in NumPy's
`governance document <https://numpy.org/devdocs/dev/governance/index.html>`_),
for entities who employ a NumPy maintainer and let them work on NumPy as part
of their official duties. The governance document doesn't currently define a
minimum amount of paid maintainer time needed to be considered for
partnership. Therefore we propose that level here, roughly in line with the
sponsorship levels:
- 6 person-months/yr of paid work time for one or more NumPy maintainers or
regular contributors
Institutional Partners get the same benefits as Sponsors, in addition to what
is specified in the NumPy governance document.
Finally, a new page on the website (https://numpy.org/funding/, linked from the
About page) will be added to acknowledge all current and previous sponsors,
partners, and any other entities and individuals who provided $5,000 or more
of financial or in-kind support. This page will include relevant details of the
support (dates, amounts, names and purpose); no logos will be used on this
page. The rationale for the $5,000 minimum level is to keep the amount of work
maintaining the page reasonable; the level is the equivalent of, e.g., one
GSoC stipend or a person-week's worth of engineering time in a Western country,
which seems like a reasonable lower limit.
The following content changes need to be made:
- Add a section with small logos towards the bottom of the numpy.org website.
- Create a full list of historical and current support and deploy it to
https://numpy.org/funding/.
- Update the NumPy governance document for changes to Institutional Partner
eligibility requirements and benefits.
- Update https://numpy.org/about with details on how to get in touch with the
NumPy project about sponsorship related matters (see next section).
A NumPy Funding Team
At the moment NumPy has only one official body, the Steering Council, and no
good way to get in touch with either that body or any person or group
responsible for funding and sponsorship related matters. The way this is
typically done now is to somehow find the personal email of a maintainer, and
email them in private. There is a need to organize this more transparently - a
potential sponsor isn't likely to inquire through the mailing list, nor is it
easy for a potential sponsor to know if they're reaching out to the right
person in private.
https://numpy.org/about/ already says that NumPy has a "funding and grants"
team, however that is not the case. We propose to organize this team, name the
members on it, and add the names of those team members plus a dedicated email
address for the team to the About page.
Status before this proposal
Acknowledgement of support
At the time of writing (Dec 2020), the logos of the four largest financial
sponsors and two institutional partners are displayed on
https://numpy.org/about/. The Nature paper about NumPy mentions some early
funding. No comprehensive list of received funding and in-kind support is
published anywhere.
Decisions on which logos to list on the website have been made mostly by the
website team. Decisions on which entities to recognize as Institutional
Partners have been made by the NumPy Steering Council.
NumPy governance, decision-making and financial oversight
*This section is meant as context for the reader, to help put the rest of
NEP in perspective, and perhaps answer questions the reader has when reading
this as a potential sponsor.*
NumPy has a formal governance structure defined in its
`governance document <https://numpy.org/devdocs/dev/governance/index.html>`_.
Decisions are made by consensus among all active participants in a discussion
(typically on the mailing list), and if consensus cannot be reached then the
Steering Council takes the decision (also by consensus).
NumPy is a sponsored project of NumFOCUS, a US-based 501(c)3 nonprofit.
NumFOCUS administers NumPy funds, and ensures they are spent in accordance
with its mission and nonprofit status. In practice, NumPy has a NumFOCUS
subcommittee (with its members named in the NumPy governance document) who
authorize financial transactions. Those transactions, for example paying a
contractor for a particular activity or deliverable, are decided on by the
NumPy Steering Council.
*Tiered sponsorship levels.* We considered using tiered sponsorship levels,
but rejected this alternative because it would be more complex, and would not
communicate the right intent - the minimum levels are for us to determine how
to acknowledge support that we receive, not a commercial value proposition.
Entities typically will support NumPy because they rely on the project or want
to help advance it, and not to get brand awareness through logo placement.
*Listing all donations.* Note that in the past we have received many smaller
donations, mostly from individuals through NumFOCUS. It would be great to list
all of those contributions, but given the way we receive information on
donations right now, that would be quite labor-intensive. If we manage to move
to a more suitable platform, such as Open Collective, in the future, we should
reconsider listing all individual donations.
I'm still learning proper mailing list etiquette so I'm not sure if this is
where I should respond.
But users just getting into debugging might also benefit from knowing this:
You can turn off optimizations when compiling NumPy by passing CFLAGS to
setup.py like so:

CFLAGS="-O0 -g3" python setup.py build_ext -i

(assuming you have the source code and setup.py available). This disables
compiler optimizations and includes debug symbols, which makes it easier to
inspect more variables.
That took me a long time to figure out, so I wanted to share the knowledge.
On Mon, Dec 28, 2020 at 10:38 PM <numpy-discussion-request(a)python.org>
> Today's Topics:
> 1. Re: Addition of new distributions: Polya-gamma (Robert Kern)
> 2. Help needed GDB (Amardeep Singh)
> 3. ANN: NumExpr 2.7.2 (Robert McLeod)
> Message: 1
> Date: Mon, 28 Dec 2020 13:06:33 -0500
> From: Robert Kern <robert.kern(a)gmail.com>
> To: Discussion of Numerical Python <numpy-discussion(a)python.org>
> Subject: Re: [Numpy-discussion] Addition of new distributions:
> Content-Type: text/plain; charset="utf-8"
> My view is that we will not add more non-uniform distribution (i.e. "named"
> statistical probability distributions like Polya-Gamma) methods to
> `Generator`. I think that we might add a couple more methods to handle some
> more fundamental issues (like sampling from the unit interval with control
> over whether each boundary is open or closed, maybe one more variation on
> shuffling) that helps write randomized algorithms. Now that we have the C
> and Cython APIs which allow one to implement non-uniform distributions in
> other packages, we strongly encourage that.
> As I commented on the linked PR, `scipy.stats` would be a reasonable place
> for a Polya-Gamma sampling function, even if it's not feasible to implement
> an `rv_continuous` class for it. You have convinced me that the nature of
> the Polya-Gamma distribution warrants this. The only issue is that scipy
> still depends on a pre-`Generator` version of numpy. So I recommend
> implementing this function in your own package with an eye towards
> contributing it to scipy later.
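To make the "implement it in your own package" route concrete, here is a minimal sketch of the pattern being described: a third-party sampler written against the `Generator` API. The function and distribution here are simple placeholders (an inverse-CDF exponential sampler), not Polya-Gamma; for speed one would use the C/Cython bit-generator APIs instead of Python-level calls:

```python
import numpy as np

def sample_exponential(rng: np.random.Generator, size):
    # Inverse-CDF transform built on Generator's uniform stream. The same
    # pattern lets external packages ship "named" distributions without
    # any changes to NumPy itself.
    u = rng.random(size)       # uniform(0, 1) draws from the Generator
    return -np.log1p(-u)       # Exponential(1) variates

rng = np.random.default_rng(42)
x = sample_exponential(rng, 100_000)
```

Because the sampler only consumes a `Generator`, users can pass in any bit generator they like, and the package stays independent of NumPy's release cycle.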
> On Sun, Dec 27, 2020 at 6:05 AM Zolisa Bleki <BLKZOL001(a)myuct.ac.za> wrote:
> > [...]
> > Refs:
> > [1] https://github.com/scipy/scipy/issues/11009
> > [2] https://github.com/slinderman/pypolyagamma
> > [3] https://arxiv.org/pdf/1205.0310v1.pdf
> > [4] https://arxiv.org/pdf/1405.0506.pdf
> Robert Kern
I am trying to debug the C code of NumPy via gdb. Can someone help me with
this? I am getting "Python scripting is not supported in this copy of GDB".
How do I install a Python-supported gdb on Windows 10? I am following the
steps in the docs; the machine is Windows 10.
Another frequently asked question is "How do I debug C code inside NumPy?".
First, ensure that you have gdb installed on your system with the Python
extensions (often the default on Linux). You can see which version of Python
is running inside gdb to verify your setup; the docs show a sample
sys.version_info printout ending in "micro=0, releaselevel='final',
serial=0)". However, this is what I get:

$ gdb -v
GNU gdb (GDB) 7.6.1
This GDB was configured as "mingw32".
(gdb) Python scripting is not supported in this copy of GDB.