
I’m not knowledgeable about GPUs, but from limited conversations with others, it seems important to first decide what exactly the problem area is. Unlike the currently available environment markers, there is no very reliable way to programmatically determine even whether a GPU is present, let alone what that GPU can actually do (not every GPU can be used by TensorFlow, for example). IMO a good first step would be to implement an interface for GPU environment detection in Python. That interface could then be used by projects like tensorflow-auto-detect, and projects like TensorFlow could use it directly to decide which implementation to install, much as many projects do platform-specific things by checking os.name or sys.platform. Once we’re sure we have everything needed for detection, markers can be drafted based on the detection interface.

TP
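P.S. To make the “detection interface” idea concrete, here is a minimal sketch, assuming a ctypes-based probe for a loadable CUDA runtime. The function name and the probing strategy are purely illustrative, not a proposed API:

    import ctypes
    import ctypes.util

    def has_cuda_runtime():
        """Best-effort check: can a CUDA runtime library be loaded here?"""
        libname = ctypes.util.find_library("cudart")
        if libname is None:
            return False
        try:
            ctypes.CDLL(libname)
        except OSError:
            return False
        return True

    if __name__ == "__main__":
        # Analogous to how platform-specific code checks os.name / sys.platform.
        print("CUDA runtime found" if has_cuda_runtime() else "no CUDA runtime found")

A real interface would also need to report what the GPU can do (vendor, driver version, compute capability), since mere presence is not enough for projects like TensorFlow.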
On 01/9/2018, at 03:57, Dustin Ingram <di@python.org> wrote:
Hi all, trying to pull together a few separate discussions into a single thread here.
The main issue is that currently PEP 508 does not provide environment markers for GPU/CUDA availability, which leads to problems for projects that want to provide distributions for environments with and without GPU support.
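For context, the environment markers that exist today can be written and evaluated as below (using the third-party packaging library); the commented-out line shows the kind of marker being asked for, where "cuda_version" is a hypothetical name that the current marker grammar does not know about:

    # Existing PEP 508 markers, evaluated against the running interpreter.
    from packaging.markers import Marker

    print(Marker('sys_platform == "linux"').evaluate())
    print(Marker('python_version >= "3.5"').evaluate())

    # The kind of marker being asked for; "cuda_version" is hypothetical and
    # not part of the current marker grammar:
    # Marker('cuda_version >= "9.0"')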
As far as I can tell, there have been multiple suggestions to bring this issue to distutils-sig, but no one has actually done it.
Relevant issues:
(closed) "How should Python packages depending on TensorFlow structure their requirements?" https://github.com/tensorflow/tensorflow/issues/7166
(closed) "Adding gpu or cuda specification in PEP 508" https://github.com/python/peps/issues/581
(closed) "More support for conditional installation" https://github.com/pypa/pipenv/issues/1353
(no response) "Adding gpu or cuda markers in PEP 508" https://github.com/pypa/interoperability-peps/issues/68
There is now a third-party project which attempts to address this for tensorflow (https://github.com/akatrevorjay/tensorflow-auto-detect), but the approach is somewhat fragile (it depends on version numbers being kept in sync), doesn't directly scale to all similar projects, and would require maintainers of a given project to maintain _three_ separate projects instead of just one.
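For illustration, the general shape of such an install-time shim is roughly the following sketch (not the actual tensorflow-auto-detect code; the package name and the GPU probe are placeholders). It also shows where the version-sync fragility comes from: the pinned version has to be bumped by hand for every upstream release.

    # Sketch of the general approach only -- not the real project's code.
    import os
    from setuptools import setup

    TF_VERSION = "1.10.0"  # must be kept in sync with upstream releases by hand

    def gpu_is_available():
        # Placeholder probe; a real shim would check for a usable CUDA
        # runtime/driver rather than an environment variable.
        return bool(os.environ.get("CUDA_VISIBLE_DEVICES"))

    backend = "tensorflow-gpu" if gpu_is_available() else "tensorflow"

    setup(
        name="tensorflow-auto-detect-sketch",  # hypothetical package name
        version=TF_VERSION,
        install_requires=["{}=={}".format(backend, TF_VERSION)],
    )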
I'm not intimately familiar with PEP 508, so my questions for this list:
* Is the demand sufficient to justify supporting this use case?
* Is it possible to add support for GPU environment markers?
* If so, what would need to be done?
* If implemented, what should the transition look like for projects like tensorflow?
Thanks!
D.