Re: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python
Date: Wed, 31 Aug 2016 13:28:21 +0200
From: Michael Bieri
I'm not quite sure which approach is state-of-the-art as of 2016. How would you do it if you had to make a C/C++ library available in Python right now?

In my case, I have a C library with some scientific functions on matrices and vectors. You will typically call a few functions to configure the computation, then hand over some pointers to existing buffers containing vector data, then start the computation, and finally read back the data. The library can also use MPI to parallelize.
Depending on how minimal and universal you want to keep things, I use the ctypes approach quite often, i.e. treat your numpy inputs and outputs as arrays of doubles etc. using the ndpointer(...) syntax. I find it works well if you have a small number of well-defined functions (not too many options) which are numerically very heavy. With this approach I usually wrap each method in Python to check the inputs for contiguity, pass in the sizes etc. and allocate the numpy array for the result.

Peter
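As a hedged illustration of the pattern Peter describes (ndpointer type declarations plus a thin Python wrapper), here is a minimal sketch that uses libc's memcpy as a stand-in for a function from your own library; with a real library you would load it with ctypes.CDLL("./libmylib.so") (a hypothetical name) and declare its actual signature:

```python
import ctypes
import ctypes.util

import numpy as np
from numpy.ctypeslib import ndpointer

# libc's memcpy stands in for a numerically heavy function exported by
# your own shared library; "libmylib.so" above is purely hypothetical.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C signature: void *memcpy(void *dest, const void *src, size_t n)
libc.memcpy.restype = ctypes.c_void_p
libc.memcpy.argtypes = [
    ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),  # dest buffer
    ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),  # src buffer
    ctypes.c_size_t,                                   # byte count
]

def copy_array(src):
    """Wrapper in the style described above: enforce dtype and
    contiguity, allocate the result in Python, pass sizes explicitly."""
    src = np.ascontiguousarray(src, dtype=np.float64)
    dest = np.empty_like(src)
    libc.memcpy(dest, src, src.nbytes)
    return dest
```

The same shape carries over to a real library: one argtypes declaration per exported function, and one small Python wrapper per function that validates inputs and allocates outputs.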
On Fri, Sep 2, 2016 at 1:16 AM, Peter Creasey wrote:
FWIW, the broader Python community seems to have largely deprecated ctypes in favor of cffi. Unfortunately I don't know if anyone has written helpers like numpy.ctypeslib for cffi...

-n

-- Nathaniel J. Smith -- https://vorpus.org
Maybe https://bitbucket.org/memotype/cffiwrap or https://github.com/andrewleech/cfficloak helps?
C.
How do these two relate to each other?

- Sebastian
cfficloak is a fork / extension of cffiwrap:

"cfficloak - A simple but flexible module for creating object-oriented, pythonic CFFI wrappers. This is an extension of https://bitbucket.org/memotype/cffiwrap"
I think you can use ffi.from_buffer and ffi.cast from cffi.
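A small sketch of that suggestion, assuming cffi is installed: ffi.from_buffer gives a zero-copy view of a numpy array's memory, and ffi.cast turns it into a typed pointer that could be handed to any C function declared via ffi.cdef. No real library is loaded here; the snippet only demonstrates the pattern.

```python
import numpy as np
from cffi import FFI

ffi = FFI()

x = np.arange(5.0)          # float64, C-contiguous

# Zero-copy view of the array's memory; the returned buffer keeps x alive.
buf = ffi.from_buffer(x)

# Typed pointer suitable for passing to a cdef-declared C function; here
# we just show that it aliases the array's storage.
ptr = ffi.cast("double *", buf)
ptr[2] = 99.0               # writes straight through to x
```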
On Fri, 2 Sep 2016 02:16:35 -0700, Nathaniel Smith wrote:
FWIW, the broader Python community seems to have largely deprecated ctypes in favor of cffi.
I'm not sure about "largely deprecated". For sure, that's the notion spread by a number of people.

Regards,
Antoine.
On Fri, Sep 2, 2016 at 1:16 AM, Peter Creasey wrote:
Cython works really well for this. ctypes is a better option if you have a "black box" shared lib you want to call a couple of functions in. Cython works better if you want to write a slightly "thicker" wrapper around your C code, i.e. it may do a scalar computation, and you want to apply it to an entire numpy array, at C speed. Either would work in this case, but I like Cython better, as long as I don't have compilation issues.

Chris

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R
7600 Sand Point Way NE
Seattle, WA 98115
(206) 526-6959 voice
(206) 526-6329 fax
(206) 526-6317 main reception
Chris.Barker@noaa.gov
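As a hedged sketch of the "thicker" Cython wrapper Chris describes: assume a hypothetical header mylib.h exporting a scalar function double my_scalar_func(double). The wrapper below applies it over a whole numpy array with the loop running in C; the header and function names are illustrative placeholders, not a real API.

```cython
# wrapper.pyx -- build with cythonize; "mylib.h" and "my_scalar_func"
# are placeholder names for your own library's header and function.
import numpy as np

cdef extern from "mylib.h":
    double my_scalar_func(double x)

def apply_func(double[::1] x):
    """Apply the scalar C function to every element, looping in C."""
    cdef Py_ssize_t i, n = x.shape[0]
    out = np.empty(n)
    cdef double[::1] out_view = out
    for i in range(n):
        out_view[i] = my_scalar_func(x[i])
    return out
```

The double[::1] typed memoryview enforces a contiguous float64 input at the call boundary, which plays the same role as the contiguity checks in the ctypes wrappers above.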
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion

participants (7)
- Antoine Pitrou
- Carl Kleffner
- Chris Barker
- Nathaniel Smith
- Peter Creasey
- Sebastian Haase
- Thiago Franco Moraes