[C++-sig] Weave Scipy Inline C++

eric jones eric at enthought.com
Sun Sep 15 07:00:48 CEST 2002

> > Hey Mark,
> >
> > > http://www.scipy.org/site_content/weave
> > >
> > > Can Boost do anything like this?  These folks are planning to use
> > > it, while I am trying to get them to use Boost Python for C++.
> > > They keep talking about Pythonic C++ classes, but I think that
> > > limiting inline C++ to such classes will severely limit Weave.
> >
> > What is this "severe limitation"?  I just don't understand.  Do you
> > have a use case to help me out here?
> >
> > We have a very light weight need -- making dicts, tuples, and lists easy
> > to use in C++ so that inline C++ code looks as much like Python as
> > possible.  This is about 0.3% of boost's capabilities.
> I agreed with that at first, but on second thought, well, maybe not.  That
> capability draws on Boost.Python's core C++<=>Python conversion facilities,
> which account for most of the hard stuff in the library.
> > On the other
> > hand, it uses about 95% of SCXX capabilities, which means it is a good
> > fit.  SCXX is 900 lines of code and easy to port anywhere.
> If that's true, I doubt it's accomplishing the job as well as Boost.Python
> does. Note that you never have to explicitly convert C++ objects to Python
> when they are interacting with a Python object:
>   object f(boost::python::object x)
>   {
>     x[1] = "hello";
>     return x(1,2,3,4,5,6,x); // __call__
>     ...

This is definitely visually cleaner, and I like it better.  Maybe a few
overloads in SCXX would make 

	x[1] = "hello";

work, though, for ints, floats, strings, etc.  I'll look at this and
report back...  Yep, worked fine.

Also, I understand the first line, but not what the second is doing.  Is
x like a UserList that has a __call__ method defined?

Note also that boost has more work to do in this case than weave does.
boost::python::object can be pretty much anything, I'm guessing.  When we
get to the 'x[1] = "hello";' line in weave, the code is explicitly
compiled again each time for a list or a tuple or a dict.  The following
happens at the command line:

>>> a = [1]
>>> weave.inline('a[0] = "hello";',['a'])
<compile a function that takes 'a' as a list>
>>> a[0]
'hello'
>>> a = {}
>>> weave.inline('a[0] = "hello";',['a'])
<compile a function that takes 'a' as a dict>

So I'm guessing the cleverness you probably had to go through to get
things working in C++ is handled by the dynamic typing mechanism in
Python.

> not
>     x[Py::Int(1)] = Py::Str("hello");
>     // ??? what does __call__ look like?

Currently I just use the Python API for calling functions -- although
SCXX does have a callable class that could be used.  Also, nothing
special is done to convert instances that flow from Python into C++
unless a special converter has been written for them (such as for
wxPython objects).  Things weave doesn't explicitly know about are left
as PyObject* which can be manipulated in C code.

> or whatever. Getting this code to work everywhere was one of the hardest
> porting jobs I've ever faced. Compilers seem to have lots of bugs in the
> areas I was exercising.

The porting comment scares me.  Is it still ticklish?  C++ compiler bugs pop
up in areas where they shouldn't -- even in the same compiler on separate
platforms.  There is currently some very silly code in weave explicitly
to work around exception handling bugs on Mandrake with gcc.  Since
spending a couple of days on this single issue, I've tried to avoid
anything potentially fragile (hence the move to SCXX).  CXX compile
issues also pushed me that direction.  The compilers I really care about
are gcc 2.95.x (mingw and linux), gcc 3.x, MSVC, MIPSPro, SunCC, DEC,
and xlc.  How is boost doing on this set?  Weave isn't tested in all these
places yet, but needs to run on all of them eventually (and shouldn't
have a problem now).
There is some other compiler-intensive stuff in weave, but its use is
optional.  I include alternate blitz++ converters for Numeric arrays
because they also make code cleaner and are used in the blitz() routine
for automatically converting Numeric expressions to C++.
Up to now, this stuff has just worked with gcc, and I've been satisfied
to leave it at that.
> However, you may still be right that it's not an appropriate solution for
> weave.

I think boost would work fine -- maybe even better.  I really like that the
boost project is active -- SCXX and CXX aren't very active.  The beauty of SCXX
is it takes about 20 minutes to understand its entire code base.  The
worries I have with boost are:

1) How much of boost do I have to carry around for the simple
functionality I mentioned?
2) How persnickety is compiling the code on new platforms?
3) Would people have to understand much of boost to use this
functionality?
4) How ugly are the error reports from the compiler when code is
malformed?  Blitz++ reports are incomprehensible to anyone except
template gurus.
5) How steep is my learning curve on the code? (I know, only I can
answer that by looking at it for a while which I haven't yet.)

Note that I'm really looking for the prettiest and fastest solution
*with the least possible headaches*.  For weave, least headaches trumps
pretty and fast in a major way.  I've even considered moving weave back
to generating pure C code to make sure it works everywhere and leaving
the user to wrestle with refcounts and the Python API.  I think C++ is
getting far enough along though that this shouldn't be necessary (and
allows the "pretty").  Note though, that I was extremely disappointed
with CXX's speed when manipulating lists, etc.  It was *much* slower
than calling the raw Python API.  For computationally intense stuff on
lists, etc., you had to revert to API calls.  I haven't benchmarked SCXX
yet, but I'm betting the story is the same.  Most things I care about
are in Numeric arrays, but that isn't true for everyone else.

One other thought is that once we understand each other's technologies
better, we may see other places where synergy is beneficial.

> > If you need the other 99.7% of boost's capabilities, then you really
> > need to be using boost instead of weave anyhow.  They serve different
> > purposes.  Weave is generally suited for light weight wrapping and
> > speeding up computational kernels with minimum hassle -- especially
> > numeric codes where Numeric isn't fast enough.
> >
> > Oh, and I'm happy to except patches that allow for boost type
				 ^^^^^^ err... accept :-|

> > in weave (they should, after all, be easy to write).  Then you can use
> > boost instead of SCXX.
> What did you have in mind?

The code for a new type converter class that handles translating Python
code to C++ is rather trivial after the latest factoring of weave.  Here
is an example of a weave expression and the underlying C++ code that is
generated on the fly:

>>> a = {}
>>> weave.inline('a["hello"] = 1;',['a'])

# underlying ext func
static PyObject* compiled_func(PyObject* self, PyObject* args)
{
    PyObject *return_val = NULL;
    int exception_occured = 0;
    PyObject *py__locals = NULL;
    PyObject *py__globals = NULL;
    PyObject *py_a;
    py_a = NULL;
    if(!PyArg_ParseTuple(args,"OO:compiled_func",&py__locals,&py__globals))
        return NULL;
    try
    {
        PyObject* raw_locals = py_to_raw_dict(py__locals,"_locals");
        PyObject* raw_globals = py_to_raw_dict(py__globals,"_globals");
        /* argument conversion code */
        py_a = get_variable("a",raw_locals,raw_globals);
        PWODict a = convert_to_dict(py_a,"a");
        /* inline code */
        a["hello"] = 1;
    }
    catch(...)
    {
        return_val = NULL;
        exception_occured = 1;
    }
    /* cleanup code */
    if(!return_val && !exception_occured)
    {
        Py_INCREF(Py_None);
        return_val = Py_None;
    }
    return return_val;
}

So the line that has to change is:

        PWODict a = convert_to_dict(py_a,"a");

and the convert_to_dict function -- but it is automatically generated by
the converter class (although you could customize it if needed).

And here is the pertinent code out of weave/c_spec.py:
# List, Tuple, and Dict converters.
# Based on SCXX by Gordon McMillan
import os, c_spec # yes, I import myself to find out my __file__
local_dir,junk = os.path.split(os.path.abspath(c_spec.__file__))   
scxx_dir = os.path.join(local_dir,'scxx')

class scxx_converter(common_base_converter):
    def init_info(self):
        self.headers = ['"scxx/PWOBase.h"','"scxx/PWOSequence.h"']
        self.include_dirs = [local_dir,scxx_dir]
        self.sources = [os.path.join(scxx_dir,'PWOImp.cpp'),]

class list_converter(scxx_converter):
    def init_info(self):
        self.type_name = 'list'
        self.check_func = 'PyList_Check'    
        self.c_type = 'PWOList'
        self.to_c_return = 'PWOList(py_obj)'
        self.matching_types = [ListType]
        # ref counting handled by PWOList
        self.use_ref_count = 0

I think the only things that need changing for boost types are the header
list, the source file list, and self.c_type and self.to_c_return, for a
first cut.
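
To make that concrete, here is a hypothetical sketch of what a boost-based
converter might look like, mirroring list_converter above.  The
common_base_converter stub, the boost header/type names, and the include
path are my assumptions so the example stands alone -- this is not weave's
actual API:

```python
# Hypothetical boost-flavored converter, patterned on weave's
# list_converter.  common_base_converter is stubbed out so the sketch
# runs standalone.
class common_base_converter:
    def __init__(self):
        self.headers = []
        self.include_dirs = []
        self.sources = []
        self.init_info()

class boost_converter(common_base_converter):
    def init_info(self):
        self.headers = ['"boost/python/object.hpp"']
        self.include_dirs = ['/path/to/boost']   # placeholder path
        self.sources = []

class boost_list_converter(boost_converter):
    def init_info(self):
        boost_converter.init_info(self)
        self.type_name = 'list'
        self.check_func = 'PyList_Check'
        self.c_type = 'boost::python::list'
        # wrap the borrowed PyObject* in a boost::python::list
        self.to_c_return = ('boost::python::list(boost::python::handle<>('
                            'boost::python::borrowed(py_obj)))')
        self.matching_types = [list]
        self.use_ref_count = 0   # boost::python::object manages refcounts

conv = boost_list_converter()
```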

Hmmm.  Guess I'll show the template that these values fill in:

py_to_c_template = \
"""
class %(type_name)s_handler
{
public:
    %(c_type)s convert_to_%(type_name)s(PyObject* py_obj, const char* name)
    {
        if (!py_obj || !%(check_func)s(py_obj))
            handle_conversion_error(py_obj,"%(type_name)s", name);
        return %(to_c_return)s;
    }
};

%(type_name)s_handler x__%(type_name)s_handler = %(type_name)s_handler();
#define convert_to_%(type_name)s(py_obj,name) \\
        x__%(type_name)s_handler.convert_to_%(type_name)s(py_obj,name)
"""

The class and #define stuff is the silliness required for Mandrake --
really only the convert_to_xxx function is needed.
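
Filling the template in is just %-style dict substitution against the
converter's attributes.  A quick illustration (the template here is a
trimmed stand-in for the real one in weave/c_spec.py):

```python
# Trimmed stand-in for weave's py_to_c_template, filled in with the
# list_converter values via plain %-style dict substitution.
template = """\
%(c_type)s convert_to_%(type_name)s(PyObject* py_obj, const char* name)
{
    if (!py_obj || !%(check_func)s(py_obj))
        handle_conversion_error(py_obj, "%(type_name)s", name);
    return %(to_c_return)s;
}
"""

values = {'type_name':   'list',
          'check_func':  'PyList_Check',
          'c_type':      'PWOList',
          'to_c_return': 'PWOList(py_obj)'}

generated = template % values
print(generated)
```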

