[C++-sig] Re: Interest in luabind

Ralf W. Grosse-Kunstleve rwgk at yahoo.com
Sun Jun 22 15:44:03 CEST 2003

--- Daniel Wallin <dalwan01 at student.umu.se> wrote:
> Right. We didn't really intend for luabind to be used in
> this way, but rather for binding closed modules. It seems to
> me like this can't be very common thing to do though, at
> least not with lua. I have very little insight in how python
> is used.

Boost.Python's "cross-module" feature is absolutely essential for us.
Unfortunately my cross-module web page seems to have fallen through the
cracks in the V1->V2 transition, but here it is, resurrected:


Adding to this: imagine you had to link all extensions statically into
Python. That is analogous to not having cross-module support. Maybe it is
not important if you don't expect others to extend your system, but such
a barrier against natural growth is unacceptable for us.

Anecdotal comment: If you go way back in the Boost mailing list (4th
quarter of 2000) you can see that David wasn't very fond of the cross-module
idea at all :-)

Regarding the "static vs. dynamic dispatch" discussion: It seems to me
(without having thought it through) that static dispatch is associated
with explicitly importing and exporting converters a la Boost.Python
V1 (see cross_module.html referenced above). This made building
extensions quite cumbersome as our system got bigger. In practice it
was a *big* relief when we upgraded to Boost.Python V2. The dynamic
dispatch allowed me to be very generous with introducing a large number
of "convenience converters" which would have been impractical in V1.
To get an idea, look at this fragment of our system for wrapping
multi-dimensional C++ arrays:


Each of the 20 C++ types in the signatures of the friend functions in
the flew_fwd<> struct (which is just a workaround for MIPSpro but nice
to show all the types in one place) needs custom from_python converters
(plural!) *for each* T, of which we have 14 right now. Due to the
dynamic dispatch I can define as many converters as I need *in one
place* and use them "just like that" in any other extension module.
In contrast, with Boost.Python V1 I had to bring all the right
converters into each C++ translation unit.
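As a toy illustration of why dynamic dispatch enables this "define in one
place, use everywhere" pattern, here is a minimal runtime converter
registry sketched in Python (all names here are hypothetical stand-ins;
Boost.Python's actual registry is C++ and far more involved):

```python
# Toy model of a dynamic-dispatch converter registry (all names
# hypothetical): converters registered once, in any module, are
# found at runtime by every other module.

_registry = {}  # target type name -> converter function

def register_converter(target_type, fn):
    _registry[target_type] = fn

def convert(obj, target_type):
    try:
        return _registry[target_type](obj)
    except KeyError:
        raise TypeError("no converter registered for %r" % target_type)

# "Module A" registers a converter in one place...
register_converter("vec3", lambda seq: tuple(float(x) for x in seq))

# ...and "module B" can use it "just like that", with no
# explicit import/export of converters.
assert convert([1, 2, 3], "vec3") == (1.0, 2.0, 3.0)
```

With static dispatch, by contrast, each translation unit would have to
name the converters it uses up front, which is exactly the V1 burden
described above.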

Regarding efficiency considerations: Boost.Python's (V2) conversions
are amazingly fast, on the order of 100000 per second on a recent
Intel/Linux system (I say "amazingly" because I know, to a certain
degree, how involved the C++ code is that makes this happen). But
anyway, I believe if you have to cross the language boundary 100000
times you are making a big mistake. If you have to do something 100000
times it can only be in a loop of some form. Simplified:

  for i in xrange(100000):
      result = some_wrapped_function(args[i])  # hypothetical; each call crosses the boundary

Our approach is to take full advantage of Boost.Python's ease of
use in wrapping additional functions:

E.g. in C++:

  void vectorized_function(array<type_of_some_argument>& a)
  {
    for(std::size_t i = 0; i < a.size(); i++) {
      // apply the scalar operation to a[i]
    }
  }

  def("function", vectorized_function);
  def("function", function); // just in case
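The intended call pattern can be sketched in plain Python (all names are
hypothetical stand-ins; the real gain only materializes when the loop
runs in compiled C++):

```python
# Toy stand-ins for the two wrapped overloads (hypothetical names):
def scalar_op(x):
    """Stands in for the wrapped scalar 'function'."""
    return x * x

def vectorized_op(a):
    """Stands in for 'vectorized_function': the loop lives on
    the compiled side, so Python calls across the boundary once."""
    return [x * x for x in a]

data = list(range(100000))
per_element = [scalar_op(x) for x in data]  # 100000 boundary crossings
batched = vectorized_op(data)               # one boundary crossing
assert per_element == batched
```

The results are identical; only the number of language-boundary
crossings differs, which is where the time goes.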

Of course, critical to this approach is the ability to easily wrap
arrays of user-defined types, and critical to that is the cross-module
support described above.

To summarize my practical experience: Maybe (?) static dispatch is more
efficient if most of your loops are in the interpreted layer, but it is
vastly more efficient to push the rate-limiting loops down into the
compiled layer. This requires wrapping arrays of user-defined types,
which is much more easily handled in a system based on dynamic dispatch.
So overall, dynamic dispatch wins out by a large margin.


