[C++-sig] Re: Interest in luabind

David Abrahams dave at boost-consulting.com
Mon Jun 23 15:30:49 CEST 2003

"Daniel Wallin" <dalwan01 at student.umu.se> writes:

>> > Instead of doing this we have general converters which are used
>> > to convert all user-defined types.
>> I have the same thing for most from_python conversions; the
>> registry is only used as a fallback in that case.
> Hm, doesn't the conversion of UDTs pass through the normal
> conversion system?

See get_lvalue_from_python in
libs/python/src/converter/from_python.cpp.  First it calls
find_instance_impl, which will always find a pointer to the right type
inside a regular wrapped class if such a pointer is there to be found.
The only thing used from the registration in that case is a type_info
object; it is stored in the registration rather than passed as a
separate parameter simply to minimize the amount of object code
generated in extension modules that invoke the conversion.
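
In case a picture helps, the shape of that lookup is roughly this
(just a toy sketch with made-up names, not the real source):

    #include <iostream>
    #include <string>
    #include <typeinfo>

    // The holder answers a type_info query directly; the registration's
    // only contribution is the type_info object itself.
    struct instance_holder
    {
        virtual ~instance_holder() {}
        // Address of the held object if it has the sought type, else 0.
        virtual void* holds(std::type_info const& sought) = 0;
    };

    template <class T>
    struct value_holder : instance_holder
    {
        explicit value_holder(T x) : held(x) {}
        void* holds(std::type_info const& sought)
        {
            return sought == typeid(T) ? &held : 0;
        }
        T held;
    };

    // Sketch of the lvalue lookup: no registry search, just a query on
    // the holder using the registered type_info.
    void* get_lvalue(instance_holder& h, std::type_info const& registered_type)
    {
        return h.holds(registered_type);
    }

    int main()
    {
        value_holder<std::string> h(std::string("hello"));
        void* p = get_lvalue(h, typeid(std::string));
        std::cout << *static_cast<std::string*>(p) << '\n';
    }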

That's for from-python conversions, of course.  For to-python
conversions, yes, we nearly always end up consulting the registration
for the type.  But that's cheap, after all - there's just a single
method for converting any given type to Python, so we just pull the
function pointer out of the registration and invoke it.  Compared to
the cost of constructing a Python object, an indirect call gets lost
in the noise.  The fact that there can only be one way to do that
also means we can introduce specializations for conversion to Python
which bypass the indirection for known types such as int or
std::string.  Note, however, that this bypassing has a usability
cost: the implicit conversion mechanism consults the registration
records directly, and I'm not currently filling in the to-python
registrations for these specialized types, so some implicit
conversion sequences don't work.  It may have been premature
optimization to use specializations here.
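
To make the tradeoff concrete, here is a toy model of the two paths
(invented names, nothing like the real code):

    #include <iostream>
    #include <map>
    #include <string>
    #include <typeinfo>

    struct object { std::string repr; };   // stand-in for a Python object

    // One to-python converter per C++ type; the registration just holds
    // the function pointer.
    typedef object (*to_python_fn)(void const* source);

    std::map<std::type_info const*, to_python_fn>& registry()
    {
        static std::map<std::type_info const*, to_python_fn> r;
        return r;
    }

    // General path: one indirect call through the registration.
    template <class T>
    object to_python(T const& x)
    {
        return registry()[&typeid(T)](&x);
    }

    // Bypass for a known type -- the registry is never consulted, which
    // is exactly why the implicit conversion machinery, which only looks
    // at registration records, can't see it.
    template <>
    object to_python<int>(int const&)
    {
        object o; o.repr = "int"; return o;
    }

    object convert_string(void const* p)
    {
        object o;
        o.repr = "str: " + *static_cast<std::string const*>(p);
        return o;
    }

    int main()
    {
        registry()[&typeid(std::string)] = &convert_string;
        std::cout << to_python(std::string("hi")).repr << '\n';  // registry
        std::cout << to_python(42).repr << '\n';                 // bypass
    }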

>> > To do this we need a map<..> lookup to find the appropriate
>> > converter and this really sucks.
>> I can't understand why you'd need that, but maybe I'm missing
>> something.  The general mechanism in Boost.Python is that
>> instance_holder::holds(type_info) will give you the address of the
>> contained instance if it's there.
> Right, we have a map<const type_info*, ..> when performing
> c++ -> lua conversions. You just need to do
> registered<T>::conversions.to_python(..); Correct?

Roughly speaking, yes.  But what I'm confused about is why, if
you're using full compile-time dispatching for from-lua conversions,
you don't do the same for to-lua conversions.  AFAICT, it's the
former where compile-time dispatch is most useful.  What's the second
template argument to the map?
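
Just so we're talking about the same thing, by full compile-time
dispatch I mean something along these lines (toy names, obviously not
your code):

    #include <iostream>

    struct lua_State;   // only ever used through a pointer here

    // Toy converter record; the real data would be richer.
    struct converter_data
    {
        void (*to_lua)(lua_State* L, void const* source);
    };

    // The association from T to its converter data is made by the
    // compiler: looking it up just names a static object, so there is
    // no map<const type_info*, ...> anywhere on the conversion path.
    template <class T>
    struct registered
    {
        static converter_data converters;
    };

    template <class T>
    converter_data registered<T>::converters;

    template <class T>
    void convert_to_lua(lua_State* L, T const& x)
    {
        registered<T>::converters.to_lua(L, &x);
    }

    // The kind of thing a class_/module registration would set up.
    void int_to_lua(lua_State*, void const* p)
    {
        std::cout << "pushing int " << *static_cast<int const*>(p) << '\n';
    }

    int main()
    {
        registered<int>::converters.to_lua = &int_to_lua;
        convert_to_lua(static_cast<lua_State*>(0), 123);
    }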

>> > As mentioned before, lua can have multiple states, so it would be
>> > cool if the converters would be bound to the state somehow.
>> Why?  It doesn't seem like it would be very useful to have
>> different states doing different conversions.
> It can be useful to be able to register different types in different
> states.

> Otherwise class_() would register global types and def()
> would register local functions. Or am I wrong in assuming that
> class_<T>() instantiates registered<T> and adds a few converters?

No, you're correct.  However, it also creates a Python type object in
the extension module's dictionary, just as def() creates callable
Python objects in the module's dictionary.  I see the converter
registry as a separate data structure which exists in parallel with
the module's dictionary.  I don't see any reason to have a given
module register different sets of type conversions in different
states, even if it is going to contain different types/functions
(though I can't see why you'd want that either).
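
Concretely, the mental model I have is two separate structures, one
per module and one global (again a toy sketch with invented names,
not real code):

    #include <iostream>
    #include <map>
    #include <string>
    #include <typeinfo>

    // Per-module: the dictionary that class_<T>() and def() fill in.
    struct module
    {
        std::map<std::string, std::string> dict;  // name -> exposed object
    };

    // Process-wide: the converter registry, a parallel structure that
    // knows nothing about any particular module's dictionary.
    std::map<std::type_info const*, std::string>& converter_registry()
    {
        static std::map<std::type_info const*, std::string> r;
        return r;
    }

    struct X {};

    template <class T>
    void expose_class(module& m, std::string const& name)
    {
        m.dict[name] = "type object";              // into the module dict
        converter_registry()[&typeid(T)] = name;   // and into the registry
    }

    void expose_function(module& m, std::string const& name)
    {
        m.dict[name] = "callable object";          // dict only
    }

    int main()
    {
        module m;
        expose_class<X>(m, "X");
        expose_function(m, "f");
        std::cout << m.dict.size() << " names in the module, "
                  << converter_registry().size() << " registered type(s)\n";
    }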

>> > Anyway, I find your converter system more appealing than
>> > ours. There are some issues which need to be taken care of;
>> > We choose best match, not first match, when trying different
>> > overloads. This means we need to keep the storage for the
>> > converter on the stack of a function that is unaware of the
>> > converter size (at compile time). So we need to either have
>> > a fixed size buffer on the stack, and hope it works, or
>> > allocate the storage at runtime.
>> I would love to have best match conversion.  I was going to do it
>> at one point, but realized eventually that users can sort the
>> overloads so that they always work, so I never bothered to code it.
> Do you still think best match is worth adding, or is sorting an
> acceptable solution?

I think in many cases, it's more understandable for users to be able
to simply control the order in which converters are tried.  It
certainly is *more efficient* than trying all converters, if you're
going to be truly compulsive about cycles, though I don't really care
about that.  We do have one guy, though, who's got a massively
confusable overload set and I think he's having trouble resolving it
because of the easy conversions between the C++ types (int, long
long) and the Python types (int, long).

In general, I'd prefer to have more things "just work" automatically,
so yeah, I think it's worth adding to Boost.Python.
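
For the record, the difference I have in mind looks something like
this (a toy model of the dispatcher, not the real thing):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Each overload scores the incoming argument: -1 means no match,
    // 0 means exact, larger numbers mean a worse (converting) match.
    enum arg_type { py_int, py_float };

    struct overload
    {
        char const* name;
        int (*match)(arg_type);
    };

    int match_double(arg_type t) { return t == py_float ? 0 : 1; }  // int converts, poorly
    int match_int(arg_type t)    { return t == py_int ? 0 : -1; }   // floats rejected

    // First match: registration order decides, so the user has to sort
    // the overloads carefully.
    overload const* first_match(std::vector<overload> const& o, arg_type t)
    {
        for (std::size_t i = 0; i < o.size(); ++i)
            if (o[i].match(t) >= 0) return &o[i];
        return 0;
    }

    // Best match: every overload is tried and the lowest score wins.
    overload const* best_match(std::vector<overload> const& o, arg_type t)
    {
        overload const* best = 0;
        int best_score = 0;
        for (std::size_t i = 0; i < o.size(); ++i)
        {
            int s = o[i].match(t);
            if (s >= 0 && (best == 0 || s < best_score))
            {
                best = &o[i];
                best_score = s;
            }
        }
        return best;
    }

    int main()
    {
        std::vector<overload> o;
        overload f_double = { "f(double)", &match_double };
        overload f_int    = { "f(int)",    &match_int };
        o.push_back(f_double);  // registered first: shadows f(int) under first-match
        o.push_back(f_int);

        std::cout << "first match for an int: " << first_match(o, py_int)->name << '\n';
        std::cout << "best match for an int:  " << best_match(o, py_int)->name << '\n';
    }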

>> > For clarification:
>> >
>> > void dispatcher(..)
>> > {
>> >   *storage here*
>> >   try all overloads
>> >   call best overload
>> > }
>> I've already figured out how to solve this problem; if we can
>> figure out how to share best-conversion technology I'll happily
>> code it up ;-)
> :) How would you do it? 

I'll give you a hint, if you agree to cooperate on best-conversion:

     Your cycle-counters would probably go apoplectic.

> I guess you could have static storage in the match-function and
> store a pointer to that in the converter data, but that wouldn't be
> thread safe.

OK, here it is, I'll tell you: you use recursion.
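
One possible shape of that (a toy only, and just one way to use the
recursion, not necessarily what the generated code would really look
like): the dispatcher never needs to know any overload's converter
size, because each overload contributes a function that keeps its own
fully-typed converter storage in its *own* stack frame and then calls
back into the dispatcher to try the rest.  When the end of the
overload list is reached, every frame is still alive, so the best
match can be invoked safely.

    #include <iostream>

    struct candidate;   // best match found so far

    struct overload
    {
        // Tries this overload, then continues dispatching 'rest'.
        void (*try_one)(overload const* rest, candidate const* best);
        overload const* next;
    };

    struct candidate
    {
        void const* converters;  // points into a live frame further down
        int score;               // lower is better
    };

    void dispatch(overload const* rest, candidate const* best)
    {
        if (rest)
            rest->try_one(rest->next, best);   // recurse via the overload
        else if (best)
            std::cout << "invoking best overload, score " << best->score
                      << ", converters at " << best->converters << '\n';
        else
            std::cout << "no overload matched\n";
    }

    // What a generated try_one() for a particular overload might look
    // like: it knows its converter types, so the storage is a local.
    void try_f_int(overload const* rest, candidate const* best)
    {
        int converter_storage = 0;   // stands in for typed converter storage
        int score = 1;               // pretend this overload matched, badly
        if (best == 0 || score < best->score)
        {
            candidate mine = { &converter_storage, score };
            dispatch(rest, &mine);   // storage stays alive during recursion
        }
        else
        {
            dispatch(rest, best);
        }
    }

    int main()
    {
        overload f2 = { &try_f_int, 0 };
        overload f1 = { &try_f_int, &f2 };
        dispatch(&f1, 0);
    }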

> It seems to me that sharing the conversion code out of the box is
> going to be hard. 

You mean without any modification to existing conversion source?  I
never expected to achieve that.  Remember, I'm considering a
refactoring of the codebase anyway.

> Perhaps we should consider parameterizing header
> files?
> namespace luabind
> {
>     (2, (lua_State*, int))
>   #include <blabla/conversions.hpp>
> }

Hmm, I'm not sure what you're trying to achieve here, but that kind
of parameterization seems unnecessary to me.  We probably ought to do
it with templates if there's any chance at all that these systems
would have to be compiled together in some context... though I guess
with inclusion into separate namespaces you could get around that.
Well, OK, let's look at the requirements more carefully before we
jump into implementation details.  I may be willing to accept
additional state.
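
To sketch what I mean by doing it with templates (invented names, and
just one possible shape): the shared conversion code gets
parameterized on a backend policy that says what the "interpreter
state" looks like, so the Python and Lua instantiations can coexist
in one translation unit without any namespace or #include tricks.

    #include <iostream>

    struct lua_State;   // only used through a pointer here

    struct python_backend
    {
        typedef void* state;   // Python needs no explicit state argument
        static void report(state) { std::cout << "converting for Python\n"; }
    };

    struct lua_backend
    {
        struct state { lua_State* L; int index; };  // the (lua_State*, int) pair
        static void report(state) { std::cout << "converting for Lua\n"; }
    };

    // The shared machinery, written once.
    template <class Backend, class T>
    void convert(typename Backend::state s, T const&)
    {
        Backend::report(s);
        // ... the real conversion logic would go here ...
    }

    int main()
    {
        convert<python_backend>(0, 123);

        lua_backend::state s = { 0, -1 };
        convert<lua_backend>(s, 123);
    }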

Dave Abrahams
Boost Consulting
