[C++-sig] Re: Interest in luabind

dalwan01 dalwan01 at student.umu.se
Thu Jun 19 16:48:16 CEST 2003


> "dalwan01" <dalwan01 at student.umu.se> writes:
>
> >>
> >> Moving this to the C++-sig as it's a more appropriate
> >> forum...
> >>
> >> "dalwan01" <dalwan01 at student.umu.se> writes:
> >>
> >> >> Daniel Wallin <dalwan01 at student.umu.se> writes:
> >> >>
> >> >> > Note however that there are quite a few differences in
> >> >> > design, for instance for our scopes we have been
> >> >> > experimenting with expressions a la phoenix:
> >> >> >
> >> >> > namespace_("foo")
> >> >> > [
> >> >> >    def(..),
> >> >> >    def(..)
> >> >> > ];
> >> >>
> >> >> I considered this syntax but I am not convinced it is an
> >> >> advantage.  It seems to have quite a few downsides and no
> >> >> upsides.  Am I missing something?
> >> >
> >> > For us it has several upsides:
> >> >
> >> >   * We can easily nest namespaces
> >>
> >> IMO, it optimizes for the wrong case, since namespaces are
> >> typically flat rather than deeply nested (see the Zen of
> >> Python), nor are they represented explicitly in Python code,
> >> but inferred from file boundaries.
> >>
> >> >   * We like the syntax :)
> >>
> >> It is nice for C++ programmers, but Python programmers at
> >> least are very much more comfortable without the brackets.
> >
> >
> >>
> >> >   * We can remove the lua_State* parameter from
> >> >     all calls to def()/class_()
> >>
> >> I'm not sure what that is.  We handle global state in
> >> Boost.Python by simply keeping track of the current module
> >> ("state") in a global variable.  Works a treat.
> >
> > As pointed out, lua can handle multiple states, so using
> > global variables doesn't strike me as a very good solution.
>
> I am not committed to the global variable approach nor am I
> opposed to the syntax.
>
> >> > What do you consider the downsides to be?
> >>
> >> In addition to what I cited above,
> >>
> >> a. since methods and module-scope functions need to be
> >> wrapped differently, you need to build up a data structure
> >> which stores the arguments to def(...) out of the
> >> comma-separated items with a complex expression-template
> >> type and then interpret that type using a metaprogram when
> >> the operator[]s are applied.  This can only increase compile
> >> times, which is already a problem.
> >
> > We don't build a complex expression-template; instead we
> > build a list of objects, each with a virtual method to commit
> > that object to the lua_State.
>
> Very nice solution!  My brain must have been trapped into
> thinking "compile-time".
>
> > This doesn't increase compile times.
>
> Good.  Virtual functions come with bloat of their own, but
> that's an implementation detail which can be mitigated.

Right. The virtual functions aren't generated in the template, so
very little code is generated.
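
To make that a bit more concrete, here is a rough sketch of the
idea (simplified, with made-up names rather than the actual luabind
internals, and written against the modern Lua C API):

#include <lua.hpp>
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct registration
{
    virtual ~registration() = default;
    virtual void commit(lua_State* L) const = 0;
};

struct scope
{
    void add(std::unique_ptr<registration> r)
    {
        members.push_back(std::move(r));
    }

    void commit(lua_State* L) const
    {
        for (auto const& r : members)
            r->commit(L);        // one virtual call per entry
    }

    std::vector<std::unique_ptr<registration>> members;
};

// a def("name", &f) would produce something like this; the actual
// argument/return conversion machinery is omitted
struct function_registration : registration
{
    function_registration(std::string name, lua_CFunction f)
        : name(std::move(name)), f(f) {}

    void commit(lua_State* L) const override
    {
        lua_pushcfunction(L, f);
        lua_setglobal(L, name.c_str());
    }

    std::string name;
    lua_CFunction f;
};

The bracket expression just appends these objects to a runtime
list, and nothing touches the lua_State until the whole expression
is committed.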

>
> >> I guess these two are essentially the same issue.
> >>
> >> >> > Also, we don't have a type-converter registry; we make
> >> >> > all choices on what converter to use at compile time.
> >> >>
> >> >> I used to do that, but it doesn't support
> >> >> component-based development and has other serious
> >> >> problems.  Are you sure your code is actually conformant?
> >> >> When converters are determined at compile-time, the only
> >> >> viable and conformant way AFAICT is with template
> >> >> specializations, and that means clients have to be highly
> >> >> conscious of ordering issues.
> >> >
> >> > I think it's conformant, but I wouldn't swear on it.
> >> > We strip all qualifiers from the types and specialize on
> >> >
> >> >   by_cref<..>
> >> >   by_ref<..>
> >> >   by_ptr<..>
> >> >   by_value<..>
> >> >
> >> > types.
>
> I'm not really sure what the above means yet... I'm certainly
> interested in avoiding runtime dispatching if possible, so if
> this approach is viable for Boost.Python I'm all for it.

I don't know if I fully understand the ordering issues you
mentioned. When we first implemented this we had converter
functions with this signature:

T convert(type<T>, ..)

This of course introduces problems with some compilers when trying
to overload for T& and T* and such, so we introduced a more
complex family of type<..> tags: T& -> by_ref<T>,
const T& -> by_cref<T>, etc.
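
Roughly like this, as a sketch of the idea rather than our exact
code (the index parameter and the particular overloads are just
for illustration):

#include <lua.hpp>

// the qualifiers are stripped into distinct tag templates:
// T -> by_value<T>, T& -> by_ref<T>, T const& -> by_cref<T>,
// T* -> by_ptr<T>
template <class T> struct by_value {};
template <class T> struct by_ref   {};
template <class T> struct by_cref  {};
template <class T> struct by_ptr   {};

// converters then overload on the tag type rather than on the
// qualified type itself, which is what tripped up some compilers
inline int convert_from_lua(lua_State* L, by_value<int>, int index)
{
    return static_cast<int>(lua_tonumber(L, index));
}

inline char const* convert_from_lua(lua_State* L,
                                    by_value<char const*>, int index)
{
    return lua_tostring(L, index);
}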

>
> >>
> >> How do people define specialized converters for particular
> >> types?
> >
> > This isn't finished, but currently we do:
> >
> > yes_t is_user_defined(by_cref<my_type>);
> > my_type convert_from_lua(lua_State*, by_cref<my_type>);
> >
> > something like that..
>
> I assume that means the user must define those two functions?
> Where in the code must they be defined?

Right. The user declares the first function and defines the
other before binding functions that use the types.
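
In user code that would look roughly like this (illustrative only;
my_type and the function bodies are made up, and the real
signatures carry a bit more detail):

#include <lua.hpp>

// tag template as in the earlier sketch
template <class T> struct by_cref {};

typedef char yes_t;

struct my_type
{
    explicit my_type(double x) : x(x) {}
    double x;
};

// the declaration alone is enough; it only participates in a
// compile-time (sizeof-style) test for "has a user-defined
// converter"
yes_t is_user_defined(by_cref<my_type>);

// the definition must be visible before any function taking
// my_type is bound; the stack handling is simplified here
my_type convert_from_lua(lua_State* L, by_cref<my_type>)
{
    return my_type(lua_tonumber(L, -1));
}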

>
> How will this work when multiple extension modules need to
> manipulate the same types?

I don't know. I haven't given that much thought. Do you see
any obvious issues?

>
> How do you *add* a way to convert from Python type A to C++
> type B without masking the existing conversion from Python
> type Y to C++ type Z?

I don't understand. How are B and Z related? Why would a
conversion function for B mask conversions to Z?

>
> >> > It works on all compilers we have tried it on (vc 6-7.1,
> >> > codewarrior, gcc2.95.3+, intel).
> >>
> >> Codewarrior Pro8.x, explicitly using the '-iso-templates on'
> >> option?  All the others support several common nonconformance
> >> bugs, many of which I was exploiting in Boost.Python v1.
> >
> > I haven't tried with the -iso- option, I'll try it when I get
> > home. We do not however use the bug you were exploiting in
> > bpl.v1 (I assume you are referring to friend templates?).
>
> No, friend functions declared in templates being found without
> Koenig Lookup.

Right, that's what I meant. :)
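
(For anyone reading along, the nonconformance in question is
essentially the old friend name injection behavior; a minimal
illustration with made-up names:)

template <class T>
struct holder
{
    explicit holder(T x) : value(x) {}

    // defined inside the class template; a conforming compiler can
    // only find this through Koenig (argument-dependent) lookup on
    // a holder<T> argument
    friend T get(holder<T> const& h) { return h.value; }

    T value;
};

int conforming_use()
{
    holder<int> h(42);
    return get(h);   // fine everywhere: found through the argument
}

// A call with no holder<T> argument in sight, found by ordinary
// unqualified lookup, only worked on compilers that still did
// friend name injection; conforming code cannot rely on it.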

>
> >> > For us it doesn't seem like an option to dispatch the
> >> > converters at runtime, since performance is a really high
> >> > priority for our users.
> >>
> >> What we're doing in Boost.Python turns out to be very
> >> efficient, well below the threshold that anyone would notice
> >> IIUC.  Eric Jones did a test comparing its speed to SWIG and
> >> to my great surprise, Boost.Python won.
> >
> > Lua is used a lot in game development, and game developers
> > tend to care very much about every extra cycle. Even an extra
> > function call via a function pointer could make a difference
> > for those users.
>
> I'm not convinced yet.  Just adding a tiny bit of lua code next
> to any invocation of a wrapped function would typically consume
> much more than that.
>
> > We like the generated bindings to be almost equal in speed to
> > ones that are hand written.
>
> Me too; I just have serious doubts that once you factor in
> everything else that you want going on (e.g. derived <==> base
> conversions), the ability to dynamically register conversions
> has a significant cost.

You might be right. I'll investigate how runtime dispatch would
affect luabind over the next couple of days; in particular I will
look at what this would do to our policy system.
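
For concreteness, the kind of runtime-dispatched registry I'll be
measuring against looks roughly like this (a generic sketch, not
Boost.Python's or luabind's actual implementation):

#include <lua.hpp>
#include <map>
#include <typeindex>

// a from-lua converter reads the value at 'index' and writes the
// converted C++ object into 'storage'
typedef void (*from_lua_fn)(lua_State* L, int index, void* storage);

class converter_registry
{
public:
    void add(std::type_index type, from_lua_fn fn)
    {
        converters_[type] = fn;
    }

    // one map lookup plus one indirect call per argument
    // conversion; this is the runtime overhead in question
    from_lua_fn find(std::type_index type) const
    {
        std::map<std::type_index, from_lua_fn>::const_iterator it =
            converters_.find(type);
        return it == converters_.end() ? 0 : it->second;
    }

private:
    std::map<std::type_index, from_lua_fn> converters_;
};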

--
Daniel Wallin



