[C++-sig] Re: Interest in luabind
dave at boost-consulting.com
Thu Jun 19 14:04:23 CEST 2003
"dalwan01" <dalwan01 at student.umu.se> writes:
>> Moving this to the C++-sig as it's a more appropriate forum.
>> "dalwan01" <dalwan01 at student.umu.se> writes:
>> >> Daniel Wallin <dalwan01 at student.umu.se> writes:
>> >> > Note however that there are quite a few differences in design,
>> >> > for instance for our scope's we have been experimenting with
>> >> > expressions ala phoenix:
>> >> >
>> >> > namespace_("foo")
>> >> > [
>> >> > def(..),
>> >> > def(..)
>> >> > ];
>> >> I considered this syntax but I am not convinced it is an advantage.
>> >> It seems to have quite a few downsides and no upsides. Am I
>> >> missing something?
>> > For us it has several upsides:
>> > * We can easily nest namespaces
>> IMO, it optimizes for the wrong case: namespaces are typically
>> flat rather than deeply nested (see the Zen of Python), and they
>> are not represented explicitly in Python code anyway, but inferred
>> from file boundaries.
>> > * We like the syntax :)
>> It is nice for C++ programmers, but Python programmers at least are
>> very much more comfortable without the brackets.
>> > * We can remove the lua_State* parameter from
>> > all calls to def()/class_()
>> I'm not sure what that is. We handle global state in Boost.Python
>> by simply keeping track of the current module ("state") in a global
>> variable. Works a treat.
> As pointed out, lua can handle multiple states, so using
> global variables doesn't strike me as a very good solution.
I am not committed to the global variable approach nor am I opposed
to the syntax.
>> > What do you consider the downsides to be?
>> In addition to what I cited above,
>> a. since methods and module-scope functions need to be wrapped
>> differently, you need to build up a data structure which stores the
>> arguments to def(...) out of the comma-separated items with a
>> complex expression-template type and then interpret that type using
>> a metaprogram when the operators are applied. This can only
>> increase compile times, which is already a problem.
> We don't build a complex expression-template, instead we
> build a list of objects with a virtual method to commit that
> object to the lua_State.
Very nice solution! My brain must have been trapped in compile-time thinking.
> This doesn't increase compile times.
Good. Virtual functions come with bloat of their own, but that's an
implementation detail which can be mitigated.
>> b. You don't get any order-of-evaluation guarantees. Things like
>> staticmethod() need to operate on an existing function object in
>> the class' dictionary, and if you can't guarantee that it gets
>> executed after a def() call, you need to further complicate your
>> expression template to delay evaluation of staticmethod().
> As we don't build an expression template, I don't think this
> is an issue.
Actually I think it's a non-issue because you *do* build a
runtime-bound version of an expression template.
>> I guess these two are essentially the same issue.
>> >> > Also, we don't have a type-converter registry; we make all
>> >> > choices on what converter to use at compile time.
>> >> I used to do that, but it doesn't support component-based
>> >> development and has other serious problems. Are you
>> >> sure your code is actually conformant? When converters are
>> >> determined at compile-time, the only viable and conformant way
>> >> AFAICT is with template specializations, and that means clients
>> >> have to be highly conscious of ordering issues.
>> > I think it's conformant, but I wouldn't swear on it.
>> > We strip all qualifiers from the types and specialize on
>> > by_cref<..>
>> > by_ref<..>
>> > by_ptr<..>
>> > by_value<..>
>> > types.
I'm not really sure what the above means yet... I'm certainly
interested in avoiding runtime dispatching if possible, so if this
approach is viable for Boost.Python I'm all for it.
>> How do people define specialized converters for particular types?
> This isn't finished, but currently we do:
> yes_t is_user_defined(by_cref<my_type>);
> my_type convert_from_lua(lua_State*, by_cref<my_type>);
> something like that..
I assume that means the user must define those two functions? Where
in the code must they be defined?
How will this work when multiple extension modules need to manipulate
the same types?
How do you *add* a way to convert from Python type A to C++ type B
without masking the existing conversion from Python type Y to C++
type Z?
>> > It works on all compilers we have tried it on (vc 6-7.1,
>> > codewarrior, gcc2.95.3+, intel).
>> Codewarrior Pro8.x, explicitly using the '-iso-templates on'
>> option? All the others support several common nonconformance bugs,
>> many of which I was exploiting in Boost.Python v1.
> I haven't tried with the -iso- option; I'll try it when I get home. We
> do not, however, use the bug you were exploiting in bpl.v1 (I assume
> you are referring to friend templates?).
No, friend functions declared in templates being found without Koenig lookup.
>> > For us it doesn't seem like an option to dispatch the converters at
>> > runtime, since performance is a really high priority for our users.
>> What we're doing in Boost.Python turns out to be very efficient,
>> well below the threshold that anyone would notice IIUC. Eric Jones
>> did a test comparing its speed to SWIG and to my great surprise,
>> Boost.Python won.
> Lua is used a lot in game development, and game developers tend to
> care very much about every extra cycle. Even an extra function call
> via a function pointer could make a difference for those users.
I'm not convinced yet. Just adding a tiny bit of lua code next to any
invocation of a wrapped function would typically consume much more
time than the dispatch itself.
> We like the generated bindings to be almost equal in speed to one
> that is hand written.
Me too; I just have serious doubts that once you factor in everything
else that you want going on (e.g. derived <==> base conversions), the
ability to dynamically register conversions has a significant cost.