[C++-sig] Re: Interest in luabind
dave at boost-consulting.com
Wed Jun 18 19:35:43 CEST 2003
Rene Rivera <grafik.list at redshift-software.com> writes:
> [2003-06-18] David Abrahams wrote:
>>Moving this to the C++-sig as it's a more appropriate forum...
>>"dalwan01" <dalwan01 at student.umu.se> writes:
>>>> Daniel Wallin <dalwan01 at student.umu.se> writes:
>>>> > namespace_("foo")
>>>> > [
>>>> > def(..),
>>>> > def(..)
>>>> > ];
>>>> I considered this syntax but I am not convinced it is an advantage.
>>>> It seems to have quite a few downsides and no upsides. Am I
>>>> missing something?
>>> For us it has several upsides:
>>> * We can easily nest namespaces
>>IMO, it optimizes for the wrong case, since namespaces are typically flat
>>rather than deeply nested (see the Zen of Python), nor are they
>>represented explicitly in Python code, but inferred from file layout.
> I must be atypical. I make heavy, nested use of namespaces in my C++ code.
> So having an easy way to represent that would be nice.
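The nesting upside can be seen in a small sketch. This is not luabind's actual implementation, just a hypothetical mini-DSL (all names invented) showing why the bracket syntax composes: operator[] takes a comma-joined list of registrations, and a namespace_ converts to a registration itself, so namespaces nest inside other namespaces' brackets for free.

```cpp
#include <string>
#include <utility>
#include <vector>

// A registered entity: a function, or a namespace with children.
struct Registration {
    std::string name;
    std::vector<Registration> children;  // non-empty only for namespaces
};

// def("f") would register a function; the binding machinery is omitted here.
inline Registration def(std::string name) { return Registration{std::move(name), {}}; }

// Overloading the comma operator joins registrations inside the brackets.
struct RegList { std::vector<Registration> items; };
inline RegList operator,(Registration a, Registration b) {
    return RegList{{std::move(a), std::move(b)}};
}
inline RegList operator,(RegList l, Registration b) {
    l.items.push_back(std::move(b));
    return l;
}

struct namespace_ {
    Registration reg;
    explicit namespace_(std::string n) : reg{std::move(n), {}} {}
    namespace_& operator[](Registration r) {
        reg.children.push_back(std::move(r));
        return *this;
    }
    namespace_& operator[](RegList l) {
        for (auto& r : l.items) reg.children.push_back(std::move(r));
        return *this;
    }
    operator Registration() const { return reg; }  // lets a namespace nest in another
};
```

With that in place, `namespace_("foo")[ def("bar"), namespace_("inner")[ def("baz") ] ]` builds the nested tree directly. (On C++20 and later, parenthesizing the comma expression inside the brackets avoids the deprecated comma-in-subscript.)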
>>> * We like the syntax :)
>>It is nice for C++ programmers, but Python programmers at least are
>>very much more comfortable without the brackets.
>>> * We can remove the lua_State* parameter from
>>> all calls to def()/class_()
>>I'm not sure what that is. We handle global state in Boost.Python by
>>simply keeping track of the current module ("state") in a global
>>variable. Works a treat.
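A rough sketch of that arrangement (names here are invented, not Boost.Python's real internals): module init code installs the current module state in one global, and registration helpers read it implicitly instead of taking an explicit state parameter.

```cpp
#include <string>
#include <utility>

struct module_state { std::string name; };  // stand-in for per-module state

namespace detail {
    inline module_state* current_scope = nullptr;  // the single global
}

// RAII guard: init code installs its module state once, up front.
struct scope_guard {
    module_state* prev;
    explicit scope_guard(module_state& m) : prev(detail::current_scope) {
        detail::current_scope = &m;
    }
    ~scope_guard() { detail::current_scope = prev; }
};

// A def()-like helper no longer needs a state parameter; it reads the global.
inline std::string def_in_current(std::string fname) {
    return detail::current_scope->name + "." + std::move(fname);
}
```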
> It's not global state. Unlike Python Lua can handle multiple "instances" of
> an interpreter by keeping all the interpreter state in one object.
Python can handle multiple interpreter instances too, but hardly
anyone does that. In any case, it still seems to me to be a handle to
global state.
> So having a single global var for that is not an option.
Why not? I don't get it. Normally any module's initialization code
will be operating on a single interpreter, right? Why not store its
identity in a global variable?
> It needs to get passed around explicitly or implicitly. I imagine
> Lua is not the only interpreter that does this. So it's something to
> consider carefully as we'll run into it again (in fact, if I remember
> correctly, Java JNI does the same thing).
As long as modules don't initialize concurrently, I don't see how
there could be a problem. Of course, if they *do* initialize
concurrently, everything I've said about the viability of globals is
wrong. For that case you'd need TLS if you wanted to effectively hide
the state :(.
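The TLS fallback for the concurrent case might look like this sketch (invented names; `lua_State` here is just an opaque stand-in, not the real Lua type): each initializing thread gets its own "current state" slot, so concurrent module inits don't race on one global.

```cpp
#include <thread>

struct lua_State { int id; };  // opaque stand-in for the interpreter state

// One current-state pointer per thread instead of a single shared global.
thread_local lua_State* tls_current_state = nullptr;

// Module init installs its interpreter handle for this thread only.
inline void begin_module_init(lua_State* L) { tls_current_state = L; }

// Registration code reads the implicit, thread-local handle.
inline lua_State* current_state() { return tls_current_state; }
```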
>>> For us it doesn't seem like an option to dispatch the converters at
>>> runtime, since performance is a really high priority for our users.
>>What we're doing in Boost.Python turns out to be very efficient, well
>>below the threshold that anyone would notice IIUC. Eric Jones did a
>>test comparing its speed to SWIG and to my great surprise,
> It's a somewhat different audience that uses Lua: the kind of audience that
> looks at the generated assembly to make sure it's efficient. People like
> game developers, embedded developers, etc.
<politically incorrect sweeping generalization>
That audience tends to be superstitious about cycles, rather than
measuring, and I think this concern is almost always misplaced when
applied at the boundary between interpreted and compiled languages.
The whole point of binding C++ into an interpreter, when you're
concerned with performance, is to capture a large chunk of
high-performance execution behind a single function call in the
interpreter. I would think that once you are willing to use a
language like Lua you're not going to be that parsimonious with the
execution of Lua code adjacent to the call into C++, and it's easy for
an extra instruction or two in the interpreter to swamp the cost of
dynamic type conversion. Furthermore, purely compile-time lookups can
have costs in code size, which is another important concern for this
audience.
</politically incorrect sweeping generalization>
> so having a choice between compile time and runtime they, and I,
> would choose compile time. But perhaps the important thing about
> this is to consider how to support both models.
I know it can be a hard sell to that group, but I'd want to see some
convincing numbers before deciding to support both models.
Boost.Python used to use static converter lookups, but the advantages
of doing it dynamically are so huge that I'm highly reluctant to
complicate the codebase by supporting both without scientific
evidence.
Oh, and BTW: I think people have vastly overestimated the amount of
avoidable dynamic lookup, and the amount of code actually executed in
the dynamic converter lookup. There is no map indexing or anything
like that, except at the time the module is loaded, when references to
the converter registry for each type are initialized. The converter
registry for a given type generally contains only one converter (in
each direction), so there is no cost for searching for an appropriate
converter. When C++ classes are extracted from wrapped class objects,
a *static* procedure for finding the C++ class is tried before
consulting the registry.
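A sketch of that lookup structure, with invented names (not Boost.Python's actual internals): the only map indexing happens once per type, at module-load time, when a reference into the registry is cached; after that, conversion follows the cached reference directly, and an entry usually holds a single converter per direction, so nothing is searched.

```cpp
#include <typeindex>
#include <unordered_map>

// One entry per C++ type; each direction typically has one converter.
struct converter_entry {
    void* (*from_script)(void*) = nullptr;
    void* (*to_script)(void*) = nullptr;
};

inline std::unordered_map<std::type_index, converter_entry>& registry() {
    static std::unordered_map<std::type_index, converter_entry> r;
    return r;
}

// The sole map lookup runs once, on first use (module load); afterwards
// the cached reference is followed with no map access and no searching.
template <class T>
converter_entry& cached_entry() {
    static converter_entry& e = registry()[std::type_index(typeid(T))];
    return e;
}
```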
Finally, if you care about derived <==> base class conversions (and I
think you do), there will always be some dynamic type manipulation
and/or RTTI, leading to some dynamic dispatching, because that's the
only way to implement it.
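The reason in miniature: once the static type has been erased at the language boundary, only RTTI can recover the most-derived object from a base pointer, and no purely compile-time table can substitute for that check.

```cpp
// A wrapped object only remembers that it holds "some Base*"; recovering
// the derived object requires a runtime check (dynamic_cast or equivalent).
struct Base { virtual ~Base() = default; };
struct Derived : Base { int payload = 42; };

inline Derived* as_derived(Base* b) { return dynamic_cast<Derived*>(b); }
```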
I am not trying to be difficult here. If there are significant
technical advantages to purely-static converter lookups, I will be the
first to support the idea. In all, however, I believe it's not an
accident that Boost.Python evolved from purely-static to a model which
supports dynamic conversions, not just because of usability concerns,
but also because of correctness and real efficiency. So, let's keep
the conversation open, and try to hammer on it until we reach
consensus.