[C++-sig] Re: Interest in luabind

David Abrahams dave at boost-consulting.com
Wed Jun 18 20:30:09 CEST 2003


Rene Rivera <grafik666 at redshift-software.com> writes:

> [2003-06-18] David Abrahams wrote:
>
>>>>I'm not sure what that is.  We handle global state in Boost.Python by
>>>>simply keeping track of the current module ("state") in a global
>>>>variable.  Works a treat.
>>>
>>> It's not global state. Unlike Python, Lua can handle multiple "instances" of
>>> an interpreter by keeping all the interpreter state in one object.
>>
>>Python can handle multiple interpreter instances too, but hardly
>>anyone does that.  In any case, it still seems to me to be a handle
>>to global state.
>
> Perhaps because Python has a higher interpreter cost? 

I'm not really sure why.  What's an "interpreter cost"?

> The thing is, it's the recommended way to do things in Lua.
>
>>> So having a single global var for that is not an option. 
>>
>>Why not?  I don't get it.  Normally any module's initialization code
>>will be operating on a single interpreter, right?
>
> No. The LuaState is the complete interpreter state. So to do bindings, or
> anything else, you create the state for each context you are calling
> in.

"Context", possibly meaning "module?"

If so, I still don't see a problem with using namespace-scope
variables in an anonymous namespace (for example).
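
Something along these lines is what I have in mind.  This is only a
rough sketch: the names init_my_module and def_class are invented, and
the plain Lua C API is standing in for whatever luabind would really do:

    extern "C" {
    #include "lua.h"
    }

    namespace {                       // internal linkage: invisible to
        lua_State* current_state = 0; // other TUs; the "current interpreter"
    }

    void def_class()
    {
        // registration calls use current_state here instead of taking a
        // lua_State* argument, e.g. lua_register(current_state, ...).
    }

    void init_my_module(lua_State* L)
    {
        current_state = L;   // stash the state for the duration of init
        def_class();         // user-facing binding code never mentions L
        current_state = 0;
    }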

> There's no requirement that the state match anything other than the
> calling context. For example, I could create a set of states, say 20,
> and have a pool of, say, 50 threads that all "share" those on an
> as-needed basis. Something like this is in fact my current need for
> Lua.

Wow, cool and weird!  Why do you want 20 separate interpreters?
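
Just so I'm sure we're imagining the same thing, here is roughly the
shape of pool I picture.  Again only a sketch: it uses std::mutex and
std::condition_variable for the locking (any thread primitive would do),
and every name in it is invented:

    #include <condition_variable>
    #include <mutex>
    #include <vector>

    extern "C" {
    #include "lua.h"
    }

    // N interpreters shared by more threads than states; each state is
    // used by at most one thread at a time.
    class state_pool {
    public:
        explicit state_pool(std::vector<lua_State*> states)
            : free_(states) {}

        lua_State* acquire()          // block until some state is free
        {
            std::unique_lock<std::mutex> lock(mutex_);
            while (free_.empty())
                available_.wait(lock);
            lua_State* s = free_.back();
            free_.pop_back();
            return s;
        }

        void release(lua_State* s)    // hand the state back to the pool
        {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                free_.push_back(s);
            }
            available_.notify_one();
        }

    private:
        std::mutex mutex_;
        std::condition_variable available_;
        std::vector<lua_State*> free_;
    };

Whichever thread grabs a state runs against it and hands it back when
done, so the bindings never need to know which thread that was.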

>>> It needs to get passed around explicitly or implicitly. I imagine
>>> Lua is not the only interpreter that does this. So it's something to
>>> consider carefully, as we'll run into it again (in fact, if I remember
>>> correctly, Java JNI does the same thing).
>>
>>As long as modules don't initialize concurrently, I don't see how
>>there could be a problem.  Of course, if they *do* initialize
>>concurrently, everything I've said about the viability of globals is
>>wrong.  For that case you'd need TLS if you wanted to effectively hide
>>the state :(.
>
> Ah, well, there's the rub ;-) They can initialize concurrently. And
> to make it more interesting, the same state can be used by different
> threads (but not at the same time) from time to time.

If the same module can be initialized simultaneously by two separate
interpreters, I can see that there might be a problem.  Of course, one
could put a mutex guard around the whole module initialization, but
the people who read the ASM would probably be upset with that.
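
The TLS alternative I mentioned would just make the hidden pointer
thread-local instead of locking.  Again only a sketch, with invented
names, and with the thread_local keyword standing in for whatever TLS
mechanism is actually available on the platform:

    extern "C" {
    #include "lua.h"
    }

    namespace {
        // one "current interpreter" per thread, so two threads can run
        // module initialization against different states concurrently
        thread_local lua_State* current_state = 0;
    }

    void def_class()
    {
        // registration calls read current_state as before
    }

    void init_my_module(lua_State* L)
    {
        current_state = L;   // only this thread sees the assignment
        def_class();
        current_state = 0;
    }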

>><politically incorrect sweeping generalization>
>
> I hate politics ;-)

Me too; it wasn't meant to be a political statement.  I was trying to
put the technical issues in perspective.

> But yes, measuring the performance is a requirement. The group in
> question tends to resort to looking at the ASM only when they've run
> out of other options to find out why their program is slow.

Maybe this is a different group from the one that invented EC++
because "namespaces and templates have negative performance impact."
;-)

>>I am not trying to be difficult here.  If there are significant
>>technical advantages to purely-static converter lookups, I will be the
>>first to support the idea.  In all, however, I believe it's not an
>>accident that Boost.Python evolved from purely-static to a model which
>>supports dynamic conversions, not just because of usability concerns,
>>but also because of correctness and real efficiency.  So, let's keep
>>the conversation open, and try to hammer on it until we reach
>>consensus.
>
> Being difficult is the point ;-) If there's no difficulty there's no
> discussion.
>
> OK, I'm basically convinced by that argument. If the majority of the
> lookups are O(1), then the extra cycles at runtime are worth the
> convenience.

They are.  Furthermore, to-python conversions for specific known
types can be fixed at compile-time.
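
For instance (a generic sketch, not Boost.Python's actual machinery; the
names to_target, registry, and convert_dynamically are invented, and
PyObjectStub stands in for the target language's value type): a
converter for a type known at compile time can be selected by a template
specialization and inlined, while other types go through a registry
looked up at runtime, which is the O(1)-plus-a-function-pointer-call
case we're discussing:

    #include <map>
    #include <typeindex>
    #include <typeinfo>

    struct PyObjectStub;                  // stand-in for the target
                                          // language's value type

    // Compile-time path: the converter for a known type is chosen by
    // template specialization, so the compiler can resolve and inline it.
    template <class T> struct to_target;  // no conversion by default

    template <> struct to_target<int> {
        static PyObjectStub* convert(int) { return 0; /* build value here */ }
    };

    // Runtime path: converters registered for user-defined types are found
    // through one map lookup plus a call through a function pointer.
    typedef PyObjectStub* (*converter_fn)(const void*);

    inline std::map<std::type_index, converter_fn>& registry()
    {
        static std::map<std::type_index, converter_fn> r;
        return r;
    }

    template <class T>
    PyObjectStub* convert_dynamically(const T& x)
    {
        auto it = registry().find(std::type_index(typeid(T)));
        return it == registry().end() ? 0 : it->second(&x);
    }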

> I just worry about O(n) lookups at a junction point in a program. It tends
> to produce O(n^2) algos ;-)

When n != 1, it's usually 0 or 2.  As long as you're not nervous about
everything that calls through a function pointer, I think we're OK.
Let's see what the luabind guys think.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




