COM/CORBA/DCOP (was: Hello people. I have some questions)

Neil Hodgson nhodgson at bigpond.net.au
Wed Sep 5 08:34:02 EDT 2001


Alex Martelli:

> >    At the risk of being pedantic (and with Alex, it is *such* a
> > temptation :-) ), you can write your automation code (both client and
> > server) to deal
>
> If you have control of both client and server, yes, but then there's
> no point using Automation -- its interpretive nature is meant to help
> when either end is NOT under your control:-).

   No, it is symmetric. As a client I can keep all my data around in
VARIANTs, even in preallocated argument arrays of VARIANTs - think of
making many calls to particular methods (see the sketch below). As an
independently developed server, I can reach into the VARIANT array
directly; there is nothing particularly difficult about subscripting an
array of VARIANTs (each is the same size) as long as the type is the
expected one.
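
   As a rough client-side sketch (the names are made up, and it assumes an
IDispatch pointer pDisp and a cached dispid have already been obtained),
one preallocated VARIANTARG can be reused across many calls with only the
payload changing:

// One preallocated argument reused for repeated Invoke calls on the same
// method; only arg.dblVal changes between iterations.
VARIANTARG arg;
VariantInit(&arg);
arg.vt = VT_R8;
DISPPARAMS params = { &arg, NULL, 1, 0 }; // rgvarg, named args, cArgs, cNamedArgs
VARIANT result;
VariantInit(&result);
for (double flavour = 0.0; flavour < 100.0; flavour += 1.0) {
  arg.dblVal = flavour;
  HRESULT hr = pDisp->Invoke(dispid, IID_NULL, LOCALE_USER_DEFAULT,
                             DISPATCH_METHOD, &params, &result, NULL, NULL);
  if (FAILED(hr))
    break;
}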

> But it's still a cost of marshaling, as opposed to one of dispatching.
> Most Automation clients cache dispatch-ID's, and so the Invoke call
> reduces to one test of the usual idiomatic form (one-test index-in-
> range checking):
>     ((unsigned)dispid)<maxdispid
> i.e. 2 machine instructions (unsigned-comparison, branch-if-above-equal
> with the latter mostly-not-taken),

   I wouldn't do it myself, but refraining from checking that a dispid is
valid could be considered a compliant implementation of Invoke.
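
   For what it's worth, the one-test range check might look roughly like
this inside Invoke (hypothetical handler names, and it assumes the dispids
are contiguous from zero):

// Dispatch table indexed by dispid; a single unsigned comparison rejects
// both negative and too-large ids before the table lookup.
typedef HRESULT (X::*Handler)(DISPPARAMS *, VARIANT *);
static const Handler handlers[] = { &X::GetFlavour, &X::SetFlavour };
const unsigned maxdispid = sizeof(handlers) / sizeof(handlers[0]);
if ((unsigned)dispIdMember < maxdispid)
  return (this->*handlers[dispIdMember])(params, result);
return DISP_E_MEMBERNOTFOUND;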

> then a lookup-and-branch using
> dispid as the index in the (user-implemented) virtual table -- say
> 4/5 machine cycles in all versus the 2/3 machine cycles for a vtable
> use that could dispense with the index-in-range checking (wide
> per-architecture variation of course, but mostly in this ballpark).
>
> The marshaling cost of checking all variant arguments is a loop of
> N equality comparisons (for the vt fields) -- plus a lot if one or
> more equality-checks fail, of course:-).  But even without any
> translations, each argument access is quite a bit costlier than
> going directly to the stack for it -- indeed, if your code does
> a lot with its args, it may be worthwhile unmarshaling them to
> stack beforehand:-).

   Here is an implementation placed directly in the Invoke method:

// Apologies if this doesn't compile, it's from memory but I did check
// the documentation:
HRESULT X::Invoke(DISPID dispIdMember,
  REFIID, LCID, WORD, DISPPARAMS FAR* params,
  VARIANT *result,
  EXCEPINFO *, unsigned int *) {
  VARIANTARG *rg = params->rgvarg;
  switch (dispIdMember) {
    case X_SET_FLAVOUR:
      if (params->cArgs == 1 && // More paranoia if warranted
          rg[0].vt == VT_R8) {
        if (result) {            // callers may pass NULL for the result
          result->vt = VT_R8;
          result->dblVal = m_flavour;
        }
        m_flavour = rg[0].dblVal;
        return S_OK;
      } else {
        // Coercion time possibly using DispGetParam
        return DISP_E_TYPEMISMATCH;
      }
    default:
      return DISP_E_MEMBERNOTFOUND;
  }
}
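
   The coercion branch mentioned in the comment could be filled in along
these lines (again from memory, not compiled) - DispGetParam asks
Automation to coerce the argument to the wanted type:

VARIANT coerced;
VariantInit(&coerced);
unsigned int argErr = 0;
// Coerce the single argument to VT_R8; DispGetParam deals with the
// reversed rgvarg ordering.
HRESULT hr = DispGetParam(params, 0, VT_R8, &coerced, &argErr);
if (FAILED(hr))
  return hr;
if (result) {
  result->vt = VT_R8;
  result->dblVal = m_flavour;
}
m_flavour = coerced.dblVal;
return S_OK;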

   Yes, my chosen operation is unfairly short, but many of the Automation
servers I've worked on publish a large number of simple accessor methods
that are also short.

   With this code, I expect that the biggest cost is in the signature guard
conditions, with pushing the arguments to Invoke also being significant.
There is argument-access overhead here, but I would not call it marshalling,
as I would define marshalling as the transformation of one call format into
another rather than as the implementation of a function.

   Why do I feel justified in special-casing one incoming type signature?
Because client programming languages and programmers are fairly predictable:
some always want to use one particular numeric type (often VT_R8), while
others respond to the available typeinfo by using the declared argument
type.

   Neil




